Inventory? What a pain in the assets!

In a recent bug bounty I was getting bored hammering on the main site of the client, so I decided to re-read the rules of engagement.  It contained all the standard stuff, then something caught my eye.  This bounty excluded specific pages of the site (one was a payment processor and the other was a message board), but it allowed for discovery of * outside of those exclusions.

Unfortunately due to disclosure terms and timelines this post is NOT a story about what I happened to discover.  This is a post about the importance of inventory management.

Vulnerability management has lots of tooling available.  So much tooling, in fact, that determining which tool to use is often a challenge in and of itself.  The reality, however, is that selecting a tool and getting it configured is not the hardest part of the job.  The real issue is two-fold.

First, you need to make sure that you are scanning everything in your environment.  Sounds easy, right?  Just pop the IP space in your scanner and let it rip.  Sure, this will get you started with your local footprint, but it also brings me to the second point, which I'll get to in a moment.  Anything that is hosted in the cloud, anything that is hosted by another supplier, even any web redirect to another partner using your URL namespace, all present a potential attack surface.  This is why a CMDB (or an asset database, if you're not into ITIL) really is important to vulnerability management.  You have to know what is supposed to be there in order to identify what doesn't belong.
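That "identify what doesn't belong" step boils down to a set comparison.  Here is a minimal sketch in Python; the host lists are purely illustrative, and in practice they would come from your CMDB export and your scanner results:

```python
# Sketch: reconcile scanner discoveries against a CMDB export.
# The host lists below are hypothetical examples.

def reconcile(cmdb_hosts, scanned_hosts):
    """Return (unknown, missing): hosts the scanner found that the CMDB
    doesn't know about, and CMDB entries the scanner never saw."""
    cmdb = set(cmdb_hosts)
    scanned = set(scanned_hosts)
    return sorted(scanned - cmdb), sorted(cmdb - scanned)

unknown, missing = reconcile(
    cmdb_hosts=["10.0.0.5", "10.0.0.9", "app.example.com"],
    scanned_hosts=["10.0.0.5", "10.0.0.17", "app.example.com"],
)
print("Not in CMDB (investigate!):", unknown)  # what doesn't belong
print("In CMDB but never scanned:", missing)   # possible scan coverage gap
```

Both output lists matter: the first is potential shadow IT, the second is a hole in your scan coverage.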

This brings me to that second point I promised earlier.  CONTACTS!  Even if you know all the stuff that is supposed to be out there, you still need to know who to talk to to get things fixed.  This ranges from PBXs in the telecom closet all the way to the edge of the public-facing web site.  You need to know who to call when all of your nifty tools actually find a problem.

It turns out that someone standing a system up “under their desk” and forgetting about it is one of the key pivot points for a modern attacker.  This also happens in the cloud, so don't think you're safe there.  Picture someone testing out that new, awesome, vulnerability-riddled service and forgetting to shut it down before heading home for the weekend.

Well, that brings me to the end of this ramble.  Remember to keep track of your stuff and who it belongs to.  That makes the business of vulnerability management substantially easier.  Oh, and as for the bug bounty…I’ll bet if you were paying attention through this whole article you probably pieced together…the rest of the story.

As always….feedback is welcome in the comments.  You can follow me on Twitter @JaredGroves

Happy hunting!


SSL, TLS, and Ciphers: OH MY!

Web security is hard.  Making things even more difficult is the rate of change for what is considered “secure.”  That being said, the topics covered in this article can be considered current as of the date of publication.  Things may have changed, YMMV.

Now that you’ve been warned, let’s get back to the topic at hand…

Today we are talking about two components of a secure web connection: the protocol and the cipher suite.  If the protocol defines how the lock and key work together, the cipher suite defines the different ways the ridges can be cut into the key.



When HTTPS was born, the first iteration of the protocol was known as SSL (Secure Sockets Layer).  SSLv1.0 never actually “made it out of the lab,” making the first public release of the protocol SSLv2.0 in 1995.  SSLv3.0 was released in 1996, representing a complete redesign of the protocol that would become the basis for future work.

Due to vulnerabilities and increasing compute speed, SSLv2 was declared obsolete in 2011 (RFC 6176) and SSLv3 in June 2015 (RFC 7568).

This is why you need to be scanning for and deploying modern crypto.  You can read about some of the tools and challenges here: Beware the False Negative - SSLv2 Detection Issues


TLS 1.0 was introduced in 1999.  It had interoperability issues with SSLv3.0, and ultimately a downgrade vulnerability was discovered that allowed an attacker to push clients off TLS 1.0 and back to SSLv3.0 when both were available.  (Disable SSLv3!!!)

TLS 1.1 was defined in 2006 and is currently considered secure, but it is not the preferred option.

TLS 1.2 was defined in 2008 and is considered the current “secure” standard (when configured with the correct options and cipher suite).  This is the option you want to make sure to support.  Everything else is for backward compatibility, at least until TLS 1.3 is released.
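On the client side, you can make that preference explicit.  Here is a minimal sketch using Python's ssl module (3.7+ for ssl.TLSVersion); server-side enforcement belongs in your web server configuration, this just shows the idea:

```python
import ssl

# Sketch: a client-side context that refuses anything older than TLS 1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # drops SSLv3, TLS 1.0, TLS 1.1

# Any connection wrapped with ctx will now fail the handshake against
# servers that only speak the older protocols.
```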

TLS 1.3 is very close to being ratified (as of 2017).  Once approved, it will take some time for servers and browsers to release stable implementations of the protocol.

You can read some considerations about backward compatibility in my post: HTTPS, check. Secure? Maybe…

Cipher Suites

Cipher suites are combinations of complex mathematical functions on which both the client and server must agree for the connection to proceed successfully.

In TLS, supported cipher suites have four parts:

  • Key Exchange Algorithm
  • Bulk Encryption Algorithm
  • Message authentication code (MAC)
  • Pseudorandom function

Each of these areas is complex enough to merit a post of its own (and maybe I'll write them!), but the details are beyond the scope of this post.
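You can, however, see these pieces without reading a single RFC.  Python's ssl module (3.6+) exposes OpenSSL's breakdown of each enabled suite; the field names map roughly onto the list above (key exchange, authentication, bulk cipher, digest/MAC):

```python
import ssl

# Sketch: ask OpenSSL (via Python's ssl module) to decompose the enabled
# cipher suites: 'kea' = key exchange, 'auth' = authentication,
# 'symmetric' = bulk encryption, 'digest' = MAC (None for AEAD suites).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
for cipher in ctx.get_ciphers()[:3]:
    print(cipher["name"], cipher["kea"], cipher["auth"],
          cipher["symmetric"], cipher["digest"])
```

The exact suites printed depend on the OpenSSL version your Python is linked against.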

What I will leave you with is the list of cipher suites currently considered secure:

Modern (Most secure):


Intermediate (Compatible):

This list removes the known problematic cipher suites but keeps many older but ‘safe’ (for now) suites:

  • DHE-RSA-AES128-SHA256
  • DHE-RSA-AES256-SHA256
  • AES128-GCM-SHA256
  • AES256-GCM-SHA384
  • AES128-SHA256
  • AES256-SHA256
  • AES128-SHA
  • AES256-SHA


Cipher Suite Order

Are we done yet?  Almost!

Make sure your web server is configured to enforce cipher order.  The lists above run from most to least secure.  With server-enforced ordering, the server will select the most secure cipher suite that both sides support and complete the connection.


You said we were almost done!  I did.  I lied, but it was for the greater good.  Consider this a bonus tip: make sure HSTS is configured on your platform.  This forces the browser to use a secure connection even when a clear-text connection is also available.
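HSTS is just a response header, so it is easy to sanity-check what your server is actually sending.  A minimal sketch of a parser for the header value (the example value is illustrative):

```python
# Sketch: the HSTS response header looks like
#   Strict-Transport-Security: max-age=31536000; includeSubDomains
# This parses the header value into (max_age_seconds, includes_subdomains).
def parse_hsts(header_value):
    directives = [d.strip().lower() for d in header_value.split(";")]
    max_age = None
    for d in directives:
        if d.startswith("max-age="):
            max_age = int(d.split("=", 1)[1])
    return max_age, "includesubdomains" in directives

max_age, subdomains = parse_hsts("max-age=31536000; includeSubDomains")
print(max_age, subdomains)  # prints "31536000 True"
```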

That’s it for real this time!  Hope you enjoyed the article.

Feedback is welcome in the comments.  You can follow me on Twitter @JaredGroves



HTTPS, check. Secure? Maybe…

It is now 2017 and we no longer live in a plain-text world by default.  Chrome has started informing us that HTTP is not secure, and the push for TLS-based browser connections is accelerating faster than ever.  Modern compute (even in the mobile space) has become so fast that the overhead of TLS connections is far less of an issue today.  Free certificate services are eliminating the financial barriers to a “TLS by default” model.  With all of that, plus the rate of new breaches surfacing every day, you know you need to secure your web property.

Ok, so, you’ve opened port 443 on your firewall, you’ve generated your certificates, you’ve installed them, and you’ve protected your private keys.  (If not, stop reading, do that, and come back.  We’ll wait…)  You refresh and the page comes up on HTTPS!  Website secured, right?  Well, maybe.  As with most things, the answer is still, “It depends.”  Before we get into that ramble, make sure that once you have enabled your secure platform, you properly disable your clear-text HTTP access both at the server and the firewall.  All the security in the world doesn’t matter if you lock the door but leave the windows open.

Anyhow, as I was saying, “It depends…”  In this case it depends on a number of factors, but the most frequent concern is typically backwards compatibility.  That is balanced, on the other end of the spectrum, by “total security” (which is a myth, but we sure try hard).

In this case backwards compatibility is important because we want people to be able to access our site without issues.  Impressions are so valuable that we don’t dare risk missing out on a reader or a transaction just because we didn’t support their browser technology.  Those who want “total security” will achieve it at the cost of compatibility and complexity.  The transactions will be secure, but only those using the most modern technology will be able to “understand” the crypto required for this level of security.  Your risk tolerance and the sensitivity of the information you are trying to protect will help define “how secure do I need to be?”  Common sense also goes a long way.

If you are a retail web property with many users spread across the globe, you (potentially) need to accept more of the legacy browsers and protocols compared to securing a point-to-point connection between two servers.  In the case of two servers, you know what’s on both ends of the connection, so opt for the most secure option that both sides support and that performs within the tolerance of the system.


A great site explaining the various configuration options can be found at the wiki.

The configuration generator is unbelievably useful if you are using Apache, Nginx, Lighttpd, HAProxy, or AWS ELB.


For a new deployment, I will go to the configuration generator, start with a “Modern” profile, deploy those options, and wait for feedback from the users.  If there are no problems: awesome, gold star.

Sometimes backwards compatibility becomes an issue in corporations that have manufacturing and engineering equipment that is expensive, specialized, and doesn’t support the newer protocols.  Compatibility issues can also arise in parts of the world where modern technology is not immediately available or affordable.  In these cases, you can use the configuration generator to work backwards to determine your most secure common protocol.
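"Most secure common protocol" is just a highest-common-version calculation.  A small sketch; the client and server capability sets below are illustrative (in real life you'd derive them from scan results or the configuration generator):

```python
import ssl

# Sketch: given the protocol versions your clients can speak and the
# versions a server offers, pick the newest version both sides share.
ORDER = [ssl.TLSVersion.SSLv3, ssl.TLSVersion.TLSv1,
         ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.TLSv1_2,
         ssl.TLSVersion.TLSv1_3]

def best_common(client_versions, server_versions):
    common = set(client_versions) & set(server_versions)
    # Walk from newest to oldest and take the first match.
    for version in reversed(ORDER):
        if version in common:
            return version
    return None

best = best_common(
    client_versions={ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_2},
    server_versions={ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3},
)
print(best.name)  # prints "TLSv1_2"
```

If the result comes back as SSLv3 (or nothing at all), that is your cue to isolate or upgrade rather than accommodate.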

Other times you will be faced with an old server that “can’t be upgraded, moved, retired, rebooted…” yeah, you know the one…

In this case you can use the configuration generator to enter your server and SSL/TLS version and generate the best options available for it.  In these cases it’s also recommended to isolate connections to these systems using a firewall or equivalent technology.  Defense in depth…

Be sure to check out my post SSL, TLS, and Ciphers: OH MY! for more information about secure server configuration.


As with any set of changes, once you’ve implemented them you need to verify.  You can read about my preferred tools for the job in my post: Beware the False Negative - SSLv2 Detection Issues


Have some feedback about this ramble?  Let us know in the comments or find me on Twitter @JaredGroves




Beware the False Negative - SSLv2 Detection Issues

In vulnerability management there are few things worse than the false negative: analysts happily going about their day, thinking things are fine, while skr1pt kiddies are doing somersaults through old vulnerabilities.  In my case, we were looking for systems that still supported SSLv2.

I was most recently caught by a false negative when asked to verify some settings changes aimed at tightening up the crypto on some of our web servers.  It being a busy day, I turned to the trusty nmap scanning tool to verify that SSLv2 was, in fact, disabled on these web servers.

nmap --script ssl-enum-ciphers -p 443 [hostname]

nmap -sV -sC -p 443 [hostname]

I looked through the results of both checks: no SSLv2.  No complaints, no problem, right?  WRONG!

More secure is always better, right?  Well, not if you are responsible for vulnerability management.  In this case the default crypto libraries on Windows no longer support SSLv2 and therefore don’t detect it as an available option even when the server offers it.  This results in certain tools returning the dreaded false negative.

Knowing there’s a problem is a big step towards the solution.  There are still specialized tools available that use various methods to detect supported ciphers and protocols that can help.
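The trick those tools use is to skip the local crypto library entirely and speak SSLv2 by hand.  Here is a minimal sketch of that approach: build a raw SSLv2 ClientHello, send it, and see whether the server answers with an SSLv2 ServerHello.  The cipher-spec byte codes are the classic SSLv2 ones (e.g. SSL2_RC4_128_WITH_MD5); treat this as an illustration, not a replacement for a maintained scanner:

```python
import os
import socket
import struct

# Classic SSLv2 cipher-spec codes (3 bytes each), e.g. SSL2_RC4_128_WITH_MD5.
SSL2_CIPHERS = [b"\x01\x00\x80", b"\x02\x00\x80", b"\x04\x00\x80",
                b"\x07\x00\xc0"]

def build_sslv2_client_hello():
    """Hand-build an SSLv2 ClientHello record."""
    specs = b"".join(SSL2_CIPHERS)
    challenge = os.urandom(16)
    body = (b"\x01"                          # MSG-CLIENT-HELLO
            + b"\x00\x02"                    # version: SSLv2
            + struct.pack(">H", len(specs))      # cipher-spec length
            + struct.pack(">H", 0)               # session-id length
            + struct.pack(">H", len(challenge))  # challenge length
            + specs + challenge)
    # 2-byte record header: high bit set, 15-bit body length.
    return struct.pack(">H", 0x8000 | len(body)) + body

def supports_sslv2(host, port=443, timeout=5):
    """True if the server replies with an SSLv2 ServerHello (type 0x04)."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(build_sslv2_client_hello())
        reply = s.recv(3)
    return len(reply) == 3 and reply[2] == 0x04
```

Because nothing here depends on what your OS crypto library supports, it can't produce this particular flavor of false negative.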

If you have a web-facing system and want to do a quick check, there is always Qualys SSL Labs.  I’d recommend selecting the ‘do not show the results on the boards’ tick box, at least the first time you run your site through it.

My favorite offline tool for this task only runs on Linux or Cygwin, so you are out of luck natively on Windows.  I haven’t tested it on the new bash shell for Windows.  Anyone tried it?  Send feedback with your results.

If you are testing from a Windows box, SSLScan is always an option.

Hope this helps. Happy hunting!

You can find my post on securing TLS on your system here:  HTTPS, check. Secure? Maybe…

Have feedback?  Feel free to leave it in the comments or find me on Twitter @JaredGroves

Podcast Education

Not so long ago I was thinking about how, when I was a child, my father would come home after work with AM (talk) radio cranked up so loud the whole neighborhood could hear. Of course, I took it as my duty to mock my father for both the content and volume.  I swore that my tape deck (yep!) would forever play music and talk radio was for grownups.

I no longer have a tape deck in my truck, nor do I tune in to the AM radio band all that often, however, the magic of the Bluetooth connection for me is now used to crank InfoSec podcasts so loud my neighborhood can hear (or so my kids tell me).

For those who are in the Information Security field, especially those who are tasked with vulnerability management, it is important to make some time each day or week to listen to an InfoSec podcast of your choice.  Things change so quickly that it’s a great way to stay on top of what’s going on in the industry.  Up-to-date intelligence is just as important as current technical skills in the InfoSec space.

Without further ado, my playlist of choice:

My weekly “must listen” show, without question is:  Security Now.  This podcast is like taking a training course for a couple of hours every week.

Steve Gibson brings wisdom and engaging content to every episode.  I listen faithfully and look forward to Tuesdays when the new episode is recorded.

Next up is Defensive Security.  Jerry Bell and Andrew Kalat cover current events and blue team topics often overlooked by other shows.

Risky Business has a slightly different format.  The show is produced out of Australia by Patrick Gray and is typically made up of three segments.  First is the news/current events with Adam Boileau, which is often my favorite part of the show.  Next up is typically a featured interview, which takes a specific topic and goes into a bit of a deep dive.  Finally there is the sponsored interview.  Patrick does a pretty good job keeping the guests balanced between marketing hype and the actual details of their product.

Finally, a shift wouldn’t be complete without a daily dose of the SANS Internet Stormcast.  This is a 5-10 minute review of the latest threats and news in information security.

What are your favorites?  Have something you think I missed?  Let me know via the comments or on Twitter @JaredGroves

Disclosure:  I am just a fanboy and I am not receiving any compensation for these recommendations.

01001000 01100101 01101100 01101100 01101111 00100000 01010111 01101111 01110010 01101100 01100100 00101110

Welcome to my blog.

First off, this is a personal blog.  The thoughts and opinions expressed here are my own and are in no way associated with my employer.

Whew, glad that’s out of the way.

I’m not sure what you were doing on the Internet that led you here, but welcome!  Thanks for stopping by.

This blog is a space dedicated to information security ramblings as well as anything else I might find interesting.  Your mileage may vary.

Have something to say?  Reply in the comments or find me on Twitter @JaredGroves.