Qualys: Tag “All Systems” – How hard can it be?

We received a lot of support requests when first deploying the Qualys platform.

In this particular incident the question was as simple as they come, “Which tag should I use to get all of the systems in my scope?”

This isn’t an issue if all of a user’s systems are conveniently tagged as “user’s systems”, and in some cases this actually works out really well. In larger implementations, however, it can become difficult to assign a single tag (or even just a few) for a user to apply in their filters. Then the issue becomes, “How does the user know which tags to use?” This gets very complex very quickly.

Qualys uses a system of tags to filter and sort assets according to various criteria. The combination of static tagging and various dynamic, rule-driven tagging options makes the tagging engine a very powerful option when trying to carve up your data. One tagging option that is missing (as of this writing) is a wildcard, or “all systems”, tag. This becomes an issue when you want to run a report using tags as opposed to asset groups (which do have an “all” option).

This led to the idea of using a common QID in an asset-search dynamic tag to assign all scanned systems a tag, which we called “My Systems”, addressing the requirement above.

Our first attempt was to use QID 45038 – Host Scan Time:

<?xml version="1.0" encoding="UTF-8"?>
<TAG_CRITERIA>
  <DETECTION>
    <QID_LIST>
      <QID>45038</QID>
    </QID_LIST>
  </DETECTION>
</TAG_CRITERIA>

This was good, but it only captured systems that were touched by the IP scanner and excluded agent-instrumented systems that hadn’t been scanned for whatever reason.

To address this we added QID 45531: Host Scan Time – Cloud Agent, which was broken out into its own QID to cover these assets. The updated criteria become:

<?xml version="1.0" encoding="UTF-8"?>
<TAG_CRITERIA>
  <DETECTION>
    <QID_LIST>
      <QID>45531</QID>
      <QID>45038</QID>
    </QID_LIST>
  </DETECTION>
</TAG_CRITERIA>

This now provides a single tag covering all scanned systems for use in reports and API calls. Let me know via the comments if you know of a better way!
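If you want to create the dynamic tag programmatically rather than through the UI, something like the following sketch should work against the Qualys Asset Management API. The endpoint path, the ServiceRequest shape, and the ASSET_SEARCH rule type are my reading of the Qualys API docs and should be verified against your own subscription and platform URL before use:

```python
# Hedged sketch: create the "My Systems" dynamic tag via the Qualys
# Asset Management API v2. Endpoint and payload shape are assumptions;
# check them against the Qualys API documentation for your platform.
import base64
import urllib.request

QUALYS_API = "https://qualysapi.qualys.com"  # adjust for your platform/POD

# The asset-search criteria from the post: QID 45038 or 45531.
criteria = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    "<TAG_CRITERIA><DETECTION><QID_LIST>"
    "<QID>45531</QID><QID>45038</QID>"
    "</QID_LIST></DETECTION></TAG_CRITERIA>"
)

payload = f"""<ServiceRequest>
  <data>
    <Tag>
      <name>My Systems</name>
      <ruleType>ASSET_SEARCH</ruleType>
      <ruleText><![CDATA[{criteria}]]></ruleText>
    </Tag>
  </data>
</ServiceRequest>"""

def create_tag(user: str, password: str) -> bytes:
    """POST the tag definition and return the raw API response."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        f"{QUALYS_API}/qps/rest/2.0/create/am/tag",
        data=payload.encode(),
        headers={"Content-Type": "text/xml",
                 "Authorization": f"Basic {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```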

If you have any questions or feedback you can reach me here or on Twitter @JaredGroves

Nessus XML and the missing netbios-name

It’s been a while since my last ramble but this issue seemed worth getting out there…

If you do any real volume of scanning you probably have had to write a parser (or at least find someone else who did) for all of the wonderful data that Nessus gathers over the course of a scan. I have a number of such parsers in various languages.
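For the curious, a minimal parser of that wonderful data can be written with nothing but the standard library. This sketch walks each ReportHost in a .nessus (XML v2) file and collects its HostProperties tags into a dictionary keyed by the tag’s name attribute:

```python
# Minimal .nessus (XML v2) parser sketch using only the standard library.
import xml.etree.ElementTree as ET

def host_properties(nessus_path: str) -> dict:
    """Return {host name: {tag name: tag value}} for every ReportHost."""
    hosts = {}
    root = ET.parse(nessus_path).getroot()
    for host in root.iter("ReportHost"):
        props = host.find("HostProperties")
        tags = {}
        if props is not None:
            # Each property is a <tag name="...">value</tag> element.
            tags = {t.get("name"): t.text for t in props.findall("tag")}
        hosts[host.get("name")] = tags
    return hosts
```

With this, host_properties("scan.nessus")["192.168.1.100"].get("netbios-name") returns None when the tag is absent, which is exactly how I noticed the problem below.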

Recently I noticed that I stopped getting the netbios-name tag returned in my Nessus scan results.

*poof*

<ReportHost name="192.168.1.100"><HostProperties>
<tag name="system-type">general-purpose</tag>
<tag name="operating-system">Microsoft Windows Server 2012 R2 Standard</tag>
<tag name="Credentialed_Scan">true</tag>
<tag name="HOST_END">Thu Jan 3 19:09:27 2019</tag>
<tag name="smb-login-used">mydomain\myuser</tag>
<tag name="local-checks-proto">smb</tag>
<tag name="host-fqdn">myhost.mydomain.com</tag>
<tag name="host-rdns">myhost.mydomain.com</tag>
<tag name="host-ip">192.168.1.100</tag>
<tag name="HOST_START">Thu Jan 3 19:07:23 2019</tag>
</HostProperties>
</ReportHost>

What!?!? No tag….

Naturally I assumed that I must have forgotten to enable plugin 10150 – Windows NetBIOS / SMB Remote Host Information Disclosure. After a review of the policy I was surprised to see it was there. Furthermore, when I looked at the data in the *.nessus file I found that there were results from 10150, including the NetBIOS name!


<ReportItem port="137" svc_name="netbios-ns" protocol="udp" severity="0" pluginID="10150" pluginName="Windows NetBIOS / SMB Remote Host Information Disclosure" pluginFamily="Windows">
<description>The remote host is listening on UDP port 137 or TCP port 445, and replies to NetBIOS nbtscan or SMB requests.

Note that this plugin gathers information to be used in other plugins, but does not itself generate a report.</description>

<...snip...>

<plugin_output>The following 4 NetBIOS names have been gathered :

MYHOST = Computer name
MYDOMAIN = Workgroup / Domain name
MYHOST = File Server Service
MYDOMAIN = Browser Service Elections

The remote host has the following MAC address on its adapter :

00:16:35:aa:aa:aa</plugin_output>
</ReportItem>

As of November 2018, Nessus changed the way tag generation works. It is now necessary to add plugin 118730 to the policy to get the netbios-name tag in the Nessus XML output.

After adding that plugin to the policy all was well…

<ReportHost name="192.168.1.100"><HostProperties>
<tag name="HOST_END">Fri Jan 4 09:55:43 2019</tag>
<tag name="system-type">general-purpose</tag>
<tag name="operating-system">Microsoft Windows Server 2003 Service Pack 2</tag>
<tag name="netbios-name">MYHOST</tag>
<tag name="hostname">MYHOST</tag>
<tag name="Credentialed_Scan">true</tag>
<tag name="host-fqdn">myhost.mydomain.com</tag>
<tag name="host-rdns">myhost.mydomain.com</tag>
<tag name="smb-login-used">MYDOMAIN\MYUSER</tag>
<tag name="local-checks-proto">smb</tag>
<tag name="host-ip">192.168.1.100</tag>
<tag name="HOST_START">Fri Jan 4 09:54:49 2019</tag>
</HostProperties>
</ReportHost>

The following plugins are now my bare minimum for a Nessus policy related to Windows information gathering:

plugin id   plugin description
10150       Windows NetBIOS / SMB Remote Host Information Disclosure
10917       SMB Scope
11936       OS Identification
118730      Windows NetBIOS / SMB Remote Host Report Tag

Make sure you have the auto_enable_dependencies value set to yes in the advanced settings menu of your Nessus scanner.
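If you have a backlog of existing scan output, a quick audit can flag hosts that produced 10150 results but still lack the netbios-name tag, meaning those scans ran before plugin 118730 was added to the policy. A minimal sketch:

```python
# Audit sketch: list ReportHosts in a .nessus file that have plugin
# 10150 results but no netbios-name tag -- a sign the scan policy
# still needs plugin 118730 enabled.
import xml.etree.ElementTree as ET

def hosts_missing_netbios(nessus_path: str) -> list:
    missing = []
    root = ET.parse(nessus_path).getroot()
    for host in root.iter("ReportHost"):
        tag_names = {t.get("name") for t in host.iter("tag")}
        has_10150 = any(
            item.get("pluginID") == "10150"
            for item in host.iter("ReportItem")
        )
        if has_10150 and "netbios-name" not in tag_names:
            missing.append(host.get("name"))
    return missing
```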

If you have any questions or feedback you can reach me here or on Twitter @JaredGroves

Where’s the Certificate Info – Chrome?

Quick post here today.  Recently I’ve been thinking I’ve lost my mind because I could no longer find the certificate details in the awesome bar in Chrome.  I may be losing my mind, but it isn’t over this.  I’m not sure why Google decided to move this functionality, making the bar just a little less awesome…but they did.

The good news is that this information is not lost.  While in Chrome, press Ctrl+Shift+I to open the developer tools, go to the Security tab, and you will find the familiar “View Certificate” button in there.

Like I said, short ramble today…

As always, feedback welcome in the comments.  Find me on Twitter @JaredGroves

CTF: Most Fun You’ll Ever Have Learning!

Spend enough time learning any skill and you will eventually get good at it.  Information security is no different in this regard.  What does set information security apart from other professions are some of the tools available to augment your training, get in some practice, and have some fun at the same time.

My personal favorite “training” tool is capture the flag (CTF).  In this scenario, servers are set up containing various hacking challenges; solving one unlocks the “flag”, typically a string that must be fed back into the control server.  These systems usually come with leaderboards for those who like the competition, as well as accompanying documentation to help out when you’re stuck (or just have no idea where to start!)

Without a doubt my favorite capture-the-flag-style event is the SANS Holiday Hack Challenge, released in December each year.  It is a unique blend of in-game discovery and web research, coupled with the hands-on “hacking” activities necessary to progress.  Very clever story lines and interesting challenges make this one worth checking out.  The challenges range in skill level, so even if you’ve never tried something like this before, this is as good a place to start as any.  It is a little different than your standard capture the flag, and better in many regards.

Some of my favorite online CTF challenge sites can be found below.  I am always looking for new CTF challenge sites so please share in the comments!

If you are looking for challenges that can be installed/hosted locally, check out some of these.  Be careful not to expose them publicly: they are vulnerable by design!  Best to keep them bound to loopback (127.0.0.1) if possible.

As I said before, I love these things.  Please share your favorites in the comments!

…another ramble in the can!  You can follow me on Twitter@JaredGroves

Inventory? What a pain in the assets!

In a recent bug bounty I was getting bored hammering on the main site of the client so I decided to re-read the rules of engagement.  It contained all the standard stuff, then something caught my eye.  This bounty excluded specific pages of the site (one was a payment processor and the other was a message board).  This particular bounty allowed for discovery of *.domain.com outside of the exclusions noted above.

Unfortunately due to disclosure terms and timelines this post is NOT a story about what I happened to discover.  This is a post about the importance of inventory management.

Vulnerability management has lots of tooling available.  So much tooling, in fact, that determining which tool to use is often a challenge in and of itself.  The reality, however,  is that once you select a tool and get it configured that is no longer the hardest part of the job.  The issue then becomes two-fold.

First you need to make sure that you are scanning everything in your environment.  Sounds easy, right?  Just pop the IP space into your scanner and let it rip.  Sure, this will get you started with your local footprint, but it also brings me to the second point, which I’ll get to in a second.  Anything hosted in the cloud, anything hosted by another supplier, even a web redirect to a partner using your URL namespace, all present potential attack surface.  This is why a CMDB (or an asset database, if you’re not into ITIL) really is important to vulnerability management.  You have to know what is supposed to be there in order to identify what doesn’t belong.

This brings me to that second point I promised earlier.  CONTACTS!  Even if you know all the stuff that is supposed to be out there, you still need to know who to talk to to get things fixed.  This ranges from PBXs in the telecom closet all the way to the edge of the public-facing web site.  You need to know who to call when all of your nifty tools actually find a problem.

It turns out that someone standing a system up “under their desk” and forgetting about it is one of the key pivot points for a modern attacker.  This happens in the cloud too, so don’t think you’re safe there: someone tests out that new, awesome, vulnerability-riddled beta42.example.com service and forgets to shut it down before going home for the weekend.

Well, that brings me to the end of this ramble.  Remember to keep track of your stuff and who it belongs to.  That makes the business of vulnerability management substantially easier.  Oh, and as for the bug bounty…I’ll bet if you were paying attention through this whole article you probably pieced together…the rest of the story.

As always….feedback is welcome in the comments.  You can follow me on Twitter @JaredGroves

Happy hunting!


SSL, TLS, and Ciphers: OH MY!

Web security is hard.  Making things even more difficult is the rate of change for what is considered “secure.”  That being said, the topics covered in this article can be considered current as of the date of publication.  Things may have changed, YMMV.

Now that you’ve been warned, let’s get back to the topic at hand…

Today we are talking about two components of a secure web connection: the protocol and the cipher suite.  If the protocol defines how the lock and key work together, the cipher suite defines the different ways to cut the ridges on the key.

Protocols

SSL

When HTTPS was born, the first iteration of the protocol was known as SSL (Secure Sockets Layer).  SSLv1.0 never actually “made it out of the lab,” making SSLv2.0, released in 1995, the first public version of the protocol.  SSLv3.0 followed in 1996, representing a complete redesign that would become the basis for future work.

Due to vulnerabilities and increasing compute speed, SSLv2 was declared obsolete in 2011 (RFC 6176) and SSLv3 in June 2015 (RFC 7568).

This is why you need to be scanning for and deploying modern crypto.  You can read about some tools and challenges here: Beware the False Negative -SSLv2 Detection Issues

TLS

TLS 1.0 was introduced in 1999 but had interoperability issues with SSLv3.0, and ultimately a downgrade vulnerability was discovered that allowed attackers to bypass TLS 1.0 and fall back to SSLv3.0 when it was also available.  (Disable SSLv3!)

TLS 1.1 was defined in 2006 and is currently considered secure, but not preferred.

TLS 1.2 was defined in 2008 and is considered the current “secure” standard (when configured with the correct options and cipher suite).  This is the option you want to make sure to support.  Everything else is for backward compatibility, at least until TLS 1.3 is released.

TLS 1.3 is very close to being ratified (2017).  Once approved it will take some time for the servers and the browsers to release stable implementations of the protocol.

You can read some considerations about backward compatibility in my post: HTTPS, check. Secure? Maybe…

Cipher Suites

Cipher suites are combinations of cryptographic algorithms on which both the client and server must agree for the connection to proceed successfully.

In TLS, a supported cipher suite has four parts:

  • Key exchange algorithm
  • Bulk encryption algorithm
  • Message authentication code (MAC)
  • Pseudorandom function (PRF)

Each of these areas is complex enough to merit a post of their own (and maybe I will!) but it is beyond the scope of this post to get into the details.
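To make those parts concrete without the deep dive, here is an illustrative, string-level split of an OpenSSL-style suite name into the rough roles above.  Real suite definitions live in the IANA TLS registry; this toy helper only handles the ECDHE-/DHE-prefixed names listed below and is not a substitute for looking a suite up:

```python
# Illustrative only: split an OpenSSL-style cipher suite name like
# ECDHE-RSA-AES256-GCM-SHA384 into rough roles. Handles only names
# with an explicit key-exchange prefix (ECDHE-/DHE-).
def describe_suite(name: str) -> dict:
    parts = name.split("-")
    kex, auth = parts[0], parts[1]       # e.g. ECDHE, RSA
    mac = parts[-1]                      # e.g. SHA384 (also feeds the PRF)
    bulk = "-".join(parts[2:-1])         # e.g. AES256-GCM
    return {
        "key exchange": kex,
        "authentication": auth,
        "bulk encryption": bulk,
        "MAC/PRF hash": mac,
    }
```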

What I will leave you with is what is currently considered the secure cipher suites:

Modern (Most secure):

  • ECDHE-ECDSA-AES256-GCM-SHA384
  • ECDHE-RSA-AES256-GCM-SHA384
  • ECDHE-ECDSA-CHACHA20-POLY1305
  • ECDHE-RSA-CHACHA20-POLY1305
  • ECDHE-ECDSA-AES128-GCM-SHA256
  • ECDHE-RSA-AES128-GCM-SHA256
  • ECDHE-ECDSA-AES256-SHA384
  • ECDHE-RSA-AES256-SHA384
  • ECDHE-ECDSA-AES128-SHA256
  • ECDHE-RSA-AES128-SHA256

Intermediate (Compatible):

This list removes the known problematic cipher suites but keeps many older, ‘safe’ (for now) suites:

  • ECDHE-ECDSA-CHACHA20-POLY1305
  • ECDHE-RSA-CHACHA20-POLY1305
  • ECDHE-ECDSA-AES128-GCM-SHA256
  • ECDHE-RSA-AES128-GCM-SHA256
  • ECDHE-ECDSA-AES256-GCM-SHA384
  • ECDHE-RSA-AES256-GCM-SHA384
  • DHE-RSA-AES128-GCM-SHA256
  • DHE-RSA-AES256-GCM-SHA384
  • ECDHE-ECDSA-AES128-SHA256
  • ECDHE-RSA-AES128-SHA256
  • ECDHE-ECDSA-AES128-SHA
  • ECDHE-RSA-AES256-SHA384
  • ECDHE-RSA-AES128-SHA
  • ECDHE-ECDSA-AES256-SHA384
  • ECDHE-ECDSA-AES256-SHA
  • ECDHE-RSA-AES256-SHA
  • DHE-RSA-AES128-SHA256
  • DHE-RSA-AES128-SHA
  • DHE-RSA-AES256-SHA256
  • DHE-RSA-AES256-SHA
  • ECDHE-ECDSA-DES-CBC3-SHA
  • ECDHE-RSA-DES-CBC3-SHA
  • EDH-RSA-DES-CBC3-SHA
  • AES128-GCM-SHA256
  • AES256-GCM-SHA384
  • AES128-SHA256
  • AES256-SHA256
  • AES128-SHA
  • AES256-SHA
  • DES-CBC3-SHA (the trailing !DSS in the Mozilla cipher string excludes the DSS variants)

More information on cipher selection can be found at mozilla.org.

Cipher Suite Order

Are we done yet?  Almost!

Make sure your web server is configured to enforce cipher order.  The lists above are ordered from most to least secure.  With server-side ordering enforced, the handshake settles on the most secure cipher suite that both sides support.

HSTS

You said almost done!  I did.  I lied, but it was for the greater good.  Consider this a bonus tip: make sure HSTS (HTTP Strict Transport Security) is configured on your platform.  This instructs the browser to use only secure connections, even when a clear-text connection is also available.
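To tie the protocol, cipher order, and HSTS pieces together, here is a hypothetical nginx fragment.  The directive names are standard nginx; the specific values are a sketch based on the guidance above and should be regenerated (see the Mozilla generator mentioned earlier) for your own deployment:

```nginx
# Sketch only -- regenerate values for your own deployment.
ssl_protocols TLSv1.2;                # drop SSLv3 / TLS 1.0 / TLS 1.1
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305';
ssl_prefer_server_ciphers on;         # enforce the server's cipher order
add_header Strict-Transport-Security "max-age=15768000" always;  # HSTS
```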

That’s it for real this time!  Hope you enjoyed the article.

Feedback is welcome in the comments.  You can follow me on Twitter @JaredGroves


HTTPS, check. Secure? Maybe…

It is now 2017 and we no longer live in a plain-text world by default.  Chrome has started informing us that HTTP is not secure, and the push for TLS-based browser connections is accelerating faster than ever.  Modern compute (even in the mobile space) has become so fast that the overhead of TLS connections is far less of an issue today.  Free services like letsencrypt.org are eliminating the financial barriers to a “TLS by default” model.  Recognizing that, and with the rate of new breaches surfacing every day, you know that you need to secure your web property.

Ok, so, you’ve opened port 443 on your firewall, you’ve generated your certificates, you’ve installed them, and you’ve protected your private keys.  (If not, stop reading, do that, and come back.  We’ll wait.)  You refresh and the page comes up on HTTPS!  Website secured, right?  Well, maybe.  As with most things, the answer is still, “It depends.”  Before we get into that ramble, make sure that once you have enabled your secure platform, you properly disable clear-text HTTP access, both at the server and at the firewall.  All the security in the world doesn’t matter if you lock the door but leave the windows open.

Anyhow, as I was saying, “It depends…”  In this case it depends on a number of factors, but the most frequent concern is typically backwards compatibility.  That is balanced against the other end of the spectrum, “total security” (which is a myth, but we sure try hard).

Backwards compatibility is important because we want people to be able to access our site without issues.  Impressions are so valuable that we don’t dare risk missing out on a reader or a transaction just because we didn’t support their browser technology.  Those who want “total security” will achieve it at the cost of compatibility and complexity: the transactions will be secure, but only those using the most modern technology will be able to “understand” the crypto required for that level of security.  Your risk tolerance and the sensitivity of the information you are trying to protect help define “how secure do I need to be?”  Common sense also goes a long way.

If you run a retail web property with many users spread across the globe, you (potentially) need to accept more of the legacy browsers and protocols than you would when securing a point-to-point connection between two servers.  In the two-server case you know what’s on both ends of the connection, so opt for the most secure option that both sides support and that performs within the tolerance of the system.

Resources

A great site explaining the various configuration options can be found at the mozilla.org wiki.

The configuration generator is unbelievably useful if you are using Apache, Nginx, Lighttpd, HAProxy, or AWS ELB.

Approach

For a new deployment I will go to the configuration generator and start with a “Modern” profile and deploy those options and wait for feedback from the users.  If there are no problems, awesome, gold star.

Sometimes backwards compatibility becomes an issue in corporations with manufacturing and engineering equipment that is expensive, specialized, and doesn’t support the newer protocols.  Compatibility issues can also arise in parts of the world where modern technology is not immediately available or affordable.  In these cases you can use the configuration generator to work backwards and determine your most secure common protocol.

Other times you will be faced with an old server that “can’t be upgraded, moved, retired, rebooted…” yeah, you know the one…

In this case you can plug your server and SSL/TLS version into the configuration generator and produce the best options available given those constraints.  It’s also recommended to isolate connections to these systems using a firewall or equivalent technology.  Defense in depth…

Be sure to check out my post SSL, TLS, and Ciphers: OH MY! for more information about secure server configuration.

Verification

As with any set of changes, once you’ve implemented them you need to verify.  You can read about my preferred tools for the job in my post: Beware the False Negative – SSLv2 Detection Issues


Have some feedback about this ramble?  Let us know in the comments or find me on Twitter @JaredGroves


Beware the False Negative -SSLv2 Detection Issues

In vulnerability management there are few things worse than the false negative: analysts happily going about their day, thinking things are fine, while skr1pt kiddies do somersaults through old vulnerabilities.  In my case, we were looking for systems that still supported SSLv2.

I was most recently caught by a false negative when asked to verify some settings changes aimed at tightening up the crypto on some of our web servers.  It being a busy day, I turned to the trusty nmap scanner to verify that SSLv2 was, in fact, disabled on these web servers.

nmap --script ssl-enum-ciphers -p 443 [hostname]

nmap -sV -sC -p 443 [hostname]

I looked through the results of both checks: no SSLv2.  No complaints, no problem, right?  WRONG!

A more secure client is always better, right?  Well, not if you are responsible for vulnerability management.  In this case the default crypto libraries on Windows no longer support SSLv2 and therefore can’t negotiate it, even when the server offers it.  This results in certain tools returning the dreaded false negative.

Knowing there’s a problem is a big step towards the solution.  There are still specialized tools available that use various methods to detect supported ciphers and protocols that can help.
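One such method is to skip the local crypto library entirely and speak raw SSLv2 on the wire.  The sketch below sends a hand-built SSLv2 ClientHello and checks whether the server answers with an SSLv2 ServerHello.  The byte offsets follow the old SSLv2 draft specification as I understand it; treat this as a starting point and verify it against a host you know is vulnerable before trusting its verdict:

```python
# Hedged sketch: detect SSLv2 support by sending a raw SSLv2
# ClientHello, independent of the local crypto library.
import socket

# Seven classic SSLv2 cipher specs, 3 bytes each (21 bytes total).
SSLV2_CIPHERS = bytes.fromhex(
    "010080 020080 030080 040080 050080 060040 0700c0"
)

def sslv2_client_hello() -> bytes:
    body = (
        b"\x01"          # MSG-CLIENT-HELLO
        + b"\x00\x02"    # version: SSL 2.0
        + b"\x00\x15"    # cipher-specs length (21)
        + b"\x00\x00"    # session-id length (0)
        + b"\x00\x10"    # challenge length (16)
        + SSLV2_CIPHERS
        + b"\x00" * 16   # challenge bytes
    )
    # 2-byte record header: high bit set, then 15-bit body length.
    return bytes([0x80 | (len(body) >> 8), len(body) & 0xFF]) + body

def supports_sslv2(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(sslv2_client_hello())
        reply = s.recv(3)
    # An SSLv2 ServerHello starts with a 2-byte record header
    # followed by message type 0x04.
    return len(reply) == 3 and reply[2] == 0x04
```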

If you have a web-facing system and want a quick check, there is always Qualys SSL Labs.  I’d recommend selecting the ‘do not show the results on the boards’ tick box, at least the first time you run your site.

My favorite offline tool for this task is testssl.sh.  Unfortunately it is Linux or Cygwin only, so you are out of luck natively on Windows.  I haven’t tested it on the new bash shell for Windows.  Anyone tried it?  Send feedback with your results.

If you are testing from a Windows box, sslscan is always an option.

Hope this helps. Happy hunting!

You can find my post on securing TLS on your system here:  HTTPS, check. Secure? Maybe…

Have feedback?  Feel free to leave it in the comments or find me on Twitter @JaredGroves

Podcast Education

Not so long ago I was thinking about how, when I was a child, my father would come home after work with AM (talk) radio cranked up so loud the whole neighborhood could hear.  Of course, I took it as my duty to mock my father for both the content and the volume.  I swore that my tape deck (yep!) would forever play music; talk radio was for grownups.

I no longer have a tape deck in my truck, nor do I tune in to the AM band all that often.  However, the magic of Bluetooth now lets me crank InfoSec podcasts so loud my neighborhood can hear (or so my kids tell me).

For those in the information security field, especially those tasked with vulnerability management, it is important to make time each day or week to listen to an InfoSec podcast of your choice.  Things change so quickly that it’s a great way to stay on top of what’s going on in the industry.  Up-to-date intelligence is just as important as current technical skills in the InfoSec space.

Without further ado, my playlist of choice:

My weekly “must listen” show, without question, is Security Now.  This podcast is like taking a training course for a couple of hours every week.

Steve Gibson brings wisdom and engaging content to every episode.  I listen faithfully and look forward to Tuesdays when the new episode is recorded.

Next up is Defensive Security.  Jerry Bell and Andrew Kalat cover current events and blue team topics often overlooked by other shows.

Risky Business has a slightly different format.  The show is produced out of Australia by Patrick Gray and is typically made up of three segments.  First is the news/current events with Adam Boileau, which is often my favorite part of the show.  Next is typically a featured interview that takes a specific topic into a bit of a deep dive.  Finally there is the sponsored interview; Patrick does a pretty good job keeping the guests balanced between the marketing hype and the actual details of their product.

Finally, a shift wouldn’t be complete without a daily dose of the SANS Internet Stormcast, a 5-10 minute review of the latest threats and news in information security.

What are your favorites?  Have something you think I missed?  Let me know via the comments or on Twitter @JaredGroves

Disclosure:  I am just a fanboy and I am not receiving any compensation for these recommendations.