Qualys: Tag “All Systems” – How hard can it be?

We received a lot of support requests when first deploying the Qualys platform.

In this particular incident the question was as simple as they come, “Which tag should I use to get all of the systems in my scope?”

This isn’t an issue if all of a user’s systems are conveniently tagged as “user’s systems”. In some cases this actually works out really well. In larger implementations, though, it can become difficult to assign a single tag (or even just a few) that covers everything a user needs in their filters. The issue then becomes, “How does the user know which tags to use?” This gets very complex very quickly.

Qualys uses a system of tags to filter and sort assets according to various criteria. The combination of static tagging and various dynamic, rule-driven tagging options makes the tagging engine very powerful when you’re trying to carve up your data. One of the tagging options missing (as of this writing) is a wildcard, or “all systems”, tag. This becomes an issue when you want to run a report using tags as opposed to asset groups (which do have an “all” option).

This led to the idea of using a common QID in an asset-search dynamic tag to assign all scanned systems a tag, call it “My Systems”, addressing the requirement above.

The first attempt was to use QID 45038 – Host Scan Time:

<?xml version="1.0" encoding="UTF-8"?><TAG_CRITERIA><DETECTION><QID_LIST><QID>45038</QID></QID_LIST></DETECTION></TAG_CRITERIA>

This was good, but it only captured systems that were touched by the IP scanner and excluded agent-instrumented systems that hadn’t been scanned for whatever reason.

To address this we added QID 45531: Host Scan Time – CloudAgent which has been broken out into its own QID to cover these assets. The updated query becomes:

<?xml version="1.0" encoding="UTF-8"?><TAG_CRITERIA><DETECTION><QID_LIST><QID>45531</QID><QID>45038</QID></QID_LIST></DETECTION></TAG_CRITERIA>
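If you ever need to generate this criteria XML for a different set of QIDs, a small sketch using only the Python standard library does the trick. The TAG_CRITERIA/DETECTION/QID_LIST structure mirrors the queries above; the function name is just illustrative.

```python
# Sketch: build Qualys dynamic-tag criteria XML for an arbitrary QID list.
# The element structure matches the TAG_CRITERIA queries shown above.
import xml.etree.ElementTree as ET

def build_tag_criteria(qids):
    criteria = ET.Element("TAG_CRITERIA")
    detection = ET.SubElement(criteria, "DETECTION")
    qid_list = ET.SubElement(detection, "QID_LIST")
    for qid in qids:
        ET.SubElement(qid_list, "QID").text = str(qid)
    return ET.tostring(criteria, encoding="unicode")

print(build_tag_criteria([45531, 45038]))
```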

This now provides a single tag covering all scanned systems, usable in reports and API calls. Let me know in the comments if you know of a better way!

If you have any questions or feedback you can reach me here or on Twitter @JaredGroves

Nessus XML and the missing netbios-name

It’s been a while since my last ramble but this issue seemed worth getting out there…

If you do any real volume of scanning you probably have had to write a parser (or at least find someone else who did) for all of the wonderful data that Nessus gathers over the course of a scan. I have a number of such parsers in various languages.
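As a sketch of what such a parser looks like, here is a minimal standard-library version that pulls the HostProperties tags from a .nessus (XML v2) export into a dict per host. The sample data is hypothetical but follows the format shown below.

```python
# Sketch of a .nessus HostProperties parser. Nessus stores each property
# as <tag name="...">value</tag> under ReportHost/HostProperties.
import xml.etree.ElementTree as ET

def host_properties(nessus_xml):
    """Return {host name: {property name: value}} for every ReportHost."""
    root = ET.fromstring(nessus_xml)
    hosts = {}
    for rh in root.iter("ReportHost"):
        props = {t.get("name"): t.text
                 for t in rh.findall("HostProperties/tag")}
        hosts[rh.get("name")] = props
    return hosts

sample = """<NessusClientData_v2><Report><ReportHost name="myhost">
<HostProperties>
  <tag name="host-fqdn">myhost.mydomain.com</tag>
  <tag name="operating-system">Microsoft Windows Server 2012 R2 Standard</tag>
</HostProperties>
</ReportHost></Report></NessusClientData_v2>"""

props = host_properties(sample)["myhost"]
# netbios-name may simply be absent, so look it up defensively:
print(props.get("netbios-name", "<missing>"))
```

Looking the property up defensively matters here, because as you’ll see below, the netbios-name tag can vanish entirely.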

Recently I noticed that I stopped getting the netbios-name tag returned in my Nessus scan results.


<ReportHost name=""><HostProperties>
<tag name="system-type">general-purpose</tag>
<tag name="operating-system">Microsoft Windows Server 2012 R2 Standard</tag>
<tag name="Credentialed_Scan">true</tag>
<tag name="HOST_END">Thu Jan 3 19:09:27 2019</tag>
<tag name="smb-login-used">mydomain\myuser</tag>
<tag name="local-checks-proto">smb</tag>
<tag name="host-fqdn">myhost.mydomain.com</tag>
<tag name="host-rdns">myhost.mydomain.com</tag>
<tag name="host-ip"></tag>
<tag name="HOST_START">Thu Jan 3 19:07:23 2019</tag>

What!?!? No tag….

Naturally I assumed that I must have forgotten to enable plugin 10150 – Windows NetBIOS / SMB Remote Host Information Disclosure. After a review of the policy I was surprised to see it was in there. Furthermore, when I looked at the data in the *.nessus file I found that there were results from 10150, including the NetBIOS name!

<ReportItem port="137" svc_name="netbios-ns" protocol="udp" severity="0" pluginID="10150" pluginName="Windows NetBIOS / SMB Remote Host Information Disclosure" pluginFamily="Windows">
<description>The remote host is listening on UDP port 137 or TCP port 445, and replies to NetBIOS nbtscan or SMB requests.

Note that this plugin gathers information to be used in other plugins, but does not itself generate a report.</description>


<plugin_output>The following 4 NetBIOS names have been gathered :

MYHOST = Computer name
MYDOMAIN = Workgroup / Domain name
MYHOST = File Server Service
MYDOMAIN = Browser Service Elections

The remote host has the following MAC address on its adapter :


As of November 2018 Nessus changed the way tag generation works. It is now necessary to add plugin 118730 to the policy for the netbios-name tag to appear in the Nessus XML output.

After adding that plugin to the policy all was well…

<ReportHost name=""><HostProperties>
<tag name="HOST_END">Fri Jan 4 09:55:43 2019</tag>
<tag name="system-type">general-purpose</tag>
<tag name="operating-system">Microsoft Windows Server 2003 Service Pack 2</tag>
<tag name="netbios-name">MYHOST</tag>
<tag name="hostname">MYHOST</tag>
<tag name="Credentialed_Scan">true</tag>
<tag name="host-fqdn">myhost.mydomain.com</tag>
<tag name="host-rdns">myhost.mydomain.com</tag>
<tag name="smb-login-used">MYDOMAIN\MYUSER</tag>
<tag name="local-checks-proto">smb</tag>
<tag name="host-ip"></tag>
<tag name="HOST_START">Fri Jan 4 09:54:49 2019</tag>

The following plugins are now my bare minimum for a Nessus policy related to Windows information gathering:

plugin id   plugin description
10150       Windows NetBIOS / SMB Remote Host Information Disclosure
10917       SMB Scope
11936       OS Identification
118730      Windows NetBIOS / SMB Remote Host Report Tag
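A quick sanity check (a sketch, with hypothetical sample data) is to confirm that a .nessus export actually contains output from those plugins, using the pluginID attribute that appears on every ReportItem:

```python
# Check which of the required plugin IDs produced no ReportItem output.
import xml.etree.ElementTree as ET

REQUIRED = {"10150", "10917", "11936", "118730"}

def missing_plugins(nessus_xml, required=REQUIRED):
    root = ET.fromstring(nessus_xml)
    seen = {ri.get("pluginID") for ri in root.iter("ReportItem")}
    return required - seen

sample = """<NessusClientData_v2><Report><ReportHost name="h">
<ReportItem port="137" pluginID="10150" pluginName="x" severity="0"/>
<ReportItem port="445" pluginID="118730" pluginName="x" severity="0"/>
</ReportHost></Report></NessusClientData_v2>"""

print(sorted(missing_plugins(sample)))  # plugins with no results in this scan
```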

Make sure you have the auto_enable_dependencies value set to yes in the advanced settings menu of your Nessus scanner.

If you have any questions or feedback you can reach me here or on Twitter @JaredGroves

Inventory? What a pain in the assets!

In a recent bug bounty I was getting bored hammering on the main site of the client so I decided to re-read the rules of engagement.  It contained all the standard stuff, then something caught my eye.  This bounty excluded specific pages of the site (one was a payment processor and the other was a message board).  This particular bounty allowed for discovery of *.domain.com outside of the exclusions noted above.

Unfortunately due to disclosure terms and timelines this post is NOT a story about what I happened to discover.  This is a post about the importance of inventory management.

Vulnerability management has lots of tooling available.  So much tooling, in fact, that determining which tool to use is often a challenge in and of itself.  The reality, however, is that selecting a tool and getting it configured is not the hardest part of the job.  The issue then becomes two-fold.

First you need to make sure that you are scanning everything in your environment.  Sounds easy, right?  Just pop the IP space into your scanner and let it rip.  Sure, this will get you started with your local footprint, but it also leads to my second point, which I’ll get to shortly.  Anything hosted in the cloud, anything hosted by another supplier, even a web redirect to a partner using your URL namespace, all present a potential attack surface.  This is why a CMDB (or an asset database, if you’re not into ITIL) really is important to vulnerability management.  You have to know what is supposed to be there in order to identify what doesn’t belong.
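The point above in miniature (all hostnames hypothetical): diff what the scanner found against what the CMDB says should exist.  Anything scanned but not on record is an unknown asset worth chasing down; anything on record but never scanned is a coverage gap.

```python
# Reconcile scan results against the asset database with set arithmetic.
cmdb_assets = {"web01.example.com", "db01.example.com", "mail01.example.com"}
scanned_assets = {"web01.example.com", "db01.example.com", "beta42.example.com"}

unknown = scanned_assets - cmdb_assets      # found, but nobody claims it
unreachable = cmdb_assets - scanned_assets  # on record, but never scanned

print("Investigate:", sorted(unknown))
print("Missed by scans:", sorted(unreachable))
```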

This brings me to that second point I promised earlier.  CONTACTS!  Even if you know all the stuff that is supposed to be out there, you still need to know who to talk to to get things fixed.  This ranges from PBXs in the telecom closet all the way to the edge of the public-facing web site.  You need to know who to talk to when all of your nifty tools actually find a problem.

It turns out that someone standing a system up “under their desk” and forgetting about it is one of the key pivot points for a modern attacker.  This also happens in the cloud, so don’t think you’re safe there: someone tests out that new, awesome, vulnerability-riddled beta42.example.com service and forgets to shut it down before going home for the weekend.

Well, that brings me to the end of this ramble.  Remember to keep track of your stuff and who it belongs to.  That makes the business of vulnerability management substantially easier.  Oh, and as for the bug bounty…I’ll bet if you were paying attention through this whole article you probably pieced together…the rest of the story.

As always….feedback is welcome in the comments.  You can follow me on Twitter @JaredGroves

Happy hunting!


SSL, TLS, and Ciphers: OH MY!

Web security is hard.  Making things even more difficult is the rate of change for what is considered “secure.”  That being said, the topics covered in this article can be considered current as of the date of publication.  Things may have changed, YMMV.

Now that you’ve been warned, let’s get back to the topic at hand…

Today we are talking about two components of a secure web connection: the protocol and the cipher suite.  If the protocol defines how the lock and key function, the cipher suite defines the different ways to cut the ridges on the key.



When HTTPS was born, the first iteration of the protocol was known as SSL (Secure Sockets Layer).  SSLv1.0 never actually “made it out of the lab,” making the first public release of the protocol SSLv2.0 in 1995.  SSLv3.0 was released in 1996, representing a complete redesign of the protocol that would become the basis for future work.

Due to vulnerabilities and increasing compute speed, SSLv2 was prohibited in 2011 (RFC 6176) and SSLv3 was deprecated in June 2015 (RFC 7568).

This is why you need to be scanning for and deploying modern crypto.  You can read about some tools and challenges here: Beware the False Negative -SSLv2 Detection Issues


TLS 1.0 was introduced in 1999 but had interoperability issues with SSLv3.0, and ultimately a downgrade vulnerability was discovered that allowed connections to fall back from TLS 1.0 to SSLv3.0 when both were available.  (Disable SSLv3!!!)

TLS 1.1 was defined in 2006 and is currently considered secure but not preferred.

TLS 1.2 was defined in 2008 and is considered the current “secure” standard (when configured with the correct options and cipher suite).  This is the option you want to make sure to support.  Everything else is for backward compatibility, at least until TLS 1.3 is released.

TLS 1.3 is very close to being ratified (as of 2017).  Once approved, it will take some time for servers and browsers to release stable implementations of the protocol.

You can read some considerations about backward compatibility in my post: HTTPS, check. Secure? Maybe…

Cipher Suites

Cipher suites are combinations of complex mathematical functions on which both the client and server must agree for the connection to proceed successfully.

In TLS, supported cipher suites have four parts:

  • Key Exchange Algorithm
  • Bulk Encryption Algorithm
  • Message authentication code (MAC)
  • Pseudorandom function

Each of these areas is complex enough to merit a post of their own (and maybe I will!) but it is beyond the scope of this post to get into the details.
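To make those four parts concrete, here is a rough sketch (not a full parser) of how an IANA-style TLS 1.2 suite name maps onto them.  The suite lists below use the shorter OpenSSL spelling of the same information; note that in GCM suites the final hash names the PRF, since authentication is built into the AEAD cipher.

```python
# Split an IANA-style cipher suite name into its component parts.
def split_suite(name):
    """e.g. TLS_DHE_RSA_WITH_AES_128_GCM_SHA256"""
    kex_auth, _, rest = name.removeprefix("TLS_").partition("_WITH_")
    bulk, _, mac_prf = rest.rpartition("_")
    return {"key exchange/auth": kex_auth.replace("_", " "),
            "bulk encryption": bulk.replace("_", " "),
            "MAC/PRF hash": mac_prf}

print(split_suite("TLS_DHE_RSA_WITH_AES_128_GCM_SHA256"))
```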

What I will leave you with are the lists of what is currently considered the secure cipher suites:

Modern (Most secure):


Intermediate (Compatible):

This list removes the known problematic cipher suites but keeps many older, ‘safe’ (for now) suites:

  • DHE-RSA-AES128-SHA256
  • DHE-RSA-AES256-SHA256
  • AES128-GCM-SHA256
  • AES256-GCM-SHA384
  • AES128-SHA256
  • AES256-SHA256
  • AES128-SHA
  • AES256-SHA

More information on cipher selection can be found at mozilla.org.

Cipher Suite Order

Are we done yet?  Almost!

Make sure your web server is configured to enforce cipher order.  The suites above are listed from most to least secure.  With server cipher order enforced, the server (rather than the client) selects the most secure mutually supported cipher suite and completes the connection.
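Server-preference selection in miniature (hypothetical suite lists): with cipher order enforced, the server walks its own list in order and takes the first suite the client also supports, rather than deferring to the client’s favorite.

```python
# Server picks the first suite from ITS preference order that the client offers.
server_order = ["AES256-GCM-SHA384", "AES128-GCM-SHA256", "AES128-SHA"]
client_offers = {"AES128-SHA", "AES128-GCM-SHA256"}

chosen = next((s for s in server_order if s in client_offers), None)
print(chosen)
```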


You said almost done!  I did.  I lied, but it was for the greater good.  Consider this a bonus tip: make sure HSTS (HTTP Strict Transport Security) is configured on your platform.  This forces the browser to use a secure connection even when a clear text connection is also available.
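A quick illustrative check (function name and threshold are my own choices, not a standard) that a response carries a sane HSTS header: present, with a max-age of at least six months.

```python
# Verify a Strict-Transport-Security header exists with a sufficient max-age.
def hsts_ok(headers, min_age=15552000):  # 180 days, in seconds
    value = headers.get("Strict-Transport-Security", "")
    for part in value.split(";"):
        part = part.strip()
        if part.startswith("max-age="):
            return int(part.split("=", 1)[1]) >= min_age
    return False

print(hsts_ok({"Strict-Transport-Security": "max-age=31536000; includeSubDomains"}))
```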

That’s it for real this time!  Hope you enjoyed the article.

Feedback is welcome in the comments.  You can follow me on Twitter @JaredGroves



HTTPS, check. Secure? Maybe…

It is now 2017 and we no longer live in a plain-text world by default.  Chrome has started informing us that HTTP is not secure and the push for TLS-based browser connections is accelerating faster than ever.  Modern compute (even in the mobile space) has become so fast that the previous concern about TLS connection overhead is far less of an issue today.  Free services like letsencrypt.org are eliminating the financial barriers to a “TLS by default” model.  Recognizing that, and with the rate of new breaches surfacing every day, you know that you need to secure your web property.

Ok, so, you’ve opened port 443 on your firewall, you’ve generated your certificates, you’ve installed them, and you’ve protected your private keys.  (If not, stop reading, do that, and come back.  We’ll wait.)  You refresh and the page comes up on https!  Website secured, right?  Well, maybe.  As with most things the answer is still, “It depends.”  Before we get into that ramble, make sure that once you have enabled your secure platform, you properly disable your clear text http access both at the server and the firewall.  All the security in the world doesn’t matter if you lock the door but leave the windows open.

Anyhow, as I was saying, “It depends….”   In this case it depends on a number of factors but the most frequent concern is typically backwards compatibility.  That is balanced on the other end of the spectrum with “total security” (which is a myth but we sure try hard.).

In this case backwards compatibility is important because we want people to be able to access our site without having issues.  Impressions are so valuable we don’t dare risk missing out on a reader or a transaction just because we didn’t support their browser technology.  Those who want “total security” will achieve it at the cost of compatibility and complexity.  The transactions will be secure, but only those using the most modern technology will be able to “understand” the crypto that is required for this level of security.  Your risk tolerance and the sensitivity of the information you are trying to protect will help define “how secure do I need to be?”  Common sense also goes a long way.

If you are a retail web property with many users spread across the globe, you (potentially) need to accept more of the legacy browsers/protocols than when securing a point-to-point connection between two servers.  In the case of two servers, you know what’s on both ends of the connection, so opt for the most secure option that both sides support and that performs within the tolerance of the system.


A great site explaining the various configuration options can be found at the mozilla.org wiki.

The configuration generator is unbelievably useful if you are using Apache, Nginx, Lighttpd, HAProxy, or AWS ELB.


For a new deployment I will go to the configuration generator and start with a “Modern” profile and deploy those options and wait for feedback from the users.  If there are no problems, awesome, gold star.

Sometimes backwards compatibility becomes an issue in corporations that have manufacturing and engineering equipment that is expensive, specialized, and doesn’t support the newer protocols.  Compatibility issues can also arise in parts of the world where modern technology is not immediately available or affordable.  In these cases you can use the configuration generator to work backwards to determine your most secure common protocol.

Other times you will be faced with an old server that “can’t be upgraded, moved, retired, rebooted…” yeah, you know the one…

In this case you can use the configuration generator to put in your server and SSL version and generate the best options that you can based on that.  In these cases it’s also recommended to isolate the connections to these systems using a firewall or equivalent technology.  Defense in depth…

Be sure to check out my post SSL, TLS, and Ciphers: OH MY! for more information about secure server configuration.


As with any set of changes, once you’ve implemented you need to verify.  You can read about my preferred tools for the job in my post:  Beware the False Negative – SSLv2 Detection Issues


Have some feedback about this ramble?  Let us know in the comments or find me on Twitter @JaredGroves




Beware the False Negative -SSLv2 Detection Issues

In vulnerability management there are few things worse than the false negative.  Analysts happily going about their day, thinking things are fine while skr1pt kiddies are doing somersaults  through old vulnerabilities.  In my case we were looking for systems that still supported SSLv2.

I was most recently caught by the false negative situation when asked to do a verification of some settings changes aimed at tightening up the crypto on some of our web servers.  It being a busy day I turned to the trusty nmap scanning tool to perform the verification that SSLv2 was, in fact, disabled on these web servers.

nmap --script ssl-enum-ciphers -p 443 [hostname]

nmap -sV -sC -p 443 [hostname]

I looked through the results of both checks: no SSLv2.  No complaints, no problem, right?  WRONG!

More secure is always better, right?  Well, not if you are responsible for vulnerability management.  In this case the default crypto libraries on Windows no longer support SSLv2, so the tooling built on them can’t even detect it as an available option when the server offers it.  This results in certain tools returning the dreaded false negative.
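You can see the same trap in Python’s own ssl module: virtually any modern build is compiled without SSLv2 support, so the protocol constant simply doesn’t exist, and a naive “try SSLv2 and see” check can never succeed regardless of what the server supports.

```python
# On modern builds, ssl has no PROTOCOL_SSLv2 constant at all, so tooling
# built on this library cannot negotiate (and thus cannot detect) SSLv2.
import ssl

supports_sslv2 = hasattr(ssl, "PROTOCOL_SSLv2")
print("Can this tooling even speak SSLv2?", supports_sslv2)
```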

Knowing there’s a problem is a big step towards the solution.  There are still specialized tools available that use various methods to detect supported ciphers and protocols that can help.

If you have a web-facing system and want to do a quick check there is always Qualys SSL Labs.  I’d recommend selecting the ‘do not show the results on the boards’ tick box, at least the first time you run your site.

My favorite offline tool for this task is testssl.sh.  Unfortunately it is Linux- or Cygwin-only, so you are out of luck natively on Windows.  I haven’t tested this on the new bash shell for Windows.  Anyone tried it?  Send feedback with your results.

If you are testing from a Windows box, SSLScan is always an option.

Hope this helps. Happy hunting!

You can find my post on securing TLS on your system here:  HTTPS, check. Secure? Maybe…

Have feedback?  Feel free to leave it in the comments or find me on Twitter @JaredGroves