All change (Again) and a new Blog

So I've just started (another) new company and I'm moving on from 7 Elements. The new company is ScotSTS. I'm taking the opportunity to refresh my blogging software, so new posts will be going up on the ScotSTS blog, which also has a copy of all the content from this blog. The RSS feed for the new blog is here.

Just the Facts Ma'am

Sometimes when you're testing it's good to be able to quickly get a feel for where to focus your attention, or to get an overview of all the open ports so you can be sure you investigate every one of them. Once you've done several scans as part of a job you end up with a stack of Nmap and Nessus output files, it can be hard to keep track of exactly what's been found so far, and it's good to have a way to just get the facts.

As a result a lot of testers have scripts to help parse and collate the output from common tools like Nmap.

They tend not to be the prettiest code in the universe or to produce lovely management-friendly reports, but they're handy nonetheless.
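
To give a flavour of what one of these scripts might look like, here's a minimal sketch (my own illustration, not one of the tools below) that uses Nokogiri to pull the open ports out of an Nmap XML file:

#!/usr/bin/env ruby
# Minimal sketch: list open ports per host from an Nmap XML file
require 'rubygems'
require 'nokogiri'

doc = Nokogiri::XML(File.read(ARGV[0]))
doc.xpath('//host').each do |host|
  addr = host.at_xpath('address')['addr']
  open_ports = host.xpath('ports/port').select do |port|
    port.at_xpath('state')['state'] == 'open'
  end
  next if open_ports.empty?
  puts addr
  open_ports.each { |port| puts "  #{port['protocol']}/#{port['portid']}" }
end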

Having set expectations about the quality of this code :) here are a couple of scripts which may come in useful for testers managing some of those XML report files.

NMAP Auto Analyzer can parse a single Nmap XML file or a directory of them and provide a concise report on the ports open across them.

Nessus Auto Analyzer, somewhat unsurprisingly, does the same job for .nessus report files (v2 only at the moment).

Both have reasonable help files, so they should be fairly straightforward to use. Any questions/queries are welcome, either in the comments or by e-mail (rorym at mccune dot org dot uk).

We left off last time having created a simple vulnerability database using Ruby on Rails. So the next piece of the puzzle is getting that data into Dradis.

Luckily Dradis has a nice plugin system which is designed to ease the process of importing and exporting data from Dradis, so this isn't too tricky.

Creating the Plugin

As Dradis has Rails generators for import plugins, we can use one to create the basic structure. First off, obviously, we need a working Dradis installation to work from. There are instructions on the site for the latest svn version here, and following those should give you a working copy of the latest code.

Once that's done we can enter the Dradis server directory and use this command to create our import plugin.

rails generate import_plugin simple_vulndb

This creates a directory simple_vulndb_import under vendor/plugins and also creates a number of files for us to modify.

Configuring the Plugin

Here we'll just step through the bits that are necessary to get the plugin up and working; there are a number of files we need to modify. Most of this is just a modified version of the default vulndb_import plugin which is provided as part of Dradis.

First up is the configuration file in the plugin config directory.

Dradis uses YAML config files, which have a pretty easy syntax of parameter: value.

Here we can define the hostname, port and path that Dradis will use to access our vulnerability database. This also gives you the flexibility to change it (for instance if you've got a centralised version of the database rather than one hosted locally). The settings below are based on what we configured for the vulnerability database in the last post.

host: localhost
port: 3003
path: /vuln_search.json

With that done we can move on. Next up is the meta.rb file, which can be found in lib/simple_vulndb_import/. Here we just define the name of the plugin and the version information. So for example

  NAME = "Simple Vulnerability Database Import"
  # change this to the appropriate version
  module VERSION #:nodoc:
    MAJOR = 0
    MINOR = 1
    TINY = 0
     STRING = [MAJOR, MINOR, TINY].join('.')

would work fine. Next up is the main piece we need to change: the filters.rb file. This is found in the same directory as the meta.rb file.

Creating the Filters

There are two main pieces to how I've set this up. The first is the filters. Essentially, if we configure one of these for each of the search_types that we defined in the database (description, OWASP reference, severity and test type) then we'll be able to search by those methods from within Dradis.

Dradis handles filters by creating a module within the Filters module that you'll see pre-defined in the filters.rb template.

So for each of our searches we need to create a new module which looks a bit like this.

module TestTypeSearch
  NAME = 'Search Database by Test Types'
  def self.run(params={})
    result = Filters::get_records('test_type',params['query'])
    records = Filters::prepare_results(result.body)
    return records
  end
end

What we're doing here is essentially setting up a NAME constant which contains (rather unsurprisingly) the filter name, then defining the behaviour when the filter is run. This is rather short as we're just calling two class methods and then returning the result.
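
The other search types follow exactly the same pattern, with only the NAME and the search_type string changing. As a quick sketch (the module name here is my own choice), the severity filter could look like this:

module SeveritySearch
  NAME = 'Search Database by Severity'
  def self.run(params={})
    result = Filters::get_records('severity', params['query'])
    records = Filters::prepare_results(result.body)
    return records
  end
end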

When I was writing this file I realised that I was essentially just writing variations of the same logic four times, so in good Ruby practice I tried to DRY up the code and moved most of the logic into the class methods get_records and prepare_results.

get_records looks like this

def self.get_records(search_type,query)
  require 'cgi'
  require 'net/http'
  # read the database location from the plugin's config file
  conf_file = File.join(Rails.root, 'config', 'rvulndb_import.yml')
  conf = YAML::load(File.read(conf_file))
  # query the vulnerability database and return the HTTP response
  http = Net::HTTP.new(conf['host'], conf['port'])
  res = http.get(conf['path'] + '?search_type=' + search_type + '&query=' + CGI::escape(query))
end

So this method opens the configuration file that we defined earlier (you'll notice that it looks in the config directory under the Rails root, so it's a good idea to put a copy in there). Once it's opened, it uses Ruby's YAML library to read the file, sets up an HTTP connection to the database mentioned in the config file and executes the query against it. One thing to note here is the use of CGI::escape, which handles any characters in our query string that aren't allowed in URLs.
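
As a quick illustration of what CGI::escape does (example values only), spaces and other reserved characters get encoded so the query string stays valid:

require 'cgi'
CGI::escape("cross site scripting")   # => "cross+site+scripting"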

OK, so after that method has completed we should have a response containing zero or more records that we can set up to be returned into Dradis.

The next method preps the records for import into Dradis.

def self.prepare_results(json_data)
  recs = []
  jrec = ActiveSupport::JSON::decode(json_data)
  if jrec.length == 0
    error = Hash.new
    error['title'] = "No records found"
    error['description'] = "The search didn't return any records!"
    recs << error
    return recs
  end
  
  jrec.each do |jr|
    newrec = Hash.new
    newrec['title'] = jr['vulnerability']['title']
    newrec['description'] = Filters::build_description(jr['vulnerability'])
    recs << newrec
  end
  return recs
end

So this code just loads up the JSON data that our query should have returned, checks to make sure that we got some records (returning an error if we didn't), then creates a hash for each record. There's one more bit of logic to explain here, which is the call to Filters::build_description. For neatness' sake I broke that bit out. At the moment it's a pretty ugly bit of text building, but it does the job :)

def self.build_description(note_data)
  <<-eos
Vulnerability Title
-------------------
#{note_data['title']}

Vulnerability Description
-------------------------
#{note_data['description']}

Vulnerability Remediation
-------------------------
#{note_data['remediation']}

Technical Notes
---------------
#{note_data['technical_notes']}
  eos
end

This just puts together the body of the note description for each finding, as one long string.

There's obviously a lot more that could be done with this (like better error handling and writing tests) but with those files complete, the module should work ok and you should be able to import vulns from your database directly into Dradis using the "import note" feature.

I've put a copy of the code for the plugin up here, in case it's helpful :)

One of the main tools that I've found useful in pen testing is the Dradis Framework. It's a good way of keeping track of findings and notes during a test, and I've also found its template feature is good for keeping a list of things to remember during a test.

One of the features available in Dradis is import plugins. These let you create a link to an external information source, such as OSVDB or a database of vulnerabilities.

Having a database of vulnerabilities or findings can be pretty useful in cutting down the time required for reporting on a test as you can keep standard wordings in place (who really wants to write the same section about preventing XSS more than once!).

So recently I knocked up a simple vulnerability database to link in to Dradis and I thought it might be of use, so here's the process.


Creating the App

We're going to use Ruby on Rails for this as it's nice and easy to develop with (as you'll see), and it's also what Dradis is based on, so it makes sense to keep all the coding in the same underlying language. Rails apps are also very portable; they're basically contained within a single directory structure, so it's relatively easy to move them from place to place.

Before starting the application, there are the usual prerequisites. I'm using Ruby 1.9.2 and Rails 3, so having those installed is a good thing. If you're using Linux then it's helpful to have RVM working, as some distros don't have Ruby 1.9.2 packaged up as yet.
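
With RVM installed, something along these lines should get the right Ruby in place (a sketch; check the RVM documentation for your setup):

rvm install 1.9.2
rvm use 1.9.2 --default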

Once you've got the prerequisites working, we can start by creating a Rails app

rails new vulnlist

This creates a new application called vulnlist and adds all the standard rails files in.


Creating the Scaffold

Once we've got the app created, we can use Rails scaffolding to quickly create its basic structure. The web pages that scaffolding creates aren't the prettiest, but they'll do for now.

With the scaffold we can specify what fields we want to create in the database and also what data types those fields are.

rails generate scaffold Vulnerability title:string test_type:string description:text remediation:text technical_notes:text severity:string owasp_reference:string

Once we've completed this we can look at the basic app by setting up the database with

rake db:migrate

Ensuring that all our gems are installed ok with

bundle install

and launching the app

rails server

At this point browsing to http://127.0.0.1:3000/vulnerabilities should show a blank page with our fields in it. From this page we can create new vulnerabilities and edit or delete existing ones.

Now that we've got this basic structure set up, it's worth using git to keep a handle on the source code. On Linux the procedure for this is pretty easy.

If you've not already got it installed

sudo apt-get install git-core

then in the root of the application

git init
git add .
git commit -m "Initial Commit with Scaffold"

Having git running on the app will make it pretty easy to revert any mistakes made along the way, as long as we've done regular commits.
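
For example (use with care), to throw away uncommitted changes to tracked files, or to wind everything back to the last commit:

git checkout .
git reset --hard HEAD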

Setting up the Searches

So far we've got a basic structure in place and can do the basic Create, Read, Update, Delete cycle on our data. However, for the Dradis integration, what would be useful is the ability to search for vulnerabilities using various criteria and have the results returned to Dradis.

This turns out to be relatively straightforward. First what we need is a new action in our controller. Opening vulnlist/app/controllers/vulnerabilities_controller.rb we can see the existing actions that we've got for the application.

What we need to do now is add a new action to allow for vulnerability searches.

def vuln_search
  case params[:search_type]
  when "description"
    @vulnerabilities = Vulnerability.where("description like ?", "%"+params[:query]+"%")
  when "owasp"
    @vulnerabilities = Vulnerability.where("owasp_reference like ?", params[:query]+"%")
  when "severity"
    @vulnerabilities = Vulnerability.find_all_by_severity(params[:query])
  when "test_type"
    @vulnerabilities = Vulnerability.find_all_by_test_type(params[:query])
  end
  respond_to do |format|
    format.xml { render :xml => @vulnerabilities }
    format.json { render :json => @vulnerabilities }
  end
end

This defines a new method called "vuln_search" which takes two parameters, search_type and query. The search_type parameter lets us pick from different finders. Rails provides access to the application database via ActiveRecord, and this just uses a couple of its finder types. Where the query will be one of a small set of fixed values, like "severity" (which will be something like High, Medium or Low), we can just use a standard find_all_by_ finder, but where it's a more free-text style search we use Vulnerability.where and pass in the query parameter that way.

The respond_to block is a really nice feature of Rails. By adding in the two lines for :xml and :json, Rails wires up responses so that we can get the data out in those formats with no additional code required.

Now that we've got the basic code in place, we just need to modify the Rails routes so that the application knows how to reach our new method.

This is done by modifying the vulnlist/config/routes.rb file, and adding the following code

controller :vulnerabilities do
  get 'vuln_search' => :vuln_search
end

At this point we've got the application basically working. If you put in a couple of test findings, then you should be able to go to, for example, http://127.0.0.1:3000/vuln_search.xml?search_type=severity&query=High and get some XML data back (the parameter names and search_type values need to match the lower-case strings used in the controller).

Tidying up

Now that we've got the basics working, there's a couple of additional steps that it's worth looking at to tidy some things up.

Selectors

First off, we'd like some of our fields (OWASP reference, severity and test type) to be restricted to a set of defined values. The "proper" way to do this would be to create additional models for these and link them into the main vulnerabilities controller, but there's a quicker way which is probably going to work well enough for our purposes.

Opening up vulnlist/app/models/vulnerability.rb, we can specify some constant values for these settings:

TEST_TYPES = ["Web Application","Windows Server","Unix Server","Wireless","Web Server","Oracle","MS SQL","MySQL","DB2"]
SEVERITY_LEVELS = ["Critical","High","Medium","Low","No Impact"]
OWASP_TOP_10 = ["A1 - Injection","A2 - Cross Site Scripting (XSS)","A3 - Broken Authentication and Session Management","A4 - Insecure Direct Object Reference","A5 - Cross-Site Request Forgery (CSRF)","A6 - Security Misconfiguration","A7 - Insecure Cryptographic Storage","A8 - Failure to Restrict URL Access","A9 - Insufficient Transport Layer Protection","A10 - Unvalidated Redirects and Forwards"]

Then we can modify the form that the scaffolding process created to use these arrays as a select list. The form is found in vulnlist/app/views/vulnerabilities/_form.html.erb. In that file we just need to replace the "text_field" lines for those three fields with the following select lines

<%= f.select :test_type, Vulnerability::TEST_TYPES, :prompt => "Select the test type" %>
<%= f.select :severity, Vulnerability::SEVERITY_LEVELS, :prompt => "Select the severity level" %>
<%= f.select :owasp_reference, Vulnerability::OWASP_TOP_10, :prompt => "Select the appropriate OWASP top 10 reference" %>

This picks up the Constants from our model and helps keep the data consistent.

Localhost Only

As you'll have noticed, this application has pretty much no security whatsoever. At the moment it's set up as a personal database only and isn't suitable to be exposed on any kind of network. Adding that security isn't too difficult with Rails, but it's not really a problem for the basic use case we have here: both the vulnerability list and the Dradis installation only need to listen on localhost.

Configuring Rails to only listen on localhost (as opposed to specifying it on the command line every time) is a bit hacky, but here's a way to do it based on this post and this Dradis change. We need to modify the vulnlist/script/rails file and add the following lines:

require 'rubygems'
require 'rails/commands/server'
require 'rack'
require 'webrick'

module Rails
  class Server < ::Rack::Server
    def default_options
      super.merge({
        :Port => 3003,
        :Host => "127.0.0.1",
        :environment => (ENV['RAILS_ENV'] || "development").dup,
        :daemonize => false,
        :debugger => false,
        :pid => File.expand_path("tmp/pids/server.pid"),
        :config => File.expand_path("config.ru")
      })
    end
  end
end

This also moves the application off the default port of 3000, to a new one of 3003 which hopefully shouldn't clash with other services.

Default Routes

At the moment, if we visit the root page of our application (now at http://127.0.0.1:3003) we get the default Rails welcome page. What would be nicer is to be redirected to the vulnerability listing automatically.

That's easily done with two steps. First edit the vulnlist/config/routes.rb file and add the line

  root :to => "vulnerabilities#index"

then delete the vulnlist/public/index.html file.

Summary

So at the end of this first part we've created a basic vulnerability database which we can search easily on a number of parameters.

The next step is to create the Dradis plugin to hook the two together, which, as I'll cover next time, is a reasonably easy thing to do.

New Role, New Blog

I've just started a new role as a director at 7 Elements. We're providing technical security consultancy and penetration testing services, focusing on the Scottish market.

As part of that we've started up a blog here to talk through some of the ideas we've got for approaching security and testing in a pragmatic way.

I'm planning to keep this blog running for now (not that you could tell from the level of posts!), but more of my security/testing stuff will probably pop up on the 7 Elements blog...

Wireless Scanning and a new tool


I had cause to do some wireless work recently, which got me interested in doing some more war-walking (and hey, the weather's actually been nice enough to make it pleasant).

It was interesting to see the density of wireless networks in the suburban area near where I live; a quick 30-minute walk can easily pick up several hundred APs. Some of the stats on encryption were interesting too, with about 25% of networks either using WEP or having no encryption at all, so there are still rich pickings for anyone who wants free access or to attack home networks directly.

I also did a bit of scanning with my N900 in Glasgow, near the Apple store, and noticed they've got an awful lot of clients connected to their unencrypted wireless networks there (~260 clients spread over 3 APs). I hope everyone is using VPNs or SSL-only sites ;op

I also couldn't find anything to do the analysis the way I wanted, so I knocked up a quick script in Ruby to analyse the .netxml output files from Kismet.

It's available here. It needs Ruby, RubyGems and Nokogiri to work. It's worth noting that on Linux installs you'll need some XML parsing libraries installed before installing Nokogiri (libxslt, libxml2, libxml2-dev).
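
Once those libraries are in place, installing the gem itself should just be a case of:

gem install nokogiri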


Basic syntax is very straightforward.

./kis_analysis.rb -f [netxml file] -r [report name]

You can add -g if you've got GPS data, to add links from each network to a Google Maps point, and -m to draw a map of all the networks seen.

Any feedback/comments welcome either on the blog or to rorym@nmrconsult.net

One of the aspects of the move to cloud computing I find most interesting is the new and emergent risks that come with moving services from a traditional networked IT environment to being hosted "out in the open" in the cloud.

Whilst attention gets paid to some of the technical risks, I don't think there's been a lot of focus on some of the more procedural/human aspects of it.

One example is the visibility/effect of configuration mistakes. In a traditional IT environment, mistakes can be partially contained by the network perimeter (albeit that containment is usually weaker than it used to be).

If someone makes an access control change which allows anonymous access to data, that mistake is likely only to be exploitable and visible to a limited group of people.

With the move to cloud computing though, that same mistake could be instantly visible to the whole world and all its attacker communities.

A really good example of this comes up in a vulnerability found by Jonathan Siegel (background story here and here).

In essence the problem seems to be that users of Amazon Web Services have made access control errors which set disk snapshots to be publicly available to everyone in a given region. In the examples Jonathan gives, this has included a database of user accounts for a web service and a full copy of a news service's web site.

So what would have likely been a relatively minor access control issue in an Internal network setup, becomes a situation where all the data in question should be considered compromised.
