
Some Security Tips for Ruby Hackers: Leveraging the Attack Surface. Part 1.

In the first episode I introduced the security checks I'd like to cover in the talk I have to give next Friday.

Today we will talk about the code to automate these checks.

The attack surface

Discovering the attack surface will be the first part of my talk. It covers:

Category | OWASP Testing Guide reference | Test name
Information Gathering | OWASP-IG-001 | Spiders, Robots and Crawlers
Information Gathering | OWASP-IG-002 | Search Engine Discovery/Reconnaissance
Information Gathering | OWASP-IG-003 | Identify application entry points
Information Gathering | OWASP-IG-004 | Testing for Web Application Fingerprint
Information Gathering | OWASP-IG-005 | Application Discovery
Information Gathering | OWASP-IG-006 | Analysis of Error Codes
Configuration Management Testing | OWASP-CM-001 | SSL/TLS Testing
Configuration Management Testing | OWASP-CM-002 | DB Listener Testing
Configuration Management Testing | OWASP-CM-003 | Infrastructure Configuration Management Testing
Configuration Management Testing | OWASP-CM-004 | Application Configuration Management Testing
Configuration Management Testing | OWASP-CM-005 | Testing for File Extensions Handling
Configuration Management Testing | OWASP-CM-006 | Old, backup and unreferenced files
Configuration Management Testing | OWASP-CM-007 | Infrastructure and Application Admin Interfaces
Configuration Management Testing | OWASP-CM-008 | Testing for HTTP Methods and XST

Let’s go.

OWASP-IG-001: Spiders, Robots and Crawlers

In April I wrote a post about using robots.txt as an attack weapon. Do you remember it? No?!? Go back and read it.

Acting as a spider, it is possible to discover how wide your website is and to spot interesting entry points.

The robots.txt file is the first thing I check if I want to find out more about your site.

The links rubygem is a piece of code I wrote to automate OWASP-IG-001 testing.

As you may see… Net::HTTP is enough to play with this test.

Links::Api.robots - testing for OWASP-IG-001
# TESTING: SPIDERS, ROBOTS, AND CRAWLERS (OWASP-IG-001)
def self.robots(site, only_disallow=true)

  # default to HTTP when no protocol has been supplied
  if (! site.start_with? 'http://') and (! site.start_with? 'https://')
    site = 'http://'+site
  end

  list = []
  begin
    res = Net::HTTP.get_response(URI(site+'/robots.txt'))
    if (res.code != "200")
      return []
    end

    # collect the paths listed in Disallow: (and, optionally, Allow:) lines
    res.body.split("\n").each do |line|
      if only_disallow
        if (line.downcase.start_with?('disallow'))
          list << line.split(':', 2)[1].to_s.strip
        end
      else
        if (line.downcase.start_with?('allow') or line.downcase.start_with?('disallow'))
          list << line.split(':', 2)[1].to_s.strip
        end
      end
    end
  rescue
    # network errors or a missing robots.txt simply mean "nothing found"
    return []
  end

  list
end
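
Just to give an idea of how this can be used, here is a minimal sketch (assuming the gem is installed and loaded with a plain require 'links'): print every path the target asks crawlers to stay away from.

require 'links'

# list the Disallow: entries advertised by the target's robots.txt
Links::Api.robots('www.target.com').each do |path|
  puts path
end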

In the bin/links Ruby script we check whether each disallowed link is accessible or not. Discovering disallowed URLs that are still accessible is important if we want to find service doors and try to break in.

what we can do with robots.txt content (again from links rubygem)
list.each do |l|
  if robots or bulk
    if ! l.start_with? '/'
      l = '/'+l.chomp
    end
    if ! target.start_with? 'http://' and ! target.start_with? 'https://'
      #defaulting to HTTP when no protocol has been supplied
      target = "http://"+target
    end

    print "#{target}#{l}:".color(:white)
    start = Time.now
    code = Links::Api.code(target+l, proxy)
    stop = Time.now
  else
    print "#{l}:".color(:white)
    start = Time.now
    code = Links::Api.code(l, proxy)
    stop = Time.now
  end
...
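
The implementation of Links::Api.code is not shown here; a rough sketch of what such a helper boils down to with plain Net::HTTP could look like this (my simplification: the real method also accepts a proxy parameter, which I ignore).

require 'net/http'
require 'uri'

# return the HTTP status code for a URL, or nil if the request fails
def status_code(url)
  Net::HTTP.get_response(URI(url)).code
rescue StandardError
  nil
end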

Crawling: the clean way

What about crawling a website? By crawling I mean retrieving all the possible URLs starting from the homepage, extracting all the links in the HTML, and recursively making a lot of requests.

But we're lucky enough: someone has already made a great gem for us.

Using the anemone rubygem, we have a clean DSL for crawling a website, following the links extracted from the web pages we find.

crawling a website using anemone
require 'anemone'

Anemone.crawl("http://www.target.com/") do |anemone|
  anemone.on_every_page do |page|
    puts page.url
  end
end
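
As a small variation of my own (not something the gem does for you), it is easy to keep only the URLs that carry a query string, which are usually the most interesting entry points to probe further:

require 'anemone'

entry_points = []
Anemone.crawl("http://www.target.com/") do |anemone|
  anemone.on_every_page do |page|
    # page.url is a URI object, so we can look at its query string
    entry_points << page.url.to_s unless page.url.query.nil?
  end
end

puts entry_points.uniq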

Crawling: the bruteforce way

Even before discovering the anemone rubygem, I wrote the enchant gem to discover links by bruteforcing URLs with words taken from a dictionary.

Using a bruteforce approach can be useful when an important link is not listed in robots.txt (and I do suggest you keep it out of there) and is most likely not linked from any of the public pages.

The Enchant::Engine.get_list method is trivial: it takes the words from a dictionary I borrowed from the OWASP ZAP project.

Enchant::Engine.get_list
def get_list

  if @wordlist.nil?
    # look for the bundled wordlist in the most likely locations
    if File.exists?('../../db/directory-list-2.3-small.txt')
      @wordlist = '../../db/directory-list-2.3-small.txt'
    elsif File.exists?('./db/directory-list-2.3-small.txt')
      @wordlist = './db/directory-list-2.3-small.txt'
    else
      # no wordlist found: return an empty list
      return @list = []
    end
  end

  begin
    File.open(@wordlist, 'r') { |f|
      @list = f.readlines
    }
  rescue Errno::ENOENT
    puts "it seems the wordlist file is not present (#{@wordlist})".color(:red)
    @list = []
  end
end

There is no real magic in the Enchant::Engine.scan method: just a bunch of GETs and checks for error codes. I know, I won't win the Turing Award for these pieces of code, but sometimes they have saved my day in real penetration tests.

Enchant::Engine.scan main loop
# counter for consecutive "connection refused" errors
refused = 0
list.each do |path|
  pbar.inc
  # skip the comment lines of the wordlist
  if ! path.start_with? '#'
    begin
      response = http.get('/'+path.chomp)
      c = response.code.to_i
      refused = 0
      if c == 200
        @urls_open << path
      end
      if c == 401
        @urls_private << path
      end
      if c >= 500
        @urls_internal_error << path
      end
    rescue  Errno::ECONNREFUSED
      refused += 1
      if refused > 5
        pbar.finish
        puts "received 5 connection refused. #{@host} went down".color(:red)
        return @urls_open.count
      else
        puts "[WARNING] connection refused".color(:yellow)
        sleep 2 * refused
      end

    rescue Net::HTTPBadResponse
      refused = 0
      if @verbose
        puts "#{$!}".color(:red)
      end
    rescue Errno::ETIMEDOUT
      refused = 0
      if @verbose
        puts "#{$!}".color(:red)
      end
    end
  end
end
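
If you want to see the bare idea without the gem around it, here is a stripped-down sketch using plain Net::HTTP (hypothetical host and wordlist path, no progress bar and no retry logic):

require 'net/http'

host  = 'www.target.com'
words = File.readlines('directory-list-2.3-small.txt').map(&:chomp)

found = []
Net::HTTP.start(host, 80) do |http|
  words.each do |word|
    next if word.empty? || word.start_with?('#')
    # keep the paths that answer 200 OK
    found << "/#{word}" if http.get("/#{word}").code == '200'
  end
end

puts found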

OWASP-IG-002: Search Engine Discovery/Reconnaissance

This task can be done easily with a browser. Just point it to google.com and use the 'site:' special keyword to search for all the pages of a website indexed by Google.

A sample query that enumerates all the stuff you can find related to the armoredcode.com domain is: http://www.google.it/search?q=site:armoredcode.com

Of course you can use Net::HTTP in this case too, but Google is not happy to be called in an automated way without authentication and outside its API terms of use… so it's easier not to automate this task at all :–)

OWASP-IG-004: Testing for Web Application Fingerprint

This is a two-year-old project, and maybe it would be a great idea to write a new and better fingerprinter; however, the wafp script can be used to try to detect the CMS version or the particular application server serving our target.
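
wafp does much more than this, but the basic idea of fingerprinting can be sketched in a few lines of Ruby by looking at the headers that most often leak server and framework versions (target URL and header names are just examples):

require 'net/http'
require 'uri'

res = Net::HTTP.get_response(URI('http://www.target.com/'))

# headers that commonly leak version information
%w[server x-powered-by x-aspnet-version].each do |header|
  puts "#{header}: #{res[header]}" unless res[header].nil?
end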

OWASP-CM-001: SSL/TLS Testing

For SSL/TLS testing I use a rubygem I wrote a couple of months ago: ciphersurfer.

I blogged about ciphersurfer in my previous blog: here and here.

Maybe those two posts deserve a repost on armoredcode.com.

However, the trick behind ciphersurfer is simply making HTTPS calls using standard Ruby networking APIs (again, no voodoo here).

lib/ciphersurfer/scanner.rb
def go
  context=OpenSSL::SSL::SSLContext.new(@proto)
  cipher_set = context.ciphers
  cipher_set.each do |cipher_name, cipher_version, bits, algorithm_bits|

    request = Net::HTTP.new(@host, @port)
    request.use_ssl = true
    request.verify_mode = OpenSSL::SSL::VERIFY_NONE
    request.ciphers = cipher_name
    begin
      response = request.get("/")
      @ok_bits << bits
      @ok_ciphers << cipher_name
    rescue OpenSSL::SSL::SSLError => e
      # Quietly discard SSLErrors, really I don't care if the cipher has
      # not been accepted
    rescue
      # Quietly discard all other errors... you must perform all error
      # checks in the calling program
    end
  end
end

Here we don't use httpclient helpers since I want to play with one cipher at a time.

That's it. All the magic happens there. Now, let's look at the bin script to see how the scoring system is used.

First of all, we must scan the target for all the protocols we support.

bin/ciphersurfer
protocol_version.each do |version|
  s = Ciphersurfer::Scanner.new({:host=>host, :port=>port, :proto=>version})

  s.go
  if (s.ok_ciphers.size != 0)
    supported_protocols << version
    cipher_bits = cipher_bits | s.ok_bits
    ciphers = ciphers | s.ok_ciphers
  end

end
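
For the record, protocol_version here is simply the list of protocol symbols we want to probe; something along these lines (which exact symbols OpenSSL::SSL::SSLContext accepts depends on your Ruby/OpenSSL build, so treat this as an assumption):

# an assumption about its content: the SSL/TLS versions the local OpenSSL can speak
protocol_version = [:SSLv3, :TLSv1]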
bin/ciphersurfer
cert= Ciphersurfer::Scanner.cert(host, port)
if ! cert.nil?
  a=cert.public_key.to_text
  key_size=/Modulus \((\d+)/i.match(a)[1]
else
  puts "warning: the server didn't give us the certificate".color(:yellow)
  key_size=0
end
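
As a side note (my suggestion, not what the gem does), for RSA certificates the same number can be read straight from the OpenSSL API without regular expressions:

# works only for RSA public keys; the to_text/regex approach is the generic fallback
if !cert.nil? && cert.public_key.is_a?(OpenSSL::PKey::RSA)
  key_size = cert.public_key.n.num_bits
end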

Note that we don’t make another GET here since we did it at the beginning of the engagement when we checked if the target was alive or not.

Now, let’s calculate the scores, all of them in a 0..100 range.

bin/ciphersurfer
proto_score=  Ciphersurfer::Score.evaluate_protocols(supported_protocols)
cipher_score= Ciphersurfer::Score.evaluate_ciphers(cipher_bits)
key_score=    Ciphersurfer::Score.evaluate_key(key_size.to_i)
score=        Ciphersurfer::Score.score(proto_score, key_score, cipher_score)
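
Just to give an idea of what the final combination could look like (the real weights live inside Ciphersurfer::Score.score, so treat these numbers as an assumption borrowed from the SSL Labs rating approach):

# hypothetical weighting: 30% protocols, 30% key exchange, 40% cipher strength
def combined_score(proto_score, key_score, cipher_score)
  (0.3 * proto_score + 0.3 * key_score + 0.4 * cipher_score).round
end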

And then, some graphics to make the experience more appealing.

bin/ciphersurfer
printf "%20s : %s (%s)\n", "Overall evaluation", Ciphersurfer::Score.evaluate(score), score.to_s
printf "%20s : ", "Protocol support"
proto_score.to_i.times{print 'o'.color(score_to_color(proto_score))}
puts ' ('+proto_score.to_s+')'
printf "%20s : ",  "Key exchange"
key_score.to_i.times{print 'o'.color(score_to_color(key_score))}
puts ' ('+key_score.to_s+')'
printf "%20s : ", "Cipher strength"
cipher_score.to_i.times{print 'o'.color(score_to_color(cipher_score))}
puts ' ('+cipher_score.to_s+')'

Wrap up

This is the first episode about leveraging the attack surface of a web application and, as I was writing it, I realized a couple of things:

  • in the Friday talk I'll go completely over time
  • there are so many things to say about using Ruby and the OWASP Testing Guide that it's worth making something bigger…
  • I have a lot of things yet to learn
  • I don’t have fancy pictures to put on my Friday slideshow

Reference

Please note that this post series is built using the OWASP Testing Guide as its skeleton.

Off topic

Truth be told, I'm nervous about the #rubyday talk. The talks I've given up to today were at security conferences, where I feel comfortable.

Here I'm going to talk about code to people who really understand it, who write great code, and who are not afraid to show it.

Just a bit scared… I hope they like it.

