Wayback WebApp Hacking

Thursday, January 06, 2011

Rob Fuller

Archive.org lets you check the history of sites and pages, but a service most people aren't aware of is one that gives you a list of every page Archive.org has stored for a given domain.

This is great for enumerating web applications; many times you'll find parts of a web app that have been long forgotten (and are usually vulnerable).

This module doesn't make any requests to the targeted domain; it simply outputs a list, to the screen or to a file, of all the pages it has found on Archive.org.
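If you want to pull the same list without Metasploit, Archive.org exposes a CDX query interface you can hit directly. Here's a minimal sketch in Python; the endpoint and parameters are Archive.org's public CDX API (web.archive.org/cdx/search/cdx), which isn't necessarily the exact request this module makes:

#!/usr/bin/env python
# Minimal sketch: ask Archive.org's CDX interface for every URL it has
# stored for a domain. The endpoint and parameters here are the public
# CDX API, an assumption, not necessarily what the module requests.
import urllib

domain = "portswigger.net"  # the example target used later in this post

query = ("http://web.archive.org/cdx/search/cdx"
         "?url=%s/*&output=text&fl=original&collapse=urlkey" % domain)

for url in urllib.urlopen(query).read().splitlines():
    print url

In the module itself it looks like this: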

msf auxiliary(enum_wayback) > info
       Name: Pull Archive.org stored URLs for a domain
    Version: 10394
    License: Metasploit Framework License (BSD)
       Rank: Normal

Provided by:  Rob Fuller

Basic options:

  Name     Current Setting  Required  Description
  ----     ---------------  --------  -----------
  DOMAIN   portswigger.net  yes       Domain to request URLS for
  OUTFILE                   no        Where to output the list for use

Description:

  This module pulls and parses the URLs stored by Archive.org for the
  purpose of replaying during a web assessment, finding unlinked and
  old pages.

msf auxiliary(enum_wayback) > run

[*] Pulling urls from Archive.org
[*] Located 289 addresses for portswigger.net
http://portswigger.net/
http://portswigger.net/books/
http://portswigger.net/burp/
http://portswigger.net/burp/bullet.gif
http://portswigger.net/burp/buy.html
http://portswigger.net/burp/help.html
http://portswigger.net/burp/ps.css
http://portswigger.net/burp/screenshots.html
http://portswigger.net/burp/tc.html
http://portswigger.net/corner.gif

SNIPPED

You can set the OUTFILE so that you can parse the list a bit (see the filtering sketch after the script below) and import it into Burp, or use a quick script to make the queries yourself. Here is one I wrote in Python:

# cat requestor.py

#!/usr/bin/env python
import urllib

# Send every request through a local proxy (e.g. Burp on 8080) so the
# responses land in its proxy history and site map.
proxies = {'http': 'http://127.0.0.1:8080'}
filename = "/tmp/waybacklist.txt"

fl = open(filename, 'r')
for line in fl:
    url = line.strip()  # drop the trailing newline before requesting
    if len(url) < 4:
        print "Skipping blank line"
    else:
        print "Requesting " + url
        urllib.urlopen(url, proxies=proxies).read()
fl.close()
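
Point Burp (or whatever proxy you're using) at 127.0.0.1:8080, drop the module's OUTFILE at /tmp/waybacklist.txt, and run the script; every page Archive.org knows about gets requested through the proxy and shows up in your site map, ready for the assessment.

As for parsing the list a bit first: the output above shows it includes static assets like .gif and .css files that aren't worth replaying. Here's a quick filter, again just a sketch (the extension list is an example, tune it to your target):

#!/usr/bin/env python
# Sketch: drop obvious static assets from the OUTFILE list before replay.
# The skip list below is illustrative, not exhaustive.
skip = ('.gif', '.jpg', '.png', '.css', '.ico')

for line in open('/tmp/waybacklist.txt'):
    url = line.strip()
    if url and not url.lower().endswith(skip):
        print url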

Cross-posted from Room362
