Another nice feature of whisker is that of "data mining" - searching for
interesting files or directories on servers. Another program that does the
same type of thing is called cgichk (I got it off Packetstorm - I don't see
any URLs in the documentation). We will stick to whisker though. The default
database does some mining, but better mining databases exist. One such DB
is brute.db, also to be found on RFP's site. This DB makes whisker search
for anything that looks password-ish or admin-ish, and for other interesting
files. Keep your eyes open for similar DB files.
I recently started working on another technique that is proving to be quite
useful. The idea here is to mirror the whole website and find common
directories. For instance, an administrative backend that sits on
http://xx.com/whole_site_here/admin.asp won't be found with the normal
techniques. The idea is thus to mine the site for directories and put the
common dirs into the brute.db file of whisker. Let's look at how. First I
copy the site (using lynx):
# lynx -accept_all_cookies -crawl -traversal http://www.sensepost.com
(You might try something like Teleport Pro for Windows as well.) You will find
a lot of files in the directory where you executed the command from. The *.dat
files contain the actual pages. The file "reject.dat" is interesting as it
contains links to other sites - it might help you to build a model of
business relations (if anything). It also shows all the "mailto" addresses -
nice to get additional domain names related to the target. In the file
"traverse.dat" you will find all the links on the site itself. Now all you
need to do is look for common directories & populate the whisker brute.db
file with them.
/tmp> cat traverse.dat | awk -F 'http://www.sensepost.com/' '{print $2}' | awk -F '/' '{print $1}' | sort | uniq | grep -v "\." | grep -v "\?"
misc
training
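If you prefer something a little more robust than the awk one-liner (it trips up on https links or a trailing slash on the base URL), the same extraction can be sketched in a few lines of Python. This is a hedged sketch, not part of whisker: the sample URLs and the `first_level_dirs` helper are made up for illustration, and it assumes traverse.dat holds one URL per line.

```python
# Sketch: pull unique first-level directories out of lynx's traverse.dat.
# Assumes one URL per line; the hostname is the example target from the text.
from urllib.parse import urlparse

def first_level_dirs(lines, host="www.sensepost.com"):
    dirs = set()
    for line in lines:
        parsed = urlparse(line.strip())
        if parsed.netloc != host:
            continue  # skip off-site links (those end up in reject.dat anyway)
        parts = [p for p in parsed.path.split("/") if p]
        if not parts:
            continue
        top = parts[0]
        if "." in top:
            continue  # like the grep -v "\." above: drop anything file-like
        dirs.add(top)
    return sorted(dirs)

# Made-up sample input standing in for a real traverse.dat:
sample = [
    "http://www.sensepost.com/misc/tools.html",
    "http://www.sensepost.com/training/course1/index.html",
    "http://www.sensepost.com/index.html",
    "http://www.othersite.com/admin/",
]
print(first_level_dirs(sample))  # ['misc', 'training']
```

In practice you would feed it `open("traverse.dat")` instead of the sample list.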
You need to change the root directories in brute.db in the line that says:
array roots = /, cgi-bin, cgi-local, htbin, cgibin, cgis, cgi, scripts
to something like:
array roots = /, misc, training, cgi-bin, cgi-local, htbin, cgibin, cgis, cgi, scripts
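Editing that line by hand is fine for a couple of directories, but if you mine many sites it can be scripted. A minimal sketch, assuming brute.db keeps its roots on a single `array roots = ...` line (the exact syntax can vary between whisker versions, and `add_roots` is a made-up helper name):

```python
# Sketch: splice mined directories into whisker's "array roots" line.
# Assumes a single "array roots = ..." line; not a drop-in whisker tool.
import re

def add_roots(db_text, new_dirs):
    def splice(match):
        existing = [r.strip() for r in match.group(1).split(",")]
        # keep "/" first, then the mined dirs, then the stock roots
        merged = existing[:1] + [d for d in new_dirs if d not in existing] + existing[1:]
        return "array roots = " + ", ".join(merged)
    return re.sub(r"array roots = (.+)", splice, db_text, count=1)

db = "array roots = /, cgi-bin, cgi-local, htbin, cgibin, cgis, cgi, scripts"
print(add_roots(db, ["misc", "training"]))
# array roots = /, misc, training, cgi-bin, cgi-local, htbin, cgibin, cgis, cgi, scripts
```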
Now fire up whisker with the new brute.db file:
> perl whisker.pl -h www.sensepost.com -s brute.db -V
and you might be surprised to find interesting files and directories you
wouldn't have seen otherwise.