Domain Analyzer
Domain analyzer is a security analysis tool that automatically discovers and reports information about a given domain. Its main purpose is to analyze domains in an unattended way.
Features
- It creates a directory with all the information, including nmap output files.
- It uses colors to highlight important information on the console.
- It detects some security problems like host name problems, unusual port numbers and zone transfers.
- It is heavily tested and it is very robust against DNS configuration problems.
- It uses nmap for active host detection, port scanning and version information (including nmap scripts).
- It searches SPF records to find new hostnames or IP addresses.
- It searches for reverse DNS names and compares them to the hostname.
- It prints out the country of every IP address.
- It creates a PDF file with results.
- It automatically detects and analyzes sub-domains!
- It searches for the domain's email addresses.
- It checks the 192 most common hostnames in the DNS servers.
- It checks for Zone Transfer on every DNS server.
- It finds the reverse names of the /24 network range of every IP address.
- It finds active hosts using nmap's complete set of techniques.
- It scans ports using nmap.
- It searches for host and port information using nmap.
- It automatically detects web servers used.
- It crawls every web server page using our Web Crawler Security Tool.
- It filters out hostnames based on their name.
- It pseudo-randomly searches N domains in google and automatically analyzes them!
- Use CTRL-C to stop the current analysis stage and continue working.
Now untar the file: tar zxvf domainanalyzer.tar.gz
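For example, a minimal first run might look like the following. This is only a sketch: the extracted directory name is an assumption, the chmod step is only needed if the scripts are not already executable, and example.com is a placeholder domain.

tar zxvf domainanalyzer.tar.gz
cd domainanalyzer                                # assumed name of the extracted directory
chmod +x domain_analyzer_v_0.5.py crawler.py     # make the scripts executable if needed
./domain_analyzer_v_0.5.py -d example.com        # basic unattended analysis of a placeholder domain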
Crawler
./crawler.py -u www.hackingarticles.in
Options:
-u, --url                    URL to start crawling.
-m, --max-amount-to-crawl    Maximum depth to crawl, using a breadth-first algorithm.
-w, --write-to-file          Save a summary of the crawl to a text file. The output directory is created automatically.
-s, --subdomains             Also scan subdomains matching the URL's domain.
-r, --follow-redirect        Do not follow redirects. By default, redirection at the main URL is followed.
-f, --fetch-files            Download every detected file into a 'Files' directory. Overwrites existing content.
-F, --file-extension         Download files with the given comma-separated extensions. Also activates the 'fetch-files' option. Ex.: -F pdf,xls,doc
-d, --docs-files             Download document files: xls, pdf, doc, docx, txt, odt, gnumeric, csv, etc. Also activates the 'fetch-files' option.
-E, --exclude-extensions     Do not download files matching these extensions. Requires '-f', '-F' or '-d'.
-h, --help                   Show this help message and exit.
-V, --version                Output version information and exit.
-v, --verbose                Be verbose.
-D, --debug                  Debug.
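As an illustration, several of these options can be combined in one run; the target below is a placeholder, not a real scan:

# Crawl up to 100 pages, include matching subdomains, save a summary to disk,
# and fetch only PDF and DOC files, with verbose output (placeholder target)
./crawler.py -u www.example.com -m 100 -s -w -F pdf,doc -v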
Domain Analyzer
./domain_analyzer_v_0.5.py -d www.example.com
Options
-h, --help                      Show this help message and exit.
-V, --version                   Output version information and exit.
-D, --debug                     Debug.
-d, --domain                    Domain to analyze.
-j, --not-common-hosts-names    Do not check common host names. Quicker, but you may miss hosts.
-t, --not-zone-transfer         Do not attempt to transfer the zone.
-n, --not-net-block             Do not attempt an nmap -sL scan of each IP netblock.
-o, --store-output              Store everything in a directory named after the domain. Nmap output files and the summary are stored inside.
-a, --not-scan-or-active        Do not use nmap to scan ports or to search for active hosts.
-p, --not-store-nmap            Do not store any nmap output files in the directory.
-e, --zenmap                    Move nmap XML files to a directory and open zenmap with the topology of the whole group. Your user should have access to the DISPLAY variable.
-g, --not-goog-mail             Do not use goog-mail.py (embedded) to look for emails for each domain.
-s, --not-subdomains            Do not analyze sub-domains recursively. You will lose subdomain internal information.
-f, --create-pdf                Create a PDF file with all the information.
-w, --not-webcrawl              Do not web crawl every web site (on every port) found, looking for public web misconfigurations (directory listing, etc.).
-m, --max-amount-to-crawl       If you crawl, do it up to this amount of links for each web site. Defaults to 50.
-F, --download-files            If you crawl, also download the files found, up to this amount per web site. Defaults to 10.
-c, --not-countrys              Do not resolve the country name for every IP and hostname.
-q, --not-spf                   Do not check SPF records.
-k, --random-domain             Find this amount of domains from google and analyze them (for the base domain).
-x, --nmap-scantype             Nmap parameters for the port scan. Defaults to: '-O --reason --webxml --traceroute
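As an illustration, a typical combined invocation might look like this; example.com is a placeholder and the chosen flags are just one possible selection from the options above:

# Analyze a domain, store all output (including nmap files) in a directory named
# after the domain, create a PDF report, skip the goog-mail email search,
# and crawl up to 100 links per web site (placeholder target)
./domain_analyzer_v_0.5.py -d example.com -o -f -g -m 100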