
domain_analyzer's Introduction

Domain Analyzer v0.8.3


What

Domain analyzer is a security analysis tool that automatically discovers and reports information about a given domain. Its main purpose is to analyze domains in an unattended way. It has many crazy features, such as getting more domains from DNS zones, automatic nmap, webcrawler, and world domination mode. Check the features.

If you want nmap to scan more ports, run scripts, and run the crawler on those websites, you need to be root.

Example default options

See the demo in asciinema at https://asciinema.org/a/466274

How

Domain analyzer takes a domain name and finds information about it, such as DNS servers, mail servers, IP addresses, emails on Google, SPF information, etc. After all the information is stored and organized, it scans the ports of every IP found using nmap and performs several other security checks. After the ports are found, it uses the crawler.py tool from @verovaleros to spider the complete web site on every web port found. This tool has the option to download files and find open folders.
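The flow above (gather information, then port-scan, then crawl) can be sketched as a small pipeline. The stage functions here are hypothetical stubs standing in for the real DNS/nmap/crawler steps, not domain_analyzer.py's actual code:

```python
# Minimal sketch of the analysis pipeline described above.
# Each stage function is a hypothetical stub; the real tool uses
# DNS queries, nmap, and crawler.py respectively.

def gather_dns_info(domain):
    # Real tool: DNS servers, mail servers, SPF records, emails on Google, etc.
    return {"domain": domain, "ips": ["192.0.2.1"], "mail_servers": ["mx." + domain]}

def scan_ports(ips):
    # Real tool: runs nmap against every IP found.
    return {ip: [80, 443] for ip in ips}

def crawl_web_ports(port_map):
    # Real tool: runs crawler.py against every web port found.
    return [f"http://{ip}:{port}/" for ip, ports in port_map.items() for port in ports]

def analyze_domain(domain):
    info = gather_dns_info(domain)   # 1. collect and organize information
    ports = scan_ports(info["ips"])  # 2. port-scan every IP found
    urls = crawl_web_ports(ports)    # 3. spider every web port found
    return {"info": info, "ports": ports, "crawled": urls}

result = analyze_domain("example.com")
```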

The main features are:

  • It creates a directory with all the information, including nmap output files.
  • It uses colors to highlight important information on the console.
  • It detects some security problems, like hostname issues, unusual port numbers and zone transfers.
  • It is heavily tested and very robust against DNS configuration problems.
  • It uses nmap for active host detection, port scanning and version information (including nmap scripts).
  • It searches SPF records to find new hostnames or IP addresses.
  • It searches for reverse DNS names and compares them to the hostname.
  • It prints out the country of every IP address.
  • It creates a PDF file with the results.
  • It automatically detects and analyzes sub-domains!
  • It searches for the domain's email addresses.
  • It checks the 192 most common hostnames on the DNS servers.
  • It checks for Zone Transfer on every DNS server.
  • It finds the reverse names of the /24 network range of every IP address.
  • It finds active hosts using nmap's complete set of techniques.
  • It scans ports using nmap (remember that for the SYN scan you need to be root).
  • It searches for host and port information using nmap.
  • It automatically detects the web servers used.
  • It crawls every web server page using our crawler.py tool. See the description below.
  • It filters out hostnames based on their name.
  • It pseudo-randomly searches N domains in Google and automatically analyzes them!
  • Use CTRL-C to stop the current analysis stage and continue working.
  • It can read an external file with domain names and try to find them on the domain.
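As an illustration of the common-hostnames check above, one way to build the candidate FQDNs to resolve is shown below. The wordlist here is a tiny hypothetical sample, not the tool's actual 192-entry list, and a real run would then resolve each candidate (e.g. with dnspython) and keep the names that answer:

```python
# Sketch of the "most common hostnames" brute-force step: build
# candidate FQDNs from a wordlist. This wordlist is a small sample
# for illustration, not domain_analyzer's real 192-entry list.

COMMON_HOSTNAMES = ["www", "mail", "ftp", "ns1", "ns2", "webmail", "vpn"]

def candidate_fqdns(domain, wordlist=COMMON_HOSTNAMES):
    # Normalize leading/trailing dots, then prepend each candidate label.
    domain = domain.strip(".")
    return [f"{word}.{domain}" for word in wordlist]

print(candidate_fqdns("example.com")[:3])
```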

Bonus features

@verovaleros developed a separate Python web crawler called "crawler.py". Its main features are:

  • Crawls http and https web sites.
  • Crawls http and https web sites not using common ports.
  • Uses regular expressions to find 'href' and 'src' HTML tags, as well as content links.
  • Identifies relative links.
  • Identifies domain-related emails.
  • Identifies directory indexing.
  • Detects references to URLs like 'file:', 'feed=', 'mailto:', 'javascript:' and others.
  • Use CTRL-C to stop the current crawler stage and continue working.
  • Identifies file extensions (zip, swf, sql, rar, etc.)
  • Downloads files to a directory:
    • Download every important file (images, documents, compressed files).
    • Or download specified file types.
    • Or download a predefined set of files (like 'document' files: .doc, .xls, .pdf, .odt, .gnumeric, etc.).
  • Limits the number of links to crawl (default: 5000 URLs).
  • Follows redirections using HTML and JavaScript Location tags and HTTP response codes.
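The regex-based link extraction described above can be sketched with the standard library alone; this is a simplified illustration, not crawler.py's actual pattern, and relative links are resolved against the page URL:

```python
# Sketch of regex link extraction over href/src attributes, plus
# resolution of relative links against the base page URL.
import re
from urllib.parse import urljoin

# Matches href="..." or src='...' case-insensitively (simplified pattern).
LINK_RE = re.compile(r"""(?:href|src)\s*=\s*["']([^"']+)["']""", re.IGNORECASE)

def extract_links(base_url, html):
    # Relative links like "/about" are resolved against base_url.
    return [urljoin(base_url, link) for link in LINK_RE.findall(html)]

html = '<a href="/about">About</a> <img src="logo.png">'
print(extract_links("http://example.com/index.html", html))
```

A production crawler would use a real HTML parser for correctness; a regex is shown here only because the original tool is described as regex-based.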

This extended edition has more features!

  • World-domination: You can automatically analyze the whole world! (if you have time)
  • Robin-hood: Although it is still in development, it will automatically send an email with the analysis information to the addresses found during the scan.
  • Robtex DNS: With this incredible function, every time you find a DNS server that allows Zone Transfer, it will retrieve other domains using that DNS server from the Robtex site and automatically analyze them too! This can be a never-ending test! Every vulnerable DNS server can be used by hundreds of domains, which in turn can be using other vulnerable DNS servers. BEWARE! Retrieved domains can be unrelated to the first one.
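Because the Robtex expansion can loop (domain leads to a vulnerable DNS server, which leads to more domains, which lead back), a worklist with a visited set and a cap keeps the walk bounded. A minimal sketch, with a hypothetical stub standing in for the real Robtex lookup:

```python
# Sketch of bounded breadth-first expansion over related domains.
from collections import deque

def related_domains_stub(domain):
    # Hypothetical stand-in for the Robtex lookup: returns other domains
    # that share a zone-transfer-vulnerable DNS server with `domain`.
    fake_graph = {
        "a.example": ["b.example", "c.example"],
        "b.example": ["a.example", "d.example"],  # note the cycle back to a.example
    }
    return fake_graph.get(domain, [])

def expand_domains(start, lookup=related_domains_stub, limit=100):
    # `seen` breaks cycles; `limit` keeps a never-ending test finite.
    seen, queue = {start}, deque([start])
    order = []
    while queue and len(order) < limit:
        domain = queue.popleft()
        order.append(domain)
        for nxt in lookup(domain):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

print(expand_domains("a.example"))
```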

Examples

  • Find 10 random domains in the .gov domain and analyze them fully (including web crawling). If it finds a Zone Transfer, retrieve more domains from Robtex using those DNS servers!

    domain_analyzer.py -d .gov -k 10 -b

  • (Very quick and dirty) Find everything related to the .edu.cn domain and store everything in directories. Do not search for active hosts, do not nmap scan them, do not reverse-DNS the netblock, and do not search for emails.

    domain_analyzer.py -d edu.cn -b -o -g -a -n

  • Analyze the 386.edu.ru domain fully

    domain_analyzer.py -d 386.edu.ru -b -o

  • (Pen tester mode) Analyze a domain fully. Do not find other domains. Print everything to a PDF file. Store everything on disk. When finished, open Zenmap and show me the topology of every host found at the same time!

    domain_analyzer.py -d amigos.net -o -e

  • (Quick, with web crawl only) Ignore everything with 'google' in it.

    domain_analyzer.py -d mil.cn -b -o -g -a -n -v google -x '-O --reason --webxml --traceroute -sS -sV -sC -PN -n -v -p 80,4443'

  • (Everything) Crawl up to 100 URLs of this site including subdomains. Store output into a file and download every INTERESTING file found to disk.

    crawler.py -u www.386.edu.ru -w -s -m 100 -f

  • (Quick and dirty) Crawl the site very quick. Do not download files. Store the output to a file.

    crawler.py -u www.386.edu.ru -w -m 20

  • (If you want to analyze metadata later with lafoca) Verbose mode prints which extensions are being downloaded. Download only the set of archives corresponding to Documents (.doc, .docx, .ppt, .xls, .odt, etc.)

    crawler.py -u ieeeexplore.ieee.org/otherfiles/ -d -v

Most of these features can be deactivated.

Docker Image


Domain analyzer now has a Docker image for the main version, on Python 3, with all dependencies already installed:

docker run --rm -ti verovaleros/domain_analyzer:latest /domain_analyzer/domain_analyzer.py -d <domain>

Docker for Domain Analyzer on Python 2.7

We have created a Docker image that can be used to run domain analyzer on Python 2.7, with all the dependencies already installed:

docker run --rm -it verovaleros/domain_analyzer:python2.7 /domain_analyzer/domain_analyzer.py -d <domain>

Screenshots

  1. Example of basic operation: domain_analyzer.py -d .gov -k 10 -b

History

Domain analyzer was born on Feb 4th, 2011. You can check the original repository on SourceForge.

Changelog

  • 0.8 We can check for hostnames read from an external file. Thanks to Gustavo Sorondo for the code! ([email protected])

Requests

If you have any questions, please send us an email! The addresses are in the Python files.

Installation

git clone https://github.com/eldraco/domain_analyzer.git
pip install -r requirements.txt

domain_analyzer's People

Contributors

eldraco, macbookandrew, mariarigaki, verovaleros


domain_analyzer's Issues

Can't connect to Google Web!

Hey,

first thanks for your great project!

My System:
Virtualbox recent Kali Rolling 64 Bit (Host: Win10 Pro 64Bit)

If I use the following command:
sudo domain_analyzer.py -d $target -o -e

I always get the following error for "Searching for mynaric.com. emails in Google":
"Can't connect to Google Web!"

What am I missing, or is this a general issue?

Cheers
Flo

Syntax Error: Missing parentheses in call to 'print'. Did you mean print(...)?

┌──(kali㉿kali)-[~/Tools/domain_analyzer]
└─$ ./domain_analyzer.py -d google.com
  File "/home/user/Tools/domain_analyzer/./domain_analyzer.py", line 262
    print "+----------------------------------------------------------------------+"
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?

When I check the domain_analyzer.py source code, none of the print statements have opening and closing parentheses.
I tried all the methods, even running it like
python3 domain_analyzer.py -d google.com
but it shows the same error!!!

Python-dnspython issues

Not running; it says "You need to install python-dnspython. apt-get install python-dnspython". I tried installing it with apt, but the package was not found. I downloaded and installed dnspython manually, but still the same thing.

Python 3.x incompatible use of print operator

383 instances of Python 3.x incompatible use of print operator

Fixing Python 3.x incompatible use of print operator issues

The 2to3 utility that ships with Python can automatically fix all print statements so that they are compatible with both Python 2 and Python 3.

At the root of your project, run 2to3 -f print . to see all the lines that need to be changed and then run 2to3 -f print -w . to actually write (-w) the changes into the files.

https://docs.python.org/2/library/2to3.html
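Besides running 2to3, adding a `__future__` import at the top of each file makes the converted `print(...)` calls behave identically under Python 2.7 and Python 3; this snippet is a generic illustration, not the project's actual fix:

```python
# On Python 2.7 this import turns `print` into a function, so the
# parenthesized form below works the same on both interpreters.
# On Python 3 the import is a harmless no-op.
from __future__ import print_function

banner = "+" + "-" * 70 + "+"
print(banner)
```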

What about a Disclaimer?

Have you considered adding a disclaimer?

E.g.:
This software is for educational use only; we are not responsible for any misuse of it.

Error

I am using Ubuntu, and this command is not working: domain_analyzer.py -d .gov -k 10 -b

Need to migrate to serapi google search

It's no longer possible to use urllib3 to query Google, as any query redirects to the consent page, which is not easy to bypass. The Google serapi API allows querying Google without issue.

Output in JSON format?

Hi there,

Is there an option to output the data in JSON format to a file or directly to stdout?

Thanks in advance.

domain_analyzer:python2.7 google search not working on given example

root@560519d1cc64:/domain_analyzer# python2.7 domain_analyzer.py -d .gov -k 10 -b
Domains should not start with a '.'. So I'm stripping it off. The domain I'm looking for now is: gov.
Finding 10 pseudo-random sub-domains to analyze in the gov. domain.

WARNING! Something prevent us from obtaining results from google. Try again the same command until it succeed. If it does > not work (because you use this feature many times) google could have blocked you for five minutes or so.

Migrate to googlesearch package.

[Just aesthetic] Create GIF in README.md that is more readable

Hey,
GIFs in READMEs are really cool, but this one has a lot of unused space on the right, making it a bit hard to read.
If you crop about 1/3 of the blank space at the right edge, the GIF will be better distributed on the screen, and the letters will become readable.

That's just for demonstration purposes, and doesn't affect the application itself (of course), which, by the way, is AWESOME!

ValueError: need more than 1 value to unpack

Just downloaded and ran domain_analyzer, playing around with different features, so far the info is great! Thanks for the dev work! Host is Kali Linux.

I did get the following error a couple of minutes into the run time of the following command:
root@--- # ./domain_analyzer.py -d .com

<>

Checking with nmap the reverse DNS hostnames of every /24 netblock using system default resolver...
<>

Searching for .com emails in google
<>

Checking 55 active hosts using nmap... (nmap -sn -n -v -PP -PM -PS80,25 -PA -PY -PU53,40125 -PE --reason -oA <output_directory>/nmap/.sn)
<>

---This is where script breaks and kicks out the following error---

<type 'exceptions.ValueError'>
('No closing quotation',)
No closing quotation
<type 'exceptions.ValueError'>
('need more than 1 value to unpack',)
need more than 1 value to unpack
Traceback (most recent call last):
  File "./domain_analyzer.py", line 2837, in <module>
    main()
  File "./domain_analyzer.py", line 2765, in main
    analyze_domain(unrelated_domain)
  File "./domain_analyzer.py", line 2256, in analyze_domain
    x, y = inst # getitem allows args to be unpacked directly
ValueError: need more than 1 value to unpack
root@--- #
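The crash above comes from unpacking an exception instance directly (`x, y = inst`), which only works when the exception carries exactly two args; a one-arg exception like `ValueError('No closing quotation')` breaks it. A sketch of the failure mode and one tolerant pattern (an illustration, not the project's actual fix):

```python
# Reproduce the failure: unpacking an exception with a single argument
# raises "need more than 1 value to unpack" on Python 2 (and a similar
# "not enough values to unpack" ValueError on Python 3).

def unpack_naive(inst):
    x, y = inst.args       # works only when len(inst.args) == 2
    return x, y

def unpack_safe(inst):
    # Tolerate any number of args by padding with None.
    args = list(inst.args) + [None, None]
    return args[0], args[1]

two_args = OSError(2, "No such file or directory")
assert unpack_naive(two_args) == (2, "No such file or directory")

one_arg = ValueError("No closing quotation")
try:
    unpack_naive(one_arg)
except ValueError as e:
    print("naive unpack failed:", e)

assert unpack_safe(one_arg) == ("No closing quotation", None)
```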

python-dnspython resolution

It looks like python-dnspython can't be resolved when trying to run this on macOS. Is this script limited to Ubuntu-flavored Linux distributions? Or is it any Linux? Or any Unix?

OSError, No such file or directory

Fresh install.
Ubuntu 16.04.3
Python 2.7.12

`user@user ~/D/domain_analyzer-master> sudo ./domain_analyzer.py -d website.net -o -e
+----------------------------------------------------------------------+

A lot of stuff

--------------End Summary --------------

<type 'exceptions.OSError'>
(2, 'No such file or directory')
[Errno 2] No such file or directory
x = 2
y = No such file or directory
`

info for mac users

Thanks for the analyzer. (-:

I downloaded the files, unzipped them, and opened the folder in a terminal. Then I ran the following command:
sudo find . -name \*.py -exec cp {} /usr/local/bin \;
It works!

IPv6 support?

Hi Sebastian,

thanks for your great tool. One issue: it looks like it uses IPv4 only. Some of my own name servers (which I tested with your tool) also provide IPv6. Hence, it would be great if your checks (ping, nmap, zone transfer, ...) also ran against the IPv6 addresses, not only the v4 ones.

Cheers,
Johannes
