
dirmap's Introduction

Dirmap


An advanced web directory and file scanner, designed to be more capable than DirBuster, Dirsearch, cansina and Yujian (御剑).


Requirements analysis

After extensive research, a good web directory scanner should offer at least the following capabilities:

  • a concurrency engine
  • dictionary-based scanning
  • pure brute-force scanning
  • crawling pages to generate dictionaries dynamically
  • fuzz scanning
  • custom requests
  • custom handling of responses...

With that in mind, here are Dirmap's features.

Features

  1. Concurrent scanning of n targets * n payloads
  2. Recursive scanning
  3. Custom status codes that trigger recursive scanning
  4. Single- or multi-dictionary scanning
  5. Brute forcing with a custom character set
  6. Crawler-based dynamic dictionary scanning
  7. Fuzzing target URLs with custom labels
  8. Custom request User-Agent
  9. Custom random request delay
  10. Custom request timeout
  11. Custom request proxy
  12. Custom regular expressions for matching fake 404 pages
  13. Custom response status codes to record
  14. Skipping pages of a given size
  15. Optional display of content-type
  16. Optional display of page size
  17. Results de-duplicated and saved per domain

Usage

Environment setup

git clone https://github.com/H4ckForJob/dirmap.git && cd dirmap && python3 -m pip install -r requirement.txt

Quick start

Specifying targets

Single target (http is assumed by default)

python3 dirmap.py -i https://target.com -lcf
python3 dirmap.py -i 192.168.1.1 -lcf

Subnet (CIDR notation)

python3 dirmap.py -i 192.168.1.0/24 -lcf

IP range

python3 dirmap.py -i 192.168.1.1-192.168.1.100 -lcf

Read targets from a file

python3 dirmap.py -iF targets.txt -lcf

targets.txt may contain any of the formats shown above, for example:
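
A hypothetical targets.txt mixing the supported input formats (all hosts below are placeholders) could look like this:

https://target.com
192.168.1.1
192.168.1.0/24
192.168.1.1-192.168.1.100

Each line is parsed independently, so single URLs, bare IPs, CIDR subnets and IP ranges can be combined in one file.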

Saving results

  1. Results are saved automatically to the output folder in the project root.
  2. One txt file is created per target, named after the target domain (<domain>.txt).
  3. Results are de-duplicated automatically, so there is no redundant output to worry about.

Advanced usage

Customize the dirmap configuration to explore its advanced features.

For now, detailed configuration is only available by loading a configuration file; it cannot be done through command-line arguments.

Edit dirmap.conf in the project root to change the configuration.

dirmap.conf configuration reference

#Recursive scan options
[RecursiveScan]
#Recursive scanning: off: 0; on: 1
conf.recursive_scan = 0
#Status codes that trigger recursive scanning. Default: [301,403]
conf.recursive_status_code = [301,403]
#Stop scanning a URL once it exceeds this length
conf.recursive_scan_max_url_length = 60
#These extensions are not scanned recursively
conf.recursive_blacklist_exts = ["html",'htm','shtml','png','jpg','webp','bmp','js','css','pdf','ini','mp3','mp4']
#Directories excluded from scanning. Default: empty. Other settings, e.g.: ['/test1','/test2']
#conf.exclude_subdirs = ['/test1','/test2']
conf.exclude_subdirs = ""

#Scan mode options (4 modes; only one may be selected at a time)
[ScanModeHandler]
#Dict mode: off: 0; single dict: 1; multiple dicts: 2
conf.dict_mode = 1
#Dictionary path for single-dict mode
conf.dict_mode_load_single_dict = "dict_mode_dict.txt"
#Dictionary directory for multi-dict mode. Default: dictmult
conf.dict_mode_load_mult_dict = "dictmult"
#Blast (pure brute-force) mode: off: 0; on: 1
conf.blast_mode = 0
#Minimum length of generated payloads. Default: 3
conf.blast_mode_min = 3
#Maximum length of generated payloads. Default: 3
conf.blast_mode_max = 3
#Default character set a-z. Not used yet.
conf.blast_mode_az = "abcdefghijklmnopqrstuvwxyz"
#Default character set 0-9. Not used yet.
conf.blast_mode_num = "0123456789"
#Custom character set. Default: "abc"; the dictionary is built from these characters
conf.blast_mode_custom_charset = "abc"
#Charset position to resume generation from. Default: empty.
conf.blast_mode_resume_charset = ""
#Crawl mode: off: 0; on: 1
conf.crawl_mode = 0
#Suffix dictionary used to generate dynamic sensitive-file payloads
conf.crawl_mode_dynamic_fuzz_suffix = "crawl_mode_suffix.txt"
#Parse robots.txt. Not implemented yet.
conf.crawl_mode_parse_robots = 0
#XPath expression used by the crawler to parse HTML pages
conf.crawl_mode_parse_html = "//*/@href | //*/@src | //form/@action"
#Whether to generate the crawler dictionary dynamically. Default: 1 (on). Other settings: off: 0; on: 1
conf.crawl_mode_dynamic_fuzz = 1
#Fuzz mode: off: 0; single dict: 1; multiple dicts: 2
conf.fuzz_mode = 0
#Dictionary path for single-dict fuzz mode.
conf.fuzz_mode_load_single_dict = "fuzz_mode_dir.txt"
#Dictionary directory for multi-dict fuzz mode. Default: fuzzmult
conf.fuzz_mode_load_mult_dict = "fuzzmult"
#Fuzz label. Default: {dir}. The {dir} label marks the dictionary insertion point: http://target.com/{dir}.php is expanded to http://target.com/<each line of the dictionary>.php. Other settings, e.g.: {dir}; {ext}
#conf.fuzz_mode_label = "{ext}"
conf.fuzz_mode_label = "{dir}"

#Payload processing options. Not implemented yet.
[PayloadHandler]

#Request options
[RequestHandler]
#Custom request headers. Default: empty. Other settings, e.g.: test1=test1,test2=test2
#conf.request_headers = "test1=test1,test2=test2"
conf.request_headers = ""
#Custom request User-Agent. Default: a Chrome UA.
conf.request_header_ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
#Custom request cookie. Default: empty (no cookie). Other settings, e.g.: cookie1=cookie1; cookie2=cookie2;
#conf.request_header_cookie = "cookie1=cookie1; cookie2=cookie2"
conf.request_header_cookie = ""
#Custom 401 authentication. Not implemented yet, since the custom request headers option already covers this need.
conf.request_header_401_auth = ""
#Custom request method. Default: get. Other settings, e.g.: get; head
#conf.request_method = "head"
conf.request_method = "get"
#Per-request timeout. Default: 3 seconds.
conf.request_timeout = 3
#Random delay of (0-x) seconds before each request. The value must be an integer. Default: 0 (no delay).
conf.request_delay = 0
#Number of request coroutines per target. Default: 30
conf.request_limit = 30
#Maximum number of retries. Not implemented yet.
conf.request_max_retries = 1
#Persistent connections (whether to use session()). Not implemented yet.
conf.request_persistent_connect = 0
#302 redirection. Default: False (do not follow). Other settings, e.g.: True; False
conf.redirection_302 = False
#Suffix appended to each payload. Default: empty (no suffix). Other settings, e.g.: txt; php; asp; jsp
#conf.file_extension = "txt"
conf.file_extension = ""

#Response options
[ResponseHandler]
#Response status codes to record. Default: [200]. Other settings, e.g.: [200,403,301]
#conf.response_status_code = [200,403,301]
conf.response_status_code = [200]
#Whether to record the content-type response header. Default: 1 (record)
#conf.response_header_content_type = 0
conf.response_header_content_type = 1
#Whether to record the page size. Default: 1 (record)
#conf.response_size = 0
conf.response_size = 1
#Whether to auto-detect 404 pages. Default: True (enabled). Other settings, e.g.: True; False
#conf.auto_check_404_page = False
conf.auto_check_404_page = True
#Custom regex for 503 pages. Not implemented yet; it seems unnecessary and may be dropped.
#conf.custom_503_page = "page 503"
conf.custom_503_page = ""
#Custom regular expression matched against page content
#conf.custom_response_page = "([0-9]){3}([a-z]){3}test"
conf.custom_response_page = ""
#Skip pages whose size is x. Set to "None" (the default) to skip nothing. Other sizes, e.g.: None; 0b; 1k; 1m
#conf.skip_size = "0b"
conf.skip_size = "None"

#Proxy options
[ProxyHandler]
#Proxy configuration. Default: None (no proxy). Other settings, e.g.: {"http":"http://127.0.0.1:8080","https":"https://127.0.0.1:8080"}
#conf.proxy_server = {"http":"http://127.0.0.1:8080","https":"https://127.0.0.1:8080"}
conf.proxy_server = None

#Debug options
[DebugMode]
#Print payloads and exit
conf.debug = 0

#Update options
[CheckUpdate]
#Fetch updates from GitHub. Not implemented yet.
conf.update = 0
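
To make the blast and fuzz options above more concrete, here is a minimal standalone Python sketch (not dirmap's actual code) of how a payload list can be derived from blast_mode_custom_charset together with blast_mode_min/blast_mode_max, and how the {dir} label marks the insertion point in fuzz mode:

import itertools

def blast_payloads(charset="abc", min_len=3, max_len=3):
    #Enumerate every combination of the charset between min_len and max_len characters,
    #mirroring blast_mode_custom_charset / blast_mode_min / blast_mode_max above.
    for length in range(min_len, max_len + 1):
        for combo in itertools.product(charset, repeat=length):
            yield "".join(combo)

def fuzz_urls(template, wordlist, label="{dir}"):
    #Replace the fuzz label with each dictionary entry, as described for fuzz_mode_label.
    return [template.replace(label, word) for word in wordlist]

print(list(blast_payloads())[:5])
#['aaa', 'aab', 'aac', 'aba', 'abb']
print(fuzz_urls("http://target.com/{dir}.php", ["admin", "backup"]))
#['http://target.com/admin.php', 'http://target.com/backup.php']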

TODO

  • Global initialization of command-line argument parsing
  • Engine initialization
    • Set the number of threads
  • Target initialization
    • Automatically parse the input format (-i, inputTarget)
      • IP
      • Domain
      • URL
      • IP/MASK
      • IP Start-End
    • Read targets from a file (-iF, inputLocalFile)
  • Bruter initialization
    • Configuration loading methods
      • Read values from command-line arguments
      • Read a configuration file (-lcf, loadConfigFile)
    • Recursive mode options (RecursiveScan)
      • Recursive scanning (-rs, recursive_scan)
      • Status codes that trigger recursion (-rd, recursive_status_code)
      • Exclude certain directories (-es, exclude_subdirs)
    • Scan mode options (ScanModeHandler)
      • Dict mode (-dm, dict_mode)
        • Load a single dictionary (-dmlsd, dict_mode_load_single_dict)
        • Load multiple dictionaries (-dmlmd, dict_mode_load_mult_dict)
      • Blast mode (-bm, blast_mode)
        • Payload length range (required)
          • Minimum length (-bmmin, blast_mode_min)
          • Maximum length (-bmmax, blast_mode_max)
        • Based on the default character sets
          • a-z
          • 0-9
        • Based on a custom character set (-bmcc, blast_mode_custom_charset)
        • Resume payload generation from a checkpoint (-bmrc, blast_mode_resume_charset)
      • Crawl mode (-cm, crawl_mode)
        • Custom parsing labels (-cmph, crawl_mode_parse_html) (a:href, img:src, form:action, script:src, iframe:src, div:src, frame:src, embed:src)
        • Parse robots.txt (-cmpr, crawl_mode_parse_robots)
        • Crawler-based dynamic fuzz scanning (-cmdf, crawl_mode_dynamic_fuzz)
      • Fuzz mode (-fm, fuzz_mode)
        • Fuzz with a single dictionary (-fmlsd, fuzz_mode_load_single_dict)
        • Fuzz with multiple dictionaries (-fmlmd, fuzz_mode_load_mult_dict)
        • Fuzz label (-fml, fuzz_mode_label)
    • Request tuning options (RequestHandler)
      • Custom request timeout (-rt, request_timeout)
      • Custom request delay (-rd, request_delay)
      • Limit the number of coroutines per target host (-rl, request_limit)
      • Limit the number of retries (-rmr, request_max_retries)
      • HTTP persistent connections (-rpc, request_persistent_connect)
      • Custom request method (-rm, request_method) (get, head)
      • 302 handling (-r3, redirection_302) (whether to follow redirects)
      • Custom headers
        • Custom additional headers (-rh, request_headers) (covers 401 authentication)
        • Custom UA (-rhua, request_header_ua)
        • Custom cookie (-rhc, request_header_cookie)
    • Payload processing options (PayloadHandler)
      • Payload modification: strip trailing slashes
      • Payload modification: prepend a slash
      • Payload modification: capitalize the first letter of each word
      • Payload modification: strip extensions
      • Payload modification: strip non-alphanumeric characters
    • Response handling module (ResponseHandler)
      • Skip files of size x bytes (-ss, skip_size)
      • Auto-detect 404 pages (-ac4p, auto_check_404_page)
      • Custom 503 page (-c5p, custom_503_page)
      • Custom regex matching of response content plus an action
        • Custom regex match on the response (-crp, custom_response_page)
        • The action itself (not yet defined)
      • Record only custom status codes in the output (-rsc, response_status_code)
      • Output payloads as full paths (full URLs are output by default)
      • Show content-type in the output
      • Automatically de-duplicate saved results
    • Status handling module (StatusHandler)
      • Status display (waiting, running, paused, error, finished)
      • Progress display
      • Status control (start, pause, resume, stop)
      • Resume module (not yet configured)
      • Resume from a breakpoint
      • Resume from a selected line
    • Scan log module (ScanLogHandler)
      • Scan log
      • Error log
    • Proxy module (ProxyHandler)
      • Single proxy (-ps, proxy_server)
      • Proxy pool
    • Debug mode options (DebugMode)
      • debug (--debug)
    • Check-update options (CheckUpdate)
      • update (--update)

Default dictionary files

The dictionary files are stored in the data folder in the project root.

  1. dict_mode_dict.txt: the "dict mode" dictionary, based on dirsearch's default dictionary
  2. crawl_mode_suffix.txt: the "crawl mode" suffix dictionary, based on FileSensor's default dictionary
  3. fuzz_mode_dir.txt: a "fuzz mode" dictionary, based on DirBuster's default dictionary
  4. fuzz_mode_ext.txt: a "fuzz mode" dictionary built from common file extensions
  5. dictmult: the default multi-dictionary folder for "dict mode", containing BAK.min.txt (small backup-file dictionary), BAK.txt (large backup-file dictionary) and LEAKS.txt (information-leak file dictionary)
  6. fuzzmult: the default multi-dictionary folder for "fuzz mode", containing fuzz_mode_dir.txt (default directory dictionary) and fuzz_mode_ext.txt (default extension dictionary)

Known issues

  1. "Crawl mode" only crawls the target's current page to generate the dynamic dictionary. The crawler module and dynamic dictionary generation will be split into separate components in the future.
  2. bar.log.start() at line 517 of bruter.py may raise an error. Fix: install progressbar2 and uninstall progressbar, so that a module with the same name is not imported. Thanks to the user who pointed this out.
Run:
python3 -m pip uninstall progressbar
python3 -m pip install progressbar2

Maintenance

  1. If you run into problems, feel free to open an issue.
  2. The project is actively maintained and new features will be added; see the unchecked items in the "TODO" list.

Acknowledgements

During development, dirmap drew on the patterns and ideas of many excellent open-source projects, which is gratefully acknowledged here.

Contact

mail: [email protected]

Donate

dirmap's People

Contributors: h4ckforjob, strawberrybiscuits


dirmap's Issues

bar.log.start() error in bruter.py

Traceback (most recent call last):
File "D:\CTF\Web\dirmap-master\lib\controller\engine.py", line 44, in scan
bruter(target)
File "D:\CTF\Web\dirmap-master\lib\controller\bruter.py", line 592, in bruter
bar.log.start(tasks.task_length)
TypeError: start() takes 1 positional argument but 2 were given

Cannot detect the .phps extension

Environment: Windows 10 + Python 3
Target: the "php2" challenge from the web advanced section of 攻防世界 (adworld)
Command: py dirmap.py -i http://111.198.29.45:42890/ -lcf
Problem: the site has an index.phps. After adding that entry to the dict_mode_dict dictionary, dirmap cannot find it, while Yujian (御剑) can. Accessing the page normally returns a 200, but dirmap just does not report it; I don't understand why.

Simplification suggestion

Since -lcf is passed every single time, why not load the configuration file by default?
A scan would then only need: python dirmap.py -i http://xxx.com
Forgive my OCD...

Runtime error

Location: \Git\BurstList\dirmap\lib\controller\bruter.py
All the required packages are installed. At first I thought my Python version was the problem: I started on 3.7.0, then tried 3.7.7 and 3.8.2, and neither worked. I've been at this all night and I'm at my wits' end; Baidu doesn't seem to have anyone with the same problem either. I hope someone can help.

Error:
Admin@PS2020UGTRNSJW MINGW64 ~/Desktop/Git/BurstList/dirmap (master)
$ python dirmap.py -i http://www.xsese.com/ -lcf
Traceback (most recent call last):
File "dirmap.py", line 14, in <module>
from gevent import monkey
File "D:\python3\lib\site-packages\gevent\__init__.py", line 86, in <module>
from gevent._hub_local import get_hub
File "D:\python3\lib\site-packages\gevent\_hub_local.py", line 101, in <module>
import_c_accel(globals(), 'gevent._hub_local')
File "D:\python3\lib\site-packages\gevent\_util.py", line 105, in import_c_accel
mod = importlib.import_module(cname)
File "D:\python3\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named 'gevent.__hub_local'

源代码:582行到591行
#设置进度条长度,若是递归模式或爬虫模式,则不设置任务队列长度,即无法显示进度,仅显示耗时
if not conf.recursive_scan:
#NOTE:这里取所有payloads的长度*target数量计算任务总数,修复issue#2
tasks.task_length = len(payloads.all_payloads)*conf.target_nums
bar.log.start(tasks.task_length)
#FIXME:循环任务数不能一次性取完所有的task,暂时采用每次执行30个任务。这样写还能解决hub.LoopExit的bug
while not tasks.all_task.empty():
all_task = [gevent.spawn(boss) for i in range(conf.request_limit)]
gevent.joinall(all_task)

Runtime error

My configuration is as follows:

#Recursive scan options
[RecursiveScan]
#recursive scan:Close:0;Open:1
conf.recursive_scan = 1
#Recursive scanning if these status codes
conf.recursive_status_code = [301,403]
#Exit the scan when the URL exceeds this length
conf.recursive_scan_max_url_length = 60
#These suffix names are not recursive
conf.recursive_blacklist_exts = ["html",'htm','shtml','png','jpg','webp','bmp','js','css','pdf','ini','mp3','mp4']
#The directory does not scan
#conf.exclude_subdirs = ['/test1','/test2']
conf.exclude_subdirs = ""

#Processing scan mode
[ScanModeHandler]
#Dict mode:Close :0;single dict:1;multiple dict:2
conf.dict_mode = 2
#Single dictionary file path
conf.dict_mode_load_single_dict = "dict_mode_dict.txt"
#Multiple dictionary file path
conf.dict_mode_load_mult_dict = "dictmult"
#Blast mode:tips:Use "conf.file_extension" options for suffixes
conf.blast_mode = 0
#Minimum length of character set
conf.blast_mode_min = 3
#Maximum length of character set
conf.blast_mode_max = 3
#The default character set:a-z
conf.blast_mode_az = "abcdefghijklmnopqrstuvwxyz"
#The default character set:0-9
conf.blast_mode_num = "0123456789"
#Custom character set
conf.blast_mode_custom_charset = "abc"
#Custom continue to generate blast dictionary location
conf.blast_mode_resume_charset = ""
#Crawl mode:Close :0;Open:1
conf.crawl_mode = 0
#Crawl mode dynamic fuzz suffix dict
conf.crawl_mode_dynamic_fuzz_suffix = "crawl_mode_suffix.txt"
#Parse robots.txt file
conf.crawl_mode_parse_robots = 0
#An xpath expression used by a crawler to parse an HTML document
conf.crawl_mode_parse_html = "//*/@href | //*/@src | //form/@action"
#Whether to turn on the dynamically generated payloads:close:0;open:1
conf.crawl_mode_dynamic_fuzz = 1
#Fuzz mode:Close :0;single dict:1;multiple dict:2
conf.fuzz_mode = 2
#Single dictionary file path.You can customize the dictionary path. The labels are just a flag for insert dict.
conf.fuzz_mode_load_single_dict = "fuzz_mode_dir.txt"
#Multiple dictionary file path
conf.fuzz_mode_load_mult_dict = "fuzzmult"
#Set the label of fuzz.e.g:{dir};{ext}
#conf.fuzz_mode_label = "{dir}"
conf.fuzz_mode_label = "{dir}"

#Processing payloads
[PayloadHandler]

#Processing requests
[RequestHandler]
#Custom request header.e.g:test1=test1,test2=test2
conf.request_headers = ""
#Custom request user-agent
conf.request_header_ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
#Custom request cookie.e.g:cookie1=cookie1; cookie2=cookie2;
conf.request_header_cookie = ""
#Custom 401 certification
conf.request_header_401_auth = ""
#Custom request methods (get, head)
conf.request_method = "get"
#Custom per request timeout in x sec.
conf.request_timeout = 3
#Custom per request delay random(0-x) secends.The parameter must be an integer.
conf.request_delay = 0
#Custom all request limit,default 30 coroutines
conf.request_limit = 30
#Custom request max retries
conf.request_max_retries = 1
#Whether to open an HTTP persistent connection
conf.request_persistent_connect = 0
#Whether to follow 302 redirection
conf.redirection_302 = False
#Payload add file extension
conf.file_extension = ""

#Processing responses
[ResponseHandler]
#Sets the response status code to record
conf.response_status_code = [200]
#Whether to record content-type
conf.response_header_content_type = 1
#Whether to record page size
conf.response_size = 1
#Auto check 404 page
conf.auto_check_404_page = True
#Custom 503 page regex
conf.custom_503_page = "page 503"
#Custom regular match response content
# conf.custom_response_page = "([0-9]){3}([a-z]){3}test"
conf.custom_response_page = ""
#Skip files of size x bytes.you must be set "None",if don't want to skip any file.e.g:None;0b;1k;1m
conf.skip_size = "None"

#Processing proxy
[ProxyHandler]
#proxy:e.g:{"http":"http://127.0.0.1:8080","https":"https://127.0.0.1:8080"}
#conf.proxy_server = {"http":"http://127.0.0.1:8080","https":"https://127.0.0.1:8080"}
conf.proxy_server = None

#Debug option
[DebugMode]
#Print payloads and exit the program
conf.debug = 0

#update option
[CheckUpdate]
#Get the latest code from github(Not yet available)
conf.update = 0

The error output is as follows:

[*] Use fuzz mode
[*] Use recursive scan: Yes                                                                                             
[*] Use fuzz mode
[*] Use recursive scan: Yes                                                                                             
[*] Use fuzz mode
[*] Use recursive scan: Yes                                                                                             
[*] Use fuzz mode
Traceback (most recent call last):
  File "/root/dirmap/lib/controller/engine.py", line 44, in scan
    bruter(target)
  File "/root/dirmap/lib/controller/bruter.py", line 573, in bruter
    for payload in payloads.all_payloads:
TypeError: 'NoneType' object is not iterable

[Crawl mode] requests.exceptions.ConnectionError when the scan target times out

[+] Load target: http://testphp.vulnweb.com
[+] Set the number of thread: 30
[+] Coroutine mode
[+] Current target: http://testphp.vulnweb.com/
[*] Launching auto check 404
[+] Checking with: http://testphp.vulnweb.com/eepsihlqlxihidaqrkujwqhpwiprvennpoqgombxfy
[*] Use recursive scan: No
[*] Use crawl mode
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 159, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 57, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/usr/local/lib/python3.7/dist-packages/gevent/_socketcommon.py", line 212, in getaddrinfo
addrlist = get_hub().resolver.getaddrinfo(host, port, family, type, proto, flags)
File "/usr/local/lib/python3.7/dist-packages/gevent/resolver/thread.py", line 65, in getaddrinfo
return self.pool.apply(_socket.getaddrinfo, args, kwargs)
File "/usr/local/lib/python3.7/dist-packages/gevent/pool.py", line 159, in apply
return self.spawn(func, *args, **kwds).get()
File "src/gevent/event.py", line 268, in gevent._event.AsyncResult.get
File "src/gevent/event.py", line 296, in gevent._event.AsyncResult.get
File "src/gevent/event.py", line 286, in gevent._event.AsyncResult.get
File "src/gevent/event.py", line 266, in gevent._event.AsyncResult._raise_exception
File "/usr/local/lib/python3.7/dist-packages/gevent/_compat.py", line 47, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.7/dist-packages/gevent/threadpool.py", line 281, in _worker
value = func(*args, **kwargs)
socket.gaierror: [Errno -2] Name or service not known

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 354, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.7/http/client.py", line 1229, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1275, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1224, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1016, in _send_output
self.send(msg)
File "/usr/lib/python3.7/http/client.py", line 956, in send
self.connect()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 181, in connect
conn = self._new_conn()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 168, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fbb2e5a1978>: Failed to establish a new connection: [Errno -2] Name or service not known

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 398, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='testphp.vulnweb.com', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fbb2e5a1978>: Failed to establish a new connection: [Errno -2] Name or service not known'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/dirmap/lib/controller/engine.py", line 44, in scan
bruter(target)
File "/root/dirmap/lib/controller/bruter.py", line 505, in bruter
payloads.all_payloads = scanModeHandler()
File "/root/dirmap/lib/controller/bruter.py", line 348, in scanModeHandler
response = requests.get(conf.url, headers=headers, timeout=5)
File "/usr/lib/python3/dist-packages/requests/api.py", line 75, in get
return request('get', url, params=params, **kwargs)
File "/usr/lib/python3/dist-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='testphp.vulnweb.com', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fbb2e5a1978>: Failed to establish a new connection: [Errno -2] Name or service not known'))

root@kali:~/dirmap#

Exception in bar.log.start(tasks.task_length) [resolved]

Traceback (most recent call last):
File "C:\Users\xxx\Desktop\dirmap-master\lib\controller\engine.py", line 44, in scan
bruter(target)
File "C:\Users\xxx\Desktop\dirmap-master\lib\controller\bruter.py", line 517, in bruter
bar.log.start(tasks.task_length)
TypeError: start() takes 1 positional argument but 2 were given

Crawl mode has stopped at 99%

Sometimes crawl mode stops at 99% at the end of the scan, or simply doesn't output the results when the scan completes.

Does anyone know why?

Batch scan errors out and exits when one domain times out

Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 426, in _make_request
six.raise_from(e, None)
File "", line 3, in raise_from
File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 421, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.6/http/client.py", line 1356, in getresponse
response.begin()
File "/usr/lib/python3.6/http/client.py", line 307, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.6/http/client.py", line 268, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.6/socket.py", line 586, in readinto
return self._sock.recv_into(b)
File "/usr/local/lib/python3.6/dist-packages/gevent/_socket3.py", line 502, in recv_into
self._wait(self._read_event)
File "src/gevent/_hub_primitives.py", line 317, in gevent.__hub_primitives.wait_on_socket
File "src/gevent/_hub_primitives.py", line 322, in gevent.__hub_primitives.wait_on_socket
File "src/gevent/_hub_primitives.py", line 313, in gevent.__hub_primitives._primitive_wait
File "src/gevent/_hub_primitives.py", line 314, in gevent.__hub_primitives._primitive_wait
File "src/gevent/_hub_primitives.py", line 46, in gevent.__hub_primitives.WaitOperationsGreenlet.wait
File "src/gevent/_hub_primitives.py", line 46, in gevent.__hub_primitives.WaitOperationsGreenlet.wait
File "src/gevent/_hub_primitives.py", line 55, in gevent.__hub_primitives.WaitOperationsGreenlet.wait
File "src/gevent/_waiter.py", line 151, in gevent.__waiter.Waiter.get
File "src/gevent/_greenlet_primitives.py", line 61, in gevent.__greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_greenlet_primitives.py", line 61, in gevent.__greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_greenlet_primitives.py", line 65, in gevent.__greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/__greenlet_primitives.pxd", line 35, in gevent.__greenlet_primitives._greenlet_switch
socket.timeout: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 725, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/usr/local/lib/python3.6/dist-packages/urllib3/util/retry.py", line 403, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/local/lib/python3.6/dist-packages/urllib3/packages/six.py", line 735, in reraise
raise value
File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 677, in urlopen
chunked=chunked,
File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 428, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 336, in _raise_timeout
self, url, "Read timed out. (read timeout=%s)" % timeout_value
urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='xxxl.com', port=80): Read timed out. (read timeout=3)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/dirmap/lib/controller/engine.py", line 44, in scan
bruter(target)
File "/root/dirmap/lib/controller/bruter.py", line 558, in bruter
payloads.all_payloads = scanModeHandler()
File "/root/dirmap/lib/controller/bruter.py", line 391, in scanModeHandler
response = requests.get(conf.url, headers=headers, timeout=conf.request_timeout, verify=False, allow_redirects=conf.redirection_302, proxies=conf.proxy_server)
File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/requests/adapters.py", line 529, in send
raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPConnectionPool(host='xxxl.com', port=80): Read timed out. (read timeout=3)
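
A possible workaround (an assumption on my part, not an official fix) is to guard the crawl-mode request in lib/controller/bruter.py so that a single unreachable or slow target is skipped instead of aborting the whole batch, along these lines:

#Hypothetical guard around the requests.get call in scanModeHandler (bruter.py);
#exact line numbers and surrounding code differ between versions.
try:
    response = requests.get(conf.url, headers=headers, timeout=conf.request_timeout,
                            verify=False, allow_redirects=conf.redirection_302,
                            proxies=conf.proxy_server)
except requests.exceptions.RequestException as e:
    print("[x] error:{}".format(e))
    return []  #skip this target so the engine can move on to the next one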

Errors

[x] error:'utf-8' codec can't decode byte 0xc3 in position 93: invalid continuation byte
[x] error:'utf-8' codec can't decode byte 0xc3 in position 93: invalid continuation byte
0% (196 of 311136030) | | Elapsed Time: 0:00:09 ETA: 187 days, 13:53:05[x] error:'utf-8' codec can't decode byte 0xc3 in position 93: invalid continuation byte
[x] error:'utf-8' codec can't decode byte 0xc3 in position 93: invalid continuation byte
[x] error:'utf-8' codec can't decode byte 0xc3 in position 93: invalid continuation byte
[x] error:'utf-8' codec can't decode byte 0xc3 in position 93: invalid continuation byte
[x] error:'utf-8' codec can't decode byte 0xc3 in position 93: invalid continuation byte
[x] error:'utf-8' codec can't decode byte 0xc3 in position 93: invalid continuation byte
[x] error:'utf-8' codec can't decode byte 0xc3 in position 93: invalid continuation byte
0% (203 of 311136030) | | Elapsed Time: 0:00:09 ETA: 166 days, 6:44:35[x] error:'utf-8' codec can't decode byte 0xc3 in position 93: invalid continuation byte
[x] error:'utf-8' codec can't decode byte 0xc3 in position 93: invalid continuation byte
...

Error when running crawl mode against an HTTPS target

The output is:
[*] Use recursive scan: No
[*] Use crawl mode
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 343, in _make_request
self._validate_conn(conn)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 841, in _validate_conn
conn.connect()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 364, in connect
_match_hostname(cert, self.assert_hostname or server_hostname)
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 374, in _match_hostname
match_hostname(cert, asserted_hostname)
File "/usr/lib/python3.7/ssl.py", line 327, in match_hostname
% (hostname, dnsnames[0]))
ssl.SSLCertVerificationError: ("hostname '168hs.com' doesn't match 'job.168hs.com'",)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 398, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='168hs.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError("hostname '168hs.com' doesn't match 'job.168hs.com'")))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/dirmap/lib/controller/engine.py", line 44, in scan
bruter(target)
File "/root/dirmap/lib/controller/bruter.py", line 505, in bruter
payloads.all_payloads = scanModeHandler()
File "/root/dirmap/lib/controller/bruter.py", line 348, in scanModeHandler
response = requests.get(conf.url, headers=headers, timeout=5)
File "/usr/lib/python3/dist-packages/requests/api.py", line 75, in get
return request('get', url, params=params, **kwargs)
File "/usr/lib/python3/dist-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='168hs.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError("hostname '168hs.com' doesn't match 'job.168hs.com'")))

root@kali:~/dirmap#

AttributeError: 'NoneType' object has no attribute 'xpath'

Could someone please help? With a project this big I can't realistically go through and change the code myself.
When execution reaches "Use crawl mode":
Traceback (most recent call last):
File "C:\me\tool\dirmap\lib\controller\engine.py", line 44, in scan
bruter(target)
File "C:\me\tool\dirmap\lib\controller\bruter.py", line 564, in bruter
payloads.all_payloads = scanModeHandler()
File "C:\me\tool\dirmap\lib\controller\bruter.py", line 407, in scanModeHandler
urls = html.xpath(conf.crawl_mode_parse_html)
AttributeError: 'NoneType' object has no attribute 'xpath'
This error comes up; any help is appreciated.

Thread limit issue & excessive memory usage without crawl mode configured

Thread limit issue: the configuration file says "#Custom all request limit,default 30 coroutines" and I set
conf.request_limit = 5
Running python3 dirmap.py -iF urls.txt -lcf with only a single URL in urls.txt, after 80 seconds it already reported 6000 requests scanned (|6000 Elapsed Time 0:01:20). How is that possible?
(I missed the screenshot window; dividing it out: 87563/24/60 = 59.57.)

Excessive memory usage without crawl mode: with roughly 100 targets * 5 payloads configured, memory usage climbed past 4 GB. What could be causing this? I hit the same problem in my own multithreaded code; sometimes it is a reverse DoS (a single response body over 3 GB), but sometimes I can't work out the reason.

High memory usage

I configured a basic scan and left it for an hour; when I came back, Python was using 9 GB of RAM and I had to kill the process.

In other scans it uses 1.5-2 GB on Mac OSX Catalina.

Any suggestions?

@tlhszc This is probably caused by the target site timing out. The crawler does not expose a timeout option and defaults to a 5-second timeout. Workaround:

#In dirmap\lib\controller\bruter.py, change the timeout parameter on line 348 to 30 seconds.
#response = requests.get(conf.url, headers=headers, timeout=5)
response = requests.get(conf.url, headers=headers, timeout=30)

I tested crawl mode against the site you mentioned, http://testphp.vulnweb.com, but there is still no output file in the output folder.

Runtime error

[*] Initialize targets...
[+] Load targets from: http://192.168.113.232/
[+] Set the number of thread: 30
[+] Coroutine mode
[+] Current target: http://192.168.113.232/
[*] Launching auto check 404
[+] Checking with: http://192.168.113.232/sqopyrqypkvnyynxpefbtiswjvcuupyacdbdfrukgl
[*] Use recursive scan: No
[*] Use dict mode
[+] Load dict:/Users/bufsnake/Web-Pentest/dirmap/data/dict_mode_dict.txt
[*] Use crawl mode
Traceback (most recent call last):
  File "/Users/bufsnake/Web-Pentest/dirmap/lib/controller/engine.py", line 44, in scan
    bruter(target)
  File "/Users/bufsnake/Web-Pentest/dirmap/lib/controller/bruter.py", line 586, in bruter
    bar.log.start(tasks.task_length)
TypeError: start() takes 1 positional argument but 2 were given

The configuration file was not modified.

Not friendly to Python beginners

Right out of the box it throws a pile of "module not found" errors, because many of the required packages have to be installed one by one. Not very practical.

Suggestion: add built-in 3xx handling

Please add built-in handling for 3xx status codes; Yujian (御剑) has this out of the box. Right now the status codes have to be configured one by one in the configuration file...
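
Until such an option exists, 3xx responses can at least be recorded by listing them in dirmap.conf, as documented in the ResponseHandler section above, e.g.:

conf.response_status_code = [200,301,302,403]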

TypeError: start() takes 1 positional argument but 2 were given

C:\Users\6128000055\Downloads\webScan\dirmap-master>python3 dirmap.py -i https://baidu.com -lcf

Traceback (most recent call last):
File "C:\Users\6128000055\Downloads\webScan\dirmap-master\lib\controller\engine.py", line 45, in scan
bruter(target)
File "C:\Users\6128000055\Downloads\webScan\dirmap-master\lib\controller\bruter.py", line 585, in bruter
bar.log.start(tasks.task_length)
TypeError: start() takes 1 positional argument but 2 were given

My coding skills aren't great; I spent a day going through the source without finding the cause. Hoping for an answer, thanks!

IndexError: string index out of range

dirmap.conf contains the following:

conf.blast_mode = 1
conf.blast_mode_custom_charset = ""

PS D:\工具\dirmap> python .\dirmap.py -i http://www.xxxx.com -t 60 -lcf

                 #####  # #####  #    #   ##   #####
                 #    # # #    # ##  ##  #  #  #    #
                 #    # # #    # # ## # #    # #    #
                 #    # # #####  #    # ###### #####
                 #    # # #   #  #    # #    # #
                 #####  # #    # #    # #    # #   v1.0

[*] Initialize targets...
[+] Load targets from: http://www.xxxx.com
[+] Set the number of thread: 60
[+] Coroutine mode
[+] Current target: http://www.xxxx.com/
[*] Launching auto check 404
[+] Checking with: http://www.xxxx.com/wjsnuaryagrdxsephppbmamqeiuacyeskmmvrfkjlb
[*] Use recursive scan: No
[*] Use blast mode
[*] Use char set:
[*] Use paylaod min length: 0
[*] Use paylaod max length: 6
Traceback (most recent call last):
File "D:\工具\dirmap\lib\controller\engine.py", line 44, in scan
bruter(target)
File "D:\工具\dirmap\lib\controller\bruter.py", line 558, in bruter
payloads.all_payloads = scanModeHandler()
File "D:\工具\dirmap\lib\controller\bruter.py", line 369, in scanModeHandler
payloadlists.extend(generateBlastDict())
File "D:\工具\dirmap\lib\controller\bruter.py", line 268, in generateBlastDict
generateLengthDict(length)
File "D:\工具\dirmap\lib\controller\bruter.py", line 296, in generateLengthDict
temp += conf.blast_mode_custom_charset[j]
IndexError: string index out of range

Please consider adding exception handling for this.
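
One way this could be guarded (a sketch, not the project's actual fix) is to validate the charset before the blast dictionary is generated, for example near the top of generateBlastDict (the function named in the traceback above):

#Hypothetical guard; conf.blast_mode_az is the unused default a-z charset from dirmap.conf.
if not conf.blast_mode_custom_charset:
    print("[x] blast_mode_custom_charset is empty, falling back to a-z")
    conf.blast_mode_custom_charset = conf.blast_mode_az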

[Recursive scan] conf.recursive_scan has no explicitly declared recursion exit condition

Command executed:
python dirmap.py -iU http://testphp.vulnweb.com/ -lcf
Configuration file:
#Recursive scan options
[RecursiveScan]
#recursive scan:Close:0;Open:1
conf.recursive_scan = 1
conf.recursive_status_code = [301,403]
#The directory does not scan
#conf.exclude_subdirs = ['/test1','/test2']
conf.exclude_subdirs = ""

#Processing scan mode
[ScanModeHandler]
#Dict mode:Close :0;single dict:1;multiple dict:2
conf.dict_mode = 1
#Single dictionary file path
conf.dict_mode_load_single_dict = "dict_mode_dict.txt"
#Multiple dictionary file path
conf.dict_mode_load_mult_dict = "dictmult"
#Blast mode:tips:Use "conf.file_extension" options for suffixes
conf.blast_mode = 0
#Minimum length of character set
conf.blast_mode_min = 3
#Maximum length of character set
conf.blast_mode_max = 3
#The default character set:a-z
conf.blast_mode_az = "abcdefghijklmnopqrstuvwxyz"
#The default character set:0-9
conf.blast_mode_num = "0123456789"
#Custom character set
conf.blast_mode_custom_charset = "abc"
#Custom continue to generate blast dictionary location
conf.blast_mode_resume_charset = ""
#Crawl mode:Close :0;Open:1
conf.crawl_mode = 1
#Parse robots.txt file
conf.crawl_mode_parse_robots = 0
#An xpath expression used by a crawler to parse an HTML document
conf.crawl_mode_parse_html = "//*/@href | //*/@src | //form/@action"
#Whether to turn on the dynamically generated payloads:close:0;open:1
conf.crawl_mode_dynamic_fuzz = 1
#Fuzz mode:Close :0;single dict:1;multiple dict:2
conf.fuzz_mode = 0
#Single dictionary file path.You can customize the dictionary path. The labels are just a flag for insert dict.
conf.fuzz_mode_load_single_dict = "fuzz_mode_dir.txt"
#Multiple dictionary file path
conf.fuzz_mode_load_mult_dict = "fuzzmult"
#Set the label of fuzz.e.g:{dir};{ext}
#conf.fuzz_mode_label = "{dir}"
conf.fuzz_mode_label = "{dir}"

#Processing payloads
[PayloadHandler]

#Processing requests
[RequestHandler]
#Custom request header.e.g:test1=test1,test2=test2
conf.request_headers = ""
#Custom request user-agent
conf.request_header_ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
#Custom request cookie.e.g:cookie1=cookie1; cookie2=cookie2;
conf.request_header_cookie = ""
#Custom 401 certification
conf.request_header_401_auth = ""
#Custom request methods (get, head)
conf.request_method = "get"
#Custom per request timeout in x sec.
conf.request_timeout = 3
#Custom per request delay random(0-x) secends.The parameter must be an integer.
conf.request_delay = 0
#Custom all request limit,default 30 coroutines
conf.request_limit = 30
#Custom request max retries
conf.request_max_retries = 1
#Whether to open an HTTP persistent connection
conf.request_persistent_connect = 0
#Whether to follow 302 redirection
conf.redirection_302 = False
#Payload add file extension
conf.file_extension = ""

#Processing responses
[ResponseHandler]
#Sets the response status code to record
conf.response_status_code = [200]
#Whether to record content-type
conf.response_header_content_type = 1
#Whether to record page size
conf.response_size = 1
#Custom 404 page regex
conf.custom_404_page = "fake 404"
#Custom 503 page regex
conf.custom_503_page = "page 503"
#Custom regular match response content
conf.custom_response_page = "([0-9]){3}([a-z]){3}test"
#Skip files of size x bytes.you must be set "None",if don't want to skip any file.e.g:None;0b;1k;1m
conf.skip_size = "None"

#Processing proxy
[ProxyHandler]
#proxy:e.g:{"http":"http://127.0.0.1:8080","https":"https://127.0.0.1:8080"}
#conf.proxy_server = {"http":"http://127.0.0.1:8080","https":"https://127.0.0.1:8080"}
conf.proxy_server = None

#Debug option
[DebugMode]
#Print payloads and exit the program
conf.debug = 0

#update option
[CheckUpdate]
#Get the latest code from github(Not yet available)
conf.update = 0

This seems like a perfectly normal run... could it be because I set conf.recursive_scan = 1? And then...

(screenshot)

bar.log.start() error in bruter.py

Traceback (most recent call last):
File "/root/dirmap/lib/controller/engine.py", line 44, in scan
bruter(target)
File "/root/dirmap/lib/controller/bruter.py", line 517, in bruter
bar.log.start(tasks.task_length)
TypeError: start() takes 1 positional argument but 2 were given

Require English Document Support

Hi

Can you add a wiki in English, please? Or maybe just a README if you can.

Also, is there a way to update, or do we have to reinstall a fresh release?

Thanks

Errors when scanning a large number of domains with -iF urls.txt -lcf

File "/dirmap/lib/controller/bruter.py", line 459, in boss
worker()
File "//dirmap/lib/controller/bruter.py", line 450, in worker
bar.log.update(tasks.task_count)
File "/python/lib/python3.7/site-packages/progressbar/bar.py", line 565, in update
% (value, self.min_value, self.max_value))
ValueError: Value 175699 is out of range, should be between 0 and 5715
2019-04-28T09:21:54Z <Greenlet at 0x10bac2ae8: boss> failed with ValueError
