syslifters / reptor
License: Other
Finding data holds values only and does not contain type definitions.
Most data types cannot be differentiated (like strings and enums).
reptor/api/models.py is too large and should be split into multiple files.
We should then also update the docs and add the new models there.
Test cases are currently not executed in the GitHub CI pipeline:
Run python -m unittest discover -v
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
python -m unittest discover -v
was replaced by make. However, the Makefile specifies the tests directory explicitly, which means that test cases of modules are no longer executed.
The test cases currently also fail due to failed imports (at least for me locally): ImportError: cannot import name 'Finding' from 'reptor.api.models'.
Adding __init__.py files fixes the problem.
During troubleshooting, I also realized that TypeAlias is only available from Python 3.10 and is therefore not compatible with Python 3.8.
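A common way to keep TypeAlias usable on older Pythons is a version-gated import. This is only a sketch of one possible fix; it assumes typing_extensions would be added as a dependency for Python < 3.10, and FindingId is an illustrative alias, not an existing name in reptor:

```python
import sys

# On Python 3.10+ TypeAlias lives in typing; on 3.8/3.9 it is only
# available from the typing_extensions backport.
if sys.version_info >= (3, 10):
    from typing import TypeAlias
else:
    from typing_extensions import TypeAlias

# Example alias as it might appear in reptor/api/models.py (illustrative):
FindingId: TypeAlias = str
```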
/cc @richardschwabe
It should be possible to query plugin config data interactively and store it in the user config file.
We need a Burp scanner plugin.
This requires access to a sample file.
I think it does not make sense to differentiate between core and community plugins in this repo.
A differentiation between Tools, Importers, Conf, etc. makes more sense.
There might be plugins that we regard as community:
There is an issue in the dependencies when installing the all extra:
Python version: 3.8
pip install .[all]
...SNIP...
ERROR: botocore 1.31.3 has requirement urllib3<1.27,>=1.25.4, but you'll have urllib3 2.0.3 which is incompatible.
Needs more investigation.
Currently, tool outputs are parsed, formatted and then uploaded to notes.
In the future, it should be possible to upload formatted data into findings or report fields in an existing project.
Let's assume we detected a list of weak ciphers using sslyze.
We could create a new template weak_ciphers.md
that lists the weak ciphers.
We would add a new finding to our report, e.g. containing a title, and a predefined description. This predefined description includes some static text and a placeholder where the list of weak ciphers should be added.
The placeholder could be an HTML comment with some information for humans and a yaml structure holding relevant metadata that might be required to properly add the data.
A description might look like:
We detected weak SSL configurations for your server.
<!-- This is a placeholder for reptor automizations.
---
plugin: sslyze
template: weak_ciphers
prepend_text: "The following weak ciphers were detected on your server:"
append_text: "Find more information at example.com."
-->
reptor should iterate through all report fields and finding fields looking for placeholders.
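As a sketch of how the placeholder scan could work (the regex, the function name, and the flat key-value parsing are my assumptions, not existing reptor code):

```python
import re

# Matches an HTML comment that embeds a YAML-like metadata section
# after a "---" line, as in the example finding description.
PLACEHOLDER_RE = re.compile(r"<!--.*?\n---\n(?P<meta>.*?)\n-->", re.DOTALL)

def find_placeholders(field_text):
    """Return the metadata dict of every placeholder found in a field.

    Minimal sketch: only flat 'key: value' lines are parsed, which is
    all the example placeholder needs.
    """
    results = []
    for match in PLACEHOLDER_RE.finditer(field_text):
        meta = {}
        for line in match.group("meta").splitlines():
            key, _, value = line.partition(":")
            if key.strip():
                meta[key.strip()] = value.strip().strip('"')
        results.append(meta)
    return results

description = """We detected weak SSL configurations for your server.
<!-- This is a placeholder for reptor automizations.
---
plugin: sslyze
template: weak_ciphers
prepend_text: "The following weak ciphers were detected on your server:"
append_text: "Find more information at example.com."
-->"""
```

The metadata then tells reptor which plugin and template produce the text that replaces the placeholder.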
Extra features
It might make sense to have something like a "secrets" manager. All importers will require an API key to connect to the given tool. Writing a secret often is a) annoying and b) bad practice as it might get logged in the console history etc.
However, the question then is how often reptor would be used in automated processes where this is a big requirement.
For normal tool plugins, nothing comes to mind why someone would need to provide an API key.
On the other hand, it might be unnecessary code and complexity added to the project.
But we could have something like a keys attribute for each plugin/importer/exporter along with the meta attribute.
We could then check if an environment variable exists, i.e.:
class Translate(Base):
    """ """
    meta = {
        "name": "Translate",
        "summary": "Translate Projects and Templates to other languages",
    }
    keys = [
        "token",
    ]
    ...SNIP...
In the background, something along the lines of:
return getenv(MODULENAME_KEYNAME, "")
The user must ensure there is an environment variable, in this case TRANSLATE_TOKEN.
And the user could access it in code via self.secrets.get("token").
This process could also help make sure that the user gets notified before the plugin is run. We could automatically check at the beginning if all requirements are met and the plugin does not run at all if these aren't.
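A minimal sketch of such a secrets lookup, including the pre-run check. The class and method names are my own; nothing here exists in reptor yet:

```python
import os

class Secrets:
    """Resolves a plugin's declared keys from environment variables
    named <PLUGINNAME>_<KEY>, e.g. TRANSLATE_TOKEN."""

    def __init__(self, plugin_name, keys):
        self.plugin_name = plugin_name
        self.keys = keys

    def get(self, key):
        return os.environ.get(f"{self.plugin_name.upper()}_{key.upper()}", "")

    def missing(self):
        # For the pre-run check: keys without an environment variable set.
        return [key for key in self.keys if not self.get(key)]

os.environ["TRANSLATE_TOKEN"] = "dummy-api-key"  # simulating the user's setup
secrets = Secrets("Translate", ["token"])
```

If secrets.missing() is non-empty, the plugin could refuse to run and tell the user which variables to set.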
There are a few errors when pip installing reptor that come from imports from Ghostwriter.
Depends on #9
other:
importers Show importers to use to import finding templates
plugins Allows plugin management & development
projects Queries Projects from reptor.api
templates Queries Finding Templates from reptor.api
translate Translate Projects and Templates to other languages
It is easier for users to override core templates if they already have the plugin/module files available in their home directories.
The question is if we want to move all plugins to the users' home directories (maybe only leave the plugins needed for conf and maybe project/note/SysReptor-related in the core project).
However, users should then not modify the files in place but rather copy them, because otherwise updating the plugins will be difficult.
Currently, we can format data and upload it into one note, or we can split it into multiple notes of the same level.
It would be nice to be able to create notes with more levels, e.g.
The method preprocess_for_template now returns a dictionary holding the context for Django template rendering.
We will introduce a model, holding details about structure and depth of the notes. This will also allow us to create parent notes with contents (which is currently not possible).
If a dict is returned, we will transparently transform the data structure to the model.
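To illustrate, the dict-to-model transformation could look roughly like this. NoteTemplate and its fields are assumptions for the sketch, not the model the issue will actually introduce:

```python
from dataclasses import dataclass, field

@dataclass
class NoteTemplate:
    """Illustrative note model: content plus arbitrarily nested children,
    which also allows parent notes to carry their own text."""
    title: str
    text: str = ""
    children: list = field(default_factory=list)

    @classmethod
    def from_dict(cls, data):
        # Transparently transform a plain dict (as returned by
        # preprocess_for_template) into the model.
        return cls(
            title=data["title"],
            text=data.get("text", ""),
            children=[cls.from_dict(child) for child in data.get("children", [])],
        )

root = NoteTemplate.from_dict({
    "title": "Hosts",
    "children": [{"title": "10.0.0.1", "text": "open port 80"}],
})
```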
Protocols currently mostly resolve circular import problems.
However, they require us to maintain a lot of extra code and redundant declarations of methods.
The protocol classes are also quite unique to their single implementations, so protocols do not add much extra value, from my point of view.
Could you add this line
deepl >= 1.15.0',
to the dependencies at
Lines 21 to 22 in 0184ea3
because it is missing:
➜ reptor -h
No .sysreptor folder found in home directory...Creating one
Traceback (most recent call last):
File "/tmp/.venv/lib/python3.11/site-packages/reptor/plugins/core/Translate/Translate.py", line 13, in <module>
import deepl
ModuleNotFoundError: No module named 'deepl'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/.venv/bin/reptor", line 8, in <module>
sys.exit(run())
^^^^^
File "/tmp/.venv/lib/python3.11/site-packages/reptor/__main__.py", line 7, in run
reptor.run()
File "/tmp/.venv/lib/python3.11/site-packages/reptor/lib/reptor.py", line 273, in run
self.plugin_manager.load_plugins()
File "/tmp/.venv/lib/python3.11/site-packages/reptor/lib/pluginmanager.py", line 154, in load_plugins
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/tmp/.venv/lib/python3.11/site-packages/reptor/plugins/core/Translate/Translate.py", line 15, in <module>
raise Exception("Make sure you have deepl installed.")
Exception: Make sure you have deepl installed.
mkdocs should be set up:
When I use a wrong parameter, I get this error instead of a help message:
➜ reptor importers -s a
'PluginDocs' object has no attribute 'short_help'
The readme should have instructions, an overview, etc., to get everyone started.
Edited:
I was planning to create a Ruby script to convert a Pwndoc finding export file into a SysReptor finding import file, in the same spirit as those on https://github.com/noraj/Pentest-collab-convert.
But before that, I saw there was an importers command in reptor. It seems that for now it has 0 importers available (2 planned: #6 #7) and the new importer command is just a mock:
reptor/reptor/plugins/core/Importers.py
Lines 65 to 66 in 0184ea3
So for now I plan to just go the Ruby script way, unless you have some advice for me.
Respect new structure
We are not yet fully consistent with this naming, but rather go with "module" so far.
I'd suggest using "plugin" because "module" might be ambiguous in the Python world.
Pentests often have parts that can easily be automated. Some tools could be automatically triggered, parsed, and added as issue to a report.
#41 would allow us to protocol commands and their outputs.
We could use this feature to implement workflows. A workflow is a definition of commands that should be executed.
---
upload: yes
parallel_execution: yes
commands:
  - sudo nmap -p 80 {target}
  - nuclei -t xyz- -u {target}
  - sslyze -u {target}
The workflow could be executed (reptor cmd --workflow wf.yaml); the tools run and upload their outputs to the current report.
(Regarding parallelization, we could also introduce stages that should run in parallel.)
---
upload: yes
parallel_execution: yes
commands:
  portscan:
    - sudo nmap -p 80 {target}
  attacks:
    - nuclei -t xyz- -u {target}
    - sslyze -u {target}
In the future, we could also take tool outputs from previous tools (like sslyze open ssl ports from nmap scan).
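A rough sketch of how such a runner could substitute {target} and execute the stages. The workflow dict and function name are illustrative, echo stands in for the real tools, and this version runs everything sequentially; parallel stages would need e.g. multiprocessing:

```python
import shlex
import subprocess

# Illustrative workflow definition; echo stands in for the real tools.
workflow = {
    "upload": True,
    "commands": {
        "portscan": ["echo scanned {target}"],
    },
}

def run_workflow(workflow, target):
    """Substitute {target} into each command and run the stages in order,
    collecting stdout so it could later be parsed and uploaded."""
    outputs = []
    for stage, commands in workflow["commands"].items():
        for command in commands:
            result = subprocess.run(
                shlex.split(command.format(target=target)),
                capture_output=True,
                text=True,
            )
            outputs.append(result.stdout.strip())
    return outputs
```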
Should we have something like this to update versions, together with new merges into the main branch?
As the title says, we should handle the CTRL+C signal.
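A minimal sketch of a handler. The exit code follows the common 128 + SIGINT(2) convention; where exactly this would be wired into reptor's entry point is an open question:

```python
import signal
import sys

def handle_sigint(signum, frame):
    """Exit cleanly instead of dumping a KeyboardInterrupt traceback."""
    print("Aborted by user.", file=sys.stderr)
    sys.exit(130)  # 128 + SIGINT(2), the conventional exit code

# Register the handler once at program start.
signal.signal(signal.SIGINT, handle_sigint)
```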
The nmap plugin should allow uploading multiple notes at once, one note per host.
What plugins should be included in alpha release?
We might want to put unfinished plugins into a separate branch.
Otherwise, plugins should have test cases (if feasible) and should be working.
We currently do not load plugins from the user's home directory.
We should do this.
We could introduce a plugin (e.g. reptor cmd, with an alias reptor c to make it shorter) that takes tool commands and executes them: reptor c sudo nmap -p 80
The plugin creates a data structure like:
---
cmd: sudo nmap -p 80
started: 2023-08-03T08:50:07+00:00
finished: 2023-08-03T08:55:07+00:00
exit_code: 0
stdout: open port 80
stderr: starting nmap...
This allows us to create a protocol of pentesting activities.
We could create a timeline from this and upload it to the notes. (If we add a plugin to our markdown renderer, we could even create a nice visual timeline: https://www.npmjs.com/package/hexo-tag-mdline)
It could also allow us to dynamically find out if there is a corresponding plugin that is able to process the output. The plugin could define a list of command names (cmds = ["nmap", "masscan"]) that is dynamically expanded (cmds = ["nmap", "masscan", "sudo nmap", "su -c nmap", "sudo masscan", "su -c masscan"]) to detect whether the tool output can be processed.
(It might also be possible to add some conditionals, like: if the command contains "-oX", it must use XML parsing; or we iterate through all possible parsing algorithms.)
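The expansion itself could be as simple as the following sketch. The prefix list and function name are assumptions; a real list might include more escalation wrappers:

```python
# Illustrative privilege-escalation prefixes; a real list might be longer.
PREFIXES = ["sudo", "su -c"]

def expand_cmds(cmds):
    """Expand plugin-declared command names with common prefixes so a
    matching tool output can still be detected."""
    return cmds + [f"{prefix} {cmd}" for cmd in cmds for prefix in PREFIXES]
```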
We could add an option so that the upload is done right after execution (e.g. reptor c --upload sudo nmap -p 80).
If this was not specified, the user could upload later (e.g. reptor nmap --upload --cmd). The cmd switch defines that the input should be taken from the cmd outputs. This takes the cmd output that matches the command with the newest "started" timestamp and a valid "finished" timestamp. If users want to use a different output, they must specify a number (e.g. --cmd 1 for the second to last run).
The approach of using the docstrings of a plugin both to show information within the reptor CLI usage and in the documentation is not a clean one.
My current suggestion is to move away from this and follow a more static approach via a meta attribute in the Base classes of plugins, importers and exporters.
I have already implemented the change, but need to finish moving over all other plugins & importers.
If we run into errors that might not be recoverable, we currently call fail_with_exit.
However, if in the future we want to provide some functionalities (like API classes) as libraries (e.g. for importing into other tools), this method is not very nice.
Shouldn't we instead work with exceptions?
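One possible direction, raising a library-level exception and converting it to an exit code only at the CLI boundary. ReptorError and cli_main are illustrative names, not existing reptor code:

```python
import sys

class ReptorError(Exception):
    """Base class for unrecoverable reptor errors (illustrative)."""

def cli_main(func):
    """Convert library exceptions to exit codes only at the CLI boundary."""
    try:
        func()
    except ReptorError as error:
        print(f"Error: {error}", file=sys.stderr)
        return 1  # what fail_with_exit does today, but catchable by libraries
    return 0
```

Library consumers (e.g. tools importing the API classes) would catch ReptorError themselves instead of having their process terminated.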