reptor's Issues

Create finding data model

Finding data holds values only and does not contain type definitions.
Most data types cannot be differentiated (like strings and enums).

  • Therefore, we need a model that joins finding data values from an actual report with the project design's field definitions (see the sketch below).
  • It should also prevent invalid values (like non-existing enum values) from being set.
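A rough sketch of what such a model could look like (the class and attribute names are hypothetical, not existing reptor code):

class FieldDefinition:
    """Field definition taken from the project design."""

    def __init__(self, name, field_type, choices=None):
        self.name = name
        self.field_type = field_type  # e.g. "string", "enum", "markdown"
        self.choices = choices or []  # allowed values for enum fields


class FindingField:
    """Joins a finding data value with its project design field definition."""

    def __init__(self, definition, value):
        self.definition = definition
        self.value = value

    def validate(self):
        # Reject values that the field definition does not allow,
        # e.g. a non-existing enum value.
        if self.definition.field_type == "enum" and self.value not in self.definition.choices:
            raise ValueError(
                f"'{self.value}' is not a valid value for enum field '{self.definition.name}'"
            )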

Split models.py

reptor/api/models.py is too large and should be split into multiple files.

Then, we should also update the docs and add the new models there.

Test cases not executed

Test cases are currently not executed in the GitHub CI pipeline:

Run python -m unittest discover -v

----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK

python -m unittest discover -v was replaced by make.
However, the Makefile specifies the tests directory explicitly. This means that test cases of modules are no longer executed.

The test cases currently also fail due to failed imports (at least locally for me): ImportError: cannot import name 'Finding' from 'reptor.api.models'.

Adding __init__.py files fixes the problem.

During troubleshooting, I also realized that TypeAlias is only available from Python 3.10 and is therefore not compatible with 3.8.

/cc @richardschwabe

Burp Plugin

We need a Burp Scanner plugin.

Requires access to a sample file

Restructure plugins

I think it does not make sense to differentiate between core and community in this repo.

A differentiation between Tools, Importers, Conf, etc. makes more sense.

There might be plugins that we regard as community:

  • Edge cases not suitable for most users
  • Plugins with license issues (e.g. pdf2docx is GPL-licensed; we would not want to distribute it in this repo)

Dependencies: botocore

There is an issue in the dependencies when installing all extras:

Python Version: 3.8

pip install .[all]
...SNIP...
ERROR: botocore 1.31.3 has requirement urllib3<1.27,>=1.25.4, but you'll have urllib3 2.0.3 which is incompatible.

Needs more investigation.

Tool outputs to findings

Currently, tool outputs are parsed, formatted and then uploaded to notes.

In the future, it should be possible to upload formatted data into findings or report fields in an existing project.

Let's assume we detected a list of weak ciphers using sslyze.

  • The parsing process remains the same.
  • Formatting can be done by the same or dedicated templates.
  • Upload process will be different and more complex.

We could create a new template weak_ciphers.md that lists the weak ciphers.

We would add a new finding to our report, e.g. containing a title, and a predefined description. This predefined description includes some static text and a placeholder where the list of weak ciphers should be added.

The placeholder could be an HTML comment with some information for humans and a YAML structure holding the metadata required to properly add the data.

A description might look like:

We detected weak SSL configurations for your server.

<!-- This is a placeholder for reptor automizations.
---
plugin: sslyze
template: weak_ciphers
prepend_text: "The following weak ciphers were detected on your server:"
append_text: "Find more information at example.com."
-->

reptor should iterate through all report fields and finding fields looking for placeholders.
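A rough sketch of how detecting and filling these placeholders could work (the function names and the render_template callable are assumptions, not existing reptor code; assumes PyYAML is available):

import re
import yaml

# Matches an HTML comment containing a YAML block introduced by "---".
PLACEHOLDER_RE = re.compile(r"<!--.*?---\n(?P<yaml>.*?)-->", re.DOTALL)

def fill_placeholders(field_text, plugin_name, render_template):
    """Replace reptor placeholders in a report or finding field with rendered tool output.

    render_template is assumed to be a callable that renders the referenced
    template (e.g. weak_ciphers) with the parsed tool data and returns markdown.
    """
    def _replace(match):
        meta = yaml.safe_load(match.group("yaml"))
        if meta.get("plugin") != plugin_name:
            return match.group(0)  # placeholder belongs to another plugin, keep it
        rendered = render_template(meta["template"])
        return "\n".join(
            filter(None, [meta.get("prepend_text"), rendered, meta.get("append_text")])
        )

    return PLACEHOLDER_RE.sub(_replace, field_text)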

Extra features

  • We might want to add conditions to the YAML, like: add only if some data value is True
  • Not only existing findings and fields should be searched for placeholders, but new findings could be created from existing templates.
    • This could be done by adding tags to the finding templates (e.g. "sslyze:weak_ciphers")

Key Management for Plugins

It might make sense to have something like a "secrets" manager. All importers will require an API key to connect to the given tool. Typing a secret is often a) annoying and b) bad practice, as it might end up in the console history, get logged, etc.

The question, however, is how often reptor would be used in automated processes where this is a big requirement.

For normal tool plugins, I cannot think of a reason why someone would need to provide an API key.

On the other hand, it might be unnecessary code and complexity added to the project.

But we could have something like a keys attribute for each plugin/importer/exporter along with the meta attribute.

We could then check if an environment variable exists.

e.g.:

class Translate(Base):
    meta = {
        "name": "Translate",
        "summary": "Translate Projects and Templates to other languages",
    }

    keys = [
        "token",
    ]

...SNIP...

In the background, something along the lines of:

return os.getenv(f"{module_name}_{key_name}".upper(), "")

The user must then ensure that a matching environment variable exists, in this case TRANSLATE_TOKEN.

And the user could access it in code via

self.secrets.get("token")

This process could also help make sure that the user gets notified before the plugin is run: we could automatically check at the beginning whether all requirements are met and not run the plugin at all if they aren't.
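A minimal sketch of how this could be wired into a plugin base class (the Secrets helper, the SecretNotSet exception and the attribute names are assumptions, not existing reptor code):

import os

class SecretNotSet(Exception):
    """Raised when a required secret is not available as an environment variable."""

class Secrets:
    def __init__(self, module_name, keys):
        self._module_name = module_name
        self._keys = keys

    def check(self):
        # Fail early, before the plugin runs, if a required secret is missing.
        for key in self._keys:
            if not self.get(key):
                raise SecretNotSet(f"Set {self._env_name(key)} to run this plugin")

    def get(self, key):
        return os.getenv(self._env_name(key), "")

    def _env_name(self, key):
        return f"{self._module_name}_{key}".upper()  # e.g. TRANSLATE_TOKEN

Calling check() before running the plugin would cover the early notification mentioned above.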

Duplicate plugins in help message

  other:
  importers             Show importers to use to import finding templates
  plugins               Allows plugin management & development
  projects              Queries Projects from reptor.api
  templates             Queries Finding Templates from reptor.api
  translate             Translate Projects and Templates to other languages
  importers             Show importers to use to import finding templates
  plugins               Allows plugin management & development
  projects              Queries Projects from reptor.api
  templates             Queries Finding Templates from reptor.api

Discussion: Plugins to separate project?

It is easier for users to override core templates if they already have the plugin/module files available in their home directories.

The question is whether we want to move all plugins to the users' home directories (maybe only leaving the plugins needed for conf and the project/note/SysReptor-related ones in the core project).

However, users should then not modify the file contents but rather copy the files, because otherwise updating the plugins will be difficult.

Allow splitting notes into more levels

Currently, we can format data and upload it into one note, or we can split it into multiple notes of the same level.

It would be nice to be able to create notes with more levels, e.g.

  • target 1
    • vuln 1
    • vuln 2
  • target 2
    • vuln 3
    • vuln 3
      • details 1

The method preprocess_for_template now returns a dictionary holding the context for Django template rendering.

We will introduce a model holding details about the structure and depth of the notes. This will also allow us to create parent notes with content (which is currently not possible).

If a dict is returned, we will transparently transform the data structure to the model.
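A minimal sketch of such a model and the transparent dict-to-model conversion (the NoteTemplate name and its fields are hypothetical, not existing reptor code):

from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class NoteTemplate:
    title: str
    text: str = ""
    children: List["NoteTemplate"] = field(default_factory=list)

def to_note_template(data: Union[dict, NoteTemplate], title: str = "") -> NoteTemplate:
    """Convert the dict returned by preprocess_for_template into the note model.

    Nested dicts become child notes; everything else becomes the note text.
    """
    if isinstance(data, NoteTemplate):
        return data
    note = NoteTemplate(title=title)
    for key, value in data.items():
        if isinstance(value, dict):
            note.children.append(to_note_template(value, title=key))
        else:
            note.text += f"{value}\n"
    return note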

Remove Protocols

Protocols currently mostly resolve circular import problems.

However, they require us to maintain a lot of extra code and redundant method declarations.
The protocol classes are also quite unique, so protocols do not add much extra value from my point of view.

deepl dependency missing

Could you add this line

'deepl >= 1.15.0',

to

reptor/pyproject.toml

Lines 21 to 22 in 0184ea3

'charset-normalizer >= 3.0.0',
'Django >= 4.2',

because it is missing

➜ reptor -h
No .sysreptor folder found in home directory...Creating one
Traceback (most recent call last):
  File "/tmp/.venv/lib/python3.11/site-packages/reptor/plugins/core/Translate/Translate.py", line 13, in <module>
    import deepl
ModuleNotFoundError: No module named 'deepl'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/.venv/bin/reptor", line 8, in <module>
    sys.exit(run())
             ^^^^^
  File "/tmp/.venv/lib/python3.11/site-packages/reptor/__main__.py", line 7, in run
    reptor.run()
  File "/tmp/.venv/lib/python3.11/site-packages/reptor/lib/reptor.py", line 273, in run
    self.plugin_manager.load_plugins()
  File "/tmp/.venv/lib/python3.11/site-packages/reptor/lib/pluginmanager.py", line 154, in load_plugins
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/tmp/.venv/lib/python3.11/site-packages/reptor/plugins/core/Translate/Translate.py", line 15, in <module>
    raise Exception("Make sure you have deepl installed.")
Exception: Make sure you have deepl installed.

Working Documentation

The mkdocs documentation should be set up, covering:

  • Installation & Usage
  • Core Plugins
  • How to get started writing Plugins

Update Readme

The readme should have instructions, an overview, etc., to get everyone started.

Edited:

  • Sysreptor References
  • Documentation Links
  • Badges
  • Description
  • Screencasts of reptor in action

Pwndoc importer

I was planning to create a Ruby script to convert Pwndoc finding export files into SysReptor finding import files, in the same spirit as the ones on https://github.com/noraj/Pentest-collab-convert.

But before doing that, I saw that there is an importers command in reptor. It seems that for now it has 0 importers available (2 planned: #6 #7) and that the new importer command is just a mock:

def _create_new_importer(self):
...

So for now I plan to just go the Ruby script way, unless you have some advice for me.

Discussion: Naming "module" vs "plugin"

We are not yet fully consistent with this naming; so far we have mostly used "module".

I'd suggest using "plugin" because "module" is ambiguous in the Python world.

Tool workflows for automating pentests

Pentests often have parts that can easily be automated. Some tools could be automatically triggered, their output parsed, and added as issues to a report.

#41 would allow us to record commands and their outputs.

We could use this feature to implement workflows. A workflow is a definition of commands that should be executed.

---
upload: yes
parallel_execution: yes
commands:
  - sudo nmap -p 80 {target}
  - nuclei -t xyz- -u {target}
  - sslyze -u {target}

The workflow could be executed (reptor cmd --workflow wf.yaml); the tools would run and upload their outputs to the current report.

(Regarding parallelization, we could also introduce stages that should run in parallel.)

---
upload: yes
parallel_execution: yes
commands:
  portscan:
    - sudo nmap -p 80 {target}
  attacks:
    - nuclei -t xyz- -u {target}
    - sslyze -u {target}

In the future, we could also feed outputs from previous tools into later ones (e.g. sslyze scanning the open SSL/TLS ports found by the nmap scan).
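A minimal sketch of how a workflow file in the first (flat) format could be executed (assumes PyYAML; the actual upload to SysReptor is only indicated by a placeholder comment):

import subprocess
import yaml
from concurrent.futures import ThreadPoolExecutor

def run_workflow(path, target):
    with open(path) as f:
        workflow = yaml.safe_load(f)

    commands = [cmd.format(target=target) for cmd in workflow["commands"]]

    def run(cmd):
        # shell=True because workflow entries are full shell command lines
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if workflow.get("upload"):
            pass  # here result.stdout would be handed to the matching reptor plugin
        return cmd, result.returncode

    if workflow.get("parallel_execution"):
        with ThreadPoolExecutor() as pool:
            return list(pool.map(run, commands))
    return [run(cmd) for cmd in commands]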

Prepare for pre-alpha access

What plugins should be included in alpha release?

We might want to put unfinished plugins into a separate branch.

Otherwise, plugins should have test cases (if feasible) and should be working.

  • conf
  • file
  • note
  • nmap
  • nikto
  • owaspzap
  • simplelist --> needs to be migrated / has no template. Needs a dedicated plugin category. Would remove it for the moment.
  • sslyze
  • importers
  • plugins
  • projects
  • templates
  • translate
  • ghostwriter --> (test cases could be added)

We currently do not load plugins from the user's home directory.
We should do this.

Logging Changes

  • Remove default logging to file
  • Change file logging to a single file
  • "log_file":true in config.yml

Command plugin for saving commands, inputs, outputs, etc

We could introduce a plugin (e.g. reptor cmd, with the alias reptor c to make it shorter) that takes tool commands and executes them: reptor c sudo nmap -p 80

The plugin creates a data structure like...

---
cmd: sudo nmap -p 80
started: 2023-08-03T08:50:07+00:00
finished: 2023-08-03T08:55:07+00:00
exit_code: 0
stdout: open port 80
stderr: starting nmap...

This allows us to create a protocol of pentesting activities.
We could create a timeline from this and upload it to the notes. (If we add a plugin to our markdown renderer, we could even create a nice visual timeline: https://www.npmjs.com/package/hexo-tag-mdline)

It could also allow us to dynamically find out whether there is a corresponding plugin that is able to process the output. The plugin could define a list of command names (cmds = ["nmap", "masscan"]) that is dynamically expanded (cmds = ["nmap", "masscan", "sudo nmap", "su -c nmap", "sudo masscan", "su -c masscan"]) to detect whether the tool output can be processed.

(It might also be possible to add some conditionals, e.g. if the command contains "-oX", it must use XML parsing; or we iterate through all possible parsing algorithms.)

We could add an option so that the upload is done right after execution (e.g. reptor c --upload sudo nmap -p 80).

If this was not specified, the user could upload later (e.g. reptor nmap --upload --cmd). The --cmd switch defines that the input should be taken from the cmd outputs. This takes the cmd output that matches the command with the newest "started" timestamp and a valid "finished" timestamp. If the user wants to use a different output, they must specify a number (e.g. --cmd 1 for the second-to-last run).
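A minimal sketch of how such a record could be produced (the function name is hypothetical; storing the record and uploading it to notes are not shown):

import subprocess
from datetime import datetime, timezone

def run_and_record(cmd):
    """Run a shell command and return a record of its execution."""
    started = datetime.now(timezone.utc)
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    finished = datetime.now(timezone.utc)
    return {
        "cmd": cmd,
        "started": started.isoformat(timespec="seconds"),
        "finished": finished.isoformat(timespec="seconds"),
        "exit_code": result.returncode,
        "stdout": result.stdout,
        "stderr": result.stderr,
    }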

Rework the Meta Data of Plugins

The approach of using a plugin's docstrings to show information both in the reptor CLI usage output and in the documentation is not a clean one.

My current suggestion is to move away from this and follow a more static approach via a meta attribute in the Base classes of plugins, importers and exporters.

I have already implemented the change, but need to finish moving over all other plugins & importers.

Potential problem with fail_with_exit

If we run into errors that might not be recoverable, we now call fail_with_exit.

However, if we want to provide some functionality (like the API classes) as a library in the future (e.g. for importing into other tools), this approach is not very nice.

Shouldn't we instead work with exceptions?
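A minimal sketch of the exception-based alternative (the exception name and the example functions are assumptions, not existing reptor code):

class ReptorError(Exception):
    """Base exception raised by library code instead of terminating the process."""


def load_project(project_id):
    # Library code raises instead of calling fail_with_exit()
    if not project_id:
        raise ReptorError("No project ID configured")
    return {"id": project_id}


def run():
    # Only the CLI entry point decides to exit the process
    try:
        load_project("")
    except ReptorError as e:
        print(e)
        raise SystemExit(1)

This keeps the current behavior for CLI users while letting other tools that import the API classes catch ReptorError themselves.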
