REST-Attacker

REST-Attacker is an automated penetration testing framework for APIs that follow the REST architectural style. The tool focuses on streamlining the analysis of generic REST API implementations by fully automating the testing process, including test generation, access control handling, and report generation, with minimal configuration effort. Additionally, REST-Attacker is designed to be flexible and extensible, with support for both large-scale testing and fine-grained analysis.

REST-Attacker is maintained by the Chair of Network & Data Security at Ruhr University Bochum.

Features

REST-Attacker currently provides these features:

  • Automated generation of tests
    • Utilize an OpenAPI description to automatically generate test runs
  • 32 integrated security tests based on OWASP recommendations and scientific contributions
    • Built-in creation of security reports
  • Streamlined API communication
    • Custom request interface for the REST security use case (based on the Python3 requests module)
    • Communicate with any generic REST API
  • Handling of access control
    • Background authentication/authorization with API
    • Support for the most popular access control mechanisms: OAuth2, HTTP Basic Auth, API keys and more
  • Easy to use & extend
    • Usable as a standalone (CLI) tool or as a module
    • Adapt test runs to specific APIs with extensive configuration options
    • Create custom test cases or access control schemes with the tool's interfaces

Install

Get the tool by downloading or cloning the repository:

git clone https://github.com/RUB-NDS/REST-Attacker.git

You need Python 3.10 or later to run the tool.

You also need to install the required packages with pip:

python3 -m pip install -r requirements.txt

Quickstart

Here you can find a quick rundown of the most common and useful commands. More information on each command and on the available configuration options can be found in our usage guides.

Get the list of supported test cases:

python3 -m rest_attacker --list

Basic test run (with load-time test case generation):

python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate

Full test run (with load-time and runtime test case generation + rate limit handling):

python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate --propose --handle-limits

Test run with only selected test cases (here, tests are only generated for the test cases scopes.TestTokenRequestScopeOmit and resources.FindSecurityParameters):

python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate --test-cases scopes.TestTokenRequestScopeOmit resources.FindSecurityParameters

Rerun a test run from a report:

python3 -m rest_attacker <cfg-dir-or-openapi-file> --run /path/to/report.json

Documentation

Usage guides and configuration format documentation can be found in the documentation subfolders.

Troubleshooting

For fixes/mitigations for known problems with the tool, see the troubleshooting docs or the Issues section.

Contributing

Contributions of all kinds are appreciated! If you find a bug or want to make a suggestion or feature request, feel free to create a new issue in the issue tracker. You can also submit fixes or code amendments via a pull request.

Unfortunately, we can be very busy sometimes, so it may take a while before we respond to comments in this repository.

License

This project is licensed under GNU LGPLv3 or later (LGPL3+). See COPYING for the full license text and CONTRIBUTORS.md for the list of authors.

Issues

Browser GUI integration for Docker mode

Currently, the most robust method for automated authentication (Browser Authentication) does not work well when running the tool inside a Docker container. The reason is that this method requires access to the browser GUI for manual login at the targeted service. There are workarounds for getting a Firefox/Chrome GUI to work in Docker, but they are currently not consistent enough to serve as a stable solution.

As an alternative, we could run a full Ubuntu image (including a GUI) inside the Docker container and start the browser inside that GUI. We could then access the browser GUI by connecting to the Ubuntu client session over the network, e.g. using noVNC.

Use Python generators for test generation

In the current implementation, the test generation logic puts all generated tests into one big list and passes this list to the engine for execution. This is fine for smaller test runs, but it could result in a heavy memory footprint if runs contain a large number of tests or if individual TestCase objects are large. Generating the test cases with generators could improve this situation, since tests would be generated on-the-fly.

It should still be possible to generate all tests at once, so that a run can be saved to file and executed later. The test engine should accept a list of tests or a list of generators.

Implementing Python generators would probably involve the following tasks:

  • Redesign all existing TestCase.generate methods as generators (should be trivial)
  • Store generators and pass them to the engine
  • Test IDs need to be generated on-the-fly during test execution (a simple counter should be enough).
  • Test results need to be stored after test execution, so that we can destroy the test object after execution to save memory.
  • Tests should not be destroyed immediately, since the engine may want to execute a test a second time. This can happen if the engine detects that the rate limit of a service has been reached and the last X tests have to be redone.

You can use the Python module tracemalloc to monitor memory usage.
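
Below is a minimal sketch of the proposed generator-based flow, including a tracemalloc check. All names (TestCase, generate_tests, run_engine) are hypothetical stand-ins, not REST-Attacker's actual internals:

import itertools
import tracemalloc
from typing import Iterable, Iterator

class TestCase:
    # Hypothetical stand-in for a REST-Attacker test case.
    def __init__(self, test_id: int, endpoint: str):
        self.test_id = test_id
        self.endpoint = endpoint

    def run(self) -> dict:
        # Placeholder: a real test would send a request and evaluate the response.
        return {"test_id": self.test_id, "endpoint": self.endpoint, "issue": None}

def generate_tests(endpoints: Iterable[str]) -> Iterator[TestCase]:
    # Yield tests one at a time instead of building one big list.
    counter = itertools.count()  # test IDs are assigned on-the-fly
    for endpoint in endpoints:
        yield TestCase(next(counter), endpoint)

def run_engine(tests: Iterator[TestCase]) -> list[dict]:
    # Consume the generator; only the results are kept, not the test objects.
    results = []
    for test in tests:
        results.append(test.run())
        # The test object becomes garbage-collectable here, unless the
        # engine keeps it around to redo it after a rate-limit hit.
    return results

tracemalloc.start()
reports = run_engine(generate_tests(f"/api/v1/item/{i}" for i in range(10_000)))
current, peak = tracemalloc.get_traced_memory()
print(f"{len(reports)} results, current: {current} B, peak: {peak} B")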

Compare test results

TestCase should implement a compare() method that can be used to compare the results of two checks. Comparing results can be useful for the following scenarios:

  1. Reproducing a test run (e.g. for checking if a detected issue has been fixed)
  2. Diffing the results of two tests of the same type (e.g. for comparing the API's reaction to slightly different API requests)

compare() should be implemented as a classmethod that gets passed two report objects. It should then do the following:

  • Check for differences in the issue field.
  • Diff the value field. Complexity for this may vary depending on the test case, since there can be optional fields and nested values.
  • Return a comparison object (as a dict). The object should contain a flag that indicates whether the results are a match/mismatch and the created diff.
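
The sketch below shows one possible shape for this; the report structure (top-level issue and value fields in a dict) is an assumption derived from the description above:

class TestCase:
    @classmethod
    def compare(cls, report_a: dict, report_b: dict) -> dict:
        # Compare the results of two checks of the same test case type.
        diff = {}

        # Check for differences in the issue field.
        if report_a.get("issue") != report_b.get("issue"):
            diff["issue"] = (report_a.get("issue"), report_b.get("issue"))

        # Diff the value field. A flat comparison is shown here; nested or
        # optional fields would need a recursive diff per test case.
        value_a = report_a.get("value", {})
        value_b = report_b.get("value", {})
        for key in set(value_a) | set(value_b):
            if value_a.get(key) != value_b.get(key):
                diff.setdefault("value", {})[key] = (value_a.get(key), value_b.get(key))

        # Comparison object: match/mismatch flag plus the created diff.
        return {"match": not diff, "diff": diff}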

Pass Authorization Token/Cookie Headers

Not all of the test cases mentioned in the documentation are present in the generated report, e.g. the ones starting with scopes and related to OAuth.

Is there any way to check whether the information provided in info.json and auth.json is correct and was successfully used to run the appropriate test cases?

Can the dev environment be tested using an authorization token with cookies as headers? I need to run it like a simple curl command:

curl -X GET [API_LINK] -H "Authorization: Bearer [token]" --cookie "Name=Value"
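
For reference, the same request expressed with the Python requests module, which REST-Attacker's request interface is based on; the URL, token, and cookie values are placeholders:

import requests

response = requests.get(
    "https://api.example.com/resource",           # [API_LINK]
    headers={"Authorization": "Bearer <token>"},  # -H "Authorization: Bearer [token]"
    cookies={"Name": "Value"},                    # --cookie "Name=Value"
)
print(response.status_code)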

Dry runs

The tool should have an option for "dry runs", as found in similar tools. In a dry run, the tool would only execute the configuration stage and/or the test generation; test execution is skipped. This can be useful for determining whether a given test or service configuration is valid before it is let loose on the targeted REST API.

Dry run functionality could also be used to generate test runs and then save them to file, rather than executing them directly. Essentially, running a dry run with test generation should create a run configuration file that can be passed to the tool at a later time.

Implementing dry runs would probably involve the following tasks:

  • New CLI flag --dry-run
  • Skip engine.run() if a dry run is currently being executed
  • Output run configuration file if test generation is used (with the --generate flag). Test configs can be retrieved from generated tests with the serialize() function.
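
A rough, runnable sketch of this control flow is shown below. Only the --generate/--dry-run flags and serialize() come from the description above; DummyTest and the final loop are hypothetical stand-ins for the real test generation and engine:

import argparse
import json

class DummyTest:
    # Hypothetical stand-in for a generated test.
    def __init__(self, test_id: int):
        self.test_id = test_id

    def serialize(self) -> dict:
        # Mirrors the serialize() function mentioned in the last bullet.
        return {"test_id": self.test_id}

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--generate", action="store_true")
    parser.add_argument("--dry-run", action="store_true")
    args = parser.parse_args(argv)

    tests = [DummyTest(i) for i in range(3)] if args.generate else []

    if args.dry_run:
        # Skip test execution; write a run configuration file for later use.
        with open("run_config.json", "w") as out:
            json.dump([test.serialize() for test in tests], out, indent=2)
        return

    for test in tests:  # stand-in for engine.run(tests)
        print("running test", test.test_id)

if __name__ == "__main__":
    main()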
