
nerve's Introduction

Join the project community on our server!


nerve


Nerve is a tool that creates stateful agents with any LLM, without writing a single line of code. Agents created with Nerve are capable of both planning and enacting step-by-step whatever actions are required to complete a user-defined task. This is done by dynamically updating the system prompt with new information gathered during previous actions, making the agent stateful across multiple inferences.

  • Automated Problem Solving: Nerve provides a standard library of actions the agent uses autonomously to inform and enhance its performance. These include identifying specific goals required to complete the task, devising and revising a plan to achieve those goals, and creating and recalling memories composed of pertinent information gleaned during previous actions.
  • User-Defined Agents: Agents are defined using a standard YAML template (a minimal sketch follows this list). The sky is the limit! You can define an agent for any task you desire: check out the existing examples for inspiration.
  • Works with any LLM: Nerve is an LLM-agnostic tool.
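
As a preview of the format (the full ssh_agent walkthrough is in the Example section below), here is a minimal, hypothetical tasklet sketch; it only uses fields that appear in that example, and the folder name, action name and command are made up for illustration:

# hypothetical minimal tasklet, e.g. saved as my_agent/task.yml

# agent background story
system_prompt: >
  You are a helpful assistant that completes tasks by executing shell commands.

# the agent toolbox
functions:
  - name: Commands
    actions:
      - name: run
        description: "To execute a bash command on the local host:"
        example_payload: whoami
        # assumption: the payload generated by the model is passed to this command,
        # as with the ssh tool in the example below
        tool: /bin/bash -c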


While Nerve was inspired by other projects such as Autogen and Rigging, its main goal and core difference from other tools is allowing the user to instrument smart agents without writing code (unless required for custom functionalities). Another advantage is that Nerve is a single static binary (or Docker container) that does not require heavy runtimes (such as Python), while offering maximum efficiency and memory safety.

NOTE: The performance of this tool is heavily dependent on the model you use. Bigger models tend to interpret and generate structured data more reliably than smaller ones. If the model you are using generates invalid responses (visible at every step in the statistics), consider using a bigger one for your task.
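
According to the note above, invalid responses are visible in the per-step statistics; to keep an eye on them you can run your tasklet with the --stats flag (also referenced in the issues section below), which reports per-step timings and related information. A hypothetical invocation:

nerve -G "ollama://llama3@localhost:11434" -T /path/to/tasklet --stats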

LLM Support

Nerve features integrations for any model accessible via the Ollama, Groq, OpenAI and Fireworks APIs. You can specify which provider and which model to use via the -G (or --generator) argument:

For Ollama:

nerve -G "ollama://llama3@localhost:11434" ...

For Groq:

GROQ_API_KEY=your-api-key nerve -G "groq://llama3-70b-8192" ...

For OpenAI:

OPENAI_API_KEY=your-api-key nerve -G "openai://gpt-4" ...

For Fireworks:

LLM_FIREWORKS_KEY=your-api-key nerve -G "fireworks://llama-v3-70b-instruct" ...
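
Putting the examples above together, the generator string follows a common shape; the host and port part only appears in the self-hosted Ollama example, so treat this as an inference from the examples rather than an exhaustive specification:

nerve -G "<provider>://<model-name>[@<host>:<port>]" ...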

Example

Let's take a look at the examples/ssh_agent tasklet (a "tasklet" is a YAML file describing a task and its instructions):

# If this block is not specified, the agent will be able to access all of the 
# standard function namespaces. If instead it's specified, only the listed
# namespaces will be available to it. Use it to limit what the agent can do.
using:
  # the agent can save and recall memories
  - memory
  # the agent can update its own goal
  - goal
  # the agent can set the task as completed or impossible autonomously
  - task
  # the agent can create an action plan for the task
  - planning
  # give the agent a sense of time
  - time

# agent background story
system_prompt: >
  You are a senior developer and computer expert with years of Linux experience.
  You are acting as a useful assistant that performs complex tasks by executing a series of shell commands.

# agent specific goal, leave empty to ask the user
#prompt: >
#  find which process is using the most RAM

# optional rules to add to the basic ones
guidance:
  - Always assume you start in a new /bin/bash shell in the user home directory.
  - Prefer using full paths to files and directories.
  - Use the /tmp directory for any file write operations.
  - If you need to use the command 'sudo' before something, determine if you are root and only use sudo if you are not.

# optional global action timeout
timeout: 120s

# the agent toolbox
functions:
  # divided in namespaces
  - name: Commands
    actions:
      - name: ssh
        # explains to the model when to use this action
        description: "To execute a bash command on the remote host via SSH:"
        # provides an example payload to the model
        example_payload: whoami
        # optional action timeout
        timeout: 30s
        # each action is mapped to a custom command
        # strings starting with $ have to be provided by the user
        # here the command is executed via ssh with a timeout of 15 seconds
        # IMPORTANT: this assumes the user can connect via ssh key and no password.
        tool: ssh $SSH_USER_HOST_STRING

In this example we create an agent with the default functionalities that is also capable of executing any SSH command on a given host by using the "tool" we described to it.

In order to run this tasklet, you'll need to define the SSH_USER_HOST_STRING variable, so you'll run, for instance (see the section below on how to build Nerve):

nerve -G "ollama://llama3@localhost:11434" \
  -T /path/to/ssh_agent \
  -DSSH_USER_HOST_STRING=user@example-ssh-server-host

You can also omit the prompt section from the tasklet file, in which case you can pass it dynamically on the command line via the -P/--prompt argument:

nerve -G "ollama://llama3@localhost:11434" \
  -T /path/to/ssh_agent \
  -DSSH_USER_HOST_STRING=user@example-ssh-server-host \
  -P 'find which process is using the most RAM'

You can find more tasklet examples in the examples folder; feel free to send a PR if you create a cool new one! :D

How does it work?

The main idea is to give the model a set of functions to perform operations and to add more context to its own system prompt, in a structured way. Each operation (saving a memory, setting a new goal, etc.) alters the prompt in some way, so that at each iteration the model can autonomously refine its strategy and keep a state of facts, goals, plans and so on.

If you want to observe this (essentially the debug mode of Nerve), run your tasklet with the following additional argument:

nerve -G ... -T whatever-tasklet --save-to state.txt

The agent saves its internal state to disk at each iteration for you to observe.
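
Since the state file is rewritten at each iteration, a simple way to watch it evolve is to poll it from another terminal, for instance:

watch -n 1 cat state.txt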

Installing from Crates.io

Nerve is published as a binary crate on crates.io; if you have Cargo installed, you can run:

cargo install nerve-ai

This will compile its sources and install the binary in $HOME/.cargo/bin/nerve.
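
If the nerve command is not found afterwards, make sure Cargo's bin directory is on your PATH, for instance:

export PATH="$HOME/.cargo/bin:$PATH"
nerve -h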

Installing from DockerHub

A Docker image is available on Docker Hub.

In order to run it, keep in mind that you'll probably want to use the same network as the host in order to reach the Ollama server, and remember to share the tasklet files via a volume:

docker run -it --network=host -v ./examples:/root/.nerve/tasklets evilsocket/nerve -h

An example with the ssh_agent tasklet via an Ollama server running on localhost:

docker run -it --network=host \
  -v ./examples:/root/.nerve/tasklets \
  evilsocket/nerve -G "ollama://llama3@localhost:11434" -T ssh_agent -P 'find which process is consuming the most RAM'

Building from sources

To build from source:

cargo build --release

Run a tasklet with a given Ollama server:

./target/release/nerve -G "ollama://<model-name>@<ollama-host>:11434" -T /path/to/tasklet 

Building with Docker

docker build . -t nerve
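
The locally built image can then be run just like the Docker Hub one, only with the local tag, for instance:

docker run -it --network=host -v ./examples:/root/.nerve/tasklets nerve -h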

License

Nerve is released under the GPL 3 license. To see the licenses of the project dependencies, install cargo-license with cargo install cargo-license and then run cargo license.
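
That is:

cargo install cargo-license
cargo license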


nerve's Issues

http namespace cannot handle URI with body on POST, DELETE, PATCH, PUT methods

When running the web_vulnerability_scanner tasklet and trying to perform a POST request for a login or similar, nerve seems to fail: the payload variable only contains the body of the form (username and password) but not the URI.

The URI (action) is present in the attr variable instead.

(See the screenshot attached to the original issue.)

FYI @evilsocket

I have a potential fix; I will create it and reference the PR here.

Fireworks as an LLM provider?

I did attempt it using Podman, but it gets upset at my -G definition:

openai://accounts/fireworks/models/llama-v3-8b-hf@api.fireworks.ai/inference/v1

Yes, Fireworks defines accounts/fireworks/models/llama-v3-8b-hf as the entire model name. Yes, you have to provide the /inference/v1 for their chat completions.

Workarounds would be appreciated! I'll use Groq in the meantime.

Issue with rag namespace: thread 'main' panicked at src/agent/namespaces/rag/mod.rs:

Ran the below command

RUST_BACKTRACE=full nerve -G "ollama://llama3:8b@localhost:11434" -E "ollama://all-minilm@localhost:11434" -T auto_rag -P "define a Darmepinter"

I got the following error

[2024-07-05T05:52:26Z DEBUG] tasklet = Tasklet { folder: "/mnt/disk1/nerve_old/examples/auto_rag", name: "auto_rag", system_prompt: "You are an useful assistant that can search for information to provide truthful and concise answers to the user questions.", prompt: None, rag: Some(Configuration { source_path: "./docs", data_path: "./data", chunk_size: Some(1023) }), timeout: None, using: Some(["rag"]), guidance: Some(["You will start by using the search command to find information to answer the user question.", "Your task is complete once the answer to the user question is in memory."]), functions: None }
nerve v0.1.1 🧠 llama3:8b@localhost:11434 > auto_rag


[2024-07-05T05:52:26Z DEBUG] document with id '369bc271482cb8635c16e66153d3e3304f74cc4e69f77ce6304f7307a9a60d46@155' already indexed
[2024-07-05T05:52:26Z DEBUG] document with id '369bc271482cb8635c16e66153d3e3304f74cc4e69f77ce6304f7307a9a60d46@156' already indexed
[2024-07-05T05:52:26Z DEBUG] document with id '369bc271482cb8635c16e66153d3e3304f74cc4e69f77ce6304f7307a9a60d46@157' already indexed
[2024-07-05T05:52:26Z DEBUG] document with id '369bc271482cb8635c16e66153d3e3304f74cc4e69f77ce6304f7307a9a60d46@158' already indexed
[2024-07-05T05:52:26Z DEBUG] document with id 'd5bfd175b6e21463fac6fe449a62a46294433766c96ffd70d665c0ae3e8bad05@0' already indexed
[2024-07-05T05:52:26Z DEBUG] document with id 'cb81cccda3625d0b9e24e8775e96b27d70cdcf90c12f7db5f100b538fdc0b956@0' already indexed
[2024-07-05T05:52:26Z DEBUG] document with id '37e2fb0f093363c4d552b1f4d3aba17052a690dbbef62170ce4ba2787a13a57d@0' already indexed
[2024-07-05T05:52:26Z DEBUG] document with id 'b23571ca25e1f82e58755a542ef1048b6b5814be9cda19a3d3b7b30d7dff2c30@0' already indexed
[2024-07-05T05:52:26Z DEBUG] document with id '5355545119cf6d6017b316f7d37edd7038adafe87cdb05fd8e04a1ec10515fb9@0' already indexed
[2024-07-05T05:52:26Z DEBUG] document with id '01c9edd0617ceeb4d49586bb6d722bbf085f1824f21b317ff891fd8fee0134af@0' already indexed
...
....
...
..


[2024-07-05T05:52:26Z DEBUG] starting new connection: http://localhost:11434/
[2024-07-05T05:52:26Z INFO ] step:1 mem:27.3 MiB
[2024-07-05T05:52:29Z DEBUG] StartDocument(1.0, UTF-8, None)
[2024-07-05T05:52:29Z DEBUG] StartElement(search, {"": "", "xml": "http://www.w3.org/XML/1998/namespace", "xmlns": "http://www.w3.org/2000/xmlns/"})
[2024-07-05T05:52:29Z DEBUG] Characters(what is a Darmepinter?)
[2024-07-05T05:52:29Z DEBUG] EndElement(search)
[2024-07-05T05:52:29Z DEBUG] what is a Darmepinter? (top 1)
[2024-07-05T05:52:29Z DEBUG] starting new connection: http://localhost:11434/
[2024-07-05T05:52:30Z INFO ] rag search for 'what is a Darmepinter?': 1 results in 1.180500744s
thread 'main' panicked at src/agent/namespaces/rag/mod.rs:55:56:
called `Result::unwrap()` on an `Err` value: No such file or directory (os error 2)

Stack backtrace:
   0: <unknown>
   1: <unknown>
   2: <unknown>
   3: <unknown>
   4: <unknown>
   5: <unknown>
   6: <unknown>
   7: <unknown>
   8: __libc_start_main
   9: <unknown>
stack backtrace:
   0:     0x5ef42116995f - <unknown>
   1:     0x5ef420ec6a4b - <unknown>
   2:     0x5ef421132bd2 - <unknown>
   3:     0x5ef42116b229 - <unknown>
   4:     0x5ef42116aa4e - <unknown>
   5:     0x5ef42116bc3f - <unknown>
   6:     0x5ef42116b592 - <unknown>
   7:     0x5ef42116b4e9 - <unknown>
   8:     0x5ef42116b4d6 - <unknown>
   9:     0x5ef420d81fc2 - <unknown>
  10:     0x5ef420d82415 - <unknown>
  11:     0x5ef420e392d2 - <unknown>
  12:     0x5ef420e1cf82 - <unknown>
  13:     0x5ef420e4da41 - <unknown>
  14:     0x5ef420e445d3 - <unknown>
  15:     0x5ef420db83c3 - <unknown>
  16:     0x5ef420e4e62f - <unknown>
  17:     0x71af97a29d90 - <unknown>
  18:     0x71af97a29e40 - __libc_start_main
  19:     0x5ef420d9f345 - <unknown>
  20:                0x0 - <unknown>
Aborted (core dumped)

`--full-dump` doesn't seem to output anything.

Installed the latest cargo build using:

cargo install nerve-ai

With my API keys defined as environment variables, if I run:

nerve -G groq://llama3-70b-8192 -T examples/fs_explorer/task.yml --full-dump

Not only do I not see any PROMPT messages, but I don't see any extra created files or anything. If I specify a file, it throws an error.

This is the output I see on the console:

nerve v0.0.3 🧠 llama3-70b-8192@groq > fs_explorer
task: read the file in /etc that contains mappings for hostnames to IP addresses and save its contents in your memory

<read-file> /etc/hosts -> 125 bytes
<memories> hosts-file-contents=127.0.1.1        nerve
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters

task complete: 'The task is complete because I have successfully read the contents of the file in /etc that contains mappings for hostnames to IP addresses and saved its contents in my memory.'

FWIW, --stats works to show timings, etc. I even tried using both --stats and --full-dump together and only saw stats.
