
lsp-ai's Introduction

LSP-AI

LSP-AI is an open-source language server that serves as a backend for performing completion with large language models and, soon, other AI-powered functionality. Because it is a language server, it works with any editor that has LSP support.

The goal of LSP-AI is to assist and empower software engineers by integrating with the tools they already know and love, not to replace them.

A short list of editors it works with:

  • VS Code
  • NeoVim
  • Emacs
  • Helix
  • Sublime

It works with many, many more editors.

See the wiki for setup and configuration instructions.

LSP-AI can work as an alternative to GitHub Copilot.

[Video: LSP-AI VS Code and Helix demo]

On the left: VS Code using Mistral's Codestral. On the right: Helix using stabilityai/stable-code-3b.

Note that speed for completions is entirely dependent on the backend being used. For the fastest completions we recommend using either a small local model or Groq.

The Case for LSP-AI

tl;dr LSP-AI abstracts complex implementation details away from editor-specific plugin authors, centralizing open-source development work into one shareable backend.

Editor integrated AI-powered assistants are here to stay. They are not perfect, but are only improving and early research is already showing the benefits. While several companies have released advanced AI-powered editors like Cursor, the open-source community lacks a direct competitor.

LSP-AI aims to fill this gap by providing a language server that integrates AI-powered functionality into the editors we know and love. Here’s why we believe LSP-AI is necessary and beneficial:

  1. Unified AI Features:

    • By centralizing AI features into a single backend, LSP-AI allows supported editors to benefit from these advancements without redundant development efforts.
  2. Simplified Plugin Development:

    • LSP-AI abstracts away the complexities of setting up LLM backends, building complex prompts, and soon much more. Plugin developers can focus on enhancing the specific editor they are working on rather than dealing with backend intricacies.
  3. Enhanced Collaboration:

    • Offering a shared backend creates a collaborative platform where open-source developers can come together to add new functionalities. This unified effort fosters innovation and reduces duplicated work.
  4. Broad Compatibility:

    • LSP-AI supports any editor that adheres to the Language Server Protocol (LSP), ensuring that a wide range of editors can leverage the AI capabilities provided by LSP-AI.
  5. Flexible LLM Backend Support:

    • Currently, LSP-AI supports llama.cpp, Ollama, OpenAI-compatible APIs, Anthropic-compatible APIs, Gemini-compatible APIs, and Mistral AI FIM-compatible APIs, giving developers the flexibility to choose their preferred backend. This list will soon grow; a minimal configuration sketch follows this list.
  6. Future-Ready:

    • LSP-AI is committed to staying updated with the latest advancements in LLM-driven software development.
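
For a sense of what wiring up a backend looks like, here is a minimal configuration sketch distilled from the fuller examples in the issues below; the values are illustrative, and the wiki remains the authoritative reference for the available options:

```json
{
  "memory": { "file_store": {} },
  "models": {
    "model1": { "type": "ollama", "model": "codegemma" }
  },
  "completion": {
    "model": "model1",
    "parameters": { "max_context": 2048, "max_tokens": 128 }
  }
}
```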

Roadmap

There is so much to do for this project, and incredible new research and tools are coming out every day. Below is a list of some ideas for what we want to add next, but we welcome any contributions and discussion around prioritizing new features.

  • Implement semantic search-powered context building (this could be incredibly cool and powerful). Planning to use Tree-sitter to chunk code correctly; a sketch of the chunking idea follows this list.
  • Support for additional backends
  • Exploration of agent-based systems
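
To make the Tree-sitter idea above concrete, here is a rough sketch of the chunking step, not the project's actual implementation: it splits a file into complete top-level syntax nodes (functions, classes) rather than arbitrary line windows, which is the granularity a semantic index would embed. It assumes the tree-sitter and tree-sitter-python crates with 0.20-era APIs:

```rust
use tree_sitter::Parser;

/// Split source code into chunks along top-level syntax-node boundaries,
/// so each chunk is a whole function/class instead of an arbitrary slice.
fn chunk_source(code: &str) -> Vec<String> {
    let mut parser = Parser::new();
    parser
        .set_language(tree_sitter_python::language())
        .expect("grammar/library version mismatch");
    let tree = parser.parse(code, None).expect("parse failed");
    let root = tree.root_node();

    // Each top-level child (function_definition, class_definition, ...)
    // becomes one chunk; these are what the semantic index would embed.
    let mut cursor = root.walk();
    root.children(&mut cursor)
        .map(|node| code[node.byte_range()].to_string())
        .collect()
}

fn main() {
    let code = "def add(a, b):\n    return a + b\n\ndef sub(a, b):\n    return a - b\n";
    for (i, chunk) in chunk_source(code).iter().enumerate() {
        println!("chunk {i}:\n{chunk}");
    }
}
```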

lsp-ai's People

Contributors

asukaminato0721, luixiao0, silasmarvin


lsp-ai's Issues

Update cargo lockfile for frozen builds

Running cargo build --frozen fails for release 0.2.0. I am trying to package 0.2.0 for NixOS, but the out-of-date lockfile prevents this. An easy fix is to update the lockfile and bump the version to (say) 0.2.1.

Steps to reproduce:

Run cargo build --frozen

Output:

error: the lock file /path/to/lsp-ai/Cargo.lock needs to be updated but --frozen was passed to prevent this
If you want to try to generate the lock file without accessing the network, remove the --frozen flag and use --offline instead.
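
The maintainer-side fix is to regenerate the lockfile and commit it; a sketch, verifying that a frozen build then succeeds:

```sh
# refresh Cargo.lock so it matches Cargo.toml
cargo generate-lockfile

# prove that packagers can now build without touching the network
cargo build --frozen

# commit the updated lockfile
git add Cargo.lock
git commit -m "update Cargo.lock"
```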

Sample helix languages.toml?

I am having difficulty figuring out how to pass initializationOptions via helix. Any way you might be able to provide a quick helix config as an example? Thanks.
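
Until a sample lands in the wiki, here is a minimal sketch, assuming the OpenAI-style backend from the example further down this page. Helix sends the `config` table to the server as initializationOptions, and the `auth_token_env_var_name` field is an assumption on my part, so double-check the field names against the wiki:

```toml
[language-server.lsp-ai]
command = "lsp-ai"

[language-server.lsp-ai.config.memory]
file_store = { }

[language-server.lsp-ai.config.models.model1]
type = "open_ai"
chat_endpoint = "https://api.openai.com/v1/chat/completions"
model = "gpt-4o"
auth_token_env_var_name = "OPENAI_API_KEY"

[language-server.lsp-ai.config.completion]
model = "model1"
parameters = { max_context = 2048, max_tokens = 128 }

[[language]]
name = "python"
language-servers = ["pyright", "lsp-ai"]
```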

Add doc for printing LSP server logs

[Image: demo]

I'm working on getting a handle on logging so I can implement features like list suggestions, next suggestion, select suggestions, etc.

I've got LSP-AI working on my local machine and can start the extension, generate a response, and confirm the generation.

I set LSP_AI_LOG=DEBUG before running the app.

However, I haven't figured out how to print the LSP server logs.

Can we add a doc explaining that?
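
Until such a doc exists, one workable approach: the server writes its log output to stderr (note the stderr: prefixes in the Zed logs further down this page), so a wrapper script that redirects stderr captures everything. A sketch:

```sh
#!/bin/sh
# hypothetical wrapper: point your editor/extension at this script instead
# of the lsp-ai binary, then tail /tmp/lsp-ai.log while you type
export LSP_AI_LOG=DEBUG
exec lsp-ai "$@" 2>> /tmp/lsp-ai.log
```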

How to start model?

I've installed LSP-AI on my local machine:

$ which lsp-ai

/Users/future/.cargo/bin/lsp-ai

But I don't know how to start it.
When I run

$ /Users/future/.cargo/bin/lsp-ai

the console hangs. If I press a key, it exits immediately.
Not sure what to do next. Please advise.

Thanks for your work!
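
For what it's worth, the hang is expected behavior: lsp-ai is a language server, so it waits for JSON-RPC messages (framed with Content-Length headers) on stdin and is normally spawned by an editor rather than run by hand. A sketch of poking it manually just to see it respond:

```sh
# send a minimal LSP initialize request over stdin (normally the editor's job)
body='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"capabilities":{}}}'
printf 'Content-Length: %s\r\n\r\n%s' "${#body}" "$body" | lsp-ai
```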

Support remote endpoint for Ollama?

Thanks for this project!

Is there an existing issue for this?

I have searched the existing issues

Feature request

Support remote endpoint for Ollama

Context

Ollama is not fast enough using onboard hardware when run locally on laptops; we can utilize existing cloud GPUs to speed it up.

Possible implementation
I have provided a possible implementation at #15
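
For reference, a purely hypothetical sketch of what such an option might look like in the model config; the generate_endpoint field name and shape are assumptions, so check #15 for the actual implementation:

```json
{
  "models": {
    "model1": {
      "type": "ollama",
      "model": "codegemma",
      "generate_endpoint": "http://gpu-box.local:11434/api/generate"
    }
  }
}
```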

The code generated using the above prompt needs to be deleted before it can be used. Is there any good way to directly delete it?

"lsp-ai.serverConfiguration": {
        "memory": {
            "file_store": {}
          },
          "models": {
            "model1": {
              "type": "open_ai",
              "chat_endpoint": "https://api.openai.com/v1/chat/completions",
              "model": "gpt-4o",
              "auth_token": "sk-xxxxxxxxxxxxxx"
            }
          },
          "completion": {
            "model": "model1",
            "parameters": {
              "max_context": 2048,
              "max_tokens": 128,
              "messages": [
                {
                  "role": "system",
                  "content": "Instructions:\n- You are an AI programming assistant.\n- Given a piece of code with the cursor location marked by \"<CURSOR>\", replace \"<CURSOR>\" with the correct code or comment.\n- First, think step-by-step.\n- Describe your plan for what to build in pseudocode, written out in great detail.\n- Then output the code replacing the \"<CURSOR>\"\n- Ensure that your completion fits within the language context of the provided code snippet (e.g., Python, JavaScript, Rust).\n\nRules:\n- Only respond with code or comments.\n- Only replace \"<CURSOR>\"; do not include any previously written code.\n- Never include \"<CURSOR>\" in your response\n- If the cursor is within a comment, complete the comment meaningfully.\n- Handle ambiguous cases by providing the most contextually appropriate completion.\n- Be consistent with your responses."
                },
                {
                  "role": "user",
                  "content": "def greet(name):\n    print(f\"Hello, {<CURSOR>}\")"
                },
                {
                  "role": "assistant",
                  "content": "name"
                },
                {
                  "role": "user",
                  "content": "function sum(a, b) {\n    return a + <CURSOR>;\n}"
                },
                {
                  "role": "assistant",
                  "content": "b"
                },
                {
                  "role": "user",
                  "content": "fn multiply(a: i32, b: i32) -> i32 {\n    a * <CURSOR>\n}"
                },
                {
                  "role": "assistant",
                  "content": "b"
                },
                {
                  "role": "user",
                  "content": "# <CURSOR>\ndef add(a, b):\n    return a + b"
                },
                {
                  "role": "assistant",
                  "content": "Adds two numbers"
                },
                {
                  "role": "user",
                  "content": "# This function checks if a number is even\n<CURSOR>"
                },
                {
                  "role": "assistant",
                  "content": "def is_even(n):\n    return n % 2 == 0"
                },
                {
                  "role": "user",
                  "content": "{CODE}"
                }
              ]
            }
          }
    },
    ```
The code generated using the above prompt needs to be deleted before it can be used. Is there any good way to directly delete it? For example, the raw generation comes back like this, stray fences and all:


def TreeNode:```python
    # Binary tree node definition
    def __init__(self, value=0, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right
```python
# Function to create a binary tree from a list
def create_binary_tree(lst, index=0):
    if index < len(lst):
        node = TreeNode(lst[index])
        node.left = create_binary_tree(lst, 2 * index + 1)
        node.right = create_binary_tree(lst, 2 * index + 2)
        return node
    return None
```python
# Example usage
if __name__ == "__main__":
    elements = [1, 2, 3, 4, 5, 6, 7]
    root = create_binary_tree(elements)
    print(root.value)  # Output: 1
```python
```python
print(root.left.value)  # Output: 2
```print(root.right.value)  # Output: 3

Zed Extension

I am trying to use this in Zed using the LSP extension. I installed lsp-ai using Cargo, bound it to different file types, and passed the following initialization options:

(Note: Codegemma is installed on my system using Ollama)

{
    "memory": {
        "file_store": {}
    },
    "models": {
        "model1": {
            "type": "ollama",
            "model": "codegemma"
        }
    },
    "completion": {
        "model": "model1",
        "parameters": {
            "fim": {
                "start": "<|fim_begin|>",
                "middle": "<|fim_hole|>",
                "end": "<|fim_end|>"
            },
            "max_context": 2000,
            "options": {
                "num_predict": 32
            }
        }
    }
}

However, when I tested it, I didn't get any completions. Here are the LSP logs of lsp-ai:

Server Logs:

stderr: ERROR lsp_ai::memory_worker: error in memory worker task: Error getting rope slice
stderr: ERROR lsp_ai::transformer_worker: generating response: channel closed
stderr: ERROR lsp_ai::memory_worker: error in memory worker task: Error getting rope slice
stderr: ERROR lsp_ai::transformer_worker: generating response: channel closed
stderr: ERROR lsp_ai::memory_worker: error in memory worker task: Error getting rope slice
stderr: ERROR lsp_ai::transformer_worker: generating response: channel closed
stderr: ERROR lsp_ai::memory_worker: error in memory worker task: Error getting rope slice
stderr: ERROR lsp_ai::transformer_worker: generating response: channel closed
stderr: ERROR lsp_ai::memory_worker: error in memory worker task: Error getting rope slice
stderr: ERROR lsp_ai::transformer_worker: generating response: channel closed

Server Logs (RPC):

// Send:
{"jsonrpc":"2.0","id":8,"method":"textDocument/completion","params":{"textDocument":{"uri":"file:///home/raunak/Documents/zed-lsp-ai/test.py"},"position":{"line":1,"character":5}}}
// Receive:
{"jsonrpc":"2.0","id":8,"error":{"code":-32603,"message":"channel closed"}}
// Send:
{"jsonrpc":"2.0","id":9,"method":"textDocument/completion","params":{"textDocument":{"uri":"file:///home/raunak/Documents/zed-lsp-ai/test.py"},"position":{"line":2,"character":5}}}
// Receive:
{"jsonrpc":"2.0","id":9,"error":{"code":-32603,"message":"channel closed"}}

Is this an issue with the editor, lsp-ai, or is it my fault?

Support lsp_client.cancel_requests()

While developing a Neovim plugin for LSP-AI, I found that despite asking the server to cancel the previous request, I still received multiple responses from it.

[Question] Is it possible to use Ollama as a backend?

Hi, first of all - thank you for an awesome project :)
I have a question though - the docs say that when the llama.cpp backend is used, the LSP links directly to it, but I'm not sure - is it possible to utilize an Ollama instance serving on the local network?
Thanks!

RAG for code

Really interesting project – I think an AI-focused LSP makes a lot of sense as a standalone service.

Implement semantic search-powered context building (This could be incredibly cool and powerful). Planning to use Tree-sitter to chunk code correctly.

You might want to check out https://github.com/getgrit/gritql by @morgante et al. It's a Rust-based wrapper around tree-sitter that I used for my last project doing RAG for code and was really impressed with it.

Cheers 👋

How do I use this in helix?

Hello,

I have installed lsp-ai

❯ (Get-Command lsp-ai).Path
C:\Users\AndreJohansson\.cargo\bin\lsp-ai.exe

I have set my openai key

❯ $env:OPENAI_API_KEY
sk-proj-...

I have used your example to configure my languages file. I added Rust to it.

languages.toml

...
#################################
## Configuration for languages ##
#################################

[[language]]
name = "python"
language-servers = ["pyright", "lsp-ai"]

[[language]]
name = "rust"
auto-format = true
language-servers = ["rust-analyzer", "lsp-ai"]

Then I open a Rust file in Helix:

hx main.rs

How would I then trigger prompts, questions, etc.?
Please note: I'm a novice Helix user, so any keyboard shortcuts and commands are helpful.
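
One likely gap: the languages.toml above never defines the lsp-ai server itself, which Helix also needs; once that is in place, completions surface through Helix's normal insert-mode completion popup. A minimal sketch (see the fuller sample earlier on this page):

```toml
[language-server.lsp-ai]
command = "lsp-ai"

# the `config` table (memory, models, completion, ...) goes here and is
# sent to the server as initializationOptions
[language-server.lsp-ai.config.memory]
file_store = { }
```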

Using Anthropic API renders some issues

Thanks for this project, I'd love to use this with helix.

In my attempts I ran into two issues; the first I was able to track down.

2024-06-08T21:55:06.687 helix_lsp::transport [ERROR] lsp-ai <- InternalError: "{\"message\":\"messages.0.tool_calls: Extra inputs are not permitted\",\"type\":\"invalid_request_error\"}"

I made a fork and removed both references to tool_calls in config.rs; this fixed that issue.
I'm no Rustacean, so not sure how to implement that nicely, but I hope it points you in some useful direction.


The second issue I'm not sure how to tackle…
I get results from the Anthropic API, but they are cut off at (I believe…) about 155 tokens.

For example, if I enter

const monthNames = <CURSOR>

…it returns a (cut-off) list of monthNames.

The following, however, works fine.

const monthNamesAbbreviated = <CURSOR>
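
A likely cause of the truncation is the max_tokens cap in the completion parameters (the sample config earlier on this page sets 128). A sketch of raising it; the anthropic type name, endpoint, and auth field here are assumptions by analogy with the OpenAI example, so verify them against the wiki:

```json
{
  "models": {
    "model1": {
      "type": "anthropic",
      "chat_endpoint": "https://api.anthropic.com/v1/messages",
      "model": "claude-3-5-sonnet-20240620",
      "auth_token_env_var_name": "ANTHROPIC_API_KEY"
    }
  },
  "completion": {
    "model": "model1",
    "parameters": { "max_context": 2048, "max_tokens": 512 }
  }
}
```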

Setting up with home-manager on MacOS

Hey, this looks like it would be a great addition to the Helix text editor! I was wondering if you know of any way to add this to a home-manager configuration file? Additionally, I have it downloaded, but it does not seem to run in any of my editors.

Thanks!

VS code

The following error shows on VS Code Insiders:

[Error - 5:16:50 PM] lsp-ai client: couldn't create connection to server.
Launching server using command lsp-ai failed. Error: spawn lsp-ai ENOENT
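
ENOENT here just means VS Code could not find an lsp-ai binary on the PATH it inherited. A quick check, assuming a Cargo install:

```sh
# verify the binary exists and is reachable from the shell VS Code starts from
which lsp-ai || ls ~/.cargo/bin/lsp-ai

# if it only lives in ~/.cargo/bin, make sure that directory is on PATH
echo "$PATH" | tr ':' '\n' | grep cargo
```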

Installation problem on Windows 11: "cargo install lsp-ai -F llama_cpp"

Following AICodeKing's tutorial for installing LSP-AI for Ollama on Windows 11, I ended up with these errors:
[Screenshot: CARGO-PROJETS - Visual Studio Code]
So I searched around a bit, and I found the fix!

🥇 🥇 🥇 INSTALLING on Windows 11 with Python 3.12 & Conda 🥇 🥇 🥇

SYSTEM PREREQUISITES:
Python
MiniConda
Install Microsoft C++ Build Tools
Install Rust
Install Visual Studio Code
Install LLVM 18.1.5 💯 DECLARED: THE CULPRIT

Add the environment variables for llama.cpp to the system.

INSTALLING LSP-AI:
Create a conda env and activate it.
Clone the repo.
Move into the created folder.
Check that Cargo is properly installed. (Restart if there is a problem.)
Run the install command.

### Create a conda env and activate it.
conda create -n lsp-ai python=3.12 -y
conda activate lsp-ai

### Clone the repo.
git clone https://github.com/SilasMarvin/lsp-ai.git

### Move into the created folder.
cd lsp-ai

### Check that Cargo is properly installed.
cargo --version

###### If OK, continue...
### Run the install command.
cargo install lsp-ai -F llama_cpp

# Installation complete.

New answer 👍

[Screenshot: toml - CARGO-PROJETS - Visual Studio Code]
👍 👍 👍
Happy installing...

Install LSP-AI for VS Code

LSP-AI for VS Code
Ollama

PS: The installation was done with a .env in Visual Studio, but there should not be any issues with conda.

Bonus:

  1. The plugin in VS Code.
  2. Configuration for Ollama.
