
FireCoder: Your Self-hosted AI Code Writing Assistant

FireCoder is your self-hosted AI assistant, purpose-built to optimize your coding experience directly on your local machine.

Feel free to share your feedback or report any issues on our GitHub repository.

FireCoder is currently a work in progress, and we appreciate your patience and support as we continue to enhance its capabilities.

Features

  • Easy Installation: Simply install the extension and start using FireCoder.
  • Completion Auto Mode: Enjoy the convenience of automatic code suggestions.
  • Manual Mode: Switch between auto mode and manual mode for code suggestions.
  • Chat Mode: Interact with FireCoder through natural language, receiving code suggestions and guidance tailored to your needs.
  • Multi-line Code Suggestions: Enhance your coding experience with multi-line code suggestions.
  • Platform Support: FireCoder supports Windows, Linux, and macOS.

New Experimental Features:

  • GPU Support: You can now enable GPU support via the firecoder.experimental.useGpu setting in the configuration.
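
The setting can be toggled in VS Code's settings.json. A minimal sketch, assuming the flag is a plain boolean (the key name comes from the text above; the value shape is an assumption):

```json
{
  // Experimental: offload model inference to the GPU (assumed boolean flag)
  "firecoder.experimental.useGpu": true
}
```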

Getting Started

  1. Download the VS Code extension for FireCoder.
  2. Wait for the server and model to download.
  3. Start coding with the assistance of our AI-driven features.

Roadmap

  • Custom Commands
  • Generate Commit Descriptions
  • Easy GPU Support
  • Pull Request Reviews
  • Cloud Service
  • IntelliJ IDEA Support
  • Self-Hosting for Teams

We're committed to making FireCoder an indispensable part of your coding toolkit. Stay tuned for updates as we bring these exciting features to life!

System Requirements

Minimal Requirements

  • Disk Space: Minimum 2 GB of free disk space.
  • RAM: Minimum 1 GB of available memory.

These are the minimum specifications to run FireCoder. The extension should function, but performance may be limited.

Optimal Requirements

  • Disk Space: 14 GB or more of free disk space.
  • RAM: 6 GB or more of available memory.

If you intend to utilize the high-power and large model features, it is advisable to use a system with enhanced specifications to achieve better performance.

Links

Release Notes

See GitHub Releases


Contributors

eltociear, gespispace


Issues

local LLM -- option to select *.gguf models by myself

Hello.

I am happy with this extension. It is the only one that works without any dependencies, so it is instantly usable within VS Code. Thank you for bringing us such an extension. I have some requests:

It would be nice if we could select the *.gguf model file ourselves. I could download a model somewhere (e.g. Hugging Face) and select the file in the FireCoder options. I noticed that the models are saved on the C:\ drive, inside the ".firecoder" folder under USERS. If possible, it would also be very handy to have an option for choosing where downloaded models are stored.

In my case, drive C: is reserved for the OS and stays clean; I put heavy stuff on the D: partition (Windows). I managed to use the mklink command to alias this folder to drive D:.

Issue with server (cannot be found)

Hello.

When I install the FireCoder .vsix file offline with VS Code, I get these errors:

  • Server file info not found (while starting VS Code)
  • fetch failed (when starting chat)

It seems it cannot find the server.exe file? Maybe it would be a good idea to add an option for defining a custom path to this file in the FireCoder extension settings.

Surprisingly, if I am online and install the FireCoder .vsix file with VS Code, it works. Only the offline install seems not to work.
I will try to investigate this a bit more.

[Feature] Use local tokenizer

FireCoder currently uses a tokenizer to determine the maximum length of the prompt for autocomplete. To do this, FireCoder sends the text to the llama.cpp tokenizer endpoint. However, this process takes time and cannot be used when the user is working in the cloud. It is important to provide as much context as possible, but the current method has some issues.

Firstly, to use the llama.cpp tokenizer, the user must download the server and model, which is not convenient for users who want to work with the cloud.
Secondly, preparing a prompt can take more than 2 seconds.
Finally, FireCoder needs a complex algorithm to select the maximum suitable prompt length with a minimum number of requests to llama.cpp.

The solution is to use a local tokenizer that can be directly called from the extension. There are two possible options for this:

  1. Use the tokenizers library, but it works poorly with its Node.js bindings, so further investigation is needed.
  2. Use transformers.js, which should work well, but it still needs to be tested.
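
Whichever tokenizer is used, the surrounding logic is the same: trim context until the prompt fits the token budget. A minimal sketch, where `truncatePrompt` and `countTokens` are hypothetical names and the whitespace-splitting tokenizer is a stand-in purely for illustration:

```typescript
// Sketch only: trims context lines from the start of a prompt until it fits
// a token budget. `countTokens` stands in for a real local tokenizer
// (e.g. one from transformers.js); it is NOT FireCoder's actual algorithm.
function truncatePrompt(
  text: string,
  maxTokens: number,
  countTokens: (s: string) => number
): string {
  const lines = text.split("\n");
  // Drop the oldest context line while the prompt is over budget.
  while (lines.length > 1 && countTokens(lines.join("\n")) > maxTokens) {
    lines.shift();
  }
  return lines.join("\n");
}

// Crude stand-in tokenizer: counts whitespace-separated words.
const approxCount = (s: string): number =>
  s.split(/\s+/).filter(Boolean).length;

const fitted = truncatePrompt("line one\nline two\nline three", 4, approxCount);
console.log(fitted); // "line two\nline three"
```

A local tokenizer would make each `countTokens` call cheap, so even a naive loop like this avoids the round-trips to llama.cpp described above.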

Add chat mode

TODO:

  • add chat models for downloading
  • add a setting for enabling chat
  • start the server with a chat model if firecoder.experimental.chat is true
  • add a webview for showing the chat
  • add a wrapper for chat requests that supports a chat template
  • add a chat history class
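
The chat-template wrapper from the list above could be sketched as follows. The names and the ChatML-like layout are assumptions for illustration; the real template depends on the chat model being served:

```typescript
// Hypothetical sketch of a chat-template wrapper: formats a message history
// into a single prompt string using a ChatML-like layout.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function applyChatTemplate(messages: ChatMessage[]): string {
  const parts = messages.map(
    (m) => `<|im_start|>${m.role}\n${m.content}<|im_end|>`
  );
  // Leave the assistant turn open so the model continues from here.
  return parts.join("\n") + "\n<|im_start|>assistant\n";
}

const chatPrompt = applyChatTemplate([
  { role: "system", content: "You are a coding assistant." },
  { role: "user", content: "Write hello world in JavaScript." },
]);
console.log(chatPrompt.endsWith("<|im_start|>assistant\n")); // true
```

A chat history class would then only need to maintain the `ChatMessage[]` array and re-run the template before each request.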

Server crashes with NVIDIA GPU after updating to the latest llama.cpp build

Version: 1.88.1
Commit: e170252f762678dec6ca2cc69aba1570769a5d39
Date: 2024-04-10T17:34:12.840Z
Electron: 28.2.8
ElectronBuildId: 27744544
Chromium: 120.0.6099.291
Node.js: 18.18.2
V8: 12.0.267.19-electron.0
OS: Linux x64 6.7.10-200.fc39.x86_64 snap

Log:

llama> /home/gespispace/.firecoder/server: error while loading shared libraries: libcudart.so.11.0: cannot open shared object file: No such file or directory

2024-04-17 17:31:05.150 [trace] llama> child process exited with code 127

UX - Buttons / Menus

Hello.

As I read, FireCoder is in its early stage. Maybe it is the perfect time to give some suggestions on what could improve the usability of FireCoder. These things would be nice:

  • a copy button to copy the generated code text
  • a cancel/stop button to stop the AI from writing more text, for when the code is finished but the AI explanation runs too long
  • a drop-down menu where we can store chats (with the possibility of deleting them via a small X button)
  • a right-click context menu when selecting code inside the VS Code editor, with entries such as: analyze code, find problem, reformat code, etc. The selection could be sent back to FireCoder, which would automatically start the chat.
  • a start/stop server button: sometimes I don't need FireCoder, and if I am not able to deactivate it, my RAM is full. Deactivating the server manually would free RAM.
    [EDIT: When starting VS Code, the server should not start automatically, imho. But this could be handled as a user preference inside the FireCoder settings.]

I think adding a few buttons and menus like these could improve the usability of FireCoder.
