silvanmelchior / incognitopilot
An AI code interpreter for sensitive data, powered by GPT-4 or Code Llama / Llama 2.
License: MIT License
Would be great to support Code Llama. As of now, there is no Hugging Face version, which prevents a trivial integration.
So far, the privacy concept (what is sent to the cloud, what stays local) is only explained in the readme. It would be great to also make it clearer in the UI. Options are:
Try to automatically detect sensitive content in the code results, e.g. because it matches file content.
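The file-content matching heuristic could be sketched like this; this is my assumption of how such detection might work, not the project's design, and the function name and threshold are hypothetical:

```python
# Hypothetical sketch: flag an interpreter result as potentially
# sensitive if it contains a long-enough substring of any file in
# the working directory. The min_overlap threshold is arbitrary.
def looks_sensitive(result: str, file_contents: list[str],
                    min_overlap: int = 20) -> bool:
    for content in file_contents:
        # Scan the file in fixed-size chunks and check each against
        # the result text.
        for start in range(0, max(1, len(content) - min_overlap + 1),
                           min_overlap):
            chunk = content[start:start + min_overlap]
            if len(chunk) >= min_overlap and chunk in result:
                return True
    return False

secret = "account_number=1234567890,balance=999"
print(looks_sensitive("the file says " + secret, [secret]))  # → True
print(looks_sensitive("nothing to see here", [secret]))      # → False
```

A real implementation would likely normalize whitespace and use rolling hashes rather than naive substring checks, but the UI decision (warn vs. allow) stays the same.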
Hi, I try to run it with the
-e LLM="gpt:gpt-3.5-turbo"
flag, but it returns:
Unknown LLM setting: gpt:gpt-3.5-turbo
Is there any chance we could change the LLM here to Llama 2 or another LLM? Thank you!
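For illustration, the rejection likely comes from an unknown backend prefix in the setting string. This toy parser assumes backend names like the "gpt-openai" / "gpt-azure" modes mentioned elsewhere in this tracker; the set of accepted prefixes is my assumption, not the project's real table, so check the README for the exact values:

```python
# Assumed backend names, for illustration only.
KNOWN_BACKENDS = {"gpt-openai", "gpt-azure", "llama"}

def parse_llm_setting(setting: str) -> tuple[str, str]:
    """Split a 'backend:model' string and reject unknown prefixes."""
    backend, sep, model = setting.partition(":")
    if not sep or backend not in KNOWN_BACKENDS:
        raise ValueError(f"Unknown LLM setting: {setting}")
    return backend, model

print(parse_llm_setting("gpt-openai:gpt-3.5-turbo"))
# → ('gpt-openai', 'gpt-3.5-turbo')
# A bare "gpt" prefix would raise: Unknown LLM setting: gpt:gpt-3.5-turbo
```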
Allow the model to output markdown formatting. Syntax-highlight code blocks.
Maybe also allow showing images etc. from the working directory in the chat interface.
We need to explain clearly that the APIs don't see these images.
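The syntax-highlighting part could start from extracting fenced blocks out of the model's reply. A minimal sketch, not the project's actual renderer:

```python
import re

# Pull fenced code blocks out of a model reply so they can be
# syntax-highlighted separately from the surrounding prose.
FENCE = re.compile(r"```(\w*)\n(.*?)```", re.DOTALL)

reply = "Here you go:\n```python\nprint('hi')\n```\nDone."
for lang, code in FENCE.findall(reply):
    print(lang or "plain", repr(code))  # → python "print('hi')\n"
```

A frontend would typically hand the extracted language tag and body to a highlighter component instead of printing them.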
Dear Developers,
First of all, I would like to express my appreciation for your hard work in developing this amazing project. I have been an avid user and contributor for some time now, and I believe that the improvements made have greatly enhanced the overall user experience.
However, I would like to raise an issue regarding the current state of the user interface (UI) on mobile devices. While the project's UI is excellent on desktop, I have noticed some usability challenges when accessing it on my mobile device. The UI elements appear to be less optimized for smaller screens, resulting in a less user-friendly experience.
Would be nice to have a dark mode
Please add support for the StableCode models, which are purpose-built both for advanced coding and for learning to code - they have larger context windows and can handle multiple files comprising a code base.
https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b
These models might also be able to address #19 (comment)
In any case, kindly add some more documentation for use by n00bs, as this would make it great to use in education settings. Thanks for your work; this has potential in many use cases.
Dear developers,
Hello! I would like to suggest an enhancement and kindly request your assistance. Could you please consider adapting certain projects that resemble OpenAI interfaces? Specifically, I am referring to projects such as LocalAi, text-generation-web-ui's REST API interface (which I primarily use for integrating third-party apps, as it automatically adapts various model prompts), and llama-cpp-python. These projects share the characteristic of being unable to utilize OpenAI's function directly. However, if we could incorporate the existing codes with slight modifications, it would greatly simplify the process. Currently, the closest approach I have found is to utilize the LlamaReplicate class for the necessary modifications.
Thank you for your attention to this matter. Your amazing work is greatly appreciated!
Best regards,
dbian
ChatGPT refuses to install a package, but a few days ago it would work if I asked ChatGPT to run the command (and I can also do it in the terminal):
import os
os.system("pip install duckdb")
import duckdb
Error:
Traceback (most recent call last):
File /tmp/tmp85zlufnx.py:1
import duckdb
ModuleNotFoundError: No module named 'duckdb'
Now the first 2 lines work, but the last line still doesn't work. What is the best way to install packages?
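A pattern that is usually more reliable than shelling out to a bare `pip` is to invoke pip through the running interpreter, so the package installs into the same environment the code actually executes in. A sketch, assuming network access inside the container:

```python
import subprocess
import sys

def pip_install(package: str) -> None:
    """Install a package into the interpreter's own environment.

    Using ``sys.executable -m pip`` ensures the package lands in the
    same Python environment the interpreter runs code with; a bare
    ``pip`` on PATH may point to a different installation.
    """
    subprocess.check_call([sys.executable, "-m", "pip", "install", package])

# Example:
# pip_install("duckdb")
# import duckdb
```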
Would be helpful to support new lines in the input field, e.g. with shift+enter
I get errors like "Interpreter has stopped" or "Error: This model's maximum context length is 8192 tokens. However, your messages resulted in 9667 tokens (9514 in the messages, 153 in the functions). Please reduce the length of the messages or functions." and then it gives me no option but to reset the entire chat and lose everything. It should just continue the conversation and work around the problem.
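A client-side workaround could trim the oldest messages until the request fits, instead of forcing a full reset. This is a rough sketch, not the project's code: it approximates tokens as ~4 characters each, whereas a real implementation would count with the model's tokenizer (e.g. tiktoken):

```python
def trim_history(messages: list[dict], max_tokens: int = 8192) -> list[dict]:
    """Drop the oldest messages until the history fits the window."""
    def approx_tokens(msg: dict) -> int:
        # Crude approximation: ~4 characters per token.
        return max(1, len(msg["content"]) // 4)

    kept = list(messages)
    while len(kept) > 1 and sum(approx_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # discard the oldest message first
    return kept

history = [{"role": "user", "content": "x" * 40000},
           {"role": "assistant", "content": "short answer"}]
print(len(trim_history(history, max_tokens=8192)))  # → 1
```

A production version would also keep the system prompt pinned and could summarize dropped turns rather than discarding them outright.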
Add more tests, including UI tests and end-to-end tests.
Have a checkbox or similar that allows exposing the files in the working directory to the model, e.g. file names and structure.
Allow keeping a chat history.
I want to use a custom Python package with the Copilot. The aim is to integrate a Python package with the Copilot for code-generation purposes. This would enable us to leverage the capabilities of the private package within our code-writing workflows.
I am able to install the package in the Docker container,
but the Copilot/LLM is not able to pick it up and use it. I want the package to be used like OpenAI functions.
For example, I have a package named foretell that does time-series forecasting. When I ask a prompt like "Use foretell and build a time series model", the Copilot should be able to understand the foretell package and use it as needed.
Is this doable?
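One direction worth exploring, sketched here as an assumption rather than a supported feature, is describing the package's callables as OpenAI-style function schemas. `fit_forecast` below is a hypothetical stand-in for a function from the private `foretell` package:

```python
import inspect

def to_function_schema(fn) -> dict:
    """Build an OpenAI-functions-style schema from a callable.

    All parameters are typed as strings for simplicity; a real
    version would map Python annotations to JSON Schema types.
    """
    params = inspect.signature(fn).parameters
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": {name: {"type": "string"} for name in params},
            "required": [n for n, p in params.items()
                         if p.default is inspect.Parameter.empty],
        },
    }

def fit_forecast(series: str, horizon: str = "12"):
    """Fit a time-series model and forecast `horizon` steps ahead."""

print(to_function_schema(fit_forecast)["name"])  # → fit_forecast
```

The generated schemas would then be passed to the model alongside the built-in code-execution function, letting it call into the package directly.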
Add more Llama 2 connectors, e.g. to use it via the official repo or via llama.cpp.
Hello,
Thank you for the great tool!
I checked it out with the gpt-openai mode and it worked well for me.
I then wanted to use it with gpt-azure and kept getting "ERROR: Internal Error" for anything I submit.
I checked this with Mac Ventura 13.5.1 Docker and colima, with GCP docker, with amd and arm processors, and with full and slim images. I also checked building the images locally. The creds were verified with a simple curl/python to be working fine and getting a response.
Is this a known issue under work?
Thanks in advance
Keep auto-approvals stored somewhere. Probably ask the user explicitly whether they should be stored first.
If both auto-approves are true, we could also stop showing the sidebar when code runs (maybe).
And allow hiding the sidebar again once it appears.
Add multi-user authentication to UI and services
Hi,
Trying different combination (from Docker, from source...), I keep getting the error:
Error: Could not authenticate to backend. This probably means there is no or an invalid authentication token provided in the URL. Please check the startup console output of the backend and add a valid token to the URL.
You can find the Gitpod config here.
Since the UI is now served statically, it would be possible to pack everything into a Python package. However, this loses the sandboxing of the code execution. One idea to at least prevent file access is the following:
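One plausible direction, sketched purely as an assumption (the helper and approach are illustrative, not the project's): run generated code in a separate interpreter process confined to a scratch working directory, with a timeout. This is only a soft barrier compared to a container:

```python
import subprocess
import sys
import tempfile

def run_sandboxed(code: str, workdir: str, timeout: int = 10) -> str:
    """Run untrusted code in a separate interpreter process.

    The child is confined only by its working directory and a
    timeout; this is not a real sandbox like Docker, just a way to
    keep casual file access inside `workdir`.
    """
    result = subprocess.run(
        [sys.executable, "-c", code],
        cwd=workdir,
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.stdout

with tempfile.TemporaryDirectory() as tmp:
    print(run_sandboxed("print(2 + 2)", tmp))  # → 4
```

A hardened version would additionally drop privileges or apply OS-level restrictions, since a subprocess alone cannot stop absolute-path access.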
Hi there! StarCoder from BigCode was trained for this kind of task, so having some documentation/support for it would be great.
Very nice project btw 🔥
Allow to edit code before it runs
It would be great to support inputs larger than 8K tokens for GPT-35-16K and GPT-4-32K.
Thanks