
incognitopilot's Issues

Add support for Code Llama

It would be great to support Code Llama. As of now, there is no Hugging Face version, which prevents a trivial integration.

Privacy Concept in UI

So far, the privacy concept (what is sent to the cloud and what stays local) is only explained in the readme. It would be great to also make it clearer in the UI. Options are:

  • Change the UI design so that it is clearer what is local and what is in the cloud
  • Add a getting-started assistant which explains everything (this would need to store somewhere that it has already been seen, which would affect other issues as well, e.g. #7)

Unknown LLM setting

Hi, I try to run it with the

-e LLM="gpt:gpt-3.5-turbo"

flag, but it returns:

Unknown LLM setting: gpt:gpt-3.5-turbo
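
A later issue below mentions gpt-openai and gpt-azure modes, so the prefix gpt alone is probably not a recognized value. A hedged guess (not verified against the docs) is that the flag needs the full mode name, e.g.:

-e LLM="gpt-openai:gpt-3.5-turbo"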

Rich model outputs

Allow the model to output markdown formatting. Syntax-highlight code blocks.

Maybe also allow showing images etc. from the working directory in the chat interface.

We need to explain clearly that the APIs do not see these images.

WSL Docker doesn't work

This is the error I get (screenshot attached):
ERROR: ASGI callable returned without sending handshake.
INFO: connection open
INFO: connection closed

Any idea? I'm running from a PC that seems to be able to use OpenAI's API.

Request for Mobile UI Adaptation - Improving User Experience on Mobile Devices

Dear Developers,

First of all, I would like to express my appreciation for your hard work in developing this amazing project. I have been an avid user and contributor for some time now, and I believe that the improvements made have greatly enhanced the overall user experience.

However, I would like to raise an issue regarding the current state of the user interface (UI) on mobile devices. While the project's UI is excellent on desktop, I have noticed some usability challenges when accessing it on my mobile device. The UI elements appear to be less optimized for smaller screens, resulting in a less user-friendly experience.

Example: see attached screenshot.

Dark mode

Would be nice to have a dark mode

Add Support for StableCode models

Please add support for the StableCode models, which are purpose-built both for advanced coding and for learning coding - they have larger context windows and can handle multiple files comprising a code base.

https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b

These models might also be able to address #19 (comment)

In any case, kindly add some more documentation for use by n00bs, as this would make it great to use in education settings. Thanks for your work; this has potential in many use cases.

Request for Adapting Projects with Similarities to OpenAI Interfaces

Dear developers,

Hello! I would like to suggest an enhancement and kindly request your assistance. Could you please consider adapting certain projects that resemble OpenAI interfaces? Specifically, I am referring to projects such as LocalAI, text-generation-webui's REST API (which I primarily use for integrating third-party apps, as it automatically adapts various model prompts), and llama-cpp-python. These projects share the characteristic of not being able to use OpenAI's functions directly. However, if the existing code could be incorporated with slight modifications, it would greatly simplify the process. Currently, the closest approach I have found is to use the LlamaReplicate class for the necessary modifications.
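
For context, LocalAI and llama-cpp-python expose an OpenAI-compatible REST API, so the usual adaptation pattern is simply to point the OpenAI client at the local server. This is only a rough sketch under that assumption (openai<1.0 style, placeholder URL and model name), not a statement about how IncognitoPilot is wired internally:

import openai

openai.api_key = "not-needed"                 # many local servers ignore the key
openai.api_base = "http://localhost:8080/v1"  # placeholder: local OpenAI-compatible endpoint

response = openai.ChatCompletion.create(
    model="local-model",                      # placeholder model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response["choices"][0]["message"]["content"])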

Thank you for your attention to this matter. Your amazing work is greatly appreciated!

Best regards,
dbian

Installing packages

ChatGPT refuses to install a package, but a few days ago it would work if I asked ChatGPT to run the command (and I can also do it in the terminal):

import os
os.system("pip install duckdb")
import duckdb

Error:

Traceback (most recent call last):
  File "/tmp/tmp85zlufnx.py", line 1, in <module>
    import duckdb
ModuleNotFoundError: No module named 'duckdb'

Now the first 2 lines work, but the last line still doesn't work. What is the best way to install packages?
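
For reference, the usual way to install a package from inside a running Python interpreter is to call pip through that same interpreter; a minimal sketch (duckdb used only as the example package from above):

import subprocess
import sys

# Install with the pip belonging to the current interpreter, then import.
# This avoids picking up a pip from a different environment.
subprocess.run([sys.executable, "-m", "pip", "install", "duckdb"], check=True)
import duckdb
print(duckdb.__version__)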

Recover from errors

I get errors like "Interpreter has stopped" or "Error: This model's maximum context length is 8192 tokens. However, your messages resulted in 9667 tokens (9514 in the messages, 153 in the functions). Please reduce the length of the messages or functions." and then it gives me no option but to reset the entire chat and lose everything. It should just continue the conversation and work around the error.
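
One possible workaround for the context-length case (a sketch only, not how IncognitoPilot currently handles it) is to drop the oldest messages until the conversation fits the context window again, using tiktoken for an approximate count:

import tiktoken

def trim_history(messages, max_tokens=8192, model="gpt-4"):
    # Approximate token counting with tiktoken; ignores per-message overhead
    # and the tokens used by function definitions.
    enc = tiktoken.encoding_for_model(model)
    def total(msgs):
        return sum(len(enc.encode(m["content"])) for m in msgs)
    trimmed = list(messages)
    # Keep the first message (system prompt) and drop the oldest turns
    # until the conversation fits into the context window again.
    while len(trimmed) > 1 and total(trimmed) > max_tokens:
        del trimmed[1]
    return trimmed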

Explain files in working-dir

Have a checkbox or similar which allows explaining the files in the working directory to the model, e.g. file names and structure.
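
A minimal sketch of what such a summary could look like (the working-directory path is a placeholder):

import os

def describe_working_dir(root="/mnt/data"):  # placeholder path
    # Walk the working directory and collect relative file paths into a
    # plain-text summary that could be prepended to the model prompt.
    entries = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            entries.append(os.path.relpath(os.path.join(dirpath, name), root))
    return "Files in the working directory:\n" + "\n".join(sorted(entries))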

Use Custom Python Packages with IncognitoPilot

I want to use a custom Python package with the Copilot. The aim is to integrate a Python package with the Copilot for code-generation purposes. This would enable us to leverage the capabilities of the private package within our code-writing workflows.
I am able to install the package in the Docker container, but the Copilot/LLM is not able to pick it up and use it. I want the package to be used in a way similar to OpenAI functions.

For example, I have a package named foretell that does time series forecasting. When I ask a prompt like "Use foretell and build a time series model", the Copilot should be able to understand the foretell package and use it as needed.

Is this doable?

GPT-AZURE: Internal Error

Hello,

Thank you for the great tool!

I checked it out with the gpt-openai mode and it worked well for me.

I then wanted to use it with gpt-azure and kept getting "ERROR: Internal Error" for everything I submitted.
I checked this on macOS Ventura 13.5.1 with Docker and Colima, with Docker on GCP, with AMD and ARM processors, and with both the full and slim images. I also tried building the images locally. The credentials were verified with a simple curl/Python request to be working fine and returning a response.

Is this a known issue under work?

Thanks in advance

File up- and download

For the deployed version, file upload and download would be helpful. This would go hand in hand with #8 to show e.g. links to downloadable files, and with #12 to maybe only explain newly uploaded files.

Save auto-approve

Keep the auto-approve setting stored somewhere. Probably ask the user explicitly first whether it should be stored.

If both auto-approves are enabled, we could also stop showing the sidebar when code runs (maybe).

And allow hiding the sidebar again once it appears.

Can't use on gitpod

Hi,

Trying different combinations (from Docker, from source, ...), I keep getting the error:

Error: Could not authenticate to backend. This probably means there is no or an invalid authentication token provided in the URL. Please check the startup console output of the backend and add a valid token to the URL.

You can find the Gitpod config here.

Dockerless Installation

Since the UI is now served statically, it would be possible to pack everything into a Python package. However, this removes the sandboxing of the code execution. One idea to at least prevent file access is the following:

  • Overwrite the open method (a sketch follows below)
  • To check: do numpy, ... use this method?
  • To check: are there alternative ways of accessing files?
  • To check: restrict more than just file access? Other things you don't want to run locally?
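
A minimal sketch of the first bullet, restricting builtins.open to a single allowed directory. It does not cover os.open, low-level pathlib calls, or extension modules that open files from C code, which is exactly what the "to check" items above would need to answer:

import builtins
import os

ALLOWED_ROOT = os.path.abspath("workdir")  # hypothetical allowed directory
_real_open = builtins.open

def restricted_open(file, *args, **kwargs):
    # Reject any path outside ALLOWED_ROOT; file descriptors (ints) pass through.
    if not isinstance(file, int):
        path = os.path.abspath(os.fsdecode(file))
        if not path.startswith(ALLOWED_ROOT + os.sep):
            raise PermissionError(f"Blocked file access outside the working directory: {path}")
    return _real_open(file, *args, **kwargs)

builtins.open = restricted_open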

Add support for StarCoder models

Hi there! StarCoder from BigCode was trained for this kind of task, so having some documentation/support for it would be great.

Very nice project btw 🔥

Code edit

Allow editing code before it runs.
