microsoft / codex-cli

CLI tool that uses Codex to turn natural language commands into their Bash/Zsh/PowerShell equivalents.

License: MIT License
Set clear expectations on what this tool is supposed to be - i.e. not a product, just an example. We can base it on the documentation from oasis/ros.
Change binding to Ctrl + G (zsh and PS) to be consistent with bash.
Catalog the wildest use-cases and add a small animation of those in the README for people to get an idea of how powerful this tool can be
I demoed the PowerShell instructions and found them really straightforward and easy to use. Cool!
In the README:
Add an OpenAI access and model check in zsh and bash.
If the script takes too long to respond, it might be good to let the user turn context management off. Without context management, the file I/O latency disappears and the tool makes a direct API call to Codex, which will improve performance.
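The fast path described above could look roughly like this. This is a hypothetical sketch, not the repo's actual code: `send_to_codex`, `CONTEXT_FILE`, and the `query` signature are all illustrative names, and the API call is stubbed out.

```python
from pathlib import Path

CONTEXT_FILE = Path("current_context.txt")  # assumed context location

def send_to_codex(prompt: str) -> str:
    """Stand-in for the real Codex API call."""
    return f"echo '{prompt}'"

def query(command: str, use_context: bool = True) -> str:
    if not use_context:
        # Fast path: no file I/O at all, just a direct API call.
        return send_to_codex(command)
    # Slow path: prepend the saved context, then persist the new exchange.
    context = CONTEXT_FILE.read_text() if CONTEXT_FILE.exists() else ""
    response = send_to_codex(context + command)
    CONTEXT_FILE.write_text(context + command + "\n" + response + "\n")
    return response
```

The point is that the no-context branch never touches disk, so disabling context management removes the file-read and file-write latency entirely.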
We should add instructions for getting an OpenAI API key. Create an issue to track this. Ryan and I will figure out the details.
Allow users to load a context from CLI
Put requirements at the top of Installation
PowerShell #3
- Need to go to C:\your\custom\path\NL-CLI, not what is listed.
- Alert the user about required values - the current redirect sounds as if they are only needed for expert users, but they are required for everyone.
PowerShell #4 - didn't work; no idea how to debug.
The directory-slash issue needs to be worked out.
Whenever there is an incomplete response inside the shell and the user calls the tool again, it ends up corrupting the prompt. It might be more prudent to add a new keybinding for that case, or to handle it inside the code.
Please add the following statement to the README to set the right expectations for users.
Statement of Purpose
This repository aims to grow the understanding of using Codex in applications by providing an example implementation and references to support the Microsoft Build conference in 2022. It is not intended to be a released product. Therefore, this repository is not for discussing the OpenAI API or requesting new features.
Please update installation.md to use Bash setup/cleanup scripts.
Testing within a new environment requires switching from stdin to input() until we figure out a way to plug the Python script into the shell's source files.
Need to update the token count and other header values after the context is updated.
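One way to keep the header in sync is to restamp it every time an exchange is appended. A minimal sketch, assuming a `## key: value` header format and a naive whitespace tokenizer (both illustrative, not the repo's actual format):

```python
def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: count whitespace-separated words.
    return len(text.split())

def append_exchange(context: str, command: str, response: str) -> str:
    lines = context.splitlines()
    header = [l for l in lines if l.startswith("## ")]
    body = [l for l in lines if not l.startswith("## ")]
    body += [command, response]
    # Restamp token_count; leave any other header values untouched.
    header = [l for l in header if not l.startswith("## token_count:")]
    header.append(f"## token_count: {estimate_tokens(chr(10).join(body))}")
    return "\n".join(header + body) + "\n"
```

Caching the count in the header this way also means later calls can read one line instead of re-tokenizing the whole context file.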
Minor issue:
Link currently used in PowerShell step 3 of the README: https://github.com/microsoft/NL-CLI#about-powershellsetupps1
Link that should be used: https://github.com/microsoft/NL-CLI#about-powershell_setupps1
Figure out what the "source" file in PowerShell looks like and add a plugin straight there for mapping Ctrl+X to the Python script.
Setup doesn't work in PS on macOS.
Repro steps:
Run ./scripts/powershell_setup.ps1 from the NL-CLI folder in a PowerShell terminal.
Results:
Setup runs without errors.
Expected results:
The script should prompt for required parameters.
Added a new keybinding for calling Codex without using the context. For some reason, even though there are two different keybindings for with and without context, only one of the two gets called every time.
Is PowerShell 7 supported? My PS7 doesn't seem to work...
Command cheat sheets, demo materials, and custom command files would be a great addition to the repo. They would let users use load context to better focus Codex on their use case.
Here is my feedback on README.md:
- notepad $profile step - Notepad will prompt you to create a profile if it doesn't find one.
- "python script" should read codex_query.py.
Right now we rely on users to provide correct inputs for config settings and context commands. This is obviously going to cause issues down the line. Need to fix.
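A fix could validate user-supplied settings before they are used, falling back to defaults and rejecting out-of-range values. The setting names, defaults, and bounds below are illustrative assumptions, not the repo's actual schema:

```python
DEFAULTS = {"temperature": 0.5, "max_tokens": 64, "shell": "bash"}
ALLOWED_SHELLS = {"bash", "zsh", "powershell"}

def validate_config(user: dict) -> dict:
    cfg = {**DEFAULTS, **user}  # user values override defaults
    unknown = set(cfg) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unknown settings: {sorted(unknown)}")
    if not 0.0 <= float(cfg["temperature"]) <= 1.0:
        raise ValueError("temperature must be between 0 and 1")
    if int(cfg["max_tokens"]) <= 0:
        raise ValueError("max_tokens must be positive")
    if cfg["shell"] not in ALLOWED_SHELLS:
        raise ValueError(f"shell must be one of {sorted(ALLOWED_SHELLS)}")
    return cfg
```

Failing loudly at load time, with a message naming the bad setting, is much easier to debug than a silent bad API call later.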
Instead of using Ctrl+X for a single response, maybe allow another keybinding to trigger multiple responses.
We should consider adding support for the command line, primarily for non-devs who occasionally use the command line
Please modify the Bash cleanup script (currently empty) to support the following actions. When the user runs the cleanup script, it will:
- revert the changes made to .bashrc
- remove the src/openaiapirc file
- print "NL-CLI Bash clean up completed. Please close this Bash session."
It seems these don't work in Linux (WSL2 and Ubuntu):
python -m pip install openai
python -m pip install psutil
After installing Python and pip in Linux, these are the commands that work:
pip install openai
pip install psutil
Reduces time complexity of the code and simplifies some other work items
Across both PowerShell and Bash, come up with the ideal demo script for the next review with Kevin
Currently each shell requires an openaiapirc file in a different location. Instead of asking people to create a config file and put it in different locations, we should have a single config file in the repo that we reference.
We should consider using a .env file, as with the other Build examples, if that's idiomatic in the Python community. Otherwise, let's use whatever idiom Python tends to use for API keys.
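For reference, the common Python idiom is the python-dotenv package (`load_dotenv()`), but the format is simple enough that a stdlib-only reader works too. A minimal sketch, assuming a flat `KEY=value` .env file (the key name `OPENAI_API_KEY` is an assumption here):

```python
import os

def load_env(path: str = ".env") -> dict:
    """Read KEY=value pairs from a .env file; skip blanks and comments."""
    values = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip().strip('"')
    except FileNotFoundError:
        pass
    # Real environment variables take precedence over the file.
    values.update({k: v for k, v in os.environ.items() if k in values})
    return values
```

Either way, the .env file should be gitignored so keys never land in the repo.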
I've had a few occurrences where I type a command (e.g. "# Move up one directory") and then quickly cancel it, never getting a response from the model. The context still saves just the command without the result, which ends up causing problems later.
We should come up with a way to save full interactions (command + response) together to avoid this. Ideally, we should only save interactions if the user chooses to run the suggested code - practically, I'm not sure whether that is possible.
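One way to guarantee command and response are saved together is to buffer the command and commit only when the response arrives, dropping the pending command on cancel. A sketch with illustrative names (`ContextLog` and its methods are not the repo's actual structure):

```python
class ContextLog:
    def __init__(self):
        self.saved = []       # committed (command, response) pairs
        self._pending = None  # command still awaiting a response

    def start(self, command: str) -> None:
        self._pending = command  # nothing is written to the context yet

    def complete(self, response: str) -> None:
        if self._pending is None:
            return  # ignore a response that has no matching command
        self.saved.append((self._pending, response))
        self._pending = None

    def cancel(self) -> None:
        self._pending = None  # e.g. Ctrl+C before a response: drop it
```

With this shape, a cancelled command never reaches the context, so half-exchanges can't poison future prompts.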
Please add instructions on how to get an organization ID, thanks.
Add capability to look back a certain number of exchanges
Use token_count metadata to avoid overcounting tokens every time.
Pick through the code for needless file operations.
Currently, it's possible to coax offensive content from the model. Though I've never seen the model proactively produce offensive language, prompts like "Make an array of offensive terms" produce unsavory outcomes. We should use the content filter API that's part of the OpenAI service to detect offensive prompts and completions (calling it with the full interaction) and handle them when found - in this case, a message like that in the OAI playground should suffice:
The message should probably be a comment, and should include instructions to cancel the command (i.e. "Press Ctrl + C to cancel...").
When offensive prompts/completions are detected, we should not append them to the context.
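The gating logic could sit between the completion and both the shell and the context. In this sketch, `classify_content` is a stub standing in for the real content filter API call, and the 0/2 labels mirror the safe/unsafe convention loosely; everything here is illustrative:

```python
OFFENSIVE_NOTICE = (
    "# The generated content was flagged by the content filter.\n"
    "# Press Ctrl + C to cancel..."
)

def classify_content(text: str) -> int:
    """Stub filter: 2 (unsafe) on a toy blocklist hit, else 0 (safe)."""
    return 2 if "offensive" in text.lower() else 0

def handle_completion(prompt: str, completion: str, context: list) -> str:
    # Classify the full interaction, not just the completion.
    if classify_content(prompt + "\n" + completion) == 2:
        # Flagged interactions are never appended to the context.
        return OFFENSIVE_NOTICE
    context.append((prompt, completion))
    return completion
```

Returning the notice as a shell comment keeps the prompt line harmless even if the user presses Enter instead of Ctrl+C.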
Allow users to reconfigure the model settings from the context file directly. Right now, we are restamping the defaults every time. This is obviously a bug. We want to allow people to share context files across different computers with the same shell but using different model settings.
Using an approach similar to powershell-voice.txt, use Cognitive Services Neural Speech to enable a high-quality conversational interface with the model.
Low-hanging fruit: the zsh plugin code can be ported over to bash easily.
conda create -n testing python=3.10
I typed
# change my timezone to mountain
and hit Ctrl+X, and got the error -bash: bash_execute_unix_command: cannot find keymap for command. Not sure if this is 100% working yet, but just wanted to give a heads up. Happy to iterate further, but will stop for now and try PowerShell as well.

Using a YAML config (or similar) would be cleaner than the combined prompt and config. A config.yaml.example can then be provided, reducing the redundancy.
The naming of Context Mode is a bit off - we should consider a different name (e.g. Multi-turn) or something to that effect. We should also document how best to toggle between the modes.
I've also run into a few bugs when tweaking the modes and am not sure where I should be doing it - in the codex_query.py file, in the completion.txt files, or from the command line. Specifically, I find that when I'm in multi-turn mode and interacting for a while, then switch to single-turn mode, it doesn't actually change. Similarly, I've found that loading contexts doesn't always work from other directories.
Wrapping responses in backticks could work as a way to tag multi-line responses while also keeping them executable. This would allow increasing the token response length.
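The backtick idea above could be as simple as fencing only multi-line responses and stripping the fence before execution. A sketch under that assumption (the exact delimiter convention is a proposal, not decided):

```python
def wrap(response: str) -> str:
    """Fence multi-line responses in backticks; leave one-liners alone."""
    if "\n" not in response:
        return response
    return f"`\n{response}\n`"

def unwrap(stored: str) -> str:
    """Strip the backtick fence, recovering the executable block."""
    if stored.startswith("`\n") and stored.endswith("\n`"):
        return stored[2:-2]
    return stored
```

Because `unwrap(wrap(x)) == x` for any response, the context file can hold multi-line blocks as single tagged units without breaking executability.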
Also, if someone calls the NL-CLI on an incomplete output, we have to add validation for that case so that we don't write two copies of the same input-output pair to the context file. The responses get a little buggy after that.
The script references $NL_CLI_PATH/codex_query.py for what should be $NL_CLI_PATH/src/codex_query.py, and silently fails.
chmod +x codex_query.py