kardolus / chatgpt-cli

ChatGPT CLI is an advanced command-line interface for ChatGPT models via OpenAI and Azure, offering streaming, query mode, and history tracking for seamless, context-aware conversations. Ideal for both users and developers, it provides advanced configuration and easy setup options to ensure a tailored conversational experience with the GPT model.

License: MIT License


chatgpt-cli's Introduction

ChatGPT CLI


ChatGPT CLI provides a powerful command-line interface for seamless interaction with ChatGPT models via OpenAI and Azure, featuring streaming capabilities and extensive configuration options.

a screenshot


Features

  • Streaming mode: Real-time interaction with the GPT model.
  • Query mode: Single input-output interactions with the GPT model.
  • Interactive mode: A conversational, multi-turn experience with the model. Prints the token usage when combined with query mode.
  • Thread-based context management: Enjoy seamless conversations with the GPT model with individualized context for each thread, much like your experience on the OpenAI website. Each unique thread has its own history, ensuring relevant and coherent responses across different chat instances.
  • Sliding window history: To stay within token limits, the chat history automatically trims while still preserving the necessary context. The size of this window can be adjusted through the context-window setting.
  • Custom context from any source: You can provide the GPT model with a custom context during conversation. This context can be piped in from any source, such as local files, standard input, or even another program. This flexibility allows the model to adapt to a wide range of conversational scenarios.
  • Model listing: Access a list of available models using the -l or --list-models flag.
  • Thread listing: Display a list of active threads using the --list-threads flag.
  • Advanced configuration options: The CLI supports a layered configuration system where settings can be specified through default values, a config.yaml file, and environment variables. For quick adjustments, various --set-<value> flags are provided. To verify your current settings, use the --config or -c flag.
  • Availability Note: This CLI supports both gpt-4 and gpt-3.5-turbo models. However, the specific ChatGPT model used on chat.openai.com may not be available via the OpenAI API.
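Taken together, the thread features above can be exercised directly from the shell. A minimal sketch, assuming the default 'openai' environment-variable prefix described under Configuration (the thread names are illustrative):

```shell
# Keep two independent conversations by selecting a thread per invocation
# (assumes the default name prefix 'openai'; thread names are examples)
OPENAI_THREAD=work chatgpt "Our sprint ends Friday."
OPENAI_THREAD=work chatgpt "When does the sprint end?"   # answered using the 'work' history
OPENAI_THREAD=home chatgpt "When does the sprint end?"   # the 'home' thread lacks that context
```

Each thread keeps its own history file, so the two conversations never bleed into each other.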

Installation

Using Homebrew (macOS)

You can install chatgpt-cli using Homebrew:

brew tap kardolus/chatgpt-cli && brew install chatgpt-cli

Direct Download

For a quick and easy installation without compiling, you can directly download the pre-built binary for your operating system and architecture:

Apple Silicon

curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-darwin-arm64 && chmod +x chatgpt && sudo mv chatgpt /usr/local/bin/

macOS Intel chips

curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-darwin-amd64 && chmod +x chatgpt && sudo mv chatgpt /usr/local/bin/

Linux (amd64)

curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-linux-amd64 && chmod +x chatgpt && sudo mv chatgpt /usr/local/bin/

Linux (arm64)

curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-linux-arm64 && chmod +x chatgpt && sudo mv chatgpt /usr/local/bin/

Linux (386)

curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/latest/download/chatgpt-linux-386 && chmod +x chatgpt && sudo mv chatgpt /usr/local/bin/

Windows (amd64)

Download the binary from the releases page and add it to your PATH.

Choose the appropriate command for your system, which will download the binary, make it executable, and move it to your /usr/local/bin directory (or %PATH% on Windows) for easy access.

Getting Started

  1. Set the OPENAI_API_KEY environment variable to your ChatGPT secret key. To set the environment variable, you can add the following line to your shell profile (e.g., ~/.bashrc, ~/.zshrc, or ~/.bash_profile), replacing your_api_key with your actual key:

    export OPENAI_API_KEY="your_api_key"
  2. To enable history tracking across CLI calls, create a ~/.chatgpt-cli directory using the command:

    mkdir -p ~/.chatgpt-cli

    Once this directory is in place, the CLI automatically manages the message history for each "thread" you converse with. The history operates like a sliding window, maintaining context up to a configurable token maximum. This ensures a balance between maintaining conversation context and achieving optimal performance.

    By default, if a specific thread is not provided by the user, the CLI uses the default thread and stores the history at ~/.chatgpt-cli/history/default.json. You can find more details about how to configure the thread parameter in the Configuration section of this document.

  3. Try it out:

    chatgpt what is the capital of the Netherlands
  4. To start interactive mode, use the -i or --interactive flag:

    chatgpt --interactive

    If you want the CLI to automatically create a new thread for each session, ensure that the auto_create_new_thread configuration variable is set to true. This will create a unique thread identifier for each interactive session.

  5. To use the pipe feature, create a text file containing some context. For example, create a file named context.txt with the following content:

    Kya is a playful dog who loves swimming and playing fetch.

    Then, use the pipe feature to provide this context to ChatGPT:

    cat context.txt | chatgpt "What kind of toy would Kya enjoy?"
  6. To list all available models, use the -l or --list-models flag:

    chatgpt --list-models
  7. For more options, see:

    chatgpt --help
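The steps above can be condensed into a single session sketch (the API key is a placeholder; chatgpt must already be on your PATH):

```shell
# Minimal end-to-end session
export OPENAI_API_KEY="your_api_key"      # placeholder key
mkdir -p ~/.chatgpt-cli                   # enables history tracking
chatgpt what is the capital of the Netherlands
echo "Kya is a playful dog." | chatgpt "What kind of toy would Kya enjoy?"
chatgpt --list-models
```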

Configuration

The ChatGPT CLI adopts a three-tier configuration strategy: default values are overridden by settings in the config.yaml file, which are in turn overridden by environment variables.

General Configuration

Configuration variables:

  • name: The prefix for environment variable overrides. Default: 'openai'
  • api_key: Your OpenAI API key. Default: (none, for security)
  • model: The GPT model used by the application. Default: 'gpt-3.5-turbo'
  • max_tokens: The maximum number of tokens that can be used in a single API call. Default: 4096
  • context_window: The memory limit for how much of the conversation can be remembered at one time. Default: 8192
  • role: The system role. Default: 'You are a helpful assistant.'
  • temperature: The sampling temperature to use, between 0 and 2. Higher values make the output more random; lower values make it more focused and deterministic. Default: 1.0
  • frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far. Default: 0.0
  • top_p: An alternative to sampling with temperature, called nucleus sampling, where the model considers the tokens comprising the top_p probability mass. Default: 1.0
  • presence_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far. Default: 0.0
  • thread: The name of the current chat thread. Each unique thread name has its own context. Default: 'default'
  • omit_history: If true, the chat history will not be used to provide context for the GPT model. Default: false
  • url: The base URL for the OpenAI API. Default: 'https://api.openai.com'
  • completions_path: The API endpoint for completions. Default: '/v1/chat/completions'
  • models_path: The API endpoint for accessing model information. Default: '/v1/models'
  • auth_header: The header used for authorization in API requests. Default: 'Authorization'
  • auth_token_prefix: The prefix added before the token in the auth_header. Default: 'Bearer '
  • command_prompt: The command prompt in interactive mode. Should be single-quoted. Default: '[%datetime] [Q%counter]'
  • auto_create_new_thread: If set to true, a new thread with a unique identifier (e.g., int_a1b2) will be created for each interactive session. If false, the CLI will use the thread specified by the thread parameter. Default: false
  • track_token_usage: If set to true, displays the total token usage after each query in --query mode, helping you monitor API usage. Default: false

Variables for interactive mode:

  • %date: The current date in the format YYYY-MM-DD.
  • %time: The current time in the format HH:MM:SS.
  • %datetime: The current date and time in the format YYYY-MM-DD HH:MM:SS.
  • %counter: The total number of queries in the current session.
  • %usage: The usage in total tokens used (only works in query mode).
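These tokens are combined in the command_prompt setting. A sketch that appends an example prompt format to the user configuration file (run once; repeated runs would duplicate the key):

```shell
# Show the current time and query number in the interactive prompt
# (single-quoted in the YAML so the % tokens are passed through literally)
mkdir -p ~/.chatgpt-cli
cat >> ~/.chatgpt-cli/config.yaml <<'EOF'
command_prompt: '[%time] [Q%counter]'
EOF
```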

The defaults can be overridden by providing your own values in the user configuration file, named .chatgpt-cli/config.yaml, located in your home directory.

The structure of the user configuration file mirrors that of the default configuration. For instance, to override the model and max_tokens parameters, your file might look like this:

model: gpt-3.5-turbo-16k
max_tokens: 4096

This alters the model to gpt-3.5-turbo-16k and adjusts max_tokens to 4096. All other options, such as url, completions_path, and models_path, can similarly be modified. If the user configuration file cannot be accessed or is missing, the application will resort to the default configuration.
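The override file above can be created from the shell; the values are the same examples used in this section:

```shell
# Create (or overwrite) the user configuration with the example overrides
mkdir -p ~/.chatgpt-cli
cat > ~/.chatgpt-cli/config.yaml <<'EOF'
model: gpt-3.5-turbo-16k
max_tokens: 4096
EOF
```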

Another way to adjust values without manually editing the configuration file is by using environment variables. The name attribute forms the prefix for these variables. As an example, the model can be modified using the OPENAI_MODEL environment variable. Similarly, to disable history during the execution of a command, use:

OPENAI_OMIT_HISTORY=true chatgpt what is the capital of Denmark?

This approach is especially beneficial for temporary changes or for testing varying configurations.

Moreover, you can use the --config or -c flag to view the present configuration. This handy feature allows users to swiftly verify their current settings without the need to manually inspect the configuration files.

chatgpt --config

Executing this command will display the active configuration, including any overrides instituted by environment variables or the user configuration file.
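Putting the two together, an environment-variable override should show up in the displayed configuration for that invocation only (the model name here is illustrative):

```shell
# The override applies to this invocation only; config.yaml is untouched
OPENAI_MODEL=gpt-4 chatgpt --config   # should report model: gpt-4
chatgpt --config                      # back to the configured model
```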

To facilitate convenient adjustments, the ChatGPT CLI provides flags for swiftly modifying the model, thread, context-window and max_tokens parameters in your user-configured config.yaml. These flags are --set-model, --set-thread, --set-context-window and --set-max-tokens.

For instance, to update the model, use the following command:

chatgpt --set-model gpt-3.5-turbo-16k

This feature allows for rapid changes to key configuration parameters, optimizing your experience with the ChatGPT CLI.

Azure Configuration

For Azure, use a configuration similar to:

name: azure
api_key: <your_key>
model: <not relevant, read from the completions path>
max_tokens: 4096
context_window: 8192
role: You are a helpful assistant.
temperature: 1
top_p: 1
frequency_penalty: 0
presence_penalty: 0
thread: default
omit_history: false
url: https://<your_resource>.openai.azure.com
completions_path: /openai/deployments/<your_deployment>/chat/completions?api-version=<your_api>
models_path: /v1/models
auth_header: api-key
auth_token_prefix: " "
command_prompt: '[%datetime] [Q%counter]'
auto_create_new_thread: false
track_token_usage: false

You can set the API key either in the config.yaml file as shown above or export it as an environment variable:

export AZURE_API_KEY=<your_key>
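If the CLI reports errors against Azure, it can help to hit the deployment directly with curl. A sketch assuming the url, completions_path, and api-key header from the configuration above (keep your own values in place of the placeholders):

```shell
# Sanity-check the Azure deployment outside the CLI
curl -s "https://<your_resource>.openai.azure.com/openai/deployments/<your_deployment>/chat/completions?api-version=<your_api>" \
  -H "api-key: $AZURE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"ping"}]}'
```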

Command-Line Autocompletion

Enhance your CLI experience with our new autocompletion feature for command flags!

Enabling Autocompletion

Autocompletion is currently supported for the following shells: Bash, Zsh, Fish, and PowerShell. To activate flag completion in your current shell session, execute the appropriate command based on your shell:

  • Bash
    . <(chatgpt --set-completions bash)
  • Zsh
    . <(chatgpt --set-completions zsh)
  • Fish
    chatgpt --set-completions fish | source
  • PowerShell
    chatgpt --set-completions powershell | Out-String | Invoke-Expression

Persistent Autocompletion

For added convenience, you can make autocompletion persist across all new shell sessions by adding the appropriate sourcing command to your shell's startup file. Here are the files typically used for each shell:

  • Bash: Add to .bashrc or .bash_profile
  • Zsh: Add to .zshrc
  • Fish: Add to config.fish
  • PowerShell: Add to your PowerShell profile script

For example, for Bash, you would add the following line to your .bashrc file:

. <(chatgpt --set-completions bash)

This ensures that command flag autocompletion is enabled automatically every time you open a new terminal window.

Development

To start developing, set the OPENAI_API_KEY environment variable to your ChatGPT secret key. Follow these steps for running tests and building the application:

  1. Run the tests using the following scripts:

    For unit tests, run:

    ./scripts/unit.sh

    For integration tests, run:

    ./scripts/integration.sh

    For contract tests, run:

    ./scripts/contract.sh

    To run all tests, use:

    ./scripts/all-tests.sh
  2. Build the app using the installation script:

    ./scripts/install.sh
  3. After a successful build, test the application with the following command:

    ./bin/chatgpt what type of dog is a Jack Russel?
  4. As mentioned previously, the ChatGPT CLI supports tracking conversation history across CLI calls. This feature creates a seamless and conversational experience with the GPT model, as the history is utilized as context in subsequent interactions.

    To enable this feature, you need to create a ~/.chatgpt-cli directory using the command:

    mkdir -p ~/.chatgpt-cli

Reporting Issues and Contributing

If you encounter any issues or have suggestions for improvements, please submit an issue on GitHub. We appreciate your feedback and contributions to help make this project better.

Uninstallation

If for any reason you wish to uninstall the ChatGPT CLI application from your system, you can do so by following these steps:

Using Homebrew (macOS)

If you installed the CLI using Homebrew you can do:

brew uninstall chatgpt-cli

And to remove the tap:

brew untap kardolus/chatgpt-cli

macOS / Linux

If you installed the binary directly, follow these steps:

  1. Remove the binary:

    sudo rm /usr/local/bin/chatgpt
  2. Optionally, if you wish to remove the history tracking directory, you can also delete the ~/.chatgpt-cli directory:

    rm -rf ~/.chatgpt-cli

Windows

  1. Navigate to the location of the chatgpt binary in your system, which should be in your PATH.

  2. Delete the chatgpt binary.

  3. Optionally, if you wish to remove the history tracking, navigate to the ~/.chatgpt-cli directory (where ~ refers to your user's home directory) and delete it.

Please note that the history tracking directory ~/.chatgpt-cli only contains conversation history and no personal data. If you have any concerns about this, please feel free to delete this directory during uninstallation.


Thank you for using ChatGPT CLI!

chatgpt-cli's People

Contributors

benbenbang, catskull, dependabot[bot], johnd0e, kardolus, manveerbhullar, morganbat, nopeless


chatgpt-cli's Issues

OPENAI KEY

I just installed this because MS Copilot told me to, so I could have ChatGPT, and it says: missing environment variable: OPENAI_API_KEY. How do I solve this?

403 not supported

➜  $ chatgpt test
http status 403: Country, region, or territory not supported

Regarding autocompletion

Hi there,

Since we are using cobra, any plan to use the completion to help with the flags?
I can do a PR if it's ok for you :)

Thanks

-m option was useful

I've tried the new version and I believe it's important to reinstate the -m option so that the model can be switched during runtime.

To balance the cost of the APIs, it is often useful to operate with GPT-3 and then pass the output to GPT-4, for example:

chatgpt -m gpt-3 "Do something" | chatgpt -m gpt-4 "Do something else" | ...

Using global variables or file configurations forces one to break the pipeline and save partial results in temporary files, which is not aligned with the rest of the utilities. This doesn't only apply to this scenario, but also when a user wants to change the model on the fly, or create a command alias.

Before this version, I had some very useful aliases:

alias gpt4="chatgpt -m gpt-4"
alias gpt3="chatgpt -m gpt-3-turbo"

which can no longer be created now.

In summary, I believe that:

  • every option should have a default value
  • every option present in the yaml file should have a corresponding argument
  • the hierarchy should be: default, config, argument
  • local variables should be used only for API_KEY

I hope these considerations may be of help!
Thank you!

http error: 429

Every time I try a query, I get a 429.
Nothing more is said.

Error: failed to make request

~ $ chatgpt -l
Error: failed to make request: Get "https://api.openai.com/v1/models": dial tcp: lookup api.openai.com on [::1]:53: read udp [::1]:46086->[::1]:53: read: connection refused
Usage:
  chatgpt [flags]

Flags:
      --clear-history        Clear all prior conversation context for the current thread
  -c, --config               Display the configuration
  -h, --help                 help for chatgpt
  -i, --interactive          Use interactive mode
  -l, --list-models          List available models
  -q, --query                Use query mode instead of stream mode
      --set-max-tokens int   Set a new default max token size by specifying the max tokens
      --set-model string     Set a new default GPT model by specifying the model name
  -v, --version              Display the version information

failed to make request: Get "https://api.openai.com/v1/models": dial tcp: lookup api.openai.com on [::1]:53: read udp [::1]:46086->[::1]:53: read: connection refused
~ $ chatgpt -c
name: openai
api_key: <redacted>
model: gpt-3.5-turbo
max_tokens: 4096
thread: default
omit_history: false
url: https://api.openai.com
completions_path: /v1/chat/completions
models_path: /v1/models

HTTP error 429

I just downloaded the program with
curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/download/v1.3.2/chatgpt-linux-amd64
set up my API key, and run it with a first query. I get the error message:
http error: 429

System: Linux Mint 21.2 Cinnamon
I am using the free tier of chatgpt (3.5), if that is relevant.

Can't change model

Hello. I can't change from chatgpt-3 to 4. I have checked using openAI python tool that I have access to gpt-4.

local:.chatgpt-cli yasin$ chatgpt -l
Available models:

  • gpt-3.5-turbo-0613
  • gpt-4-0314
  • gpt-4-0613
  • gpt-4 (current)
  • gpt-3.5-turbo-instruct-0914
  • gpt-3.5-turbo-instruct
  • gpt-3.5-turbo-0301
  • gpt-3.5-turbo-16k
  • gpt-3.5-turbo
  • gpt-3.5-turbo-16k-0613
    local:.chatgpt which model are you
    As an AI developed by OpenAI, I don't have a specific model number. I'm based on the GPT-3 (Generative Pretrained Transformer 3) architecture. My primary function is to assist and facilitate efficient communication.

Not sure how I should go about debugging this.

Option to remove line-by-line animation?

When using this client, it becomes really nauseating for me when the text is printing out line by line. I would much prefer if there is an option to just have it print it all out at once to save me the downtime, and for it to be a bit easier on the eyes. Do you suppose we could make this a feature?

Plans to add `--set-thread` and `--list-threads` flags ?

Hi @kardolus ,

Thanks for your work putting this CLI together. It's so nice to use ChatGPT in such a simple way. (And great to use in situations where browsers are disabled, like when using restricted airplane "message only wifi" packages !)

Having used it for a couple of days now, I'm finding the most UX friction to be in changing the threads. The simplest way I have been doing this from the command line is with sed on the config file.

Do you have any plans to add a --set-thread flag, and beyond that, a --list-threads flag?

I think these would make a really positive impact on the UX.

All the best,
Angus

Escape chars

Is there a preferred way to provide escape characters especially in cases where we want to provide multi-line code snippets with the prompt?

Cheers and thanks!

Any interest in support Claude?

Seems like there are some new models like claude and llama3.1 that are pretty exciting, is there any interest in supporting them?

32-bit support

This tool will be especially cool on ancient devices where there is nothing but the CLI. But now it does not run on the Eee PC 900, as I understand it because it is a 32-bit platform. I could probably compile it myself from the sources, but... I need more instructions - I'm not an expert in this. )

Error message should not go to stdout

Currently, if I redirect output of chatgpt-cli to a file as

chatgpt "1+1" >> output

If an error occurs, e.g. internet connection goes wrong, the error message will go to the file.

Please add readline capability for interactive mode

The readline mode can be archived by rlwrap, but it's much better to have it built-in.

$ rlwrap chatgpt -i
[2024-04-01 00:00:00] Q1: hello world┃ # press Ctrl-A to move the cursor to the beginning
[2024-04-01 00:00:00] Q1: ┃hello world 

Thanks!

Auto-create new thread per conversation like in ChatGPT

It would be nice to have a mode similar to ChatGPT that, by default, starts the interactive mode in a new thread rather than in the default one.
This way, I can quickly create random conversations with a few questions.
Currently, I don't see how to quickly start a conversation in a new thread (and I don't yet know if it's a throwaway thread or one I might want to return to).

How to use with Azure?

Great work on the CLI!
How do I configure it to use our azure openai deployment? Thanks!

Start a new conversation

Could be useful exclude history from context and/or have a way to start a new conversation. Something like: chatgpt --new-chat.

I just learned the hard way how painful it can be to keep a large context in a loop.

Thx

listModule interface design suggestions

Description:

Currently, the listModule interface in Litelm uses the “gpt” prefix for all models. This makes it difficult to identify and use models other than GPT models.

Proposed Solutions:

  1. Remove the “gpt” prefix: This would make the interface more inclusive and allow users to easily identify all available models.
  2. Use parameters: Instead of hardcoding the “gpt” prefix, using a parameter to specify the model type would provide more flexibility and clarity. This would allow users to filter models based on specific criteria, such as model family or provider.

Expected Outcome:

Implementing one of these solutions would improve the user experience by:
• Providing a clear and consistent interface for listing models.
• Allowing users to easily identify and utilize models other than GPT models.
• Enhancing the overall flexibility and usability of the listModule interface.

Update Readme on Chatgpt vs OpenAI API models?

This is a great project!

But, in my quest for a way to use the same model that chat.openai.com uses from the CLI as well as in the browser, I guess I need to keep looking.

I mean, of course, it's really OpenAI's decision not to offer the ChatGPT model via an API yet.

I think it would benefit this project, if it went to a little bit of effort to explain that in the Readme.

The fact that you can access many different OpenAI models via their APIs, but the ChatGPT model is not yet available.

It would also be worth mentioning the pricing difference.


Count tokens outside of `--interactive` mode

I want to write a script that tracks my total token usage through the chatgpt-cli client, and the total cost of using the tool with my own API key. Is there any way to do this outside of --interactive mode?

http error: 429

  • Installed the app (windows)
  • Set the API key accordingly
  • Ran the command
  • Got 429 error (too many requests) despite only making a single query.
  • Multiple attempts yield same result.

Feature: multi-instance mode

Love this tool ❤️

chatgpt -n
Error: Another chatgpt instance is running, chatgpt works not well with multiple instances, please close the other one first.
If you are sure there is no other chatgpt instance running, please delete the lock file: /tmp/chatgpt.lock
You can also try `chatgpt -d` to run in detach mode, this check will be skipped, but conversation will not be saved.

How much effort would it be to support multiple instances?
Would be amazing :)

Ready to throw a sunday at it (at some point this year 😅)

[BUG] `max_tokens` seems to have no effect on the output

running on latest main commit (local build)

setting max_tokens to something low like 20 will still output long text

name: openai
api_key: <redacted>
model: gpt-4-turbo-preview
max_tokens: 20
role: You are a developer
temperature: 1
top_p: 1
frequency_penalty: 0
presence_penalty: 0
thread: default
omit_history: false
url: https://api.openai.com
completions_path: /v1/chat/completions
models_path: /v1/models
auth_header: Authorization
auth_token_prefix: 'Bearer '

New Line in interactive mode

Love this tool! Is there a way, though, to write prompts across multiple lines in interactive mode? I tried Shift-Enter and Ctrl-Enter, but all of them submit the prompt...

Read `config.yaml` from XDG standard paths

As more CLI apps are adopting the path conventions laid out by the XDG Base Directory Specification, it would be nice if config.yaml and other generated files like thread history are placed in these standard paths—or at least look for them in these paths before using ~/.chatgpt-cli/:

  • config.yaml — use $XDG_CONFIG_HOME/chatgpt-cli/config.yaml (if XDG_CONFIG_HOME is NOT explicitly defined, use ~/.config/chatgpt-cli/config.yaml)
  • history/default.json etc. — use $XDG_DATA_HOME/chatgpt-cli/history/default.json (if XDG_DATA_HOME is NOT explicitly defined, use ~/.local/share/chatgpt-cli/history/default.json)
