
gpt-cli's People

Contributors

alexanderyastrebov, audreyt, chrisjefferson, dltn, gabelli, hazzlim, id4rksid3, kharvd, maxbrito500, nieryyds, ykim-isabel


gpt-cli's Issues

--model

openai.error.InvalidRequestError'>: The model: gpt-4 does not exist

This error is given when passing --model gpt-4.
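This error usually means the API key's account had not yet been granted GPT-4 API access (at the time, access was gated behind a waitlist), so the model is simply absent from the account's model list. A minimal sketch of checking an already-fetched GET /v1/models response for a given model (offline, operating on parsed JSON; `has_model` is a hypothetical helper, not part of gpt-cli):

```python
def has_model(models_response, model_id):
    """Check a parsed /v1/models response for a model the account
    can actually use. If gpt-4 is absent here, requests for it fail
    with "The model: `gpt-4` does not exist"."""
    return any(m.get("id") == model_id
               for m in models_response.get("data", []))
```
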

FileNotFoundError: Shared library with base name 'llama' not found

I just followed the install steps from the README file and I'm getting this error:

Traceback (most recent call last):
  File "gpt.py", line 10, in <module>
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "PyInstaller/loader/pyimod02_importers.py", line 385, in exec_module
  File "gptcli/assistant.py", line 9, in <module>
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "PyInstaller/loader/pyimod02_importers.py", line 385, in exec_module
  File "gptcli/llama.py", line 4, in <module>
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "PyInstaller/loader/pyimod02_importers.py", line 385, in exec_module
  File "llama_cpp/__init__.py", line 1, in <module>
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "PyInstaller/loader/pyimod02_importers.py", line 385, in exec_module
  File "llama_cpp/llama_cpp.py", line 73, in <module>
  File "llama_cpp/llama_cpp.py", line 64, in _load_shared_library
FileNotFoundError: Shared library with base name 'llama' not found
[58098] Failed to execute script 'gpt' due to unhandled exception!

Environment Info:
macOS Ventura (13.4)
Python Version 3.11.2
Pip Version 23.1.2
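llama-cpp-python resolves a shared library with base name 'llama' at import time; the error means neither a bundled libllama nor a system-wide one was found (e.g. the PyInstaller bundle or wheel was built without it). A rough stdlib check, which approximates only the system-wide part of that search (the library's own loader also looks inside its package directory):

```python
from ctypes.util import find_library

def llama_library_present():
    """Return True if a shared library with base name 'llama'
    (libllama.so / libllama.dylib) is visible to the system loader.
    This mirrors, approximately, the lookup that raises the
    FileNotFoundError above when it comes up empty."""
    return find_library("llama") is not None
```
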

error with circular import

Traceback (most recent call last):
  File "/Users/timdaub/Projects/updated-gpt-cli/gptcli/gpt.py", line 11, in <module>
    import openai
  File "/Users/timdaub/Projects/updated-gpt-cli/gptcli/openai.py", line 3, in <module>
    from openai import OpenAI
ImportError: cannot import name 'OpenAI' from partially initialized module 'openai' (most likely due to a circular import) (/Users/timdaub/Projects/updated-gpt-cli/gptcli/openai.py)

why does it even invoke the openai module in the first place?
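It invokes it because the local file gptcli/openai.py shares the installed package's name: when gpt.py runs with the package directory on sys.path, `import openai` resolves to the local file, which then tries `from openai import OpenAI` against itself mid-initialization. The shadowing mechanism can be demonstrated with a neutral, hypothetical module name (`shadow_demo` stands in for `openai`):

```python
import importlib
import pathlib
import sys
import tempfile

def demo_shadowing():
    """Show that a local file with the same name as an installed
    package wins the import: the root cause of the circular-import
    error above."""
    tmp = pathlib.Path(tempfile.mkdtemp())
    # A stand-in name; in gpt-cli the colliding name is `openai`.
    (tmp / "shadow_demo.py").write_text("SOURCE = 'local file'\n")
    sys.path.insert(0, str(tmp))
    importlib.invalidate_caches()
    try:
        mod = importlib.import_module("shadow_demo")
        return mod.SOURCE
    finally:
        sys.path.pop(0)
```

Renaming the local module (or using explicit relative imports inside the package) removes the collision.
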

Seems to throw when using the new GPT-4 Turbo model

Hey, are you planning to continue maintaining this? I'm an active user.

For the new model: https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo

gpt-4-1106-preview

hey
Hello! How can I assist you today?

Uncaught exception
Traceback (most recent call last):
  File "/Users/timdaub/Projects/gpt-cli/gpt.py", line 239, in <module>
    main()
  File "/Users/timdaub/Projects/gpt-cli/gpt.py", line 187, in main
    run_interactive(args, assistant)
  File "/Users/timdaub/Projects/gpt-cli/gpt.py", line 235, in run_interactive
    session.loop(input_provider)
  File "/Users/timdaub/Projects/gpt-cli/gptcli/session.py", line 168, in loop
    while self.process_input(*input_provider.get_user_input()):
  File "/Users/timdaub/Projects/gpt-cli/gptcli/session.py", line 160, in process_input
    response_saved = self._respond(args)
  File "/Users/timdaub/Projects/gpt-cli/gptcli/session.py", line 116, in _respond
    self.listener.on_chat_response(self.messages, next_message, args)
  File "/Users/timdaub/Projects/gpt-cli/gptcli/composite.py", line 59, in on_chat_response
    listener.on_chat_response(messages, response, overrides)
  File "/Users/timdaub/Projects/gpt-cli/gptcli/cost.py", line 142, in on_chat_response
    price = price_for_completion(messages, response, model)
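The traceback ends inside price_for_completion, which suggests the cost tracker has no pricing entry for the model in use (plausibly the newly released gpt-4-1106-preview mentioned above). A defensive lookup could return None instead of failing the whole session; a sketch with illustrative placeholder rates (not gpt-cli's actual table):

```python
# Placeholder per-1K-token prices (prompt, completion) -- illustrative only.
PRICES = {
    "gpt-3.5-turbo": (0.0015, 0.002),
    "gpt-4": (0.03, 0.06),
}

def price_for_tokens(model, prompt_tokens, completion_tokens):
    """Return the request cost in dollars, or None for an unknown
    model, so an unrecognized model degrades to "no cost shown"
    instead of crashing the chat session."""
    entry = PRICES.get(model)
    if entry is None:
        return None
    prompt_rate, completion_rate = entry
    return (prompt_tokens * prompt_rate
            + completion_tokens * completion_rate) / 1000
```
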

Model GPT-4 not found

When I do
gpt.py --model=gpt-4

I got

Hi! I'm here to help. Type q or Ctrl-D to exit, c or Ctrl-C to clear the
conversation, r or Ctrl-R to re-generate the last response. To enter multi-line
mode, enter a backslash \ followed by a new line. Exit the multi-line mode by
pressing ESC and then Enter (Meta+Enter).
> are you chatgpt 4?


Request Error. The last prompt was not saved: <class 'openai.error.InvalidRequestError'>: The model: `gpt-4` does not exist
The model: `gpt-4` does not exist
Traceback (most recent call last):
  File "/Users/gogl92/PhpstormProjects/gpt-cli/gptcli/session.py", line 101, in _respond
    for response in completion_iter:
  File "/Users/gogl92/PhpstormProjects/gpt-cli/gptcli/openai.py", line 20, in complete
    openai.ChatCompletion.create(
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model: `gpt-4` does not exist

Scrolling causes output corruption

When I try to scroll back, the same few lines repeat over and over. This happens in multiple terminal emulators. Do you have any idea what could cause this?

Solution to macOS name collision with an existing tool

I installed gpt-cli via pip following the instructions. When I tried to run it in my terminal as gpt, I found out it executes a different tool that ships with macOS, the "GUID partition table maintenance utility", located in /usr/sbin.


To solve this problem I renamed gpt-cli's executable from "gpt" to "gpt-cli". Now I'm able to run it via the gpt-cli command in my terminal.

Solution:

Find the executable. In my case it was located in the /usr/local/bin folder and was named "gpt". To see if it works, you can run ./gpt inside the /usr/local/bin folder.

Rename it to something you want. I wanted to run gpt-cli via the gpt-cli command in my terminal, so I renamed it by executing mv gpt gpt-cli (replace "gpt-cli" with any name you prefer).


Bard: "Your default credentials were not found."

Great to see Bard capabilities in the app! I'm using a similar CLI, but they don't have the ability to set a consistent 'role' like this app does. I've been working to get it running this afternoon and have run into some stumbling blocks:

  1. Unlike OpenAI/Anthropic, the default YAML file is missing an optional line for Bard.
  2. The system doesn't appear to recognize it when I manually add google_api_key: <insert key here> to the YAML file, but...
  3. It does appear to recognize api_key: <insert key here> instead. However, I don't get very far once within the package:

python3 gpt-cli/gpt.py --model chat-bison-001

Hi! I'm here to help. Type q or Ctrl-D to exit, c or Ctrl-C to clear the        
conversation, r or Ctrl-R to re-generate the last response. To enter multi-line 
mode, enter a backslash \ followed by a new line. Exit the multi-line mode by   
pressing ESC and then Enter (Meta+Enter).         

hello

An uncaught exception occurred. Please report this issue on GitHub.
Traceback (most recent call last):
  File "/home/cameron/gpt-cli/gpt.py", line 236, in <module>
    main()
  File "/home/cameron/gpt-cli/gpt.py", line 184, in main
    run_interactive(args, assistant)
  File "/home/cameron/gpt-cli/gpt.py", line 232, in run_interactive
    session.loop(input_provider)
  File "/home/cameron/gpt-cli/gptcli/session.py", line 168, in loop
    while self.process_input(*input_provider.get_user_input()):
  File "/home/cameron/gpt-cli/gptcli/session.py", line 160, in process_input
    response_saved = self._respond(args)
  File "/home/cameron/gpt-cli/gptcli/session.py", line 101, in _respond
    for response in completion_iter:
  File "/home/cameron/gpt-cli/gptcli/google.py", line 42, in complete
    response = genai.chat(**kwargs)
  File "/home/cameron/.local/lib/python3.9/site-packages/google/generativeai/discuss.py", line 342, in chat
    return _generate_response(client=client, request=request)
  File "/home/cameron/.local/lib/python3.9/site-packages/google/generativeai/discuss.py", line 478, in _generate_response
    client = get_default_discuss_client()
  File "/home/cameron/.local/lib/python3.9/site-packages/google/generativeai/client.py", line 122, in get_default_discuss_client
    default_discuss_client = glm.DiscussServiceClient(**default_client_config)
  File "/home/cameron/.local/lib/python3.9/site-packages/google/ai/generativelanguage_v1beta2/services/discuss_service/client.py", line 430, in __init__
    self._transport = Transport(
  File "/home/cameron/.local/lib/python3.9/site-packages/google/ai/generativelanguage_v1beta2/services/discuss_service/transports/grpc.py", line 151, in __init__
    super().__init__(
  File "/home/cameron/.local/lib/python3.9/site-packages/google/ai/generativelanguage_v1beta2/services/discuss_service/transports/base.py", line 97, in __init__
    credentials, _ = google.auth.default(
  File "/home/cameron/.local/lib/python3.9/site-packages/google/auth/_default.py", line 648, in default
    raise exceptions.DefaultCredentialsError(_CLOUD_SDK_MISSING_CREDENTIALS)
google.auth.exceptions.DefaultCredentialsError: Your default credentials were not found. To set up Application Default Credentials, see https://cloud.google.com/docs/authentication/external/set-up-adc for more information.

I followed the instructions via the error message's link and downloaded the gcloud CLI package. Once this step was complete, I also manually set the API key within the gcloud CLI, but the issue persists.
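The root cause is that the library falls back to Application Default Credentials when no API key reaches it, which is why setting the key in the config file appears to have no effect. Passing the key explicitly to the client at startup avoids that fallback; a guarded sketch, assuming google-generativeai's documented `configure(api_key=...)` entry point:

```python
def configure_google(api_key):
    """Hand the key directly to the client so it never falls back to
    Application Default Credentials (the failure in the traceback
    above). Sketch only; gpt-cli would call this with the key read
    from its YAML config or the GOOGLE_API_KEY environment variable."""
    try:
        import google.generativeai as genai
    except ImportError:
        return "google-generativeai not installed"
    genai.configure(api_key=api_key)
    return "configured"
```
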

token length issue

I use GPT-4, and sometimes when my message is pretty long I run into the error below. If I then adjust my input so that it is within the 8192-token limit, the prompt goes through, but it then doesn't produce a very long response and the error resurfaces.


happened to me twice now.

Request Error. The last prompt was not saved: <class 'openai.error.InvalidRequestError'>: This
model's maximum context length is 8192 tokens. However, your messages resulted in 8205 tokens.
Please reduce the length of the messages.
This model's maximum context length is 8192 tokens. However, your messages resulted in 8205 tokens. Please reduce the length of the messages.
Traceback (most recent call last):
  File "/Users/[username]/Projects/gpt-cli/gptcli/session.py", line 101, in _respond
    for response in completion_iter:
  File "/Users/[username]/Projects/gpt-cli/gptcli/openai.py", line 20, in complete
    openai.ChatCompletion.create(
  File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 8205 tokens. Please reduce the length of the messages.
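The limit applies to the whole conversation, not just the newest message, which is why a short follow-up can still overflow after a long session. A common mitigation is to trim the oldest messages before each request. A rough sketch using a ~4 characters/token estimate (a real implementation would use a tokenizer such as tiktoken; `trim_history` is hypothetical, not gpt-cli code):

```python
def trim_history(messages, max_tokens=8192, reserve=1024):
    """Drop the oldest non-system messages until the estimated
    prompt size leaves `reserve` tokens for the model's reply."""
    def estimate(msgs):
        # Crude estimate: ~4 chars per token, plus per-message overhead.
        return sum(len(m["content"]) // 4 + 4 for m in msgs)

    msgs = list(messages)
    while len(msgs) > 1 and estimate(msgs) > max_tokens - reserve:
        # Keep the system prompt at index 0; drop the oldest message.
        del msgs[1]
    return msgs
```
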

error on macos 13.3

hi there-

i tried to run gpt-cli, but I got this error:

ERROR:root:Uncaught exception
Traceback (most recent call last):
  File "/Users/rescreen/gpt-cli/./gpt.py", line 191, in <module>
    main()
  File "/Users/rescreen/gpt-cli/./gpt.py", line 121, in main
    read_yaml_config(config_path) if os.path.isfile(config_path) else GptCliConfig()
  File "/Users/rescreen/gpt-cli/gptcli/config.py", line 23, in read_yaml_config
    return GptCliConfig(
TypeError: gptcli.config.GptCliConfig() argument after ** must be a mapping, not str
An uncaught exception occurred. Please report this issue on GitHub.
Traceback (most recent call last):
  File "/Users/rescreen/gpt-cli/./gpt.py", line 191, in <module>
    main()
  File "/Users/rescreen/gpt-cli/./gpt.py", line 121, in main
    read_yaml_config(config_path) if os.path.isfile(config_path) else GptCliConfig()
  File "/Users/rescreen/gpt-cli/gptcli/config.py", line 23, in read_yaml_config
    return GptCliConfig(
TypeError: gptcli.config.GptCliConfig() argument after ** must be a mapping, not str
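The TypeError fires because yaml.safe_load returns a plain string, not a dict, when the config file's top level is not `key: value` pairs (for example, an API key pasted on its own line), and `GptCliConfig(**parsed)` requires a mapping. A guard produces a readable message instead; a sketch (`validate_config` is hypothetical, not gpt-cli's actual code):

```python
def validate_config(parsed):
    """Reject a config that parsed to something other than a mapping,
    so the user sees what is wrong with the file rather than the
    opaque "argument after ** must be a mapping" TypeError above."""
    if not isinstance(parsed, dict):
        raise ValueError(
            "config file must be a YAML mapping of key: value pairs, "
            f"got {type(parsed).__name__}"
        )
    return parsed
```
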

How to use multi-line mode?

Hey there.

To enter multi-line mode, enter a backslash \ followed by a new line.

% python3 ./gpt.py
Hi! I'm here to help. Type q or Ctrl-D to exit, c or Ctrl-C to clear the
conversation, r or Ctrl-R to re-generate the last response. To enter multi-line
mode, enter a backslash \ followed by a new line. Exit the multi-line mode by
pressing ESC and then Enter (Meta+Enter).
> Hello world \
Hello! How can I assist you today?

What am I doing wrong? Thanks.

Problem with using pipe and reading from stdin

Hello. I have encountered a problem with the tool when I try to pipe in some text to it:

$ echo "hi" | gpt general
Warning: Input is not a terminal (fd=0).
Hi! I'm here to help. Type :q or Ctrl-D to exit, :c or Ctrl-C and Enter to clear
the conversation, :r or Ctrl-R to re-generate the last response. To enter
multi-line mode, enter a backslash \ followed by a new line. Exit the multi-line
mode by pressing ESC and then Enter (Meta+Enter). Try :? for help.
^[[36;1R> hi
Hello! How can I assist you today?

                                                    Tokens: 22 | Price: $0.000 | Total: $0.000
^[[40;1R>
Uncaught exception
Traceback (most recent call last):
  File "/home/kuba/.local/bin/gpt", line 8, in <module>
    sys.exit(main())
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/gpt.py", line 189, in main
    run_interactive(args, assistant)
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/gpt.py", line 237, in run_interactive
    session.loop(input_provider)
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/session.py", line 183, in loop
    while self.process_input(*input_provider.get_user_input()):
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/cli.py", line 148, in get_user_input
    while (next_user_input := self._request_input()) == "":
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/cli.py", line 197, in _request_input
    line = self.prompt()
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/cli.py", line 186, in prompt
    return self.prompt_session.prompt(
  File "/home/kuba/.local/lib/python3.10/site-packages/prompt_toolkit/shortcuts/prompt.py", line 1035, in prompt
    return self.app.run(
  File "/home/kuba/.local/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 961, in run
    return loop.run_until_complete(coro)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/home/kuba/.local/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 875, in run_async
    return await _run_async(f)
  File "/home/kuba/.local/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 740, in _run_async
    result = await f
EOFError
An uncaught exception occurred. Please report this issue on GitHub.
Traceback (most recent call last):
  File "/home/kuba/.local/bin/gpt", line 8, in <module>
    sys.exit(main())
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/gpt.py", line 189, in main
    run_interactive(args, assistant)
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/gpt.py", line 237, in run_interactive
    session.loop(input_provider)
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/session.py", line 183, in loop
    while self.process_input(*input_provider.get_user_input()):
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/cli.py", line 148, in get_user_input
    while (next_user_input := self._request_input()) == "":
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/cli.py", line 197, in _request_input
    line = self.prompt()
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/cli.py", line 186, in prompt
    return self.prompt_session.prompt(
  File "/home/kuba/.local/lib/python3.10/site-packages/prompt_toolkit/shortcuts/prompt.py", line 1035, in prompt
    return self.app.run(
  File "/home/kuba/.local/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 961, in run
    return loop.run_until_complete(coro)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/home/kuba/.local/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 875, in run_async
    return await _run_async(f)
  File "/home/kuba/.local/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 740, in _run_async
    result = await f
EOFError

I would expect it to work the same way as with the following syntax:

$ gpt general --prompt "$(echo hi)"
Hello! How can I assist you today?%

Could you advise if there is something I'm doing incorrectly or there is a bug in the tool?

Overwrite assistants

I redefined the general and dev assistants to use gpt-4 by default. The --help output now lists the assistants twice:

positional arguments:
  {dev,general,bash,general,dev}
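A likely cause is that user-defined assistants are appended to the built-in ones before being passed to argparse as choices. Deduplicating while preserving order would fix the display; a one-line sketch (not gpt-cli's actual code):

```python
def unique_choices(names):
    """Collapse duplicate assistant names while preserving order, so
    user overrides of built-in assistants (general, dev) don't show
    up twice in argparse's choices list."""
    return list(dict.fromkeys(names))
```
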

Quota exceeded?

Hi, I got some strange behavior.

error

(.venv) ➜  gpt-cli git:(main) ./gpt.py
Hi! I'm here to help. Type q or Ctrl-D to exit, c or Ctrl-C to clear the
conversation, r or Ctrl-R to re-generate the last response. To enter multi-line
mode, enter a backslash \ followed by a new line. Exit the multi-line mode by
pressing ESC and then Enter (Meta+Enter).
> something fun

API Error. Type `r` or Ctrl-R to try again: <class 'openai.error.RateLimitError'>: You exceeded your current quota, please check your plan and billing details.
You exceeded your current quota, please check your plan and billing details.
Traceback (most recent call last):
  File "/Users/frederic/bin/gpt-cli/gptcli/session.py", line 96, in _respond
    for response in completion_iter:
  File "/Users/frederic/bin/gpt-cli/gptcli/openai.py", line 20, in complete
    openai.ChatCompletion.create(
  File "/Users/frederic/bin/gpt-cli/.venv/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/frederic/bin/gpt-cli/.venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/Users/frederic/bin/gpt-cli/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/frederic/bin/gpt-cli/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/Users/frederic/bin/gpt-cli/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details.

usage screenshot


notes

I'm using the free plan, but I don't see any limitation about this mentioned anywhere.

How to install?

Hello,

Am I missing the installation section? What would be the recommended steps for installing?

Preferably a one-liner install script, which would certainly help. Thanks.

Allow responding while gpt is writing back

Hey, I've been a power user of this tool for a while now. One thing I recently noticed that slows down my workflow: I'd like to respond to GPT while it's still writing to me, but I only get the chance to write back if I stop it with Ctrl-C or wait until it is done.

Pip build failure

Problem occurs on a fresh install of fedora 39

This is the output

user:gpt-cli/ (main) $ pip install . | tee log.log
Defaulting to user installation because normal site-packages is not writeable
Processing /home/user/git/gpt-cli
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Collecting anthropic==0.7.7 (from gpt-command-line==0.1.4)
  Using cached anthropic-0.7.7-py3-none-any.whl.metadata (13 kB)
Requirement already satisfied: attrs==23.1.0 in /home/user/.local/lib/python3.12/site-packages (from gpt-command-line==0.1.4) (23.1.0)
Collecting black==23.1.0 (from gpt-command-line==0.1.4)
  Using cached black-23.1.0-py3-none-any.whl (174 kB)
Collecting google-generativeai==0.1.0 (from gpt-command-line==0.1.4)
  Using cached google_generativeai-0.1.0-py3-none-any.whl.metadata (3.0 kB)
Collecting openai==1.3.8 (from gpt-command-line==0.1.4)
  Using cached openai-1.3.8-py3-none-any.whl.metadata (17 kB)
Collecting prompt-toolkit==3.0.41 (from gpt-command-line==0.1.4)
  Using cached prompt_toolkit-3.0.41-py3-none-any.whl.metadata (6.5 kB)
Collecting pytest==7.3.1 (from gpt-command-line==0.1.4)
  Using cached pytest-7.3.1-py3-none-any.whl (320 kB)
Collecting PyYAML==6.0 (from gpt-command-line==0.1.4)
  Using cached PyYAML-6.0.tar.gz (124 kB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'error'
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [54 lines of output]
      running egg_info
      writing lib/PyYAML.egg-info/PKG-INFO
      writing dependency_links to lib/PyYAML.egg-info/dependency_links.txt
      writing top-level names to lib/PyYAML.egg-info/top_level.txt
      Traceback (most recent call last):
        File "/home/user/.local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/home/user/.local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/user/.local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
                 ^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 325, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=['wheel'])
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 295, in _get_build_requires
          self.run_setup()
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 311, in run_setup
          exec(code, locals())
        File "<string>", line 288, in <module>
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/__init__.py", line 103, in setup
          return distutils.core.setup(**attrs)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/_distutils/core.py", line 185, in setup
          return run_commands(dist)
                 ^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
          dist.run_commands()
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
          self.run_command(cmd)
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/dist.py", line 963, in run_command
          super().run_command(command)
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
          cmd_obj.run()
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/command/egg_info.py", line 321, in run
          self.find_sources()
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/command/egg_info.py", line 329, in find_sources
          mm.run()
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/command/egg_info.py", line 551, in run
          self.add_defaults()
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/command/egg_info.py", line 589, in add_defaults
          sdist.add_defaults(self)
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/command/sdist.py", line 112, in add_defaults
          super().add_defaults()
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/_distutils/command/sdist.py", line 251, in add_defaults
          self._add_defaults_ext()
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/_distutils/command/sdist.py", line 336, in _add_defaults_ext
          self.filelist.extend(build_ext.get_source_files())
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "<string>", line 204, in get_source_files
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/_distutils/cmd.py", line 107, in __getattr__
          raise AttributeError(attr)
      AttributeError: cython_sources
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

Loading historical commands on start

Wondering if we could easily load previous session chat prompts on start - i.e. session history. This would allow you to up arrow and choose a command you ran in a recent session. Not a huge deal if it's a lot of work. Thanks for this library!

Publish to pypi/install with pip

Would be great to be able to install this more easily!

Or you could add this to the README:

pip install git+https://github.com/kharvd/gpt-cli

which would work once you add a pyproject.toml to the repo.

Error on input `--a b`

I have a reproducible error when the bot is given the input string `--a b`.

$ gpt
Hi! I'm here to help. Type :q or Ctrl-D to exit, :c or Ctrl-C and Enter to clear
the conversation, :r or Ctrl-R to re-generate the last response. To enter
multi-line mode, enter a backslash \ followed by a new line. Exit the multi-line
mode by pressing ESC and then Enter (Meta+Enter). Try :? for help.
> --a b
Invalid argument: a. Allowed arguments: ['model', 'temperature', 'top_p']
Invalid argument: a. Allowed arguments: ['model', 'temperature', 'top_p']
NoneType: None
$ gpt --version
gpt-cli v0.1.3
Here is the output from running `$ gpt --log_file log.txt --log_level DEBUG`:
$ cat log.txt
2023-07-17 18:26:33,531 - gptcli - INFO - Starting a new chat session. Assistant config: {'messages': [], 'temperature': 0.0, 'model': 'gpt-4'}
2023-07-17 18:26:33,539 - gptcli-session - INFO - Chat started
2023-07-17 18:26:33,539 - asyncio - DEBUG - Using selector: EpollSelector
2023-07-17 18:26:35,314 - gptcli-session - ERROR - Invalid argument: a. Allowed arguments: ['model', 'temperature', 'top_p']
NoneType: None

For context, this error came up when I copy/pasted the following rustc error message into the CLI using multi-line mode:

 15 | ) -> impl Iterator<Item = AdaptedRecord> {
    |      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ `()` is not an iterator
    |
    = help: the trait `Iterator` is not implemented for `()`

 For more information about this error, try `rustc --explain E0277`

This resulted in Invalid argument: explain. Allowed arguments: ...
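The session apparently treats any leading `--name` token as a parameter override, so pasted text containing flags is misparsed. One possible guard, inferred from the error message (a sketch, not gpt-cli's actual parser): only treat the line as overrides when every flag is in the allowed set, and otherwise send it to the model verbatim.

```python
ALLOWED = {"model", "temperature", "top_p"}

def classify_input(line):
    """Return "override" only when the line starts with a flag and
    every `--name` it contains is a known override; anything else,
    including pasted compiler output, is ordinary chat input."""
    tokens = line.split()
    flags = [t[2:] for t in tokens if t.startswith("--")]
    if tokens and tokens[0].startswith("--") and all(f in ALLOWED for f in flags):
        return "override"
    return "chat"
```
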

Can conversations be restored?

E.g. right now I had a conversation of 4k tokens with gpt4 and I accidentally did a Ctrl-c and cleared the conversation, but that was an accident. Is there any way to undo this? Or load that conversation again?

add try-again functionality to handle Claude's rate limit

When I use the Claude API, it always returns:
anthropic.api.ApiException: ('post request failed with status code: 429', {'error': {'type': 'rate_limit_error', 'message': 'Number of concurrent connections to Claude exceeds your rate limit. Please try again, or contact [email protected] to discuss your options for a rate limit increase.'}})
and then the process is killed.
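A generic retry wrapper with exponential backoff would cover this. In gpt-cli it would wrap the Anthropic completion call and catch its rate-limit exception specifically; the sketch below (`with_retries` is hypothetical) retries any exception type passed in:

```python
import time

def with_retries(call, max_attempts=5, base_delay=1.0, retry_on=(Exception,)):
    """Retry `call` with exponential backoff (1s, 2s, 4s, ...),
    re-raising only after the final attempt fails."""
    for attempt in range(max_attempts):
        try:
            return call()
        except retry_on:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```
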

macos 12.6 hurdles

Cleared a fair few hurdles to get this going using zsh.

Can't seem to get past this error:

gpt-cli % python3 gpt.py
ERROR:root:Uncaught exception
Traceback (most recent call last):
  File "/Users/adtm1x/gpt-cli/gpt.py", line 191, in <module>
    main()
  File "/Users/adtm1x/gpt-cli/gpt.py", line 132, in main
    read_yaml_config(config_path) if os.path.isfile(config_path) else GptCliConfig()
  File "/Users/adtm1x/gpt-cli/gptcli/config.py", line 23, in read_yaml_config
    return GptCliConfig(
TypeError: gptcli.config.GptCliConfig() argument after ** must be a mapping, not str
An uncaught exception occurred. Please report this issue on GitHub.
Traceback (most recent call last):
  File "/Users/adtm1x/gpt-cli/gpt.py", line 191, in <module>
    main()
  File "/Users/adtm1x/gpt-cli/gpt.py", line 132, in main
    read_yaml_config(config_path) if os.path.isfile(config_path) else GptCliConfig()
  File "/Users/adtm1x/gpt-cli/gptcli/config.py", line 23, in read_yaml_config
    return GptCliConfig(
TypeError: gptcli.config.GptCliConfig() argument after ** must be a mapping, not str

Sometimes it errors if given special characters

I was not able to reproduce the error reliably. It happened twice, both times when my query included special characters. The message was something like "host aborted the connection"; it might be a connection error.

Also thanks for making this...it is good

Conditionally loaded modules

Having access to all of the best APIs in one CLI is awesome! 🚀 I've been thinking of how we could overcome the downsides:

  • ~2 seconds from launch to the first API call, due to all of the imports
  • If I just want a CLI to OpenAI, the setup of llama.cpp is a bit overkill
  • Each module will introduce idiosyncrasies like google-generativeai mandating Python 3.9+ (#29)

What if we conditionally load modules, based on the presence of OPENAI_API_KEY or config file?
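The gate could be as simple as checking for the relevant key before importing, so unused SDKs never pay their import cost (a sketch; the helper name is hypothetical):

```python
import importlib
import os

def load_provider(env_key, module_name):
    # Import a provider SDK only if its API key is configured;
    # return None otherwise, so the module is never loaded.
    if not os.environ.get(env_key):
        return None
    return importlib.import_module(module_name)

# Example: only import the OpenAI SDK when its key is present.
# openai = load_provider("OPENAI_API_KEY", "openai")
```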

Usage tracking not working.

Hi, I can't see any usage tracking in the terminal, as shown in the screenshots.

Running on Ubuntu 20 on Python 3.8.

Proposal: support more terminal keyboard commands

Loving the CLI tool and one thing I keep finding myself doing out of habit is using terminal keyboard commands like:

  • option + arrow (move forward or back a word)
  • option + delete (delete the previous word)
  • ctrl + e (go to end of line)
  • ctrl + a (go to beginning of line)
  • ctrl + u (delete entire line)

Unsure if listening for these commands is an easy addition, but if so it would be really nice, as it more closely mirrors the terminal experience.

Thanks again!

brew install gpt-cli

As a gpt-cli user
I want to be able to install via brew
So that I can have a single command for brew install which is fast and easy

Unable to install

I tried a simple install using pip and received the following error:

ModuleNotFoundError: No module named '_cffi_backend'
thread '<unnamed>' panicked at /Users/brew/Library/Caches/Homebrew/cargo_cache/registry/src/index.crates.io-6f17d22bba15001f/pyo3-0.18.3/src/err/mod.rs:790:5:
Python API call failed
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Traceback (most recent call last):
  File "/opt/homebrew/bin/gpt", line 5, in <module>
    from gptcli.gpt import main
  File "/opt/homebrew/lib/python3.11/site-packages/gptcli/gpt.py", line 16, in <module>
    import google.generativeai as genai
  File "/opt/homebrew/lib/python3.11/site-packages/google/generativeai/__init__.py", line 69, in <module>
    from google.generativeai import types
  File "/opt/homebrew/lib/python3.11/site-packages/google/generativeai/types/__init__.py", line 17, in <module>
    from google.generativeai.types.discuss_types import *
  File "/opt/homebrew/lib/python3.11/site-packages/google/generativeai/types/discuss_types.py", line 21, in <module>
    import google.ai.generativelanguage as glm
  File "/opt/homebrew/lib/python3.11/site-packages/google/ai/generativelanguage/__init__.py", line 21, in <module>
    from google.ai.generativelanguage_v1beta2.services.discuss_service.async_client import (
  File "/opt/homebrew/lib/python3.11/site-packages/google/ai/generativelanguage_v1beta2/__init__.py", line 21, in <module>
    from .services.discuss_service import DiscussServiceAsyncClient, DiscussServiceClient
  File "/opt/homebrew/lib/python3.11/site-packages/google/ai/generativelanguage_v1beta2/services/discuss_service/__init__.py", line 16, in <module>
    from .async_client import DiscussServiceAsyncClient
  File "/opt/homebrew/lib/python3.11/site-packages/google/ai/generativelanguage_v1beta2/services/discuss_service/async_client.py", line 32, in <module>
    from google.api_core import gapic_v1
  File "/opt/homebrew/lib/python3.11/site-packages/google/api_core/gapic_v1/__init__.py", line 18, in <module>
    from google.api_core.gapic_v1 import method
  File "/opt/homebrew/lib/python3.11/site-packages/google/api_core/gapic_v1/method.py", line 24, in <module>
    from google.api_core import grpc_helpers
  File "/opt/homebrew/lib/python3.11/site-packages/google/api_core/grpc_helpers.py", line 27, in <module>
    import google.auth.transport.grpc
  File "/opt/homebrew/lib/python3.11/site-packages/google/auth/transport/grpc.py", line 25, in <module>
    from google.oauth2 import service_account
  File "/opt/homebrew/lib/python3.11/site-packages/google/oauth2/service_account.py", line 77, in <module>
    from google.auth import _service_account_info
  File "/opt/homebrew/lib/python3.11/site-packages/google/auth/_service_account_info.py", line 20, in <module>
    from google.auth import crypt
  File "/opt/homebrew/lib/python3.11/site-packages/google/auth/crypt/__init__.py", line 41, in <module>
    from google.auth.crypt import rsa
  File "/opt/homebrew/lib/python3.11/site-packages/google/auth/crypt/rsa.py", line 20, in <module>
    from google.auth.crypt import _cryptography_rsa
  File "/opt/homebrew/lib/python3.11/site-packages/google/auth/crypt/_cryptography_rsa.py", line 22, in <module>
    import cryptography.exceptions
  File "/opt/homebrew/lib/python3.11/site-packages/cryptography/exceptions.py", line 9, in <module>
    from cryptography.hazmat.bindings._rust import exceptions as rust_exceptions
pyo3_runtime.PanicException: Python API call failed

I have no idea what any of this means. I did try installing the latest build but that made no difference. Thanks in advance :-)

CLI Conflict with /usr/sbin/gpt

I followed the install instructions, installed via pip, and tried to run the samples from the README. None of the commands worked. Finally I ran man gpt and saw that macOS comes prepackaged with gpt, the GUID partition table maintenance utility.

The default install should probably use a "mostly" globally unique command name, rather than one that ships with one of the most popular operating systems.

Alternatively, provide instructions for creating a command alias
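For example, in ~/.zshrc or ~/.bashrc (the path below is illustrative; `which -a gpt` lists every binary with that name on your PATH):

```shell
# Avoid the clash with macOS's /usr/sbin/gpt by using a distinct name.
# Replace the path with wherever pip installed the entry point.
alias gptcli="/opt/homebrew/bin/gpt"
```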

Running in Docker

Hey, I made a very simple Dockerfile on my local machine and it worked. Do you think it would be worth publishing a Docker image for the CLI?
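For reference, a minimal sketch (the PyPI package name is assumed to be gpt-command-line; verify it, and the entry-point name, before publishing):

```dockerfile
FROM python:3.11-slim
RUN pip install --no-cache-dir gpt-command-line
# Pass the key at runtime: docker run -it -e OPENAI_API_KEY=... <image>
ENTRYPOINT ["gpt"]
```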

Recent commit broke the gpt usage (for me)

Current behaviour (46247d3)

~/p/g/gpt-cli ❯❯❯ ./gpt.py dev                                                 main
Traceback (most recent call last):
  File "./gpt.py", line 10, in <module>
    from gptcli.assistant import (
  File "/Users/patricknueser/projects/gpt-cli/gpt-cli/gptcli/assistant.py", line 8, in <module>
    from gptcli.llama import LLaMACompletionProvider
  File "/Users/patricknueser/projects/gpt-cli/gpt-cli/gptcli/llama.py", line 12, in <module>
    LLAMA_MODELS: Optional[dict[str, Path]] = None
TypeError: 'type' object is not subscriptable

Expected behaviour (7752b05)
~/p/g/gpt-cli ❯❯❯ ./gpt.py dev ✘ 1 main
Hi! I'm here to help. Type q or Ctrl-D to exit, c or Ctrl-C to clear the
conversation, r or Ctrl-R to re-generate the last response. To enter multi-line
mode, enter a backslash \ followed by a new line. Exit the multi-line mode by
pressing ESC and then Enter (Meta+Enter).
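For what it's worth, the traceback points at `dict[str, Path]`: subscripting built-in collection types only works on Python 3.9+ (PEP 585), so older interpreters raise exactly this TypeError. A backwards-compatible sketch of the annotation in llama.py:

```python
from pathlib import Path
from typing import Dict, Optional

# typing.Dict is subscriptable on Python 3.7+, unlike the built-in
# dict[str, Path]; alternatively, `from __future__ import annotations`
# at the top of the module defers evaluation of all annotations.
LLAMA_MODELS: Optional[Dict[str, Path]] = None
```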

Add Base URL option

There are various open source models and hosted LLMOps tools that are compatible with the OpenAI API, like LocalAI and Helicone. It would be great to be able to use the OPENAI_API_BASE environment variable or the config file to use gpt-cli with these systems!
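A sketch of the precedence such an option could follow (resolve_api_base is a hypothetical helper, not existing gpt-cli code):

```python
import os

DEFAULT_API_BASE = "https://api.openai.com/v1"

def resolve_api_base(config_value=None):
    # Precedence: explicit config file value > OPENAI_API_BASE env > default.
    return config_value or os.environ.get("OPENAI_API_BASE") or DEFAULT_API_BASE
```

The resolved value would then be assigned as the OpenAI client's base URL before any request, which is all LocalAI- or Helicone-style proxies need.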

"Server Overloaded" message at Anthropic breaks sessions often with 'uncaught exception'

When the servers are overloaded, Anthropic sends back an error (sometimes a 529, sometimes nothing), leading to this:

anthropic.APIStatusError: {'type': 'error', 'error': {'details': None, 'type': 'overloaded_error', 'message': 'Overloaded'}}
An uncaught exception occurred. Please report this issue on GitHub.
Traceback (most recent call last):
  File "/home/gnewt/.pyenv/versions/3.12-dev/bin/gpt", line 8, in <module>
    sys.exit(main())

This crashes the session and wipes the conversation history. Can the script be changed to tolerate a "server overloaded" message without breaking? These errors are pretty common.
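One way to keep the session alive is to catch transient provider errors inside the REPL loop and report them instead of letting them propagate. A sketch, where ProviderError stands in for anthropic's status errors:

```python
class ProviderError(Exception):
    """Stand-in for transient API errors such as 529 Overloaded."""

def process_turn(respond, prompt):
    # Return the response on success; on a transient provider error,
    # report it and keep the session (and its history) intact.
    try:
        return respond(prompt)
    except ProviderError as e:
        print(f"API error, please try again: {e}")
        return None
```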

Doesn't actually execute commands?

It doesn't seem to be able to execute commands. If that's right, can this be added?

> gpt bash
Hi! I'm here to help. Type :q or Ctrl-D to exit, :c or Ctrl-C and Enter to clear
the conversation, :r or Ctrl-R to re-generate the last response. To enter
multi-line mode, enter a backslash \ followed by a new line. Exit the multi-line
mode by pressing ESC and then Enter (Meta+Enter). Try :? for help.
> list files
ls

                                                                                                                                                                                                  Tokens: 129 | Price: $0.000 | Total: $0.000
> ls
ls

                                                                                                                                                                                                  Tokens: 141 | Price: $0.000 | Total: $0.000

I want a REPL-like thing that I can talk to, ask it questions in plain language and get responses in plain language, but it can also suggest commands and explain what they do, and then I can execute them, and then it can see the command being executed and its output as part of the conversation context, to see if it worked correctly, suggest a new command that builds on the output of the previous, etc.
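The execution half of that loop can be sketched with subprocess: run a suggested command only after explicit confirmation, then feed the combined output back into the conversation context (the names here are illustrative, not existing gpt-cli code):

```python
import subprocess

def run_command(cmd, timeout=30):
    # Execute a shell command and return its combined output, which can
    # be appended to the conversation as context for the next turn.
    result = subprocess.run(cmd, shell=True, capture_output=True,
                            text=True, timeout=timeout)
    return result.stdout + result.stderr

def confirm_and_run(cmd, confirm=lambda c: input(f"Run `{c}`? [y/N] ") == "y"):
    # Gate execution behind an explicit yes; return None if declined.
    return run_command(cmd) if confirm(cmd) else None
```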

Response duplicating many times

Hi, I love this tool, I hope to help make it better!

One major issue (sorry if it has already been raised): the response from OpenAI is duplicated many times over before it finishes, usually restarting in the middle of the response.

To remove a package from an Xcode project, you typically need to do it
through the Xcode interface or by editing the project's configuration
files, depending on how the package was added (e.g., CocoaPods, Carthage,
Swift Package Manager, or manually).

Here are the general steps to remove a package for each package manager:

1. CocoaPods

If the package was added via CocoaPods, you would remove the package entry
from the Podfile and then run:

 pod install

This command will update the project and remove the unused dependencies.

2. Carthage

For Carthage, remove the line from the Cartfile that includes the package
you want to remove, then run:

 carthage update

After that, you'll also need to manually remove the references from your
Xcode project's linked libraries and frameworks.

[At this point the stream restarts from the top, and the two sections
above repeat verbatim more than a dozen times before the response
finally continues.]

3. Swift Package Manager

If the package was added using Swift Package Manager, you can remove it by
opening the Xcode project and:

• Go to File > Swift Packages > Update Package Versions.
• Select the package you want to remove and click the minus sign (-) to
remove it.

If you want to use the command line, you can manually edit the
Package.swift file and remove the dependency from the dependencies array.
After saving the changes, you can run:

 swift package update

4. Manually Added

For manually added packages (e.g., added as a Git submodule or just copied
into the project), you'll need to:

• Delete the package files from your project directory.
• Remove any references to the package from your Xcode project (targets,
build phases, etc.).

Keep in mind that these are general guidelines. The specific commands and
steps might vary depending on the version of your package management tools
and the configuration of your project. Always make sure to have a backup
before modifying project files or dependencies.


My config file is set to OpenAI GPT-4-preview

error when attempting to chat with bison model

Entered key for bison model and got this error when attempting to chat

Uncaught exception
Traceback (most recent call last):
  File "C:\Users\matth\Documents\interesting\gpt-cli\gpt.py", line 236, in <module>
    main()
  File "C:\Users\matth\Documents\interesting\gpt-cli\gpt.py", line 184, in main
    run_interactive(args, assistant)
  File "C:\Users\matth\Documents\interesting\gpt-cli\gpt.py", line 232, in run_interactive
    session.loop(input_provider)
  File "C:\Users\matth\Documents\interesting\gpt-cli\gptcli\session.py", line 168, in loop
    while self.process_input(*input_provider.get_user_input()):
  File "C:\Users\matth\Documents\interesting\gpt-cli\gptcli\session.py", line 160, in process_input
    response_saved = self._respond(args)
  File "C:\Users\matth\Documents\interesting\gpt-cli\gptcli\session.py", line 102, in _respond
    next_response += response
TypeError: can only concatenate str (not "NoneType") to str
An uncaught exception occurred. Please report this issue on GitHub.
Traceback (most recent call last):
  File "C:\Users\matth\Documents\interesting\gpt-cli\gpt.py", line 236, in <module>
    main()
  File "C:\Users\matth\Documents\interesting\gpt-cli\gpt.py", line 184, in main
    run_interactive(args, assistant)
  File "C:\Users\matth\Documents\interesting\gpt-cli\gpt.py", line 232, in run_interactive
    session.loop(input_provider)
  File "C:\Users\matth\Documents\interesting\gpt-cli\gptcli\session.py", line 168, in loop
    while self.process_input(*input_provider.get_user_input()):
  File "C:\Users\matth\Documents\interesting\gpt-cli\gptcli\session.py", line 160, in process_input
    response_saved = self._respond(args)
  File "C:\Users\matth\Documents\interesting\gpt-cli\gptcli\session.py", line 102, in _respond
    next_response += response
TypeError: can only concatenate str (not "NoneType") to str
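The bison backend apparently yields a None chunk in the stream; a guard before the concatenation at session.py line 102 would avoid the crash. A sketch (accumulate is a hypothetical helper illustrating the fix):

```python
def accumulate(next_response, chunk):
    # Some providers (e.g. PaLM/bison) stream None or empty chunks;
    # treat them as "" instead of crashing on str + None.
    return next_response + (chunk or "")
```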

Model Parameter Not Functioning as Expected

When using the command line interface for ChatGPT, the --model parameter seems to not be working as intended. When attempting to set the model to GPT-4, the application returns responses as if it were GPT-3.

Steps to Reproduce

  1. Set the model to GPT-4 using the command:
    % gpt --model gpt-4 -p "Are you chatgpt-4?"
  2. The response was:
    As an AI model developed by OpenAI, I'm currently based on GPT-3. As of now, GPT-4 has not been released.
  3. Defining a new assistant in the config file and omitting the --model parameter yielded the same response.
    No, I'm an AI developed by OpenAI and currently known as ChatGPT-3. As of now, there is no ChatGPT-4.

Expected Behavior
The application should return responses as per the specified model in the command line or as set in the default assistant in the config file.
Actual Behavior
The application returns responses as if it were GPT-3, irrespective of the model set in the command line or the config file.
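Worth noting: chat models routinely misreport their own identity, so the model's answer is weak evidence either way. The authoritative signal is the model field that the chat completions API returns alongside the choices, sketched here against a mocked payload:

```python
# Mocked chat-completions response (shape follows the OpenAI API;
# the snapshot name below is an example, not a live result).
response = {
    "model": "gpt-4-0613",
    "choices": [{"message": {"role": "assistant", "content": "..."}}],
}

def served_model(resp):
    # The server echoes the exact model snapshot that handled the request.
    return resp["model"]
```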
