
chatgpt-shell's Introduction

👉 Support this work via GitHub Sponsors

https://melpa.org/packages/chatgpt-shell-badge.svg

chatgpt-shell

ChatGPT and DALL-E Emacs shells + Org Babel.

Includes shell-maker, a way to create shells for any service (local or cloud).

Support this effort

If you’re finding chatgpt-shell useful, consider ✨sponsoring✨.

chatgpt-shell is in development. Please report issues or send pull requests for improvements.

Like this package? Tell me about it 💙

Finding it useful? Like the package? I’d love to hear from you. Get in touch (Mastodon / Twitter / Reddit / Email).

Shell usage

Insert to current buffer

Install

  • Load (require 'chatgpt-shell)
  • Load (require 'dall-e-shell)

MELPA

If using use-package, you can install with :ensure t.

(use-package chatgpt-shell
  :ensure t
  :custom
  ((chatgpt-shell-openai-key
    (lambda ()
      (auth-source-pass-get 'secret "openai-key")))))

Straight

chatgpt-shell depends on shell-maker. This dependency is resolved without issues on MELPA but seems to run into issues with straight. I’m not familiar with straight but users have reported the following to work.

(use-package shell-maker
  :straight (:host github :repo "xenodium/chatgpt-shell" :files ("shell-maker.el")))

(use-package chatgpt-shell
  :requires shell-maker
  :straight (:host github :repo "xenodium/chatgpt-shell" :files ("chatgpt-shell.el")))

If you have a better straight solution, please send a pull request or open an issue with a suggestion.

Read on for setting your OpenAI key in other ways.

Set OpenAI key

You’ll first need to get a key from OpenAI.

ChatGPT key

As function

;; if you are using the "pass" password manager
(setq chatgpt-shell-openai-key
      (lambda ()
        ;; (auth-source-pass-get 'secret "openai-key") ; alternative using pass support in auth-sources
        (nth 0 (process-lines "pass" "show" "openai-key"))))

;; or if using auth-sources, e.g., so the file ~/.authinfo has this line:
;;  machine api.openai.com password OPENAI_KEY
(setq chatgpt-shell-openai-key
      (auth-source-pick-first-password :host "api.openai.com"))

;; or same as previous but lazy loaded (prevents unexpected passphrase prompt)
(setq chatgpt-shell-openai-key
      (lambda ()
        (auth-source-pick-first-password :host "api.openai.com")))

Manually

M-x set-variable chatgpt-shell-openai-key

As variable

(setq chatgpt-shell-openai-key "my key")

As an ENV variable

(setq chatgpt-shell-openai-key (getenv "OPENAI_API_KEY"))

DALL-E key

Same as ChatGPT, but use the dall-e-shell-openai-key variable.
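For example, mirroring the lazy-loaded auth-sources approach shown above for ChatGPT (assumes ~/.authinfo has a matching line for api.openai.com):

```elisp
;; Lazy-loaded, like the chatgpt-shell-openai-key example above.
;; Assumes ~/.authinfo contains:
;;   machine api.openai.com password OPENAI_KEY
(setq dall-e-shell-openai-key
      (lambda ()
        (auth-source-pick-first-password :host "api.openai.com")))
```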

ChatGPT through proxy service

If you use ChatGPT through a proxy service such as “https://api.chatgpt.domain.com”, set the options like the following:

(use-package chatgpt-shell
  :ensure t
  :custom
  ((chatgpt-shell-api-url-base "https://api.chatgpt.domain.com")
   (chatgpt-shell-openai-key
    (lambda ()
      ;; Here the openai-key should be the proxy service key.
      (auth-source-pass-get 'secret "openai-key")))))

If your proxy service’s API path differs from OpenAI ChatGPT’s default path (”/v1/chat/completions”), you can customize the option chatgpt-shell-api-url-path.

Using ChatGPT through HTTP(S) proxy

Behind the scenes, chatgpt-shell uses curl to send requests to the OpenAI server. If you reach ChatGPT through an HTTP proxy (for example, a corporate proxy shielding your network from the internet), you need to tell curl to use the proxy via the curl option -x http://your_proxy. One way to do this is to add the proxy URL to the customizable variable chatgpt-shell-additional-curl-options. If you set this variable via the Emacs Customize interface, insert two separate items: -x and http://your_proxy. See the curl manpage for more details and further options.
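In an init file this amounts to, for example (the proxy URL is a placeholder):

```elisp
;; Pass "-x http://your_proxy" to every curl invocation.
;; Replace http://your_proxy with your actual proxy address.
(setq chatgpt-shell-additional-curl-options
      '("-x" "http://your_proxy"))
```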

Using ChatGPT through Azure OpenAI Service

Endpoint: https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/chat/completions?api-version={api-version}

Configure the following variables:

(setq chatgpt-shell-api-url-base "https://{your-resource-name}.openai.azure.com")
(setq chatgpt-shell-api-url-path "/openai/deployments/{deployment-id}/chat/completions?api-version={api-version}")
(setq chatgpt-shell-auth-header (lambda () (format "api-key: %s" (chatgpt-shell-openai-key))))

Launch

Launch with M-x chatgpt-shell or dall-e-shell.

Clear buffer

Type clear as a prompt.

ChatGPT> clear

Alternatively, use either M-x chatgpt-shell-clear-buffer or M-x comint-clear-buffer.

Saving and restoring (experimental)

Save with M-x shell-maker-save-session-transcript and restore with M-x chatgpt-shell-restore-session-from-transcript.

Streaming

chatgpt-shell can either wait until the entire response is received before displaying, or it can progressively display as chunks arrive (streaming).

Streaming is enabled by default. Use (setq chatgpt-shell-streaming nil) to disable it.

chatgpt-shell customizations

chatgpt-shell-display-function: Function to display the shell. Set to `display-buffer’ or a custom function.
chatgpt-shell-model-versions: The list of ChatGPT OpenAI models to swap from.
chatgpt-shell-system-prompt: The active `chatgpt-shell-system-prompts’ index.
chatgpt-shell-default-prompts: List of default prompts to choose from.
chatgpt-shell-read-string-function: Function to read strings from the user.
chatgpt-shell-model-temperature: What sampling temperature to use, between 0 and 2, or nil.
chatgpt-shell-transmitted-context-length: Controls the amount of context provided to ChatGPT.
chatgpt-shell-system-prompts: List of system prompts to choose from.
chatgpt-shell-streaming: Whether or not to stream ChatGPT responses (show chunks as they arrive).
chatgpt-shell-prompt-header-refactor-code: Prompt header of `refactor-code’.
chatgpt-shell-auth-header: Function to generate the request’s `Authorization’ header string.
chatgpt-shell-prompt-header-whats-wrong-with-last-command: Prompt header of `whats-wrong-with-last-command’.
chatgpt-shell-prompt-header-write-git-commit: Prompt header of `git-commit’.
chatgpt-shell-logging: Logging; disabled by default (slows things down).
chatgpt-shell-prompt-query-response-style: Determines the prompt style when invoking from other buffers.
chatgpt-shell-root-path: Root path location to store internal shell files.
chatgpt-shell-prompt-header-proofread-region: Prompt header of `proofread-region’.
chatgpt-shell-model-version: The active ChatGPT OpenAI model index.
chatgpt-shell-source-block-actions: Block actions for known languages.
chatgpt-shell-prompt-header-eshell-summarize-last-command-output: Prompt header of `eshell-summarize-last-command-output’.
chatgpt-shell-welcome-function: Function returning the welcome message, or nil for no message.
chatgpt-shell-api-url-path: OpenAI API’s URL path.
chatgpt-shell-additional-curl-options: Additional options for the `curl’ command.
chatgpt-shell-openai-key: OpenAI key as a string, or a function that loads and returns it.
chatgpt-shell-after-command-functions: Abnormal hook (i.e. with parameters) invoked after each command.
chatgpt-shell-prompt-header-describe-code: Prompt header of `describe-code’.
chatgpt-shell-api-url-base: OpenAI API’s base URL.
chatgpt-shell-babel-headers: Additional headers to make babel blocks work.
chatgpt-shell-highlight-blocks: Whether or not to highlight source blocks.
chatgpt-shell-language-mapping: Maps external language names to Emacs names.
chatgpt-shell-prompt-header-generate-unit-test: Prompt header of `generate-unit-test’.
chatgpt-shell-request-timeout: How long (in seconds) to wait before a request times out.

There are more. Browse them via M-x set-variable.

chatgpt-shell-display-function (with custom function)

If you’d prefer your own custom display function:

(setq chatgpt-shell-display-function #'my/chatgpt-shell-frame)

(defun my/chatgpt-shell-frame (bname)
  (let ((cur-f (selected-frame))
        (f (my/find-or-make-frame "chatgpt")))
    (select-frame-by-name "chatgpt")
    (pop-to-buffer-same-window bname)
    (set-frame-position f (/ (display-pixel-width) 2) 0)
    (set-frame-height f (frame-height cur-f))
    (set-frame-width f  (frame-width cur-f) 1)))

(defun my/find-or-make-frame (fname)
  (condition-case
      nil
      (select-frame-by-name fname)
    (error (make-frame `((name . ,fname))))))

Thanks to tuhdo for the custom display function.

chatgpt-shell commands

chatgpt-shell: Start a ChatGPT shell (interactive command).
chatgpt-shell-rename-block-at-point: Rename block at point (perhaps to a different language).
chatgpt-shell-mark-at-point-dwim (C-M-h): Mark source block if at point; mark all output otherwise.
chatgpt-shell-previous-input (C-<up> or M-p): Cycle backwards through input history, saving input.
chatgpt-shell-execute-babel-block-action-at-point: Execute block as org babel.
chatgpt-shell-eshell-whats-wrong-with-last-command: Ask ChatGPT what’s wrong with the last eshell command.
chatgpt-shell-previous-item (C-c C-p): Go to previous item.
chatgpt-shell-set-as-primary-shell: Set as primary shell when there are multiple sessions.
chatgpt-shell-refresh-rendering: Refresh markdown rendering by re-applying it to the entire buffer.
chatgpt-shell-explain-code: Describe code from region using ChatGPT.
chatgpt-shell-rename-buffer: Rename current shell buffer.
chatgpt-shell-write-git-commit: Write a commit message from region using ChatGPT.
chatgpt-shell-prompt: Make a ChatGPT request from the minibuffer.
chatgpt-shell-remove-block-overlays: Remove block overlays. Handy for renaming blocks.
chatgpt-shell-system-prompts-menu: ChatGPT system prompts menu.
chatgpt-shell-proofread-region: Proofread English from region using ChatGPT.
chatgpt-shell-search-history (M-r): Search previous input history.
chatgpt-shell-send-and-review-region: Send region to ChatGPT, reviewing before submitting.
chatgpt-shell-next-input (C-<down> or M-n): Cycle forwards through input history.
chatgpt-shell-eshell-summarize-last-command-output: Ask ChatGPT to summarize the last command output.
chatgpt-shell-prompt-appending-kill-ring: Make a ChatGPT request from the minibuffer, appending the kill ring.
chatgpt-shell-describe-code: Describe code from region using ChatGPT.
chatgpt-shell-mode: Major mode for ChatGPT shell.
chatgpt-shell-swap-model-version (C-c C-v): Swap model version from `chatgpt-shell-model-versions’.
chatgpt-shell-previous-source-block: Move point to previous source block.
chatgpt-shell-refactor-code: Refactor code from region using ChatGPT.
chatgpt-shell-newline (S-<return>): Insert a newline and move to the left margin of the new line.
chatgpt-shell-swap-system-prompt (C-c C-s): Swap system prompt from `chatgpt-shell-system-prompts’.
chatgpt-shell-save-session-transcript (C-x C-s): Save shell transcript to file.
chatgpt-shell-clear-buffer (C-c M-o): Clear the comint buffer.
chatgpt-shell-load-awesome-prompts: Load `chatgpt-shell-system-prompts’ from awesome-chatgpt-prompts.
chatgpt-shell-submit (RET): Submit current input.
chatgpt-shell-next-item (C-c C-n): Go to next item.
chatgpt-shell-describe-image: Request OpenAI to describe an image.
chatgpt-shell-execute-block-action-at-point: Execute block at point.
chatgpt-shell-view-at-point: View prompt and output at point in a separate buffer.
chatgpt-shell-send-region: Send region to ChatGPT.
chatgpt-shell-restore-session-from-transcript: Restore session from transcript.
chatgpt-shell-generate-unit-test: Generate a unit test for the code from region using ChatGPT.
chatgpt-shell-prompt-compose (C-c C-e): Compose and send a prompt (C-c C-c) from a dedicated buffer.
chatgpt-shell-next-source-block: Move point to next source block.
chatgpt-shell-ctrl-c-ctrl-c (C-c C-c): If point is in a source block, execute it; otherwise interrupt.
chatgpt-shell-interrupt: Interrupt `chatgpt-shell’ from any buffer.

Browse all available via M-x.

Feature requests

  • Please go through this README to see if the feature is already supported.
  • Need custom behaviour? Check out existing issues/feature requests. You may find solutions in discussions.

Reporting bugs

Setup isn’t working?

Please share the entire snippet you’ve used to set chatgpt-shell up (but redact your key). Share any errors you encountered. Read on for sharing additional details.

Found runtime/elisp errors?

Please enable M-x toggle-debug-on-error, reproduce the error, and share the stack trace.

Found unexpected behaviour?

Please enable logging (setq chatgpt-shell-logging t) and share the content of the *chatgpt-log* buffer in the bug report.

Babel issues?

Please also share the entire org snippet.

dall-e-shell customizations

dall-e-shell-welcome-function: Function returning the welcome message, or nil for no message.
dall-e-shell-openai-key: OpenAI key as a string, or a function that loads and returns it.
dall-e-shell-image-size: The default size of the requested image, as a string.
dall-e-shell-read-string-function: Function to read strings from the user.
dall-e-shell-request-timeout: How long to wait before a request times out.
dall-e-shell-model-version: The DALL-E OpenAI model to use. For DALL-E 3, use “dall-e-3”.
dall-e-shell-display-function: Function to display the shell. Set to `display-buffer’ or a custom function.
dall-e-shell-model-versions: The list of DALL-E OpenAI models to swap from.
dall-e-shell-additional-curl-options: Additional options for the `curl’ command.
dall-e-shell-image-output-directory: Output directory for generated images.
dall-e-shell-image-quality: Image quality: `standard’ or `hd’ (a DALL-E 3-only feature).
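A sketch combining a few of the options above; the values shown are illustrative, so check each variable with M-x customize-variable for the accepted forms:

```elisp
(setq dall-e-shell-model-version "dall-e-3")   ; use DALL-E 3
(setq dall-e-shell-image-size "1024x1024")     ; requested image size, as a string
(setq dall-e-shell-image-output-directory "~/Pictures/dall-e") ; illustrative path
```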

dall-e-shell commands

dall-e-shell-previous-input (C-<up> or M-p): Cycle backwards through input history, saving input.
dall-e-shell: Start a DALL-E shell.
dall-e-shell-insert-image-from-region-description: Generate and insert an image using the current region as description.
dall-e-shell-interrupt: Interrupt `dall-e-shell’ from any buffer.
dall-e-shell-newline (S-<return>): Insert a newline and move to the left margin of the new line.
dall-e-shell-submit (RET): Submit current input.
dall-e-shell-save-session-transcript (C-x C-s): Save shell transcript to file.
dall-e-shell-swap-model-version (C-c C-v): Swap model version from `dall-e-shell-model-versions’.
dall-e-shell-mode: Major mode for DALL-E shell.
dall-e-shell-next-input (C-<down> or M-n): Cycle forwards through input history.
dall-e-shell-search-history (M-r): Search previous input history.
dall-e-shell-rename-buffer: Rename current shell buffer.

ChatGPT org babel

Load (require 'ob-chatgpt-shell) and invoke (ob-chatgpt-shell-setup).

#+begin_src chatgpt-shell
  Hello
#+end_src

#+RESULTS:
: Hi there! How can I assist you today?

:version

Use :version to specify “gpt-4”, “gpt-3.5-turbo”, or something else.

#+begin_src chatgpt-shell :version "gpt-4"
 Hello
#+end_src

#+RESULTS:
Hello! How can I help you today?

:system

Use :system to set the system prompt.

#+begin_src chatgpt-shell :system "always respond like a pirate"
  hello
#+end_src

#+RESULTS:
Ahoy there, me hearty! How be ye today?

:temperature

Use :temperature to set the temperature parameter.

#+begin_src chatgpt-shell :temperature 0.3
  hello
#+end_src

:context

Use :context t to include all prior context in current buffer.

#+begin_src chatgpt-shell
  tell me a random day of the week
#+end_src

#+RESULTS:
Wednesday

#+begin_src chatgpt-shell :system "always respond like a pirate"
  hello
#+end_src

#+RESULTS:
Ahoy there, me hearty! How be ye today?

#+begin_src chatgpt-shell :context t
  what was the day you told me and what greeting?
#+end_src

#+RESULTS:
The day I told you was Wednesday, and the greeting I used was "Ahoy there, me hearty! How be ye today?"

If you’d like to cherry-pick which blocks are part of a given context, add :context CONTEXT-NAME to each block, where CONTEXT-NAME is any string. In this form, only source blocks with the same CONTEXT-NAME are included, as opposed to every previous block when using :context t.

The example below shows how two different contexts can be interleaved.

#+begin_src chatgpt-shell :context shakespeare :system "always speak like shakespeare"
How do you do?
#+end_src

#+RESULTS:
How dost thou fare?

#+begin_src chatgpt-shell :context robot :system "always speak like a sci fi movie robot"
How do you do?
#+end_src

#+RESULTS:
Greetings, human. I am functioning at optimal capacity. How may I assist you in your endeavors today?

#+begin_src chatgpt-shell :context shakespeare
What did you call me?
#+end_src

#+RESULTS:
Mine apologies if mine words hath caused confusion. I merely addressed thee as 'sir' or 'madam', a term of respect in the language of the Bard. Pray, how may I assist thee further?

DALL-E org babel

Load (require 'ob-dall-e-shell) and invoke (ob-dall-e-shell-setup).

#+begin_src dall-e-shell
  Pretty clouds
#+end_src

#+RESULTS:
[[file:/var/folders/m7/ky091cp56d5g68nyhl4y7frc0000gn/T/1680644778.png]]

:version

Use :version to set the model, for example: “dall-e-3”.

:results

For DALL-E 3, use :results both to also output the revised prompt.

shell-maker

There are currently two shell implementations (ChatGPT and DALL-E). Other services (local or cloud) can be brought to Emacs as shells. shell-maker can help with that.

shell-maker is a convenience wrapper around comint mode.

Both chatgpt-shell and dall-e-shell use shell-maker, but a basic implementation of a new shell looks as follows:

(require 'shell-maker)

(defvar greeter-shell--config
  (make-shell-maker-config
   :name "Greeter"
   :execute-command
   (lambda (command _history callback error-callback)
     (funcall callback
              (format "Hello \"%s\"" command)
              nil))))

(defun greeter-shell ()
  "Start a Greeter shell."
  (interactive)
  (shell-maker-start greeter-shell--config))

Support my work

👉 Find my work useful? Support this work via GitHub Sponsors or buy my iOS apps.

My other utilities, packages, apps, writing…

Contributors

Made with contrib.rocks.


chatgpt-shell's Issues

Unable to find shell-maker (:straight)

Commit ae1f3b2 makes me unable to start chatgpt shell because of the following error:

 ■  Warning (initialization): An error occurred while loading ‘/home/juanmi/.emacs.d/init.el’:

error: Could not find package shell-maker. Updating recipe repositories: (org-elpa melpa gnu-elpa-mirror nongnu-elpa el-get emacsmirror-mirror) with ‘straight-pull-recipe-repositories’ may fix this

To ensure normal operation, you should investigate and remove the
cause of the error in your initialization file.  Start Emacs with
the ‘--debug-init’ option to view a complete error backtrace.

Simply removing the version works fine for me, but I'm not sure why I'm getting that. I'm using an Emacs built from master a few weeks ago, and this is my straight recipe:

(use-package chatgpt-shell
  :straight (:host github :repo "xenodium/chatgpt-shell")
  :config
  (setq chatgpt-shell-openai-key "...")
  (setq chatgpt-shell-transmitted-context-length 0))

chatgpt-shell--send-output

Could you implement a chatgpt-shell--send-output function as well?

Hello!

Hi there! How can I assist you today?

ChatGPT> Hello! >/tmp/cht.txt

I'm sorry, I'm not sure what you mean by "/tmp/cht.txt". Could you please provide more context or clarify your question?


Perhaps you can get some inspiration from here:
https://kadekillary.work/posts/1000x-eng/

Suggestion for api-key

Another suggestion, if it's not too much trouble: implement a function to read the API key from an encrypted file. The traditional one for Emacs is ".authinfo.gpg". That way, the key won't sit in plain text in init.el files.

chatgpt-shell-model-temperature bug

Fixing #36 didn't really work; the variable is not customizable with customize-variable anymore. Trying to set it with set-variable errors with: Symbol’s function definition is void: nil

[FR] Streaming response

For longer responses you end up waiting with nothing to see for quite a while, then a big dump (possibly more than a screenful) arrives. It would be much nicer UX to read the response as it comes in.

It looks like there are a couple of forks that implement this already so hopefully it is not too hard.

Pop chatgpt shell into its own frame when not exists (with code and demo)

Popping the shell onto another window disrupts the current workflow. It would be nice if there were an option to pop the shell into another frame instead: a frame named chatgpt is created if it does not exist; otherwise, that frame is reused. I roughly implemented the feature:

(defun find-or-make-frame (fname)
  (condition-case nil
      (select-frame-by-name fname)
    (error (make-frame `((name . ,fname))))))

(defun display-chatgpt-shell-frame (bname)
  (let ((cur-f (selected-frame))
        (f (find-or-make-frame "chatgpt")))
    (select-frame-by-name "chatgpt")
    (pop-to-buffer-same-window bname)
    (set-frame-position f (/ (display-pixel-width) 2) 0)
    (set-frame-height f (frame-height cur-f))
    (set-frame-width f (frame-width cur-f) 1)))

(setq shell-maker-display-function 'display-chatgpt-shell-frame)

It would be nice if this is refined and integrated into the package.

This is a really nice package.

Demo:

https://imgur.com/4GjJHnk

Timeout in long response.

I asked it for a fairly detailed answer and got a timeout.

ChatGPT> can you show me a graph on how tail recursion works?
<shell-maker-end-of-prompt>
I cannot draw a graph directly in this text-based environment, but I can describe the differences between regular recursion and tail recursion using call stack diagrams.

Consider calculating the factorial of 5.

Regular Recursion:

1. factorial(5)
   Call Stack:
   - factorial(5)
2. 5 * factorial(4)
   Call Stack:
   - factorial(5)
   - factorial(4)
3. 5 * (4 * factorial(3))
   Call Stack:
   - factorial(5)
   - factorial(4)
   - factorial(3)
4. 5 * (4 * (3 * factorial(2)))
   Call Stack:
   - factorial(5)
   - factorial(4)
   - factorial(3)
   - factorial(2)
5. 5 * (4 * (3 * (2 * factorial(1))))
   Call Stack:
   - factorial(5)
   - factorial(4)
   - factorial(3)
   - factorial(2)
   - factorial(1)
6. 5 * (4 * (3 * (2 * 1))))
   Call Stack:
   - factorial(5)
   - factorial(4)
   - factorial(3)
   - factorial(2)
7. 5 * (4 * (3 * 2))
   Call Stack:
   - factorial(5)
   - factorial(4)
   - factorial(3)
8. 5 * (4 * 6)
   Call Stack:
   - factorial(5)
   - factorial(4)
9. 5 * 24
   Call Stack:
   - factorial(5)
10. 120

Tail Recursion:

1. tailRecursiveFactorial(5, accumulator = 1)
   Call Stack:
   - tailRecursiveFactorial(5, 1)
2. tailRecursiveFactorial(4, accumulator = 5)
   Call Stack:
   - tailRecursiveFactorial(4, 5)
3. tailRecursiveFactorial(3, accumulator = 20)
   Call Stack:
   - tailRecursiveFactorial(3, 20)
4. tailRecursiveFactorial(2, accumulator = 60)
   Call Stack:
   - tailRecursiveFactorial(2,curl: (28) Operation timed out after 60004 milliseconds with 93371 bytes received

<shell-maker-failed-command>ChatGPT> 

Is there some way to increase the timeout?

Timeout issues, 30 seconds is not enough.

Thank you very much for developing ChatGPT shell for Emacs.

May I suggest externalizing the timeout in a variable? 30 seconds often results in timeouts when the requests are complex.

I have now increased the time to 60 seconds by modifying the value of the -m parameter passed to curl. But there is probably a saner way to do this directly from Emacs.
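Note: the timeout has since been exposed as the chatgpt-shell-request-timeout custom variable (listed in the customizations table above), so curl's -m flag no longer needs patching by hand. For example:

```elisp
;; Timeout in seconds; 120 is an illustrative value.
(setq chatgpt-shell-request-timeout 120)
```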

Compilation warnings when using chatgpt-shell

I added chatgpt-shell to Emacs through the quelpa package manager and got warnings suggesting the code could be improved.

These are the warnings from the Emacs *Warnings* buffer:
Warning (comp): chatgpt-shell.el:94:1: Warning: defcustom for `chatgpt-shell-language-mapping' fails to specify type
Warning (comp): chatgpt-shell.el:189:13: Warning: Unused lexical argument `input'
Warning (comp): chatgpt-shell.el:227:13: Warning: Unused lexical argument `input'
Warning (comp): chatgpt-shell.el:343:31: Warning: Unused lexical argument `config'
Warning (comp): chatgpt-shell.el:496:71: Warning: Unused lexical argument `url'
Warning (comp): chatgpt-shell.el:496:71: Warning: Unused lexical argument `request-data'
Warning (comp): chatgpt-shell.el:496:71: Warning: Unused lexical argument `response-extractor'
Warning (comp): chatgpt-shell.el:496:71: Warning: Unused lexical argument `error-callback'
Warning (comp): chatgpt-shell.el:574:44: Warning: reference to free variable `eshell-last-input-start'
Warning (comp): chatgpt-shell.el:574:68: Warning: reference to free variable `eshell-last-input-end'
Warning (comp): chatgpt-shell.el:584:44: Warning: reference to free variable `eshell-last-input-start'
Warning (comp): chatgpt-shell.el:584:68: Warning: reference to free variable `eshell-last-input-end'
Warning (comp): chatgpt-shell.el:597:1: Warning: Variable `curbuf' left uninitialized
Warning (comp): chatgpt-shell.el:597:60: Warning: Unused lexical variable `current-buffer'
Warning (comp): chatgpt-shell.el:634:13: Warning: `mark-whole-buffer' is for interactive use only.
Warning (comp): chatgpt-shell.el:679:35: Warning: reference to free variable `chatgpt-shell--prompt-internal'
Warning (comp): chatgpt-shell.el:706:42: Warning: reference to free variable `chatgpt-shell--prompt-internal'
Warning (comp): chatgpt-shell.el:930:33: Warning: reference to free variable `chatgpt-shell--prompt-internal'
Warning (comp): chatgpt-shell.el:952:30: Warning: Unused lexical variable `err'
Warning (comp): chatgpt-shell.el:978:6: Warning: Unused lexical variable `err'
Warning (comp): chatgpt-shell.el:994:8: Warning: docstring wider than 80 characters
Warning (comp): chatgpt-shell.el:1002:8: Warning: docstring wider than 80 characters
Warning (comp): chatgpt-shell.el:1102:13: Warning: assignment to free variable `view-exit-action'
Warning (comp): chatgpt-shell.el:629:8: Warning: the function `ielm-return' is not known to be defined.
Warning (comp): chatgpt-shell.el:576:74: Warning: the function `eshell-end-of-output' is not known to be defined.
Warning (comp): chatgpt-shell.el:576:45: Warning: the function `eshell-beginning-of-output' is not known to be defined.

Different way of getting authinfo secret?

In the readme, there is this example:

;; or if using auth-sources, e.g., so the file ~/.authinfo has this line:
;;  machine openai.com password OPENAI_KEY
(setq chatgpt-shell-openai-key
      (plist-get (car (auth-source-search :host "openai.com"))
                 :secret))

I was wondering about the difference between this and the simpler:

(setq chatgpt-shell-openai-key (auth-source-pick-first-password :host "openai.com"))

that I see used in other packages.

Evaluating the readme example, I get a byte code function object where the secret is obfuscated and with auth-source-pick-first-password, a simple clear string.
I suppose that means the readme example is more secure and those other packages are not?

It's more of a question than an actual issue but I needed to know if I was correct.

Typing assistant

I am so thoroughly impressed with this package that I can't seem to put it down. It got me thinking: have you ever considered creating a context-aware typing assistant? Not like dabbrev or other tools that rely on company or corfu, though there are situations where those options come in handy.

What if you could design something to rival even Gmail's "smart compose," or better yet, imagine an autocomplete feature that's 100 times better than any other currently available on Emacs. Picture this: you start typing, and an overlay appears with a suggestion to complete your sentence, intuitively based on the context of your writing. Wouldn't that be amazing? Is something like that even possible, given the non-concurrent nature of Emacs model?

Language-specific syntax highlighting in source blocks

Hi,

Thanks for an amazing mode; I am experimenting with it a lot and it's really useful. I have one request though: is it possible to somehow integrate Markdown mode, with possibly a way to narrow to the code regions? I ask because output with code is already annotated correctly by GPT:

ChatGPT> what is quantlib DayCounter?

Here's an example code that demonstrates the usage of DayCounter in QuantLib:

#include <iostream>
#include <ql/quantlib.hpp>

using namespace QuantLib;
<snip>

So if there was a way to make this region follow markdown mode like say org-mode does it would be great.

Thanks for your time.

Timeout.

Longer queries time out. That might not be a big problem by itself, yet it would be nice if there were a way to extend the duration (there is). Also, I think they should fail more gracefully. Currently it appends the error message without any prior delimiter, simply adding to the output, something like: "curl: (28) Operation timecurl: (28) Operation timed out after 60003 milliseconds with 47011 bytes received"

Suggestion - add string arguments to chatgpt-shell

I've written this in my scratch buffer and evaluated it, but chatgpt-shell does not take ARGS, so it did not work.
Could you add the possibility to pass arguments that way?

(chatgpt-shell "write an emacs script using the function find-youtubedl-links to download this youtube video hash SzA2YODtgK4")

This works only on the eshell prompt...

Also, if I type only the string:
"write an emacs script using the function find-youtubedl-links to download this youtube video hash SzA2YODtgK4"
and fire:
M-x chatgpt-shell
The expected behaviour would be to open the chatgpt-shell prompt and execute the string, as happens when calling a search string from the scratch buffer with the w3m browser...

some install/running issues

Trying to install it to give it a whirl, I ran into some issues!

First, installing it directly through use-package gave me an error that I can no longer find in my messages. Maybe it was this one? reference to free variable ‘chatgpt-shell’

Then I did M-x list-packages, M-x package-refresh-contents, found the package and used this menu to install it.

I've copied over your use-package config and evaluated it.

I've saved my ~/.authinfo after appending machine api.openai.com password "my-key"

I've added

  (setq chatgpt-shell-openai-key
	(auth-source-pick-first-password :host "api.openai.com"))

to the :config part (or to the :init part) of my use-package call and evaluated it.

In my Messages I see this now:

assignment to free variable ‘chatgpt-shell-openai-key’
the function ‘auth-source-pick-first-password’ is not known to be defined. 

M-x chatgpt-shell doesn't work, returning a weird error that I cannot even copy over here because of encoding issues. I've uploaded the plaintext file here:
chatgpt-shell.txt

Looks like it's something with the authentication going wrong! I hope this is enough to help debug it :)
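A likely culprit: `auth-source-pick-first-password` lives in auth-source, which may not be loaded when the :init/:config block runs (hence "not known to be defined"). Since chatgpt-shell accepts a function as the key, deferring both the load and the lookup sidesteps this:

```elisp
;; Defer loading auth-source and reading the key until first use.
(setq chatgpt-shell-openai-key
      (lambda ()
        (require 'auth-source)
        (auth-source-pick-first-password :host "api.openai.com")))
```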

chatgpt-shell-restore-session-from-transcript not working as expected

I'm using main:8c2197ffff51f595ffe7ca1844b6b6604ba855b8 for this test.

Steps to reproduce:

  • Create a testing session and save it with chatgpt-shell-save-session-transcript.
  • Close the session and open a new one.
  • Restore it with chatgpt-shell-restore-session-from-transcript.
  • See the Messages buffer for the error. The shell loads only the first line of the transcript.

The full error is here:

funcall: Wrong number of arguments: (((v . #s(shell-maker-config "ChatGPT" (closure (t) (_command) (if chatgpt-shell-openai-key nil "Variable `chatgpt-shell-openai-key' needs to be set to your key.

Try M-x set-variable chatgpt-shell-openai-key

or

(setq chatgpt-shell-openai-key \"my-key\")")) (closure (t) (_command history callback error-callback) (shell-maker-async-shell-command (chatgpt-shell--make-curl-request-command-list (chatgpt-shell--make-payload history)) chatgpt-shell-streaming #'chatgpt-shell--extract-chatgpt-response callback error-callback)) (closure (t) (_command output) (chatgpt-shell--put-source-block-overlays) (if chatgpt-shell-on-command-finished-function (progn (funcall chatgpt-shell-on-command-finished-function output)))) (closure (t) (output) (if (chatgpt-shell-openai-key) (string-replace (chatgpt-shell-openai-key) "SK-REDACTED-OPENAI-KEY" output) output)))) (failed) (response) (command (role . "user") (content . "create a python function that sums two numbers")) (validate-command closure (t) (_command) (if chatgpt-shell-openai-key nil "Variable `chatgpt-shell-openai-key' needs to be set to your key.

Try M-x set-variable chatgpt-shell-openai-key

or
...
;; What follows is basically a "elispefied" version of my transcript.txt, so I removed it for brevity

What surprises me is the Try M-x set-variable chatgpt-shell-openai-key, since it is set and working without any problems (I can create the first session).

My setup looks like this:

(require 'chatgpt-shell)
(setq chatgpt-shell-openai-key "my-super-secret-openai-key")
(setq chatgpt-shell-chatgpt-model-version "gpt-3.5-turbo")
(setq chatgpt-shell-chatgpt-streaming t)
(setq chatgpt-shell-chatgpt-system-prompt "You are a senior Python developer in charge of maintaining a very big application")

I'm attaching my transcript.txt file: sumscript.txt

If you need more info, just let me know!

Cannot select gpt-4 for chatgpt-shell-model-version

When I set the model version to gpt-4 like so:

chatgpt-shell-model-version is a variable defined in chatgpt-shell.el.

Value
"gpt-4"

Original Value
"gpt-3.5-turbo"

I get the following response:

Can you provide some config I can use for my doom emacs

The model: gpt-4 does not exist

Am I doing something wrong or is this just not yet supported?

How to recenter?

Whenever I press <RET>, the text in the *chatgpt* buffer shifts downwards; I'd like to recenter it. I couldn't find any dedicated hooks in the package source. I'm thinking about advising: which function do you think would be the best to override, or to attach an :after/:before function to?
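One hedged sketch, assuming the shell buffer is comint-based (the buffer-name check is a guess; the major-mode name varies across versions):

```elisp
;; Recenter the ChatGPT shell after each input is sent.
(defun my/chatgpt-shell-recenter (&rest _)
  (when (string-prefix-p "*chatgpt*" (buffer-name))
    (recenter -1)))

(advice-add 'comint-send-input :after #'my/chatgpt-shell-recenter)
```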

authinfo passphrase prompt stops emacs daemon from starting

Using this:

(setq chatgpt-shell-openai-key
      (plist-get (car (auth-source-search :host "openai.com"))
                 :secret))

Will have emacs prompt for your gpg passphrase but it won't work when launching Emacs daemon in the background.

I suggest setting chatgpt-shell-openai-key in inferior-chatgpt-mode-hook, like that it will only prompt you after invoking chatgpt-shell:

(add-hook 'inferior-chatgpt-mode-hook
          (lambda ()
            (setq chatgpt-shell-openai-key
                  (plist-get (car (auth-source-search :host "api.openai.com"))
                             :secret))))

Symbol’s value as variable is void: comint--prompt-rear-nonsticky

Running latest chatgpt-shell.el in Emacs 27.2 would yield this error:

Symbol’s value as variable is void: comint--prompt-rear-nonsticky

I assume this was added in newer Emacs versions and IMHO its presence should be checked. I solved it by evaluating the code below in *scratch*, which is copied from the latest comint.el, but still, it should be ignored if it is not defined.

(defconst comint--prompt-rear-nonsticky
  '( field inhibit-line-move-field-capture read-only font-lock-face
     insert-in-front-hooks))
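A version-safe variant of the same workaround only defines the constant when it is missing, so it stays a no-op on newer Emacsen:

```elisp
;; Backport for Emacs < 28, where comint.el lacks this constant.
(unless (boundp 'comint--prompt-rear-nonsticky)
  (defconst comint--prompt-rear-nonsticky
    '( field inhibit-line-move-field-capture read-only font-lock-face
       insert-in-front-hooks)))
```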

Installing in doom-Emacs

This is how I made this work in doom-Emacs:

;; (add-load-path! "~/.emacs.d/local-repo/chatgpt-shell")
(load-file "~/.emacs.d/local-repo/chatgpt-shell/chatgpt-shell.el")
(require 'chatgpt-shell)
(setq chatgpt-shell-openai-key "my key")

I had to load the file directly, because it didn't load through calling the directory itself

As a suggestion, could you implement a "you.com" AI and a Bard AI chatgpt-shell as well?

https://you.com/search?q=who+are+you&tbm=youchat&cfr=chat
https://bard.google.com/

And as a pull request, I would suggest making the function work in vterm as well...
The installation is not an issue, by the way...
Thanks for the function...

Symbol’s function definition is void: json-available-p

Since the last modifications, I get the error message written in the title. I see in the code that it's now recommended to build Emacs with JSON support, that is, "Emacs needs to be compiled with --with-json".

Is it possible to go back to a minimal config of chatgpt-shell.el if this is not the case, by switching off certain options requiring json? Thank you.
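Until the package guards the call itself, a small compatibility shim helps; `json-available-p` only appeared in Emacs 28, and this hedged fallback just checks whether native JSON parsing was compiled in:

```elisp
;; Shim for Emacs < 28, which lacks `json-available-p'.
(unless (fboundp 'json-available-p)
  (defun json-available-p ()
    ;; `json-parse-string' is only defined when Emacs was built --with-json.
    (fboundp 'json-parse-string)))
```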

Question: Usage from elisp?

I was wondering if there is support for calling a function that does a "one-off" request and returns the output. This would be very useful for automating stuff.

On the other hand, maybe I'm just too much of a noob and that can be accomplished by somehow sending the arguments to the chatgpt-shell buffer and retrieving the output?

Thanks a lot for this project!
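To my knowledge there is no documented one-off entry point yet, but a self-contained sketch against the HTTP API directly looks roughly like this (the function name and hard-coded model are assumptions, not part of chatgpt-shell):

```elisp
;; One-off request via plain url.el; independent of chatgpt-shell.
(require 'url)
(require 'json)
(require 'map)
(require 'seq)

(defun my/chatgpt-one-off (prompt key)
  "Send PROMPT to the OpenAI chat API using KEY; return the reply string."
  (let ((url-request-method "POST")
        (url-request-extra-headers
         `(("Content-Type" . "application/json")
           ("Authorization" . ,(concat "Bearer " key))))
        (url-request-data
         (encode-coding-string
          (json-encode
           `((model . "gpt-3.5-turbo")
             (messages . [((role . "user") (content . ,prompt))])))
          'utf-8)))
    (with-current-buffer
        (url-retrieve-synchronously "https://api.openai.com/v1/chat/completions")
      (goto-char url-http-end-of-headers)
      ;; `json-read' yields alists for objects and vectors for arrays.
      (let ((response (json-read)))
        (map-elt (map-elt (seq-first (map-elt response 'choices)) 'message)
                 'content)))))
```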

Evil / Doomy compatible version

(Please feel free to close if this is not the direction you are looking to take this package).

I am one of those heathens that use evil (Doom) and I'm very accustomed to using vterm in that context.

I'm not familiar with eshell or comint, but do you think a similar evil UX is achievable (as an optional extra, evil-collection style)?

I guess what I mean is vi modal style manipulation of the chatgpt-shell buffer. Sorry I've not described this very well.

Publish dall-e-shell to MELPA

Similar to melpa/melpa#8479

  • Update dall-e-shell.el to

Package-Requires: ((emacs "27.1") (shell-maker "0.17.1"))

(dall-e-shell
  :fetcher github
  :repo "xenodium/chatgpt-shell"
  :files ("dall-e-shell.el"))
  • Ensure all MELPA checks pass (see chatgpt-shell PR with all initial checkboxes).

  • Send PR to melpa repo.

  • Wait for approval or further feedback.

Shorten the conversation history.

If conversations are long, eventually they become too long to be presented back to ChatGPT because of the token limit.
In those cases the history that is given back could be shortened.
This is not perfect, of course, but better than the conversation ending in an error, where the only way to continue is to manually repeat parts of the conversation.
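A crude character-count trim sketches the idea (the helper name is hypothetical, and character length is only a rough proxy for tokens):

```elisp
(require 'map)

(defun my/trim-history (messages max-chars)
  "Return the newest MESSAGES whose total content length fits MAX-CHARS.
MESSAGES is a list of alists with a `content' key, oldest first.
Oldest entries are dropped first; a crude stand-in for token counting."
  (let ((total 0) kept)
    (dolist (m (reverse messages))
      (setq total (+ total (length (map-elt m 'content))))
      (when (<= total max-chars)
        (push m kept)))
    kept))
```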

Symbol’s function definition is void: seq-first

Everything looks good (curl request goes through and all). Then I get:

Symbol’s function definition is void: seq-first

from

(defun chatgpt-shell--extract-content (json)
  "Extract ChatGPT response from JSON."
  (when-let (parsed (chatgpt-shell--json-parse-string json))
    (string-trim
     (map-elt (map-elt (seq-first (map-elt parsed 'choices))
                       'message)
              'content))))

I don't see seq-first in the seq doc. Am I missing something?

Running: GNU Emacs 28.2 (build 1, aarch64-apple-darwin22.3.0, NS appkit-2299.40 Version 13.2.1 (Build 22D68)) of 2023-02-24.
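`seq-first` ships with the seq library bundled in modern Emacsen, so a stale seq on the load-path is the usual suspect; in the meantime a fallback definition is harmless:

```elisp
;; Fallback in case an outdated seq.el shadows the bundled one.
(unless (fboundp 'seq-first)
  (defun seq-first (sequence)
    "Return the first element of SEQUENCE."
    (seq-elt sequence 0)))
```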

Curl error messages are sent as context to chatGPT

If a request fails, the curl error message is inserted into the comint buffer. This error message is then sent as context to chatGPT in subsequent requests. See log below. Should there be a way to skip curl errors from being sent?

// Request

curl https://api.openai.com/v1/chat/completions --fail --no-progress-meter -m 30 -H Content-Type: application/json -H Authorization: Bearer <redacted> -d {"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"Can you show an example display-buffer-alist? (elisp)"},{"role":"system","content":"Sure, here's an example `display-buffer-alist`:\n\n```\n(setq display-buffer-alist\n      '((\"\\*Help\\*\" display-buffer-same-window)\n        (\"\\\\*magit.*popup\\\\*\" display-buffer-below-selected)\n        (\"\\\\*compilation\\\\*\" display-buffer-same-window)\n        (\"\\\\*Customize.*\\\\*\" display-buffer-same-window)\n        (\".*\" display-buffer-same-window)))\n```\n\nThis alist specifies how to display different kinds of buffers:\n\n- Buffers named \"\\*Help\\*\" are displayed in the same window they were opened in.\n- Buffers whose names match the regexp \"\\\\*magit.*popup\\\\*\" (e.g., \\*magit-commit-popup\\*) are displayed below the selected window.\n- Buffers named \"\\*compilation\\*\" are displayed in the same window they were opened in.\n- Buffers whose names match the regexp \"\\\\*Customize.*\\\\*\" (e.g., \\*Customize Group: magit\\*) are displayed in the same window they were opened in.\n- Any other buffer is displayed in the same window it was opened in.\n\nNote that the order of the entries in the alist matters—the first matching entry determines how the buffer is displayed."},{"role":"user","content":"Show an example of action-alist usage to set a frames height with display-buffer-alist"},{"role":"system","content":"Here's an example of using `action-alist` to set the height of a frame when displaying a buffer:\n\n```\n(defun set-frame-height (frame height)\n  \"Sets the height of FRAME to HEIGHT.\"\n  (set-frame-parameter frame 'height height))\n\n(setq display-buffer-alist\n      '((\"\\\\*Org Agenda\\\\*\" display-buffer-pop-up-frame\n         (set-frame-height . 
30))))\n```\n\nIn this example, we have an `action-alist` specified for buffers whose names match the regexp \"\\\\*Org Agenda\\\\*\". When such a buffer is displayed, we use `display-buffer-pop-up-frame` to display it in a separate frame. Additionally, we specify an action to be performed after displaying the buffer—namely, to call the `set-frame-height` function with arguments `frame` (the frame that the buffer was displayed in) and `30` (the height we want to set the frame to).\n\nNote that `action-alist` actions are performed after the buffer is displayed, so we can use them to modify properties of the window/frame that the buffer was displayed in. In this example, we use `set-frame-height` to modify the height of the frame that the buffer is displayed in."},{"role":"user","content":"Is there no built-in way to do that?"}]}

// Response (active)

curl: (28) Operation timed out after 30001 milliseconds with 0 bytes received


// Request

curl https://api.openai.com/v1/chat/completions --fail --no-progress-meter -m 30 -H Content-Type: application/json -H Authorization: Bearer <redacted> -d {"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"Can you show an example display-buffer-alist? (elisp)"},{"role":"system","content":"Sure, here's an example `display-buffer-alist`:\n\n```\n(setq display-buffer-alist\n      '((\"\\*Help\\*\" display-buffer-same-window)\n        (\"\\\\*magit.*popup\\\\*\" display-buffer-below-selected)\n        (\"\\\\*compilation\\\\*\" display-buffer-same-window)\n        (\"\\\\*Customize.*\\\\*\" display-buffer-same-window)\n        (\".*\" display-buffer-same-window)))\n```\n\nThis alist specifies how to display different kinds of buffers:\n\n- Buffers named \"\\*Help\\*\" are displayed in the same window they were opened in.\n- Buffers whose names match the regexp \"\\\\*magit.*popup\\\\*\" (e.g., \\*magit-commit-popup\\*) are displayed below the selected window.\n- Buffers named \"\\*compilation\\*\" are displayed in the same window they were opened in.\n- Buffers whose names match the regexp \"\\\\*Customize.*\\\\*\" (e.g., \\*Customize Group: magit\\*) are displayed in the same window they were opened in.\n- Any other buffer is displayed in the same window it was opened in.\n\nNote that the order of the entries in the alist matters—the first matching entry determines how the buffer is displayed."},{"role":"user","content":"Show an example of action-alist usage to set a frames height with display-buffer-alist"},{"role":"system","content":"Here's an example of using `action-alist` to set the height of a frame when displaying a buffer:\n\n```\n(defun set-frame-height (frame height)\n  \"Sets the height of FRAME to HEIGHT.\"\n  (set-frame-parameter frame 'height height))\n\n(setq display-buffer-alist\n      '((\"\\\\*Org Agenda\\\\*\" display-buffer-pop-up-frame\n         (set-frame-height . 
30))))\n```\n\nIn this example, we have an `action-alist` specified for buffers whose names match the regexp \"\\\\*Org Agenda\\\\*\". When such a buffer is displayed, we use `display-buffer-pop-up-frame` to display it in a separate frame. Additionally, we specify an action to be performed after displaying the buffer—namely, to call the `set-frame-height` function with arguments `frame` (the frame that the buffer was displayed in) and `30` (the height we want to set the frame to).\n\nNote that `action-alist` actions are performed after the buffer is displayed, so we can use them to modify properties of the window/frame that the buffer was displayed in. In this example, we use `set-frame-height` to modify the height of the frame that the buffer is displayed in."},{"role":"user","content":"Is there no built-in way to do that?"},{"role":"system","content":"curl: (28) Operation timed out after 30001 milliseconds with 0 bytes received"},{"role":"user","content":"Are you still there?"}]}

// Response (active)

{
  "id": "chatcmpl-6xEvSXvaI0hym5bgbsGwUYa3NweTh",
  "object": "chat.completion",
  "created": 1679576746,
  "model": "gpt-3.5-turbo-0301",
  "usage": {
    "prompt_tokens": 599,
    "completion_tokens": 108,
    "total_tokens": 707
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Yes, sorry about that. To answer your question, there is a built-in way to set the height of the frame when displaying a buffer using `display-buffer-alist`: \n\n```\n(setq display-buffer-alist\n      '((\"\\\\*Org Agenda\\\\*\" display-buffer-pop-up-frame\n         (height . 30))))\n```\n\nIn this example, we specify the `height` parameter in the `display-buffer-alist` entry for the \"\\*Org Agenda\\*\" buffer. This sets the height of the new frame to 30 lines."
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}

Feature request: Proofread & replace in place

I just started using the chatgpt-shell-proofreading-doc feature, and I can't even describe how great it is, especially for someone like me, a non-native English speaker. At some point, I might even create custom variants for other languages that I frequently use. However, that's not what I wanted to ask about.

Is it possible to add a feature that can proofread and replace the selected region? Currently, it does it in the *chatgpt* buffer, but almost every time, I have to grab it from there and then go back to the previous buffer. I think this feature would be useful and quite popular. WDYT?

I would even dare to suggest that if no region is selected, proofread the entire content of the buffer. The header may have to change because we can't ask chatgpt to do it for a single paragraph.

[FR] When `clear` gets sent, save the conversation

This enables a history just like ChatGPT

  • The command clear writes the buffer's content to a file
  • The filename uses the first prompt to name it
  • Moonshot: Use the API to summarize the first prompt to generate the name (e.g. Summarize in a title the conversation that starts with [PROMPT]), this is toggled with a variable (chatgpt--create-title-on-save)
  • Moonshot: Resume old conversations

Showing the error message along with the error code

I kept receiving 400 as an error code, but I could not understand why. Only when I went to the web interface did I realize that the text I was passing to ChatGPT was too long. Would it be possible to show the error message along with the error code in the extension?

DALL-E is temperature sensitive.

This is what I mean:

<gpt-end-of-prompt>{
  "error": {
    "code": null,
    "message": "Additional properties are not allowed ('temperature' was unexpected)",
    "param": null,
    "type": "invalid_request_error"
  }
}
curl: (22) The requested URL returned error: 400

A more direct Assistant feature.

There is a Linux application called shell-gpt, where the user can ask ChatGPT to take certain actions in the terminal. ChatGPT then provides a shell command that tries to do the prompted task; the user sees this shell command and is asked whether it should be executed.
This could be adapted by chatgpt-shell. A command like chatgpt-shell-assistant could prompt in the minibuffer and prepend something like "Answer the following request with valid elisp code that implements the requested action: ". The result would be offered for execution, similar to the way shell-gpt does it.

What do you think?
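A hypothetical sketch of the suggested command, reusing `chatgpt-shell-send-to-buffer` (without the confirm-before-execute step, which would need more plumbing):

```elisp
;; Sketch only: the command name and prompt wording are made up.
(defun my/chatgpt-shell-assistant (request)
  "Ask ChatGPT for elisp code implementing REQUEST."
  (interactive "sRequest: ")
  (chatgpt-shell-send-to-buffer
   (concat "Answer the following request with valid elisp code"
           " that implements the requested action: " request)))
```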

option --no-progress-meter: is unknown

Hi, thank you for this great contribution. I tried it and configured the key. However, one option used by curl (on Mac) seems unknown: "--no-progress-meter: is unknown". I thus installed a newer curl via brew and replaced the call to "curl" in chatgpt-shell.el with "/usr/local/bin/curl", which points to it, but now I get another error: "curl: (22) The requested URL returned error: 429". Any idea how to solve it? Thanks.

Paste response directly at the point where chatgpt-shell-prompt is called

It's really a very nice Emacs package! I like it very much!

Here are two ideas to improve the usability for day-to-day usage:

It would be very handy if the answer to a question were inserted directly at the point where the cursor is in the Emacs buffer. This way one could easily create documents (e.g. in org files) without needing to copy them from the chatgpt shell window.

It would also be very handy if the answer to a question about a marked region were placed directly on a new line below it.

Exceeded current quota API

When I make a query from the chatgpt-shell prompt, it returns the following message: "You exceeded your current quota, please check your plan and billing details. Curl: (22) The requested URL returned error: 429". However, in the usage section of the OpenAI dashboard, it shows that I have not yet made any use of the API. Curl v8.0.1. Emacs v28.1

Improvement suggestions/ideas

Hi, thanks for the package. I'd like to hear whether the following contributions would be appreciated.

Provide a way to track consumed/available tokens.

I'm not sure yet how the token system works. Can it cause additional charges for the user of the API? Maybe it would be nice to be able to track consumed and available tokens.

Provide customization variables to fine tune the model

Currently the package hard-codes the model to be gpt-3.5-turbo and sets the temperature parameter to 0.7. I think these settings should be customizable.

Replace curl with the built-in url.el

Write contents of http requests to a log buffer

[FR] Ask chatgpt about selected region, but prompting with history

You have some functions that act on a region with pre-canned prompts (e.g. chatgpt-shell-describe-code). This seems a common pattern where you want to make some request about the region. You can just use chatgpt-shell-send-region followed by manually typing the question, but it would be nice to facilitate this by reading the prompt from the user, with history. I've hacked together the following that you might want to incorporate after some tidying (maybe use a specific history var?):

(defvar my-chatgpt-shell-default-prompts
  '("Write a unit test for the following code:"
    "Refactor the following code so that ")
  "Default prompts offered when asking about a region.")

(defvar my-chatgpt-shell-prompt-history nil
  "Minibuffer history for `my-chatgpt-ask-about-region'.")

(defun my-chatgpt-ask-about-region ()
  "Ask ChatGPT about the region using a user-supplied prompt."
  (interactive)
  (let ((prompt (read-from-minibuffer
                 "Prompt: " nil nil nil
                 'my-chatgpt-shell-prompt-history
                 my-chatgpt-shell-default-prompts)))
    (chatgpt-shell-send-to-buffer
     (concat prompt "\n\n"
             (buffer-substring (region-beginning) (region-end))))
    (chatgpt-shell--send-input)))

You can use M-p/M-n to scroll through history and default prompts and edit them if necessary.

Use display-buffer to show buffers

Hello,

The commands for displaying the chatgpt and dall-e buffers currently use pop-to-buffer-same-window. It would be nice if it used display-buffer instead, so the window display behaviour can be configured by the user.

To preserve the existing behaviour, you could add a custom variable which is set to pop-to-buffer-same-window and default to using that if non-nil. But if nil, use display-buffer. I think magit does something like this with magit-display-buffer-function.
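A hypothetical sketch of that knob (all names are made up; the real implementation would wire this into the existing display calls):

```elisp
;; Sketch of the suggested customization, magit-display-buffer-function style.
(defcustom my/chatgpt-shell-display-function #'pop-to-buffer-same-window
  "Function used to display the shell buffer; nil means use `display-buffer'."
  :type '(choice function (const nil))
  :group 'chatgpt-shell)

(defun my/chatgpt-shell-display (buffer)
  "Display BUFFER, honoring `my/chatgpt-shell-display-function'."
  (if my/chatgpt-shell-display-function
      (funcall my/chatgpt-shell-display-function buffer)
    (display-buffer buffer)))
```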

Thanks very much for this package, it's so much more convenient than the web UI!

No such curl option `--fail-with-body`

Just installed chatgpt-shell on another machine and this new install fails with:

curl: option --fail-with-body: is unknown
curl: try 'curl --help' or 'curl --manual' for more information
<gpt-ignored-response>

BTW, this machine (Ubuntu 20.04 LTS) reports curl --version as: curl 7.68.0
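For context, --fail-with-body was added in curl 7.76.0, so 7.68.0 predates it. A hedged capability check the package could use before picking the flag (the helper name is hypothetical):

```elisp
(defun my/curl-supports-fail-with-body-p ()
  "Non-nil if the curl on PATH is >= 7.76.0, which added --fail-with-body."
  (let ((out (shell-command-to-string "curl --version")))
    (when (string-match "curl \\([0-9]+\\)\\.\\([0-9]+\\)" out)
      (let ((major (string-to-number (match-string 1 out)))
            (minor (string-to-number (match-string 2 out))))
        (or (> major 7)
            (and (= major 7) (>= minor 76)))))))
```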

Add comments to region.

chatgpt-shell-explain-code is already a nice thing. But how about a function like chatgpt-shell-add-comments-to-region? With a "system" message explaining that the code should be left totally unchanged and that only some helpful, functionality-explaining comments should be added to the presented code, we can expect a result that can directly replace the region. I think this would be a really nice way to study some code, or to find flaws in newly written code.
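A hypothetical sketch of the idea, reusing `chatgpt-shell-send-to-buffer` (the prompt wording and function name are made up):

```elisp
(defun my/chatgpt-shell-comment-region (start end)
  "Ask ChatGPT to add explanatory comments to the region, code unchanged."
  (interactive "r")
  (chatgpt-shell-send-to-buffer
   (concat "Add helpful comments explaining the functionality of the"
           " following code. Leave the code itself totally unchanged:\n\n"
           (buffer-substring start end))))
```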
