

openai-api's Issues

Content Filter logprobs is null

After applying the content filter like so:

const content_to_classify = gptResponse.data["choices"][0]["text"];

const gptFilterResponse = await openai.complete({
    engine: "content-filter-alpha-c4",
    prompt: "<|endoftext|>" + content_to_classify + "\n--\nLabel:",
    maxTokens: 1,
    temperature: 0,
    topP: 1,
    presencePenalty: 0,
    frequencyPenalty: 0,
});

I get the following output:
{
    id: 'cmpl-3NMfxu1YGjNTNeWSeNXnktFboYLoM',
    object: 'text_completion',
    created: 1626697721,
    model: 'toxicity-double-18',
    choices: [ { text: '2', index: 0, logprobs: null, finish_reason: 'length' } ]
}

While the output label is fine, I'm not getting the logprobs needed to evaluate how confident the filter is in "2".
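For what it's worth, logprobs usually come back null unless explicitly requested. A sketch, assuming this wrapper forwards a `logprobs` option to the completions endpoint (the -0.355 threshold comes from OpenAI's content-filter documentation, not from this library):

```javascript
// Threshold from OpenAI's content-filter docs: a "2" label is only
// trusted when its logprob is above this value.
const TOXIC_THRESHOLD = -0.355;

// Pure helper: decide whether a "2" (unsafe) label is confident enough.
function isConfidentlyUnsafe(label, logprob) {
  return label === "2" && logprob >= TOXIC_THRESHOLD;
}

// Hypothetical usage (network call, not run here); assumes the wrapper
// forwards `logprobs` through to the API:
// const res = await openai.complete({
//   engine: "content-filter-alpha-c4",
//   prompt: "<|endoftext|>" + content_to_classify + "\n--\nLabel:",
//   maxTokens: 1,
//   temperature: 0,
//   topP: 0,
//   logprobs: 10, // ask for logprobs explicitly
// });
// const choice = res.data.choices[0];
// const logprob = choice.logprobs.top_logprobs[0][choice.text];
// console.log(isConfidentlyUnsafe(choice.text, logprob));
```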

Key validity check

Can someone suggest how I can check the validity of the key before I start using OpenAI?
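One hedged approach: make the cheapest possible request at startup and treat a 401 as an invalid key. A sketch (the helper name is mine; it assumes the wrapper surfaces axios-style errors with `err.response.status`):

```javascript
// Pure helper: does an error look like an invalid-API-key response?
function isInvalidKeyError(err) {
  return Boolean(err && err.response && err.response.status === 401);
}

// Hypothetical usage (network call, not run here):
// async function checkKey(openai) {
//   try {
//     await openai.complete({ engine: "ada", prompt: "ping", maxTokens: 1 });
//     return true;
//   } catch (err) {
//     if (isInvalidKeyError(err)) return false;
//     throw err; // some other failure (network, quota, ...)
//   }
// }
```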

How to use streams?

If I set the stream parameter to true, how do I read from it and display it in real-time? The only methods available for me to hook into are then/catch/finally, wouldn't an on method help in this case?
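The wrapper returns an axios promise, so with `stream: true` you would need the raw response stream (in Node, axios `responseType: 'stream'`) and then parse the server-sent-event lines yourself. A sketch of the parsing half, which is independent of the transport (assumes the API emits `data: {json}` lines and a final `data: [DONE]`):

```javascript
// Parse a chunk of SSE text into completion text fragments.
function parseSSEChunk(chunk) {
  const texts = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    const parsed = JSON.parse(payload);
    texts.push(parsed.choices[0].text);
  }
  return texts;
}

// Hypothetical usage with a raw Node stream (not run here):
// response.data.on("data", (buf) => {
//   for (const text of parseSSEChunk(buf.toString())) {
//     process.stdout.write(text); // display in real time
//   }
// });
```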

Error: Uncaught (in promise) Error: Request failed with status code 401

Reproduce

  1. Generate the key from OpenAI.

  2. Install the package:

npm install openai-api

  3. Copy/paste the example into my Next.js component (Button):

const sendRequest = async (event: React.MouseEvent<HTMLImageElement>) => {
		const request = {
			title: prompt,
			output: 'Done',
		};
		setRequests(request);

		const gptResponse = await openai.complete({
			engine: 'davinci',
			prompt: 'this is a test',
			maxTokens: 5,
			temperature: 0.9,
			topP: 1,
			presencePenalty: 0,
			frequencyPenalty: 0,
			bestOf: 1,
			n: 1,
			stream: false,
			stop: ['\n', 'testing'],
		});

		console.log(gptResponse.data);
	};

Expected

Print the result

Current

createError.js?6b1b:16 Uncaught (in promise) Error: Request failed with status code 401
    at createError (webpack-internal:///(:3000/app-client)/./node_modules/openai-api/node_modules/axios/lib/core/createError.js:16:15)
    at settle (webpack-internal:///(:3000/app-client)/./node_modules/openai-api/node_modules/axios/lib/core/settle.js:17:12)
    at XMLHttpRequest.onloadend (webpack-internal:///(:3000/app-client)/./node_modules/openai-api/node_modules/axios/lib/adapters/xhr.js:54:7)
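For reference, a 401 from a browser call usually means the key never reached the request: Next.js only exposes environment variables to client code when they are prefixed with NEXT_PUBLIC_, and shipping the key to the browser would leak it anyway. A hedged sketch of moving the call into an API route (the file path and env var name follow the standard Next.js convention and are not from this issue):

```javascript
// pages/api/complete.js -- hypothetical API route; keeps the key server-side.
import OpenAI from "openai-api";

const openai = new OpenAI(process.env.OPENAI_API_KEY); // server-only env var

export default async function handler(req, res) {
  try {
    const gptResponse = await openai.complete({
      engine: "davinci",
      prompt: req.body.prompt,
      maxTokens: 5,
    });
    res.status(200).json(gptResponse.data);
  } catch (err) {
    const status = err.response ? err.response.status : 500;
    res.status(status).json({ error: "completion failed" });
  }
}
```

The button component then calls `fetch('/api/complete', ...)` instead of hitting OpenAI directly.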


No answers property

Hello,

I tried the library in a React project, but it seems only three methods are available on the openai object, while the documentation shows an answers method. Am I doing something wrong?

import OpenAI from "openai-api";

const openai = new OpenAI(process.env.REACT_APP_API_KEY!);

This is how I initialize it.

Feature Request: Analyzing the data

Can we also analyze the data on the go, when making requests? I know it might cost extra bucks but it could be great if we can classify the type of request or the response category.

Finetune support

Wondering if / when finetune support will be available (fetch finetunes, run finetunes with data, etc.).

Timeout issue

Sometimes the OpenAI API takes longer to respond, and this package fails to handle it:

} connect ETIMEDOUT 52.152.96.252:443

I suggest adding a parameter to handle timeouts.
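Until the library exposes a timeout option, a generic wrapper works: race the request against a timer. A sketch (the `withTimeout` name is mine, not the library's):

```javascript
// Reject a promise if it doesn't settle within `ms` milliseconds.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  // Clear the timer either way so the process isn't kept alive.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Hypothetical usage (network call, not run here):
// const res = await withTimeout(
//   openai.complete({ engine: "davinci", prompt: "hello" }),
//   10000
// );
```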

Support for Handling error e.g. HTTP 429

Hi.. thanks for the nice library!

I have a small question: is there a good way to handle errors returned by the OpenAI API?
For example, when you run out of quota, it replies with an HTTP 429 response.

Thanks.
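A common pattern for 429s is retrying with exponential backoff. A sketch (the helper names are mine; it assumes axios-style errors with `err.response.status`):

```javascript
// Exponential backoff delay: 500ms, 1000ms, 2000ms, ...
function backoffMs(attempt) {
  return 500 * 2 ** attempt;
}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Retry `fn` when it fails with HTTP 429, up to `maxRetries` extra attempts.
async function withRetry(fn, maxRetries = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const status = err.response && err.response.status;
      if (status !== 429 || attempt >= maxRetries) throw err;
      await sleep(backoffMs(attempt));
    }
  }
}

// Hypothetical usage (network call, not run here):
// const res = await withRetry(() => openai.complete({ engine: "davinci", prompt }));
```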

Feature Request: Support For Q&A

Q&A support would be really great, I may submit a PR but not 100% if my JavaScript is up to standard! ๐Ÿ˜…

If I don't end up doing this, this would be a really great feature, this is basically the only openai-api JavaScript library.

Can not increase output number "n"

Problem with multiple outputs,

When "n" is 1 everything works correctly, but when I increase n, I get a 500 error:
"Rejection: SyntaxError: The string did not match the expected pattern."

Providing training data set

How can I provide a training data set that is accepted only the first time, so that afterwards we can pass just the prompt to the API? That would help save tokens.

Mismatch between property names in example and the ones expected by `_send_request()` method

Hi,

Noticed that best_of, presence_penalty and frequency_penalty weren't showing up in the request object when submitting the prompt.
The example uses snake-cased property names, while the _send_request() method expects camel-cased versions of those three properties.

After changing the casing to camel case in the example, everything was OK; just wanted to let the author know about this typo.
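Until the example is fixed, a generic converter catches this class of mismatch. A sketch (these helper names are mine, not the library's):

```javascript
// Convert one snake_case key to the camelCase name _send_request() expects.
function toCamel(key) {
  return key.replace(/_([a-z])/g, (_, c) => c.toUpperCase());
}

// Camelize every key of an options object.
function camelizeKeys(obj) {
  return Object.fromEntries(
    Object.entries(obj).map(([k, v]) => [toCamel(k), v])
  );
}
```

For example, `camelizeKeys({ best_of: 1, presence_penalty: 0 })` yields `{ bestOf: 1, presencePenalty: 0 }`.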

Regards!

How to use logit bias?

Hello! Is logit_bias an option with this wrapper? I built a backend off of this wrapper in js and now realize logit_bias is going to be a key part of it for keeping the api from aping my prompts. It's hard to tell if it works at all or if so what the syntax is.
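If the wrapper doesn't forward logit_bias, one workaround is hitting the completions endpoint directly. A sketch (the payload builder is mine; the `{"50256": -100}` map below is just an illustrative bias against one token ID, and axios usage assumes the raw HTTP API):

```javascript
// Build a raw completions payload including logit_bias
// (keys are token IDs as strings, values range -100..100).
function buildCompletionPayload(prompt, logitBias) {
  return {
    prompt,
    max_tokens: 16,
    logit_bias: logitBias,
  };
}

// Hypothetical usage against the raw API (network call, not run here):
// const res = await axios.post(
//   "https://api.openai.com/v1/engines/davinci/completions",
//   buildCompletionPayload("Hello", { "50256": -100 }),
//   { headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` } }
// );
```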

Installation problems

Source:
https://harishgarg.com/writing/how-to-build-a-serverless-gpt-3-powered-using-nextjs-react/

Suspected core Issues (full log below):
-> Error: 403 status code downloading tarball https://tokenizers-releases.s3.amazonaws.com/node/0.7.0/index-v0.7.0-node-v88-win32-x64-unknown.tar.gz
-> Unsupported engine for tokenizers@0.7.0: wanted: {"node":">=10 < 11 || >=12 <14"} (current: {"node":"14.16.0","npm":"6.14.11"})
-> npm ERR! Failed at the tokenizers@0.7.0 install script. This is probably not a problem with npm. There is likely additional logging output above.

Attempts (windows):
-> nvm use 10.13
failed
-> npm install node-pre-gyp prior
failed
-> npm --build-from-source install bcrypt
failed

Full error log (VSC/ Windows10 64x):

PS C:\Users\j\Desktop\gpt3app\gpt-3-app> npm i openai-api
npm WARN deprecated node-pre-gyp@0.14.0: Please upgrade to @mapbox/node-pre-gyp: the non-scoped node-pre-gyp package is deprecated and only the @mapbox scoped package will recieve updates in the future

tokenizers@0.7.0 install C:\Users\j\Desktop\gpt3app\gpt-3-app\node_modules\tokenizers
node-pre-gyp install

node-pre-gyp WARN Using needle for node-pre-gyp https download
node-pre-gyp ERR! install error
node-pre-gyp ERR! stack Error: 403 status code downloading tarball https://tokenizers-releases.s3.amazonaws.com/node/0.7.0/index-v0.7.0-node-v88-win32-x64-unknown.tar.gz
node-pre-gyp ERR! stack at PassThrough.<anonymous> (C:\Users\j\Desktop\gpt3app\gpt-3-app\node_modules\tokenizers\node_modules\node-pre-gyp\lib\install.js:142:27)
node-pre-gyp ERR! stack at PassThrough.emit (node:events:381:22)
node-pre-gyp ERR! stack at ClientRequest.<anonymous> (C:\Users\j\Desktop\gpt3app\gpt-3-app\node_modules\needle\lib\needle.js:508:9)
node-pre-gyp ERR! stack at Object.onceWrapper (node:events:476:26)
node-pre-gyp ERR! stack at ClientRequest.emit (node:events:369:20)
node-pre-gyp ERR! stack at HTTPParser.parserOnIncomingClient [as onIncoming] (node:_http_client:646:27)
node-pre-gyp ERR! stack at HTTPParser.parserOnHeadersComplete (node:_http_common:129:17)
node-pre-gyp ERR! stack at TLSSocket.socketOnData (node:_http_client:512:22)
node-pre-gyp ERR! stack at TLSSocket.emit (node:events:369:20)
node-pre-gyp ERR! stack at addChunk (node:internal/streams/readable:313:12)
node-pre-gyp ERR! System Windows_NT 10.0.19042
node-pre-gyp ERR! command "C:\Users\j\Desktop\gpt3app\gpt-3-app\node_modules\node\bin\node.exe" "C:\Users\j\Desktop\gpt3app\gpt-3-app\node_modules\tokenizers\node_modules\node-pre-gyp\bin\node-pre-gyp" "install"
node-pre-gyp ERR! cwd C:\Users\j\Desktop\gpt3app\gpt-3-app\node_modules\tokenizers
node-pre-gyp ERR! node -v v15.14.0
node-pre-gyp ERR! node-pre-gyp -v v0.14.0
node-pre-gyp ERR! not ok
403 status code downloading tarball https://tokenizers-releases.s3.amazonaws.com/node/0.7.0/index-v0.7.0-node-v88-win32-x64-unknown.tar.gz
npm WARN notsup Unsupported engine for tokenizers@0.7.0: wanted: {"node":">=10 < 11 || >=12 <14"} (current: {"node":"14.16.0","npm":"6.14.11"})
npm WARN notsup Not compatible with your version of node/npm: tokenizers@0.7.0
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents (node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! tokenizers@0.7.0 install: node-pre-gyp install
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the tokenizers@0.7.0 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\j\AppData\Roaming\npm-cache\_logs\2021-04-21T05_47_51_510Z-debug.log
PS C:\Users\j\Desktop\gpt3app\gpt-3-app>

The package works in development but does not work in production.

I created a simple question-answering bot using this package. It runs perfectly fine on my local machine, but when I deployed it to Vercel it stopped working and now gives a 401 error.

I have checked everything, and everything is fine -- the environment variables etc.

Here is my GitHub repository - Github Repo

Here is the deployed app on Vercel - App

Please look into what is causing the issue.

Enable Content Filtering completion endpoint

Hi! 👋

Firstly, thanks for your work on this project! 🙂

Today I used patch-package to patch openai-api for the project I'm working on.

In order to use OpenAI in production, we are required to use the content filtering API, which is a special completion engine, as described here https://beta.openai.com/docs/engines/content-filter.

Also, unrelated but I seemingly needed to patch the export to satisfy TypeScript, which is in the diff. Feel free to consider that separately.

Here is the diff that solved my problem:

diff --git a/node_modules/openai-api/config.js b/node_modules/openai-api/config.js
index 60b4a32..6415103 100644
--- a/node_modules/openai-api/config.js
+++ b/node_modules/openai-api/config.js
@@ -1,5 +1,5 @@
 const DEFAULT_ENGINE = 'davinci';
-const ENGINE_LIST = ['ada', 'babbage', 'curie', 'davinci', 'davinci-instruct-beta', 'curie-instruct-beta'];
+const ENGINE_LIST = ['ada', 'babbage', 'curie', 'davinci', 'davinci-instruct-beta', 'curie-instruct-beta', 'content-filter-alpha-c4'];
 const ORIGIN = 'https://api.openai.com';
 const API_VERSION = 'v1';
 const OPEN_AI_URL = `${ORIGIN}/${API_VERSION}`
diff --git a/node_modules/openai-api/index.d.ts b/node_modules/openai-api/index.d.ts
index c126d0c..9755c4b 100644
--- a/node_modules/openai-api/index.d.ts
+++ b/node_modules/openai-api/index.d.ts
@@ -61,5 +61,5 @@ declare module 'openai-api' {
         encode(str: string): number[];
         search(opts: SearchOpts): Promise<Search>;
     }
-    export = OpenAI;
+    export default OpenAI;
 }

This issue body was partially generated by patch-package.

Content Filter Endpoint

I was wondering if it is possible to use the content filter with the complete function?

https://beta.openai.com/docs/content-filter
According to the documentation, you should be able to just reference the engine and set the options, see example.

Example:

contentFilter: async function (prompt) {
    const gpt3_options = {
        prompt: prompt,
        engine: 'content-filter-alpha-c4',
        maxTokens: 1,
        temperature: 0.0,
        topP: 0
    };

    const gpt3_response = await openai.complete(gpt3_options);
    return gpt3_response;
}

Error:

UnhandledPromiseRejectionWarning: Error: Request failed with status code 400

How to handle error for openai.answers

For the answers module (openai.answers), I can't figure out how to handle errors.

The error message that pops up has this at the end:
data: {
    error: {
        code: null,
        message: "No similar documents were found in file with ID 'file-KAPTlH2AyKZGJF8nL1uhYnYE'. Please upload more documents or adjust your query.",
        param: null,
        type: 'invalid_request_error'
    }
}

Can anyone help me find out where to look?
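With axios underneath, that error body is reachable from the rejection. A hedged helper (the name is mine; it assumes axios-style errors with `err.response.data.error`):

```javascript
// Extract the OpenAI error message from an axios-style rejection, if present.
function openaiErrorMessage(err) {
  if (err && err.response && err.response.data && err.response.data.error) {
    return err.response.data.error.message;
  }
  return err && err.message ? err.message : "unknown error";
}

// Hypothetical usage (network call, not run here):
// try {
//   await openai.answers(opts);
// } catch (err) {
//   console.error(openaiErrorMessage(err)); // "No similar documents were found..."
// }
```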

User field in completions

Hello.

I need to use your plugin, but when using the API for a public chatbot, OpenAI forces me to send a string field called "user", in which I have to send the user ID.

Could you add that field?

Edit: I am also forced to use the content filter, which uses the engine "content-filter-alpha-c4". Could you add it too?

Best regards

Allow custom `engine`

Since the fine-tuning endpoints are out, the engine can be an arbitrary string. This library should either use the engines endpoint to determine the list of available engines (rather than hardcoding them) or just allow any arbitrary string for an engine.

.engines

Every time I call the .engines endpoint, I get:

Invalid request to GET /v1/engines. You provided a body with this GET request. Either submit a POST request, or pass your arguments in the query string.
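The error suggests the wrapper sends a request body with the GET. Until that is fixed, you can call the endpoint directly. A sketch with axios (assumes a valid key; network call, not run here):

```javascript
const axios = require("axios");

// Hypothetical direct call: GET /v1/engines with no request body.
async function listEngines(apiKey) {
  const res = await axios.get("https://api.openai.com/v1/engines", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  return res.data.data; // array of engine objects
}
```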

Feature request: support the new createEmbedding call

Apparently there's a new createEmbedding function, whose purpose is to let you create embeddings for a chunk of data with a cheaper engine, e.g. babbage, and then use those embeddings in calls to davinci.

It's primarily a cost (and processing) optimization.

Initial info from the OpenAI team is here: https://openai.com/blog/introducing-text-and-code-embeddings/

I think in Javascript it is called like this, but I haven't been able to get it work because it 'illegally' sets a user-agent and so xhr doesn't actually make the 'createEmbedding' call from the browser.

const getOpenAI = () => {
    const { Configuration, OpenAIApi } = require("openai");
    const configuration = new Configuration({
        apiKey: process.env.REACT_APP_OPEN_AI_KEY,
    });
    const openai = new OpenAIApi(configuration);

    return openai;
};

// The OpenAI team's suggestion is, depending on the scenario, to create and store
// embeddings using a cheaper engine like babbage, and then use the embeddings in
// a call to davinci.
const handleCreateEmbeddings = async () => {
    const openai = getOpenAI();
    const response = await openai.createEmbedding("text-babbage-001", {
        prompt: "Say this is a test",
        max_tokens: 6,
    });

    console.log("response", response);
};

That's one reason why I've been using this openai-api wrapper library.

Copy-pasted text from Juston at OpenAI:
"As an alternative, we'd recommend looking into implementing an Embeddings + Completion call workflow, where you embed all of your documents/information and store the embeddings. Afterwards, you embed your query/question, compare it to your stored embeddings to find the nearest neighbors, and then use the nearest neighbors to provide "context" for your completion call. With this method, you're only charged for the cost of embedding the documents a single time vs every time with a search call. Here's an example that illustrates this -> https://beta.openai.com/playground/p/TLxOrLWyAY8fsO1G0XXt72GN?model=text-davinci-001"
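The "compare it to your stored embeddings to find the nearest neighbors" step reduces to a similarity ranking over vectors. A self-contained sketch of that step (helper names are mine; cosine similarity is the metric OpenAI recommends for its embeddings):

```javascript
// Cosine similarity between two equal-length numeric vectors.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored document embeddings by similarity to a query embedding
// and keep the top k as "context" for the completion call.
function nearestNeighbors(queryEmbedding, stored, k = 3) {
  return stored
    .map((doc) => ({ ...doc, score: cosineSimilarity(queryEmbedding, doc.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```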
