njerschow / openai-api

A tiny client module for the openAI API
After applying the content filter like so:
var content_to_classify = gptResponse.data["choices"][0]["text"];
const gptFilterResponse = await openai.complete({
engine: "content-filter-alpha-c4",
prompt: "<|endoftext|>" + content_to_classify + "\n--\nLabel:",
maxTokens: 1,
temperature: 0,
topP: 1,
presencePenalty: 0,
frequencyPenalty: 0,
});
I get the following output:
{
id: 'cmpl-3NMfxu1YGjNTNeWSeNXnktFboYLoM',
object: 'text_completion',
created: 1626697721,
model: 'toxicity-double-18',
choices: [ { text: '2', index: 0, logprobs: null, finish_reason: 'length' } ]
}
While the output label is fine, I'm not getting the logprobs to evaluate how confident the filter is in "2".
Can someone suggest how I can check the validity of the key before I start using OpenAI?
If I set the stream parameter to true, how do I read from it and display it in real-time? The only methods available for me to hook into are then/catch/finally; wouldn't an on method help in this case?
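With stream: true, the OpenAI API emits server-sent events ("data: {...}" lines). This library doesn't expose an on method, but a hedged sketch of the parsing step looks like this (parseSSEChunk is an illustrative name, not part of the library):

```javascript
// Hypothetical sketch: parse one raw SSE chunk into completion objects,
// stopping at the [DONE] sentinel the API sends at end of stream.
function parseSSEChunk(chunk) {
  const events = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice("data:".length).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    events.push(JSON.parse(payload));
  }
  return events;
}

// In Node you could (assumption) request the completion via axios with
// responseType: 'stream' and feed each chunk to the parser, e.g.:
// response.data.on("data", (chunk) => {
//   for (const ev of parseSSEChunk(chunk.toString())) {
//     process.stdout.write(ev.choices[0].text); // display in real time
//   }
// });
```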
Generate the key from the OpenAI website.
Installation
npm install openai-api
const sendRequest = async (event: React.MouseEvent<HTMLImageElement>) => {
const request = {
title: prompt,
output: 'Done',
};
setRequests(request);
const gptResponse = await openai.complete({
engine: 'davinci',
prompt: 'this is a test',
maxTokens: 5,
temperature: 0.9,
topP: 1,
presencePenalty: 0,
frequencyPenalty: 0,
bestOf: 1,
n: 1,
stream: false,
stop: ['\n', 'testing'],
});
console.log(gptResponse.data);
};
Print the result
createError.js?6b1b:16 Uncaught (in promise) Error: Request failed with status code 401
at createError (webpack-internal:///(:3000/app-client)/./node_modules/openai-api/node_modules/axios/lib/core/createError.js:16:15)
at settle (webpack-internal:///(:3000/app-client)/./node_modules/openai-api/node_modules/axios/lib/core/settle.js:17:12)
at XMLHttpRequest.onloadend (webpack-internal:///(:3000/app-client)/./node_modules/openai-api/node_modules/axios/lib/adapters/xhr.js:54:7)
I was only able to find API calls for "Completion", "Answers", "Search" and "Classification". Can we access other API calls as well?
Hello,
I tried the library on a React project, but it seems like there are only 3 methods available on the openai object. The documentation, however, shows the answers method. Am I doing something wrong?
import OpenAI from "openai-api";
const openai = new OpenAI(process.env.REACT_APP_API_KEY!);
This is how I initialize it.
Can we also analyze the data on the fly when making requests? I know it might cost extra bucks, but it would be great if we could classify the type of request or the response category.
Wondering if / when finetune support will be available (fetch finetunes, run finetunes with data, etc.).
Sometimes the OpenAI API takes longer to respond. This package fails to handle this:
} connect ETIMEDOUT 52.152.96.252:443
I suggest adding a parameter to handle timeouts.
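Until such a parameter exists, one workaround is to race the request against a timer. A minimal sketch, assuming the caller supplies the timeout (withTimeout is an illustrative name, not part of the library):

```javascript
// Hedged sketch: reject if a promise (e.g. openai.complete()) doesn't
// settle within ms milliseconds, and clear the timer either way.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage (hypothetical): give up if OpenAI takes longer than 10 s.
// const gptResponse = await withTimeout(openai.complete(opts), 10000);
```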
Hi, thanks for the nice library!
I have a small question: is there a good way to handle errors returned by the OpenAI API response?
For example, when you run out of quota, it replies with an HTTP 429 response.
Thanks.
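Since this library uses axios under the hood, a rejected call should carry an axios-style error with err.response.status. A hedged sketch of handling the 429 case (completeSafely is an illustrative name):

```javascript
// Hedged sketch: wrap openai.complete() and inspect the axios-style
// error object for an HTTP 429 (quota exhausted / rate limited).
async function completeSafely(openai, opts) {
  try {
    return await openai.complete(opts);
  } catch (err) {
    if (err.response && err.response.status === 429) {
      // Back off and retry here, or surface a friendlier message.
      throw new Error("OpenAI quota/rate limit hit (HTTP 429)");
    }
    throw err; // anything else is unexpected; rethrow as-is
  }
}
```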
[email protected] currently only works for node >=10 < 11 || >=12 <14
People on later versions of Node can't use your library, unfortunately. This is the OpenAI client I like, but I had to swap it out for something else.
I'm making a GraphQL API using this lib. It would be great to have the files, answers, classifications, and engines endpoints available as well. I will open a PR shortly with the changes, but wanted to open an issue first.
It'd be great to add an example of a successful response.data to the README.md.
Consider converting numeric params such as max_tokens to numbers. When I was sending "100" as max_tokens, for example, the API returned an error. This may happen in various cases, e.g. if we are passing user input along directly.
For example:
https://github.com/Njerschow/openai-api/blob/ff691c79028378c0d7003b5beea5d30d2db6924c/index.js#L21
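A minimal sketch of the suggested coercion, assuming the library's camelCase option names (coerceNumericOpts and the key list are illustrative, not part of the library):

```javascript
// Hedged sketch: coerce numeric options before sending, so a string
// like "100" (e.g. straight from user input) doesn't trigger an API error.
const NUMERIC_KEYS = [
  "maxTokens", "temperature", "topP", "n",
  "bestOf", "presencePenalty", "frequencyPenalty",
];

function coerceNumericOpts(opts) {
  const out = { ...opts };
  for (const key of NUMERIC_KEYS) {
    if (out[key] !== undefined) out[key] = Number(out[key]);
  }
  return out;
}
```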
By replacing axios with native fetch and removing dotenv, you could reach zero dependencies.
Q&A support would be really great. I may submit a PR, but I'm not 100% sure my JavaScript is up to standard!
If I don't end up doing this, this would be a really great feature, this is basically the only openai-api JavaScript library.
Problem with multiple outputs: when "n" is 1, everything works correctly, but when I increase n it gives a 500 error:
"Rejection: SyntaxError: The string did not match the expected pattern."
How can I provide the training data set just once, so that it is accepted the first time and afterwards we only pass the prompt to the API? That would help save tokens.
Hi,
Noticed that best_of, presence_penalty and frequency_penalty weren't showing up in the request object when submitting the prompt.
The example uses snake-cased property names, while the _send_request() method expects camel-cased versions of the above three properties.
After changing the casing to camel in the example everything was ok, just wanted to let the author know about this typo.
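Beyond fixing the example, the library could normalize the casing itself. A hedged sketch of such a shim (snakeToCamelOpts is an illustrative name, not part of the library):

```javascript
// Hedged sketch: convert snake_case option names (as in the README
// example) to the camelCase names _send_request() expects; camelCase
// keys pass through unchanged.
function snakeToCamelOpts(opts) {
  const out = {};
  for (const [key, value] of Object.entries(opts)) {
    const camel = key.replace(/_([a-z])/g, (_, c) => c.toUpperCase());
    out[camel] = value;
  }
  return out;
}
```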
Regards!
Hello! Is logit_bias an option with this wrapper? I built a backend off of this wrapper in JS and now realize logit_bias is going to be a key part of it for keeping the API from aping my prompts. It's hard to tell if it works at all, or if so, what the syntax is.
I am using the same model and getting the correct response in the Python library.
Source:
https://harishgarg.com/writing/how-to-build-a-serverless-gpt-3-powered-using-nextjs-react/
Suspected core Issues (full log below):
-> Error: 403 status code downloading tarball https://tokenizers-releases.s3.amazonaws.com/node/0.7.0/index-v0.7.0-node-v88-win32-x64-unknown.tar.gz
-> Unsupported engine for [email protected]: wanted: {"node":">=10 < 11 || >=12 <14"} (current: {"node":"14.16.0","npm":"6.14.11"})
-> npm ERR! Failed at the [email protected] install script. This is probably not a problem with npm. There is likely additional logging output above.
Attempts (windows):
-> nvm use 10.13
failed
-> npm install node-pre-gyp prior
failed
-> npm --build-from-source install bcrypt
failed
PS C:\Users\j\Desktop\gpt3app\gpt-3-app> npm i openai-api
npm WARN deprecated [email protected]: Please upgrade to @mapbox/node-pre-gyp: the non-scoped node-pre-gyp package is deprecated and only the @mapbox scoped package will recieve updates in the future
[email protected] install C:\Users\j\Desktop\gpt3app\gpt-3-app\node_modules\tokenizers
node-pre-gyp install
node-pre-gyp WARN Using needle for node-pre-gyp https download
node-pre-gyp ERR! install error
node-pre-gyp ERR! stack Error: 403 status code downloading tarball https://tokenizers-releases.s3.amazonaws.com/node/0.7.0/index-v0.7.0-node-v88-win32-x64-unknown.tar.gz
node-pre-gyp ERR! stack at PassThrough. (C:\Users\j\Desktop\gpt3app\gpt-3-app\node_modules\tokenizers\node_modules\node-pre-gyp\lib\install.js:142:27)
node-pre-gyp ERR! stack at PassThrough.emit (node:events:381:22)
node-pre-gyp ERR! stack at ClientRequest. (C:\Users\j\Desktop\gpt3app\gpt-3-app\node_modules\needle\lib\needle.js:508:9)
node-pre-gyp ERR! stack at Object.onceWrapper (node:events:476:26)
node-pre-gyp ERR! stack at ClientRequest.emit (node:events:369:20)
node-pre-gyp ERR! stack at HTTPParser.parserOnIncomingClient [as onIncoming] (node:_http_client:646:27)
node-pre-gyp ERR! stack at HTTPParser.parserOnHeadersComplete (node:_http_common:129:17)
node-pre-gyp ERR! stack at TLSSocket.socketOnData (node:_http_client:512:22)
node-pre-gyp ERR! stack at TLSSocket.emit (node:events:369:20)
node-pre-gyp ERR! stack at addChunk (node:internal/streams/readable:313:12)
node-pre-gyp ERR! System Windows_NT 10.0.19042
node-pre-gyp ERR! command "C:\Users\j\Desktop\gpt3app\gpt-3-app\node_modules\node\bin\node.exe" "C:\Users\j\Desktop\gpt3app\gpt-3-app\node_modules\tokenizers\node_modules\node-pre-gyp\bin\node-pre-gyp" "install"
node-pre-gyp ERR! cwd C:\Users\j\Desktop\gpt3app\gpt-3-app\node_modules\tokenizers
node-pre-gyp ERR! node -v v15.14.0
node-pre-gyp ERR! node-pre-gyp -v v0.14.0
node-pre-gyp ERR! not ok
403 status code downloading tarball https://tokenizers-releases.s3.amazonaws.com/node/0.7.0/index-v0.7.0-node-v88-win32-x64-unknown.tar.gz
npm WARN notsup Unsupported engine for [email protected]: wanted: {"node":">=10 < 11 || >=12 <14"} (current: {"node":"14.16.0","npm":"6.14.11"})
npm WARN notsup Not compatible with your version of node/npm: [email protected]
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] install: node-pre-gyp install
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\j\AppData\Roaming\npm-cache_logs\2021-04-21T05_47_51_510Z-debug.log
PS C:\Users\j\Desktop\gpt3app\gpt-3-app>
I created a simple question-answering bot using this package and it runs perfectly fine on my local machine, but when I deployed it to Vercel it stopped working and is giving a 401 error.
I have checked everything, including the environment variables, and everything is fine.
Here is my GitHub repository - Github Repo
Here is the deployed app on Vercel - App
Please look into what's causing the issue.
Hello!
Thanks for making this module it's helped me get my OpenAI bot up and running.
Looking into applying for production and am seeing that their guidelines recommend using the content filter. Is there a way to call that in node?
Here is their reference information which includes how to do it in Python: https://beta.openai.com/docs/engines/content-filter
Hi!
Firstly, thanks for your work on this project!
Today I used patch-package to patch [email protected] for the project I'm working on.
In order to use OpenAI in production, we are required to use the content filtering API, which is a special completion engine, as described here https://beta.openai.com/docs/engines/content-filter.
Also, unrelated but I seemingly needed to patch the export to satisfy TypeScript, which is in the diff. Feel free to consider that separately.
Here is the diff that solved my problem:
diff --git a/node_modules/openai-api/config.js b/node_modules/openai-api/config.js
index 60b4a32..6415103 100644
--- a/node_modules/openai-api/config.js
+++ b/node_modules/openai-api/config.js
@@ -1,5 +1,5 @@
const DEFAULT_ENGINE = 'davinci';
-const ENGINE_LIST = ['ada', 'babbage', 'curie', 'davinci', 'davinci-instruct-beta', 'curie-instruct-beta'];
+const ENGINE_LIST = ['ada', 'babbage', 'curie', 'davinci', 'davinci-instruct-beta', 'curie-instruct-beta', 'content-filter-alpha-c4'];
const ORIGIN = 'https://api.openai.com';
const API_VERSION = 'v1';
const OPEN_AI_URL = `${ORIGIN}/${API_VERSION}`
diff --git a/node_modules/openai-api/index.d.ts b/node_modules/openai-api/index.d.ts
index c126d0c..9755c4b 100644
--- a/node_modules/openai-api/index.d.ts
+++ b/node_modules/openai-api/index.d.ts
@@ -61,5 +61,5 @@ declare module 'openai-api' {
encode(str: string): number[];
search(opts: SearchOpts): Promise<Search>;
}
- export = OpenAI;
+ export default OpenAI;
}
This issue body was partially generated by patch-package.
I was wondering if it is possible to use the content filter with the complete function?
https://beta.openai.com/docs/content-filter
According to the documentation, you should be able to just reference the engine and set the options, see example.
Example:
contentFilter: async function (prompt) {
  const gpt3_options = {
    prompt: prompt,
    engine: 'content-filter-alpha-c4',
    maxTokens: 1,
    temperature: 0.0,
    topP: 0,
  };
  const gpt3_response = await openai.complete(gpt3_options);
  return gpt3_response;
}
Error:
UnhandledPromiseRejectionWarning: Error: Request failed with status code 400
I use a custom engine and it returns an error.
curie:ft-gcore-labs-2021-12-10-13-02-16
But it's working in bash.
Are there any examples available on how to use this package when "stream" is set to "true"? I'm currently trying to make this work, but I don't know how...
Thanks!
https://openai.com/blog/introducing-chatgpt-and-whisper-apis
Please support these two APIs.
Is there anything wrong with calling the OpenAI API directly from the client? (For example, CORS.)
For the search module (openai.answers) I can't find out how to cope with errors.
The error message that pops up has this at the end:
data: {
error: {
code: null,
message: "No similar documents were found in file with ID 'file-KAPTlH2AyKZGJF8nL1uhYnYE'.Please upload more documents or adjust your query.",
param: null,
type: 'invalid_request_error'
}
}
Can anyone help me figure out where to look?
There is currently no way to set a timeout, which becomes a problem if one of the requests gets stuck.
stop: ['\n', "testing"]in
should be
stop: ['\n', "testing"]
Hello.
I need to use your plugin, but when using the API for a public chatbot, OpenAI forces me to send a string field called "user", in which I have to send the user ID.
Could you add that field?
Edit
I am also forced to use the content filter, which uses the engine "content-filter-alpha-c4". Could you add it too?
Best regards
Since the fine-tuning endpoints are out, the engine can be an arbitrary string. This library should either use the engines endpoint to determine the list of available engines (rather than hardcoding them) or just allow any arbitrary string for an engine.
Every time I call the .engines endpoint, I get:
Invalid request to GET /v1/engines. You provided a body with this GET request. Either submit a POST request, or pass your arguments in the query string.
Apparently there's a new createEmbedding function, whose purpose is to let you create embeddings for a chunk of data with a cheaper engine, e.g. babbage, and then use those embeddings in calls to davinci.
It's a processing and, especially, a cost optimization.
Initial info from the open ai team is here: https://openai.com/blog/introducing-text-and-code-embeddings/
I think in JavaScript it is called like this, but I haven't been able to get it to work, because it 'illegally' sets a user-agent header and so xhr doesn't actually make the createEmbedding call from the browser.
const getOpenAI = () => {
const { Configuration, OpenAIApi } = require("openai")
const configuration = new Configuration({
apiKey: process.env.REACT_APP_OPEN_AI_KEY,
})
const openai = new OpenAIApi(configuration)
return openai
}
// The open ai team suggestion is, depending on the scenario, to create and store embeddings using a cheaper engine
// like babbage, and then use the embeddings in a call to davinci.
const handleCreateEmbeddings = async () => {
const openai = getOpenAI()
const response = await openai.createEmbedding("text-babbage-001", {
prompt: "Say this is a test",
max_tokens: 6,
})
console.log("response", response)
}
That's 1 reason why I've been using this openai-api wrapper library.
Copy-pasted text from Juston at OpenAI:
"As an alternative, we'd recommend looking into implementing an Embeddings + Completion call workflow, where you embed all of your documents/information and store the embeddings. Afterwards, you embed your query/question, compare it to your stored embeddings to find the nearest neighbors, and then use the nearest neighbors to provide "context" for your completion call. With this method, you're only charged for the cost of embedding the documents a single time vs every time with a search call. Here's an example that illustrates this -> https://beta.openai.com/playground/p/TLxOrLWyAY8fsO1G0XXt72GN?model=text-davinci-001"