
Introduction

Unofficial Deno wrapper for the OpenAI API


Usage

Your OpenAI API key (found here) is needed for this library to work. We recommend setting it as an environment variable and reading it with Deno.env.get, as in the configuration example below (replace YOUR_API_KEY with the name of the environment variable you chose).

import { OpenAI } from "https://deno.land/x/openai/mod.ts";

const openAI = new OpenAI(Deno.env.get("YOUR_API_KEY")!);

Completion

import { OpenAI } from "https://deno.land/x/openai/mod.ts";

const openAI = new OpenAI(Deno.env.get("YOUR_API_KEY")!);

const completion = await openAI.createCompletion({
  model: "davinci",
  prompt: "The meaning of life is",
});

console.log(completion.choices);

Chat Completion

import { OpenAI } from "https://deno.land/x/openai/mod.ts";

const openAI = new OpenAI(Deno.env.get("YOUR_API_KEY")!);

const chatCompletion = await openAI.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Who won the world series in 2020?" },
    {
      "role": "assistant",
      "content": "The Los Angeles Dodgers won the World Series in 2020.",
    },
    { "role": "user", "content": "Where was it played?" },
  ],
});

console.log(chatCompletion);

Image

import { OpenAI } from "https://deno.land/x/openai/mod.ts";

const openAI = new OpenAI(Deno.env.get("YOUR_API_KEY")!);

const image = await openAI.createImage({
  prompt: "A unicorn in space",
});

console.log(image);

Edit

import { OpenAI } from "https://deno.land/x/openai/mod.ts";

const openAI = new OpenAI(Deno.env.get("YOUR_API_KEY")!);

const edit = await openAI.createEdit({
  model: "text-davinci-edit-001",
  input: "What day of the wek is it?",
  instruction: "Fix the spelling mistakes",
});

console.log(edit);

Image Edit

import { OpenAI } from "https://deno.land/x/openai/mod.ts";

const openAI = new OpenAI(Deno.env.get("YOUR_API_KEY")!);

const imageEdit = await openAI.createImageEdit({
  image: "@otter.png",
  mask: "@mask.png",
  prompt: "A cute baby sea otter wearing a beret",
  n: 2,
  size: "1024x1024",
});

console.log(imageEdit);

Image Variation

import { OpenAI } from "https://deno.land/x/openai/mod.ts";

const openAI = new OpenAI(Deno.env.get("YOUR_API_KEY")!);

const imageVariation = await openAI.createImageVariation({
  image: "@otter.png",
  n: 2,
  size: "1024x1024",
});

console.log(imageVariation);

Audio Transcription

import { OpenAI } from "https://deno.land/x/openai/mod.ts";

const openAI = new OpenAI(Deno.env.get("YOUR_API_KEY")!);

const transcription = await openAI.createTranscription({
  model: "whisper-1",
  file: "/path/to/your/audio/file.mp3",
});

console.log(transcription);

Maintainers

License

MIT


Contributors

blakechambers, chuckjonas, djmuted, ferdousbhai, lino-levan, load1n9, luizmiranda7, miguel-nascimento, mikp0, pseudosavant, userpixel, ymirke, ytkg


Issues

Add types for handling errors?

I noticed that when there is an error from the OpenAI API, the response from createChatCompletion looks like this:

{
  error: {
    message: "That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID <redacted> in your message.)", // string
    type: "server_error", // string
    param: null, // any?
    code: null, // string | null
  },
}

Could you please add the types for the error response to the API?
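A sketch of what such types could look like, inferred from the error body shown above (the interface names here are assumptions, not the library's):

```typescript
// Hypothetical shapes inferred from the error response above; the real
// library may choose different names or stricter field types.
interface OpenAIErrorBody {
  message: string;
  type: string;
  param: unknown;
  code: string | null;
}

interface OpenAIErrorResponse {
  error: OpenAIErrorBody;
}

// Runtime check so callers can tell an error payload from a normal completion.
function isErrorResponse(body: unknown): body is OpenAIErrorResponse {
  return typeof body === "object" && body !== null && "error" in body;
}
```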

streaming chat completions with functions

I noticed that the type definition for the createChatCompletionStream options doesn't accept function_call and functions, unlike createChatCompletion, which accepts them as options.
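For reference, a sketch of the missing options, with field shapes modeled on the non-streaming variant (the actual definitions in the library may differ):

```typescript
// Hypothetical additions to the stream options; shapes mirror the
// non-streaming createChatCompletion options described in this issue.
interface ChatCompletionFunction {
  name: string;
  description?: string;
  parameters: Record<string, unknown>; // JSON Schema for the arguments
}

interface ChatCompletionStreamFunctionOptions {
  functions?: ChatCompletionFunction[];
  function_call?: "none" | "auto" | { name: string };
}
```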

Wrong type returned by createCompletion?

Hello there. Thanks for sharing this library.

I believe the return types for the methods in this library are incorrect. For example, createCompletion is declared as returning Promise<Response>, when it actually returns the result of response.json(), which is of type any, since it is the response body parsed as JSON into an object (see here).

I think, instead, it should either be declared as returning Promise<any> or return Promise<the actual object shape> (shapes are documented here). If you would prefer, I could contribute this change. If so, just let me know if you would prefer Promise<any> or if I should fully type the API.

Export ChatCompletionMessage types

Currently only ChatCompletionMessage is exported.

I need its variations to write type guards:

  • SystemCompletionMessage
  • UserCompletionMessage
  • FunctionAwareAssistantCompletionMessage
  • FunctionCompletionMessage
  • AssistantCompletionMessage
  1. Would you be open to a PR that addresses that?
  2. Would you like me to also submit the guard functions? (They help when going through the array of messages to identify the type of each message; e.g. a function_call message and a regular content message can both have role: 'assistant', so one needs to look inside the object.)

ChatCompletionMessage union types disallow proper FunctionAwareAssistantCompletionMessage and AssistantCompletionMessage usage

When passing functions and function_call to createChatCompletion(), we should expect a function_call object inside the returned message; however, it is impossible to determine whether the OpenAI response is an AssistantCompletionMessage or a FunctionAwareAssistantCompletionMessage, as both share the same role:

interface AssistantCompletionMessage {
  content: string;
  name?: string;
  role: "assistant";
}

interface FunctionAwareAssistantCompletionMessage {
  content: string | null;
  role: "assistant";
  function_call?: {
    "name": string;
    "arguments": string;
  };
}

And it seems impossible to narrow the type from that point, so the property name from AssistantCompletionMessage and the property function_call from FunctionAwareAssistantCompletionMessage can't be accessed.

Moreover, those types are not exported, so you can't really create a Type Guard yourself.
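Since role alone cannot discriminate the two interfaces, one workaround is to narrow on the presence of function_call with the in operator; a sketch, repeating the interfaces from the issue:

```typescript
interface AssistantCompletionMessage {
  content: string;
  name?: string;
  role: "assistant";
}

interface FunctionAwareAssistantCompletionMessage {
  content: string | null;
  role: "assistant";
  function_call?: { name: string; arguments: string };
}

type ResponseMessage =
  | AssistantCompletionMessage
  | FunctionAwareAssistantCompletionMessage;

// Narrow on the presence of function_call rather than on role, which both
// union members share.
function hasFunctionCall(
  msg: ResponseMessage,
): msg is FunctionAwareAssistantCompletionMessage & {
  function_call: { name: string; arguments: string };
} {
  return "function_call" in msg && msg.function_call !== undefined;
}
```

Exporting the member types from the library would still be preferable, since user code cannot write such a guard against unexported names.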

Method: createImageVariation: wrong request content-type

When calling the method createImageVariation the following error is returned:

{
    "error": "invalid_request_error: Invalid Content-Type header (application/json), expected multipart/form-data. (HINT: If you're using curl, you can pass -H 'Content-Type: multipart/form-data')"
}

Token limits per message?

Just a question on token limits:

If I use, for example, the 8k token model, can I send a list of messages, each with content equal to 8k tokens, or would the entirety of my messages array have to be under 8k?

Add formatting to the deno.json file

Why? To make deno fmt work as well as IDE-specific auto formatting features.

Suggestion based on the current coding style:

  "fmt": {
    "indentWidth": 2,
    "semiColons": true,
    "singleQuote": false,
    "useTabs": false
  }

Running this against the current code base shows minimal changes (only dangling commas are added in .ts files), plus a switch to double quotes in the README.

Happy to do a PR if needed.

Add tests

Deno has native testing support. Could we add some tests with mock data just to ensure that the parsing and encoding/decoding functionality works as expected?
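As a starting point, the parsing logic can be exercised against mock data without touching the network; a sketch (the helper and the trimmed mock shape are illustrative, and in the repo this would live inside a Deno.test block):

```typescript
// Trimmed mock of a completion response; the real payload has more fields.
interface MockChoice {
  text: string;
  index: number;
}
interface MockCompletion {
  choices: MockChoice[];
}

// Hypothetical helper standing in for the library's response handling.
function extractChoiceTexts(completion: MockCompletion): string[] {
  return completion.choices.map((choice) => choice.text);
}

const mock: MockCompletion = {
  choices: [{ text: " to be happy.", index: 0 }],
};

// In a test file this check would be wrapped in
// Deno.test("parses choices", () => { ... }) with std/assert.
if (extractChoiceTexts(mock)[0] !== " to be happy.") {
  throw new Error("mock parsing failed");
}
```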

Improve error handling

Right now the error handling only exposes the response body via the standard Error class.

  1. This makes it hard to know whether the exception was caused by this library or by something else.
  2. You do not have access to other response properties (most importantly the status code).

I'd probably copy axios and do something like this:

class HttpResponseError extends Error {
  httpResponse: Response;

  constructor(message: string, httpResponse: Response) {
    super(message);

    // Set the prototype explicitly to allow instanceof checks
    Object.setPrototypeOf(this, HttpResponseError.prototype);

    this.name = 'HttpResponseError';
    this.httpResponse = httpResponse;
  }
}

Or maybe even just have properties for status, body, etc. if the response has already been parsed/consumed.

Then you can do something like this in your catch blocks:

try {
  // ... call the library ...
} catch (e) {
  if (e instanceof HttpResponseError) {
    if (e.httpResponse.status === 429) {
      // retry the request with backoff
      return retry();
    }
  }
  throw e;
}

Type guards for messages

When going through the incoming JSON array from OpenAI API, each element can be of a different type.

Deno uses TypeScript which uses type guards to simplify type assertions.

Examples

Let's say we have an array of messages:

const messageArray: ChatCompletionMessage[];

  1. When a FunctionCompletionMessage is being inserted into messageArray, it is important to verify that the last element is of type FunctionAwareAssistantCompletionMessage.

  2. When inserting user input into messageArray, it is important to check that it is well-formed (UserCompletionMessage).

Suggestion

Add the following guard functions:

  • isSystemCompletionMessage(x: unknown): x is SystemCompletionMessage
  • isUserCompletionMessage(x: unknown): x is UserCompletionMessage
  • isFunctionAwareAssistantCompletionMessage(x: unknown): x is FunctionAwareAssistantCompletionMessage
  • isFunctionCompletionMessage(x: unknown): x is FunctionCompletionMessage
  • isAssistantCompletionMessage(x: unknown): x is AssistantCompletionMessage
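One of these guards might look like this (the message shape is inferred from this issue thread; the library's actual types may differ):

```typescript
// Shape inferred from the discussion above; a guard validates it at runtime.
interface UserCompletionMessage {
  role: "user";
  content: string;
  name?: string;
}

function isUserCompletionMessage(x: unknown): x is UserCompletionMessage {
  if (typeof x !== "object" || x === null) return false;
  const msg = x as Record<string, unknown>;
  return (
    msg.role === "user" &&
    typeof msg.content === "string" &&
    (msg.name === undefined || typeof msg.name === "string")
  );
}
```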
