

Easy-open-ai

This repository contains a community-maintained Java library for OpenAI's API, the easiest way to use GPT-3.5/GPT-4 in your applications.

Maven Central


Overview

This community-maintained Java library provides a convenient way to interact with OpenAI's API, including Chat Completion, Moderation, Vision, Speech, Transcription, and Image Generation. The library encapsulates the necessary details, making it easy to integrate OpenAI's models into your Java applications. This project is not maintained by OpenAI; it is an unofficial library.

Table of Contents

Single Instance

Asynchronous

Multi-Asynchronous

Multi-Asynchronous-MultiKey

Contributing Guidelines

Please refer to CONTRIBUTING.md.

Running guidelines

All OpenAI keys should be read through the readKeys() function defined here. This function lets you read multiple keys at once, and for multithreaded tasks it is advised to use multiple keys to avoid rate limiting. To use this function you need a keys.txt file in your project root folder (feel free to edit it).
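
For reference, a minimal sketch of what such a helper could look like is shown below. This is an illustrative assumption, not the library's actual implementation; it assumes keys.txt holds one API key per line.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;

public class KeyReader {
    // Reads one API key per line from keys.txt in the project root,
    // skipping blank lines and surrounding whitespace.
    public static ArrayList<String> readKeys() throws IOException {
        ArrayList<String> keys = new ArrayList<>();
        for (String line : Files.readAllLines(Paths.get("keys.txt"))) {
            String key = line.trim();
            if (!key.isEmpty()) {
                keys.add(key);
            }
        }
        return keys;
    }
}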

Usage - syntax

Chat Completion API

To use the Chat Completion API, follow these steps:

Message message = Message.builder()
       .role("user")
       .content("what is the capital of Cambodia?")
       .build();

List<Message> messages = new ArrayList<>();
messages.add(message);

ChatCompletionRequest request = ChatCompletionRequest.builder()
       .model("gpt-3.5-turbo")
       .messages(messages)
       .build();

ChatCompletionResponse response = new EasyopenaiService(new DAOImpl()).chatCompletion("OPENAI_KEY",request);

Click here to jump to the code example.

Moderation API

Single Request, Single Response

To use the Moderation API, follow these steps:

ModerationAPIRequest request = ModerationAPIRequest.builder()
        .model("text-moderation-latest")
        .input("hello from the other side kill me now")
        .build();

ModerationAPIResponse res = new EasyopenaiService(new DAOImpl()).getmoderation("OPENAI_KEY",request);

Click here to jump to the code example.

Easy Vision API

The Easy Vision API can be used like this; feel free to add any number of images to the list of image URLs:

VisionApiResponse responseobj = new EasyVisionService().VisionAPI("OPENAI_KEY", new EasyVisionRequest()
       .setModel("gpt-4-vision-preview")
       .setPrompt("What are the differences between these images?")
       .setImageUrls(new ArrayList<String>() {{
           add("https://images.pexels.com/photos/268533/pexels-photo-268533.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1");
           add("https://images.pexels.com/photos/36717/amazing-animal-beautiful-beautifull.jpg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1");
       }})
       .setMaxtokens(1000)); // Optional - for more tokens, defaults to 300

Click here to jump to the code example.

Vision API

To use the Vision API, follow these steps:

  1. Initialize the EasyopenaiService object with an instance of DAOImpl:
EasyopenaiService obj = new EasyopenaiService(new DAOImpl());
  2. Create a VisionApiRequest object:
VisionApiRequest request = new VisionApiRequest();
  3. Create an ImageUrl object with the image URL and detail:
ImageUrl url = new ImageUrl();
url.setUrl("https://images.pexels.com/photos/18907092/pexels-photo-18907092/free-photo-of-a-photo-of-the-golden-gate-bridge-in-the-sky.jpeg");
url.setDetail("low");
  4. Create a list of Content objects to represent the messages:
Content content1 = new Content();
content1.setText("What’s in this image?");
content1.setType("text");

Content content2 = new Content();
content2.setImageUrl(url);
content2.setType("image_url");

ArrayList<Content> listofContent = new ArrayList<>();
listofContent.add(content1);
listofContent.add(content2);
  5. Create a MessageList object with the role ("user") and the list of content:
MessageList messageList = new MessageList();
messageList.setRole("user");
messageList.setContent(listofContent);
  6. Create a list of MessageList objects:
ArrayList<MessageList> listofMessage = new ArrayList<>();
listofMessage.add(messageList);
  7. Set various properties of the VisionApiRequest object:
request.setModel("gpt-4-vision-preview");
request.setMaxTokens(300);
request.setMessages(listofMessage);
  8. Make the API call and print the response:
VisionApiResponse res = obj.visionAPI(OPENAI_KEY, request);
System.out.println("Vision API Response is:" + res);

Click here to jump to the code example.

Speech API

The Speech API can be used like this; feel free to tweak the parameters:

SpeechRequest request = SpeechRequest.builder()
       .model("tts-1")
       .input("Easy OpenAI is the best solution.")
       .voice("alloy")
       .build();

ResponseBody response = new EasyopenaiService(new DAOImpl()).createSpeech("OPENAI_KEY",request);
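
The call returns the raw audio bytes. Assuming the ResponseBody is OkHttp's okhttp3.ResponseBody (an assumption based on the types used elsewhere in this README), the audio can be saved like this; the file name is just an example:

byte[] audio = response.bytes(); // okhttp3.ResponseBody#bytes() reads the full body
java.nio.file.Files.write(java.nio.file.Paths.get("speech.mp3"), audio);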

Click here to jump to the code example.

Easy Transcription API

The Easy Transcription API can be used like this; feel free to tweak the parameters:

EasyTranscriptionRequest request = EasyTranscriptionRequest.builder()
      .filepath("/Users/namankhurpia/Desktop/audio.mp3")
      .model("whisper-1")
      .build();

ResponseBody response = new EasyTranscriptionService().EasyTranscription("OPENAI_KEY", request);

Click here to jump to the code example.

Transcription API

Original Transcription API by OpenAI

The Transcription API can be used like this; feel free to tweak the parameters:

File audioFile = new File("/Users/namankhurpia/Desktop/audio.mp3");
MultipartBody.Part filePart = MultipartBody.Part.createFormData(
       "file",
       audioFile.getName(),
       RequestBody.create(MediaType.parse("audio/*"), audioFile)
);

RequestBody model = RequestBody.create(MediaType.parse("text/plain"), "whisper-1");
RequestBody language = RequestBody.create(MediaType.parse("text/plain"), "");
RequestBody prompt = RequestBody.create(MediaType.parse("text/plain"), "");
RequestBody response_format = RequestBody.create(MediaType.parse("text/plain"), "");
RequestBody temperature = RequestBody.create(MediaType.parse("text/plain"), "");

ResponseBody response = new EasyopenaiService(new DAOImpl()).createTranscriptions("OPENAI_KEY", filePart, model, language, prompt, response_format, temperature);

Click here to jump to the code example.

Image Generation API

The Image Generation API can be used like this; feel free to tweak the parameters:

ImageRequest imageRequest  = ImageRequest.builder()
       .prompt("Generate a cute dog playing with a frisbee")
       .model("dall-e-2")
       .build();

ImageResponse response = new EasyopenaiService(new DAOImpl()).createImage("OPENAI_KEY",imageRequest);

Click here to jump to the code example.

Async Chat Completion API

To use the Chat Completion API asynchronously, follow these steps:

  1. Initialize the EasyopenaiAsyncService object with an instance of AsyncDAOImpl:

    EasyopenaiAsyncService obj = new EasyopenaiAsyncService(new AsyncDAOImpl());
  2. Create a list of ChatMessage objects to represent the conversation:

    ChatMessage message = new ChatMessage();
    message.setRole("user");
    message.setContent("what is the capital of Cambodia?");
    
    List<ChatMessage> messages = new ArrayList<>();
    messages.add(message);
  3. Create a ChatCompletionRequest object:

    ChatCompletionRequest request = new ChatCompletionRequest();
    request.setModel("gpt-3.5-turbo");
    request.setMessages(messages); // you can include earlier conversation messages here as well
  4. Make the asynchronous API call:

    ChatCompletionResponse res = obj.getAsyncChatCompletion(OPENAI_KEY, request);

Click here to jump to the code example.

Async Moderation API

To use the Moderation API asynchronously, follow these steps:

  1. Create a ModerationAPIRequest object:

    ModerationAPIRequest request = new ModerationAPIRequest();
    request.setInput("kill me now");
    request.setModel("text-moderation-latest");
  2. Initialize the EasyopenaiAsyncService object with an instance of AsyncDAOImpl:

    EasyopenaiAsyncService obj = new EasyopenaiAsyncService(new AsyncDAOImpl());
  3. Make the asynchronous API call:

    ModerationAPIResponse res = obj.getASyncModeration(OPENAI_KEY, request);

Click here to jump to the code example.

Multithreaded Async Chat Completion API

For multithreaded, concurrent calls with the Chat Completion API, follow these steps:

  1. Create a ChatCompletionRequestList object:

    ChatCompletionRequestList list = new ChatCompletionRequestList(new ArrayList<ChatCompletionRequest>());
  2. Add multiple ChatCompletionRequest objects to the list:

    // Example request 1
    ChatCompletionRequest requestchat = new ChatCompletionRequest();
    requestchat.setModel("gpt-3.5-turbo");
    ChatMessage message = new ChatMessage();
    message.setRole("user");
    message.setContent("what is the capital of India?");
    List<ChatMessage> messages = new ArrayList<>();
    messages.add(message);
    requestchat.setMessages(messages);
    list.add(requestchat);
    
    // Add more requests as needed (requestchat2, requestchat3, requestchat4, etc.)
  3. Make the multi-asynchronous API call:

    EasyopenaiConcurrentService concurrentCalls = new EasyopenaiConcurrentService();
    ChatCompletionResponseList responseList = concurrentCalls.CallMultipleChatCompletionAPI(OPENAI_KEY, list);
    System.out.println(responseList);

Click here to jump to the code example.

Multithreaded Async Moderation API

For multi-threading and concurrent calls with the Moderation API, follow these steps:

  1. Create a ModerationRequestList object:

    ModerationRequestList requestList = new ModerationRequestList(new ArrayList<ModerationAPIRequest>());
  2. Add multiple ModerationAPIRequest objects to the list:

    // Example request 1
    ModerationAPIRequest request = new ModerationAPIRequest();
    request.setInput("kill me now");
    request.setModel("text-moderation-latest");
    requestList.add(request);
    
    // Add more requests as needed (request2, request3, request4, etc.)
  3. Make the multi-asynchronous API call:

    EasyopenaiConcurrentService concurrentCalls = new EasyopenaiConcurrentService();
    ModerationResponseList responseList = concurrentCalls.CallMultipleModerationAPI(OPENAI_KEY, requestList);
    System.out.println(responseList);

Click here to jump to the code example.

EMMC (EasyOpenAI Multithreaded MultiKey ChatCompletionAPI)

This allows you to make multiple Chat Completion calls with multiple keys. All API calls are executed in parallel, and the responses are collected and returned once all the threads have finished.

The CompletableFuture class is used for the implementation. Kindly refer to the EasyopenaiConcurrentService.java file for the source.
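
For reference, the general fan-out/fan-in pattern described above looks roughly like the sketch below. This is a generic illustration, not the library's exact code; doApiCall is a hypothetical placeholder for a single blocking API call.

List<CompletableFuture<String>> futures = new ArrayList<>();
for (String key : keyList) {
    // each call runs on its own worker thread from the common pool
    futures.add(CompletableFuture.supplyAsync(() -> doApiCall(key)));
}
// wait until every call has finished
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
List<String> results = new ArrayList<>();
for (CompletableFuture<String> f : futures) {
    results.add(f.join()); // safe to collect now, all futures are complete
}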

Example usage:

public static void CallMultipleChatCompletionAPI_multikey_Test()
{
     // this function reads multiple keys from the keys.txt file
     ArrayList<String> keyList = readKeys();
     
     EasyopenaiConcurrentService concurrentCalls = new EasyopenaiConcurrentService();
   
     ChatCompletionRequestList list = new ChatCompletionRequestList(new ArrayList<ChatCompletionRequest>());
   
     // First request
     ChatCompletionRequest requestchat = new ChatCompletionRequest();
     requestchat.setModel("gpt-3.5-turbo");
     Message message = new Message();
     message.setRole("user");
     message.setContent("what is the capital of India?");
     List<Message> messages = new ArrayList<>();
     messages.add(message);
     requestchat.setMessages(messages);
     list.add(requestchat);
   
   
     ChatCompletionRequest requestchat2 = new ChatCompletionRequest();
     requestchat2.setModel("gpt-3.5-turbo");
     Message message2 = new Message();
     message2.setRole("user");
     message2.setContent("what is the capital of Cambodia?");
     List<Message> messages2 = new ArrayList<>();
     messages2.add(message2);
     requestchat2.setMessages(messages2);
     list.add(requestchat2);
   
   
     ChatCompletionRequest requestchat3 = new ChatCompletionRequest();
     requestchat3.setModel("gpt-3.5-turbo");
     Message message3 = new Message();
     message3.setRole("user");
     message3.setContent("what is the capital of new zealand?");
     List<Message> messages3 = new ArrayList<>();
     messages3.add(message3);
     requestchat3.setMessages(messages3);
     list.add(requestchat3);
   
   
     ChatCompletionRequest requestchat4 = new ChatCompletionRequest();
     requestchat4.setModel("gpt-3.5-turbo");
     Message message4 = new Message();
     message4.setRole("user");
     message4.setContent("what is the capital of hawaii? and what is 2+2?");
     List<Message> messages4 = new ArrayList<>();
     messages4.add(message4);
     requestchat4.setMessages(messages4);
     list.add(requestchat4);
   
   
   
     ChatCompletionResponseList responseList = concurrentCalls.CallMultipleChatCompletionAPI(keyList, list);
     System.out.println("response is " + responseList);
}

Dependencies

Ensure you have the required dependencies installed before using the OpenAI API wrapper.

Maven

<dependency>
    <groupId>io.github.namankhurpia</groupId>
    <artifactId>easyopenai</artifactId>
    <version>x.x.x</version>
</dependency>

Gradle (Groovy)

implementation group: 'io.github.namankhurpia', name: 'easyopenai', version: 'x.x.x'

Gradle (Java)

implementation 'io.github.namankhurpia:easyopenai:x.x.x'

Gradle (Kotlin)

implementation("io.github.namankhurpia:easyopenai:x.x.x")

SBT

libraryDependencies += "io.github.namankhurpia" % "easyopenai" % "x.x.x"

Ivy

<dependency org="io.github.namankhurpia" name="easyopenai" rev="x.x.x"/>

Grape

@Grapes(
@Grab(group='io.github.namankhurpia', module='easyopenai', version='x.x.x')
)

Leiningen

[io.github.namankhurpia/easyopenai "x.x.x"]

Buildr

'io.github.namankhurpia:easyopenai:jar:x.x.x'


easy-open-ai's Issues

No JSON Mode implementation.

You don't provide any examples or documentation on how to actually use the new .responseFormat() method for the ChatCompletionRequest class. Looking through the source code doesn't give me any indication of how to use it either.

Chat Completion Seed: Double instead of integer

The official API states:

seed
integer or null
Optional
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.

Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.

In your implementation you are using Double. Was this a change in the API, or is this intentional?

Assistant API

I am using https://github.com/TheoKanning/openai-java and understand that this repo is its continuation.
Hence, I am wondering if you plan to include the Assistant API?

Thanks for the work you do!

getAsync* methods are actually synchronous

All getAsync methods are actually synchronous.

For example, getAsyncChatCompletion (in AsyncDAOImpl) returns a plain POJO (ChatCompletionResponse) when it should return a Future.

You create a CompletableFuture (all well so far), but the method ends with

return (ChatCompletionResponse)future.get();

Basically, it blocks the current thread until the future completes.

For the methods to be actually asynchronous, they need to return a Call<> or a Future.
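
To illustrate the point, here is a hedged sketch with hypothetical helper names, not the library's actual source:

// What the issue describes: a CompletableFuture is created, but the method
// immediately blocks on it, so the caller gains nothing.
public ChatCompletionResponse getChatCompletionBlocking(String key, ChatCompletionRequest request) throws Exception {
    CompletableFuture<ChatCompletionResponse> future =
            CompletableFuture.supplyAsync(() -> callApi(key, request)); // callApi is a hypothetical blocking call
    return future.get(); // blocks the calling thread until the result arrives
}

// A truly asynchronous variant hands the future back to the caller instead:
public CompletableFuture<ChatCompletionResponse> getChatCompletionAsync(String key, ChatCompletionRequest request) {
    return CompletableFuture.supplyAsync(() -> callApi(key, request)); // caller decides when and where to wait
}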

Using .topP leads to error

io.github.namankhurpia.Pojo.ChatCompletion.ChatCompletionRequest request = io.github.namankhurpia.Pojo.ChatCompletion.ChatCompletionRequest.builder()
        .model("gpt-4-turbo-preview")
        .messages(messages)
        .temperature(1.0d)
        .topP(1.0d)
        .seed(value)
        .build();

Unsuccessful response with HTTP status code 400 and error body:

{
  "error": {
    "message": "Unrecognized request argument supplied: topP",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}

Invalid body is created for ChatCompletionRequest

Describe the bug
The wrong JSON format is used for the fields maxTokens and responseFormat.

To Reproduce
Steps to reproduce the behavior:
Unsuccessful response with HTTP status code 400 and error body:

{
  "error": {
    "message": "Unrecognized request arguments supplied: maxTokens, responseFormat",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}

Java Version:

  • JDK 21
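
The topP, maxTokens, and responseFormat errors above point at the same kind of problem: the Java field names appear to be sent verbatim instead of being mapped to the API's snake_case names (top_p, max_tokens, response_format). If the request POJOs are serialized with Gson (an assumption about the library's internals), the usual fix is a field-name mapping like this hypothetical excerpt:

// Hypothetical excerpt of a request POJO - not the library's actual source.
public class ChatCompletionRequestExample {
    @com.google.gson.annotations.SerializedName("top_p")
    private Double topP;           // serialized as "top_p" instead of "topP"

    @com.google.gson.annotations.SerializedName("max_tokens")
    private Integer maxTokens;     // serialized as "max_tokens" instead of "maxTokens"

    @com.google.gson.annotations.SerializedName("response_format")
    private Object responseFormat; // serialized as "response_format" instead of "responseFormat"
}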

Any reason for Java 18 instead of 17?

Hey!

I finally got around to integrating your version of the API.
Unfortunately, I cannot run it, since I am running JDK 17.

Any specific reason you are not using the LTS version 17, but rather 18?

Kind regards
Dustin
