openai-php / client
⚡️ OpenAI PHP is a supercharged community-maintained PHP API client that allows you to interact with the OpenAI API.
License: MIT License
Hi guys,
I am using gpt-3.5-turbo. Whenever I use it to get an answer, it forgets the previous one.
$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'Message here'],
    ],
]);
How do I link it to the previous message? Using the id from the response, or something?
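The Chat Completions API is stateless, so there is no id to link to: the usual approach is to resend the earlier turns yourself on every call. A minimal sketch, assuming `$client` is an already-initialized client and the message contents are placeholders:

```php
<?php
// The chat endpoint has no memory; to keep context, accumulate the
// conversation locally and send the whole history with each request.
$messages = [
    ['role' => 'user', 'content' => 'Message here'],
];

$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => $messages,
]);

// Append the assistant's reply and the next user turn, then call again.
$messages[] = ['role' => 'assistant', 'content' => $response->choices[0]->message->content];
$messages[] = ['role' => 'user', 'content' => 'Follow-up question here'];

$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => $messages,
]);
```

Note that the full history counts against the model's token limit, so long conversations eventually need to be truncated or summarized.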
Hi 👋
I set up the OpenAI client to use the gpt-3.5-turbo model; however, in the usage report it appears as gpt-3.5-turbo-0301.
My configurations are set to use gpt-3.5-turbo:
Although they are almost the same, the documentation states the following:
In my tests, I noticed that it is really not following the system instructions.
I could not find the part of the code responsible, so I could not fix it and submit a pull request. How can we check that?
Hi guys. Thanks for your work. I am getting this kind of error.
Hello,
I created and changed my key today and yesterday, but I still get the error below. What can I do?
Fatal error: Uncaught OpenAI\Exceptions\ErrorException: You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys. in D:\OpenServer\domains\localhost\openai\vendor\openai-php\client\src\Transporters\HttpTransporter.php:61 Stack trace: #0 D:\OpenServer\domains\localhost\openai\vendor\openai-php\client\src\Resources\Completions.php(26): OpenAI\Transporters\HttpTransporter->requestObject() #1 D:\OpenServer\domains\localhost\openai\test.php(9): OpenAI\Resources\Completions->create() #2 {main} thrown in D:\OpenServer\domains\localhost\openai\vendor\openai-php\client\src\Transporters\HttpTransporter.php on line 61
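The error means no Authorization header reached the API, which usually points to an empty or misread key rather than a revoked one. A minimal sketch of a safer setup (assuming the README's `OpenAI::client()` entry point; the environment variable name is just an example):

```php
<?php

require __DIR__ . '/vendor/autoload.php';

// Read the key from the environment and fail fast if it is empty,
// so the client never sends a blank Authorization header.
$apiKey = trim((string) getenv('OPENAI_API_KEY'));

if ($apiKey === '') {
    exit("OPENAI_API_KEY is not set\n");
}

$client = OpenAI::client($apiKey);
```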
I can't find it in the documentation; maybe I'm missing something. But the answers are truncated. What could be the reason?
Thank you for providing this code. When I try ..., I get this error:
Fatal error: Uncaught Error: Call to undefined method OpenAI\Client::chat()
I've been able to run the code snippets fine as I work down the page. I get this error when I try the chat method. Any suggestions? Thanks!
Hello, I see this example here: https://github.com/openai-php/client#created-streamed
However, it would be helpful to see a more complete example that demonstrates how to integrate this into a web page.
For example, if we could get a demo that connects the dots to JS and HTML the way this MDN demo does, that would be perfect:
https://github.com/mdn/dom-examples/tree/main/server-sent-events
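Until a full demo exists, here is a rough server-side sketch of how the streamed response could be bridged to an SSE endpoint that the MDN browser code can consume. It assumes the `createStreamed()` method from the linked README section; the output-flushing details vary by server setup:

```php
<?php
// sse.php - forwards completion tokens to the browser as Server-Sent Events.
require __DIR__ . '/vendor/autoload.php';

header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

$client = OpenAI::client(getenv('OPENAI_API_KEY'));

$stream = $client->completions()->createStreamed([
    'model' => 'text-davinci-003',
    'prompt' => 'PHP is',
    'max_tokens' => 100,
]);

foreach ($stream as $response) {
    // Each iteration yields a partial response containing the next token(s).
    echo 'data: ' . json_encode($response->choices[0]->text) . "\n\n";
    @ob_flush();
    flush();
}

echo "data: [DONE]\n\n";
```

On the browser side, an `EventSource('sse.php')` listener, as in the MDN demo, would append each event's data to the page.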
I want to get ?, but I can't find it anywhere.
Hi, thank you for this library that is very easy to use.
I created a Symfony bundle by copying things from the Laravel integration. You can find it here: https://github.com/GromNaN/openai-symfony (work in progress).
What do you think of moving this project into the openai-php organisation? It would be good for this project to provide an integration with a second major framework. The bundle is not published on Packagist yet, so it would be a clean start.
It seems that with GPT-4 it takes too long to receive a response from the API. The reference mentions stream = true to start receiving the first tokens immediately and avoid a timeout.
Why not support streamed responses? It is so nice for the user: it shows that the AI is working. If the wait is long, the user may think the AI is not working.
The official Python library allows a timeout to be set on requests. It would be really helpful for production applications to be able to set a timeout on requests so we don't keep our web workers hanging if there are hiccups in connections or issues on the OpenAI side.
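Until the client exposes such an option, one possible workaround (assuming a factory that accepts a custom PSR-18 HTTP client, which newer releases of this package provide as `OpenAI::factory()`) is to inject a Guzzle client carrying its own timeouts:

```php
<?php

use GuzzleHttp\Client as GuzzleClient;

// Assumed API: OpenAI::factory()->withHttpClient(). The timeout options
// themselves are standard Guzzle request settings.
$httpClient = new GuzzleClient([
    'connect_timeout' => 5, // seconds to establish the TCP connection
    'timeout' => 30,        // seconds for the whole request
]);

$client = OpenAI::factory()
    ->withApiKey(getenv('OPENAI_API_KEY'))
    ->withHttpClient($httpClient)
    ->make();
```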
I get the following error: Undefined array key "events"
Here is the stack trace:
Undefined array key "events"
at vendor/openai-php/client/src/Responses/FineTunes/RetrieveResponse.php:52
48▕ public static function from(array $attributes): self
49▕ {
50▕ $events = array_map(fn (array $result): RetrieveResponseEvent => RetrieveResponseEvent::from(
51▕ $result
➜ 52▕ ), $attributes['events']);
53▕
54▕ $resultFiles = array_map(fn (array $result): RetrieveResponseFile => RetrieveResponseFile::from(
55▕ $result
56▕ ), $attributes['result_files']);
Here is the code I am running: $response = $client->fineTunes()->list();
...in some way, to integrate with or offer some of the embedding and search features it offers.
There are JavaScript and Python versions of the framework.
thanks!
Hello, $client = OpenAI::client($yourApiKey);
returns {"error":"Parse error: syntax error, unexpected '?', expecting function (T_FUNCTION) or const (T_CONST)"}
Any possible reasons? Thanks!
Hi, I encountered an issue with retrieving the list of fine-tunes.
I suspect this is because the status details show that the file I uploaded was invalid. However, this is not really an issue with the package. But it would be great if the package could also handle this scenario.
I hope this can be resolved soon. Thanks!
OpenAI\Responses\FineTunes\RetrieveResponseFile::__construct(): Argument #8 ($statusDetails) must be of type ?array, string given, called in /var/www/html/vendor/openai-php/client/src/Responses/FineTunes/RetrieveResponseFile.php on line 50
Hello,
I have a persistent error and can't get over it.
require __DIR__ . '/vendor/autoload.php';

use OpenAI\Client;
use OpenAI\Api\Authentication\BearerAuthentication;
use OpenAI\Resources\Completions\Create as CompletionCreate;

$apiKey = 'sk-fEp........';
$client = new Client(new BearerAuthentication($apiKey));

function generateText($client, $model, $prompt, $length, $temperature = 0.5) {
    $response = $client->completions()->create(
        $model,
        (new CompletionCreate())
            ->setPrompt($prompt)
            ->setMaxTokens($length)
            ->setTemperature($temperature)
    );

    return $response->getChoices()[0]->getText();
}
Fatal error: Uncaught Error: Class "OpenAI\Api\Authentication\BearerAuthentication" not found in D:\OpenServer\domains\localhost\openai\test.php:10 Stack trace: #0 {main} thrown in D:\OpenServer\domains\localhost\openai\test.php on line 10
We've just noticed that, within this project, we are using the terminology ApiToken and $apiToken, where in fact it should be ApiKey and $apiKey, as mentioned in multiple places in the OpenAI documentation.
TypeError: OpenAI\Responses\Completions\CreateResponseChoice::__construct(): Argument #4 ($finishReason) must be of type string, null given, called in /vendor/openai-php/client/src/Responses/Completions/CreateResponseChoice.php on line 26
File "/app/Actions/Document/CreateNewContent.php", line 64, in App\Actions\Document\CreateNewContent::complete
$result = OpenAI::completions()->create($parameters);
File "/app/Actions/Document/CreateNewContent.php", line 39, in App\Actions\Document\CreateNewContent::App\Actions\Document\{closure}
return $this->complete($template, $document, $data);
File "/app/Actions/Document/CreateNewContent.php", line 42, in App\Actions\Document\CreateNewContent::create
});
File "/app/Http/Controllers/App/DocumentController.php", line 182, in App\Http\Controllers\App\DocumentController::writeForMe
$choices = $contentCreator->create($template, $document, $data);
File "/public/index.php", line 52
$request = Request::capture()
...
(77 additional frame(s) were not displayed)
Sometimes I'm getting this error:
OpenAI\Responses\Completions\CreateResponseChoice::__construct(): Argument #4 ($finishReason) must be of type string, null given, called in /var/www/vendor/openai-php/client/src/Responses/Completions/CreateResponseChoice.php on line 24
$attributes from \OpenAI\Responses\Completions\CreateResponseChoice::from:
array(4) {
["text"]=>
string(972) " ... something here ... "
["index"]=>
int(0)
["logprobs"]=>
NULL
["finish_reason"]=>
NULL
}
$client = OpenAI::client($key);

$result = $client->completions()->create([
    'model' => 'text-davinci-003',
    'prompt' => $input->getArgument('question'),
    'temperature' => 0.7,
    'top_p' => 1,
    'frequency_penalty' => 0,
    'presence_penalty' => 0,
    'max_tokens' => 600,
]);
Your access was terminated due to violation of our policies, please check your email for more information. If you believe this is
in error and would like to appeal, please contact [email protected].
Please state somewhere near the top of your README that it’s an “unofficial" or "community-maintained” library.
Can you add support for a proxy setting for the HTTP client? For now, the classes in openai-php/client are all final, so we can't extend them to customize this.
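A hedged sketch of how this could look if the package exposed a factory accepting a custom PSR-18 client (the `OpenAI::factory()` API is an assumption here; the proxy settings themselves are standard Guzzle options):

```php
<?php

use GuzzleHttp\Client as GuzzleClient;

// Configure the proxy on the injected HTTP client instead of
// extending the package's final classes.
$httpClient = new GuzzleClient([
    'proxy' => 'http://127.0.0.1:8080', // illustrative proxy address
]);

$client = OpenAI::factory()
    ->withApiKey(getenv('OPENAI_API_KEY'))
    ->withHttpClient($httpClient)
    ->make();
```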
Does this support ChatCompletion (gpt-3.5-turbo)?
import openai

openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
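For reference, the same multi-turn request translated to this library's chat resource would look roughly like this (assuming a client version that ships the `chat()` resource):

```php
<?php

$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a helpful assistant.'],
        ['role' => 'user', 'content' => 'Who won the world series in 2020?'],
        ['role' => 'assistant', 'content' => 'The Los Angeles Dodgers won the World Series in 2020.'],
        ['role' => 'user', 'content' => 'Where was it played?'],
    ],
]);

echo $response->choices[0]->message->content;
```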
Hi,
Quick question, does this package include a rate limiter or do we need to do it ourselves?
Thanks,
I am trying to call OpenAi in a test website for future projects, but encountered a problem.
On my website I have a button and output field.
When clicking the button, the following code is executed via JS:
<script>
console.log("hello");
window.onload = function() {
    document.getElementById("submit-request").addEventListener("click", function() {
        var prompt = document.getElementById("prompt-input").value;
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/wp-admin/admin-ajax.php?action=make_request", true);
        xhr.onreadystatechange = function() {
            if (xhr.readyState === 4 && xhr.status === 200) {
                var response = JSON.parse(xhr.responseText);
                document.getElementById("response-output").innerHTML = JSON.stringify(response);
            }
        };
        xhr.send();
    });
}
</script>
The PHP function that is behind the "make_request" is the following:
add_action( 'wp_ajax_make_request', 'make_request' );
add_action( 'wp_ajax_nopriv_make_request', 'make_request' );

function make_request() {
    $client = OpenAI::client('sk-xxx');
    $result = $client->completions()->create([
        'model' => 'text-davinci-003',
        'prompt' => 'PHP is',
        'max_tokens' => 6,
    ]);

    echo $result['choices'][0]['text'];
    wp_die();
}
As you can see it is just the basic example from the Readme for the moment. The API key is removed for obvious reasons.
This is the error I got in the webbrowser console:
"GET https://xx.host.com/wp-admin/admin-ajax.php?action=make_request&prompt=kn 500"
What is the problem? Is there some extra authorisation I should do for openai-php/client?
I'm facing a problem with the default params in the OpenAI API fine-tunes.
Trying to create a fine-tune with this code:
$responseFineTuning = $openAIClient->fineTunes()->create([
    'training_file' => 'my_file_id',
    'model' => 'davinci',
]);
results in a throw:
TypeError
OpenAI\Responses\FineTunes\RetrieveResponseHyperparams::__construct(): Argument #1 ($batchSize) must be of type int, null given, called in /code/vendor/openai-php/client/src/Responses/FineTunes/RetrieveResponseHyperparams.php on line 39
at vendor/openai-php/client/src/Responses/FineTunes/RetrieveResponseHyperparams.php:20
16▕ * @use ArrayAccessible<array{batch_size: int, learning_rate_multiplier: float, n_epochs: int, prompt_loss_weight: float}>
17▕ */
18▕ use ArrayAccessible;
19▕
➜ 20▕ private function __construct(
21▕ public readonly int $batchSize,
22▕ public readonly float $learningRateMultiplier,
23▕ public readonly int $nEpochs,
24▕ public readonly float $promptLossWeight,
+3 vendor frames
4 app/App/Console/Commands/GenerateFineTuning.php
OpenAI\Resources\FineTunes::create(["file-XXXXXXXXXXXXX", "davinci"])
+13 vendor frames
18 artisan:37
Illuminate\Foundation\Console\Kernel::handle(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
Looking at the API, $batchSize is an optional param and null by default.
local.ERROR: OpenAI\Responses\Completions\CreateResponseChoice::__construct(): Argument #4 ($finishReason) must be of type string, null given, called in /data1/chatgpt/vendor/openai-php/client/src/Responses/Completions/CreateResponseChoice.php on line 22 {"exception":"[object] (TypeError(code: 0): OpenAI\Responses\Completions\CreateResponseChoice::__construct(): Argument #4 ($finishReason) must be of type string, null given, called in /data1/chatgpt/vendor/openai-php/client/src/Responses/Completions/CreateResponseChoice.php on line 22 at /data1/chatgpt/vendor/openai-php/client/src/Responses/Completions/CreateResponseChoice.php:9)
[stacktrace]
Since the OpenAI API is available in Azure, is there a possibility to change the endpoint to Azure, or is there a plan to add this feature?
Hello!
I was attempting to replace some of the underlying concrete implementations of this project in order to send concurrent API requests to OpenAI to generate multiple completions at once, but due to the architecture of the Resources, they will always make a Request and generate a Response.
For example, 10 synchronous requests to the /completions
endpoint with this library can take up to 50 seconds, depending on what's being generated.
I did a basic implementation using Laravel's Http client utilizing pooling (basically Guzzle Async), and I can generate the same 10 completions in ~4-5 seconds.
Any thoughts on adding concurrent/async support in the future, or at least some way of collecting a pool of Requests, so developers could process them on their own?
Pay as you go users can use up to 3000 requests /minute after 48 hours.
Thanks!
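For comparison, a rough sketch of what pooled requests look like with plain Guzzle promises, bypassing this library's Resources entirely (the endpoint path, payload, and request count are illustrative):

```php
<?php

use GuzzleHttp\Client;
use GuzzleHttp\Promise\Utils;

$http = new Client([
    'base_uri' => 'https://api.openai.com/v1/',
    'headers' => ['Authorization' => 'Bearer ' . getenv('OPENAI_API_KEY')],
]);

// Fire 10 completion requests concurrently instead of serially.
$promises = [];
for ($i = 0; $i < 10; $i++) {
    $promises[] = $http->postAsync('completions', [
        'json' => ['model' => 'text-davinci-003', 'prompt' => 'PHP is', 'max_tokens' => 50],
    ]);
}

// Wait for all; total latency is roughly that of the slowest request.
$responses = Utils::unwrap($promises);
```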
Not really an issue - but why does it need PHP 8.1+ to run? I would like to use the official client... anyway, besides that, I really love GPT-3...
Hi!!!
I'm doing some tests, and for the chat I'm getting this error; the other resources I tested worked, except the chat.
The code is exactly as in the example:
$api_key = getenv('OPENAI_KEY');
$organization = getenv('ORGANIZATION');

$client = \OpenAI::client($api_key, $organization);

$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'Hello!'],
    ],
]);

var_dump($response);
Hi @nunomaduro
First of all, thank you for reviewing and merging my previous PRs so quickly! 👍
It's a pleasure to help you with this package. I've already learned a lot about (Open)AI, and even more from the way you build a clean package.
I am a huge fan of using fully typed responses and requests. Therefore I gave it a try with the moderations endpoint to see how it could work.
What I ended up with is the following:
$client = OpenAI::client('TOKEN');
$request = new ModerationCreateRequest(input: 'I want to kill them.', model: ModerationModel::TextModerationLatest);
$response = $client->moderations()->create($request);
dump($response->id); // modr-5vvCuUd3dRjgIumIZIu0yBepv5qwL
dump($response->model); // text-moderation-003
dump($response->results[0]->flagged); // true
dump($response->results[0]->categories[0]->toArray()); // ["category" => "hate", "violated" => true, "score" => 0.40681719779968 ]
In my opinion this gives developers a better UX than plain arrays.
More or less I took the approach Steve McDougall described here: https://laravel-news.com/working-with-data-in-api-integrations
I also implemented request factories to give the user various options for creating the request instance:
// create the request directly
$request = new ModerationCreateRequest(
    input: 'I want to kill them.',
    model: ModerationModel::TextModerationLatest,
);

// pass an array to a factory instance
$request = (new ModerationCreateRequestFactory)->make([
    'input' => 'I want to kill them.',
    'model' => ModerationModel::TextModerationLatest,
]);

// pass an array to a static factory method
$request = ModerationCreateRequestFactory::new([
    'input' => 'I want to kill them.',
    'model' => ModerationModel::TextModerationLatest,
]);
If you want to have a look, I pushed the POC here: https://github.com/gehrisandro/openai-php-client/tree/poc-strong-typed-requests-and-responses
Wanted to start with thanks a lot for this great package! Really appreciate the work you've done here. ❤️
I'm currently trying to mock responses from OpenAI; however, this appears to not be easily done, because everything is marked final, which prevents mocking anything. This makes for a very painful developer experience when testing.
Maybe we can use a factory or something in the OpenAI::client() static method, and remove final on the Client, so it's at least possible to mock the client itself?
final class OpenAIClientFactory
{
    public function make(string $apiToken, string $organization = null): Client
    {
        // ...
    }
}

final class OpenAI
{
    /**
     * Creates a new Open AI Client with the given API token.
     */
    public static function client(string $apiToken, string $organization = null): Client
    {
        return app(OpenAIClientFactory::class)->make($apiToken, $organization);
    }
}

$this->app->bind(OpenAIClientFactory::class);

// TestCase
use OpenAI\Client;

$client = Mockery::mock(Client::class);

app()->bind(OpenAIClientFactory::class, function () use ($client) {
    $mock = Mockery::mock(OpenAIClientFactory::class);
    $mock->shouldReceive('make')->andReturn($client);

    return $mock;
});

$client->shouldReceive('...')->andReturn('...');
Let me know your thoughts, thanks!
How can I create a completion request that then return paraphrased variants of my given prompt?
I am reading my API key from a file. My editor was adding a newline if the file was open. The result was that any call made via the client would error with the message "you must provide a model parameter", even though a model parameter was being sent.
I fixed my issue by simply trimming the result of my call to get the file's contents but the error message from the API was very confusing. Maybe just add the trim before sending to the API?
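A minimal sketch of the suggested client-side guard (the key value and file name are illustrative):

```php
<?php

// A key file saved by an editor often ends with a trailing newline,
// which corrupts the Authorization header; trim it before use.
$raw = "sk-example-key\n";   // what file_get_contents('openai.key') might return
$apiKey = trim($raw);

var_dump($apiKey === 'sk-example-key'); // bool(true)
```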
I could not make a request with the context that preceded the conversation. Is this functionality not implemented, or am I looking at the wrong function? Maybe you don't call it context, but something else?
Is anyone else having this issue? This is a brand-new Laravel project on Windows, running through php artisan serve.
I'm just running the code from the example in the docs.
My code:
Route::get('/', function () {
    $client = OpenAI::client(config('app.open-ai-key'));

    $prompt = <<<TEXT
    Extract the requirements for this job offer as a list.
    "We are seeking a PHP web developer to join our team. The ideal candidate will have experience with PHP, MySQL, HTML, CSS, and JavaScript. They will be responsible for developing and managing web applications and working with a team of developers to create high-quality and innovative software. The salary for this position is negotiable and will be based on experience."
    TEXT;

    $result = $client->completions()->create([
        'model' => 'text-davinci-002',
        'prompt' => $prompt,
    ]);

    ray($result);
});
Flare exception:
https://flareapp.io/share/xPQoaD25#F47
Sorry for my English, but I want to know whether $parameters is an array of arrays, or what kind of structure I need to use, because the documentation says I need JSONL, and I don't know how to do that here in PHP.
$client->fineTunes()->create($parameters);
Thanks
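The JSONL lives in the training file you upload beforehand; `$parameters` itself is a plain associative array referencing that file's ID. A hedged sketch following the README's resource names:

```php
<?php

// 1) Upload the JSONL training data (one JSON object per line in the file).
$file = $client->files()->upload([
    'purpose' => 'fine-tune',
    'file' => fopen('training.jsonl', 'r'),
]);

// 2) $parameters is a plain associative array, not JSONL.
$response = $client->fineTunes()->create([
    'training_file' => $file->id,
    'model' => 'davinci',
]);
```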
Hope everyone is doing well.
Currently the architecture of the component is such that src/Transporters/HttpTransporter.php, in its requestObject method, checks for the presence of $response['error']; if there is none, it returns the response object (passing it to the corresponding CreateResponse class).
I suggest redefining the behavior of the transporter so that it returns any possible error and passes it along to the CreateResponse class. I propose that the CreateResponse class process the response attributes, including those with errors, and ultimately forward the error details to the application that initiated the API call.
The benefits of my idea:
Currently, I don't see how these 4 things are possible under the current behavior of the transporter.
The following test can show better what I mean:
public function test_client_handles_error_response_correctly(): void
{
    $client = OpenAI::client('sk-````');

    $response = $client->completions()->create([
        'prompt' => 'PHP is',
        'model' => 'wrongModel', // invoke error
        'max_tokens' => 20,
        'temperature' => 0,
    ]);

    // Make assertions
    $this->assertNotEmpty($response->error["message"]);
    $this->assertEquals(500, $response->error["status_code"]);
}
Currently, as per the README example, I would have to access a completion this way:
echo $result['choices'][0]['text'];
OR
echo $result->choices[0]->text;
Proposal using illuminate/collections:
echo $result->choices->first()->text;
In my opinion, it doesn't have any major impact but still cuts down the direct array access
see: https://beta.openai.com/docs/api-reference/completions/create#completions/create-stream
should we support stream response?
Hi,
All my prompt results are getting truncated; typically less than a sentence is returned. Any idea why? Example below:
Any help is much appreciated.
Wyatt
My prompt:
"Write me a story."
Result:
[model] => text-davinci-003
[choices] => Array
    (
        [0] => OpenAI\Responses\Completions\CreateResponseChoice Object
            (
                [text] =>
Once upon a time, there was a young girl named Daisy who was
                [index] => 0
                [logprobs] =>
                [finishReason] => length
            )
    )
[usage] => OpenAI\Responses\Completions\CreateResponseUsage Object
    (
        [promptTokens] => 4
        [completionTokens] => 16
        [totalTokens] => 20
    )
)
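The dump itself hints at the cause: finishReason is "length" and completionTokens is 16, which matches the completions endpoint's default max_tokens of 16. Raising it should stop the truncation; a sketch:

```php
<?php

// finish_reason "length" means the completion hit max_tokens.
// The completions endpoint defaults max_tokens to 16, so raise it.
$result = $client->completions()->create([
    'model' => 'text-davinci-003',
    'prompt' => 'Write me a story.',
    'max_tokens' => 500, // illustrative; budget for the longest answer you expect
]);

echo $result->choices[0]->text;
```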
Hi,
what do you think about making the base URI configurable? At the moment, the base URI is hardcoded to https://api.openai.com/v1.
Making it configurable would make end-to-end testing of applications using the OpenAI client easier, as one could point it at a mock server in the test environment.
I would implement this as a non-breaking change via the following steps:
- extract an interface from OpenAI\ValueObjects\Transporter\BaseUri
- type-hint $baseUri in OpenAI\ValueObjects\Transporter\Payload::toRequest with the extracted interface
- add BaseUriInterface $baseUri = null to OpenAI::client
- adjust OpenAI::client, so that it handles the default value for $baseUri like this:

public static function client(string $apiToken, string $organization = null, BaseUriInterface $baseUri = null): Client
{
    ...
    $baseUri = $baseUri ?? BaseUri::from('api.openai.com/v1');
    ...
}
What do you think about that? If you don't object, I would implement that.