
generative-ai's Introduction

Gemini AI Client for .NET and ASP.NET Core


Access and integrate the Gemini API into your .NET applications. The packages support both Google AI Studio and Google Cloud Vertex AI.

Name                                             Package
Client for .NET                                  Mscc.GenerativeAI
Client for ASP.NET (Core)                        Mscc.GenerativeAI.Web
Client for .NET using Google API Client Library  Mscc.GenerativeAI.Google

Read more about Mscc.GenerativeAI.Web and how to add it to your ASP.NET (Core) web applications. Read more about Mscc.GenerativeAI.Google.

Install the package 🖥️

Install the package Mscc.GenerativeAI from NuGet. You can install it using either the dotnet command line tool or the NuGet Package Manager Console, or add it directly to your .NET project file.

Add the package using the dotnet command line tool in your .NET project folder.

> dotnet add package Mscc.GenerativeAI

If you are working with Visual Studio, use the NuGet Package Manager Console to install the package Mscc.GenerativeAI.

PM> Install-Package Mscc.GenerativeAI

Alternatively, add the following line to your .csproj file.

  <ItemGroup>
    <PackageReference Include="Mscc.GenerativeAI" Version="1.4.0" />
  </ItemGroup>

You can then add this code to your sources whenever you need to access any Gemini API provided by Google. This package works for Google AI (Google AI Studio) and Google Cloud Vertex AI.

Features (as per Gemini analysis) ✦

The provided code defines a C# library for interacting with Google's Generative AI models, specifically the Gemini models. It provides functionalities to:

  • List available models: This allows users to see which models are available for use.
  • Get information about a specific model: This provides details about a specific model, such as its capabilities and limitations.
  • Generate content: This allows users to send prompts to a model and receive generated text in response.
  • Generate content stream: This allows users to receive a stream of generated text from a model, which can be useful for real-time applications.
  • Generate a grounded answer: This allows users to ask questions and receive answers that are grounded in provided context.
  • Generate embeddings: This allows users to convert text into numerical representations that can be used for tasks like similarity search.
  • Count tokens: This allows users to estimate the cost of using a model by counting the number of tokens in a prompt or response.
  • Start a chat session: This allows users to have a back-and-forth conversation with a model.
  • Create tuned models: This allows users to provide samples for tuning an existing model. Currently, only the text-bison-001 and gemini-1.0-pro-001 models are supported for tuning.
  • File API: This allows users to upload large files and use them with Gemini 1.5.

The package also defines various helper classes and enums to represent different aspects of the Gemini API, such as model names, request parameters, and response data.
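
For orientation, the following sketch combines two of these operations using Google AI with an API key. The CountTokens call is shown elsewhere in this document; the GenerateContentStream method, its IAsyncEnumerable result, and the TotalTokens property are assumptions based on the feature list and the Gemini REST API, so the exact names may differ slightly in the package.

using Mscc.GenerativeAI;

var googleAI = new GoogleAI(apiKey: "your_api_key");
var model = googleAI.GenerativeModel(model: Model.GeminiPro);

// Count the tokens of a prompt before sending it, e.g. to estimate cost.
var count = await model.CountTokens("Write a story about a magic backpack.");
Console.WriteLine($"Tokens: {count.TotalTokens}");    // property name assumed

// Receive the generated content as a stream of chunks
// (method name and IAsyncEnumerable return type assumed).
await foreach (var chunk in model.GenerateContentStream("Write a story about a magic backpack."))
{
    Console.Write(chunk.Text);
}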

Authentication use cases 👥

The package supports the following authentication use cases.

API        Authentication                                              Remarks
Google AI  Authentication with an API key
Google AI  Authentication with OAuth                                   required for tuned models
Vertex AI  Authentication with Application Default Credentials (ADC)
Vertex AI  Authentication with Credentials by Metadata Server          requires access to a metadata server
Vertex AI  Authentication with OAuth                                   using Mscc.GenerativeAI.Google
Vertex AI  Authentication with Service Account                         using Mscc.GenerativeAI.Google

The chosen authentication mode mainly affects how the model is instantiated, as shown below.

Getting Started 🚀

Using the Gemini API is almost identical in Google AI and Vertex AI. The major difference is how you instantiate the model that handles your prompt.

Using Environment variables

In the cloud, most settings are configured via environment variables (EnvVars). Their simplicity, ease of configuration, and widespread support make them a very attractive option.

Variable Name                   Description
GOOGLE_AI_MODEL                 The name of the model to use (default is Model.Gemini10Pro)
GOOGLE_API_KEY                  The API key generated in Google AI Studio
GOOGLE_PROJECT_ID               Project ID in Google Cloud to access the APIs
GOOGLE_REGION                   Region in Google Cloud (default is us-central1)
GOOGLE_ACCESS_TOKEN             The access token required to use models running in Vertex AI
GOOGLE_APPLICATION_CREDENTIALS  Path to the application credentials file
GOOGLE_WEB_CREDENTIALS          Path to a web credentials file

With these environment variables set, a model can be accessed with minimal code.

using Mscc.GenerativeAI;

var model = new GenerativeModel();

Choose an API and authentication mode

Google AI with an API key

using Mscc.GenerativeAI;
// Google AI with an API key
var googleAI = new GoogleAI(apiKey: "your API key");
var model = googleAI.GenerativeModel(model: Model.GeminiPro);

// Original approach, still valid.
// var model = new GenerativeModel(apiKey: "your API key", model: Model.GeminiPro);

Google AI with OAuth. Use gcloud auth application-default print-access-token to get the access token.

using Mscc.GenerativeAI;
// Google AI with OAuth. Use `gcloud auth application-default print-access-token` to get the access token.
var model = new GenerativeModel(model: Model.GeminiPro);
model.AccessToken = accessToken;

Vertex AI with OAuth. Use gcloud auth application-default print-access-token to get the access token.

using Mscc.GenerativeAI;
// Vertex AI with OAuth. Use `gcloud auth application-default print-access-token` to get the access token.
var vertex = new VertexAI(projectId: projectId, region: region);
var model = vertex.GenerativeModel(model: Model.Gemini10Pro);
model.AccessToken = accessToken;

The ConfigurationFixture type in the test project implements multiple options to retrieve sensitive information, i.e. an API key or access token.

Using Google AI Gemini API

Working with Google AI in your application requires an API key. Get an API key from Google AI Studio.

using Mscc.GenerativeAI;

var apiKey = "your_api_key";
var prompt = "Write a story about a magic backpack.";

var model = new GenerativeModel(apiKey: apiKey, model: Model.GeminiPro);

var response = model.GenerateContent(prompt).Result;
Console.WriteLine(response.Text);

Using Vertex AI Gemini API

Use of Vertex AI requires a Google Cloud account and a project with billing and the Vertex AI API enabled.

using Mscc.GenerativeAI;

var projectId = "your_google_project_id"; // the ID of a project, not its name.
var region = "us-central1";     // see documentation for available regions.
var accessToken = "your_access_token";      // use `gcloud auth application-default print-access-token` to get it.
var prompt = "Write a story about a magic backpack.";

var vertex = new VertexAI(projectId: projectId, region: region);
var model = vertex.GenerativeModel(model: Model.GeminiPro);
model.AccessToken = accessToken;

var response = model.GenerateContent(prompt).Result;
Console.WriteLine(response.Text);

More examples 🪄

Supported models are accessible via the Model class. Since release 0.9.0, the previous PaLM 2 models and their functionality are also supported.
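
For example, the model constants used throughout this README are passed to GenerativeModel, and the models available to your key can be enumerated. The ListModels method and the Name/DisplayName properties in this sketch are assumptions based on the "List available models" feature and the REST response shown further below; the exact names may differ.

using Mscc.GenerativeAI;

var googleAI = new GoogleAI(apiKey: "your_api_key");

// Model constants as used in the examples of this README.
var text = googleAI.GenerativeModel(model: Model.GeminiPro);
var vision = googleAI.GenerativeModel(model: Model.GeminiVisionPro);
var large = googleAI.GenerativeModel(model: Model.Gemini15Pro);

// Enumerate the models available to the API key
// (method and property names assumed, see note above).
var models = await text.ListModels();
foreach (var m in models)
{
    Console.WriteLine($"{m.Name}: {m.DisplayName}");
}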

Text-and-image input

using Mscc.GenerativeAI;

var apiKey = "your_api_key";
var prompt = "Parse the time and city from the airport board shown in this image into a list, in Markdown";
var model = new GenerativeModel(apiKey: apiKey, model: Model.GeminiVisionPro);
var request = new GenerateContentRequest(prompt);
await request.AddMedia("https://ai.google.dev/static/docs/images/timetable.png");

var response = await model.GenerateContent(request);
Console.WriteLine(response.Text);

The InlineData part is supported by both Google AI and Vertex AI, whereas the FileData part is restricted to Vertex AI.
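
The following sketch contrasts the two. The AddMedia call matches the example above; the explicit Part/FileData construction and its property names are assumptions modeled on the REST API's fileData part, not confirmed signatures of this package.

using Mscc.GenerativeAI;

var request = new GenerateContentRequest("Describe this image.");

// Attach an image as in the example above; with a remote URL this typically
// becomes an inlineData part, i.e. the bytes travel base64-encoded inside the
// request itself (supported by both Google AI and Vertex AI).
await request.AddMedia("https://ai.google.dev/static/docs/images/timetable.png");

// FileData: the request only carries a URI reference, e.g. a Cloud Storage object.
// Restricted to Vertex AI. Type and property names below are assumptions
// mirroring the REST API's fileData part.
request.Contents[0].Parts.Add(new Part
{
    FileData = new FileData
    {
        FileUri = "gs://your-bucket/timetable.png",
        MimeType = "image/png"
    }
});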

Chat conversations

Gemini enables you to have freeform conversations across multiple turns. You can interact with Gemini Pro using a single-turn prompt and response or chat with it in a multi-turn, continuous conversation, even for code understanding and generation.

using Mscc.GenerativeAI;

var apiKey = "your_api_key";
var model = new GenerativeModel(apiKey: apiKey);    // using default model: gemini-pro
var chat = model.StartChat();

// Instead of discarding you could also use the response and access `response.Text`.
_ = await chat.SendMessage("Hello, fancy brainstorming about IT?");
_ = await chat.SendMessage("In one sentence, explain how a computer works to a young child.");
_ = await chat.SendMessage("Okay, how about a more detailed explanation to a high schooler?");
_ = await chat.SendMessage("Lastly, give a thorough definition for a CS graduate.");

// A chat session keeps every response in its history.
chat.History.ForEach(c => Console.WriteLine($"{c.Role}: {c.Text}"));

// Last request/response pair can be removed from the history.
var latest = chat.Rewind();
Console.WriteLine($"{latest.Sent} - {latest.Received}");

Use Gemini 1.5 with large files

With Gemini 1.5 you can create multimodal prompts supporting large files.

The following example uploads one or more files via the File API; the created file URIs are then referenced in the GenerateContent call to generate text.

using Mscc.GenerativeAI;

var apiKey = "your_api_key";
var prompt = "Make a short story from the media resources. The media resources are:";
IGenerativeAI genAi = new GoogleAI(apiKey);
var model = genAi.GenerativeModel(Model.Gemini15Pro);

// Upload your large image(s).
// Instead of discarding you could also use the response and access `response.Text`.
var filePath = Path.Combine(Environment.CurrentDirectory, "verylarge.png");
var displayName = "My very large image";
_ = await model.UploadMedia(filePath, displayName);

// Create the prompt with references to File API resources.
var request = new GenerateContentRequest(prompt);
var files = await model.ListFiles();
foreach (var file in files.Where(x => x.MimeType.StartsWith("image/")))
{
    Console.WriteLine($"File: {file.Name}");
    request.AddMedia(file);
}
var response = await model.GenerateContent(request);
Console.WriteLine(response.Text);

Read more about Gemini 1.5: Our next-generation model, now available for Private Preview in Google AI Studio.

Create a tuned model

The Gemini API lets you tune models on your own data. Since it's your data and your tuned models, this requires stricter access controls than API keys can provide.

Before you can create a tuned model, you'll need to setup OAuth for your project.

using Mscc.GenerativeAI;

var projectId = "your_google_project_id"; // the ID of a project, not its name.
var accessToken = "your_access_token";      // use `gcloud auth application-default print-access-token` to get it.
var model = new GenerativeModel(apiKey: null, model: Model.Gemini10Pro001)
{
    AccessToken = accessToken, ProjectId = projectId
};
var parameters = new HyperParameters() { BatchSize = 2, LearningRate = 0.001f, EpochCount = 3 };
var dataset = new List<TuningExample>
{    
    new() { TextInput = "1", Output = "2" },
    new() { TextInput = "3", Output = "4" },
    new() { TextInput = "-3", Output = "-2" },
    new() { TextInput = "twenty two", Output = "twenty three" },
    new() { TextInput = "two hundred", Output = "two hundred one" },
    new() { TextInput = "ninety nine", Output = "one hundred" },
    new() { TextInput = "8", Output = "9" },
    new() { TextInput = "-98", Output = "-97" },
    new() { TextInput = "1,000", Output = "1,001" },
    new() { TextInput = "thirteen", Output = "fourteen" },
    new() { TextInput = "seven", Output = "eight" },
};
var request = new CreateTunedModelRequest(Model.Gemini10Pro001, 
    "Simply autogenerated Test model",
    dataset,
    parameters);

var response = await model.CreateTunedModel(request);
Console.WriteLine($"Name: {response.Name}");
Console.WriteLine($"Model: {response.Metadata.TunedModel} (Steps: {response.Metadata.TotalSteps})");

(This is still work in progress but operational. A future release will provide types to simplify the create request.)

Tuned models appear under My Library in Google AI Studio.

Read more about Tune Gemini Pro in Google AI Studio or with the Gemini API.

More samples

The folders samples and tests contain more examples.

Troubleshooting ⚡

You might occasionally get HTTP 403 (Forbidden) authentication errors, especially while working with OAuth-based authentication. You can fix this by re-authenticating through ADC.

gcloud config set project "$PROJECT_ID"

gcloud auth application-default set-quota-project "$PROJECT_ID"
gcloud auth application-default login

Make sure that the required APIs have been enabled.

# ENABLE APIs
gcloud services enable aiplatform.googleapis.com

With long-running streaming requests you might get an HttpIOException: The response ended prematurely while waiting for the next frame from the server. (ResponseEnded). The root cause lies in the .NET runtime, and the solution is to upgrade to its latest version. If you cannot upgrade, you can disable dynamic window sizing as a workaround, either via the environment variable DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP2FLOWCONTROL_DISABLEDYNAMICWINDOWSIZING

DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP2FLOWCONTROL_DISABLEDYNAMICWINDOWSIZING=true

or setting an AppContext switch:

AppContext.SetSwitch("System.Net.SocketsHttpHandler.Http2FlowControl.DisableDynamicWindowSizing", true);

Several issues regarding this problem have been reported on GitHub.

Using the tests 🧩

The repository contains a number of test cases for Google AI and Vertex AI. You will find them in the tests folder; they are part of the GenerativeAI solution. To run the tests, either enter the relevant information into appsettings.json, create a new appsettings.user.json file with the same JSON structure in the tests folder, or define the following environment variables:

  • GOOGLE_API_KEY
  • GOOGLE_PROJECT_ID
  • GOOGLE_REGION
  • GOOGLE_ACCESS_TOKEN (optional: if absent, gcloud auth application-default print-access-token is executed)

The test cases should provide more insights and use cases on how to use the Mscc.GenerativeAI package in your .NET projects.

Feedback ✨

For support and feedback, please create issues in the https://github.com/mscraftsman/generative-ai repository.

License 📜

This project is licensed under the Apache-2.0 License - see the LICENSE file for details.

Citation 📚

If you use Mscc.GenerativeAI in your research project, please cite it as follows:

@misc{Mscc.GenerativeAI,
  author = {Kirstätter, J and MSCraftsman},
  title = {Mscc.GenerativeAI - Gemini AI Client for .NET and ASP.NET Core},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  note = {https://github.com/mscraftsman/generative-ai}
}

Created by Jochen Kirstätter.

generative-ai's People

Contributors

doggy8088, jochenkirstaetter


generative-ai's Issues

Discussion: Keep API interface consistent

There are two types of Google Cloud Generative AI:

  1. Google AI Gemini API

    // Google AI with an API key
    var model = new GenerativeModel(apiKey: "your API key", model: Model.GeminiPro);
  2. Vertex AI Gemini API

    // Vertex AI with OAuth. Use `gcloud auth application-default print-access-token` to get the access token.
    var vertex = new VertexAI(projectId: projectId, region: region);
    var model = vertex.GenerativeModel(model: Model.Gemini10Pro);

What if we keep these two APIs consistent?

For example,

  1. Google AI Gemini API

    // Google AI with an API key
    var googleai = new GoogleAI(apiKey: "your API key");
    var model = googleai.GenerativeModel(model: Model.Gemini10Pro);
  2. Vertex AI Gemini API

    // Vertex AI with OAuth. Use `gcloud auth application-default print-access-token` to get the access token.
    var vertexai = new VertexAI(projectId: projectId, region: region);
    var model = vertexai.GenerativeModel(model: Model.Gemini10Pro);

Application Default Credentials (ADC) are loaded automatically even when I use API key auth

When I'm using API key authentication with the Google AI Gemini API, the library still loads my ADC automatically.

This might lead to some issues; I'm not sure.


Here is my test code:

async Task Main()
{
	var model = new GenerativeModel(apiKey: Util.GetPassword("GEMINI_API_KEY"), model: Model.GeminiPro);

	var prompt = "I love Taipei. Give me a 15 words summary about it.";
	
	model.Uncapsulate().Dump();

	var response = await model.GenerateContent(prompt);

	response.Dump();
}

LINQPad Query: https://share.linqpad.net/ol86v9he.linq

Check for more FinishReason values

In the current version, the code only checks for FinishReason.Safety.

According to the docs, I think the other values need to be handled as well.

Only STOP can be treated as a "normal" response.
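
As a caller-side illustration of that check: the sketch below assumes the response exposes Candidates with a FinishReason, and a FinishReason.Stop member alongside the FinishReason.Safety mentioned above; the exact names in the library may differ.

using System.Linq;
using Mscc.GenerativeAI;

var googleAI = new GoogleAI(apiKey: "your_api_key");
var model = googleAI.GenerativeModel(model: Model.GeminiPro);

var response = await model.GenerateContent("Tell me 4 things about Taipei. Be short.");

// Only STOP marks a normal completion; SAFETY, MAX_TOKENS, RECITATION, OTHER, ...
// deserve their own handling (property and enum member names assumed).
var finishReason = response.Candidates?.FirstOrDefault()?.FinishReason;
if (finishReason != FinishReason.Stop)
{
    Console.WriteLine($"Generation stopped early: {finishReason}");
}
Console.WriteLine(response.Text);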

Exception thrown in Google App Engine

When deploying an app using this library to Google App Engine, the exception below is thrown.

This happens despite the fact that the application default credentials indeed are available and are being consumed without any issues by Google's own SDKs in .NET.

System.Exception: OS error while executing 'gcloud auth application-default print-access-token': An error occurred trying to start process 'gcloud' with working directory '/app'. No such file or directory ---> System.ComponentModel.Win32Exception (2): An error occurred trying to start process 'gcloud' with working directory '/app'. No such file or directory

at System.Diagnostics.Process.ForkAndExecProcess
at System.Diagnostics.Process.StartCore
at Mscc.GenerativeAI.GenerativeModel.RunExternalExe
at Mscc.GenerativeAI.GenerativeModel.RunExternalExe
at Mscc.GenerativeAI.GenerativeModel.GetAccessTokenFromAdc

Unknown error due to some settings

The following code produces a Response status code does not indicate success: 400 (Bad Request). error. I have no idea why.

async Task Main()
{
	var googleAI = new GoogleAI(apiKey: Util.GetPassword("GEMINI_API_KEY"));
	var model = googleAI.GenerativeModel(model: Model.GeminiPro,
		generationConfig: new GenerationConfig()
		{
			TopK = 1,
			TopP = 1,
			Temperature = 0.9f
		},
		safetySettings: new List<SafetySetting>()
		{
			new SafetySetting() { Category = HarmCategory.HarmCategoryHarassment, Threshold = HarmBlockThreshold.BlockOnlyHigh },
			new SafetySetting() { Category = HarmCategory.HarmCategoryHateSpeech, Threshold = HarmBlockThreshold.BlockOnlyHigh },
			new SafetySetting() { Category = HarmCategory.HarmCategorySexuallyExplicit, Threshold = HarmBlockThreshold.BlockOnlyHigh },
			new SafetySetting() { Category = HarmCategory.HarmCategoryDangerousContent, Threshold = HarmBlockThreshold.BlockOnlyHigh }
		});

	var count = await model.CountTokens("Hello World");
	count.Dump();
}

LINQPad Query: https://share.linqpad.net/entk5npb.linq

Feature suggestion: Retry mechanism

Because the Google AI Gemini API is error prone, every GenAI app should implement a retry mechanism. Would you consider implementing one as a built-in feature of your library?

public async Task<GenerateContentResponse> GenerateContent(GenerateContentRequest? request)
{
    if (request == null) throw new ArgumentNullException(nameof(request));
    var url = ParseUrl(Url, Method);
    string json = Serialize(request);
    var payload = new StringContent(json, Encoding.UTF8, MediaType);
    var response = await Client.PostAsync(url, payload);
    response.EnsureSuccessStatusCode();
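
Until such a feature exists, a minimal caller-side retry with exponential backoff could look like the sketch below; it relies on the HttpRequestException raised by the EnsureSuccessStatusCode call above and is not part of the library.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Mscc.GenerativeAI;

var googleAI = new GoogleAI(apiKey: "your_api_key");
var model = googleAI.GenerativeModel(model: Model.GeminiPro);

var response = await GenerateWithRetry(model, "Write a story about a magic backpack.");
Console.WriteLine(response.Text);

// Retries transient HTTP failures with exponential backoff: 1s, 2s, 4s, ...
static async Task<GenerateContentResponse> GenerateWithRetry(
    GenerativeModel model, string prompt, int maxAttempts = 3)
{
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            return await model.GenerateContent(prompt);
        }
        catch (HttpRequestException) when (attempt < maxAttempts)
        {
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
        }
    }
}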

Assigning an API key using model.ApiKey is not working

Here is my code. It's not working; it looks like a bug.

void Main()
{
    var googleAI = new GoogleAI(apiKey: "WRONG_API_KEY");
    var model = googleAI.GenerativeModel(model: Model.Gemini10Pro001);
    model.ApiKey = Util.GetPassword("GEMINI_API_KEY");
    model.GenerateContent("Tell me 4 things about Taipei. Be short.").Dump();
}


I think the bug is in the library code (screenshot omitted): you should be able to replace it with the new key there.

Feature suggestion: create a Tool object from a C# method

When using the function calling feature of the Gemini API, a Tool object must be used.

Here is a thought: would it be possible to simply pass a C# method, a lambda, or its method type to the GenerateContent method's tools argument? The library could then convert the method into a Tool object. That would be extremely useful.

Furthermore, the library could possibly call that method directly when the Gemini API responds with a matching function call.
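
As a rough sketch of that conversion, independent of this library's actual Tool type: reflection can turn a C# method into the name/description/parameters structure that the REST API's functionDeclarations expect. The schema mapping below is simplified and hypothetical; the resulting dictionary would still have to be mapped onto the library's Tool object, and DynamicInvoke could execute the method once the model responds with a matching function call.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

var declaration = FunctionTools.Describe(FunctionTools.GetWeather,
    "Returns the current weather for a city.");
Console.WriteLine(declaration["name"]);    // GetWeather

static class FunctionTools
{
    public static string GetWeather(string city) => $"Sunny in {city}";

    // Turns a C# method into a function-declaration-like dictionary
    // (name, description, parameters) mirroring the REST API's
    // functionDeclarations shape. The JSON schema mapping is simplified.
    public static Dictionary<string, object> Describe(Delegate function, string description)
    {
        MethodInfo method = function.Method;
        var parameters = method.GetParameters();

        return new Dictionary<string, object>
        {
            ["name"] = method.Name,
            ["description"] = description,
            ["parameters"] = new Dictionary<string, object>
            {
                ["type"] = "object",
                ["properties"] = parameters.ToDictionary(
                    p => p.Name!,
                    p => (object)new Dictionary<string, string> { ["type"] = MapType(p.ParameterType) }),
                ["required"] = parameters.Select(p => p.Name!).ToArray()
            }
        };
    }

    private static string MapType(Type type) =>
        type == typeof(bool) ? "boolean"
        : type == typeof(int) || type == typeof(long) || type == typeof(float) || type == typeof(double) ? "number"
        : "string";
}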

`model.GetModel()` returned wrong `SupportedGenerationMethods`

Here is my code:

async Task Main()
{
	var googleAI = new GoogleAI(apiKey: Util.GetPassword("GEMINI_API_KEY"));
	var model = googleAI.GenerativeModel(model: Model.Embedding).Dump();
	var response = await model.GetModel().Dump();
}

I don't know why, but the SupportedGenerationMethods property seems wrong. The model.GetModel() call still thinks it is the models/gemini-pro model (screenshot omitted).

I did check with the REST API call directly, and the response is correct. I don't know why.

{
  "name": "models/embedding-001",
  "version": "001",
  "displayName": "Embedding 001",
  "description": "Obtain a distributed representation of a text.",
  "inputTokenLimit": 2048,
  "outputTokenLimit": 1,
  "supportedGenerationMethods": [
    "embedContent"
  ]
}

SafetySettings can be easier and less error-prone.

In Safety settings doc, there is a section:

These definitions are in the API reference as well. The Gemini models only support HARM_CATEGORY_HARASSMENT, HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, and HARM_CATEGORY_DANGEROUS_CONTENT. The other categories are used by PaLM 2 (Legacy) models.

In the HarmCategory enum, there are many more to choose from.

[JsonConverter(typeof(JsonStringEnumConverter<HarmCategory>))]
public enum HarmCategory
{
	/// <summary>
	/// HarmCategoryUnspecified means the harm category is unspecified.
	/// </summary>
	HarmCategoryUnspecified = 0,
	/// <summary>
	/// HarmCategoryHateSpeech means the harm category is hate speech.
	/// </summary>
	HarmCategoryHateSpeech = 1,
	/// <summary>
	/// HarmCategoryDangerousContent means the harm category is dangerous content.
	/// </summary>
	HarmCategoryDangerousContent = 2,
	/// <summary>
	/// HarmCategoryHarassment means the harm category is harassment.
	/// </summary>
	HarmCategoryHarassment = 3,
	/// <summary>
	/// HarmCategorySexuallyExplicit means the harm category is sexually explicit content.
	/// </summary>
	HarmCategorySexuallyExplicit = 4,
	/// <summary>
	/// Negative or harmful comments targeting identity and/or protected attribute.
	/// </summary>
	HarmCategoryDerogatory = 101,
	/// <summary>
	/// Content that is rude, disrespectful, or profane.
	/// </summary>
	HarmCategoryToxicity = 102,
	/// <summary>
	/// Describes scenarios depicting violence against an individual or group, or general descriptions of gore.
	/// </summary>
	HarmCategoryViolence = 103,
	/// <summary>
	/// Contains references to sexual acts or other lewd content.
	/// </summary>
	HarmCategorySexual = 104,
	/// <summary>
	/// Promotes unchecked medical advice.
	/// </summary>
	HarmCategoryMedical = 105,
	/// <summary>
	/// Dangerous content that promotes, facilitates, or encourages harmful acts.
	/// </summary>
	HarmCategoryDangerous = 106
}

If I just want to use the Gemini API, the coding DX is not that good.

Here is my code snippet. I listed every category and commented out the options I don't want, to avoid someone who takes over my code later adding one back to the list by accident.

var model = googleAI.GenerativeModel(
	model: Model.GeminiPro,
	generationConfig: new GenerationConfig()
	{
		TopK = 1,
		TopP = 1,
		Temperature = 0.9f
	},
	safetySettings: new List<SafetySetting>()
	{
		new SafetySetting() { Category = HarmCategory.HarmCategoryHarassment, Threshold = HarmBlockThreshold.BlockOnlyHigh },
		new SafetySetting() { Category = HarmCategory.HarmCategoryHateSpeech, Threshold = HarmBlockThreshold.BlockOnlyHigh },
		new SafetySetting() { Category = HarmCategory.HarmCategorySexuallyExplicit, Threshold = HarmBlockThreshold.BlockOnlyHigh },
		new SafetySetting() { Category = HarmCategory.HarmCategoryDangerousContent, Threshold = HarmBlockThreshold.BlockOnlyHigh },
                // Only above are used in Gemini API.
		//new SafetySetting() { Category = HarmCategory.HarmCategoryDangerous, Threshold = HarmBlockThreshold.BlockOnlyHigh },
		//new SafetySetting() { Category = HarmCategory.HarmCategoryDerogatory, Threshold = HarmBlockThreshold.BlockOnlyHigh },
		//new SafetySetting() { Category = HarmCategory.HarmCategoryMedical, Threshold = HarmBlockThreshold.BlockOnlyHigh },
		//new SafetySetting() { Category = HarmCategory.HarmCategorySexual, Threshold = HarmBlockThreshold.BlockOnlyHigh },
		//new SafetySetting() { Category = HarmCategory.HarmCategoryToxicity, Threshold = HarmBlockThreshold.BlockOnlyHigh },
		//new SafetySetting() { Category = HarmCategory.HarmCategoryUnspecified, Threshold = HarmBlockThreshold.BlockOnlyHigh },
		//new SafetySetting() { Category = HarmCategory.HarmCategoryViolence, Threshold = HarmBlockThreshold.BlockOnlyHigh },
	});

Maybe there is a way to make sure nobody makes this mistake; those details should be hidden by the library.

For example, provide a set of common default SafetySetting lists for users? See below:

var model = googleAI.GenerativeModel(
	model: Model.GeminiPro,
	generationConfig: new GenerationConfig()
	{
		TopK = 1,
		TopP = 1,
		Temperature = 0.9f
	},
	safetySettings: DefaultSafetySettings.BlockOnlyHigh);

DefaultSafetySettings.BlockOnlyHigh would be equivalent to:

DefaultSafetySettings.BlockOnlyHigh = new List<SafetySetting>()
{
	new SafetySetting() { Category = HarmCategory.HarmCategoryHarassment, Threshold = HarmBlockThreshold.BlockOnlyHigh },
	new SafetySetting() { Category = HarmCategory.HarmCategoryHateSpeech, Threshold = HarmBlockThreshold.BlockOnlyHigh },
	new SafetySetting() { Category = HarmCategory.HarmCategorySexuallyExplicit, Threshold = HarmBlockThreshold.BlockOnlyHigh },
	new SafetySetting() { Category = HarmCategory.HarmCategoryDangerousContent, Threshold = HarmBlockThreshold.BlockOnlyHigh }
};

I don't have a good idea right now. What do you think?

await model.ListFiles() get HTTP 403 (Forbidden)

The code:

#nullable enable

async Task Main()
{
    var googleAI = new GoogleAI(apiKey: Util.GetPassword("GEMINI_API_KEY"));

    // var model = googleAI.GenerativeModel(model: Model.Gemini15Pro);
    var model = googleAI.GenerativeModel(model: Model.Gemini10Pro001);
	
    var files = await model.ListFiles();
	
    files.Dump();
}

The result is an HTTP 403 (Forbidden) error (screenshot omitted).

I don't know what exact problem I hit; I can't see the response body (JSON) from the Gemini API.

Related: #6 and #19

Error while copying content to a stream

First of all, thank you so much for creating this library! You've done an outstanding job and made it so much easier to get started with Gemini as a .NET developer. Amazing work!

I've done some extensive testing and I continuously experience the error. It's quite erratic and I can't really tell why the error is being thrown. It seems random to me.

Is there anything I can do to avoid this?

Thanks!

Error while copying content to a stream.
System.Net.Http.HttpRequestException: Error while copying content to a stream.
      ---> System.Net.Http.HttpIOException: The response ended prematurely while waiting for the next frame from the server. (ResponseEnded)
      at System.Net.Http.Http2Connection.ThrowRequestAborted(Exception innerException)
      at System.Net.Http.Http2Connection.Http2Stream.TryReadFromBuffer(Span`1 buffer, Boolean partOfSyncRead)
      at System.Net.Http.Http2Connection.Http2Stream.CopyToAsync(HttpResponseMessage responseMessage, Stream destination, Int32 bufferSize, CancellationToken cancellationToken)
      at System.Net.Http.HttpConnectionResponseContent.<SerializeToStreamAsync>g__Impl|6_0(Stream stream, CancellationToken cancellationToken)
      at System.Net.Http.HttpContent.LoadIntoBufferAsyncCore(Task serializeToStreamTask, MemoryStream tempBuffer)
      --- End of inner exception stack trace ---
      at System.Net.Http.HttpContent.LoadIntoBufferAsyncCore(Task serializeToStreamTask, MemoryStream tempBuffer)
      at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
      at Mscc.GenerativeAI.GenerativeModel.GenerateContent(GenerateContentRequest request)
      at Mscc.GenerativeAI.GenerativeModel.GenerateContent(String prompt, GenerationConfig generationConfig, List`1 safetySettings, List`1 tools, ToolConfig toolConfig)

A bug found in `EmbedContentRequest`

When serializing an EmbedContentRequest object, the JSON is:

{
    "model": "models/embedding-001",
    "content": {
        "parts": [
            {
                "text": "Hello"
            }
        ],
        "text": "Hello"
    }
}

Here is my sample code:

var request = new EmbedContentRequest("Hello");

var json = JsonSerializer.Serialize(request, new JsonSerializerOptions(JsonSerializerDefaults.Web)
{
	DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
	PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
	DictionaryKeyPolicy = JsonNamingPolicy.CamelCase,
	NumberHandling = JsonNumberHandling.AllowReadingFromString,
	PropertyNameCaseInsensitive = true,
	ReadCommentHandling = JsonCommentHandling.Skip,
	AllowTrailingCommas = true,
	Converters = { (JsonConverter)new JsonStringEnumConverter(JsonNamingPolicy.SnakeCaseUpper) }
}).Dump();


There are two issues:

  1. The model property is not necessary, but no error is thrown for it.
  2. The content.text property produces an error:
{
    "error": {
        "code": 400,
        "message": "Invalid JSON payload received. Unknown name \"text\" at 'content': Cannot find field.",
        "status": "INVALID_ARGUMENT",
        "details": [
            {
                "@type": "type.googleapis.com/google.rpc.BadRequest",
                "fieldViolations": [
                    {
                        "field": "content",
                        "description": "Invalid JSON payload received. Unknown name \"text\" at 'content': Cannot find field."
                    }
                ]
            }
        ]
    }
}

The ContentResponse class does not obey the API doc's Content definition, where only parts and role are defined (screenshot omitted).
