
instructor_ex's Issues

Mint.HTTPError "the given data exceeds the request window size" when sending a large request

When sending a larger document for summarization in my instructor pipeline, I run into the following error:

[error] Task #PID<0.36499.0> started from #PID<0.36490.0> terminating
[info]** (Mint.HTTPError) the given data exceeds the request window size, which is 65535. The server will refill the window size of the request when ready. This will be handled transparently by stream/2.
[info]    (req 0.4.11) lib/req.ex:978: Req.request!/2
[info]    (instructor 0.0.5) lib/instructor/adapters/openai.ex:80: Instructor.Adapters.OpenAI.do_chat_completion/2
[info]    (instructor 0.0.5) lib/instructor.ex:430: Instructor.do_chat_completion/3
[info]    (summarizer 0.1.0) lib/summarizer.ex:20: Summarizer.summarize/1
[info]    (summarizer 0.1.0) lib/summarizer_web/live/link_live/show.ex:27: anonymous fn/1 in SummarizerWeb.LinkLive.Show.handle_info/2
[info]    (elixir 1.16.1) lib/task/supervised.ex:101: Task.Supervised.invoke_mfa/2
[info]    (elixir 1.16.1) lib/task/supervised.ex:36: Task.Supervised.reply/4

There's a similar error reported for Cory O'Daniel's k8s client, which also uses Mint for its HTTP client: coryodaniel/k8s#220

I'm not sure what to do. For now I just accept that some documents can't be handled, but I'd prefer a way around this. I'm happy to spelunk and figure it out, but I thought I'd file an issue first in case I've missed something obvious.
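
For reference, the workaround I'm eyeing (untested, and assuming instructor hands http_options straight through to Req, whose connect_options accept a protocols list) is to pin the connection to HTTP/1 so HTTP/2 flow control never applies:

large_document = File.read!("article.txt")

Instructor.chat_completion(
  [
    model: "gpt-3.5-turbo",
    response_model: %{summary: :string},
    messages: [%{role: "user", content: "Summarize:\n\n" <> large_document}]
  ],
  http_options: [
    receive_timeout: 60_000,
    # Hypothesis: the error is HTTP/2 flow control in Mint, so forcing
    # HTTP/1 should sidestep the 65535-byte request window entirely.
    connect_options: [protocols: [:http1]]
  ]
)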

Thanks for instructor_ex, it's great!

Support Ollama

As of yesterday, Ollama supports the OpenAI API spec.

https://ollama.ai/blog/openai-compatibility

It would be great for us to support this and add documentation showing users how to use it, because Ollama is much easier to get up and running than llama.cpp.

In order to do so, however, we need to support more modes than just tools; see the Python Instructor library. I have the mode support roughed out in the code; we just need to create the implementations.

JSON mode is required to support Ollama.
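
For reference, the target usage could mirror the call shape we already use elsewhere, pointed at Ollama's OpenAI-compatible endpoint (a sketch, assuming per-call api_url/api_key config keeps working as it does today):

Instructor.chat_completion(
  [
    mode: :json,
    model: "mistral",
    response_model: %{answer: :string},
    messages: [%{role: "user", content: "What is the capital of France?"}]
  ],
  # Ollama ignores the key, but the OpenAI client shape requires one.
  api_key: "ollama",
  api_url: "http://localhost:11434"
)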

When should I use the Instructor.Validator?

Hi, first off, thank you so much: this has greatly simplified and improved the success rate of the parsing I was already doing with ChatGPT.

I'm a little confused, however, about where the use Instructor.Validator line belongs. I currently added it to every relevant Ecto schema and to the parent module that makes the call to chat_completion. Should it be in all of these places? Thanks in advance!

Edit: one note of clarification: I'm using a nested changeset/schema structure right now and it's working great. I have three levels of nested schemas with changeset/2 functions. I haven't implemented the validate_changeset function in any of them, but have still seen the library re-prompt ChatGPT when it returned an improperly formatted field of the third-tier schema. Do I even need Instructor.Validator and validate_changeset?

Runtime switching adapters?

Hey @thmsmlr! Just got to trying this out properly today. Really nice stuff. Locking things to a single adapter via config feels overly restrictive -- some folks use a combination of providers for different purposes. Would you be up for a PR that allows passing an adapter as an opt for chat_completion to override the configured adapter?
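
Something like this, as a sketch of the proposed call shape (the per-call adapter key is the proposal, not the current API):

Instructor.chat_completion(
  # `adapter:` here is the proposed override; today the adapter can only
  # come from application config.
  adapter: Instructor.Adapters.Llamacpp,
  model: "mistral-7b-instruct",
  response_model: %{answer: :string},
  messages: [%{role: "user", content: "Hello!"}]
)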

SpamPrediction example failing to compile with "Unknown Registry"

Great library! I wanted to dig in more by first using the example. For some reason, taking the SpamPrediction example and adding it to my fork does not compile. The error looks like the following:

07:28:15.482 [warning] Failed to lookup telemetry handlers. Ensure the telemetry application has been started. 

07:28:15.485 [warning] Failed to lookup telemetry handlers. Ensure the telemetry application has been started. 

== Compilation error in file lib/spam_prediction.ex ==
** (ArgumentError) unknown registry: Req.Finch
    (elixir 1.15.4) lib/registry.ex:1400: Registry.key_info!/1
    (elixir 1.15.4) lib/registry.ex:590: Registry.lookup/2
    (finch 0.16.0) lib/finch/pool_manager.ex:44: Finch.PoolManager.lookup_pool/2
    (finch 0.16.0) lib/finch/pool_manager.ex:34: Finch.PoolManager.get_pool/2
    (finch 0.16.0) lib/finch.ex:284: Finch.__stream__/5
    (finch 0.16.0) lib/finch.ex:324: anonymous fn/4 in Finch.request/3
    (telemetry 1.2.1) /Users/michaeledoror/workspace/instructor_ex/deps/telemetry/src/telemetry.erl:321: :telemetry.span/3
    (req 0.4.8) lib/req/steps.ex:836: Req.Steps.run_finch_request/3

I suspect I might be setting up my OpenAI creds incorrectly. Below is the file I'm using; everything is the same except that I had to do some work to configure :api_key, :http_options, and :api_url. Am I going about this the right way?

defmodule SpamPrediction do
  use Ecto.Schema
  use Instructor.Validator

  @doc """
  ## Field Descriptions:
  - class: Whether or not the email is spam.
  - reason: A short, less than 10 word rationalization for the classification.
  - score: A confidence score between 0.0 and 1.0 for the classification.
  """
  @primary_key false
  embedded_schema do
    field(:class, Ecto.Enum, values: [:spam, :not_spam])
    field(:reason, :string)
    field(:score, :float)
  end

  @impl true
  def validate_changeset(changeset) do
    changeset
    |> Ecto.Changeset.validate_number(:score,
      greater_than_or_equal_to: 0.0,
      less_than_or_equal_to: 1.0
    )
  end
end

is_spam? = fn text ->
  Instructor.chat_completion(
    [
      model: "gpt-3.5-turbo",
      response_model: SpamPrediction,
      max_retries: 3,
      messages: [
        %{
          role: "user",
          content: """
          Your purpose is to classify customer support emails as either spam or not.
          This is for a clothing retail business.
          They sell all types of clothing.

          Classify the following email:
          ```
          #{text}
          ```
          """
        }
      ]
    ],
    api_key: System.get_env("OPENAI_API_KEY"),
    http_options: [receive_timeout: 60_000],
    api_url: "https://api.openai.com"
  )
end

is_spam?.("Hello I am a Nigerian prince and I would like to send you money")

# => {:ok, %SpamPrediction{class: :spam, reason: "Nigerian prince email scam", score: 0.98}}

Jaxon dependency maintenance status

Jaxon is used in a couple of places in the library.

The thing is, this library has not seen a commit in the last 2.5 years (https://github.com/boudra/jaxon).

I would suggest we ultimately consider moving to something better maintained, and in the meantime avoid expanding its usage in instructor_ex.

Opening the issue to make sure we're all aware of it!


Chat completion inconsistent (Ollama)

Running chat_completion against Ollama sometimes works, but mostly returns a "can't be blank" error:

messages = [
  %{role: "user", content: "Who were the first three presidents of the United States?"}
]

Instructor.chat_completion(
  [
    messages: messages,
    mode: :json,
    model: "llama3",
    response_model: %{content: :string}
  ],
  api_key: "ollama",
  api_url: "http://localhost:11434",
  http_options: [receive_timeout: 60_000]
)
{:error,
 #Ecto.Changeset<
   action: nil,
   changes: %{},
   errors: [content: {"can't be blank", [validation: :required]}],
   data: %{},
   valid?: false
 >}

Support Bedrock?

Just putting this out there, as I think it will become more common (and I'm currently exploring it). Would it be as simple as implementing the adapter behaviour?
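
Presumably something roughly this shape (a skeleton only; the callback name and arity are assumed from the do_chat_completion/2 frames that appear in stack traces elsewhere in these issues):

defmodule Instructor.Adapters.Bedrock do
  # Hypothetical skeleton; the exact Instructor.Adapter callback
  # name/arity is an assumption on my part.
  @behaviour Instructor.Adapter

  @impl true
  def do_chat_completion(_params, _config) do
    # Translate params into a Bedrock InvokeModel request here.
    {:error, "Bedrock adapter not implemented yet"}
  end
end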

`Instructor.echo_response/1` no function clause matching

We are using Version 0.0.5 in our application. We recently got this error:

** (FunctionClauseError) no function clause matching in Instructor.echo_response/1
(instructor 0.0.5) lib/instructor.ex:510: Instructor.echo_response(%{"choices" => [%{"finish_reason" => "stop", "index" => 0, "message" => %{"content" => "[OUR_CONTENT]", "role" => "assistant"}}], "created" => 1709884788, "id" => "chatcmpl-90PQq8TYhzCvxtCz2w2xneKudwU0O", "model" => "gpt-4-1106-vision-preview", "object" => "chat.completion", "usage" => %{"completion_tokens" => 76, "prompt_tokens" => 1408, "total_tokens" => 1484}})
(instructor 0.0.5) lib/instructor.ex:456: anonymous fn/3 in Instructor.do_chat_completion/3

I haven't found time to look at the code yet; maybe I can provide a fix for this later.

Continuous integration?

I have noticed that the tests are not run automatically when a PR is opened here:

Would you be interested in some contribution around that, e.g. running GitHub Actions on each PR?

It could start small, with just mix test, and later add more connected testing.

Happy to provide some help here if you fancy it.

Missing schema descriptions in prod

When running in prod, I'm getting different results than when running in dev. After digging, it seems that the @doc description from my schema is missing in prod.

E.g. in prod:

iex> Instructor.JSONSchema.from_ecto_schema(DSS.ProductSuggestion)
"{\"type\":\"object\",\"description\":\"\",\"title\":\"DSS.ProductSuggestion\",\"required\":[\"product_description\",\"product_name\",\"relative_description\"],\"properties\":{\"product_description\":{\"type\":\"string\",\"title\":\"product_description\"},\"product_name\":{\"type\":\"string\",\"title\":\"product_name\"},\"relative_description\":{\"type\":\"string\",\"title\":\"relative_description\"}}}"

Notice that the description field is empty, which is not the case when calling the same function in dev.
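
In case it helps others hitting this: my working theory (unverified) is that mix release strips BEAM chunks by default, including the Docs chunk where @doc lives, so Instructor.JSONSchema finds nothing at runtime. If that's right, keeping the chunk should restore the descriptions:

# mix.exs -- sketch, assuming the app ships as a mix release named :dss
# (releases default to strip_beams: true, which drops the Docs chunk).
def project do
  [
    app: :dss,
    version: "0.1.0",
    releases: [
      dss: [strip_beams: [keep: ["Docs"]]]
    ]
  ]
end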

Overwrite the OpenAI Client HTTP defaults

It's tedious to keep adding

  config: [
    openai: [
      http_options: [recv_timeout: :infinity, async: :once]
    ]
  ]

every time you install. The defaults make no sense for the common use case. Let's change that.

Add CI

I don't really know how to do GitHub Actions with Elixir. If someone wanted to take this, I'd greatly appreciate it. Or at least point me to a good template I can copy; a minimal sketch to start from is below.
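
Something like this could be the starting point (versions are illustrative; erlef/setup-beam is the community-maintained action for installing OTP and Elixir):

# .github/workflows/ci.yml
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: erlef/setup-beam@v1
        with:
          otp-version: "26"
          elixir-version: "1.16"
      - run: mix deps.get
      - run: mix test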

Integration with local LLaVA, do we want it? What is the way?

I've been looking for local-only solutions to reliably extract structured data from invoices/receipts. No API/cloud solutions for obvious privacy reasons (those receipts can sometimes include credentials or account identifiers, and I don't want that data to leave the server in that case).

Thanks to a tweet, I came across this apparently very nice solution:

https://github.com/haotian-liu/LLaVA

A first test via their demo page with a real restaurant receipt worked very nicely (just as nicely as GPT4 currently), see https://twitter.com/thibaut_barrere/status/1773031570259001720 for the input and output.

I see (in #36) that other people are interested in extracting data from images.

Bumblebee is also a possibility with a proper choice of model, of course.

Would this have a place in instructor_ex, in your opinion?

If so, is there a recommended path for integrating a new model? (I'm unsure I'll tackle this myself, but I'm at least interested in discussing it.)

Inconsistency in getting adapter from config via function params?

Hi all,

I noticed this function in completions.ex:

defp adapter(%{adapter: adapter}) when is_atom(adapter), do: adapter
defp adapter(_), do: Application.get_env(:instructor, :adapter, Instructor.Adapters.OpenAI)

Looking at how the adapters use the config internally, it seems they all expect config to be a keyword list, not a map. My take is that the first clause is wrong and makes passing the adapter via explicit config impossible at the moment.

Am I wrong? If the keyword-list expectation holds, a fix could look like the sketch below.
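
# Sketch: treat config as a keyword list and fall back to app config.
defp adapter(config) when is_list(config) do
  Keyword.get_lazy(config, :adapter, fn ->
    Application.get_env(:instructor, :adapter, Instructor.Adapters.OpenAI)
  end)
end

defp adapter(_), do: Application.get_env(:instructor, :adapter, Instructor.Adapters.OpenAI)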

GroqCloud support

Hi, thanks for creating this nice library 😊

I recently came across GroqCloud. It seems like an alternative to OpenAI, powered by open-source models like Mixtral, Llama 2, and Gemma. It has two big advantages over OpenAI: speed and price.

So, just curious whether it would be possible, and make sense, to add support for GroqCloud?
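
Since GroqCloud exposes an OpenAI-compatible API, it might already work by simply repointing the OpenAI adapter (untested sketch; the base URL and model name are my assumptions):

Instructor.chat_completion(
  [
    model: "mixtral-8x7b-32768",
    response_model: %{answer: :string},
    messages: [%{role: "user", content: "Hello!"}]
  ],
  api_key: System.get_env("GROQ_API_KEY"),
  # Assumed base URL for Groq's OpenAI-compatible endpoint.
  api_url: "https://api.groq.com/openai"
)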

GPT4 Vision

Now that we support :md_json because of #15, supporting GPT-4 Vision should require no extra code.

Let's add a cookbook to demonstrate how to do it.
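
Something along these lines for the cookbook (a sketch, assuming messages pass through to the OpenAI API unchanged, so the standard vision content-part format applies):

Instructor.chat_completion(
  model: "gpt-4-vision-preview",
  mode: :md_json,
  response_model: %{description: :string},
  messages: [
    %{
      role: "user",
      content: [
        %{type: "text", text: "Describe this image."},
        %{type: "image_url", image_url: %{url: "https://example.com/photo.jpg"}}
      ]
    }
  ]
)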

Could not compile instructor in livebook

When trying to run the quickstart example in my local Livebook, I get this error:

Could not compile :instructor, no "mix.exs", "rebar.config" or "Makefile" (pass :compile as an option to customize compilation, set it to "false" to do nothing)
Unchecked dependencies for environment dev:
* instructor (/home)
  could not find an app file at "_build/dev/lib/instructor/ebin/instructor.app". This may happen if the dependency was not yet compiled or the dependency indeed has no app file (then you can pass app: false as option)
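
Note the "/home" path in the error: it suggests a local path dependency was used. A possible workaround (untested) is to install from Hex in the Livebook setup cell instead, which avoids the local compile step entirely:

Mix.install([
  {:instructor, "~> 0.0.5"}
])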

support claude

Hey,

I've been thinking about supporting Claude. I'll raise a PR once I am done.
@thmsmlr If you have suggestions around this, please let me know.

Unable to use a schema with a module name containing a "."

I was trying to use a schema module with the name like:

defmodule Teleprompt.GrammarClassification do

Error:

** (MatchError) no match of right hand side value: {:error, "LLM Adapter Error: %{\"error\" => %{\"code\" => nil, \"message\" => \"'Teleprompt.GrammarClassification' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'tools.0.function.name'\", \"param\" => nil, \"type\" => \"invalid_request_error\"}}"}
    (teleprompt 0.1.0) lib/teleprompt/text_helper.ex:5: Teleprompt.TextHelper.is_correct?/1
    iex:1: (file)

From the error message, it looks like the full module name is used as the generated tool's function name, and OpenAI only allows names matching ^[a-zA-Z0-9_-]{1,64}$, so the "." is rejected.

When I changed the module name it worked:

defmodule GrammarClassification do

json response error: gpt-4-vision-preview

Currently I'm getting the following error when trying to run the GPT-4 Vision Livebook example:

** (MatchError) no match of right hand side value: {:error, "Invalid JSON returned from LLM: %Jason.DecodeError{position: 0, token: nil, data: \"\"}"}

Any idea what might be causing it?

Enable passing through Req opts?

We need IPv6 (fly.io Flycast) to connect to Ollama. Would you be up for a PR that allows passing Req options through to the adapter, much like Req itself allows passing connect_options through to Finch? A request_options key on the chat_completion config should do it; a hypothetical shape is sketched below.
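
For concreteness (the request_options key is the proposal above, not a current option; transport_opts are handed down to gen_tcp, where :inet6 forces an IPv6 socket):

Instructor.chat_completion(
  [
    model: "llama3",
    response_model: %{answer: :string},
    messages: [%{role: "user", content: "Hello!"}]
  ],
  api_url: "http://my-ollama.flycast:11434",
  # `request_options:` is the proposed pass-through to Req.
  request_options: [connect_options: [transport_opts: [:inet6]]]
)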

Return HTTP errors, rather than raise

When using this library to automate things, it's not great to have the process crash on errors that are to be expected, like HTTP errors. Any thoughts? A sketch of what I mean is below.
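
For example, inside the adapter, something like this (a sketch; do_request is a hypothetical helper) would surface transport failures as values instead of raising:

defp do_request(url, params) do
  # Req.post/2 returns {:ok, resp} | {:error, exception} rather than
  # raising the way Req.request!/2 does.
  case Req.post(url, json: params) do
    {:ok, %Req.Response{status: 200, body: body}} -> {:ok, body}
    {:ok, %Req.Response{status: status, body: body}} -> {:error, {:http_status, status, body}}
    {:error, exception} -> {:error, exception}
  end
end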

Code not being executed during tests

Currently, it looks like calls to Instructor.Adapter are being mocked here:

Mox.defmock(InstructorTest.MockOpenAI, for: Instructor.Adapter)

That means that Instructor.Adapter itself, and the modules being called from it, are not actually executed during tests.

E.g. if I add an error like tmp = 1 / 0 to Instructor.Adapters.OpenAI.do_chat_completion and run the tests locally, they still pass.
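
One way to complement the mocks (a sketch; the tag name and assertions are illustrative) is an opt-in integration test that exercises the real adapter, run explicitly with mix test --only integration:

@tag :integration
test "round-trips through the real OpenAI adapter" do
  assert {:ok, %{name: name}} =
           Instructor.chat_completion(
             model: "gpt-3.5-turbo",
             response_model: %{name: :string},
             messages: [%{role: "user", content: "My name is Ada."}]
           )

  assert is_binary(name)
end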
