
Comments (10)

ahyatt commented on August 23, 2024

Thanks for the response. Maybe we can have a solution that combines aspects of both. I like the macro solution because it solves a very concrete problem for me: the structure of these methods is exactly the same in every provider, and it's non-trivial, so it's difficult to roll an improvement out to all of them. Perhaps using a function that takes lambdas, rather than a macro, would be good. I can give it a shot, and either solution could still allow generic functions in OpenAI. If we used a normal function instead of a macro, you could also do the plz-curl-args binding.
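As a sketch of that normal-function idea (the helper name and its keyword lambdas below are hypothetical, not code from llm): because the pipeline is an ordinary function, a caller can dynamically bind plz-curl-args around it, and edebug instruments it like any other code.

(require 'plz)
(require 'json)

(defun llm--request-embedding (provider &rest spec)
  "Run the shared embedding pipeline for PROVIDER.
SPEC is a plist of functions keyed by :url, :headers, :request
and :extract-result."
  ;; `plz' is synchronous by default and parses the response body
  ;; with the function given to :as.
  (let ((response (plz 'post (funcall (plist-get spec :url) provider)
                    :headers (funcall (plist-get spec :headers) provider)
                    :body (json-encode
                           (funcall (plist-get spec :request) provider))
                    :as #'json-read)))
    (funcall (plist-get spec :extract-result) response)))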

Anyway, maybe I'll just release the simple solution of having generic functions, work on a macro or function that simplifies actually defining these, and treat those as separate pieces of work.


ahyatt commented on August 23, 2024

I think this seems reasonable, at least for OpenAI. I'm not sure it's needed for the other implementations, but maybe it makes sense to simplify the code. Let me think and look into it. We don't need an llm-error-code, though; anything like that should go into the error message itself.

I'll try to fool around with this idea today, or soon, and let you know how it goes. Thanks for the thoughtful suggestion!


r0man commented on August 23, 2024

Ok, perfect. Thanks!


ahyatt commented on August 23, 2024

I've done this, but haven't pushed it yet. One thing stopping me is that I'd like to try an alternate idea: making it very easy to define a new provider via macros. If it works out, I think that will ultimately be the better solution, which I'm in the middle of trying to verify.


ahyatt commented on August 23, 2024

Here's an example I'm currently experimenting with. It works, but I haven't figured out how to get edebug to work on it yet.

;; Each keyword supplies the provider-specific piece of an otherwise
;; identical request/response pipeline.
(defllm-embedding-provider llm-openai (provider string response)
  :precheck (llm-openai--check-key provider)
  :url (llm-openai--url provider "embeddings")
  :headers `(("Authorization" . ,(format "Bearer %s" (llm-openai-key provider))))
  :request (llm-openai--embedding-request (llm-openai-embedding-model provider) string)
  :error-extractor (llm-openai--error-message provider response)
  :result-extractor (llm-openai--embedding-extract-response response))
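
For context, here is a rough sketch of what such a macro might expand to, inferred only from the keyword names (this is not the actual implementation), including the declare (debug ...) spec that is the standard way to teach edebug how to instrument a macro's arguments:

(require 'cl-lib)
(require 'plz)
(require 'json)

(defmacro defllm-embedding-provider (type args &rest spec)
  "Define an `llm-embedding' method for the provider struct TYPE.
ARGS names the provider, input-string and response variables; SPEC
is a plist of forms keyed by :precheck, :url, :headers, :request,
:error-extractor and :result-extractor."
  (declare (indent 2)
           ;; The debug spec lets edebug step through the
           ;; keyword/form pairs.
           (debug (&define name (symbolp symbolp symbolp)
                           [&rest keywordp form])))
  (cl-destructuring-bind (provider string response) args
    `(cl-defmethod llm-embedding ((,provider ,type) ,string)
       ,(plist-get spec :precheck)
       ;; :error-extractor handling is elided for brevity.
       (let ((,response (plz 'post ,(plist-get spec :url)
                          :headers ,(plist-get spec :headers)
                          :body (json-encode ,(plist-get spec :request))
                          :as #'json-read)))
         ,(plist-get spec :result-extractor)))))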


r0man commented on August 23, 2024

Hi @ahyatt,

Thanks for working on this. I'm not convinced a macro will simplify writing a new provider: it's a new little language people need to learn on top of figuring out how an LLM provider works. I think I would prefer the ability to build my provider by specializing generic functions. That way I stay mostly in control, can leverage what generic functions provide (specializing certain functions on my own provider, calling the "next" method, and customizing methods via :before and :around qualifiers), and it works with edebug.

The code for my custom provider is actually very straightforward and easy to understand. The extension points I needed were:

  • Add my own headers (solved with llm-openai--headers)
  • Use a certificate and private key (solved by binding plz-curl-args in the methods that do HTTP requests and calling the next method; see the sketch after this list)
  • Transforming some errors from the proxy I use into an OpenAI-compatible format (would be solved by a generic function that extracts errors and/or the response).
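
For concreteness, the second bullet could look roughly like this on the plz branch (the my-proxy struct and the certificate paths are made-up placeholders, not code from llm):

(require 'cl-lib)
(require 'plz)
(require 'llm-openai)

;; A provider that behaves exactly like llm-openai unless a method
;; says otherwise.
(cl-defstruct (my-proxy (:include llm-openai)))

;; Bind extra curl arguments (client certificate and key) around the
;; HTTP request, then defer to the stock OpenAI implementation.
(cl-defmethod llm-embedding :around ((provider my-proxy) string)
  (let ((plz-curl-args (append plz-curl-args
                               '("--cert" "/path/to/client.pem"
                                 "--key" "/path/to/client.key"))))
    (cl-call-next-method)))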


ahyatt commented on August 23, 2024

I think I have a better solution that should address all the concerns. I'll create a "standard provider" implementation, and all of today's implementations can have methods for getting the URL, extracting errors, extracting the response, and so on. This is nice because it mostly means we can rename existing methods, make sure they are standardized, and remove the per-provider llm-chat etc. methods. And it can be debugged. Around methods work either way. I'll have to figure out whether to obsolete the OpenAI generic methods or just live with some duplication. So I'm going to pursue this, and should have it out today or tomorrow for you to try.
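
For a sense of the shape this takes, the decomposition could look something like the following (the generic names are illustrative, not necessarily the ones llm ended up with):

(require 'cl-lib)

(cl-defgeneric llm-provider-embedding-url (provider)
  "Return the URL PROVIDER's embedding requests are POSTed to.")

(cl-defgeneric llm-provider-headers (provider)
  "Return extra HTTP headers to send for PROVIDER.")

(cl-defgeneric llm-provider-embedding-request (provider string)
  "Return the request payload asking PROVIDER to embed STRING.")

(cl-defgeneric llm-provider-embedding-extract-result (provider response)
  "Extract the embedding vector from PROVIDER's RESPONSE.")

(cl-defgeneric llm-provider-embedding-extract-error (provider response)
  "Return an error string from RESPONSE, or nil if it succeeded.")

A single shared llm-embedding implementation can then drive every provider through these hooks, so each provider overrides only what differs, and everything stays reachable by edebug.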


r0man commented on August 23, 2024

Sounds like a good plan.


ahyatt commented on August 23, 2024

On both the main and plz branches, this is now how everything works. Thank you for the suggestion!


r0man commented on August 23, 2024

Thanks for implementing this. It is working great!

