
Comments (8)

ryanpeach commented on May 20, 2024

I also would like to know.


typpo commented on May 20, 2024

Unfortunately not really. I built this because there wasn't anything else out there that did what I needed it to do. OpenAI does have an Evals framework you can take a look at. Its focus is on testing OpenAI models with heavier test cases, and some of the more advanced test cases require a Python implementation.


ryanpeach commented on May 20, 2024

The main thing I need is this, but in Python with LangChain compatibility. It might be worth cloning and converting.


Keiku commented on May 20, 2024

As far as I know, QAEvalChain in the LangChain module might be useful to me. I'm still looking to see whether there are other alternatives.
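
For anyone else evaluating it, here is a rough sketch of how QAEvalChain can be wired up. The module path, constructor, and key names below follow the LangChain API from around the time of this thread and may need adjusting for newer releases:

# qa_eval_example.py -- rough sketch of QAEvalChain usage (classic LangChain API;
# module path and key names may differ in newer releases).
from langchain.llms import OpenAI
from langchain.evaluation.qa import QAEvalChain

# Ground-truth examples and the predictions your chain produced for them.
examples = [{"question": "What is 2 + 2?", "answer": "4"}]
predictions = [{"result": "The answer is 4."}]

eval_chain = QAEvalChain.from_llm(OpenAI(temperature=0))
graded = eval_chain.evaluate(
    examples,
    predictions,
    question_key="question",
    answer_key="answer",
    prediction_key="result",
)
print(graded)  # each item contains the grader's CORRECT/INCORRECT verdict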


Keiku commented on May 20, 2024

@typpo Thanks for the link reference.


typpo commented on May 20, 2024

For those of you working in Python, have a look at the end-to-end LLM chain testing documentation.

Specifically, I've created an example that shows how to evaluate a Python LangChain implementation.

The example compares raw GPT-4 with LangChain's LLM-Math plugin by using the exec provider to run the LangChain script:

# promptfooconfig.yaml
# ...
providers:
  - openai:chat:gpt-4-0613            # raw GPT-4 via the OpenAI chat API
  - exec:python langchain_example.py  # runs the LangChain script as an external command
# ...
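
For context, the script behind the exec provider could look roughly like the sketch below. This is illustrative rather than the exact example from the docs: it assumes the exec provider passes the rendered prompt as the first command-line argument and captures stdout as the completion, and it uses LangChain's classic LLMMathChain API:

# langchain_example.py -- illustrative sketch only (not the exact script from the
# promptfoo docs). Assumes the exec provider passes the prompt as argv[1] and
# captures stdout as the provider's output.
import sys

from langchain.chains import LLMMathChain
from langchain.llms import OpenAI

def main() -> None:
    question = sys.argv[1]  # the rendered prompt supplied by promptfoo
    llm_math = LLMMathChain.from_llm(OpenAI(temperature=0))
    print(llm_math.run(question))  # stdout becomes the completion promptfoo grades

if __name__ == "__main__":
    main()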

The result is a side-by-side comparison of GPT-4 and LangChain doing math:

[Screenshot: side-by-side langchain vs. gpt-4 eval results]

Hope this helps your use cases. If not, I'm interested in learning more.

Side note: QAEvalChain is similar in approach to promptfoo's llm-rubric assertion type. It can help evaluate whether a specific answer makes sense for a specific question.
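
To illustrate the comparison, an llm-rubric assertion in a promptfoo config looks roughly like the excerpt below; the question and rubric text are made up for illustration, so check the promptfoo assertions docs for the exact options:

# promptfooconfig.yaml (excerpt)
tests:
  - vars:
      question: What is the capital of France?
    assert:
      - type: llm-rubric
        value: The answer correctly identifies Paris as the capital of France.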


Keiku commented on May 20, 2024

It looks like this was released recently:
hegelai/prompttools: Open-source tools for prompt testing and experimentation


karrtikiyer commented on May 20, 2024

@typpo: First of all, congratulations on the great work in building this library. It would be great if we could have some way to directly compare and contrast promptfoo with prompttools and OpenAI's Evals. That would make it easier for consumers to pick and choose the best of these based on their use case.

