
langsmith-docs's Introduction

LangSmith Documentation

This repository hosts the source code for the LangSmith Docs.

  • For the code for the LangSmith client SDK, check out the LangSmith SDK repository.
  • For a "cookbook" of use cases and guides on how to get the most out of LangSmith, check out the LangSmith Cookbook repo.

The docs are built using Docusaurus 2, a modern static website generator.

Installation

$ yarn

Local Development

$ yarn start

This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.

Build

$ yarn build

This command generates static content into the build directory and can be served using any static content hosting service.

Contributing

If you spot an issue in one of the examples, feel free to open a PR, or file an issue reporting it.

Thanks for reading!

langsmith-docs's People

Contributors

2016bgeyer, agola11, akira, andrewnguonly, barberscott, baskaryan, bracesproul, bvs-langchain, chasemcdo, dependabot[bot], devleejb, dqbd, efriis, eric-langchain, hinthornw, hwchase17, jacoblee93, jakerachleff, jqnxyz, kaijietti, langchain-infra, madams0013, manuel-soria, marclave, nfcampos, nsanden, prempv, prince-mendiratta, rlancemartin, samnoyes


langsmith-docs's Issues

Custom Run IDs & tags with proxy

Hi there,

Thanks for all the innovative stuff the team is putting out.

I'm using the LangSmith proxy because I need streaming support, and, as I understand the docs, the proxy is currently the only way to get it (?).

However, when using the proxy, there's no documentation on how to specify custom run IDs, trace IDs, tags, metadata, etc. The pattern shown in the docs is basically just raw OpenAI API syntax with the proxy URL passed as a param to openai.Client().

Are tags and custom IDs currently supported by the proxy?

Tyia

DOC: <Issue related to /how_to_guides/evaluation/evaluate_llm_application>

The TypeScript documentation for evaluating a LangChain runnable has an error. It says "Then, pass the runnable.invoke method to the evaluate method," with the following code snippet:

import { evaluate } from "langsmith/evaluation";

await evaluate(chain, {
  data: datasetName,
  evaluators: [correctLabel],
  experimentPrefix: "Toxic Queries",
});

What I have found in my testing is that the chain function needs to be called: chain().

Updated example:

import { evaluate } from "langsmith/evaluation";

await evaluate(chain(), {
  data: datasetName,
  evaluators: [correctLabel],
  experimentPrefix: "Toxic Queries",
});

Support my own judge model? --custom judge model

Hi there,
I am wondering: does LangSmith's LLM-as-a-judge evaluation support using my own custom model as the judge?
I'd like to develop custom prompts for my own judge model through LangSmith.

As far as I can see, only OpenAI models are currently supported.

DOC: <Issue related to /how_to_guides/tracing/log_llm_trace>

Hi there, I'm using RunTree and want to log the cost. I figured out how to add the token count, but not the cost. Can someone guide me, please?

PS: The provider openai and model gpt-4 are just test values. I figured if it works for them, it'll work for the other custom models I'm using, like Command-R.

Reference: https://docs.smith.langchain.com/how_to_guides/tracing/log_llm_trace#manually-provide-token-counts

   trace.end({
      data,
      usage_metadata: {
        input_tokens: 27,
        output_tokens: 13,
        total_tokens: 40,
        total_cost: ((inCost ?? 0) + (outCost ?? 0)),
        prompt_cost: inCost,
        completion_cost: outCost,
      },
      ls_provider: 'openai',
      ls_model_name: 'gpt-4',
    });

DOC: <Issue related to /concepts/tracing>

Does this mean that after 400 days links to traces from experiments will return 404? Is there any way to retain traces beyond 400 days while still being able to use all of LangSmith's features?

langsmith.client:Failed to batch ingest runs: LangSmithError('Failed to POST') 403 Client Error: Forbidden for url

Using Google Colab, and following the LangSmith tracing quick start documentation with a simple LLM call after setting up my LangSmith API key, I am getting this error:

Reference: https://docs.smith.langchain.com/tracing/quick_start

WARNING:langsmith.client:Failed to batch ingest runs: LangSmithError('Failed to POST https://api.smith.langchain.com/runs/batch in LangSmith API. HTTPError('403 Client Error: Forbidden for url: https://api.smith.langchain.com/runs/batch', '{"detail":"Forbidden"}')')

Here is my code:

%env LANGCHAIN_TRACING_V2=True
%env LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
%env LANGCHAIN_API_KEY=userdata.get('langsmith_api_key')

from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI()
llm.invoke("Hello, world!")

The LLM call is successful, but the LangSmith package is not working! I tried disconnecting the runtime with no luck.
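One likely cause, worth checking: the %env magic does not evaluate Python expressions, so userdata.get('langsmith_api_key') would be stored as that literal string, and quotes inside a %env value become part of the value. A minimal sketch of setting the variables in plain Python instead (the API key shown is a placeholder):

```python
import os

# In Colab, read the secret first:
#   from google.colab import userdata
#   api_key = userdata.get('langsmith_api_key')
api_key = "YOUR_LANGSMITH_API_KEY"  # placeholder

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"  # no quotes inside the value
os.environ["LANGCHAIN_API_KEY"] = api_key

print(os.environ["LANGCHAIN_ENDPOINT"])
```

With the variables set this way, the values contain no stray quote characters and the real API key (not the literal text of a Python expression) reaches the LangSmith client.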

Mistake in example code for [Tracing FAQ]: run_id = id.traced_runs[0].id

In the documentation:
Tracing FAQ / "How do I get the run id from a call", there is a mistake in the code (it should be cb instead of id):

from langchain import chat_models, prompts, callbacks
chain = (
    prompts.ChatPromptTemplate.from_template("Say hi to {name}")
    | chat_models.ChatAnthropic()
)
with callbacks.collect_runs() as cb:
  result = chain.invoke({"name": "Clara"})
  run_id = id.traced_runs[0].id
print(run_id)

should be corrected to

from langchain import chat_models, prompts, callbacks
chain = (
    prompts.ChatPromptTemplate.from_template("Say hi to {name}")
    | chat_models.ChatAnthropic()
)
with callbacks.collect_runs() as cb:
  result = chain.invoke({"name": "Clara"})
  run_id = cb.traced_runs[0].id
print(run_id)

Why does LangSmith keep removing datasets from projects?

Hi, really enjoying what you are building, but I'm a bit frustrated that I can't just attach a dataset to a project: it always gets removed and I have to add it again. Is there a limit on attaching datasets to projects in LangSmith? I have 6 projects and 4 datasets. It is very weird behaviour.

I wrote on Discord, but it seems LangSmith support isn't really active there.

Appreciate any suggestion.

Any clue how to give the agent access to the KV dataset on LangSmith?

Hi,

I'm trying to give my agent access to the KV dataset on LangSmith, but it just has no clue. I've tried everything, and from what I've found so far there isn't really a good explanation of how to accomplish this. It is also very odd that the Discord server doesn't really offer support.

I'm currently lost and have no idea how to continue. Everything seems OK, but the agent just doesn't look at the KV dataset on LangSmith.

Here the full code:

        import os
        from dotenv import load_dotenv
        from langchain_community.llms import Ollama
        from langchain.chains import LLMChain
        from langchain.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate
        from langsmith import Client
        from langchain.memory import ConversationBufferMemory
        from langchain.tools import Tool
        from langchain_community.utilities import GoogleSearchAPIWrapper
        from pydantic import BaseModel

        # Load environment variables
        load_dotenv()

        # LangChain and Google Search setup
        langchain_tracing_v2 = os.getenv("LANGCHAIN_TRACING_V2")
        langchain_api_key = os.getenv("LANGCHAIN_API_KEY")
        langchain_endpoint = os.getenv("LANGCHAIN_ENDPOINT")
        langchain_project = os.getenv("LANGCHAIN_PROJECT")
        os.environ["GOOGLE_CSE_ID"] = os.getenv("GOOGLE_CSE_ID")
        os.environ["GOOGLE_API_KEY"] = os.getenv("GOOGLE_API_KEY")

        # Initialize LangSmith Client
        client = Client(api_key=langchain_api_key, api_url=langchain_endpoint)

        # Define the dataset name
        dataset_name = "ai_films_business_db"

        # LangSmith Dataset functions
        def get_dataset_id(dataset_name):
            # Retrieve the dataset ID based on the given dataset name
            datasets = client.list_datasets()
            for dataset in datasets:
                if dataset.name == dataset_name:
                    return dataset.id
            raise ValueError(f"Dataset '{dataset_name}' not found.")

        def add_conversation_to_dataset(conversation_history, dataset_id):
            for user_input, ai_response in conversation_history:
                client.create_example(
                    inputs={"instruction": "Input to AI_FILMS_BUSINESS_GUY " + user_input},
                    outputs={"output": "Output from AI_FILMS_BUSINESS_GUY " + ai_response},
                    dataset_id=dataset_id
                )

        # Initialize the LLM (Ollama)
        llm = Ollama(model="dolphin-mistral", temperature=0.2)

        # Initialize memory for the conversation chain
        memory = ConversationBufferMemory()

        # Define a prompt template
        prompt_template = ChatPromptTemplate(
            messages=[
                SystemMessagePromptTemplate.from_template("You are a professional business expert in streaming, films productions, internet business models, in Intellectual property, copyright, entertainment, and artificial intelligence."),
                HumanMessagePromptTemplate.from_template("{input}")
            ]
        )

        # Define the LangSmithKVTool class using composition
        class LangSmithKVTool:
            def __init__(self, client, dataset_name):
                self.tool = Tool(name='LangSmithKVTool', func=self.retrieve_from_dataset, description='A tool for RAG using LangSmith KV dataset')
                self.client = client
                self.dataset_name = dataset_name

            def retrieve_from_dataset(self, prompt):
                # Retrieve the dataset ID
                dataset_id = get_dataset_id(self.dataset_name)
                # Read the examples within the dataset using the dataset_id
                dataset_examples = self.client.list_examples(dataset_id=dataset_id)
                # Process the dataset examples to find relevant information
                relevant_info = self.process_query_results(dataset_examples, prompt)
                return relevant_info

            def process_query_results(self, dataset_examples, prompt):
                compiled_info = ""
                for example in dataset_examples:
                    # Guard against missing inputs/outputs so a None value
                    # doesn't break the substring check below
                    instruction = (example.inputs or {}).get('instruction') or ""
                    output = (example.outputs or {}).get('output') or ""
                    if prompt in instruction:
                        compiled_info += output + " "
                return compiled_info.strip()


        # Build the LLMChain (note: the tool itself is not wired into this chain)
        chain = LLMChain(
            llm=llm,
            prompt=prompt_template,
            memory=memory
        )

        # Initialize the LangSmithKVTool instance
        langsmith_kv_tool = LangSmithKVTool(client, dataset_name)

        # Conversation function
        def run_conversation():
            dataset_id = get_dataset_id(dataset_name)
            conversation_history = []
            conversation_active = True
            while conversation_active:
                user_input = input("You: ")
                if user_input.lower() == "end_":
                    conversation_active = False
                    continue
                # Retrieve contextually relevant information using LangSmithKVTool
                context_info = langsmith_kv_tool.retrieve_from_dataset(user_input)
                # Combine user input with context information
                combined_input = f"{context_info} {user_input}" if context_info else user_input
                # Pass the combined input to the LLMChain using the invoke method
                response = chain.invoke({'input': combined_input})
                # Extract the response text
                response_text = response.get('text', "Sorry, I can't provide an answer right now.")
                conversation_history.append((user_input, response_text))
                print("AI:", response_text)
                if "save_" in user_input:
                    add_conversation_to_dataset(conversation_history[-1:], dataset_id)
            return conversation_history

        # Main execution
        if __name__ == "__main__":
            run_conversation()
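As a side note, the substring lookup in process_query_results can fail when a stored example has a None instruction or output. The core retrieval logic can be sketched defensively as a self-contained snippet, using inline mock examples in place of live LangSmith calls (compile_relevant_info is a hypothetical helper name):

```python
def compile_relevant_info(examples, prompt):
    """Concatenate outputs of examples whose instruction contains the prompt text."""
    parts = []
    for ex in examples:
        # Fall back to empty strings so None inputs/outputs can't raise a TypeError
        instruction = (ex.get("inputs") or {}).get("instruction") or ""
        output = (ex.get("outputs") or {}).get("output") or ""
        if prompt and prompt in instruction:
            parts.append(output)
    return " ".join(parts).strip()

# Mock examples standing in for client.list_examples(dataset_id=...) results
examples = [
    {"inputs": {"instruction": "Tell me about streaming deals"},
     "outputs": {"output": "Streaming rights are licensed per territory."}},
    {"inputs": {"instruction": None}, "outputs": {"output": "ignored"}},
]
print(compile_relevant_info(examples, "streaming"))
# → Streaming rights are licensed per territory.
```

Plain substring matching will also only retrieve examples whose instruction literally contains the user's text, which may be part of why the agent appears to ignore the dataset.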

Any help would be much appreciated.
