
langsmith-sdk's Introduction

LangSmith Client SDKs


This repository contains the Python and JavaScript SDKs for interacting with the LangSmith platform.

LangSmith helps your team debug, evaluate, and monitor your language models and intelligent agents. It works with any LLM application and includes native integrations with the LangChain Python and LangChain JS open-source libraries.

LangSmith is developed and maintained by LangChain, the company behind the LangChain framework.

Quick Start

To get started with the Python SDK, install the package, then follow the instructions in the Python README.

pip install -U langsmith
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=ls_...

Then start tracing your app:

import openai
from langsmith import traceable
from langsmith.wrappers import wrap_openai

# Wrap the OpenAI client so every call is logged as a run in LangSmith.
client = wrap_openai(openai.Client())

client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello, world"}],
    model="gpt-3.5-turbo",
)
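
The imported traceable decorator can also trace your own functions as parent runs. A minimal sketch (the function name and prompt are illustrative, not part of the SDK):

@traceable  # each call to this function is logged as a run in LangSmith
def my_pipeline(user_input: str) -> str:
    result = client.chat.completions.create(
        messages=[{"role": "user", "content": user_input}],
        model="gpt-3.5-turbo",
    )
    return result.choices[0].message.content

my_pipeline("Hello, world")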

To get started with the JavaScript / TypeScript SDK, install the package, then follow the instructions in the JS README.

yarn add langsmith
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=ls_...

Then start tracing your app!

import { OpenAI } from "openai";
import { traceable } from "langsmith/traceable";
import { wrapOpenAI } from "langsmith/wrappers";

const client = wrapOpenAI(new OpenAI());

await client.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ content: "Hi there!", role: "user" }],
});

This returns the standard OpenAI response object, for example:

{
  id: 'chatcmpl-8sOWEOYVyehDlyPcBiaDtTxWvr9v6',
  object: 'chat.completion',
  created: 1707974654,
  model: 'gpt-3.5-turbo-0613',
  choices: [
    {
      index: 0,
      message: { role: 'assistant', content: 'Hello! How can I help you today?' },
      logprobs: null,
      finish_reason: 'stop'
    }
  ],
  usage: { prompt_tokens: 10, completion_tokens: 9, total_tokens: 19 },
  system_fingerprint: null
}

Cookbook

For tutorials on how to get more value out of LangSmith, check out the LangSmith Cookbook repo.

Documentation

To learn more about the LangSmith platform, check out the docs.

langsmith-sdk's People

Contributors

adham-elarabawy, agola11, atomicjon, baskaryan, bracesproul, bvs-langchain, chasemcdo, dependabot[bot], dqbd, efriis, eltociear, eyurtsev, h4r5h4, hinthornw, hoangnguyen689, hwchase17, jacoblee93, jakerachleff, joshuasundance-swca, langchain-infra, madams0013, nfcampos, obi1kenobi, samnoyes, savvasmohito, tmk04, vowelparrot, yue-fh


langsmith-sdk's Issues

Custom Dashboards

Feature request

Enable developers to build their own visualizations on LangSmith using matplotlib or plotly.

Motivation

I would like to visualize certain performance metrics that are not sufficiently shown in the built-in LangSmith graphs, so that leads and developers can understand exactly what is happening.
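
In the meantime, run data can already be exported through the SDK and plotted externally. A minimal sketch with matplotlib, assuming a project named "my-project" and deriving latency from each run's timestamps:

from datetime import datetime, timedelta

import matplotlib.pyplot as plt
from langsmith import Client

client = Client()
runs = client.list_runs(
    project_name="my-project",  # hypothetical project name
    run_type="llm",
    start_time=datetime.now() - timedelta(days=1),
)

# Latency per run, derived from the start/end timestamps.
latencies = [
    (run.end_time - run.start_time).total_seconds()
    for run in runs
    if run.end_time is not None
]

plt.hist(latencies, bins=30)
plt.xlabel("Latency (s)")
plt.ylabel("Run count")
plt.title("LLM run latency, last 24 hours")
plt.show()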

LangSmith without LangChain imports Fail

Issue you'd like to raise.

When running the LangSmith without LangChain Hello World example for TypeScript, I get the following error when trying to import the required components.

Module '"langsmith"' has no exported member 'RunTreeConfig'.

Suggestion:

Fix imports as seen in:

Issue: Test Compare has inaccurate aggregated results

Issue you'd like to raise.

When you select results from your test tables and compare them, the aggregated results at the bottom can appear in a different order than the displayed outputs (see screenshots).

You'll note that in these screenshots I clicked ZS, FS, COT in that order, and the headers and results columns of the compare view display them in that order.

But at the very bottom, the aggregated results are shown in creation order: COT, FS, ZS.

Suggestion:

It looks like this is because the aggregated results at the bottom are sorted by creation date, while the results display is based on the order you clicked them in.

Suggestion would be to remove the sort applied to the aggregated results in the compare test view.

about renaming to langsmith

Hello, thank you for your daily development.
I really appreciate it.

I'm working on packaging langchain in NixOS/nixpkgs.
I found this issue. Is it right to understand that langchainplus-sdk has been wholly replaced by langsmith?
If so, I'm planning to remove the langchainplus-sdk package from nixpkgs.

I also noticed in the above issue that this code is now open source.
Thanks for making it public!

Unexpected keyword argument 'allowed_methods'

Whenever I try to run the most basic code from the documentation -

from langsmith import Client
client = Client()

I get:

  File "/opt/homebrew/lib/python3.11/site-packages/langsmith/client.py", line 106, in _default_retry_config
    return Retry(
           ^^^^^^
TypeError: Retry.__init__() got an unexpected keyword argument 'allowed_methods'

Not sure why this is happening. Can you please help?
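
For context, urllib3 renamed method_whitelist to allowed_methods in version 1.26, so this error usually means an older urllib3 is pinned in the environment. Upgrading it is a likely (though unconfirmed) fix:

pip install -U urllib3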

version tag conflict

This repo still carries the langchainplus-sdk tags as-is.
As you can see from the tags and releases, langchainplus-sdk's tags conflict with langsmith's.
Since v0.0.7, releases have been tagged daily with just "v".
Although the PyPI packages seem to be released successfully from CI, I cannot trust that a given tag is the correct version without comparing the source code.

I believe that proper tagging is helpful for both users and maintainers.

Thanks again for developing this. ❤️

[JS] Forever pending state due to user misconfiguration

LangSmith trace enters a forever pending state due to a user misconfiguration. The error handling can be improved here.

Shared the LangSmith trace with @jacoblee93 privately.

To replicate, use the incorrect metadata configuration that makes a trace end up in a pending state:

chain.call({
  question,
  metadata: { experiment: 'summary' }, // metadata here is treated as a chain input
})

while the correct configuration makes a trace complete as expected:

chain.call({
  question,
}, {
  metadata: { experiment: 'summary' }, // metadata passed in the call options argument
})

Issue: Run Count and Total Tokens not updating

Issue you'd like to raise.

After a few hours, the number of runs and the token count stopped increasing in all my apps. New requests are still being traced correctly (LangSmith is really helpful; I don't think I could use anything else).
This might be caused by the fact that version 0.0.56 was released in the meantime.
Still, after updating, I see no change in the run and token counts.
Any idea where this might come from?
Thank you

Suggestion:

No response

Allow name customization for LLM Objects

Feature request

Allow name customization for LLM Objects of each run

Motivation

Right now, name customization is only supported for LLMChain, but LLMChain doesn't support .generate, which makes parallel calls and returns rich outputs.
For our use case, each trace has multiple LLM (object) calls, and it gets confusing in the UI to identify each call; we have to click through all the items to find the one we need. Custom names would make this much easier.

Adding Proxy Feature

Problem:
Currently, the JavaScript SDK lacks support for proxying requests, which is essential for users in certain network environments or security configurations.

Feature Request:
I'd like to propose adding a proxy feature to the JavaScript SDK to address this limitation. This feature would allow users to configure a proxy server for SDK requests, ensuring that the SDK can be used seamlessly in various network setups.

Not able to get run id based on tag or metadata

I want a user to be able to provide feedback on an inference result within my chatbot ui.

The scenario is:

  • user asks a question
  • chatbot returns an answer
  • user provides feedback on the answer (good/bad, for example)

I would think the ideal way to write feedback to the LangSmith trace/run is to get the run id when calling the chain. I have tried two ways of building chains:

qa_chain = RetrievalQA.from_chain_type(
    llm,
    retriever=retriever,
    return_source_documents=True,
    chain_type_kwargs={"prompt": prompt},
)

and

qa_chain = (
    {"context": itemgetter("question") | retriever,
     "question":itemgetter("question")}
    | prompt 
    | llm 
)

Neither chain appears to expose the run id in its result as described in https://docs.smith.langchain.com/tracing/tracing-faq#how-do-i-get-the-run-id-from-a-call.

So this didn't work.

I was able to tag runs, but when I try to retrieve runs by tag (I also tried metadata), I get

langsmith.utils.LangSmithError: Failed to get https://api.langchain.plus/runs in LangSmith API. {"detail":"At least one of 'session', 'id', 'parent_run', or 'reference_example' must be specified"}

which seems to tell me that although the LangSmith documentation says list(client.list_runs(filter='has(metadata, \'{"variant": "abc123"}\')')) should work, it didn't for me.

What am I missing? All I want is to write user feedback to a run in LangSmith. Thank you.
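
For what it's worth, here is a sketch of one pattern that should cover this, reusing the qa_chain defined above and assuming reasonably recent langchain/langsmith versions (collect_runs is the documented way to capture run ids):

from langchain import callbacks
from langsmith import Client

client = Client()

with callbacks.collect_runs() as cb:
    result = qa_chain.invoke({"question": "What is LangSmith?"})
    run_id = cb.traced_runs[0].id  # run id of the traced call

# Attach the user's feedback to that run.
client.create_feedback(run_id, "user_score", score=1, comment="good answer")

Separately, the error message above suggests that list_runs also needs a project_name (the "session") passed alongside filter.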

Go Support

It'd be great to see an implementation allowing langchaingo-based programs to integrate with langsmith.

Issue: LANGSMITH API KEY access request

Issue you'd like to raise.

Hello development team, I hope this message reaches you. I took a look at the features LangSmith brings to the community, and it's amazing; it's exactly the solution my team needs. I'd like to access it, but I can't find my API key. Can you help me, please?
Email: [email protected]

Suggestion:

No response

Delete an "organization"

Feature request

In the "Manage Organization" section I would need a way to delete organizations that are no longer needed.

Motivation

Clean up the list of organizations that I created as a trial or that I no longer manage.

Issue: `list_runs` does not include tokens for each run.

Issue you'd like to raise.

The list_runs method doesn't include token counts in the API response for each run, even though the Pydantic response schema lists them.

output = langsmith_client.list_runs(
    project_id="5b5b88d3-af77-4a64-9607-51782ac7a62f",
    filter=f"has(tags, '{agent_id}')",
)

OS info:

LangChain Environment:
sdk_version           : 0.0.26
library               : langsmith
platform              : macOS-13.3-arm64-arm-64bit
runtime               : python
runtime_version       : 3.11.4
langchain_version     : 0.0.285
docker_version        : Docker version 24.0.5, build ced0996
docker_compose_command: docker compose
docker_compose_version: Docker Compose version v2.20.2-desktop.1

Suggestion:

Return all available fields for each run in both the list and get methods of the SDK.
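
A possible workaround in the meantime, sketched under the assumption that fetching a single run returns the full payload, including the token fields the list endpoint omits:

from langsmith import Client

client = Client()
agent_id = "my-agent"  # hypothetical tag, as in the snippet above

runs = client.list_runs(
    project_id="5b5b88d3-af77-4a64-9607-51782ac7a62f",
    filter=f"has(tags, '{agent_id}')",
)

for run in runs:
    full_run = client.read_run(run.id)  # re-fetch the individual run
    print(full_run.id, full_run.total_tokens)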

Multi-user support

It would be great to be able to set up an organization and invite other users to LangSmith, so that multiple users can monitor the prod environment, share and contribute to datasets, and run the same tests.

[JS] run on dataset functionality for js/ts?

The functionality to run evaluators on a dataset using run_on_dataset in Python is exactly what we are looking for. Right now we've built our LangChain implementation in TypeScript. I am wondering whether you are also working on the same functionality for JS/TS, and if so, do you have any expectations on when?

Front-end 404 after installing

Hello,

I cannot seem to get a local LangSmith installation to run an operational server.

No problems are indicated when starting, and the logs look clean, but the front end returns a 404. (Screenshots: startup output, logs, front end.)

Is there some initialization procedure that I am missing? I have also tried to send traces but receive messages regarding internal server errors.

Any help would be much appreciated.

Issue: LangSmith doesn't work and can't trace any data

Issue you'd like to raise.

I tried LangSmith in Google Colab:

import os
os.environ['OPENAI_API_KEY'] = 'My_OPENAI_API_KEY'
os.environ['LANGCHAIN_TRACING_V2'] = 'true'
os.environ['LANGCHAIN_ENDPOINT'] = 'https://api.smith.langchain.com'
os.environ['LANGCHAIN_API_KEY'] = 'My_LANGCHAIN_API_KEY'
os.environ['LANGCHAIN_PROJECT'] = 'default'

from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI()
llm.predict("Hello, world!")
response: Hello! How can I assist you today?

But LangSmith doesn't capture any data, and I can't find errors anywhere.

PS: all my API keys are valid.
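
A hedged guess for this symptom: traces are uploaded from a background thread, so in notebook or otherwise short-lived environments it can help to flush pending uploads before the runtime is torn down. A sketch using langchain's tracer flush helper, reusing the llm defined above:

from langchain.callbacks.tracers.langchain import wait_for_all_tracers

llm.predict("Hello, world!")
wait_for_all_tracers()  # block until all pending trace uploads finish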

Suggestion:

No response

DOC: the @traceable decorator docs example not working

Issue with current documentation:

The script below, taken from the documentation page, does not trace anything to my LangSmith project. I have created a new API key, but nothing seems to work.

import os
from datetime import datetime
from typing import List, Optional, Tuple

import openai

openai.api_key = os.environ['OPENAI_API_KEY']
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = ""
os.environ["LANGCHAIN_PROJECT"] = ""


from langsmith.run_helpers import traceable

# We will label this function as an 'llm' call to better organize it
@traceable(run_type="llm")
def call_openai(
    messages: List[dict], model: str = "gpt-3.5-turbo", temperature: float = 0.0
) -> str:
    return openai.chat.completions.create(
        model=model,
        messages=messages,
        temperature=temperature,
    )


# The 'chain' run_type can correspond to any function and is the most common
@traceable(run_type="chain")
def argument_generator(query: str, additional_description: str = "") -> str:
    return (
        call_openai(
            [
                {
                    "role": "system",
                    "content": f"You are a debater making an argument on a topic."
                    f"{additional_description}"
                    f" The current time is {datetime.now()}",
                },
                {"role": "user", "content": f"The discussion topic is {query}"},
            ]
        )
        .choices[0]
        .message.content
    )


@traceable(run_type="chain")
def critic(argument: str) -> str:
    return (
        call_openai(
            [
                {
                    "role": "system",
                    "content": f"You are a critic."
                    "\nWhat unresolved questions or criticism do you have after reading the following argument?"
                    "Provide a concise summary of your feedback.",
                },
                {"role": "system", "content": argument},
            ]
        )
        .choices[0]
        .message.content
    )


@traceable(run_type="chain")
def refiner(
    query: str, additional_description: str, current_arg: str, criticism: str
) -> str:
    return (
        call_openai(
            [
                {
                    "role": "system",
                    "content": f"You are a debater making an argument on a topic."
                    f"{additional_description}"
                    f" The current time is {datetime.now()}",
                },
                {"role": "user", "content": f"The discussion topic is {query}"},
                {"role": "assistant", "content": current_arg},
                {"role": "user", "content": criticism},
                {
                    "role": "system",
                    "content": "Please generate a new argument that incorporates the feedback from the user.",
                },
            ]
        )
        .choices[0]
        .message.content
    )


@traceable(run_type="chain")
def argument_chain(query: str, additional_description: str = "") -> str:
    argument = argument_generator(query, additional_description)
    criticism = critic(argument)
    return refiner(query, additional_description, argument, criticism)


result = argument_chain(
    "Whether sunshine is good for you.",
    additional_description="Provide a concise, few sentence argument on why sunshine is good for you.",
)
print(result)

Idea or request for content:

No response

Feature Request: Integrate prompt management capabilities into LangSmith

LangSmith is great at visualizing and debugging calls made to LLMs, but (at least to my knowledge) it doesn't have the capability to manage prompts. It would be great to see capabilities like https://www.tensorflow.org/tfx/guide/mlmd integrated, so that LangSmith becomes a one-stop tool covering the entire pipeline.

Expectations:

  1. Ability to create prompt artifacts, LLM artifacts, and custom artifacts.
  2. Create multiple reusable prompt templates that users fetch to add their prompts.
  3. Connect artifacts with each other through an edge (call it an event, for example) to create a lineage graph.
  4. Enable versioning across different artifacts.
  5. Ability to export these artifacts to local/cloud storage.
  6. Visualization of, and traversal through, the graph created in pt. 3.

[Feature Request] Show Prompt vs. Completion Tokens per Run

Current Behavior

Currently, the LangSmith UI appears to only show prompt/completion token counts for an entire project (see screenshot).

The only per-run view of this data is total tokens, not prompt tokens vs. completion tokens (see screenshot).

Desired Behavior

I'd like to be able to show columns for prompt/completion tokens per run, when viewing a project.

[Feature Request?] OpenTelemetry

Hi,

Are there any plans to use the OpenTelemetry SDK and APIs as the basis for generating observability signals, rather than a proprietary tracing implementation?

Error: Failed to create run: 409 Conflict {"detail":"Run already exists"}

Hey, I am trying to integrate LangSmith into my LangChain chains, and I am getting the following error when I invoke a chain:

Error in handler Ce, handleLLMStart: Error: Failed to create run: 409 Conflict {"detail":"Run already exists"}

I am using Deno to handle my functions and have initialized all env vars (screenshot).

Here is a sample of my code -

import { ChatOpenAI } from "langchain/chat_models/openai";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "langchain/prompts";

import { Client } from "langsmith";
import { LangChainTracer } from "langchain/callbacks";
import { LANGCHAIN_API_KEY, LANGCHAIN_ENDPOINT, LANGCHAIN_PROJECT } from "../_shared/env.ts";

... more imports

const prompt = PromptTemplate.fromTemplate(spellCorrectPromptString);

... more code    

serve(async (req) => {
... more code

  const client = new Client({
    apiUrl: LANGCHAIN_ENDPOINT,
    apiKey: LANGCHAIN_API_KEY,
  });

  const tracer = new LangChainTracer({
    projectName: LANGCHAIN_PROJECT,
    client,
  });

  try {
    const { input: query, platform } = await req.json();  

    const llm = new ChatOpenAI({
      ...openAIOptions,
      callbacks: [tracer],
      tags: ["spell-correct"]
    }, openAIConfiguration);
    
    // For a non-streaming response we can just await the result of the
    // chain.run() call and return it.
    const chain = new LLMChain({ llm, prompt, verbose: isDevelopment});

    const response = await chain.call({ query }).catch((e) => console.error(e));

    return new Response(JSON.stringify(response), {
      headers: { ...corsHeaders, "Content-Type": "application/json" },
    });
  } catch (e) {
    return new Response(JSON.stringify({ error: e.message }), {
      status: 500,
      headers: { ...corsHeaders, "Content-Type": "application/json" },
    });
  }
});

The chain shows up on the LangSmith dashboard, but it is shown as pending (screenshot).

If I remove LangSmith, everything works perfectly, but once I integrate it like this, I get 502s from my functions (as the LangSmith requests themselves fail).

Please let me know if you need anything else from my end.

Token information is not captured after upgrading

Recently we upgraded LangSmith from 0.0.12 to 0.0.33 (the latest version), and we are no longer able to see token information while "Logging Traces with LangChain"; to test post-upgrade, I am following the standard sample. We tried downgrading LangSmith to 0.0.28 and 0.0.27 but are still facing the issue.

from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI()
llm.predict("Hello, world!")

(screenshot)

Please let me know if this is a new issue or a known issue with a solution.

[Bug] No Completion Tokens shown in UI

Hi, I am using the traceable decorator for logging to LangSmith. In the UI I only see prompt tokens; the completion token count is always zero.

I am attaching a screenshot for reference. I am not using LangChain.

Get Feedback comments from LangSmith

Feature request

When calling list_runs() from the client, only the feedback scores are returned, not the comments. The feedback_stats should also contain the feedback comments.

Motivation

I want to run text analysis and easily display all the runs that have negative feedback.
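
Until then, comments can be pulled per run through the feedback endpoint. A sketch, assuming a project named "my-project":

from langsmith import Client

client = Client()

for run in client.list_runs(project_name="my-project"):  # hypothetical project name
    for fb in client.list_feedback(run_ids=[run.id]):
        if fb.score is not None and fb.score <= 0:  # negative feedback only
            print(run.id, fb.key, fb.score, fb.comment)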

Issue: Token counts not returned by `list_runs`

Issue you'd like to raise.

Following the cookbook: https://github.com/langchain-ai/langsmith-cookbook/blob/main/exploratory-data-analysis/exporting-llm-runs-and-feedback/llm_run_etl.ipynb

list_runs is supposed to return run.prompt_tokens, run.completion_tokens, and run.total_tokens. However, all three are missing:

runs = list(client.list_runs(
    project_name=project_name,
    run_type="llm",
    start_time=start_time,
))

Other info:

  • hosted LangSmith endpoint
  • runtime: python
  • platform: Linux-5.19.0-1025-aws-x86_64-with-glibc2.35
  • sdk_version: 0.0.33
  • library_version: 0.0.283
  • runtime_version: 3.11.4
  • langchain_version: 0.0.283

Issue: Self-hosted can't find the API

Issue you'd like to raise.

Hi everyone,

I'm trying to self-host LangSmith. I installed package version 0.0.69 and then ran

langsmith start

It's running in a VM in the Azure cloud. I opened all the ports just to be sure there is no error there; there is also no active firewall.
Docker looks fine (screenshot).

At the moment of opening the UI, it can't find the API (screenshot).

Any idea how I can continue?

Suggestion:

No response

Issue: Getting the error "License key is not valid"

Issue you'd like to raise.

Hi Team,
We are using LangSmith to create and test LLM applications internally, but while running the docker.yml file we get the error below:
Exception: License key is not valid

We are not passing any license key in the Docker file. It was working previously, but for the last 2-3 days it has not been working.
We are running the Docker images locally on a self-hosted LangSmith.
Can someone please provide some input on how to fix this error?

(error screenshot)

Suggestion:

No response

access to langsmith

Feature request

I have requested access to LangSmith; kindly provide me with access.

Motivation

I have requested access to LangSmith; kindly provide me with access.

Support x86_64

The langchain/langchainplus-backend:latest image only supports linux/arm64/v8.

Please support x86_64.

Thank you

Bulk Deletion of Test Runs

Feature request

If you have selected multiple test runs it would be great if there was an option to bulk delete them rather than having to do it individually.

The popup at the bottom (screenshot) seems like a great place to fit the new button.

Motivation

As a user iterating on test cases within LangSmith, I often end up with several erroneous runs that aren't worth keeping for comparison. This feature would enable quicker management of test runs.

LangSmith expose is not working with Azure OpenAI services

Issue you'd like to raise.

Hi everyone, I'm trying to deploy and use LangSmith locally.
I deployed it in a Docker container using

langsmith start --expose --openai-api-key=<my azure OpenAI key>

The Docker container looks good (screenshot).
I opened all the used ports to avoid any problem there; I'm running LangSmith on a remote computer.

I set up the environment variables:

LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT=https://cc23-20-79-217-xxx.ngrok.io
LANGCHAIN_API_KEY=

but the interface is not loading the projects (screenshot).

when I try to access the langsmith endpoint it returns

{
"detail": "Not Found"
}

Using the chat example from this repo:
https://github.com/langchain-ai/langsmith-cookbook/tree/main/feedback-examples/streamlit

I can see at the endpoint https://cc23-20-79-217-xxx.ngrok.io that the runs are being tracked, but I can't see them in the frontend.

Debugging the front end, it fails while trying to fetch the tenants: it requests them from http://127.0.0.1:1984/tenants, while, if I'm understanding correctly, it should fetch them from http://20.79.217.xxx:1984/tenants (screenshot).

Could it be a problem with Azure OpenAI, or did I do something wrong with the installation?

Thanks in advance

Suggestion:

No response
