
🐇 LangChain Swift


🚀 LangChain for Swift. Optimized for iOS, macOS, watchOS (partial support), and visionOS (beta).

Setup

  1. Initialize the library before using it:

        LC.initSet([
            "NOTION_API_KEY":"xx",
            "NOTION_ROOT_NODE_ID":"xx",
            "OPENAI_API_KEY":"xx",
            "OPENAI_API_BASE":"xx",
        ])

  2. Set environment variables such as OPENAI_API_KEY or OPENAI_API_BASE.

For example:

OPENAI_API_KEY=sk-xxx
OPENAI_API_BASE=xxx
SUPABASE_URL=xxx
SUPABASE_KEY=xxx
SERPER_API_KEY=xxx
HF_API_KEY=xxx
BAIDU_OCR_AK=xxx
BAIDU_OCR_SK=xxx
BAIDU_LLM_AK=xxx
BAIDU_LLM_SK=xxx
CHATGLM_API_KEY=xxx
OPENWEATHER_API_KEY=xxx
LLAMA2_API_KEY=xxx
GOOGLEAI_API_KEY=xxx
LMSTUDIO_URL=xxx
NOTION_API_KEY=xxx
NOTION_ROOT_NODE_ID=xxx
BILIBILI_SESSION=xxx
BILIBILI_JCT=xxx

Get started

🔥 Local Model

Please use the 'local' branch, because of its additional project dependencies. Model here

 .package(url: "https://github.com/buhe/langchain-swift", .branch("local"))

Code

Task {
    if let modelPath = Bundle.main.path(forResource: "stablelm-3b-4e1t-Q4_K_M", ofType: "txt") {
        let local = Local(inference: .GPTNeox_gguf, modelPath: modelPath, useMetal: true)
        let r = await local.generate(text: "hi")
        print("🥰\(r!.llm_output!)")
    } else {
        print("⚠️ model not found")
    }
}
💬 Chatbots

Code

let template = """
Assistant is a large language model trained by OpenAI.

Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.

Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.

Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.

{history}
Human: {human_input}
Assistant:
"""

let prompt = PromptTemplate(input_variables: ["history", "human_input"], partial_variable: [:], template: template)


let chatgpt_chain = LLMChain(
    llm: OpenAI(),
    prompt: prompt,
    memory: ConversationBufferWindowMemory()
)
Task(priority: .background)  {
    var input = "I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd."
    
    var res = await chatgpt_chain.predict(args: ["human_input": input])
    print(input)
    print("🌈:" + res!)
    input = "ls ~"
    res = await chatgpt_chain.predict(args: ["human_input": input])
    print(input)
    print("🌈:" + res!)
}

Log

I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.
🌈:
/home/user

ls ~
🌈:
Desktop  Documents  Downloads  Music  Pictures  Public  Templates  Videos

❓ QA bot

The main/Sources/LangChain/vectorstores/supabase/supabase.sql script is required.

ref: https://supabase.com/docs/guides/database/extensions/pgvector

Code

Task(priority: .background)  {
    let loader = TextLoader(file_path: "state_of_the_union.txt")
    let documents = await loader.load()
    let text_splitter = CharacterTextSplitter(chunk_size: 1000, chunk_overlap: 0)

    let embeddings = OpenAIEmbeddings()
    let s = Supabase(embeddings: embeddings)
    for text in documents {
        let docs = text_splitter.split_text(text: text.page_content)
        for doc in docs {
            await s.addText(text: doc)
        }
    }
    
    let m = await s.similaritySearch(query: "What did the president say about Ketanji Brown Jackson", k: 1)
    print("Q🖥️:What did the president say about Ketanji Brown Jackson")
    print("A🚀:\(m)")
}

Log

Q🖥️:What did the president say about Ketanji Brown Jackson
A🚀:[LangChain.MatchedModel(content: Optional("In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. "), similarity: 0.8024642)]
📄 Retriever

Code

Task(priority: .background)  {
    let retriever = WikipediaRetriever()
    let qa = ConversationalRetrievalChain(retriver: retriever, llm: OpenAI())
    let questions = [
        "What is Apify?",
        "When the Monument to the Martyrs of the 1830 Revolution was created?",
        "What is the Abhayagiri Vihāra?"
    ]
    var chat_history:[(String, String)] = []

    for question in questions{
        let result = await qa.predict(args: ["question": question, "chat_history": ConversationalRetrievalChain.get_chat_history(chat_history: chat_history)])
        chat_history.append((question, result!))
        print("⚠️**Question**: \(question)")
        print("✅**Answer**: \(result!)")
    }
}

Log

⚠️**Question**: What is Apify?
✅**Answer**: Apify refers to a web scraping and automation platform.
read(descriptor:pointer:size:): Connection reset by peer (errno: 54)
⚠️**Question**: When the Monument to the Martyrs of the 1830 Revolution was created?
✅**Answer**: The Monument to the Martyrs of the 1830 Revolution was created in 1906.
⚠️**Question**: What is the Abhayagiri Vihāra?
✅**Answer**: The term "Abhayagiri Vihāra" refers to a Buddhist monastery in ancient Sri Lanka.
🤖 Agent

Code

let agent = initialize_agent(llm: OpenAI(), tools: [WeatherTool()])
Task(priority: .background)  {
    let res = await agent.run(args: "Query the weather of this week")
    switch res {
    case Parsed.str(let str):
        print("🌈:" + str)
    default: break
    }
}

Log

🌈: The weather for this week is sunny.
📡 Router
let physics_template = """
You are a very smart physics professor. \
You are great at answering questions about physics in a concise and easy to understand manner. \
When you don't know the answer to a question you admit that you don't know.

Here is a question:
{input}
"""


let math_template = """
You are a very good mathematician. You are great at answering math questions. \
You are so good because you are able to break down hard problems into their component parts, \
answer the component parts, and then put them together to answer the broader question.

Here is a question:
{input}
"""
   
let prompt_infos = [
   [
       "name": "physics",
       "description": "Good for answering questions about physics",
       "prompt_template": physics_template,
   ],
   [
       "name": "math",
       "description": "Good for answering math questions",
       "prompt_template": math_template,
   ]
]

let llm = OpenAI()

var destination_chains: [String: DefaultChain] = [:]
for p_info in prompt_infos {
   let name = p_info["name"]!
   let prompt_template = p_info["prompt_template"]!
   let prompt = PromptTemplate(input_variables: ["input"], partial_variable: [:], template: prompt_template)
   let chain = LLMChain(llm: llm, prompt: prompt, parser: StrOutputParser())
   destination_chains[name] = chain
}
let default_prompt = PromptTemplate(input_variables: [], partial_variable: [:], template: "")
let default_chain = LLMChain(llm: llm, prompt: default_prompt, parser: StrOutputParser())

let destinations = prompt_infos.map{
   "\($0["name"]!): \($0["description"]!)"
}
let destinations_str = destinations.joined(separator: "\n")

let router_template = MultiPromptRouter.formatDestinations(destinations: destinations_str)
let router_prompt = PromptTemplate(input_variables: ["input"], partial_variable: [:], template: router_template)

let llmChain = LLMChain(llm: llm, prompt: router_prompt, parser: RouterOutputParser())

let router_chain = LLMRouterChain(llmChain: llmChain)

let chain = MultiRouteChain(router_chain: router_chain, destination_chains: destination_chains, default_chain: default_chain)
Task(priority: .background)  {
   print("💁🏻‍♂️", await chain.run(args: "What is black body radiation?"))
}

Log

router text: {
    "destination": "physics",
    "next_inputs": "What is black body radiation?"
}
💁🏻‍♂️ str("Black body radiation refers to the electromagnetic radiation emitted by an object that absorbs all incident radiation and reflects or transmits none. It is an idealized concept used in physics to understand the behavior of objects that emit and absorb radiation. \n\nAccording to Planck\'s law, the intensity and spectrum of black body radiation depend on the temperature of the object. As the temperature increases, the peak intensity of the radiation shifts to shorter wavelengths, resulting in a change in color from red to orange, yellow, white, and eventually blue.\n\nBlack body radiation is important in various fields of physics, such as astrophysics, where it helps explain the emission of radiation from stars and other celestial bodies. It also plays a crucial role in understanding the behavior of objects at high temperatures, such as in industrial processes or the study of the early universe.\n\nHowever, it\'s worth noting that while I strive to provide accurate and concise explanations, there may be more intricate details or specific mathematical formulations related to black body radiation that I haven\'t covered.")

Parser

ObjectOutputParser
let demo = Book(title: "a", content: "b", unit: Unit(num: 1))

var parser = ObjectOutputParser(demo: demo)

let llm = OpenAI()

let t = PromptTemplate(input_variables: ["query"], partial_variable:["format_instructions": parser.get_format_instructions()], template: "Answer the user query.\n{format_instructions}\n{query}\n")

let chain = LLMChain(llm: llm, prompt: t, parser: parser, inputKey: "query")
Task(priority: .background)  {
    let parsed = await chain.run(args: "The book title is 123 , content is 456 , num of unit is 7")
    switch parsed {
    case Parsed.object(let o): print("🚗object: \(o)")
    default: break
    }
}
EnumOutputParser
enum MyEnum: String, CaseIterable {
    case value1
    case value2
    case value3
}
for v in MyEnum.allCases {
    print(v.rawValue)
}
let llm = OpenAI()
let parser = EnumOutputParser<MyEnum>(enumType: MyEnum.self)
let i = parser.get_format_instructions()
print("ins: \(i)")
let t = PromptTemplate(input_variables: ["query"], partial_variable: ["format_instructions": parser.get_format_instructions()], template: "Answer the user query.\n{format_instructions}\n{query}\n")

let chain = LLMChain(llm: llm, prompt: t, parser: parser, inputKey: "query")
Task(priority: .background) {
    let result = await chain.run(args: "Value is 'value2'")
    switch result {
    case .enumType(let e):
        print("🦙enum: \(e)")
    default:
        print("parse failed: \(result)")
    }
}

Other

Stream Chat - requires the ChatOpenAI model
Task(priority: .background)  {
    let eventLoopGroup = MultiThreadedEventLoopGroup(numberOfThreads: 1)
    
    let httpClient = HTTPClient(eventLoopGroupProvider: .shared(eventLoopGroup))
    
    defer {
        // it's important to shutdown the httpClient after all requests are done, even if one failed. See: https://github.com/swift-server/async-http-client
        try? httpClient.syncShutdown()
    }
    let llm = ChatOpenAI(httpClient: httpClient, temperature: 0.8)
    let answer = await llm.generate(text: "Hey")
    print("🥰")
    for try await c in answer!.getGeneration()! {
        if let message = c {
            print(message)
        }
    }
}

🌐 Trusted by

Convict Conditioning Investment For Long Term AI Summary AI Pagily B 站 AI 总结 帮你写作文

Open an issue or PR to add your app.

🚗 Roadmap

  • LLMs
    • OpenAI
    • Hugging Face
    • Dalle
    • ChatGLM
    • ChatOpenAI
    • Baidu
    • Llama 2
    • Gemini
    • LMStudio API
    • Local Model
  • Vectorstore
    • Supabase
    • SimilaritySearchKit
  • Store
    • BaseStore
    • InMemoryStore
    • FileStore
  • Embedding
    • OpenAI
    • Distilbert
  • Chain
    • Base
    • LLM
    • SimpleSequentialChain
    • SequentialChain
    • TransformChain
    • Router
      • LLMRouterChain
      • MultiRouteChain
    • QA
      • ConversationalRetrievalChain
  • Tools
    • Dummy
    • InvalidTool
    • Serper
    • Zapier
    • JavascriptREPLTool(Via JSC)
    • GetLocation(Via CoreLocation)
    • Weather
    • TTSTool
  • Agent
    • ZeroShotAgent
  • Memory
    • BaseMemory
    • BaseChatMemory
    • ConversationBufferWindowMemory
    • ReadOnlySharedMemory
  • Text Splitter
    • CharacterTextSplitter
    • RecursiveCharacterTextSplitter
  • Document Loader
  • OutputParser
    • MRKLOutputParser
    • ListOutputParser
    • SimpleJsonOutputParser
    • StrOutputParser
    • RouterOutputParser
    • ObjectOutputParser
    • EnumOutputParser
    • DateOutputParser
  • Prompt
    • PromptTemplate
    • MultiPromptRouter
  • Callback
    • StdOutCallbackHandler
  • LLM Cache
    • InMemory
    • File
  • Retriever
    • WikipediaRetriever
    • PubmedRetriever
    • ParentDocumentRetriever

👍 Got Ideas?

Open an issue, and let's discuss!

Join Slack: https://join.slack.com/t/langchain-mobile/shared_invite/zt-26tzdzb2u-8RnP7hDQz~MWMg8EeIu0lQ

💁 Contributing

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.

langchain-swift's People

Contributors

bsorrentino, buhe, stacksharebot


langchain-swift's Issues

env.txt Security

Could the API keys be injected at initialization instead of being kept in the project files? Having them in the project feels insecure; they could be recovered by unpacking the app. Thanks.

Add retriever

qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)

LangChain Module not found

So I see that the Share app has version 0.7.0 while the git version is 0.16.0, and I am unable to build my app because it cannot find LangChain, although the package is there.

Appreciate the help....

Hello Everyone,

I am new to iOS development. I saw a YouTube tutorial on how to make a PDF Q&A bot with Python, and I wanted to make an iOS app for it. I found this langchain-swift repository, but because I am very new I am having difficulty understanding it. Please help me understand.

Fatal Crash on ChatOpenAI

This line -

let buffer = try! await openAIClient.chats.stream(model: model, messages: [.user(content: text)], temperature: temperature)

is causing fatal crashes when awaiting a response specifically for me on slow internet. I would recommend not force unwrapping here.

Thread 15: Fatal error: 'try!' expression unexpectedly raised an error: HTTPClientError.deadlineExceeded
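A hedged sketch of a fix: replacing the try! with do/catch lets a timeout surface as a recoverable error instead of a crash. The streamChats function below is an illustrative stand-in, not the real client API.

```swift
enum StreamError: Error { case deadlineExceeded }

// Stand-in for the throwing stream call from the issue above.
func streamChats(failing: Bool) throws -> [String] {
    if failing { throw StreamError.deadlineExceeded }
    return ["hello"]
}

// With `try!`, a thrown error (e.g. a deadline exceeded on a slow connection)
// terminates the process. `do`/`catch` lets the caller recover instead.
do {
    let buffer = try streamChats(failing: true)
    print(buffer)
} catch {
    print("Stream request failed: \(error)")
}
```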

Thread Performance Checker

I have a warning in macOS app. Xcode 15

Thread Performance Checker: Thread running at User-initiated quality-of-service class waiting on a lower QoS thread running at Default quality-of-service class. Investigate ways to avoid priority inversions
...

Probably related to https://developer.apple.com/documentation/xcode/diagnosing-performance-issues-early

Specifically, you're using the dispatch_group_wait function, which doesn't provide a way to avoid priority inversion; thus, your waiting thread is susceptible to inversion. It looks like that's what's happening here.

The term "priority inversion" refers to a situation where a higher priority thread has to wait for a lower priority thread. This is undesirable because we may want higher priority threads to execute faster. In the context of your error, it seems that a high-priority thread is waiting for a lower-priority thread, leading to a warning.

As for the dispatch_group_wait function, it is used to pause a thread until all tasks in a certain group are completed. This can cause an issue if the paused thread is of higher priority than the tasks it is waiting for, which is referred to as priority inversion. To avoid this, you might want to consider using other mechanisms that help prevent priority inversion.
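One way to avoid the blocking wait, sketched below under the assumption that the work can be restructured around DispatchGroup.notify (the queue and QoS choices are illustrative):

```swift
import Dispatch

var finished = false
let group = DispatchGroup()
let queue = DispatchQueue.global(qos: .utility)
let demoGate = DispatchSemaphore(value: 0)  // only to keep this demo process alive

group.enter()
queue.async {
    // ... background work at a known QoS ...
    group.leave()
}

// notify delivers completion asynchronously, so no higher-QoS thread
// blocks in dispatch_group_wait (the source of the inversion warning).
group.notify(queue: queue) {
    finished = true
    print("work finished")
    demoGate.signal()
}

demoGate.wait()
```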

Create RetrievalQA chain

  • BaseRetrievalQA
    • combine_documents_chain
  • StuffDocumentsChain
    • combine_docs
    • _get_inputs
  • RetrievalQA
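A hypothetical Swift sketch of how these pieces might fit together; the names mirror the Python API and are illustrative, not an actual implementation:

```swift
struct Document { let pageContent: String }

protocol Retriever {
    func getRelevantDocuments(query: String) -> [Document]
}

// "Stuff" strategy: concatenate every retrieved document into one context string.
struct StuffDocumentsChain {
    func combineDocs(_ docs: [Document]) -> String {
        docs.map { $0.pageContent }.joined(separator: "\n\n")
    }
}

struct RetrievalQA {
    let retriever: Retriever
    let combineDocumentsChain = StuffDocumentsChain()

    // A real chain would send the combined context plus the question to an LLM.
    func run(query: String) -> String {
        let docs = retriever.getRelevantDocuments(query: query)
        return combineDocumentsChain.combineDocs(docs) + "\n\nQuestion: " + query
    }
}
```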

[Feature Request] Data Extraction

I would like to request the addition of support for Data Extraction using Langchain like kor. This would help users extract structured data from text.

from langchain.chat_models import ChatOpenAI
from kor import create_extraction_chain, Object, Text

llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0,
    max_tokens=2000,
    frequency_penalty=0,
    presence_penalty=0,
    top_p=1.0,
)

schema = Object(
    id="player",
    description=(
        "User is controlling a music player to select songs, pause or start them or play"
        " music by a particular artist."
    ),
    attributes=[
        Text(
            id="song",
            description="User wants to play this song",
            examples=[],
            many=True,
        ),
        Text(
            id="album",
            description="User wants to play this album",
            examples=[],
            many=True,
        ),
        Text(
            id="artist",
            description="Music by the given artist",
            examples=[("Songs by paul simon", "paul simon")],
            many=True,
        ),
        Text(
            id="action",
            description="Action to take one of: `play`, `stop`, `next`, `previous`.",
            examples=[
                ("Please stop the music", "stop"),
                ("play something", "play"),
                ("play a song", "play"),
                ("next song", "next"),
            ],
        ),
    ],
    many=False,
)

chain = create_extraction_chain(llm, schema, encoder_or_encoder_class='json')
chain.run("play songs by paul simon and led zeppelin and the doors")['data']
{'player': {'artist': ['paul simon', 'led zeppelin', 'the doors']}}

BlibiliLoader requires login

https://api.bilibili.com/x/web-interface/view?bvid=BV1fu411G7e3

https://api.bilibili.com/x/player/v2?cid=1214627038&aid=531599394

https://aisubtitle.hdslb.com/bfs/ai_subtitle/prod/5315993941214627038f17d00723511cf7cb3833a8abf040475?auth_key=1690774936-d09569c323024c5fba68a98c88cc5a8c-0-fccea3cd8600050c09983982beabe8de

BlibiliLoader requires a login, and langchain does not fetch the AI-generated subtitles correctly.

Supabase insert errors

I'm getting errors when using Supabase. I don't know what this error means: Insert Error: unacceptableStatusCode(400). I assume it's related to inserting data into my database. It would be nice if you also showed how your Supabase database is set up. If you can point me in the right direction it would be much appreciated. Also, adding a localized description of these errors would help debugging, e.g. RPC Error: unacceptableStatusCode(404).

Chat Stream

It's unclear how to get the stream output from a chat. I assumed you could do something like this using AsyncSequence:

 let output = await chatgpt_chain.predict(args: input)
 for try await line in output {
     print(line)
 }

Please show an example of how to use it with chats. Thanks
