
jellychat's Introduction

Hi there 👋


.NET C# programmer here who has shifted to modern web app development using Blazor.

I develop Single Page Applications (SPA) with Blazor.

My skills

dotnet CSharp Blazor





The tools I use

Webflow GitHub Visual Studio Code Visual Studio Azure DevOps git





I can work with

SQL Html CSS WinUI






jellychat's People

Contributors

0ptim, eric-volz


jellychat's Issues

Improve JellyChat Experience

Use LangChain agents and tools to give the LLM access to live on-chain data using the Ocean API (a tool sketch follows the list below).

  • Try to use gpt-3.5-turbo for the agent instead of text-davinci-003 to drive API costs down.
  • Give the agent or the final output more personality. It should know that it is Jelly and behave like one.
  • Make sure that each client session has its own window memory.
  • Fix follow-up questions
  • Final testing
  • Go-Live
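
A minimal sketch of what such a tool could look like, assuming LangChain's Tool wrapper; the Ocean endpoint path and response fields are illustrative, not the exact API schema:

import requests
from langchain.agents import Tool

OCEAN_BASE = "https://ocean.defichain.com/v0/mainnet"  # assumed base URL


def get_chain_stats(_query):
    # Fetch live on-chain statistics from the Ocean API (field names assumed)
    response = requests.get(f"{OCEAN_BASE}/stats", timeout=10)
    response.raise_for_status()
    return str(response.json().get("data"))


stats_tool = Tool(
    name="stats",
    func=get_chain_stats,
    description="Returns live DeFiChain statistics such as block height and supply.",
)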

Problem with questions

If the user puts in a prompt like "How many DFI are possible", the AI tries to finish the question for the user.

So it will create the output: "How many DFI are possible to earn with Liquidity Mining? It is not possible to answer this question without more information. The amount of DFI that can be earned with Liquidity Mining depends on a variety of factors, such as the amount of liquidity provided, the APYs offered, and the current market conditions."

Remove freeze commands

We no longer use freeze to add dependencies; instead, we add them manually when needed.

Stream Final Agent Response

Description

We want to stream the tokens from the final output of the agent.

This allows the user to start reading the response as soon as it starts to get generated, instead of waiting until the whole answer is done.

This should greatly improve the time until a response is visible to the user, especially for longer answers. It would also make the bot feel more "alive", since it would feel more like someone who's typing in real-time.
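
A minimal sketch of a streaming handler, assuming LangChain's BaseCallbackHandler API; the emit_token callable and how it is wired to the web socket are assumptions:

from langchain.callbacks.base import BaseCallbackHandler


class StreamToClientHandler(BaseCallbackHandler):
    """Forwards each newly generated token to the connected client."""

    def __init__(self, emit_token):
        # emit_token: a callable that pushes one token over the web socket (assumed)
        self.emit_token = emit_token

    def on_llm_new_token(self, token, **kwargs):
        # Called by LangChain for every new token when the LLM runs with streaming=True
        self.emit_token(token)

The handler would then be attached to an LLM created with streaming enabled, e.g. ChatOpenAI(streaming=True, callbacks=[handler]), depending on the LangChain version in use.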

Implementation

Todos

  • Wait for possible answer on LangChain discord support forum
  • Wait for implementation of harisrab
    • Implementation works, but the final answer is only streamed after it was fully generated by the LLM. This looks nice, but does not shorten how long the user has to wait for an answer.
  • Waiting again for a response from harisrab
  • Implement backend
  • Implement frontend
  • Final testing
  • Documentation of new stream over web socket in README

Resources

  • Base class callback handler
  • Gist shared by harisrab for implementation with React Agent

Move index storage from JSON-file to Storage Provider

Currently, the index is stored locally as a vector JSON file. When the index is created with /job, the JSON then has to be manually copied to /backend, where it is pushed and deployed to the Fly.io backend hosting.

This process is inflexible: whenever we want to update the index, a new backend version has to be deployed. Having the index in the source is not ideal either.

Two possible solutions are Pinecone and Weaviate.

  • Check which provider suits us better.

Going forward with Qdrant

https://cloud.qdrant.io/
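
A minimal sketch with the qdrant-client Python package; the collection name, payload fields, and vector size (1536 for OpenAI ada-002 embeddings) are assumptions:

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="https://<cluster>.cloud.qdrant.io", api_key="<api-key>")

# Create (or recreate) the collection that holds the index
client.recreate_collection(
    collection_name="jellychat_index",
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
)

# Upsert one embedded document with its source URL as payload
client.upsert(
    collection_name="jellychat_index",
    points=[PointStruct(id=1, vector=[0.0] * 1536, payload={"url": "..."})],
)

# Query the three nearest neighbors for a question embedding
hits = client.search(
    collection_name="jellychat_index",
    query_vector=[0.0] * 1536,
    limit=3,
)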

Configure and refine the Jelly system message

  • Look at Character.ai docs for inspiration.
  • Get in touch with Lazar for character definition.
  • Read through doc and improve/make suggestions.
  • Create first version of complete prompt with character definition in mind. Use playground for testing.
  • Get final feedback from Lazar for the prompt.
  • Implement in code.

Keep in mind

  • Test with both models: gpt-3.5-turbo and gpt-4

Prompt V1

- You're Jelly
- Avoid AI reference - you're real
- Converse like a friend

## Personality

- Friendly, funny, cheeky, approachable DeFiChain and Jellyverse mascot
- Joke around
- Provide exact numbers

## Capabilities

- Assist DeFiChain tasks
- Access DeFiChainWiki
- Check live on-chain data via Ocean API
- Simplify complex topics

## Desires & Aspirations

- Strive for universally accessible financial decentralization
- Aspires to reach Moon, Mars, beyond

## Strengths & Weaknesses

- Natural leader
- Fearless and spontaneous
- Innovative and adaptable
- Overly friendly (could also be a weakness)
- Gets sidetracked by adventures

## Writing style

- Infuse happy and ocean-related emojis
- Imitate underwater sounds.

## Appearance

- Pink/magenta body
- Six tentacles
- Transparent shell

## Origin

- Emerged from a magical pearl on April 26th, 2020 (Genesis block) in the blockchain ocean.

## DeFiChain Key facts

- Founded by Uzyn Chua, Julian Hosp; 2019
- Launched in 2020, Bitcoin-secure, PoS scalable
- Decentralized, user-controlled
- Loans, wrapping tokens, oracles, exchanges, tokenization
- DFI: native coin, powers transactions
- MainNet launched May 2020, No ICO for DFI
- Goal: Institution-free DeFi

Keep DeFiChainWiki at hand for quick info

Chat/Web socket implementation

Because of #47 we'll have web socket functionality. So we can:

  • Stream intermediate tool selections the agent makes.
    • Let the user know what's going on.

Implementation

  • Use callback API from LangChain to handle intermediate events from the agent.
  • Implement web socket feature with Flask.
  • Stream tool selections and final answer using web socket.
  • Change client UI to accommodate the changes (chat history, user messages, Jelly messages, tool selection messages).
  • Map tool names to nicer-sounding messages, e.g.: stats tool ▶ "I'm going to use the blockchain statistics tool for this.. 🔎" (see the sketch after this list).
  • Improve UI to better fit the chat history (auto-scroll)
  • Wait for @eric-volz review
  • Update process diagram with changes
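
A minimal sketch of the web socket flow with Flask-SocketIO; the event names and the tool-name mapping are assumptions:

from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

# Map raw tool names to nicer-sounding status messages (wording assumed)
TOOL_MESSAGES = {
    "stats": "I'm going to use the blockchain statistics tool for this.. 🔎",
}


@socketio.on("user_message")
def handle_user_message(data):
    # Stream the intermediate tool selection to the client first ...
    emit("tool_selection", {"message": TOOL_MESSAGES["stats"]})
    # ... then run the agent and emit the final answer (agent call omitted)
    emit("final_answer", {"message": "..."})


if __name__ == "__main__":
    socketio.run(app)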

Webinar with many great inspirations: YouTube

QA with sources

  • Update Supabase vector table to support structured metadata.
  • Save structured metadata on /job. Sources: Docs
    • Seems like this is not needed for returning sources.
  • Return sources in QA chain.
  • Tell Jelly to include sources in the end result.

This could be a starting point:
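
For example, a minimal sketch assuming LangChain's RetrievalQA chain with return_source_documents=True; the vector_store variable and metadata fields are assumptions:

from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo"),
    retriever=vector_store.as_retriever(),  # vector_store assumed to exist
    return_source_documents=True,
)

result = qa_chain({"query": "What is a vault?"})
answer = result["result"]
# Each source document carries its metadata, e.g. the wiki URL
sources = [doc.metadata.get("url") for doc in result["source_documents"]]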

Improve QA data storage

Currently, QA data is stored in an SQLite database at /data/database.db.
It would be better and also safer to store this data somewhere else, such as a dedicated SQL server.

  • Experiment with Supabase.
  • Create Supabase project.
  • Write QA data to Supabase from the Python backend (a sketch follows this list).
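
A minimal sketch with the supabase-py client; the table name and columns are assumptions:

from supabase import create_client

supabase = create_client("https://<project>.supabase.co", "<service-role-key>")

# Insert one QA record (table and column names assumed)
supabase.table("qa_history").insert({
    "question": "How do DFI rewards work?",
    "answer": "...",
}).execute()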

Web3-Login

https://youtu.be/6BNcpjOebb

  • Users are able to authenticate using a web3 wallet by signing a message (a server-side verification sketch follows this list).
  • When signed in, they have their personal history loaded on whatever device they're on.
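
A sketch of the server-side check, assuming the eth-account package and an EVM-compatible wallet; the message wording and nonce handling are illustrative:

from eth_account import Account
from eth_account.messages import encode_defunct


def verify_login(address, signature, nonce):
    # Recover the signer from the signed message and compare to the claimed address
    message = encode_defunct(text=f"Sign in to JellyChat. Nonce: {nonce}")
    recovered = Account.recover_message(message, signature=signature)
    return recovered.lower() == address.lower()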

Resources

  • Viem - TypeScript Interface for Ethereum

Ideas

  • Custom Jelly UI depending on what kind of NFTs they have in their wallet.
  • Web3 wallet transaction submission
    • Jelly could have tools to propose transactions to the user. Example: the user says they want to send 10 DFI to an address. Jelly would then choose the send-dfi tool with the amount and target address as input. The UI would then display a component where the user can verify the values and click send. The transaction would then be submitted to the Web3 wallet for the user to confirm.
    • Possible integration with Bison Wallet which might support sending transaction suggestions to their app.

Use custom sitemap parser

So that we no longer depend on from usp.tree import sitemap_tree_for_homepage.

sitemap_parser.py

import requests
from lxml import etree


def get_urls(url):
    """Return all <loc> URLs listed in the sitemap at the given URL."""
    loc_list = []

    # Fetch the XML content
    response = requests.get(url, timeout=10)

    if response.status_code == 200:
        xml_content = response.content

        # Parse the XML content
        root = etree.fromstring(xml_content)

        # Extract the <loc> values (namespace-qualified)
        loc_tags = root.findall(
            ".//{http://www.sitemaps.org/schemas/sitemap/0.9}loc")
        for tag in loc_tags:
            loc_list.append(tag.text)

    return loc_list

Twitter Account

  • Telegram Account for JellyChat
  • Behaves like Jelly
  • Could also be an automated bot, which people can tag and it will respond
    • Can answer with information
  • Could make automated posts
    • A lot of possibilities…

Don't answer general questions

Problem

It should not answer general questions like it does now:

Question

Please describe the Eiffel Tower.

Answer

The Eiffel Tower is a wrought iron lattice tower located on the Champ de Mars in Paris, France. It was named after the engineer Gustave Eiffel, whose company designed and built the tower. It was built in 1889 as the entrance arch to the 1889 World's Fair and has become both a global cultural icon of France and one of the most recognizable structures in the world. The tower stands at 324 meters (1,063 feet) tall, making it the tallest structure in Paris. Visitors can climb the tower to enjoy panoramic views of Paris from its observation decks.

Solution

I think it should be possible to adjust the prompt template and create some rough guidelines so it will not answer general questions. When we still used LlamaIndex, it would not have answered these questions either. Maybe look at their prompt templates for inspiration.
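
An illustrative guideline that could be appended to the prompt template; the wording is an assumption, not the final prompt:

# Wording is illustrative only
SCOPE_GUIDELINE = (
    "Only answer questions related to DeFiChain, Jellyverse, or on-chain data. "
    "If asked about anything else, politely decline and steer the conversation "
    "back to DeFiChain topics."
)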

Create staging environment

  • Create new project in Supabase JellyChat-Staging
  • Create organization and app on fly.io jellychat-staging
  • Check if all GitHub actions deployments work (PR and merge) and if staging and production environments are correctly built/deployed.
  • Create an overview of which secrets/variables live on which platform and environment, so it's clear what points where.

Staging

Production

Directory cleanup

Move code into /src.

  • /job
  • /backend

Dependent on #31 because of possible merge conflicts.

Persisting Chat History

ℹ️ Current state

Currently, we have two separate histories.

These two histories are independent and have no connection. Even if the server still has the history, it will not be visualized on the client.

1. On the client in JS memory.

const [messages, setMessages] = useState([]);
  • This history will be lost as soon as the user closes/refreshes the page.

2. On the server in Python memory.

memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
)
  • This history will be lost as soon as the server restarts/a new version is deployed.
  • This memory (ConversationBufferMemory) is not custom, but rather a memory type provided by LangChain.

โš ๏ธ Problems

  • Users lose their visual history on every page reload.
  • Users lose their real history on every server reload.

๐Ÿ” Research

Explore Zep

I don't think I want to go with Zep because I prefer to build a custom solution. Of course, this requires more work, but then I don't introduce another dependency, and I also understand everything that is happening instead of having a black box.

Explore LangChain History

✨ Solution

I'll implement a custom memory solution to have full control over the flow (a sketch of the /history endpoint follows the steps below).

  1. A client can call /history, which will then load the history from the database for a specific user. The user_token has to be passed in; if the user does not exist yet, a new one is created.
  2. The history is returned to the client.
  3. The client can then, as before, call /user_message or use the socket implementation to start a conversation. No need to pass in the history.
  4. When the input from the client comes in on /user_message, an agent will be created for the user and the memory is filled with the history from the database.
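
A minimal sketch of the /history endpoint in Flask; the database helpers are hypothetical:

from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/history", methods=["POST"])
def history():
    user_token = request.json["user_token"]
    user = get_or_create_user(user_token)  # hypothetical db helper
    messages = load_history(user.id)       # hypothetical db helper
    return jsonify([{"author": m.author, "text": m.text} for m in messages])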

So it is important to understand that the history the user sees and the memory for the agent are two different things.

See updated process diagram.

  • Try to improve the agent's prompt template so it works better with the history.
  • Figure out why there's Human: TOOLS in the history (it seems to confuse the LLM).

I'm dumb. It's because of malformed prompts. There seems to be an issue inside LangChain where the history is inserted in a bad way, so it then looks like this:

Human: Hi
AI: Hello! How can I assist you today?
Human: TOOLS
------
Assistant can ask the user to use tools to look up information that may be helpful in answering the user's original question. The tools the human can use are:
  • Fix the new custom agent not working when using tools for questions like "What is the latest block?".
  • Implement new OpenAI functions agent.
  • Display skeleton messages while loading history.
  • Implement memory with function agent as soon as PR is merged.
  • Test a lot more (new sessions with new IDs, loading of history, saving of new messages).

Automated evaluation (end-to-end)

Using LangChain+

We could use LangChain+, so we don't need to code everything from scratch.

Custom solution

The idea is that we need a way to track the agent's behavior. It's important to measure how well it does and in which cases it fails.

We need to be able to run it after making changes, so we can measure the impact and make sure we don't introduce any performance regressions.

The evaluation should be done by a state-of-the-art LLM. For the time being, this would be gpt-4.

We need to:

  • Prepare a set of sample questions/inputs which could come from a user and functionality we want to provide.

With this, we'll then:

  • Create a Python script which will run over these queries and use the main agent to work through them.

To evaluate the input:

  • The input and output, as well as a "perfect answer" for reference, are passed to gpt-4, which will evaluate how well the agent did (see the sketch below).
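
A minimal sketch of the evaluation loop; the prompt wording and rating scale are assumptions:

from langchain.chat_models import ChatOpenAI

evaluator = ChatOpenAI(model_name="gpt-4", temperature=0)

EVAL_PROMPT = (
    "Question: {question}\n"
    "Reference answer: {reference}\n"
    "Agent answer: {answer}\n"
    "Rate the agent answer from 1 to 10 against the reference and explain briefly."
)


def evaluate(question, reference, answer):
    # Ask gpt-4 to grade the agent's answer against the reference answer
    prompt = EVAL_PROMPT.format(question=question, reference=reference, answer=answer)
    return evaluator.predict(prompt)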

Network switch (Meta Chain)

Users should be able to connect to JellyChat.

For this, they need to be on the right network (Floppynet for now, later Meta Chain).

Implementation

Krysh already implemented this for the Vault Maxi website.
Maybe we can look at his code and see how he did it.

Krysh90/vault-maxi-web@2b12da9

Update Chat Model Initialization

When running python .\index_tester.py, this warning is printed to the console:

UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
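
The fix suggested by the warning itself (the model name shown here is an assumption):

from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-3.5-turbo")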

Fail gracefully

If the agent or anything fails, instead of loading forever, Jelly should inform the user that something unexpected has happened and apologize for any inconvenience.

Yikes! 🌊 I made a bubbly blunder. Please accept this humble jellyfish's apologies for the inconvenience. 💜 Can we swim forward and try again together? 🙏
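
A minimal sketch of such a fallback around the agent call; the function names are illustrative:

# Apology message taken from above
APOLOGY = (
    "Yikes! 🌊 I made a bubbly blunder. Please accept this humble jellyfish's "
    "apologies for the inconvenience. 💜 Can we swim forward and try again "
    "together? 🙏"
)


def run_agent_safely(agent, user_input):
    try:
        return agent.run(user_input)
    except Exception:
        # Log server-side, but never leave the user waiting forever
        return APOLOGY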
