.NET C# programmer here who has shifted to modern web app development using Blazor.
I develop Single Page Applications (SPA) with Blazor.
![dotnet](./media/dotnet.png)
![CSharp](./media/CSharp.png)
![Blazor](./media/Blazor.png)
![Webflow](./media/Webflow.png)
![GitHub](./media/GitHub.png)
![Visual Studio Code](./media/VSCode.png)
![Visual Studio](./media/VS.png)
![Azure DevOps](./media/AzureDevOps.png)
![git](./media/git.png)
![SQL](./media/SQL.png)
![Html](./media/Html.png)
![CSS](./media/CSS.png)
![WinUI](./media/WinUI.png)
🪼 AI chatbot for the DeFiChain ecosystem.
Home Page: https://defichainwiki.com/jellychat
License: MIT License
Use LangChain agents and tools to give the LLM access to live on-chain data via the Ocean API.
We use `gpt-3.5-turbo` for the agent instead of `text-davinci-003` to drive API costs down.
If the user enters a prompt like "How many DFI are possible", the AI tries to finish the question for the user.
So it will create the output: "How many DFI are possible to earn with Liquidity Mining? It is not possible to answer this question without more information. The amount of DFI that can be earned with Liquidity Mining depends on a variety of factors, such as the amount of liquidity provided, the APYs offered, and the current market conditions."
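As a sketch, the question-completion step could be a single chat call whose system prompt asks the model to finish an unfinished question before answering it. The prompt wording and helper name here are my assumptions, not the project's actual prompt:

```python
# Sketch: build the messages for a chat-completion call that first
# finishes the user's partial question and then answers it.
# The system-prompt wording is an assumption, not the real prompt.
def build_completion_messages(partial_question: str) -> list[dict]:
    return [
        {"role": "system",
         "content": ("If the user's question looks unfinished, complete "
                     "it in their voice first, then answer it.")},
        {"role": "user", "content": partial_question},
    ]
```

These messages would then be sent to `gpt-3.5-turbo` via the chat completions endpoint.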
Because we no longer use `freeze` to add dependencies; instead, we add them manually when needed.
Test and maybe add to Readme.
Because we want to keep supporting a REST endpoint, we still need to find a solution to this problem.
Read more details in Part 1: #47
We want to stream the tokens from the final output of the agent.
This allows the user to start reading the response as soon as it starts to get generated, instead of waiting until the whole answer is done.
This should greatly improve the time until a response is visible to the user, especially for longer answers. It would also make the bot feel more "alive", since it would feel more like someone typing in real time.
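A minimal sketch of streaming only the final answer via a custom callback (class name and the `Final Answer:` marker are assumptions, modeled on LangChain's `on_llm_new_token` hook, not its actual internals):

```python
# Sketch: stream only the tokens that come after the agent's
# final-answer marker; everything before it stays buffered.
FINAL_MARKER = "Final Answer:"

class FinalAnswerStreamer:
    """Buffers tokens until FINAL_MARKER appears, then forwards every
    subsequent token to `send` (e.g. a websocket emit)."""

    def __init__(self, send):
        self.send = send
        self.buffer = ""
        self.streaming = False

    def on_llm_new_token(self, token: str) -> None:
        if self.streaming:
            self.send(token)
            return
        self.buffer += token
        if FINAL_MARKER in self.buffer:
            # Flush whatever already followed the marker, then go live.
            self.send(self.buffer.split(FINAL_MARKER, 1)[1])
            self.streaming = True
```

With `send` wired to the socket, the client would receive only the answer text, token by token.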
`on_llm_new_token` alone does not work, because all sorts of output are then streamed. `FinalStreamingStdOutCallbackHandler` does not work either, because it was written for the `ZERO_SHOT_REACT_DESCRIPTION` agent.

Currently, the index is stored locally as a vector JSON file. So when the index is created with `/job`, the JSON then has to be manually copied to `/backend`, from where it is pushed and deployed to the Fly.io backend hosting.
This process is very inflexible: to update the index, a new backend version has to be deployed. Having the index in the source repository is not optimal either.
Two possible solutions are Pinecone or Weaviate.
Change `/job` to store the generated index on Qdrant. https://gpt-index.readthedocs.io/en/latest/how_to/vector_stores.html

Because fly.io has a 60-second idle timeout, some questions fail.
After 60 seconds, fly.io just returns an HTTP 500 error.
https://community.fly.io/t/60-second-timeout-for-http-server/1243
Instead of using HTTP REST, we'll use a WebSocket to work around the timeout.
Since Supabase has now improved its support for all kinds of vector-storage tasks, it would make sense to migrate from Qdrant. In my opinion this would be a wise decision, because we would then have fewer dependencies.
Announcement Post: https://twitter.com/supabase/status/1663984066172006400
Implement callbacks improvements.
Try contextual compression to improve information gathering in the Wiki.
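A toy sketch of what contextual compression does: after retrieval, keep only the chunks that are actually relevant to the question. Real implementations (e.g. LangChain's `ContextualCompressionRetriever`) use an LLM or embeddings for the relevance check; plain term overlap here is just to illustrate the idea:

```python
# Toy contextual compression: keep only retrieved chunks that share
# enough terms with the question. Stand-in for an LLM/embedding filter.
def compress(question: str, chunks: list[str], min_overlap: int = 2) -> list[str]:
    q_terms = set(question.lower().split())
    return [c for c in chunks
            if len(q_terms & set(c.lower().split())) >= min_overlap]
```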
Initial Tweet
Blog
https://blog.langchain.dev/auto-eval-of-question-answering-tasks/
Repo
Very nice project, but not exactly what I was looking for. Skipping for now.
`gpt-3.5-turbo` and `gpt-4`
- You're Jelly
- Avoid AI reference - you're real
- Converse like a friend
## Personality
- Friendly, funny, cheeky, approachable DeFiChain and Jellyverse mascot
- Joke around
- Provide exact numbers
## Capabilities
- Assist DeFiChain tasks
- Access DeFiChainWiki
- Check live on-chain data via Ocean API
- Simplify complex topics
## Desires & Aspirations
- Strive for universally accessible financial decentralization
- Aspires to reach Moon, Mars, beyond
## Strengths & Weaknesses
- Natural leader
- Fearless and spontaneous
- Innovative and adaptable
- Overly friendly (could also be a weakness)
- Gets sidetracked by adventures
## Writing style
- Infuse happy and ocean-related emojis
- Imitate underwater sounds.
## Appearance
- Pink/magenta body
- Six tentacles
- Transparent shell
## Origin
- Emerged from a magical pearl on April 26th, 2020 (Genesis block) in the blockchain ocean.
## DeFiChain Key facts
- Founded by Uzyn Chua, Julian Hosp; 2019
- Launched in 2020, Bitcoin-secure, PoS scalable
- Decentralized, user-controlled
- Loans, wrapping tokens, oracles, exchanges, tokenization
- DFI: native coin, powers transactions
- MainNet launched May 2020, No ICO for DFI
- Goal: Institution-free DeFi
Keep DeFiChainWiki at hand for quick info
Because of #47 we'll have WebSocket functionality. So we can, for example, stream a status message while the agent uses the stats tool:

▶ I'm going to use the blockchain statistics tool for this...
Webinar with many great inspirations: YouTube
Currently, it looks like LangChain is writing directly to Supabase. But this is not true.
The Flask app is saving the data to Supabase.
Add more tools which provide on-chain data via the Ocean API.
Implementing this using the DefichainPython library by @eric-volz.
Maybe something to consider: langchain-ai/langchain#5050
/job
. Sources: Docs
This could be a starting point:
Currently, QA data is stored in an SQLite database at `/data/database.db`.
It would be better and also safer to store this data somewhere else, maybe a dedicated SQL server.
Add a `send-dfi` tool with amount and target-address input. The UI would then display a component where the user can verify the values and click *send*. The transaction would then be submitted to the Web3 wallet for the user to confirm.

To no longer depend on `from usp.tree import sitemap_tree_for_homepage`:
`sitemap_parser.py`:

```python
import requests
from lxml import etree


def get_urls(url):
    """Return all <loc> URLs from the sitemap at `url`."""
    loc_list = []
    # Fetch the XML content
    response = requests.get(url)
    if response.status_code == 200:
        xml_content = response.content
        # Parse the XML content
        root = etree.fromstring(xml_content)
        # Extract the <loc> values
        loc_tags = root.findall(
            ".//{http://www.sitemaps.org/schemas/sitemap/0.9}loc")
        for tag in loc_tags:
            loc_list.append(tag.text)
    return loc_list
```
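For testing, the parsing step can be separated from the fetch. A sketch using only the standard library's `xml.etree` (instead of lxml), so it runs without network access or extra dependencies:

```python
# Sketch: parse sitemap XML bytes directly, no HTTP fetch involved.
import xml.etree.ElementTree as ET

NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def parse_sitemap(xml_content: bytes) -> list[str]:
    """Extract all <loc> URLs from sitemap XML."""
    root = ET.fromstring(xml_content)
    return [tag.text for tag in root.findall(f".//{NS}loc")]
```

`get_urls` could then be reduced to a fetch followed by `parse_sitemap(response.content)`.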
It should not answer general questions like it does now:
Question
Please describe the Eiffel Tower.
Answer
The Eiffel Tower is a wrought iron lattice tower located on the Champ de Mars in Paris, France. It was named after the engineer Gustave Eiffel, whose company designed and built the tower. It was built in 1889 as the entrance arch to the 1889 World's Fair and has become both a global cultural icon of France and one of the most recognizable structures in the world. The tower stands at 324 meters (1,063 feet) tall, making it the tallest structure in Paris. Visitors can climb the tower to enjoy panoramic views of Paris from its observation decks.
I think it should be possible to adjust the prompt template and create some rough guidelines so it will not answer general questions. When we still used LlamaIndex, it would not have answered these questions. Maybe look at their prompt templates for inspiration.
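A rough guideline could be prepended to the system prompt. The wording below is only a sketch of the idea, inspired by LlamaIndex-style refuse-if-out-of-scope rules, not a tested prompt:

```python
# Sketch of a scope guardrail for the prompt template.
# The wording is an assumption and would need tuning.
GUARDRAIL = (
    "Only answer questions related to DeFiChain, its ecosystem, or "
    "on-chain data. If the question is unrelated (for example general "
    "knowledge like landmarks or history), politely decline and steer "
    "the conversation back to DeFiChain topics."
)
```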
Currently, we use an Agent as the top layer, which decides on what tools it should use.
We could maybe bring cost/delay down by using embeddings for this step.
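A sketch of embedding-based tool routing: embed each tool's description once, embed the incoming question, and pick the tool with the highest cosine similarity, skipping the agent's LLM call for this step. Tool names are hypothetical and `embed()` stands in for any embedding model:

```python
# Sketch: pick a tool by embedding similarity instead of an LLM call.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def route(question_vec, tool_vecs):
    """Return the tool whose description embedding is closest."""
    return max(tool_vecs, key=lambda name: cosine(question_vec, tool_vecs[name]))
```

The trade-off: one cheap embedding call replaces an agent reasoning step, but we lose the agent's ability to chain several tools for one question.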
Explore Development Containers for a better Developer Experience and reliability.
JellyChat-Staging
jellychat-staging
Rework Readme.md with the current state of the project.
A PR is already open in the LangChain repo which should fix this:
- PR: langchain-ai/langchain#5037
- My comment: langchain-ai/langchain#5037 (comment)
Use the early stopping method `generate`:

```python
max_iterations=2,
early_stopping_method="generate",
```
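What `early_stopping_method="generate"` buys us, sketched in plain Python (hypothetical `step`/`finalize` callables, not LangChain's actual internals): after the iteration limit, the agent makes one last LLM pass to produce a best-effort answer from its scratchpad instead of returning a generic stop message.

```python
# Sketch of the "generate" early-stopping behaviour.
def run_agent(step, finalize, max_iterations=2):
    """step(scratchpad) returns ("final", answer) or ("tool", observation);
    finalize(scratchpad) asks the LLM for a best-effort final answer."""
    scratchpad = []
    for _ in range(max_iterations):
        kind, value = step(scratchpad)
        if kind == "final":
            return value
        scratchpad.append(value)
    # early_stopping_method="generate": one extra LLM pass at the end
    return finalize(scratchpad)
```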
Currently, we have two separate histories:

1. On the client, in JS memory:

```js
const [messages, setMessages] = useState([]);
```

2. On the server, in Python memory:

```python
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
)
```

These two histories are independent and have no connection. Even if the server still has the history, it will not be visualized on the client.
Explore Zep
I don't think I want to go with Zep, because I prefer to build a custom solution. Of course this requires more work, but then I don't introduce another dependency, and I understand everything that is happening instead of having a black box.
Explore LangChain History
I'll implement a custom memory solution to have full control over the flow.
- `/history` will load the history from the DB for a specific user. The `user_token` has to be passed in; if it does not exist yet, a new user is created.
- Call `/user_message` (or use the socket implementation) to start a conversation. There is no need to pass in the history.
- On `/user_message`, an agent is created for the user and its memory is filled with the history from the database.

So it is important to understand that the history the user sees and the memory for the agent are two different things.
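The create-on-first-contact part of the `/history` flow could be sketched like this (an in-memory dict stands in for the real database; names are hypothetical):

```python
# Sketch of the /history lookup: create the user on first contact,
# otherwise return the stored messages. A dict stands in for the DB.
users: dict[str, list[str]] = {}

def get_history(user_token: str) -> list[str]:
    if user_token not in users:
        users[user_token] = []  # new user, empty history
    return users[user_token]
```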
See updated process diagram.
`Human: TOOLS` appears in the history (this seems to confuse the LLM). I'm dumb, it's actually because of malformed prompts: there seems to be an issue inside LangChain where the history is inserted in a bad way, so it then looks like this:

```
Human: Hi
AI: Hello! How can I assist you today?
Human: TOOLS
------
Assistant can ask the user to use tools to look up information that may be helpful in answering the user's original question. The tools the human can use are:
What is the latest block?
```

We could use LangChain+, so we don't need to code everything from scratch.
The idea is that we need a way to track the agent's behavior. It's important to measure how well it does and in which cases it fails.
We need to be able to rerun the evaluation after making changes, to measure the impact and make sure we don't introduce any performance regressions.
The evaluation should be done by a state-of-the-art LLM. For the time being, this would be `gpt-4`.
We need to:
With this, we'll then:
Use the `main-agent` to work through those. To evaluate the input, we send it to `gpt-4`, which will evaluate how well it did.

User should be able to connect to JellyChat.
For this, users need to be on the right network (Floppynet for now and later Meta Chain).
Krysh already implemented this for the Vault Maxi website.
Maybe we can look at his code and see how he did it.
This could improve more complicated scenarios at the cost of tokens and time the user has to wait.
/backend
/job
Move hosting of Python backend from PythonAnywhere to fly.io.
https://www.youtube.com/watch?v=tuPmhciyfIA
https://fly.io/docs/languages-and-frameworks/dockerfile/
https://fly.io/docs/reference/secrets/#setting-secrets
Use HyDE for document retrieval.
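The core of HyDE (Hypothetical Document Embeddings) is to embed a *generated* hypothetical answer instead of the raw question, then search the index with that vector. A sketch where `generate()` and `embed()` stand in for the LLM and the embedding model (the prompt wording is an assumption):

```python
# Sketch of HyDE: search with the embedding of a hypothetical answer,
# which tends to be closer to real documents than the question itself.
def hyde_query_vector(question, generate, embed):
    hypothetical_doc = generate(
        f"Write a short passage answering: {question}"
    )
    return embed(hypothetical_doc)
```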
Use `.env.template` files, as seen here: https://github.com/solana-labs/chatgpt-plugin/blob/master/.env.template

On `python .\index_tester.py`, this warning is printed to the console:

```
UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
```
If the agent or anything fails, instead of loading forever, Jelly should inform the user that something unexpected has happened and apologize for any inconvenience.
Yikes! I made a bubbly blunder. Please accept this humble jellyfish's apologies for the inconvenience. Can we swim forward and try again together?
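The error handling could be as simple as wrapping the agent call so a failure sends the apology instead of leaving the user with an endless spinner (a sketch; `send` stands in for the websocket emit):

```python
# Sketch: any agent failure yields Jelly's apology, never a hang.
APOLOGY = ("Yikes! I made a bubbly blunder. Please accept this humble "
           "jellyfish's apologies for the inconvenience. "
           "Can we swim forward and try again together?")

def answer_safely(agent_run, question, send):
    try:
        send(agent_run(question))
    except Exception:
        send(APOLOGY)
```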
https://gpt-index.readthedocs.io/en/latest/how_to/custom_prompts.html