nlkitai / nlux

The Powerful Conversational AI JavaScript Library 💬 - UI for any LLM, supporting LangChain / HuggingFace / Vercel AI, and more 🧡 React, Next.js, and plain JavaScript ⭐️

Home Page: https://docs.nlkit.com/nlux

License: Other

TypeScript 93.10% CSS 5.31% JavaScript 1.58% Shell 0.01%
artificial-intelligence chatbot chatgpt huggingface javascript large-language-models llm openai reactjs vercel-ai-sdk

nlux's People

Contributors

aspirantzhang · dependabot[bot] · divyeshradadiya · eltociear · franciscomoretti · lragnarsson · marklysze · mattparlane · salmenus · somebodyawesome-dev


nlux's Issues

Expose snapshot markdown parser in markdown package

Hi!
First of all, great work on this library, I especially found use for the streaming markdown renderer.
I would also like to render non-streaming markdown text with the same look, and I see that there is a snapshotParser that might do this in the shared/src folder:
https://github.com/nlkitai/nlux/blob/latest/packages/shared/src/markdown/snapshot/snapshotParser.ts

Would it be possible to also export this functionality in the nlux/markdown package? I can try and submit a PR as well if you think this would be a good addition.

Thanks!

Submitting chat entries programmatically

Discussed in #58

Originally posted by wright-io April 22, 2024
I would like to be able to set a chat message and submit programmatically. The use case is that I want to load a conversation, including a next message, and then submit and allow the user to see the streaming response. Is this possible? Thanks!

Custom renderer investigation

Note

There is a lot of information below; sorry if it is a bit disordered. I update the issue whenever things get clearer in my mind, and I'll try to split it up too.

I'm currently using a custom streaming adapter (to get the typing effect) even though I receive a single (full) text message from my API.

Summary

  • Typing issue with the content prop (see #75)
  • Rendering differences between environments (see #74)
  • status: "streaming" never transitions to "complete"

Code sample

Click to see my code

import { StreamResponseComponentProps } from "@nlux/react";
import React from "react";

export function MyResponseResponseRenderer<AiMsg = string>(props: StreamResponseComponentProps<AiMsg>) {
  console.log(props);
  return (
    <div className="flex flex-col">
      <div ref={props.containerRef} />
      <div className="grid grid-cols-3">
        <button onClick={() => console.log("I like it!")}>👍</button>
        <button onClick={() => console.log("I love it!")}>❤️</button>
        <button onClick={() => console.log("I hate it!")}>😵</button>
      </div>
    </div>
  );
}
import type { ChatAdapter, StreamingAdapterObserver } from "@nlux/core";
import { sendMessage } from "../server/actions/sendMessage";
import { parseResponseMessageParsing } from "../utils/message-parsers";

export const MyAdapterBuilder = (characterID: string, conversationID: string): ChatAdapter => ({
  streamText: async (message: string, observer: StreamingAdapterObserver) => {
   
    const result = await sendMessage({
      character: characterID,
      conversation: conversationID,
      userInputMessage: message,
    });

    const parseResult = parseResponseMessageParsing(result.data);

    if (!result.serverError && parseResult.status === "success") {
      observer.next(parseResult.data.message);
      observer.complete();
    } else {
      observer.error(new Error(result.serverError || "Could not generate message"));
    }
  },
});

Attachments

  • Through initialConversation from NextJS React server component with use client directive.
    Screenshot 2024-06-05 at 13 33 28
  • Through adapter (full client side)
    Screenshot 2024-06-05 at 13 32 44

Typing issue

  • 1️⃣ The value is a string, but the content property is typed as an array.

Note

See #75

See the screenshot above.

type StreamResponseComponentProps<AiMsg> = {
    uid: string;
    dataTransferMode: 'stream';
    status: 'streaming' | 'complete';
    content?: AiMsg[];
    serverResponse?: unknown[];
    containerRef: RefObject<never>;
};

Custom response renderer.

  • With the response renderer defined in the sample code at the beginning of this post, I get no content with Next.js SSR (use client), which means the containerRef is not defined. That seems OK because of SSR. And if I use props.content (which is a string, not an array, as we saw), I get the plain markdown content.

What can we do?

Note

See #74

Tip

Here are some observations and suggestions about what we might do; feel free to share yours.

1. It's OK that server/initialConversation content is treated as dataTransferMode: "batch"; that way we can define different rendering code for server/initialConversation content. But we would need the default parser, otherwise messages are rendered as plain text rather than markdown. Or should I use my own markdown renderer?
2. For client/adapter messages (stream or otherwise), what can we do? Maybe exposing primitives and/or components for streaming messages, instead of relying on the containerRef, would be better.

💡 Briefly:
- Expose a DefaultStreamingRenderer component instead of containerRef to handle streaming mode when someone uses a custom renderer (more flexibility). This component would receive a content prop that is the same for streaming and batch.

- On the client side (after receiving a message from the adapter), containerRef is defined but content is empty.

status: "streaming" never go complete

When using the above code, status: "streaming" never transitions to "complete" for streaming messages.

Add content outside of the response textbox

Is it possible to add extra elements around the response box?

For me specifically, boxes for each of the sources that I'm getting back in my response would be great to present to the user.

As an example, LangChain's chatbot (chat.langchain.com) shows the sources in a response as separate boxes at the top of the response text, and these can be clicked on to access the source webpages.

I am planning to concatenate the sources to LangChain's response, e.g.:
"... response from LLM ...
["Source 1", "Document Name and link"]
["Source 2", "Document Name and link"]
"

I can put these as links in the response text, though it would be nice to be able to beautify these boxes with things like thumbnails, titles, and text.
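
If a custom response renderer is an option, a rough sketch in the spirit of the renderer shown earlier on this page could place source boxes above the streamed text. This assumes the adapter surfaces source metadata through serverResponse; the Source shape and the extraction below are illustrative, not part of the nlux API:

import { StreamResponseComponentProps } from "@nlux/react";
import React from "react";

type Source = { title: string; url: string };

export function ResponseWithSources<AiMsg = string>(props: StreamResponseComponentProps<AiMsg>) {
  // Assumption: the adapter placed the source list in serverResponse.
  const sources = (props.serverResponse?.[0] as { sources?: Source[] } | undefined)?.sources ?? [];

  return (
    <div>
      <div className="sources">
        {sources.map((source) => (
          <a key={source.url} href={source.url} target="_blank" rel="noreferrer">
            {source.title}
          </a>
        ))}
      </div>
      {/* nlux streams the (markdown-rendered) response text into this element */}
      <div ref={props.containerRef} />
    </div>
  );
}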

Error with Content-Security-Policy/Feature-Request

Hi,

we use the Content-Security-Policy header require-trusted-types-for 'script';, and we have no way to define a sanitizer for HTML.
Would it be possible to integrate this feature into your project?
Thanks in advance!
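
One page-level workaround, assuming the blocked operations are ordinary string-to-HTML assignments, is to register a default Trusted Types policy that routes them through a sanitizer. This is a sketch, not an nlux feature; DOMPurify is just one possible sanitizer:

import DOMPurify from "dompurify";

// Register a default Trusted Types policy so that innerHTML-style sinks are
// sanitized instead of being blocked by require-trusted-types-for 'script'.
const trustedTypes = (window as any).trustedTypes;
if (trustedTypes?.createPolicy) {
  trustedTypes.createPolicy("default", {
    createHTML: (input: string) => DOMPurify.sanitize(input),
  });
}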

Make defaultDelayInMsBeforeComplete configurable

Description

Currently, the stream is automatically marked as complete if no data is received for more than 2 seconds (defaultDelayInMsBeforeComplete). In some cases, the interval between data chunks can exceed this limit, causing the stream to be prematurely marked as complete.

https://github.com/nlkitai/nlux/blob/c564e4105bd99a399bf51b6fb9b8e1e3a704b9ac/packages/shared/src/markdown/stream/streamParser.ts#L85C1-L91C10

const defaultDelayInMsBeforeComplete = 2000;
...

        if (buffer.length === 0) {
            if (streamIsComplete || nowTime - parsingContext.timeSinceLastProcessing > defaultDelayInMsBeforeComplete) {
                completeParsing();
            }

            return;
        }

I believe this should be configurable to accommodate streams with longer gaps between data chunks.

I am working with Alibaba's Tongyi Qianwen model, and I have observed that the interval between data chunks can be 10 seconds or longer (possibly due to network issues). This causes the stream to be marked complete too early.

Suggestion

Can we introduce an option to configure this timeout? This flexibility would help prevent premature completion of streams in various use cases.
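
For illustration only, here is a self-contained sketch of the idea: make the idle delay a parameter instead of a module constant. The names mirror the excerpt above but are not the library's actual internals:

const defaultDelayInMsBeforeComplete = 2000;

type ParsingContext = { timeSinceLastProcessing: number };

// Hypothetical helper: callers could pass a longer delay for slow streams.
function shouldCompleteParsing(
    buffer: string[],
    streamIsComplete: boolean,
    nowTime: number,
    parsingContext: ParsingContext,
    delayInMsBeforeComplete: number = defaultDelayInMsBeforeComplete,
): boolean {
    return buffer.length === 0 && (
        streamIsComplete ||
        nowTime - parsingContext.timeSinceLastProcessing > delayInMsBeforeComplete
    );
}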

As I am not a professional frontend developer, I am unsure how to implement this change myself and would greatly appreciate assistance from others.

Thank you!

Update adapters to enable sending over the entire conversation

Discussed in #28

Originally posted by dragosmc February 9, 2024
Hi, I'm trying to set up nlux with OpenAI to mimic the ChatGPT flow, where GPT knows the previous conversation. As far as I can see, there is no way to include previous messages with the one being sent, or I didn't find this functionality.

I guess the idea would be to maintain a list of questions/answers and send the last n back to OpenAI.
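
Until the adapters support this directly, one possible workaround is to keep the history inside the adapter itself and send the last n turns with every request. A minimal sketch, assuming a hypothetical /api/chat proxy that forwards the messages to OpenAI and returns { reply }:

import type { ChatAdapter, StreamingAdapterObserver } from '@nlux/react';

type Turn = { role: 'user' | 'assistant'; content: string };

export const createHistoryAwareAdapter = (maxTurns = 10): ChatAdapter => {
  // The running conversation lives in the adapter's closure.
  const history: Turn[] = [];

  return {
    streamText: async (message: string, observer: StreamingAdapterObserver) => {
      history.push({ role: 'user', content: message });

      // Hypothetical backend endpoint; replace with your own proxy to OpenAI.
      const response = await fetch('/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ messages: history.slice(-maxTurns) }),
      });
      const { reply } = await response.json();

      history.push({ role: 'assistant', content: reply });
      observer.next(reply);
      observer.complete();
    },
  };
};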

Enable conversation starters

Add an option to allow developers to add conversation starters.

  • Option: conversationOptions.conversationStarters: ConversationStarter[]
  • Types:
type ConversationStarter = {
    prompt: string;
    label?: string;
    icon?: string | JSX.Element;
}
  • Behaviour:
  • When the conversation is empty (no history, no prompt submitted yet) and conversationStarters is provided, display a list of possible conversation starters matching the options provided (see the usage sketch after this list).
  • When the user clicks on one of the conversation starters, submit the value in the prompt attribute to the chat.
  • If the submission fails, re-display conversation starters.
  • If too many conversation starters are provided and cannot fit in the UI, activate horizontal scrolling.
  • Equivalent feature in ChatGPT UI:
    Image
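
Once implemented, usage could look like the sketch below; the prop names follow the spec above and should be treated as proposed rather than final, and the adapter import is a placeholder:

import { AiChat } from '@nlux/react';
import { myAdapter } from './myAdapter'; // placeholder for your adapter

export const ChatWithStarters = () => (
  <AiChat
    adapter={myAdapter}
    conversationOptions={{
      conversationStarters: [
        { prompt: 'What can you help me with?' },
        { prompt: 'Summarize my last report', label: 'Summarize' },
      ],
    }}
  />
);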

How patch/attach to the original AiMessageRenderer instead of building a new one

Hi, nice work as usual.

I am wondering how to patch/attach something to the AiMessageComponent already rendered by nlux instead of building a new one. Is this feasible? It would be great to have something like that; for example, I want to attach feedback buttons to the message, and rebuilding the whole AiMessageComponent is a drag.

Thanks in advance!

Using `React.memo` with a custom renderer crashes.

As we discussed on Discord https://discord.com/channels/1197831161938980945/1247893613229379645

Given this code

import { StreamResponseComponentProps } from "@nlux/react";
import React from "react";

function _CustomRenderer<AiMsg = string>(props: StreamResponseComponentProps<AiMsg>) {
  return (
    <div className="flex flex-col">
      <div ref={props.containerRef} />
      <div className="grid grid-cols-3">
        <button onClick={() => console.log("I like it!")}>👍</button>
        <button onClick={() => console.log("I love it!")}>❤️</button>
        <button onClick={() => console.log("I hate it!")}>😵</button>
      </div>
    </div>
  );
}

export const CustomRenderer = React.memo(_CustomRenderer);

gives TypeError: c.responseRenderer is not a function

I discovered this while trying to avoid getting one log line per item each time I press a key, while debugging the custom renderer props. I don't get why this is not treated as a plain React component?
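
A likely explanation is that responseRenderer is invoked as a plain function rather than rendered as a JSX element, and the object returned by React.memo is not callable, hence the TypeError. Assuming that is the cause, a possible workaround is to hand nlux a plain function component that renders the memoized one:

import { StreamResponseComponentProps } from "@nlux/react";
import React from "react";

function InnerRenderer(props: StreamResponseComponentProps<string>) {
  return <div ref={props.containerRef} />;
}

// Memoization happens one level down, so the value passed to nlux stays callable.
const MemoizedRenderer = React.memo(InnerRenderer);

export function CustomRenderer(props: StreamResponseComponentProps<string>) {
  return <MemoizedRenderer {...props} />;
}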

When trying to implement this in Next.js, I'm getting a "too many requests" error.

Ai wrapper

import React from "react";
import { AiChat } from "@nlux/react";
import { useAdapter } from "@nlux/openai-react";
import "@nlux/themes/nova.css";

interface OpenAIAdapterProps {
  show: boolean;
  temperature?: number;
  // Include other props as needed
}

export const OpenAIAdapter: React.FC<OpenAIAdapterProps> = ({
  show,
  temperature,
}) => {
  // Config should use the props if needed
  const adapterConfig = {
    apiKey: "key",
    systemMessage:
      "Give sound, tailored financial advice. Explain concepts simply. " +
      "Write concise answers under 5 sentences. Be funny.",
    // Use temperature or other props in config if applicable and supported
  };

  const chatGptAdapter = useAdapter(adapterConfig);

  // Corrected conditional rendering
  return show ? (
    <AiChat
      adapter={chatGptAdapter}
      promptBoxOptions={{ placeholder: "How can I help you today?" }}
    />
  ) : null;
};

export default OpenAIAdapter;

Rendering Modal


'use client'
import React, { useState } from "react";
import Image from "next/image";
import Modal from "../shared/modal";
import OpenAIAdapter from "../openAi/nlux";

function ContactButton() {
  const [show, setShow] = useState(false);

  const handleButtonClick = () => {
    setShow(true);
  };

  return (
    <>
      <button
        onClick={handleButtonClick}
        className="peer fixed bottom-20 right-5 z-[100] flex h-16 w-16 items-center justify-center rounded-full shadow-lg duration-75 hover:shadow-xl hover:transition-all lg:bottom-20 lg:right-5"
      >
        <Image
          className="object-contain"
          src={"/images/chat-button.png"}
          alt="chat button"
          width={60}
          height={60}
        />
      </button>
      <Modal setShowModal={setShow} showModal={show}>
        <OpenAIAdapter show={show} />
      </Modal>
    </>
  );
}

export default ContactButton;

image

Error in console

Request URL: https://api.openai.com/v1/chat/completions
Request Method: POST
Status Code: 429 Too Many Requests
Remote Address:
Referrer Policy: strict-origin-when-cross-origin

POST https://api.openai.com/v1/chat/completions 429 (Too Many Requests)

[FEATURE REQUEST] Lazy loaded history

I want to be able to lazy load my history.

Currently, I need to do a hacky trick with a react key as initialConversation is not reactive.

It would be nice to load very long conversations in a cleaner way than just loading 5000 messages in a row for the history.

Custom renderer `content` prop is defined as an Array but is a string

Following up on what we discussed in #73, and in the same vein as #74.

Note

Suggestion:
It should always be an array, with one element (batch/streaming) or more (streaming).

Currently we have

type StreamResponseComponentProps<AiMsg> = {
    uid: string;
    dataTransferMode: 'stream';
    status: 'streaming' | 'complete';
    content?: AiMsg[];
    serverResponse?: unknown[];
    containerRef: RefObject<never>;
};

And if we look at the payload it's a string

image

[Documentation] - example of using nlux with on device browser llm

Looking at the "in preview" documentation for in-browser LLMs, I thought it might be worth adding a basic example of how to use one.

Once on-device LLMs are generally available, I think it makes sense to add a basic example like:

import {ChatAdapter, StreamingAdapterObserver} from '@nlux/react';

// A demo endpoint by NLUX that connects to OpenAI
// and returns a stream of Server-Sent Events
// (not used in the on-device example below)
const demoProxyServerUrl = "https://demo.api.nlux.ai/openai/chat/stream";

export const streamAdapter: ChatAdapter = {
    streamText: async (
        prompt: string,
        observer: StreamingAdapterObserver,
    ) => {
        const canCreate = await window.ai.canCreateTextSession();
        // canCreate will be one of the following:
        // * "readily": the model is available on-device and so creating will happen quickly
        // * "after-download": the model is not available on-device, but the device is capable,
        //   so creating the session will start the download process (which can take a while).
        // * "no": the model is not available for this device.

        if (canCreate !== "no") {
            const session = await window.ai.createTextSession();

            // Prompt the model and stream the result:
            const stream = session.promptStreaming(prompt);
            for await (const chunk of stream) {
                observer.next(chunk);
            }
        }

        observer.complete();
    },
};

adapter memory

Hello,

I can't figure out how to add memory to my custom adapter. My custom adapter uses the Ollama chat API to serve phi3:3.8b locally.

I store message memory using messageReceivedCallback and messageSentCallback, but I just don't see how to pass this to the adapter...

I guess my problem really comes from a lack of understanding of what react/js actually does, or that I should use a ChatAdapter somehow, but any hint to put me on the right path would be much appreciated!

import {AiChat} from '@nlux/react';
import '@nlux/themes/nova.css';

const personaOptions = {assistant: {name: 'DatAI'},user: {name: 'Me'}};

const message = [
  {
    role: 'system',
    message: "You are an expert in data analysis, making sense of unstructed data."
  },
  {
    role: 'user',
    message: "Hi, I need your insights on the data I have collected." 
  },
  {
    role: 'assistant',
    message: "Sure, let see what sense we can make of these data!"
  }
]

const streamAdapter = {
  streamText: async (prompt, observer) => {
    const message = [];
    message.push({"role": "user","content": prompt,})
    const chatURL = 'http://localhost:11434/api/chat';
    const response = await fetch(chatURL, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
            'model': 'phi3:3.8b',
            'messages':message,
            'stream':true,
            'temperature':0.0,
        })
      })  

    if (response.status !== 200) {observer.error(new Error('Failed to connect to the server'));return;}
    if (!response.body) {return;}

    const reader = response.body.getReader();
    const textDecoder = new TextDecoder();
    let doneReading = false;

    while (!doneReading) {
      const { value, done } = await reader.read();
      if (done) {doneReading = true;continue;}

      const content = textDecoder.decode(value);
      if (content) {
        try {
            observer.next(JSON.parse(content).message.content);
            message.push({
                "role": "assistant",
                "content": JSON.parse(content).message.content,
            })
        }
        catch (error) {observer.next('--');console.log(error);}      
      }
    }
    observer.complete();
  }
};

const messageReceivedCallback = (ev) => {
  message.push({"role":"assistant","message": ev.message.join('')});
  console.log(message);
};

const messageSentCallback = (ev) => {
  message.push({"role":"user","message": ev.message});
};

const eventCallbacks = {
  messageReceived: messageReceivedCallback,
  messageSent: messageSentCallback,
};

function MiniChat()  {
    return (
      <AiChat 
        initialConversation={message}
        adapter={streamAdapter}
        events={eventCallbacks} 
        personaOptions={personaOptions}
      />
    );

};

export default MiniChat;

I also have some trouble with the fetch body sometimes containing more than one item per chunk, but I can probably find a workaround for that!

Thanks for this nice project!
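
For what it's worth, the const message = [] inside streamText above shadows the outer message array, so the stored history never reaches Ollama. One way to get memory without the event callbacks is to keep everything in the adapter's closure; a minimal sketch, reusing the endpoint and model from the code above:

import type { ChatAdapter, StreamingAdapterObserver } from '@nlux/react';

type OllamaMessage = { role: 'system' | 'user' | 'assistant'; content: string };

export const createOllamaAdapter = (): ChatAdapter => {
  // The running conversation lives here, so every request carries the context.
  const history: OllamaMessage[] = [
    { role: 'system', content: 'You are an expert in data analysis.' },
  ];

  return {
    streamText: async (prompt: string, observer: StreamingAdapterObserver) => {
      history.push({ role: 'user', content: prompt });

      const response = await fetch('http://localhost:11434/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ model: 'phi3:3.8b', messages: history, stream: true }),
      });

      if (response.status !== 200 || !response.body) {
        observer.error(new Error('Failed to connect to the server'));
        return;
      }

      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      let assistantReply = '';

      while (true) {
        const { value, done } = await reader.read();
        if (done) break;
        // A chunk may contain several JSON lines; parse each one separately.
        for (const line of decoder.decode(value).split('\n').filter(Boolean)) {
          const content = JSON.parse(line).message?.content ?? '';
          assistantReply += content;
          observer.next(content);
        }
      }

      // Store the full reply so the next prompt includes this exchange.
      history.push({ role: 'assistant', content: assistantReply });
      observer.complete();
    },
  };
};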

Load chat history

I'm not sure whether this chatbot currently supports loading chat history. Say, for example, the user had a chat with the chatbot and then closed the chat window. After a few minutes, the user reopens the chat window; they want to see the chat history and continue the previous chat rather than starting a new one. In this case, I can load the chat history from my getChatHistory API, which returns the "ChatHistory" object below:

[
    {
        "role": "user",
        "content": "what file formats I can upload?"
    },
    {
        "role": "assistant",
        "content": "You can upload the following file formats: PDF, MP4, MOV, GIF, JPG, PNG, and PSD."
    }
]

Is it possible to load this ChatHistory in the chat window?
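
One approach, assuming the { role, message } item shape used with initialConversation elsewhere on this page, is to fetch the ChatHistory first and map it onto that shape; getChatHistory and myAdapter below are placeholders for your own API client and adapter:

import { useEffect, useState } from 'react';
import { AiChat } from '@nlux/react';
import { myAdapter } from './myAdapter';     // placeholder adapter module
import { getChatHistory } from './chatApi';  // placeholder API client

type HistoryItem = { role: 'user' | 'assistant'; message: string };

export function ChatWithHistory() {
  const [history, setHistory] = useState<HistoryItem[] | null>(null);

  useEffect(() => {
    // Map the API's { role, content } entries to the shape AiChat expects.
    getChatHistory().then((entries: { role: 'user' | 'assistant'; content: string }[]) => {
      setHistory(entries.map((entry) => ({ role: entry.role, message: entry.content })));
    });
  }, []);

  if (!history) return null;
  return <AiChat adapter={myAdapter} initialConversation={history} />;
}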

Is it compatible with agent_executors?

I tried this without success:

app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="Spin up a simple api server using Langchain's Runnable interfaces",
)

# We need to add these input/output schemas because the current AgentExecutor
# is lacking in schemas.
class Input(BaseModel):
    input: str
    chat_history: List[Union[HumanMessage, AIMessage, FunctionMessage]]


class Output(BaseModel):
    output: Any

def add_route(path: str, chain: Runnable):
    add_routes(
        app,
        runnable=chain,
        path=path,
        enabled_endpoints=["invoke", "stream", "input_schema", "output_schema"],
    )

add_route("/test", agent_executor.with_types(input_type=Input, output_type=Output))

I'm using the LangChain adapter in the frontend.
The langchain version is 0.1.4.

Is it possible to support more markdown features, especially links?

Thanks for creating this cool chatbot UI.

However, I found it only supports very limited Markdown features, such as bold and italic text. For example, I created the test adapter below:

export const myTestAdapter: Adapter = {
    fetchText(message: string): Promise<string> {
        return new Promise((resolve) => {
            setTimeout(() => {
                const messageToStream = 'For more **detailed instructions**, you can refer to [more details](https://example.com)';
                resolve(messageToStream);
            }, 100);
        });
    },
};

The link was not rendered as I expected. Is it possible to support more markdown features, especially links? If you can support Tables, that would be even better :)

custom message renderer or wrapper

Several chat UIs out there have extra widgets along a message like thumbs up / down, share, report, etc.

It would be handy to have a way to decorate the message box with interactive components.
I think there are a few ways to approach this:

  1. custom components to render messages

  2. custom component to wrap the message (keep current rendering)

  3. injecting a widget/markup in the current nluxc-text-message-content div element

  4. In Discord, @salmenus mentioned something along the lines of passing a custom renderer:

<AiChat messageRenderer={MyCustomMessageRenderer} />
type CustomMessageRenderer = (message: string, extras: OtherMessageRelatedInfo): ReactElement

But I guess this won't work in stream mode; I mean, it will only work as in fetch mode unless we make the component also deal with the streaming adapter.

  1. Alternatively, a wrapper would just decorate the message, which is rendered by the original core component (<AiChat /> renders <MyCustomMessageWrapper message={message} extras={extras}>{children}</My…>), where {children} is the original core component that handles streaming, markdown, etc. I guess the message and extras props would get updated while streaming; maybe a complete prop (fed from the streaming adapter observer?) would also be needed, so I know when I can interact with my decorations.

  2. Like 2, let's keep the original rendering, but instead of wrapping it, just add a custom component in React or some markup in JS:

<AiChat messageHeader={MyCustomMessageHeader} messageFooter={MyCustomMessageFooter} />

But even injecting plain markup would be enough with React, having messageHeader={<div className="header"></div>} would allow me to render what I need inside with createPortal.

It would be nice if the customisation could be done in a way that works at the core component level, so that it could be used in both JS and React.

LangServe adapter with Configurable

I'm now trying to set some configurable options on my LangServe chains, but it looks like I can only set input keys with inputPreprocessor.

Is there a way to override the getRequestBody method to send additional options alongside the input?

Security Vulnerability: Exposing OpenAI API Key in Browser with NLux Adapter

Summary

The NLux OpenAI adapter currently facilitates direct connections to the OpenAI API from the client's browser. This approach inherently exposes the OpenAI API key in client-side code, posing a significant security risk. Unauthorized users could potentially retrieve and misuse the API key, leading to unauthorized access and potential financial implications due to the misuse of the OpenAI services.

Steps to Reproduce

  1. Install and configure the NLux OpenAI adapter according to the official documentation.
  2. Inspect the network requests made from the browser when interacting with the OpenAI API via the NLux adapter.
  3. Observe the API key is included in client-side JavaScript files or network requests, which can be accessed by any user.

Expected Behavior

The API key should not be exposed to the client-side. Instead, requests to the OpenAI API should be proxied through a secure server-side environment, where the API key can be safely stored and managed.

Actual Behavior

The OpenAI API key is exposed in the browser, making it accessible to anyone who inspects the client-side code or network traffic.

Potential Solution

Implement a server-side proxy that handles requests to the OpenAI API. This proxy would store the API key securely and forward requests from the client-side application to OpenAI, without exposing sensitive credentials. This approach also allows for additional security measures, such as rate limiting and logging, to prevent abuse.
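
A minimal sketch of such a proxy, written here as an Express route; the endpoint path and model name are illustrative, and the key is read only from a server-side environment variable:

import express from 'express';

const app = express();
app.use(express.json());

// The browser calls /api/chat; only this server ever sees OPENAI_API_KEY.
app.post('/api/chat', async (req, res) => {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini', // illustrative model name
      messages: req.body.messages,
    }),
  });
  res.status(response.status).json(await response.json());
});

app.listen(3001);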

Environment

  • NLux Version: Recent
  • Browser Version: Firefox Dev
  • Operating System: Windows 11

Possible Impacts

  • Unauthorized access to the OpenAI API using the exposed API key.
  • Financial charges for the unauthorized use of OpenAI services.
  • Potential breach of OpenAI's terms of service, leading to the revocation of API access.

Suggested Labels

  • security
  • bug
  • enhancement

Issue with rendering back from history code blocks

My issue is that I saved history containing code blocks and markdown blocks, and they are stored as strings, which is understandable. Now, when I render initialConversation, it just gives me plain text without syntax highlighting or code blocks (pre/code HTML tags). Is there a way for nlux to handle this?
Screenshot 2024-04-22 at 12 29 44 AM

Thanks in advance!

Customizing components for Input box with a attach documents feature(RAG)

Hey @salmenus, first of all, thanks for this nice project. I have recently been working on a pilot (RAG) for my company and found this project very helpful. However, I have noted some friction in terms of customization that could be improved. For example, our bot would have a document context, with documents that can be uploaded or selected from existing docs (which are already indexed). Second, I would like the input box to grow when I add new lines and shrink when they are deleted.

Is there anything that can make that possible? My React knowledge is limited and I can't think of much here. If not, are there any plans for the same?

Thanks

Error in the documentation about javascript apis with bot persona

Chat personas error

https://nlux.dev/learn/chat-personas

How To Define The Bot Persona

In Javascript, you can define the bot persona by calling withPersonaOptions when creating the AiChat component. The withPersonaOptions function takes an object with two properties: bot and user. The bot property is what you use to define the bot persona, as shown in the example below.

const aiChat = createAiChat()
    .withAdapter(adapter)
    .withPersonaOptions({
        bot: {
            name: 'HarryBotter',
            avatar: 'https://nlux.ai/images/demos/persona-harry-botter.jpg',
            tagline: 'Mischievously Making Magic With Mirthful AI!'
        }
    });

avatar should be replaced with picture

I wanted to fix the documentation error myself, but I don't see the docs repository.

Module parse failed: Unexpected token

./node_modules/@nlux/core/umd/nlux-core.js
Module parse failed: Unexpected token (1:1405)
You may need an appropriate loader to handle this file type.

Intermittent Ignoring of First or Last Chunk in StreamAdapter

I am encountering some issues and am quite puzzled. I'm not sure if I'm on the right track, so I need your help to review it.

I wrote a very simple streamAdapter and simulated the 26 English letters, adding them using observer.next. However, I noticed some random problems. Sometimes, the first chunk is ignored, and other times, the last chunk is ignored, as shown in the attached image.

WeChat screenshot_20240614134922

I wonder what might be causing this. Could it be that the data is being pushed too quickly?

To address this, I added some delays each time, which seemed to help a bit. After consulting with an AI, I found a solution by using the following method:

observer.next('\u200B');

I inserted this invisible character before and after each processing step, and it seems to work wonders.

However, I am still confused as to why this is happening. 😂

I believe this might be an issue, possibly a significant one, but my understanding is limited. I truly don't comprehend why this is occurring. It seems I need everyone's help.

Reproduction

https://codesandbox.io/p/sandbox/custom-renderer-status-forked-5y2gfz
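
For reference, a minimal adapter along the lines described above (one letter per chunk, with an optional delay) might look like this; the delay parameter reflects the workaround mentioned earlier:

import type { ChatAdapter, StreamingAdapterObserver } from '@nlux/react';

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Emits the 26 letters one chunk at a time. With delayMs = 0 the chunks are
// pushed as fast as possible, which is where the dropped first/last chunk was
// observed; a small delay appears to avoid it.
export const lettersAdapter = (delayMs = 0): ChatAdapter => ({
  streamText: async (_prompt: string, observer: StreamingAdapterObserver) => {
    for (const letter of 'abcdefghijklmnopqrstuvwxyz') {
      observer.next(letter);
      if (delayMs > 0) {
        await sleep(delayMs);
      }
    }
    observer.complete();
  },
});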

Error with line break when using streaming

Hi,

Have a look at my code

export const MyAdapter: ChatAdapter = {
  streamText: async (message: string, observer: StreamingAdapterObserver) => {
    const result = await sendMessage({
      // TODO:
      supaFriend: "clwt7czaf0000fiuonrkj7dlk",
      // TODO
      conversation: "clwt7e3zg00037csd347yqcs3",
      userInputMessage: message,
    });

    console.log(result);

    const parseResult = parseResponseMessageParsing(result.data);

    if (!result.serverError && parseResult.status === "success") {
      console.log("parsed", parseResult.data.message);
      observer.next(parseResult.data.message);
      observer.complete();
    } else {
      observer.error(new Error(result.serverError || "Could not generate message"));
    }
  },
};

This gives me

image

As you can see, single \n characters are being ignored, while double ones work. Fair enough, but let's look further.

Screenshot 2024-05-31 at 17 02 05

I just refreshed the page and printed the last message in the console. Everything gets passed to initialConversation.

Now it works.

Do you have any explanation for this?

Thanks

Add updates for latest GPT models

Would appreciate updating the OpenAIModel type to allow support for the latest gpt-4-turbo-preview and gpt-4-1106-preview models!

Supporting LangGraph

Are there any plans to support LangGraph? LangGraph already supports the LangServe interface, but there are human-in-the-loop components in some of the graphs that I am building. Just thought I would ask about the state of nlux with respect to LangGraph.
Thank you for your effort on this. Looks really good!
