
typechat's Introduction

TypeChat

TypeChat is a library that makes it easy to build natural language interfaces using types.

Building natural language interfaces has traditionally been difficult. These apps often relied on complex decision trees to determine intent and collect the required inputs to take action. Large language models (LLMs) have made this easier by enabling us to take natural language input from a user and match it to intent. This has introduced its own challenges, including the need to constrain the model's reply for safety, to structure responses from the model for further processing, and to ensure that the reply from the model is valid. Prompt engineering aims to solve these problems, but it comes with a steep learning curve and increases fragility as the prompt grows in size.

TypeChat replaces prompt engineering with schema engineering.

Simply define types that represent the intents supported in your natural language application. That could be as simple as an interface for categorizing sentiment or more complex examples like types for a shopping cart or music application. For example, to add additional intents to a schema, a developer can add additional types into a discriminated union. To make schemas hierarchical, a developer can use a "meta-schema" to choose one or more sub-schemas based on user input.

After defining your types, TypeChat takes care of the rest by:

  1. Constructing a prompt to the LLM using types.
  2. Validating that the LLM response conforms to the schema. If validation fails, repairing the non-conforming output through further language model interaction.
  3. Summarizing the instance succinctly (without using an LLM) and confirming that it aligns with user intent.
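For example, a minimal sentiment classifier looks like this end to end (a sketch: the schema is the one from the TypeChat examples, and the file names here are assumptions):

// sentimentSchema.ts
export interface SentimentResponse {
    sentiment: "negative" | "neutral" | "positive"; // The sentiment of the text
}

// main.ts
import fs from "fs";
import path from "path";
import { createJsonTranslator, createLanguageModel } from "typechat";
import { SentimentResponse } from "./sentimentSchema";

const model = createLanguageModel(process.env);
// TypeChat type-checks the schema source text, so we read the .ts file itself.
const schema = fs.readFileSync(path.join(__dirname, "sentimentSchema.ts"), "utf8");
const translator = createJsonTranslator<SentimentResponse>(model, schema, "SentimentResponse");

async function main() {
    const response = await translator.translate("hello, world");
    if (response.success) {
        console.log(`The sentiment is ${response.data.sentiment}`);
    }
}
main();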

Types are all you need!

Getting Started

Install TypeChat for TypeScript/JavaScript:

npm install typechat

You can also work with TypeChat from source.

To see TypeChat in action, we recommend exploring the TypeChat example projects. You can try them on your local machine or in a GitHub Codespace.

To learn more about TypeChat, visit the documentation which includes more information on TypeChat and how to get started.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.


typechat's Issues

ChatGPT interprets the prompt as wanting an example of JSON, not JSON with actual data for the user request

Hello

I've been playing around with a meal plan app, and I've spent a lot of time working out how to get ChatGPT to give the result as JSON.
For this reason, when I heard about TypeChat, it sounded like a perfect solution.

I'm not sure whether it's because my type definitions are a bit complex, but the resulting prompts make ChatGPT think I want an example of what the resulting JSON could look like.

Here's a link to a chat I made with the prompt generated by TypeChat.

https://chat.openai.com/share/5bce537a-f9b8-4fbc-afd7-7ebb724b3f89

In my own approach, I've had success ending my prompt with the type definition. For example:

"Your response should be in JSON format {meals: {"description": string, "ingredients": {"name": string, "quantity": number, "unit": string}[], "directions": string[]}[]}."

Have you considered this sort of approach?

Design Meeting Notes (2023-11-06)

Python and TypeChat

  • Still thinking about Pydantic as a basis.
  • pydantic-core specifies the built-in discriminators for validators
  • Seems feasible to generate TypeScript from these. Either over JSON schema or directly over data structures.
    • What about custom validators/serializers? Can't handle those with something custom.
  • Would be weird to tell users they have to be on a fixed version of Pydantic.
  • Anecdotally, have had good results with JSON schema (or more specifically, YAML versions of JSON schema).
    • YAML seems to do really well...
    • As well as TypeScript as a spec language? (check back here)
  • What workflow could we have here?
    • Start with a YAML-authored JSON schema.
    • Have a proof-of-concept of using kwalify
  • The good part of these schemas is that they can specify more than built-in annotations for types.
  • So specify in
  • 3 concerns
    • Spec language for language models (succinct, few tokens, familiar to recent LLMs)
    • Validation expressivity (you can say "it's a zip code" or "it's an email address").
    • Developer UX (end-to-end, you have a pleasant authoring language, type-checking, auto-complete, etc.)
  • Tied to that are the following:
    • What does a developer write?
    • What does an LLM see?
    • What
  • Be aware - there's a distinction for errors committed by an LLM versus errors committed by an end-user.
    • If a user says "my zip code is abcdefg", then that's a user error, not a language model error.
  • Another example - TypeChat Programs in Python
    • Top level functions exported.

      def add(x: float, y: float) -> float: ...
      def sub(x: float, y: float) -> float: ...
      # ...
  • Nothing seems to work as well as TypeScript for LLMs.
    • Lightest on tokens, most familiar.
  • Okay, but what's the authoring format? What do you do here? What if you need to generate types on the fly?
  • How will we solve the programmatic case in the TypeScript world?
    • We don't have a perfect solution right now. Maybe rely on libraries like Zod?
    • What's that going to have to look like? You say that something is a string, but then it's generated on the fly.
    • How does that get there?
  • Do these string unions/enums actually matter? Maybe for discriminated unions, but maybe not for items in a database?
  • What are these supposed to look like?
    • It may be best to insert these into comments.
  • So what would we do with Python?
  • We really really want to see what the accuracy is between the TypeScript and Python forms.
    • If it's not accurate, we need to see if we can convert it into TypeScript.

Date type works but returns validation error

With this interface

export interface DateTime {
    type: 'dateTime',
    dateTime?: Date;
};

The JSON results look good, but validation fails:

JSON validation failed: Cannot find name 'Date'.
{
  "entities": [
    {
      "type": "dateTime",
      "dateTime": "2022-08-20T08:30:00.000Z"
    }
  ]
}
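A common workaround (my suggestion, not an official fix): the schema text appears to be type-checked as a standalone module where globals like Date aren't available (hence "Cannot find name 'Date'"), so model the value as an ISO 8601 string instead:

export interface DateTime {
    type: 'dateTime';
    // An ISO 8601 date-time string, e.g. "2022-08-20T08:30:00.000Z"
    dateTime?: string;
}

The model already returns the value as a string (as in the JSON above), so validation should then pass.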

JSON validation failed: File '/schema.ts' is not a module.\n'Object' only refers to a type, but is being used as a value here.\nCannot find name 'exports'

I'm using this function to get the summary

export const getSummary = async (mails: string) => {
    const response = await translator.translate(mails);
    if (!response.success) {
        console.log(response);
        throw response;
    }
    const summarizedMail = response.data;
    console.log(JSON.stringify(summarizedMail, undefined, 2));
    if (summarizedMail.summaryObject.type === "unknown") {
        console.log("I didn't understand the following:");
        console.log(summarizedMail.summaryObject.text);
    }
    return response;
};

Here's the schema file

export interface SummarizedMailItems {
    type: 'mailSummary';
    summarizationLanguage: 'arabic' | 'english' | 'french' | 'spanish';
    summaryParagraph: string;
    summaryBulletpoints: [string, string, string];
}

export interface UnknownText {
    type: 'unknown';
    text: string; // The text that wasn't understood
}

export type SummarizedMail = {
    summaryObject: SummarizedMailItems | UnknownText;
};

I'm getting this error

{
  success: false,
  message: "JSON validation failed: File '/schema.ts' is not a module.\n'Object' only refers to a type, but is being used as a value here.\nCannot find name 'exports'.\n{\n  \"paragraphSummary\": \"..........",
}
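The mentions of 'exports' and 'Object' in the error suggest that the compiled JavaScript output was read as the schema rather than the TypeScript source. A sketch of loading the .ts source explicitly (file names are assumptions):

import fs from "fs";
import path from "path";
import { createJsonTranslator, createLanguageModel } from "typechat";
import { SummarizedMail } from "./summarySchema";

const model = createLanguageModel(process.env);
// Read the TypeScript source itself, not the compiled .js next to it;
// TypeChat type-checks this text as a module.
const schema = fs.readFileSync(path.join(__dirname, "../src/summarySchema.ts"), "utf8");
const translator = createJsonTranslator<SummarizedMail>(model, schema, "SummarizedMail");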

Design Meeting Notes (2023-10-30)

Agent Patterns

  • Related to work from AutoGPT and AutoGen
  • Agents "specifying" other agents
    • "Matryoshka agents"
  • Agents passing messages back and forth.
    • e.g. actor-critic pattern - an actor agent responds, a critic agent questions the actor to refine answers or evaluate answers.
  • Can we extend TypeChat schemas so that they're full descriptions of agents?
  • Can't much of this be achieved through comments in our schemas?
    • Maybe
  • If we can build these out as examples, that would be useful.
  • What are the criteria by which people evaluate success of these agents?
    • Critic agents can evaluate in a very restricted way - "respond in 'good'/'bad'"
  • There's a view of this which is like reinforcement learning, where actor and critic just feed off each other. Where's the schema?
  • Stepping back - projects like AutoGPT and AutoGen are in Python - what are we even talking about without a Python version of TypeChat?

TypeChat in Python (and .NET, and revisiting the Programs approach in TypeScript)

  • We have experiments in Python (e.g. Pypechat)

  • People use Pydantic a lot for validation. Sending Python types doesn't work all that well as a spec language, and JSON schema dumps of Pydantic don't work as well as TypeScript as a spec language, either.

  • Could we generate TypeScript from something like Pydantic data structures?

  • Libraries like Pydantic also have certain kinds of validation beyond what any static type system can encode. We can encode those in comments.

  • We could do the same thing with something like Zod as well.

  • We don't know how well libraries like Pydantic work on discriminated unions and collections of literals.

  • One of the nice things about these solutions is that dynamic schema generation (i.e. "my information is all in a database, generate a schema out of that") can be achieved because they all have programmatic APIs.

  • Using a runtime type validation library sounds nice, but what about TypeChat programs?

    • Type-checking between steps is not all that simple.
  • Have to extend Pydantic in some way to describe APIs

  • Something where each ref is inlined and type-checked in that manner.

  • Will that work? What about the csv example? Table types are basically opaque, but exist across values.

  • Problem with this approach and opaque values (things that can't be JSONy) is... well, let's dive into the current programs approach.

  • Given the following API...

    interface API {
        getThing(...): Thing;
        processStuff({ thing: Thing, a: ..., b: ... }): ...;
    }

    for an intent, a language model will generate something like...

    {
        "@steps": [
            ...,
            {
                "@func": {
                    "name": "...",
                    "args": [
                        {
                            "thing": { "@ref": 0 },
                            "a": "...",
                            "b": "..."
                        }
                    ]
                }
            }
        ]
    }
    • Can imagine a runtime validator substituting the { "@ref": 0 } with the earlier value (see the sketch after these notes).
    • If you take a substitutive approach, all you end up with is pure JSON.
    • Can't have an API where you have factories for opaque types.
  • If we did this for Python and .NET, we would probably do the same for TypeScript as well.

  • Does this validation approach work? Don't you need an exemplar value for each return type?

  • Forget Python, how does this work with up-front validation?

    interface API {
      getThing(): { x: number, y: number };
      eatThing(value: { x: number, y: number }): void
    }

    could generate

    {
        "@steps": [
            {
                "@func": {
                    "name": "getThing",
                    "args": []
                }
            },
            {
                "@func": {
                    "name": "eatThing",
                    "args": [{ "@ref": 0 }]
                }
            }
        ]
    }

    which turns into...

    {
        "@steps": [
            {
                "@func": {
                    "name": "getThing",
                    "args": []
                }
            },
            {
                "@func": {
                    "name": "eatThing",
                    "args": [{ "@func": { "name": "getThing", "args": [] } }]
                }
            }
        ]
    }
    • Well not quite, you would serialize the result of the first step and send it right into eatThing
  • But that's not the same thing that's in TypeChat today - this doesn't do up-front validation, it validates at each step of evaluation.

  • We might be able to figure something out with runtime type validation libraries to do up-front validation.

  • Is up-front validation important?

    • We do think so, we believe that validation and summarization before executing is something we should strive to provide.
  • type StuffArg = {
      thing: Thing,
      a: number,
      b: number
    }
    
    interface API {
      eatThing(value: StuffArg): void
    }
    • You'll need some sort of Pydantic/Zod object to describe this...
    • What is the exemplar value for each?
    • If you use nominal equivalence, it's easier to provide some basic checking here.
    • That might be enough?
  • But TypeChat programs permit some amount of structural construction - object literals etc.

    • Kind of at odds with this concept of "nominal only".
    • Could say all refs have to be nominal.
  • Could come up with a very minimal type-checker across APIs.

  • How do you deal with the divergence between how this type-checks versus how it all type-checks in the behind-the-scenes implementation of the API?

    • Impedance mismatch problem - not limited to "nominal versus structural". Does this type-checking strategy in TypeChat support subtyping?
  • We will need to prototype this out a bit.

    • We'll likely focus on Python here first just to prove it out and get a proof-of-concept.
    • Need to see what Pydantic, Zod, and the like provide here.
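A minimal sketch of the substitutive evaluation idea referenced in these notes (my own illustration, using the { "name", "args" } shape from the notes rather than TypeChat's actual program grammar):

type Step = { "@func": { name: string; args: unknown[] } };

// Evaluate steps in order, replacing { "@ref": n } with the result of step n.
async function runProgram(steps: Step[], api: Record<string, (...args: any[]) => any>) {
    const results: unknown[] = [];
    const resolve = (value: unknown): unknown => {
        if (Array.isArray(value)) return value.map(resolve);
        if (value && typeof value === "object") {
            if ("@ref" in value) return results[(value as { "@ref": number })["@ref"]];
            return Object.fromEntries(Object.entries(value).map(([k, v]) => [k, resolve(v)]));
        }
        return value;
    };
    for (const step of steps) {
        const { name, args } = step["@func"];
        results.push(await api[name](...args.map(resolve)));
    }
    return results[results.length - 1];
}

As the notes observe, if all refs are substituted this way, everything bottoms out in pure JSON, which is exactly why opaque (non-JSON-serializable) values are a problem for this approach.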

Too many requests error

Thanks for this innovation. I'm excited to learn about it.

I tried running the sentiment example and I was getting a "too many requests" error (screenshot attached).

I used a new API key and still got the same response

TypeChat design question: Aren't we mixing schema and data?

Hi all,

When looking at samples like CoffeeShop or Restaurant, we can see schemas like:

export interface BakeryProducts {
    type: 'BakeryProducts';
    name: 'apple bran muffin' | 'blueberry muffin' | 'lemon poppyseed muffin' | 'bagel';
    options: (BakeryOptions | BakeryPreparations)[];
}

or

export type Pizza = {
    itemType: 'pizza';
    // default: large
    size?: 'small' | 'medium' | 'large' | 'extra large';
    // toppings requested (examples: pepperoni, arugula)
    addedToppings?: string[];
    // toppings requested to be removed (examples: fresh garlic, anchovies)
    removedToppings?: string[];
    // default: 1
    quantity?: number;
    // used if the requester references a pizza by name
    name?: "Hawaiian" | "Yeti" | "Pig In a Forest" | "Cherry Bomb";
};

Here, we use data like the pizza names or the bakery products inside the schema definitions.

Is this a realistic approach?
Usually, we have data from a data source/store, e.g., all the pizzas a restaurant offers. They won't/cannot live inside the schema definitions file in real life :-).

Do you have any thoughts on this design?

Thank you.

How to model classical NLU intents and entities (slots)?

The calendar example's actions are similar to intents that one would define for Dialogflow or Alexa. How to use OpenAI as an intent NLU engine?

The issue is that "yes" and "yes, please" match the YesIntent, but "ok" and other affirmative responses do not.

Inputs:

  • yes
  • ok
  • sure
  • i will
  • yes, please

Type schema:

// The following types define the structure of an object of type BotIntent that represents a user request that matches most closely to the sample or synonyms

export type BotIntent = YesIntent | NoIntent | UnknownIntent;

// if the user types text that closely matches 'yes' or a synonym, this intent is used
export type YesIntent = {
  intentName: 'YesIntent';
  sample: 'yes';
  text: string;
};

// if the user types text that closely matches 'no' or a synonym, this intent is used
export type NoIntent = {
  intentName: 'NoIntent';
  sample: 'no';
  text: string;
};

// if the user types text that can not easily be understood as a bot intent, this intent is used
export interface UnknownIntent {
  intentName: 'UnknownIntent';
  sample: 'unknown';
  // text typed by the user that the system did not understand
  text: string;
}

How to model more complicated intents with required and optional entities?
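One workaround worth trying (my own suggestion, not verified against the library): enumerate common affirmative synonyms directly in the comment, since the comments are sent to the model as part of the schema:

// If the user types text that closely matches 'yes' or an affirmative
// synonym such as "ok", "sure", or "i will", this intent is used.
export type YesIntent = {
  intentName: 'YesIntent';
  sample: 'yes';
  text: string;
};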

Give access to `data` if `result.success === false`

/**
 * An object representing a successful operation with a result of type `T`.
 */
export type Success<T> = {
    success: true;
    data: T;
};
/**
 * An object representing an operation that failed for the reason given in `message`.
 */
export type Error = {
    success: false;
    message: string;
};
/**
 * An object representing a successful or failed operation of type `T`.
 */
export type Result<T> = Success<T> | Error;
/**
 * Returns a `Success<T>` object.
 * @param data The value for the `data` property of the result.
 * @returns A `Success<T>` object.
 */
export declare function success<T>(data: T): Success<T>;
/**
 * Returns an `Error` object.
 * @param message The value for the `message` property of the result.
 * @returns An `Error` object.
 */
export declare function error(message: string): Error;
/**
 * Obtains the value associated with a successful `Result<T>` or throws an exception if
 * the result is an error.
 * @param result The `Result<T>` from which to obtain the `data` property.
 * @returns The value of the `data` property.
 */
export declare function getData<T>(result: Result<T>): T;

It would be useful to get access to data typed as unknown even if result.success === false. This would enable you to use the data and create your own repair pipeline.
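One possible shape for this (a sketch of the request, not the current API): extend Error with an optional property carrying the raw output.

// Hypothetical variant of the Error type: carry the raw (unvalidated)
// model output so callers can build their own repair pipeline.
export type Error = {
    success: false;
    message: string;
    // The raw response that failed validation, if available.
    data?: unknown;
};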

@azure/openai vs axios

Did you consider using @azure/openai? If so, we would love to hear what the pain points were and whether you have any feedback 😊

If not, please check out the following samples of using completions:

As you'll notice, the library works with both openai.com and Azure OpenAI and provides a seamless experience when switching between the two.

Design Meeting Notes (2023-10-23)

Possible Topics

  • Issue tracker
  • Library integrations
  • OpenAI functions
  • Formal representations
  • TypeChat Programs
  • Other languages (e.g. Python and C#)
  • Other features

Issue Tracker Maintenance and Community Engagement

  • We had a pause - what happened?
    • Vacations, explorations with internal teams (e.g. Copilot implementations), etc.
    • Direct discussions with users, but took us away from GitHub for a bit.
  • Where are we now?
  • Still want more blog posts, want to have a video explainer - seeing is believing.
  • Plan to do a sweep over issues and PRs.

TypeChat and Orchestrators

  • Things like Semantic Kernel, langchain, etc.
  • Currently exploring how these can be integrated - loose ideas at this moment?
  • Want to be able to find where these can complement each other, integrate better, etc.
    • Planners based on TypeChat's JSON Programs

OpenAI Functions

#45

  • OpenAI functions are one function at a time.
  • Described via JSON schema.
  • There's a function role that fits within a conversation.
  • Fine-tuned - not guaranteed to get schema-conforming data (nor even well-formed data!).
  • Is there a lot of usage?
    • There's a lot of excitement, but we haven't yet spoken with many users.
  • So why not just use the TypeChat approach here? Either TypeChat JSON validation or TypeChat JSON programs?
    • We believe one subsumes the other - TypeChat being cross-model with type-checked validation is more robust.
    • Could plug in your favorite schema validator to do this technically, right?
    • Anecdotally, TypeChat performs very very well. To be honest, a lot better in our experience.
      • We're missing evidence we can show to the outside world though.
  • Do we have any insight into long-term plans with OpenAI functions?
    • Not yet, we would love to discuss further with these teams.
  • Conclusion?
    • We don't yet think it makes sense to support directly - would love to better understand long-term plans from LLM providers like OpenAI.

Formal Representations for LLMs

  • What's that mean?
    • Verifiable and repairable syntactically/semantically
  • Areas of investigation
    • Best representations of...
      • specifications (e.g. TypeScript types, "JSON templates", JSON schema...)
      • return formats (e.g. JSON, YAML, code in specific languages)
    • Is there a compact schema form that we can adopt/invent with high accuracy? It'd be easier to verify if we had something more compact than JSON schema.
      • But new languages = new toolchains. Picking a well-defined subset of a known language like TypeScript might be more successful.
    • How do we make these work across languages?
  • What about a separate authoring format?
    • "SchemaLite"?
  • What about that subset of TypeScript?
  • What about TypeSpec?
  • TypeScript versus JSON Schema?
    • TypeScript really shines on discriminated unions.
    • What's the best way to describe a discriminated union to an LLM? For data interchange, that's fundamentally how you describe polymorphism.

Further Evolution of JSON Programs/Planning/Scripting/Orchestration

  • Some feedback on programs is that they're cool, but too limited.
    • Clever ways to enable some stuff like branching and iteration, but they don't always scale.
  • Models are being asked to produce an IR that is turned into another language, then interpreted.
  • The feedback loop from a type-checker is pretty removed.
    • Hard problem with verification.
  • But we have concerns about sandboxing and guaranteed availability (i.e. keeping your host programs working in spite of the halting problem).
  • Plus, what if you have millions of functions, or methods on objects with thousands of types, etc.?
    • And if we want to deliver plans with no hallucinations, we want to be able to summarize plans for humans too. So we want that...
    • But how do you actually present this to a user?
    • Just be able to provide transactions/undo? Commit/unroll?
  • Maybe there's some inspiration to be taken from languages like PowerShell, Tcl? Bring your own language features, build it up.

Multi-Agent/Multi-Schema/Routing Support

  • Dynamic Schema Generation from Data
  • Programmatic Schema Construction
    • Dynamically populating structure and entities

Long-Term Features We'd Like to Tackle

  • Embeddings
  • Vocabulary
  • Multi-Schema
  • Routing
  • Multi-Model Infrastructure

Ability to provide additional validation logic

This library is great, but how can I add additional validation?
For example, how can I ensure that a string doesn't contain newlines?

GPT3 will often ignore instructions to return a string in a certain format, and it would be great if this library could retry until the right response (e.g. passing validation) was returned.
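In the meantime, one way to approximate this from the outside (a sketch of my own, using only the public translate API; translateWithCheck and its check callback are hypothetical helpers):

import { TypeChatJsonTranslator } from "typechat";

// Retry translation until a caller-supplied check passes or attempts run out.
async function translateWithCheck<T extends object>(
    translator: TypeChatJsonTranslator<T>,
    request: string,
    check: (data: T) => boolean,
    maxAttempts = 3
): Promise<T> {
    for (let attempt = 0; attempt < maxAttempts; attempt++) {
        const response = await translator.translate(request);
        if (response.success && check(response.data)) {
            return response.data;
        }
    }
    throw new Error("no response passed the custom validation");
}

// Example: reject any result whose text contains newlines.
// const data = await translateWithCheck(translator, input, (d) => !d.text.includes("\n"));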

Better prompt for chatglm2-6b-4bit

Description:

When I run the sentiment example, the response from chatglm is always '{\nsentiment : "xxx"\n}', which fails to parse with JSON.parse().

Environment:

I used FastChat and chatglm2-6b-4bit to mock the OpenAI API.

Root cause:

Because chatglm2-6b-4bit has lower performance than ChatGPT, it cannot return the correct result for the example.

Solution:

Add one line to the prompt in the source code ("Please note that ..."):

function createRequestPrompt(request) {
    return `You are a service that translates user requests into JSON objects of type "${validator.typeName}" according to the following TypeScript definitions:\n` +
        `\`\`\`\n${validator.schema}\`\`\`\n` +
        `Please note that the response string can be parsed to a JSON object via JSON.parse(), such as {"aaa" : "bbb"}\n` +
        `The following is a user request:\n` +
        `"""\n${request}\n"""\n` +
        `The following is the user request translated into a JSON object with 2 spaces of indentation and no properties with the value undefined:\n`;
}

Result:

Now the response from chatglm is '{"sentiment": "xxx"}', which can be parsed as JSON correctly.

Allow customizing the prompt per field in the schema

I'd like to customize the prompt for some fields and have that imported from the schema. For the calendar example, I'd like to prompt the LLM to give me startTime and endTime in ISO 8601 format.

export type EventTimeRange = {
    /** Provide the time in the system's timezone; assume the user is referring to a future date */
    startTime?: Date;
    /** Expect endTime to be in ISO 8601 format; if not otherwise specified, assume the user is referring to a future date. It should always be equal to or later than startTime */
    endTime?: string;
    /** Expect duration to be in minutes */
    duration?: string;
};

Support adding prompts to the system prompt.

    function createRequestPrompt(request: string) {
        return `${prefixPrompt}\nYou are a service that translates user requests into JSON objects of type "${validator.typeName}" according to the following TypeScript definitions:\n` +
            `\`\`\`\n${validator.schema}\`\`\`\n` +
            `The following is a user request:\n` +
            `"""\n${request}\n"""\n` +
            `The following is the user request translated into a JSON object with 2 spaces of indentation and no properties with the value undefined:\n`;
    }

Is it possible to support adding a prefix prompt? With this approach, we could define a more customized prompt at the global level.
e.g.

const prefixPrompt = "You need to make inferences about what the user hasn't mentioned, based on what has been provided."

Docs on using TypeChat with other LLMs

What I understand is that TypeChat can be an alternative to the LangChain library, but throughout the documentation I only see it being used with OpenAI's GPT models. Is there any option to use it with an open-source model like LLaMA or Falcon?
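In principle, yes: TypeChat accepts anything that implements TypeChatLanguageModel, so any model behind an HTTP endpoint can work. A sketch for a hypothetical local server hosting an open-source model (the URL and response shape below are assumptions, not a documented API):

import { TypeChatLanguageModel, success, error } from "typechat";

const model: TypeChatLanguageModel = {
    async complete(prompt: string) {
        // Hypothetical local completion endpoint (e.g. a server hosting LLaMA).
        const response = await fetch("http://localhost:8080/completion", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ prompt }),
        });
        if (!response.ok) {
            return error(`HTTP ${response.status}`);
        }
        const result = (await response.json()) as { content: string };
        return success(result.content);
    },
};

You would then pass this model to createJsonTranslator as usual. Whether a given open-source model follows the schema reliably is a separate question.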

Full prompt for a specific task

Thank you so much for developing such a useful tool. I usually only use Python and other mathematical software, and I am not familiar with TypeScript. From this library, I can't tell how to see the complete prompt for a specific problem. Can you add a description to the documentation? For example, what is the full prompt for the sentiment task?
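For reference, here is roughly what the assembled prompt looks like for the sentiment example, based on the createRequestPrompt template quoted in other issues on this page (the exact schema text is whatever your schema file contains, and the user request here is just an example):

You are a service that translates user requests into JSON objects of type "SentimentResponse" according to the following TypeScript definitions:
```
export interface SentimentResponse {
    sentiment: "negative" | "neutral" | "positive"; // The sentiment of the text
}
```
The following is a user request:
"""
hello, world
"""
The following is the user request translated into a JSON object with 2 spaces of indentation and no properties with the value undefined: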

Distribute as ESM

TypeChat currently only distributes CJS targeting a very old ES version. Can TypeChat be configured to distribute both? It's a relatively straightforward configuration.

Get API information on usage etc.

From my understanding, you currently only get the answer and a success flag back.
For me it would be quite interesting to also get information on tokens used etc. (which OpenAI provides, for example).
Is there a way to get that information out, or does this need modification in the code? I'd be happy to help, I just need a little guidance on where to start.

This is the usage information you get back when using the OpenAI API:

"usage":{
      "prompt_tokens":13,
      "completion_tokens":7,
      "total_tokens":20
   },
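One starting point that avoids modifying the library (a sketch of my own): implement TypeChatLanguageModel yourself, call the OpenAI chat completions endpoint directly, and stash the usage block from each response.

import { TypeChatLanguageModel, success, error } from "typechat";

let lastUsage: unknown; // populated after each completion

const model: TypeChatLanguageModel = {
    async complete(prompt: string) {
        const response = await fetch("https://api.openai.com/v1/chat/completions", {
            method: "POST",
            headers: {
                "Content-Type": "application/json",
                Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
            },
            body: JSON.stringify({
                model: "gpt-3.5-turbo",
                messages: [{ role: "user", content: prompt }],
            }),
        });
        if (!response.ok) {
            return error(`HTTP ${response.status}`);
        }
        const json: any = await response.json();
        lastUsage = json.usage; // { prompt_tokens, completion_tokens, total_tokens }
        return success(json.choices[0].message.content ?? "");
    },
};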

"Restaurant" example has wrong output in README

The "Restaurant" example has a wrong output in the "Usage" section of the README1, if I understand the input correctly, it should be:

-2 large pizza with mushrooms
+1 large pizza with mushrooms
+1 large pizza with sausage
 1 small pizza with sausage
 1 whole Greek salad
 1 Pale Ale
 1 Mack and Jacks

If this is intentional, then maybe there should be a sentence below it explaining this to avoid confusion, for example:

This shows that TypeChat may not be 100% accurate, and you may want to consider asking the user for confirmation before performing any action. The output here erroneously shows 2 mushroom pizzas and 1 sausage pizza, while it should be 1 mushroom pizza and 2 sausage pizzas (one large and one small).

Footnotes

  1. And the "Input" might be incorrect as well; shouldn't it be only 🍕> instead of 😀> 🍕>?

.env example?

What do we need to put into the .env file to set up connectivity to ChatGPT?
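For the OpenAI-based examples, the environment variables are documented in the repo's README; a typical .env looks roughly like this (values are placeholders, and the Azure variant is shown for completeness):

OPENAI_MODEL=gpt-3.5-turbo
OPENAI_API_KEY=sk-...

# Or, for Azure OpenAI:
AZURE_OPENAI_ENDPOINT=https://YOUR_RESOURCE.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT/chat/completions?api-version=2023-05-15
AZURE_OPENAI_API_KEY=...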

How to handle multiple languages?

From the restaurant example, we find this type in the schema:

export type Pizza = {
    itemType: "pizza";
    // default: large
    size?: "small" | "medium" | "large" | "extra large";
    // toppings requested (examples: pepperoni, arugula)
    addedToppings?: string[];
    // toppings requested to be removed (examples: fresh garlic, anchovies)
    removedToppings?: string[];
    // default: 1
    quantity?: number;
    // used if the requester references a pizza by name
    name?: "Hawaiian" | "Yeti" | "Pig In a Forest" | "Cherry Bomb";
};

It appears that values in strings and comments (in English) are used to help construct the response JSON.

Would there need to be one schema file per language where all strings and comments are in a given language?

  • foodOrderViewSchema.en-US.ts
  • foodOrderViewSchema.en-GB.ts
  • foodOrderViewSchema.pt-BR.ts

Multi-file schemas

TypeChat currently assumes that schemas are self-contained in one file. In reality, schemas are often spread out over multiple files. Ideally, TypeChat would enable the use of multi-file schemas out of the box by resolving imports before stringifying the result.
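Until then, a rough workaround is to inline the files yourself before handing the text to TypeChat. A naive sketch (my own, with hypothetical file names; the import-stripping regex is simplistic and won't handle re-exports or multi-line imports):

import fs from "fs";
import path from "path";

// Concatenate schema files into one module-like text, dropping import lines
// so the combined source type-checks as a single self-contained schema.
function loadSchema(files: string[]): string {
    return files
        .map((f) => fs.readFileSync(path.join(__dirname, f), "utf8"))
        .map((src) => src.replace(/^import .*$/gm, ""))
        .join("\n");
}

// const schema = loadSchema(["cartSchema.ts", "itemSchema.ts"]);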

How to customize TypeChat for my own model and personalized scenarios (e.g. for Chinese users)?

I am currently deploying the model and exposing a callable OpenAI-style API. I am modifying model.ts to fit my own model, but I'm experiencing instability when using mpt-30b-chat and mpt-30b-instruct. Math functions are frequently being called incorrectly, and JSON templates often fail to validate in complex projects. Do you have any suggestions, such as adding a system prompt or modifying the template to accommodate Chinese?

Conversations rather than one-offs

Any thoughts on how to use TypeChat in conversation-style interactions? In my use case, there is a need to go back and forth with the LLM, refining queries. In your coffee shop example, something like this:

User: Two tall lattes. The first one with no foam.
Assistant: Two tall lattes coming up.
User: The second one with whole milk. Actually make the first one a grande.
Assistant: One grande latte, one tall latte with whole milk. Coming up.
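One approach that seems worth experimenting with (my own sketch, not a built-in feature): fold the running conversation into each request string, so the model sees prior turns when producing the next typed result.

// Assumes a `translator` created with createJsonTranslator, as elsewhere.
const history: string[] = [];

async function translateTurn(userInput: string) {
    history.push(`User: ${userInput}`);
    const request = history.join("\n");
    const response = await translator.translate(request);
    if (response.success) {
        history.push(`Assistant: ${JSON.stringify(response.data)}`);
    }
    return response;
}

Whether the model reliably applies corrections like "make the first one a grande" across turns would need testing.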

TypeChat JSON Validator Incorrectly Rejects Valid Enums

Hello!
I included some enums in my schema, but unfortunately the library seems incompatible with schemas containing enums.
My JSON data includes values that should correspond to valid enum keys, but the validator seems to incorrectly reject these values.

Here is part of the error, along with the Currency enum:
Error: Type '"EUR"' is not assignable to type 'CURRENCY'. {... "currency": "EUR", ...}

export enum CURRENCY { USD = 'USD', EUR = 'EUR', GBP = 'GBP', }

It worked using a union type like the following:

type CURRENCY = 'USD' | 'EUR' | 'GBP';

Will enums be supported in the future, or might my implementation be wrong?
Do you think type safety is guaranteed with a union type?

Thank you !

[Feature] Support edge runtimes like Cloudflare Workers.

Description:

When I tried to use TypeChat to do something great on a Cloudflare Worker, I hit some errors at build time.


- warn No build cache found. Please configure build caching for faster rebuilds. Read more: https://nextjs.org/docs/messages/no-cache
Attention: Next.js now collects completely anonymous telemetry regarding usage.
This information is used to shape Next.js' roadmap and prioritize features.
You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
https://nextjs.org/telemetry

- info Creating an optimized production build...
Warning: For production Image Optimization with Next.js, the optional 'sharp' package is strongly recommended. Run 'yarn add sharp', and Next.js will use it automatically for Image Optimization.
Read more: https://nextjs.org/docs/messages/sharp-missing-in-production
Failed to compile.

../node_modules/.pnpm/[email protected]/node_modules/typechat/dist/interactive.js:7:29
Module not found: Can't resolve 'fs'

https://nextjs.org/docs/messages/module-not-found

Import trace for requested module:
../node_modules/.pnpm/[email protected]/node_modules/typechat/dist/index.js
./app/api/typeChat/route.ts
../node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/next/dist/build/webpack/loaders/next-edge-app-route-loader/index.js?absolutePagePath=private-next-app-dir%2Fapi%2FtypeChat%2Froute.ts&page=%2Fapi%2FtypeChat%2Froute&appDirLoader=bmV4dC1hcHAtbG9hZGVyP25hbWU9YXBwJTJGYXBpJTJGdHlwZUNoYXQlMkZyb3V0ZSZwYWdlPSUyRmFwaSUyRnR5cGVDaGF0JTJGcm91dGUmcGFnZVBhdGg9cHJpdmF0ZS1uZXh0LWFwcC1kaXIlMkZhcGklMkZ0eXBlQ2hhdCUyRnJvdXRlLnRzJmFwcERpcj0lMkZob21lJTJGcnVubmVyJTJGd29yayUyRmFzay1jb2RlYmFzZSUyRmFzay1jb2RlYmFzZSUyRnNyYyUyRmFwcCZhcHBQYXRocz0lMkZhcGklMkZ0eXBlQ2hhdCUyRnJvdXRlJnBhZ2VFeHRlbnNpb25zPXRzeCZwYWdlRXh0ZW5zaW9ucz10cyZwYWdlRXh0ZW5zaW9ucz1qc3gmcGFnZUV4dGVuc2lvbnM9anMmYmFzZVBhdGg9JmFzc2V0UHJlZml4PSZuZXh0Q29uZmlnT3V0cHV0PSZwcmVmZXJyZWRSZWdpb249Jm1pZGRsZXdhcmVDb25maWc9ZTMwJTNEIQ%3D%3D&nextConfigOutput=&preferredRegion=&middlewareConfig=e30%3D!

../node_modules/.pnpm/[email protected]/node_modules/typechat/dist/interactive.js:8:35
Module not found: Can't resolve 'readline/promises'

https://nextjs.org/docs/messages/module-not-found

Import trace for requested module:
../node_modules/.pnpm/[email protected]/node_modules/typechat/dist/index.js
./app/api/typeChat/route.ts
../node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/next/dist/build/webpack/loaders/next-edge-app-route-loader/index.js?absolutePagePath=private-next-app-dir%2Fapi%2FtypeChat%2Froute.ts&page=%2Fapi%2FtypeChat%2Froute&appDirLoader=bmV4dC1hcHAtbG9hZGVyP25hbWU9YXBwJTJGYXBpJTJGdHlwZUNoYXQlMkZyb3V0ZSZwYWdlPSUyRmFwaSUyRnR5cGVDaGF0JTJGcm91dGUmcGFnZVBhdGg9cHJpdmF0ZS1uZXh0LWFwcC1kaXIlMkZhcGklMkZ0eXBlQ2hhdCUyRnJvdXRlLnRzJmFwcERpcj0lMkZob21lJTJGcnVubmVyJTJGd29yayUyRmFzay1jb2RlYmFzZSUyRmFzay1jb2RlYmFzZSUyRnNyYyUyRmFwcCZhcHBQYXRocz0lMkZhcGklMkZ0eXBlQ2hhdCUyRnJvdXRlJnBhZ2VFeHRlbnNpb25zPXRzeCZwYWdlRXh0ZW5zaW9ucz10cyZwYWdlRXh0ZW5zaW9ucz1qc3gmcGFnZUV4dGVuc2lvbnM9anMmYmFzZVBhdGg9JmFzc2V0UHJlZml4PSZuZXh0Q29uZmlnT3V0cHV0PSZwcmVmZXJyZWRSZWdpb249Jm1pZGRsZXdhcmVDb25maWc9ZTMwJTNEIQ%3D%3D&nextConfigOutput=&preferredRegion=&middlewareConfig=e30%3D!


> Build failed because of webpack errors
 ELIFECYCLE  Command failed with exit code 1.
Error: Process completed with exit code 1.

Cannot find module 'readline/promises' error in production

I was using TypeChat in a project I worked on and it worked fine during development. The deployed version was raising the following error from the library: Cannot find module 'readline/promises'.

Could it be that it's not bundled properly, or doesn't work in a lambda environment for some reason?

I need some help here.

Should FunctionCall be part of the Expression?

The following schema is introduced in #20.

// A program consists of a sequence of expressions that are evaluated in order.
export type Program = {
    "@steps": Expression[];
}

// An expression is a JSON value, a function call, or a reference to the result of a preceding expression.
export type Expression = JsonValue | FunctionCall | ResultReference;

// A JSON value is a string, a number, a boolean, null, an object, or an array. Function calls and result
// references can be nested in objects and arrays.
export type JsonValue = string | number | boolean | null | { [x: string]: Expression } | Expression[];

// A function call specifies a function name and a list of argument expressions. Arguments may contain
// nested function calls and result references.
export type FunctionCall = {
    // Name of the function
    "@func": string;
    // Arguments for the function
    "@args": Expression[];
};

// A result reference represents the value of an expression from a preceding step.
export type ResultReference = {
    // Index of the previous expression in the "@steps" array
    "@ref": number;
};

However, I don't think FunctionCall should be part of the Expression type, because { [x: string]: Expression } is part of JsonValue. The output of the GPT model is "@steps": Expression[], where the execution order is determined; but when we allow a FunctionCall inside an object-like value, the execution order is undetermined. Example:

with FunctionCall

{
  "@steps": [
    {
      "@func": "func1",
      "@args": [
        {
          "a": {
            "@func": "func2",
            "@args": []
          },
          "b": {
            "@func": "func3",
            "@args": []
          }
        }
      ]
    }
  ]
}

we don't know the execution order of func2 vs func3

with Reference

{
  "@steps": [
    {
      "@func": "func3",
      "@args": []
    },
    {
      "@func": "func2",
      "@args": []
    },
    {
      "@func": "func1",
      "@args": [
        {
          "a": {
            "@ref": 0
          },
          "b": {
            "@ref": 1
          }
        }
      ]
    }
  ]
}

func3 is executed first, then func2.


I'm not sure in what scenarios the GPT model can produce @steps with nested FunctionCalls; in the tests I ran, only ResultReference was produced. Is the GPT model able to understand the difference between a reference to the result of a previous function call and a call that executes every time it is evaluated?

I think it would be 'safer' to design the generic schema to be as deterministic as possible. For example, in a financial application, the execution order matters for state mutations of a user's funds.

Therefore, a stricter schema version is:

export type Program = {
    "@steps": FunctionCall[];
}

export type Expression = JsonValue | ResultReference;

export type JsonValue = string | number | boolean | null | { [x: string]: Expression } | Expression[];

export type FunctionCall = {
    "@func": string;
    "@args": Expression[];
};

export type ResultReference = {
    "@ref": number;
};

@ahejlsberg @steveluc, does this make sense? Have you seen any example that includes a FunctionCall in an Expression?

Streaming Support

Streaming is a very critical part of our app. Are there any plans to support it somehow?

Originally posted by @cb-eli in #68

Compare to OpenAI's function calling

I'm wondering how this compares to OpenAI's function calling, as that is also made "to more reliably get structured data back from the model."
I see that TypeChat aims to be model-agnostic and lets me pass in TS types.
How does the quality of the answers compare? OpenAI fine-tuned the models to work for function calling. Are TypeChat results as reliable? Or could the two be combined?

A network error occurred while running the example

cause: Error: connect ETIMEDOUT 199.59.150.49:443
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16) {
errno: -4039,
code: 'ETIMEDOUT',
syscall: 'connect',
address: '199.59.150.49',
port: 443

Due to my use of a VPN, I have set HTTP_PROXY, but it doesn't seem to take effect, and the code still reports errors.

How does TypeChat output the time for searching?

The model I use is gpt-3.5-turbo. I asked it to return data for the last seven days, which I use for searching.
I expected it to use my current time, but the time it gave me seemed to be from 2021.
Is this a problem with my older model version?

GPT response: [screenshots]

Schema.ts: [screenshot]

Add `multiline` option to `processRequests`

/**
 * A request processor for interactive input or input from a text file. If an input file name is specified,
 * the callback function is invoked for each line in file. Otherwise, the callback function is invoked for
 * each line of interactive input until the user types "quit" or "exit".
 * @param interactivePrompt Prompt to present to user.
 * @param inputFileName Input text file name, if any.
 * @param processRequest Async callback function that is invoked for each interactive input or each line in text file.
 */
export declare function processRequests(interactivePrompt: string, inputFileName: string | undefined, processRequest: (request: string) => Promise<void>): Promise<void>;

If I didn't miss anything, there is currently no possibility to pass multiline text to processRequests. Since prompts can be written over multiple lines, it would be convenient to have that option.
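A sketch of what a multiline mode might look like (hypothetical, not part of the current API): keep reading lines until the user enters a blank line, then join them into one request.

import readline from "readline/promises";
import { stdin as input, stdout as output } from "process";

async function readMultilineRequest(prompt: string): Promise<string> {
    const rl = readline.createInterface({ input, output });
    const lines: string[] = [];
    let line = await rl.question(prompt);
    while (line !== "") {
        lines.push(line);
        line = await rl.question("... ");
    }
    rl.close();
    return lines.join("\n");
}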
