
morphic's People

Contributors

albertdbio, andrewginns, charl-kruger, ckt1031, cosark, dhrishp, ggallon, iojcde, kuizuo, leerob, lizhe2004, miurla, nil2000, peperunas, sammcj, superlbr, winniesi, wuhei


morphic's Issues

Add remark-gfm plugin for Markdown table rendering support

Issue Description

Our current Markdown rendering setup does not properly display tables using GitHub Flavored Markdown (GFM) syntax, leading to decreased readability in our documentation.

Solution

By integrating the remark-gfm plugin, we can support GFM table syntax during Markdown rendering. This will ensure tables are displayed correctly, improving the organization and presentation of information in our documentation.

https://github.com/remarkjs/react-markdown?tab=readme-ov-file#use-a-plugin
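A minimal sketch of the proposed fix, assuming the answer is rendered through react-markdown as the linked docs describe (the component name here is hypothetical, not Morphic's actual component):

import ReactMarkdown from 'react-markdown'
import remarkGfm from 'remark-gfm'

// Render Markdown with GFM extensions enabled (tables, strikethrough, task lists)
export function GfmMarkdown({ content }: { content: string }) {
  return <ReactMarkdown remarkPlugins={[remarkGfm]}>{content}</ReactMarkdown>
}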

API error causes skeleton to remain and Related section to only show titles

When an API error occurs during a tool call, the following issues arise:

  • The skeleton of the page remains visible, indicating that the content failed to load properly.
  • The Related section, which is supposed to display relevant information, shows only the titles without any accompanying content.

[screenshot attached]

After deploying with Vercel, it looks like this

It seems that only the GPT text generation is being used (I'm using Moonshot's Kimi); the search doesn't seem to be used at all, even though all the environment variables have been added. Please help!
[screenshot attached]

Can I deploy this to Cloudflare?

I don't have a Vercel account. Can I use Cloudflare as an alternative? If I deploy this project to Cloudflare Pages, how do I set the environment variables?
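Not an official answer, but a sketch of how this could look on Cloudflare Pages, assuming the app is built with @cloudflare/next-on-pages and a recent Wrangler that supports Pages configuration. Variable names mirror .env.local; actual secrets are better set in the Pages dashboard (Settings > Environment variables) than committed to a file:

# wrangler.toml — hypothetical sketch
name = "morphic"
pages_build_output_dir = ".vercel/output/static"

[vars]
TAVILY_API_KEY = "set-in-dashboard-instead"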

Question about Gemini and Vercel AI SDK

Hello @miurla @lizhe2004 @winniesi and community,
Instead of using OpenAI's keys, I wanted to experiment with Gemini (Google's LLM). According to the Vercel AI SDK, Google is a valid provider and can "create...language model objects that can be used with the generateText, streamText, generateObject, and streamObject AI functions". Despite this, upon integration, an error comes up saying the model does not have a default object generation mode. I am fairly sure that I used Gemini and the Vercel AI SDK correctly, so what is the issue? Can someone point out the problem and a solution? This is my sample code for app/lib/agents/inquire.tsx:

import { Copilot } from '@/components/copilot'
import { createStreamableUI, createStreamableValue } from 'ai/rsc'
import { ExperimentalMessage, experimental_streamObject } from 'ai'
import { PartialInquiry, inquirySchema } from '@/lib/schema/inquiry'
// import { google } from 'ai/google'
import { Google } from '@ai-sdk/google'

const google = new Google({
  baseUrl: '',
  apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY
})

export async function inquire(
  // the generic parameter was stripped in the original paste; this is the usual type
  uiStream: ReturnType<typeof createStreamableUI>,
  messages: ExperimentalMessage[]
) {
  const objectStream = createStreamableValue<PartialInquiry>()
  // the JSX argument was stripped in the original paste
  uiStream.update(<Copilot inquiry={objectStream.value} />)

  let finalInquiry: PartialInquiry = {}
  await experimental_streamObject({
    // model: openai.chat('gpt-4-turbo-preview'),
    model: google.generativeAI('models/gemini-pro') // rest of code not shown

From this code above, this error is shown in my console in VS Code when the website is used in development:

Error: Model does not have a default object generation mode.
    at experimental_generateObject (webpack-internal:///(rsc)/./node_modules/ai/dist/index.mjs:614:13)
    at taskManager (webpack-internal:///(rsc)/./lib/agents/task-manager.tsx:27:89)
    at processEvents (webpack-internal:///(rsc)/./app/action.tsx:55:91)
    at $$ACTION_0 (webpack-internal:///(rsc)/./app/action.tsx:104:5)
    at endpoint (webpack-internal:///(rsc)/./node_modules/next/dist/build/webpack/loaders/next-flight-action-entry-loader.js?actions=%5B%5B%22%2FUsers%2FKarthik%2FDownloads%2Fmorphic%2Fapp%2Faction.tsx%22%2C%5B%22%24%24ACTION_0%22%5D%5D%2C%5B%22%2FUsers%2FKarthik%2FDownloads%2Fmorphic%2Fnode_modules%2Fai%2Frsc%2Fdist%2Frsc-server.mjs%22%2C%5B%22%24%24ACTION_0%22%5D%5D%5D&__client_imported__=!:9:17)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async eval (webpack-internal:///(rsc)/./node_modules/ai/rsc/dist/rsc-server.mjs:1138:24)
    at async $$ACTION_0 (webpack-internal:///(rsc)/./node_modules/ai/rsc/dist/rsc-server.mjs:1134:12)
    at async eval (webpack-internal:///(ssr)/./node_modules/next/dist/esm/server/app-render/action-handler.js:316:31)
    at async handleAction (webpack-internal:///(ssr)/./node_modules/next/dist/esm/server/app-render/action-handler.js:245:9)
⨯ unhandledRejection: Error: Model does not have a default object generation mode.
    [same stack trace as above, printed twice more]
The streamable UI has been slow to update. This may be a bug or a performance issue or you forgot to call .done().
The streamable UI has been slow to update. This may be a bug or a performance issue or you forgot to call .done().

Sorry for any inconvenience. Thanks so much for your time and help! I really appreciate it! Have a wonderful day!
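For what it's worth, the error text suggests the Gemini model did not declare a default object generation mode in this SDK version, so experimental_streamObject fails in its default 'auto' mode. A minimal sketch of a workaround, assuming the mode option of that SDK version applies here, is to force a mode explicitly:

let finalInquiry: PartialInquiry = {}
await experimental_streamObject({
  model: google.generativeAI('models/gemini-pro'),
  // 'auto' requires the model to declare a default object generation mode;
  // forcing 'json' (or 'tool', if the model supports tool calls) avoids the error
  mode: 'json',
  system: systemPrompt, // same system prompt as the OpenAI path (name hypothetical)
  messages,
  schema: inquirySchema
})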

should it export edge runtime from client component?

Does it make sense to use the edge runtime in a client component? I'm genuinely curious, as it doesn't seem to make intuitive sense, but I cannot find any explicit mention of it in the Next.js docs.

// app/page.tsx
'use client'

export const runtime = 'edge'
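For what it's worth, route segment config exports are statically analyzed at build time, so the export is picked up even from a 'use client' page. If it still feels out of place, the same config can live in the nearest server component, e.g. the route's layout (a sketch, not an official recommendation):

// app/layout.tsx (server component)
export const runtime = 'edge'

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>{children}</body>
    </html>
  )
}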

Warning from Vercel AI SDK: Slow UI updates with certain models

The AI SDK is displaying a warning message: The streamable UI has been slow to update. This may be a bug or a performance issue or you forgot to call .done().

However, this warning does not appear when using faster models like gpt-3.5-turbo, suggesting that the issue is not related to missing .done() calls. The problem seems to be on the SDK side, although further research is needed to confirm the root cause. This issue is being created to document the warning as a known issue, making it visible to other developers. More investigation will be required to identify the underlying problem and find a suitable solution.

Google Custom Search

Hello!

Thank you for creating this awesome web application. Can we have an option to use Google Custom Search API?
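For reference, the Google Custom Search JSON API is a plain GET endpoint, so such an option could be wired in roughly as follows (a sketch; GOOGLE_API_KEY and GOOGLE_CSE_ID are hypothetical variable names, with the key and search engine created in the Google Cloud console):

// lib/google-search.ts — hypothetical sketch
export async function googleSearch(query: string, maxResults = 5) {
  const url =
    'https://www.googleapis.com/customsearch/v1' +
    `?key=${process.env.GOOGLE_API_KEY}` +
    `&cx=${process.env.GOOGLE_CSE_ID}` +
    `&q=${encodeURIComponent(query)}` +
    `&num=${maxResults}`
  const res = await fetch(url)
  if (!res.ok) throw new Error(`Google CSE error: ${res.status}`)
  const data = await res.json()
  // Each item has { title, link, snippet }
  return data.items ?? []
}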

pplx-ai is conveniently OpenAI client-compatible

What does "Only writers can set a specific model" mean?
I use pplx-ai, my .env.local file is as follows:

# Used to set the base URL path for OpenAI API requests.
# If you need to set a BASE URL, uncomment and set the following:
# OPENAI_API_BASE=

# Used to set the model for OpenAI API requests.
# If not set, the default is gpt-4-turbo.
# OPENAI_API_MODEL='gpt-4-turbo'

# OpenAI API key retrieved here: https://platform.openai.com/api-keys
# OPENAI_API_KEY=

# Tavily API Key retrieved here: https://app.tavily.com/home
# TAVILY_API_KEY=

# Only writers can set a specific model. It must be compatible with the OpenAI API.
USE_SPECIFIC_API_FOR_WRITER=true
SPECIFIC_API_BASE=https://api.perplexity.ai
SPECIFIC_API_KEY=pplx-xxx
SPECIFIC_API_MODEL=sonar-small-online

Error Message: unhandledRejection: LoadAPIKeyError [AI_LoadAPIKeyError]: OpenAI API key is missing. Pass it using the 'apiKey' parameter or the OPENAI_API_KEY environment variable.
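The error itself is the clue: OPENAI_API_KEY is commented out, and the message suggests the non-writer agents still construct the default OpenAI client, so that key must be set even when the writer uses the specific API. If the intent is to route everything through an OpenAI-compatible endpoint instead, a sketch of that (using @ai-sdk/openai's createOpenAI, which may differ from the SDK version pinned in this repo):

import { createOpenAI } from '@ai-sdk/openai'

// Point the OpenAI-compatible client at Perplexity instead of api.openai.com
const pplx = createOpenAI({
  baseURL: 'https://api.perplexity.ai',
  apiKey: process.env.SPECIFIC_API_KEY
})

const model = pplx('sonar-small-online')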

Issue deploying to Vercel possibly related to Next.js version (405 error)

Reproduction

  1. Clone the repo
  2. Run vercel
  3. Visit the url where it was deployed
  4. Perform a search

Research

Previous deployments worked just fine. Even when deploying from the official Vercel template it will still give you the 405 error.

I've attached the screenshot and a url to a fresh deployment for verification.

https://morphic-fourofive-error.vercel.app/

[screenshot attached]

Possible solution

@miurla, can you verify which version of Next.js is working?
May we also update the bun lockfile to use more specific versions instead of latest?

Thanks for your work @miurla!
Hope this prevents further deployment issues.
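On the lockfile point, pinning exact versions in package.json would make deployments reproducible (the version numbers below are illustrative only, not known-good):

{
  "dependencies": {
    "next": "14.1.4",
    "ai": "3.0.19"
  }
}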

Latex rendering for AI messages

Hi, Morphic team!
I'm using the Morphic template for a math tutor chatbot, and I'm not sure how to ensure that math equations from the LLM (written in Markdown LaTeX) are rendered correctly.
For example:
[screenshot attached]

I would really appreciate some help on how to get these responses rendered correctly!
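A common approach, assuming the answer component already renders through react-markdown, is to add remark-math and rehype-katex (the component name here is hypothetical):

import ReactMarkdown from 'react-markdown'
import remarkMath from 'remark-math'
import rehypeKatex from 'rehype-katex'
import 'katex/dist/katex.min.css' // KaTeX styles must be loaded once

export function MathMarkdown({ content }: { content: string }) {
  // remark-math parses $...$ and $$...$$ delimiters; rehype-katex renders them
  return (
    <ReactMarkdown remarkPlugins={[remarkMath]} rehypePlugins={[rehypeKatex]}>
      {content}
    </ReactMarkdown>
  )
}

One caveat: many models emit \( ... \) delimiters rather than $ ... $, so a small preprocessing step may still be needed.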

[Feature] Add a login or password please

I want to access it outside the LAN and share it with my family, but GPT-4 is expensive. I tested GPT-3.5 and the Claude-3 model proxy compatible with the OpenAI API, but couldn't effectively implement the agent.
So...
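A minimal sketch of what this could look like with Next.js middleware and HTTP Basic Auth (BASIC_AUTH_USER and BASIC_AUTH_PASSWORD are hypothetical variable names; this is not an existing Morphic feature):

// middleware.ts — hypothetical sketch
import { NextRequest, NextResponse } from 'next/server'

export function middleware(req: NextRequest) {
  const auth = req.headers.get('authorization')
  if (auth?.startsWith('Basic ')) {
    // Credentials arrive base64-encoded as "user:password"
    const [user, pass] = atob(auth.slice(6)).split(':')
    if (
      user === process.env.BASIC_AUTH_USER &&
      pass === process.env.BASIC_AUTH_PASSWORD
    ) {
      return NextResponse.next()
    }
  }
  return new NextResponse('Authentication required', {
    status: 401,
    headers: { 'WWW-Authenticate': 'Basic realm="morphic"' }
  })
}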

Make the search box a textarea.

This change would be relatively minor but would fall in line with Google, which ditched the input field for a textarea a long time ago. It allows for longer, multi-line inputs, doesn't suppress certain characters and formatting, and makes typing a more detailed search easier on a phone keyboard. There are actually a lot of other compelling reasons why they switched to a textarea, for example better handling of accessibility options and being able to build better interfaces on top of it.
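A sketch of the change (component and prop names hypothetical): swap the input for a textarea and keep Enter-to-submit, with Shift+Enter for new lines:

'use client'
import { useState } from 'react'

export function SearchBox({ onSubmit }: { onSubmit: (query: string) => void }) {
  const [input, setInput] = useState('')
  return (
    <textarea
      rows={1}
      value={input}
      placeholder="Ask a question..."
      onChange={e => setInput(e.target.value)}
      onKeyDown={e => {
        // Enter submits; Shift+Enter inserts a newline
        if (e.key === 'Enter' && !e.shiftKey) {
          e.preventDefault()
          onSubmit(input)
          setInput('')
        }
      }}
    />
  )
}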

[FEATURE] support for Huggingface

Hi,
Does anyone know how I can add support for Hugging Face instead of OpenAI? (The OpenAI API is very expensive.) If anyone could help, it would be great.
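Not tested against this repo, but the Vercel AI SDK ships a Hugging Face streaming helper, so a route along these lines is the usual shape (model name illustrative; note that Morphic's agents also rely on tool/function calls, which most hosted open models do not support):

import { HfInference } from '@huggingface/inference'
import { HuggingFaceStream, StreamingTextResponse } from 'ai'

const hf = new HfInference(process.env.HUGGINGFACE_API_KEY)

export async function POST(req: Request) {
  const { prompt } = await req.json()
  // Stream tokens from a hosted open model instead of OpenAI
  const response = hf.textGenerationStream({
    model: 'mistralai/Mistral-7B-Instruct-v0.2',
    inputs: prompt,
    parameters: { max_new_tokens: 500 }
  })
  return new StreamingTextResponse(HuggingFaceStream(response))
}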

TypeError after vercel deployment.

[screenshots attached]

Here's the error message:
TypeError: Right-hand side of 'instanceof' is not an object
at (node_modules/ai/node_modules/@ai-sdk/provider-utils/dist/index.mjs:33:0)
at (node_modules/ai/dist/index.mjs:302:20)

The home page displays fine, but after searching for content it keeps getting stuck on loading. Do I need to add any more variables?

Images, results, and sources are not working

The only section I get is Answer, and the sources it cites do not exist.

Config:

# Used to set the base URL path for OpenAI API requests.
# If you need to set a BASE URL, uncomment and set the following:
OPENAI_API_BASE=http://192.168.73.78:5001/v1

# Used to set the model for OpenAI API requests.
# If not set, the default is gpt-4-turbo.
# OPENAI_API_MODEL='gpt-4-turbo'

# OpenAI API key retrieved here: https://platform.openai.com/api-keys
OPENAI_API_KEY=

# Tavily API Key retrieved here: https://app.tavily.com/home
TAVILY_API_KEY=<API key>

# Only writers can set a specific model. It must be compatible with the OpenAI API.
#USE_SPECIFIC_API_FOR_WRITER=true
#SPECIFIC_API_BASE=http://192.168.73.78:5001
#SPECIFIC_API_KEY=
#SPECIFIC_API_MODEL=

OS: debian 12 (proxmox VE LXC)
LLM runner: koboldcpp (https://github.com/LostRuins/koboldcpp)
LLM: dolphin-2-9-llama3-8B
tavily pricing: free

Why is the API not pulling requests?

I used the OpenAI and Tavily APIs, but the results are not coming through. I installed from the Vercel boilerplate provided in the template section and have fulfilled all the dependencies.

Does it have to be the API key for gpt-4-turbo?

AI_APICallError: The model `gpt-4-turbo` does not exist or you do not have access to it.
    at (node_modules/ai/openai/dist/index.mjs:265:0)
    at (node_modules/ai/openai/dist/index.mjs:176:0)
    at (node_modules/ai/openai/dist/index.mjs:609:0)
    at (node_modules/ai/dist/index.mjs:405:0)
    at (node_modules/ai/dist/index.mjs:1416:29)
    at (lib/agents/researcher.tsx:38:17)
    at (app/action.tsx:63:31) {
  name: 'AI_APICallError',
  url: 'https://api.openai.com/v1/chat/completions',
  requestBodyValues: {
    model: 'gpt-4-turbo',
    logit_bias: undefined,
    user: undefined,
    max_tokens: 2500,
    temperature: 0,
    top_p: undefined,
    frequency_penalty: 0,
    presence_penalty: 0,
    seed: undefined,
    messages: [
      {
        role: 'system',
        content: "As a professional search expert, you possess the ability to search for any information on the web. For each user query, utilize the search results to their fullest potential to provide additional information and assistance in your response. If there are any images relevant to your answer, be sure to include them as well. Aim to directly address the user's question, augmenting your response with insights gleaned from the search results. Whenever quoting or referencing information from a specific URL, always cite the source URL explicitly."
      },
      { role: 'user', content: [ { type: 'text', text: '{"input":"Why is Nvidia growing rapidly?"}' } ] }
    ],
    tools: [
      {
        type: 'function',
        function: {
          name: 'search',
          description: 'Search the web for information',
          parameters: {
            type: 'object',
            properties: {
              query: { type: 'string', description: 'The query to search for' },
              max_results: { type: 'number', maximum: 20, default: 5, description: 'The maximum number of results to return' },
              search_depth: { type: 'string', enum: [ 'basic', 'advanced' ], default: 'basic', description: 'The depth of the search' }
            },
            required: [ 'query' ],
            additionalProperties: false,
            $schema: 'http://json-schema.org/draft-07/schema#'
          }
        }
      }
    ],
    stream: true
  },
  statusCode: 404,
  responseBody: '{ "error": { "message": "The model `gpt-4-turbo` does not exist or you do not have access to it.", "type": "invalid_request_error", "param": null, "code": "model_not_found" } }',
  cause: undefined,
  isRetryable: false,
  data: {
    error: {
      message: 'The model `gpt-4-turbo` does not exist or you do not have access to it.',
      type: 'invalid_request_error',
      param: null,
      code: 'model_not_found'
    }
  }
}
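It does not have to be gpt-4-turbo. Per the .env comments quoted in other issues here, the model is configurable, so an account without gpt-4-turbo access can point at a model it does have (it still needs to support tool calls):

# .env.local
OPENAI_API_MODEL='gpt-3.5-turbo'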

Add support for Claude 3 Opus/Haiku models with streaming API

Currently, the AI SDK includes a provider for Claude 3 models with tool-use support. However, the streaming API for these models is not yet available. To fully integrate Claude 3 Opus/Haiku models into our application, we need to wait for an update from the AI provider that includes streaming API support.

Once the streaming API is available, we will need to update our implementation to use the new functionality. This will allow us to provide a seamless user experience with real-time updates and faster response times when using Claude 3 Opus/Haiku models.

References:

  • Claude / Tool use
  • Vercel AI SDK / Claude

Something other than Tavily

Tavily gives you only 1,000 requests a month, and after that the base plan is a staggering $100 a month.

Perhaps using searxng would be a better option.
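For reference, a self-hosted SearXNG instance exposes a JSON endpoint (when the instance enables the json format in its settings), so a drop-in replacement for the search function could look roughly like this (SEARXNG_URL is a hypothetical variable name):

// Hypothetical sketch of a SearXNG-backed search
export async function searxngSearch(query: string) {
  const base = process.env.SEARXNG_URL // e.g. http://localhost:8080
  const res = await fetch(
    `${base}/search?q=${encodeURIComponent(query)}&format=json`
  )
  if (!res.ok) throw new Error(`SearXNG error: ${res.status}`)
  const data = await res.json()
  // Each result has { title, url, content }
  return data.results
}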

[Feature] Add chat history

Save history and make it loadable. Display a list in the sidebar and restore the UI from the selected data.

related: #7
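A minimal client-side sketch of the idea (localStorage only; a real implementation would presumably persist server-side and restore the AI/UI state from the selected entry):

// Hypothetical sketch: save and restore chat sessions from localStorage
interface ChatHistoryEntry {
  id: string
  title: string
  messages: unknown[] // serialized chat state
}

export function saveChat(entry: ChatHistoryEntry) {
  const all: ChatHistoryEntry[] = JSON.parse(
    localStorage.getItem('chats') ?? '[]'
  )
  const rest = all.filter(c => c.id !== entry.id)
  localStorage.setItem('chats', JSON.stringify([entry, ...rest]))
}

export function loadChats(): ChatHistoryEntry[] {
  return JSON.parse(localStorage.getItem('chats') ?? '[]')
}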

Feature Request: Additional Configuration Options

Hello,

Firstly, I want to commend you on creating an excellent project! I'm hoping to see the following features implemented:

  1. An option in the configuration to disable the image and source content panels.

  2. The ability to configure alternative search engines like Google, especially since I do not require image search capabilities.

Thank you for considering these enhancements!

URL Error / 'ERR_INVALID_URL'

I am receiving this error:

⨯ unhandledRejection: Error: URL is malformed "/chat/completions". Please use only absolute URLs - https://nextjs.org/docs/messages/middleware-relative-urls
at validateURL (/Users/waltgrace/Desktop/morphic/node_modules/next/dist/server/web/utils.js:124:15)
at new context.Request (/Users/waltgrace/Desktop/morphic/node_modules/next/dist/server/web/sandbox/context.js:344:44)
... 32 lines matching cause stack trace ...
at async eval (webpack-internal:///(ssr)/./node_modules/next/dist/esm/server/response-cache/web.js:51:36) {
[cause]: TypeError: Invalid URL
at new URL (node:internal/url:775:36)
at validateURL (/Users/waltgrace/Desktop/morphic/node_modules/next/dist/server/web/utils.js:122:23)
at new context.Request (/Users/waltgrace/Desktop/morphic/node_modules/next/dist/server/web/sandbox/context.js:344:44)
at fetch (webpack-internal:///(rsc)/./node_modules/next/dist/compiled/react/cjs/react.react-server.development.js:216:81)
at doOriginalFetch (webpack-internal:///(rsc)/./node_modules/next/dist/esm/server/lib/patch-fetch.js:368:24)
at eval (webpack-internal:///(rsc)/./node_modules/next/dist/esm/server/lib/patch-fetch.js:515:24)
at eval (webpack-internal:///(rsc)/./node_modules/next/dist/esm/server/lib/trace/tracer.js:115:36)
at NoopContextManager.with (webpack-internal:///(rsc)/./node_modules/next/dist/compiled/@opentelemetry/api/index.js:2:7062)
at ContextAPI.with (webpack-internal:///(rsc)/./node_modules/next/dist/compiled/@opentelemetry/api/index.js:2:518)
at NoopTracer.startActiveSpan (webpack-internal:///(rsc)/./node_modules/next/dist/compiled/@opentelemetry/api/index.js:2:18108)
at ProxyTracer.startActiveSpan (webpack-internal:///(rsc)/./node_modules/next/dist/compiled/@opentelemetry/api/index.js:2:18869)
at eval (webpack-internal:///(rsc)/./node_modules/next/dist/esm/server/lib/trace/tracer.js:97:103)
at NoopContextManager.with (webpack-internal:///(rsc)/./node_modules/next/dist/compiled/@opentelemetry/api/index.js:2:7062)
at ContextAPI.with (webpack-internal:///(rsc)/./node_modules/next/dist/compiled/@opentelemetry/api/index.js:2:518)
at NextTracerImpl.trace (webpack-internal:///(rsc)/./node_modules/next/dist/esm/server/lib/trace/tracer.js:97:28)
at patched (webpack-internal:///(rsc)/./node_modules/next/dist/esm/server/lib/patch-fetch.js:161:75)
at postToApi (webpack-internal:///(rsc)/./node_modules/ai/openai/dist/index.mjs:341:28)
at postJsonToApi (webpack-internal:///(rsc)/./node_modules/ai/openai/dist/index.mjs:315:7)
at OpenAIChatLanguageModel.doGenerate (webpack-internal:///(rsc)/./node_modules/ai/openai/dist/index.mjs:783:28)
at eval (webpack-internal:///(rsc)/./node_modules/ai/dist/index.mjs:840:21)
at _retryWithExponentialBackoff (webpack-internal:///(rsc)/./node_modules/ai/dist/index.mjs:706:18)
at eval (webpack-internal:///(rsc)/./node_modules/ai/dist/index.mjs:695:25)
at experimental_generateObject (webpack-internal:///(rsc)/./node_modules/ai/dist/index.mjs:839:36)
at taskManager (webpack-internal:///(rsc)/./lib/agents/task-manager.tsx:18:89)
at processEvents (webpack-internal:///(rsc)/./app/action.tsx:55:91)
at $$ACTION_0 (webpack-internal:///(rsc)/./app/action.tsx:104:5)
at endpoint (webpack-internal:///(rsc)/./node_modules/next/dist/build/webpack/loaders/next-flight-action-entry-loader.js?actions=%5B%5B%22%2FUsers%2Fwaltgrace%2FDesktop%2Fmorphic%2Fapp%2Faction.tsx%22%2C%5B%22%24%24ACTION_0%22%5D%5D%2C%5B%22%2FUsers%2Fwaltgrace%2FDesktop%2Fmorphic%2Fnode_modules%2Fai%2Frsc%2Fdist%2Frsc-server.mjs%22%2C%5B%22%24%24ACTION_0%22%5D%5D%5D&client_imported=!:9:17)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async eval (webpack-internal:///(rsc)/./node_modules/ai/rsc/dist/rsc-server.mjs:1146:24)
at async $$ACTION_0 (webpack-internal:///(rsc)/./node_modules/ai/rsc/dist/rsc-server.mjs:1142:12)
at async eval (webpack-internal:///(ssr)/./node_modules/next/dist/esm/server/app-render/action-handler.js:316:31)
at async handleAction (webpack-internal:///(ssr)/./node_modules/next/dist/esm/server/app-render/action-handler.js:245:9)
at async renderToHTMLOrFlightImpl (webpack-internal:///(ssr)/./node_modules/next/dist/esm/server/app-render/app-render.js:873:33)
at async doRender (webpack-internal:///(ssr)/./node_modules/next/dist/esm/server/base-server.js:1374:30)
at async cacheEntry.responseCache.get.routeKind (webpack-internal:///(ssr)/./node_modules/next/dist/esm/server/base-server.js:1499:28)
at async eval (webpack-internal:///(ssr)/./node_modules/next/dist/esm/server/response-cache/web.js:51:36) {
code: 'ERR_INVALID_URL',
input: '/chat/completions'
}

The streamable UI has been slow to update. This may be a bug or a performance issue or you forgot to call .done().
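A likely cause: the input '/chat/completions' is a relative URL, which suggests the OpenAI base URL was left empty or set without a scheme and host. If OPENAI_API_BASE is used, it must be absolute, e.g.:

# .env.local — the base URL must include scheme and host
OPENAI_API_BASE=https://api.openai.com/v1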

RetryError [AI_RetryError]: Failed after 3 attemps. Last error: Cannot connect to API: read ECONNRESET

When I run the server, it throws these errors:

RetryError [AI_RetryError]: Failed after 3 attemps. Last error: Cannot connect to API: read ECONNRESET
    at _retryWithExponentialBackoff (webpack-internal:///(rsc)/./node_modules/ai/dist/index.mjs:718:13)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async experimental_generateObject (webpack-internal:///(rsc)/./node_modules/ai/dist/index.mjs:839:30)
    at async taskManager (webpack-internal:///(rsc)/./lib/agents/task-manager.tsx:13:20)
    at async processEvents (webpack-internal:///(rsc)/./app/action.tsx:55:29) {
  reason: 'maxRetriesExceeded',
  errors: [
    APICallError [AI_APICallError]: Cannot connect to API: read ECONNRESET
        at postToApi (webpack-internal:///(rsc)/./node_modules/ai/openai/dist/index.mjs:398:15)
        at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
        at async OpenAIChatLanguageModel.doGenerate (webpack-internal:///(rsc)/./node_modules/ai/openai/dist/index.mjs:783:22)
        at async _retryWithExponentialBackoff (webpack-internal:///(rsc)/./node_modules/ai/dist/index.mjs:706:12)
        at async experimental_generateObject (webpack-internal:///(rsc)/./node_modules/ai/dist/index.mjs:839:30)
        at async taskManager (webpack-internal:///(rsc)/./lib/agents/task-manager.tsx:13:20)
        at async processEvents (webpack-internal:///(rsc)/./app/action.tsx:55:29) {
      url: 'https://api.openai.com/v1/chat/completions',
      requestBodyValues: [Object],
      statusCode: undefined,
      responseBody: undefined,
      cause: [Error],
      isRetryable: true,
      data: undefined
    },
lastError: APICallError [AI_APICallError]: Cannot connect to API: read ECONNRESET
      at postToApi (webpack-internal:///(rsc)/./node_modules/ai/openai/dist/index.mjs:398:15)
      at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      at async OpenAIChatLanguageModel.doGenerate (webpack-internal:///(rsc)/./node_modules/ai/openai/dist/index.mjs:783:22)
      at async _retryWithExponentialBackoff (webpack-internal:///(rsc)/./node_modules/ai/dist/index.mjs:706:12)
      at async experimental_generateObject (webpack-internal:///(rsc)/./node_modules/ai/dist/index.mjs:839:30)
      at async taskManager (webpack-internal:///(rsc)/./lib/agents/task-manager.tsx:13:20)
      at async processEvents (webpack-internal:///(rsc)/./app/action.tsx:55:29) {
    url: 'https://api.openai.com/v1/chat/completions',
    requestBodyValues: {
      model: 'gpt-3.5-turbo',
      logit_bias: undefined,
      user: undefined,
      max_tokens: undefined,
      temperature: undefined,
      top_p: undefined,
      frequency_penalty: undefined,
      presence_penalty: undefined,
      seed: undefined,
      messages: [Array],
      tool_choice: [Object],
      tools: [Array]
    },
    statusCode: undefined,
    responseBody: undefined,
    cause: Error: read ECONNRESET
        at TLSWrap.onStreamRead (node:internal/stream_base_commons:217:20)
        at TLSWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
      errno: -54,
      code: 'ECONNRESET',
      syscall: 'read'
    },
    isRetryable: true,
    data: undefined
  }

I have already changed the model to gpt-3.5.

[Feature] Claude-3 is not working properly with the OpenAI-compatible API.

Hello, I've used a proxy to make Claude-3 compatible with the OpenAI API. It works normally in Next-Chat, Lobe-Chat, and Bob, but it doesn't work in Morphic. Can you share the reason for this?

The error from the Vercel log is as follows:
AI_APICallError: Invalid JSON response
    at (node_modules/ai/openai/dist/index.mjs:312:0)
    at (node_modules/ai/openai/dist/index.mjs:197:0)
    at (node_modules/ai/openai/dist/index.mjs:578:0)
    at (node_modules/ai/dist/index.mjs:405:0)
    at (node_modules/ai/dist/index.mjs:538:0)
    at (lib/agents/task-manager.tsx:13:17)
    at (app/action.tsx:43:24) {
  name: 'AI_APICallError',
  url: 'https://likegpt.xx.xx.xx/api/v1/chat/completions',
  requestBodyValues: {
    model: 'anthropic.claude-3-sonnet-20240229-v1:0',
    logit_bias: undefined,
    user: undefined,
    max_tokens: undefined,
    temperature: undefined,
    top_p: undefined,
    frequency_penalty: undefined,
    presence_penalty: undefined,
    seed: undefined,
    messages: [
      {
        role: 'system',
        content: 'As a professional web researcher, your primary objective is to fully comprehend the user\'s query, conduct thorough web searches to gather the necessary information, and provide an appropriate response. To achieve this, you must first analyze the user\'s input and determine the optimal course of action. You have two options at your disposal: 1. "proceed": If the provided information is sufficient to address the query effectively, choose this option to proceed with the research and formulate a response. 2. "inquire": If you believe that additional information from the user would enhance your ability to provide a comprehensive response, select this option. You may present a form to the user, offering default selections or free-form input fields, to gather the required details. Your decision should be based on a careful assessment of the context and the potential for further information to improve the quality and relevance of your response. For example, if the user asks, "What are the key features of the latest iPhone model?", you may choose to "proceed" as the query is clear and can be answered effectively with web research alone. However, if the user asks, "What\'s the best smartphone for my needs?", you may opt to "inquire" and present a form asking about their specific requirements, budget, and preferred features to provide a more tailored recommendation. Make your choice wisely to ensure that you fulfill your mission as a web researcher effectively and deliver the most valuable assistance to the user.'
      },
      {
        role: 'user',
        content: [ { type: 'text', text: '{"input":"Is the Apple Vision Pro worth buying?"}' } ]
      }
    ],
    tool_choice: { type: 'function', function: { name: 'json' } },
    tools: [
      {
        type: 'function',
        function: {
          name: 'json',
          description: 'Respond with a JSON object.',
          parameters: {
            type: 'object',
            properties: { next: { type: 'string', enum: [ 'inquire', 'proceed' ] } },
            required: [ 'next' ],
            additionalProperties: false,
            $schema: 'http://json-schema.org/draft-07/schema#'
          }
        }
      }
    ]
  },
  statusCode: 200,
  responseBody: '{"id":"msg_01RJEqLhDqDqriF5XktcB8Vc","created":1713013011,"model":"anthropic.claude-3-sonnet-20240229-v1:0","system_fingerprint":"fp","choices":[{"index":0,"finish_reason":"tool_calls","message":{"role":"assistant","tool_calls":[{"id":"a1e851c7","type":"function","function":{"name":"json","arguments":"{\"next\": \"inquire\"}"}}]}}],"object":"chat.completion","usage":{"prompt_tokens":546,"completion_tokens":44,"total_tokens":590}}',
  cause: AI_TypeValidationError: Type validation failed.
    Value: {"id":"msg_01RJEqLhDqDqriF5XktcB8Vc","created":1713013011,"model":"anthropic.claude-3-sonnet-20240229-v1:0","system_fingerprint":"fp","choices":[{"index":0,"finish_reason":"tool_calls","message":{"role":"assistant","tool_calls":[{"id":"a1e851c7","type":"function","function":{"name":"json","arguments":"{\"next\": \"inquire\"}"}}]}}],"object":"chat.completion","usage":{"prompt_tokens":546,"completion_tokens":44,"total_tokens":590}}
    Error message: [
      {
        "code": "invalid_type",
        "expected": "string",
        "received": "undefined",
        "path": [ "choices", 0, "message", "content" ],
        "message": "Required"
      }
    ]
    at (node_modules/ai/openai/dist/index.mjs:73:0)
    at (node_modules/ai/openai/dist/index.mjs:116:28)
    at (vc/edge/function:2
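The type-validation detail points at the likely cause: the proxy's tool-call response omits message.content entirely, while the SDK's response schema requires the field. A response body shaped like this (a sketch; note the explicit content field) would likely pass validation:

{
  "choices": [
    {
      "index": 0,
      "finish_reason": "tool_calls",
      "message": {
        "role": "assistant",
        "content": "",
        "tool_calls": [
          {
            "id": "a1e851c7",
            "type": "function",
            "function": { "name": "json", "arguments": "{\"next\": \"inquire\"}" }
          }
        ]
      }
    }
  ]
}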

build error: Attempt to export a nullable value for "TextDecoderStream"

8uZ:/home/admin/code/morphic# bun next build
▲ Next.js 14.2.1

  • Environments: .env.local

Creating an optimized production build ...
✓ Compiled successfully
✓ Linting and checking validity of types
Collecting page data ...
[truncated minified webpack module dump]

error: Attempt to export a nullable value for "TextDecoderStream"
at defineProperties (/home/admin/code/morphic/node_modules/next/dist/compiled/edge-runtime/index.js:1:711683)
at addPrimitives (/home/admin/code/morphic/node_modules/next/dist/compiled/edge-runtime/index.js:1:710442)
at extend (/home/admin/code/morphic/node_modules/next/dist/compiled/edge-runtime/index.js:1:705243)
at new VM (/home/admin/code/morphic/node_modules/next/dist/compiled/edge-runtime/index.js:1:712550)
at new EdgeVM (/home/admin/code/morphic/node_modules/next/dist/compiled/edge-runtime/index.js:1:705155)
at /home/admin/code/morphic/node_modules/next/dist/server/web/sandbox/context.js:223:21

Build error occurred
822 | }
823 | const result = _routeMatcher(cleanedEntry);
824 | if (!result) {
825 |             throw new Error(`The provided path \`${cleanedEntry}\` does not match the page: \`${page}\`.`);
826 | }
827 | // If leveraging the string paths variant the entry should already be
^
error: Failed to collect page data for /
at /home/admin/code/morphic/node_modules/next/dist/build/utils.js:827:22

Collecting page data .
error: "next" exited with code 1

RAG Support and Plugin support more generally

This is a bit of a two-part question: is there a way to connect this to a RAG setup easily, or are there extensions to connect with RAG providers like Anyscale/Pinecone/Vertex/LangChain?

More broadly, is there an easy way to extend Morphic, like scaffolding for a plugin system or a boilerplate? If there isn't yet, could you identify an entry point for building extensions, modules, and settings? For example, if I wanted to create a section like "Images" and "Sources".

If a bit of work could be done to support extensions, I'd be interested in building plugins based on Morphic.

Examples of add-ons that Morphic would largely benefit from:

  • RAG lookup
  • Google-style pre-built answers saved in RAGs/DBs (or possibly even entire mini Markdown/HTML widgets)
  • RAG references (letting you see where in the source documents/resources a certain part of the RAG knowledge came from)
  • Image generation
  • Video sources (and video-to-transcription built on top of that)
  • Answer router (looks at links and references in the answer and then generates any subsections that fit the content type)
  • Git visualizer for editing code (a bit outside the standard use case here)

I think Morphic really has some crazy potential, and the extensibility to let the community explore how to build on top of it would benefit everybody a lot.
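Purely as a strawman for discussion, an extension entry point could be as small as a registry pairing an agent tool with a result section; nothing below exists in Morphic today:

// Hypothetical plugin surface, for discussion only
import type { ReactNode } from 'react'

export interface MorphicPlugin {
  id: string
  // Extra tool exposed to the researcher agent
  tool?: {
    name: string
    description: string
    execute: (args: Record<string, unknown>) => Promise<unknown>
  }
  // Extra result section rendered alongside "Images" and "Sources"
  section?: (query: string) => Promise<ReactNode>
}

const registry: MorphicPlugin[] = []
export const registerPlugin = (plugin: MorphicPlugin) => registry.push(plugin)
export const getPlugins = () => [...registry]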
