
gpt4all-ts's People

Contributors

andriymulyar, lucasjohnston, riderx, yourbuddyconner


gpt4all-ts's Issues

Unable to open the connection with the model

// Open the connection with the model
await gpt4all.open();

The above await never resolves and the connection never opens; no errors are thrown. I have successfully downloaded the model and the executable (the model is a .bin file). Using Node.js v17.0.0.

OS: macOS Catalina, version 10.15.6 (19G73)
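
One way to at least surface the hang instead of waiting forever is to race open() against a timeout. A minimal diagnostic sketch; the helper and the 60-second limit are my own, not part of the gpt4all-ts API:

// Hypothetical diagnostic helper: races a promise against a timeout so a
// silent hang surfaces as an error. The 60 s limit is an arbitrary choice.
const withTimeout = <T>(p: Promise<T>, ms: number): Promise<T> =>
  Promise.race([
    p,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`timed out after ${ms} ms`)), ms)
    ),
  ]);

await withTimeout(gpt4all.open(), 60_000);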

[Binding/UI] Node-RED nodes and flows for GPT4All

Hi!

First of all: thank you very much for your marvellous contribution! Being able to run inferences from within JavaScript/TypeScript is awesome!

Using gpt4all-ts, I have built function nodes and complete flows for Node-RED, which can be used to run inferences based on both the filtered and the unfiltered GPT4All models.

Node-RED is a data-flow processor which allows people, from non-programmers through casual coders to professional developers, to build complex systems from within their browsers just by wiring components (aka "nodes") together.

Having GPT4All models as such nodes allows these people to create their own user interfaces or even build their own autonomous agents, always having full control over everything they do!
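
For readers unfamiliar with Node-RED: a function node is just a JavaScript body that receives a msg object and forwards it. A hypothetical sketch of such a node delegating the payload to gpt4all-ts follows; the flow-context key "gpt4all" and the setup node that fills it are assumptions of mine, not taken from Andreas' published flows:

// Hypothetical Node-RED function node body. Assumes an opened GPT4All
// instance was stored in the flow context under "gpt4all" by a setup node.
const gpt4all = flow.get('gpt4all');
gpt4all.prompt(String(msg.payload)).then((response) => {
  msg.payload = response;   // put the model's answer on the outgoing message
  node.send(msg);           // forward it to the next node in the flow
  node.done();              // tell Node-RED this message is fully handled
});
return; // the message is sent asynchronously in the callback above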

Thanks again for your contribution!

With greetings from Germany,

Andreas Rozek

Axios error 404 when creating a new GPT4All instance -- npm version out of date?

Hello, while trying this out using the basic example:

import { GPT4All } from 'gpt4all';

const main = async () => {
    // Instantiate GPT4All with default or custom settings
    const gpt4all = new GPT4All('gpt4all-lora-unfiltered-quantized', true); // Default is 'gpt4all-lora-quantized' model

    // Initialize and download missing files
    await gpt4all.init();

    // Open the connection with the model
    await gpt4all.open();

    // Generate a response using a prompt
    const prompt = 'Tell me about how Open Access to AI is going to help humanity.';
    const response = await gpt4all.prompt(prompt);
    console.log(`Prompt: ${prompt}`);
    console.log(`Response: ${response}`);

    const prompt2 = 'Explain to a five year old why AI is nothing to be afraid of.';
    const response2 = await gpt4all.prompt(prompt2);
    console.log(`Prompt: ${prompt2}`);
    console.log(`Response: ${response2}`);

    // Close the connection when you're done
    gpt4all.close();
};

main().catch(console.error);

I encountered the following Axios error:

AxiosError: Request failed with status code 404
    at settle (/Users/chris/www/personal/learning/langchain/learn-langchain-1/node_modules/axios/lib/core/settle.js:19:12)
    at RedirectableRequest.handleResponse (/Users/chris/www/personal/learning/langchain/learn-langchain-1/node_modules/axios/lib/adapters/http.js:518:9)
    at RedirectableRequest.emit (node:events:513:28)
    at RedirectableRequest.emit (node:domain:489:12)
    at RedirectableRequest._processResponse (/Users/chris/www/personal/learning/langchain/learn-langchain-1/node_modules/follow-redirects/index.js:356:10)
    at ClientRequest.RedirectableRequest._onNativeResponse (/Users/chris/www/personal/learning/langchain/learn-langchain-1/node_modules/follow-redirects/index.js:62:10)
    at Object.onceWrapper (node:events:628:26)
    at ClientRequest.emit (node:events:513:28)
    at ClientRequest.emit (node:domain:489:12)
    at HTTPParser.parserOnIncomingClient [as onIncoming] (node:_http_client:693:27) {
  code: 'ERR_BAD_REQUEST',
  config: {
    transitional: {
      silentJSONParsing: true,
      forcedJSONParsing: true,
      clarifyTimeoutError: false
    },
    adapter: [ 'xhr', 'http' ],
    transformRequest: [ [Function: transformRequest] ],
    transformResponse: [ [Function: transformResponse] ],
    timeout: 0,
    xsrfCookieName: 'XSRF-TOKEN',
    xsrfHeaderName: 'X-XSRF-TOKEN',
    maxContentLength: -1,
    maxBodyLength: -1,
    env: { FormData: [Function], Blob: [class Blob] },
    validateStatus: [Function: validateStatus],
    headers: AxiosHeaders {
      Accept: 'application/json, text/plain, */*',
      'User-Agent': 'axios/1.4.0',
      'Accept-Encoding': 'gzip, compress, deflate, br'
    },
    responseType: 'stream',
    method: 'get',
    url: 'https://github.com/nomic-ai/gpt4all/blob/main/chat/gpt4all-lora-quantized-OSX-intel?raw=true',
    data: undefined
  },
  // ...etc

I noticed that the URL of the model being downloaded is https://github.com/nomic-ai/gpt4all/blob/main/chat/gpt4all-lora-quantized-OSX-intel?raw=true which seems to be out of date compared to what is in the current code of the repo, namely: https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-training/chat/gpt4all-lora-quantized-OSX-intel?raw=true (see https://github.com/nomic-ai/gpt4all-ts/blob/main/src/gpt4all.ts#L92).

I changed my local version of the package to correct this error, and the model seems to download correctly. It seems to me that maybe the version of gpt4all on npm needs to be bumped to include this change?
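
For reference, the local patch described above amounts to swapping one URL string for the other. Only the two URLs are taken from this issue; the variable names are illustrative:

// Stale download URL currently shipped on npm (returns 404):
const oldUrl =
  'https://github.com/nomic-ai/gpt4all/blob/main/chat/gpt4all-lora-quantized-OSX-intel?raw=true';

// Current repo layout, as in src/gpt4all.ts on main:
const newUrl =
  'https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-training/chat/gpt4all-lora-quantized-OSX-intel?raw=true';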

Thanks!

[BUG] User reports macOS architecture detection fails silently and defaults to Intel architecture when Node is managed by nvm or fnm

User reports that model selection fails silently:
https://twitter.com/mattapperson/status/1642965761676156935

Hey! Heads up that this does not always detect M1 vs Intel Macs correctly, depending on how Node was installed... in such cases it fails silently.

Not much info other than: when installed using fnm or nvm, Node will return the same results for arch as if it were an Intel machine. You will need to run uname -m as a child process to know for sure.
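
A sketch of that suggested workaround, asking the OS directly instead of trusting process.arch; the function name and the 'm1'/'intel' labels are mine:

// Shell out to uname -m, which reports the real machine architecture even
// when an x86 Node build (installed via nvm/fnm) reports process.arch as x64.
import { execSync } from 'child_process';

function detectMacArch(): 'm1' | 'intel' {
  const machine = execSync('uname -m').toString().trim();
  return machine === 'arm64' ? 'm1' : 'intel';
}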

Printing gibberish on the terminal when running

ra-quantized-linux-x86
main: seed = 1681019976
llama_model_load: loading model from 'gpt4all-lora-quantized.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.35 MB
llama_model_load: memory_size = 2048.00 MB, n_mem = 65536
llama_model_load: loading model part 1/1 from 'gpt4all-lora-quantized.bin'
llama_model_load: done
llama_model_load: model size = 78.13 MB / num tensors = 1

system_info: n_threads = 4 / 4 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
main: interactive mode on.
sampling parameters: temp = 0.100000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.300000

== Running in chat mode. ==

  • Press Ctrl+C to interject at any time.
  • Press Return to return control to LLaMA.
  • If you want to submit another line, end your input in '\'.

hi
↑♠►↓#↔ ⁇ ↑▲☺ ‼$↨ ⁇ §↓
♫→↑
♣$

Other installation steps?

Might be a silly question, but do you have to have completed the setup steps in the main gpt4all repo in order to use this TS package? It doesn't state that anywhere in the docs, but when I add the package and run the example, the gpt4all.open() call just hangs and never completes.

Any ideas?

gpt4all-j does not seem to be supported?

First of all: thank you very much for GPT4All and its bindings!

That said, I'd like to inform you about a problem I encountered: when trying to

const gpt4all = new GPT4All('ggml-gpt4all-j', true)

I got an exception telling me that only gpt4all-lora-[filtered-]quantized would be supported - how can I change that?
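
For context, the exception suggests a model-name whitelist along these lines; this is a hypothetical illustration, not the actual gpt4all-ts source:

// Hypothetical whitelist check of the kind that would produce the reported
// exception; the real implementation may differ.
const supportedModels = [
  'gpt4all-lora-quantized',
  'gpt4all-lora-unfiltered-quantized',
];

function assertSupported(modelName: string): void {
  if (!supportedModels.includes(modelName)) {
    throw new Error('Model "' + modelName + '" is not supported');
  }
}

Supporting gpt4all-j would presumably take more than extending such a list, since the GPT-J-based model uses a different file format and executable than the LLaMA-based ones.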

�[1m�[32m�[0m with response every time.

I used this library in my Remix app. You can find the code here: https://github.com/harshil4076/ts-voice-text (the live version won't work).

I am running it on my local machine, an Ubuntu box with 24 GB RAM and no extra GPU power. It takes some time to respond, but it's good for testing.

With every response I am seeing this: �[1m�[32m�[0m, either at the beginning or the end.


What could be causing this?
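
Those bytes look like ANSI terminal color codes (\x1b[1m is bold, \x1b[32m is green, \x1b[0m is reset) leaking through from the chat binary's stdout. A workaround sketch that strips them from each response, assuming the response arrives as a plain string:

// Remove ANSI SGR escape sequences (e.g. \x1b[1m, \x1b[32m, \x1b[0m)
// from a model response before displaying it.
const ANSI_PATTERN = /\x1b\[[0-9;]*m/g;

function stripAnsi(text: string): string {
  return text.replace(ANSI_PATTERN, '');
}

// usage: const clean = stripAnsi(await gpt4all.prompt(prompt));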

Great library btw. Shout out to the owners! Cheers!

It's not working.

[screenshot]

As you can see, the code crashes on .open(); checkpoint 3 is not logged to my console and the application simply stops.

Support for multiple requests simultaneously

I have created my own Express.js/Socket.IO web UI using this package. However, the package seems to allow only one connection/request to the model at a time. I could instruct my code to create a new instance of the model every time a user connects to my page, but that would be very inefficient memory-wise.

I had the idea of creating a queue system where users wait for earlier requests to complete before theirs is served, but depending on the length of the answers and how many requests are waiting, users could be waiting around for a long time.

TL;DR: Would it be possible to allow the package or model to support more than one request/prompt simultaneously without dramatically increasing RAM consumption?
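
Short of true concurrency in the underlying binary, the queue idea can be implemented cheaply by serializing prompts through a single promise chain against one shared instance. A minimal sketch; the class is mine, and only the prompt() method is taken from the examples above:

// FIFO queue that serializes prompts against a single GPT4All instance,
// so concurrent callers share one model in memory instead of spawning more.
class PromptQueue {
  private tail: Promise<unknown> = Promise.resolve();

  constructor(private gpt4all: { prompt(p: string): Promise<string> }) {}

  enqueue(prompt: string): Promise<string> {
    const next = this.tail.then(() => this.gpt4all.prompt(prompt));
    // Swallow rejections on the chain so one failed prompt does not block
    // everything queued after it; callers still see the rejection on next.
    this.tail = next.catch(() => undefined);
    return next;
  }
}

This caps memory at one model while keeping requests ordered; it does not reduce the worst-case wait time, which is exactly the trade-off described above.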

ENOENT when starting the app even though the model and everything else is downloaded

Hey, I wanted to try gpt4all-ts, so I just copied the starter code from the readme and installed the npm package. It downloaded the model into C:\Users\P33tT\.nomic (gpt4all & gpt4all-lora-unfiltered-quantized.bin), but when I try to start my app it throws an error:

Error: spawn C:\Users\P33tT/.nomic/gpt4all ENOENT
    at Process.ChildProcess._handle.onexit (node:internal/child_process:285:19)
    at onErrorNT (node:internal/child_process:483:16)
    at processTicksAndRejections (node:internal/process/task_queues:82:21) {
  errno: -4058,
  code: 'ENOENT',
  syscall: 'spawn C:\\Users\\P33tT/.nomic/gpt4all',
  path: 'C:\\Users\\P33tT/.nomic/gpt4all',
  spawnargs: [
    '--model',
    'C:\\Users\\P33tT/.nomic/gpt4all-lora-unfiltered-quantized.bin'
  ]
}

I don't know what to do; I tried deleting the gpt4all file, but it didn't work.
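
For what it's worth, ENOENT from spawn means Windows could not find an executable file at that path. A small diagnostic sketch, with the path copied from the error above; checking for an .exe suffix is my own guess, since Windows binaries normally carry one:

// Check whether the binary gpt4all-ts tries to spawn actually exists,
// both as saved ('gpt4all') and with the .exe suffix Windows expects.
import { existsSync } from 'fs';

const exePath = 'C:\\Users\\P33tT\\.nomic\\gpt4all';
for (const candidate of [exePath, exePath + '.exe']) {
  console.log(candidate, existsSync(candidate) ? 'found' : 'missing');
}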

Support for nous-gpt4-vicuna-13b?

Is nous-gpt4-vicuna-13b not supported yet?

I have nous-gpt4-vicuna-13b downloaded into gpt4all and would like to access it programmatically.
