nomic-ai / gpt4all-ts
gpt4all and llama typescript bindings
// Open the connection with the model
await gpt4all.open();
The above await never resolves and the connection never opens; no errors are thrown. I have successfully downloaded the model and the executable (the model is a .bin file). Using Node.js v17.0.0.
OS: macOS Catalina, version 10.15.6 (19G73)
Hi!
First of all: thank you very much for your marvellous contribution! Being able to run inferences from within JavaScript/TypeScript is awesome!
Using gpt4all-ts, I have built function nodes and complete flows for Node-RED, which can be used to run inferences based on both the filtered and the unfiltered GPT4All models.
Node-RED is a data-flow processor which allows people, ranging from non-programmers over casual users up to professional developers, to build complex systems from within their browsers just by wiring components (aka "nodes") together.
Having GPT4All models as such nodes allows these people to create their own user interfaces or even build their own autonomous agents, always having full control over everything they do!
Thanks again for your contribution!
With greetings from Germany,
Andreas Rozek
It just keeps waiting and waiting but never actually loads the model. Please fix this.
Hello, while trying this out, using the basic example
import { GPT4All } from 'gpt4all';

const main = async () => {
  // Instantiate GPT4All with default or custom settings
  const gpt4all = new GPT4All('gpt4all-lora-unfiltered-quantized', true); // Default is 'gpt4all-lora-quantized' model

  // Initialize and download missing files
  await gpt4all.init();

  // Open the connection with the model
  await gpt4all.open();

  // Generate a response using a prompt
  const prompt = 'Tell me about how Open Access to AI is going to help humanity.';
  const response = await gpt4all.prompt(prompt);
  console.log(`Prompt: ${prompt}`);
  console.log(`Response: ${response}`);

  const prompt2 = 'Explain to a five year old why AI is nothing to be afraid of.';
  const response2 = await gpt4all.prompt(prompt2);
  console.log(`Prompt: ${prompt2}`);
  console.log(`Response: ${response2}`);

  // Close the connection when you're done
  gpt4all.close();
};

main().catch(console.error);
I encountered the following axios error
AxiosError: Request failed with status code 404
at settle (/Users/chris/www/personal/learning/langchain/learn-langchain-1/node_modules/axios/lib/core/settle.js:19:12)
at RedirectableRequest.handleResponse (/Users/chris/www/personal/learning/langchain/learn-langchain-1/node_modules/axios/lib/adapters/http.js:518:9)
at RedirectableRequest.emit (node:events:513:28)
at RedirectableRequest.emit (node:domain:489:12)
at RedirectableRequest._processResponse (/Users/chris/www/personal/learning/langchain/learn-langchain-1/node_modules/follow-redirects/index.js:356:10)
at ClientRequest.RedirectableRequest._onNativeResponse (/Users/chris/www/personal/learning/langchain/learn-langchain-1/node_modules/follow-redirects/index.js:62:10)
at Object.onceWrapper (node:events:628:26)
at ClientRequest.emit (node:events:513:28)
at ClientRequest.emit (node:domain:489:12)
at HTTPParser.parserOnIncomingClient [as onIncoming] (node:_http_client:693:27) {
code: 'ERR_BAD_REQUEST',
config: {
transitional: {
silentJSONParsing: true,
forcedJSONParsing: true,
clarifyTimeoutError: false
},
adapter: [ 'xhr', 'http' ],
transformRequest: [ [Function: transformRequest] ],
transformResponse: [ [Function: transformResponse] ],
timeout: 0,
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
maxBodyLength: -1,
env: { FormData: [Function], Blob: [class Blob] },
validateStatus: [Function: validateStatus],
headers: AxiosHeaders {
Accept: 'application/json, text/plain, */*',
'User-Agent': 'axios/1.4.0',
'Accept-Encoding': 'gzip, compress, deflate, br'
},
responseType: 'stream',
method: 'get',
url: 'https://github.com/nomic-ai/gpt4all/blob/main/chat/gpt4all-lora-quantized-OSX-intel?raw=true',
data: undefined
},
// ...etc
I noticed that the URL of the model being downloaded is https://github.com/nomic-ai/gpt4all/blob/main/chat/gpt4all-lora-quantized-OSX-intel?raw=true
which seems to be out of date compared to what is in the current code of the repo, namely: https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-training/chat/gpt4all-lora-quantized-OSX-intel?raw=true
(see https://github.com/nomic-ai/gpt4all-ts/blob/main/src/gpt4all.ts#L92).
I changed my local version of the package to correct this error, and the model seems to download correctly. It seems to me that maybe the version of gpt4all
on npm needs to be bumped to include this change?
Thanks!
User reports that model selection fails silently:
https://twitter.com/mattapperson/status/1642965761676156935
Hey! Heads up that this does not always detect M1 vs. Intel Macs correctly, depending on how Node was installed... in such cases it fails silently.
Not much info other than: when Node was installed using fnm or nvm, it will return the same result for arch as if it were Intel. You will need to run uname -m as a child process to know for sure.
ra-quantized-linux-x86
main: seed = 1681019976
llama_model_load: loading model from 'gpt4all-lora-quantized.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.35 MB
llama_model_load: memory_size = 2048.00 MB, n_mem = 65536
llama_model_load: loading model part 1/1 from 'gpt4all-lora-quantized.bin'
llama_model_load: done
llama_model_load: model size = 78.13 MB / num tensors = 1
system_info: n_threads = 4 / 4 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
main: interactive mode on.
sampling parameters: temp = 0.100000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.300000
== Running in chat mode. ==
hi
↑♠►↓#↔ ⁇ ↑▲☺ ‼$↨ ⁇ §↓
♫→↑
♣$
What do I pass to the constructor? I have the quantized file with me:
const gpt4all = new GPT4All('gpt4all-lora-unfiltered-quantized', true);
The invite is expired, please give a new one.
I installed gpt4all from npm; after I started it, it keeps downloading something indefinitely, but the progress never moves up from 0.
A community member is trying to add GPT4All support to LangChainJS, but the models are no longer hosted at the paths in the SDK: langchain-ai/langchainjs#1204
The ideal solution would be to bundle/download the model into the node_modules folder on installation and bump it with new version releases - would this be possible? That way there's no reliance on updating a user's home directory.
Good repository
Might be a silly question, but do you have to have completed the setup steps in the main gpt4all repo in order to use this TS package? It doesn't state that anywhere in the docs, but when I add the package and run the example, the gpt4all.open() call just hangs and never completes.
Any ideas?
First of all: thank you very much for GPT4All and its bindings!
That said, I'd like to inform you about a problem I encountered: when trying to
const gpt4all = new GPT4All('ggml-gpt4all-j',true)
I got an exception telling me that only gpt4all-lora-[filtered-]quantized
would be supported - how can I change that?
User reports that models on their Windows system aren't being properly downloaded.
Possible issues:
The instructions say this:
import { GPT4All } from 'gpt4all-ts';
But after running npm install gpt4all, the module files are inside gpt4all.
Link: https://www.npmjs.com/package/gpt4all?activeTab=code
Thus the readme code should probably be updated to say this:
import { GPT4All } from 'gpt4all';
I used this library in my Remix app. You can find the code here: https://github.com/harshil4076/ts-voice-text (the live version won't work).
I am running it on my local machine, Ubuntu with 24 GB RAM and no extra GPU power. It takes some time to respond, but it's good for testing.
With every response I am seeing this �[1m�[32m�[0m, either at the beginning or the end.
What could be causing this?
Great library btw. Shout out to the owners! Cheers!
This lib does a great job of downloading and running the model! But it provides a very restricted API for interacting with it.
Here's the type signature for prompt. Compared to other LLMs, I expect some other params, e.g. stop tokens and temperature. At the very least, my use case requires stop tokens.
I have created my own expressjs/socket.io web UI using this package. However, this package seems to only allow one connection/request for the model at a time. I could instruct my code to create a new instance of the model every time a user connects to my page, but that would be very inefficient memory-wise.
I had the idea of creating a queue system where the users wait for other requests to complete before serving them, but depending on the length of the answers and how many are waiting, users could be waiting around for a long time.
TL;DR: Would it be possible to allow the package or model to support more than one request/prompt simultaneously without dramatically increasing RAM consumption?
Hey, I wanted to try gpt4all-ts, so I just copied the starter code from the readme and installed the npm package. It downloaded the model into C:\Users\P33tT\.nomic (gpt4all & gpt4all-lora-unfiltered-quantized.bin), but when I try to start my app it throws an error:
Error: spawn C:\Users\P33tT/.nomic/gpt4all ENOENT
at Process.ChildProcess._handle.onexit (node:internal/child_process:285:19)
at onErrorNT (node:internal/child_process:483:16)
at processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -4058,
code: 'ENOENT',
syscall: 'spawn C:\\Users\\P33tT/.nomic/gpt4all',
path: 'C:\\Users\\P33tT/.nomic/gpt4all',
spawnargs: [
'--model',
'C:\\Users\\P33tT/.nomic/gpt4all-lora-unfiltered-quantized.bin'
]
}
I don't know what to do; I tried deleting the gpt4all file, but it didn't work.
Is nous-gpt4-vicuna-13b not supported yet? I have nous-gpt4-vicuna-13b downloaded into gpt4all and would like to access it programmatically.