
lollms-webui's Introduction

LoLLMs (Lord of Large Language Multimodal Systems) Web UI




Welcome to LoLLMS WebUI (Lord of Large Language Multimodal Systems: One tool to rule them all), the hub for LLM (Large Language Models) and multimodal intelligence systems. This project aims to provide a user-friendly interface to access and utilize various LLM and other AI models for a wide range of tasks. Whether you need help with writing, coding, organizing data, analyzing images, generating images, generating music, or seeking answers to your questions, LoLLMS WebUI has you covered.

As an all-encompassing tool with access to over 500 AI expert conditionings across diverse domains and more than 2500 fine-tuned models over multiple domains, you now have an immediate resource for almost any problem. Whether your car needs repair, you need coding assistance in Python, C++ or JavaScript, or you feel down about life decisions gone wrong and can't see a way forward: ask Lollms. Need guidance on what lies ahead health-wise based on your current symptoms? Our medical assistance AI can help you get a potential diagnosis and guide you to seek the right medical care. If you are stuck with legal matters such as contract interpretation, feel free to reach out to the Lawyer personality to get some insight, all without leaving the comfort of home. It not only aids students struggling through lengthy lectures, but also provides them extra support during assessments, so they can properly grasp concepts rather than just reading along lines that could leave many confused afterward. Want some entertainment? Then engage the Laughter Bot and enjoy hysterical laughs until tears roll from your eyes, play Dungeons & Dragons, or make up crazy stories together thanks to the Creative Story Generator. Need illustration work done? No worries, Artbot has you covered! And last but definitely not least, LordOfMusic is here for music generation according to individual specifications. So essentially, say goodbye to boring nights alone, because everything possible can be achieved within one single platform called Lollms...

Features

  • Choose your preferred binding, model, and personality for your tasks
  • Enhance your emails, essays, code debugging, thought organization, and more
  • Explore a wide range of functionalities, such as searching, data organization, image generation, and music generation
  • Easy-to-use UI with light and dark mode options
  • Integration with GitHub repository for easy access
  • Support for different personalities with predefined welcome messages
  • Thumb up/down rating for generated answers
  • Copy, edit, and remove messages
  • Local database storage for your discussions
  • Search, export, and delete multiple discussions
  • Support for image/video generation based on stable diffusion
  • Support for music generation based on musicgen
  • Support for multi-generation peer-to-peer networks through LoLLMs Nodes and Petals
  • Support for Docker, conda, and manual virtual environment setups
  • Support for LM Studio as a backend
  • Support for Ollama as a backend
  • Support for vllm as a backend

Star History

Star History Chart

Thanks to all users who tested this tool and helped make it more user-friendly.

Installation

Automatic installation (UI)

If you are using Windows, just visit the release page, download the Windows installer, and install it.

Automatic installation (Console)

Download the installation script from scripts folder and run it. The installation scripts are:

  • win_install.bat for Windows.
  • linux_install.sh for Linux.
  • mac_install.sh for Mac.

Manual install:

Since v9.4, manual installation is not advised, as many services require the creation of a separate environment and lollms needs complete control over its environments. If you install it using your own conda setup, you will not be able to install any service, reducing lollms to the chat interface alone (no XTTS, no ComfyUI, no fast generation through vLLM or Petals, and so on).

Code of conduct

By using this tool, users agree to follow these guidelines :

  • This tool is not meant to be used for building and spreading fake news or misinformation.
  • You are responsible for what you generate by using this tool. The creators will take no responsibility for anything created via lollms.
  • You can use lollms in your own project free of charge if you agree to respect the Apache 2.0 license terms. Please refer to https://www.apache.org/licenses/LICENSE-2.0 .
  • You are not allowed to use lollms to harm others directly or indirectly. This tool is meant for peaceful purposes and should be used for good, never for bad.
  • Users must comply with local laws when accessing content provided by third parties like OpenAI API etc., including copyright restrictions where applicable.

⚠️ Security Warning

Please be aware that LoLLMs WebUI does not have built-in user authentication and is primarily designed for local use. Exposing the WebUI to external access without proper security measures could lead to potential vulnerabilities.

If you require remote access to LoLLMs, it is strongly recommended to follow these security guidelines:

  1. Activate Headless Mode: Enabling headless mode will expose only the generation API while turning off other potentially vulnerable endpoints. This helps to minimize the attack surface.

  2. Set Up a Secure Tunnel: Establish a secure tunnel between the localhost running LoLLMs and the remote PC that needs access. This ensures that the communication between the two devices is encrypted and protected.

  3. Modify Configuration Settings: After setting up the secure tunnel, edit the /configs/local_config.yaml file and adjust the following settings:

    host: 0.0.0.0  # Allow remote connections
    port: 9600  # Change the port number if desired (default is 9600)
    force_accept_remote_access: true  # Force accepting remote connections
    headless_server_mode: true  # Set to true for API-only access, or false if the WebUI is needed
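For step 2, one common choice is an SSH local port forward; this is a sketch, assuming you have SSH access to the machine running LoLLMs (the hostname and user below are placeholders):

```shell
# Forward the client's local port 9600 to port 9600 on the LoLLMs host.
# "user@lollms-host" is a placeholder for your actual account and server.
ssh -N -L 9600:localhost:9600 user@lollms-host

# The client can then reach the API at http://localhost:9600 while all
# traffic between the two machines travels inside the encrypted tunnel.
```

Note that with such a tunnel the traffic arrives on the server as a localhost connection, so depending on your setup you may not even need to open the host to the network.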

By following these security practices, you can help protect your LoLLMs instance and its users from potential security risks when enabling remote access.

Remember, it is crucial to prioritize security and take necessary precautions to safeguard your system and sensitive information. If you have any further questions or concerns regarding the security of LoLLMs, please consult the documentation or reach out to the community for assistance.

Stay safe and enjoy using LoLLMs responsibly!

Disclaimer

Large Language Models are amazing tools that can be used for diverse purposes. Lollms was built to harness this power to help users enhance their productivity. But keep in mind that these models have their limitations and should not replace human intelligence or creativity, but rather augment it by providing suggestions based on patterns found within large amounts of data. It is up to each individual to choose how to use them responsibly!

The performance of the system varies depending on the model used, its size, and the dataset on which it has been trained. Generally speaking, the larger a language model's training set, the better the results compared with systems trained on smaller ones. But there is still no guarantee that the output generated from any given prompt will always be perfect, and it may contain errors for various reasons. So please make sure you do not use it for serious matters like choosing medications or making financial decisions without consulting an expert first!

License

This repository uses code under the Apache License, Version 2.0; see the license file for details about the rights granted with respect to usage and distribution.

Copyright:

ParisNeo 2023

lollms-webui's People

Contributors

agi-dude, andriymulyar, andzejsp, arroyoquiel, beratcmn, blasphemousjohn, blu3knight, chongy076, dennisstanistan, dependabot[bot], jadenkiu, jindrichmarek, meeech, ngxson, nikolai2038, njannasch, nof4n, okkebal, omahs, parisneo, pb-dod, richiedevr, samgroenjes-dw, signalprime, smohr, thaiminhpv, thesnowguru, tkocou, ulanmax, vra


lollms-webui's Issues

Error installing sentencepiece on Ubuntu 22.04

Hey,

I got this error on Ubuntu 22.04 Azure VM. The connectivity in the cloud should be ok. Have you seen this error?

fatal error: Python.h: No such file or directory
178 | # include <Python.h>
|   ^~~~~~~~~~
compilation terminated.
error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure

× Encountered error while trying to install package.
╰─> sentencepiece

note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
Failed to install required packages. Please check your internet connection and try again.

TypeError: 'NoneType' object is not subscriptable

Expected Behavior

Please describe the behavior you are expecting.
Installed using install.bat
Downloaded model via browser option
Converted this model.
Install completed.
opened Run.bat
it works?

Current Behavior

Please describe the behavior you are currently experiencing.
open run.bat
gets error

[ASCII-art logo omitted]
Checking discussions database...
llama_model_load: loading model from './models/gpt4all-lora-quantized-ggml.bin' - please wait ...
llama_model_load: n_vocab = 32001
llama_model_load: n_ctx   = 512
llama_model_load: n_embd  = 4096
llama_model_load: n_mult  = 256
llama_model_load: n_head  = 32
llama_model_load: n_layer = 32
llama_model_load: n_rot   = 128
llama_model_load: f16     = 2
llama_model_load: n_ff    = 11008
llama_model_load: n_parts = 1
llama_model_load: type    = 1
llama_model_load: ggml map size = 4017.70 MB
llama_model_load: ggml ctx size =  81.25 KB
llama_model_load: mem required  = 5809.78 MB (+ 2052.00 MB per state)
llama_model_load: loading tensors from './models/gpt4all-lora-quantized-ggml.bin'
llama_model_load: model size =  4017.27 MB / num tensors = 291
llama_init_from_file: kv self size  =  512.00 MB
Traceback (most recent call last):
  File "E:\Git_repos\gpt4all-ui\app.py", line 441, in <module>
    bot = Gpt4AllWebUI(app, args)
  File "E:\Git_repos\gpt4all-ui\app.py", line 86, in __init__
    self.prepare_a_new_chatbot()
  File "E:\Git_repos\gpt4all-ui\app.py", line 102, in prepare_a_new_chatbot
    self.condition_chatbot()
  File "E:\Git_repos\gpt4all-ui\app.py", line 118, in condition_chatbot
    if self.db.does_last_discussion_have_messages():
  File "E:\Git_repos\gpt4all-ui\db.py", line 153, in does_last_discussion_have_messages
    last_discussion_id = self.select("SELECT id FROM discussion ORDER BY id DESC LIMIT 1", fetch_all=False)[0]
TypeError: 'NoneType' object is not subscriptable
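The traceback shows the `select(...)` helper returning None for an empty discussion table, which then gets indexed with `[0]`. A minimal defensive sketch of the missing guard (hypothetical names; the project's actual `select` helper in db.py differs):

```python
import sqlite3

def last_discussion_id(conn):
    # fetchone() returns None when the discussion table is empty, so the
    # result must be checked before indexing -- the missing guard behind
    # "TypeError: 'NoneType' object is not subscriptable".
    row = conn.execute(
        "SELECT id FROM discussion ORDER BY id DESC LIMIT 1"
    ).fetchone()
    return row[0] if row is not None else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE discussion (id INTEGER PRIMARY KEY)")
print(last_discussion_id(conn))  # empty table: returns None instead of crashing
```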

Steps to Reproduce

Please provide detailed steps to reproduce the issue.
Installed using install.bat
Downloaded model via browser option
Converted this model.
Install completed.
opened Run.bat

Possible Solution

If you have any suggestions on how to fix the issue, please describe them here.

Context

Please provide any additional context about the issue.

Screenshots

If applicable, add screenshots to help explain the issue.

table message has no column named type

It used to run fine, but after Yesterday's updates I get this.
Checking discussions database...
llama_model_load: loading model from './models/gpt4all-lora-quantized-ggml.bin' - please wait ...
llama_model_load: n_vocab = 32001
llama_model_load: n_ctx   = 4096
llama_model_load: n_embd  = 4096
llama_model_load: n_mult  = 256
llama_model_load: n_head  = 32
llama_model_load: n_layer = 32
llama_model_load: n_rot   = 128
llama_model_load: f16     = 2
llama_model_load: n_ff    = 11008
llama_model_load: n_parts = 1
llama_model_load: type    = 1
llama_model_load: ggml map size = 4017.70 MB
llama_model_load: ggml ctx size =  81.25 KB
llama_model_load: mem required  = 5809.78 MB (+ 2052.00 MB per state)
llama_model_load: loading tensors from './models/gpt4all-lora-quantized-ggml.bin'
llama_model_load: model size =  4017.27 MB / num tensors = 291
llama_init_from_file: kv self size = 4096.00 MB
Traceback (most recent call last):
  File "E:\gpt4all-ui\app.py", line 468, in <module>
    bot = Gpt4AllWebUI(app, args)
  File "E:\gpt4all-ui\app.py", line 98, in __init__
    self.prepare_a_new_chatbot()
  File "E:\gpt4all-ui\app.py", line 114, in prepare_a_new_chatbot
    self.condition_chatbot()
  File "E:\gpt4all-ui\app.py", line 135, in condition_chatbot
    message_id = self.current_discussion.add_message(
  File "E:\gpt4all-ui\db.py", line 200, in add_message
    message_id = self.discussions_db.insert(
  File "E:\gpt4all-ui\db.py", line 112, in insert
    cursor = conn.execute(query, params)
sqlite3.OperationalError: table message has no column named type
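The error suggests a discussions database created before a schema change that added a `type` column to the `message` table. One generic way to patch such a database in place is a guarded ALTER TABLE; this is a hypothetical sketch (the project's real schema and migration code may differ):

```python
import sqlite3

def ensure_column(conn, table, column, decl):
    # SQLite has no "ADD COLUMN IF NOT EXISTS", so inspect the schema first.
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    if column not in cols:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {decl}")

conn = sqlite3.connect(":memory:")  # stands in for the discussions database
conn.execute("CREATE TABLE message (id INTEGER PRIMARY KEY, content TEXT)")
ensure_column(conn, "message", "type", "INTEGER DEFAULT 0")  # old DB, new code
ensure_column(conn, "message", "type", "INTEGER DEFAULT 0")  # safe to re-run
```

Deleting the old database file so it is recreated from scratch is the blunt alternative, at the cost of losing saved discussions.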

Exits without feedback / Cannot load _pyllamacpp

Current Behavior

If I run run.bat, I get nothing but the logo. By inspecting the run.bat file, I saw I was in the pause > nul and the execution of app.py had already finished.
Running app.py manually also just exited without doing anything.
By professional print-debugging (irony), I was able to trace the error back to _pyllamacpp.cp311-win_amd64.pyd, as you can see from the attached screenshots.

Context

  • Windows 10 / build 10.0.19045
  • x64-Architecture
  • RAM is either 8GB or 16GB; I could not find it, but it is probably 16GB.
  • Tell me to add anything required.

Screenshots

Modified code: (screenshots attached)
Result: (screenshot attached)

docker file missing config?

Expected Behavior

docker doesn't run

Current Behavior

Got errors; I had to add

COPY ./config.py /srv/config.py
COPY ./configs /srv/configs

to the Dockerfile to run.

Steps to Reproduce

Try to run the docker compose version on Linux.

Possible Solution

Add to the Dockerfile:

COPY ./config.py /srv/config.py
COPY ./configs /srv/configs

Context

linux

Screenshots

If applicable, add screenshots to help explain the issue.

Ask user for confirmation when remove discussion is pressed

When a user deletes a discussion, it would be a good idea to ask them if they are sure before validating.
We also need a way to select multiple discussions to remove at once, and an option to remove everything.

Latest UI update - no conversations

Did a git pull just now, and the old conversations are gone.
Also, the new conversation was read out in the console but didn't show up in the UI :(

(screenshot attached)

Was there a need to re-run install.sh after git pull?

Model frozen when prompted something - Windows

OS

Windows 11

Expected Behavior

When I send the message "List 10 dogs." or anything else, the program should give me a reply.

Current Behavior

When I send the message "List 10 dogs." or anything else, the program freezes and doesn't reply; this also prevents further messages from being sent.

Steps to Reproduce

  1. Clean git clone, install, run
  2. Go to localhost:9600 and send a message
  3. Program gets frozen

Screenshots

(screenshots of the terminal and browser were attached)

Update readme.md

The readme.md should be updated to give a better project description with better organization.

install win bat file is BROKEN

It simply doesn't work: it asks to install Python, and upon making a choice it just closes.

Python is already installed on my machine, yet it still says it is not detected.

:)

Selecting discussions to export

As we highly value privacy, we need to make sure users can export only the discussions they want to share.
So when the export button is pressed, the user should be able to select which discussions to export.

When using 'run.bat' an error is shown regarding the 'gpt4all-lora-quantized-ggml.bin' file being 'invalid model file'

Expected Behavior

When using 'run.bat' on Windows 10 machine, the previously downloaded model should be recognized as valid.

Current Behavior

When using 'run.bat' an error is shown regarding 'gpt4all-lora-quantized-ggml.bin' being an 'invalid model file', although the newest model file was downloaded the same day (during the installation of GPT4ALL-UI).

Steps to Reproduce

Please provide detailed steps to reproduce the issue.

  1. In command prompt fill in 'run.bat'.
  2. GPT4ALL will try to run, but during the step where the model is loaded, an error is shown stating that the model is invalid. However, the newest model was downloaded and placed in the 'models' folder just before running this file.
  3. The error shown is:
    "Checking discussions database...
    Ok
    llama_model_load: loading model from './models/gpt4all-lora-quantized-ggml.bin' - please wait ...
    ./models/gpt4all-lora-quantized-ggml.bin: invalid model file (bad magic [got 0x67676d66 want 0x67676a74])
    you most likely need to regenerate your ggml files
    the benefit is you'll get 10-100x faster load times
    see ggerganov/llama.cpp#91
    use convert-pth-to-ggml.py to regenerate from original pth
    use migrate-ggml-2023-03-30-pr613.py if you deleted originals
    llama_init_from_file: failed to load model
    llama_generate: seed = 1680983129

system_info: n_threads = 8 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |"

Screenshots

See attached screenshot for an example.

Install.sh not working properly because of a missing Python.h header

fatal error: Python.h: No such file or directory
178 | # include <Python.h>
| ^~~~~~~~~~
compilation terminated.
error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure

× Encountered error while trying to install package.
╰─> sentencepiece

note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
Failed to install required packages. Please check your internet connection and try again.

Unable to run other models - e.g. Vicuna 7b

First of all: Thank you for this web-ui tool! It is awesome.

Expected Behavior

Load different model - Vicuna 7b

Current Behavior

Model does not load. Get error: [2023-04-08 12:52:17,856] {app.py:1744} ERROR - Exception on /update_model_params [POST]

Steps to Reproduce

Please provide detailed steps to reproduce the issue.

  1. Place vicuna 7b .bin in the models folder
  2. Run run.bat
  3. change model to vicuna 7b file
  4. click update parameters button
  5. Model doesn't load; the error comes back; the default model is used

Test and fix install.sh for mac and update doc

I see people are fixing issues in install.sh through messages. Can someone synthesize this and update the install script?
I have no Mac to test on, unfortunately, so I wrote the script blindly.
Thanks

does not answer

When I run run.bat, the server starts, and when I enter a question the terminal shows:

system_info: n_threads = 8 / 16 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |

but the answer never shows up. I downloaded the normal GPT4All and it works fast with no issues; the problem is only with GPT4ALL-UI.

[Docker] can't boot up: gpt4all-ui_webui_1 exited with code 132

Expected Behavior

Project is starting, or at least some info in logs

Current Behavior

debian@host:~/dockers/gpt4all-ui$ docker-compose up
Recreating gpt4all-ui_webui_1 ... done
Attaching to gpt4all-ui_webui_1
gpt4all-ui_webui_1 exited with code 132

Steps to Reproduce

git clone
docker-compose build --no-cache
docker-compose up

Context

Tried this on a dedicated server at OVH (Debian 11.6)

If it helps :
docker-compose build --no-cache : docker-compose_build.txt
docker-compose --verbose up : docker-compose_verbose_up.txt
cat /proc/cpuinfo : cat_proc-cpuinfo.txt

EDIT:
docker compose build && docker compose up (without the hyphen) does the same.
docker -v : Docker version 23.0.3, build 3e7cbfd

Proxypass using Nginx proxy manager

I have run app.py with --host 0.0.0.0 and can access the web UI from local machines, but when I try to proxy-pass and access it from my domain, the console shows "GET / HTTP/1.1" 200, yet the web page is not loading. I mean, it loads forever, but I can't access it.

All it shows is this.

(screenshot attached)

My proxy custom settings are:

location /gpt4all/ {
		rewrite ^/gpt4all(/.*)$ /$1 break;
		proxy_pass http://192.168.0.210:9600;
		proxy_http_version 1.1;
		auth_basic "Authorization required";
		auth_basic_user_file /data/access/1;
		proxy_set_header Authorization "";
		proxy_set_header Connection "";
	}

For other sites this setup works; here I don't know why it doesn't.

I tried enabling websocket support, but that didn't help.
I'm by no means an expert in nginx; all my knowledge comes from googling around for solutions, given that I ask the right questions :)

Any help would be appreciated.
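A hedged guess at the missing piece, for anyone hitting the same wall: the UI talks to the server over WebSockets, and the config above sets `proxy_set_header Connection "";`, which strips the upgrade handshake; the `/gpt4all/` rewrite also does not cover the default `/socket.io/` path the client requests. Something along these lines (untested, adapted from the config above) may help:

```nginx
location /gpt4all/ {
    rewrite ^/gpt4all(/.*)$ /$1 break;
    proxy_pass http://192.168.0.210:9600;
    proxy_http_version 1.1;
    # WebSocket upgrade headers; note Connection must not be emptied.
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    auth_basic "Authorization required";
    auth_basic_user_file /data/access/1;
    proxy_set_header Authorization "";
}
```

Serving the app at the root of a dedicated subdomain instead of a sub-path avoids the rewrite problem entirely.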

Linking the settings to model definition

The app.py script has a bunch of parameters passed to it.
When I moved to the Python bindings, I didn't put them in the model definition.

Would someone like to do it? It is a very easy task.

Unable to Install on Windows "--user install" ?

Expected Behavior

Using install.bat, being able to install and run GPT4All.

Current Behavior

It begins the installation process but gives this error message:

ERROR: Can not perform a '--user' install. User site-packages are not visible in this virtualenv.
Failed to install required packages. Please check your internet connection and try again.

Steps to Reproduce

Please provide detailed steps to reproduce the issue.

  1. Either git clone the repo, or save it as a zip to a Windows device
  2. Unzip the file and double-click install.bat, or open a terminal and run ./install.bat
  3. Run GPT4All

Possible Solution

This happens after the environment is created, while it is installing requirements.txt; it seems it's not able to retrieve what it needs.

Context

I've tried a few workarounds as you can see, though I know it should be an easy one-click installer, and I assure you I do have a hardwired internet connection.

PS C:\Chat_Bots\gpt4all-ui> ./install.bat
[ASCII-art logo omitted]
"Checking for git..." "Git is installed."
Checking for python..."Python is installed."
Checking for pip..."Pip is installed."
Updating pip setuptools and wheel
Requirement already satisfied: pip in c:\users\default.laptop-pehfuag9\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python310\site-packages (23.1)
Requirement already satisfied: setuptools in c:\users\default.laptop-pehfuag9\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python310\site-packages (67.6.1)
Requirement already satisfied: wheel in c:\users\default.laptop-pehfuag9\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python310\site-packages (0.40.0)
Checking for virtual environment..."Virtual environment is installed."
Creating virtual environment ...Activating virtual environment ...OK
Installing requirements ...
Requirement already satisfied: pip in c:\chat_bots\gpt4all-ui\env\lib\site-packages (23.0.1)
Collecting pip
Using cached pip-23.1-py3-none-any.whl (2.1 MB)
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 23.0.1
Uninstalling pip-23.0.1:
Successfully uninstalled pip-23.0.1
Successfully installed pip-23.1
ERROR: Can not perform a '--user' install. User site-packages are not visible in this virtualenv.
Failed to install required packages. Please check your internet connection and try again.
Press any key to continue . . .

UTF-8 not supported

It always outputs: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe4 in position 0: unexpected end of data
with input like "什么是地球?",
which simply means "What is Earth?".
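For context, this error is characteristic of decoding UTF-8 output in fixed-size byte chunks: a multi-byte character split across chunks is invalid on its own. A small sketch reproducing the reported message, and one way around it (not the project's actual code, just an illustration of the failure mode):

```python
import codecs

text = "什么是地球?"  # "What is Earth?", from the report
data = text.encode("utf-8")

# Decoding a partial multi-byte sequence reproduces the reported error:
# 0xe4 is the first byte of '什' and is incomplete by itself.
try:
    data[:1].decode("utf-8")
except UnicodeDecodeError as err:
    reason = err.reason
print(reason)  # → unexpected end of data

# An incremental decoder buffers partial sequences across chunks instead
# of raising, so byte-at-a-time input still yields the full string.
decoder = codecs.getincrementaldecoder("utf-8")()
out = "".join(decoder.decode(data[i:i + 1]) for i in range(len(data)))
print(out == text)  # → True
```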

Converting models from gpt4all

Can someone help me to understand why they are not converting?

Default model that is downloaded by the UI converted no problem.

I wrote a script based on install.bat, cloned llama.cpp, and then ran the command on all the models.

gpt4all-unfiltered  - does not work
ggml-vicuna-7b-4bit  - does not work
vicuna-13b-GPTQ-4bit-128g  - already been converted but does not work
LLaMa-Storytelling-4Bit  - does not work

Ignore the .og extension on the models; I renamed them so that I still have the original copy when/if it gets converted.

Let's try to convert:

sd2@sd2:~/gpt4all-ui$ bash convert-model.sh


converting .. story-llama30b-4bit-32g.safetensors
Traceback (most recent call last):
  File "/home/sd2/gpt4all-ui/./tmp/llama.cpp/migrate-ggml-2023-03-30-pr613.py", line 311, in <module>
    main()
  File "/home/sd2/gpt4all-ui/./tmp/llama.cpp/migrate-ggml-2023-03-30-pr613.py", line 272, in main
    tokens = read_tokens(fin, hparams)
  File "/home/sd2/gpt4all-ui/./tmp/llama.cpp/migrate-ggml-2023-03-30-pr613.py", line 133, in read_tokens
    word = fin.read(length)
ValueError: read length must be non-negative or -1


converting .. story-llama13b-4bit-32g.safetensors.og
Traceback (most recent call last):
  File "/home/sd2/gpt4all-ui/./tmp/llama.cpp/migrate-ggml-2023-03-30-pr613.py", line 311, in <module>
    main()
  File "/home/sd2/gpt4all-ui/./tmp/llama.cpp/migrate-ggml-2023-03-30-pr613.py", line 272, in main
    tokens = read_tokens(fin, hparams)
  File "/home/sd2/gpt4all-ui/./tmp/llama.cpp/migrate-ggml-2023-03-30-pr613.py", line 135, in read_tokens
    (score,) = struct.unpack("f", score_b)
struct.error: unpack requires a buffer of 4 bytes


converting .. ggml-vicuna-13b-1.1-q4_1.bin.og
./models/ggml-vicuna-13b-1.1-q4_1.bin.og: input ggml has already been converted to 'ggjt' magic



converting .. gpt4all-lora-unfiltered-quantized.bin.og
Traceback (most recent call last):
  File "/home/sd2/gpt4all-ui/./tmp/llama.cpp/migrate-ggml-2023-03-30-pr613.py", line 311, in <module>
    main()
  File "/home/sd2/gpt4all-ui/./tmp/llama.cpp/migrate-ggml-2023-03-30-pr613.py", line 272, in main
    tokens = read_tokens(fin, hparams)
  File "/home/sd2/gpt4all-ui/./tmp/llama.cpp/migrate-ggml-2023-03-30-pr613.py", line 133, in read_tokens
    word = fin.read(length)
ValueError: read length must be non-negative or -1
sd2@sd2:~/gpt4all-ui$

When running the vicuna model I got this error:

Checking discussions database...
llama_model_load: loading model from './models/ggml-vicuna-13b-1.1-q4_1.bin.og' - please wait ...
llama_model_load: n_vocab = 32000
llama_model_load: n_ctx   = 512
llama_model_load: n_embd  = 5120
llama_model_load: n_mult  = 256
llama_model_load: n_head  = 40
llama_model_load: n_layer = 40
llama_model_load: n_rot   = 128
llama_model_load: f16     = 5
llama_model_load: n_ff    = 13824
llama_model_load: n_parts = 2
llama_model_load: type    = 2
llama_model_load: invalid model file './models/ggml-vicuna-13b-1.1-q4_1.bin.og' (bad f16 value 5)
llama_init_from_file: failed to load model
 * Serving Flask app 'GPT4All-WebUI'
 * Debug mode: off
[2023-04-13 12:29:47,313] {_internal.py:224} INFO - WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:9600
 * Running on http://192.168.0.210:9600
[2023-04-13 12:29:47,313] {_internal.py:224} INFO - Press CTRL+C to quit

When running gpt4all-lora-unfiltered-quantized.bin.og model:

Checking discussions database...
llama_model_load: loading model from './models/gpt4all-lora-unfiltered-quantized.bin.og' - please wait ...
llama_model_load: invalid model file './models/gpt4all-lora-unfiltered-quantized.bin.og' (too old, regenerate your model files or convert them with convert-unversioned-ggml-to-ggml.py!)
llama_init_from_file: failed to load model
 * Serving Flask app 'GPT4All-WebUI'
 * Debug mode: off
[2023-04-13 12:32:34,593] {_internal.py:224} INFO - WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:9600
 * Running on http://192.168.0.210:9600
[2023-04-13 12:32:34,593] {_internal.py:224} INFO - Press CTRL+C to quit

I hope someone here has more knowledge on how to use other models.

Installation on Windows machine aborts during step: 'Do you want to download and install the GPT4All model? [Y/N]'

Expected Behavior

Trying to install GPT4ALL on a Windows 10 machine using the 'install.bat' file.

Current Behavior

During installation with 'install.bat', at the step 'Do you want to download and install the GPT4All model? [Y/N]',
an error is given and the installation aborts.

Steps to Reproduce

  1. Run 'install.bat'.
  2. Download the latest 'gpt4all-lora-quantized-ggml.bin' file. Then the step is presented 'Do you want to download and install the GPT4All model? [Y/N]'
  3. Press 'Y' or 'N'. In both cases the same message is shown: '-v was unexpected at this time.' and the installation is aborted.

Screenshots

image

Error during macOS installation

I tried to run ./install.sh but I get the following error

Running setup.py install for sentencepiece ... error
  error: subprocess-exited-with-error

  × Running setup.py install for sentencepiece did not run successfully.
  │ exit code: 1
  ╰─> [86 lines of output]
      running install
      [...]
      ./build_bundled.sh: line 19: cmake: command not found
      ./build_bundled.sh: line 20: nproc: command not found
      ./build_bundled.sh: line 20: cmake: command not found
      Traceback (most recent call last):
        File "<string>", line 2, in <module>

I checked the macOS installation details, but none of the FAQ entries solved my issue. I'll be adding it to the docs and creating the PR.

I also get the following message:

Downloading latest model
./install.sh: line 100: wget: command not found
Virtual environment created and packages installed successfully.
Every thing is setup. Just run run.sh

so I'm going to fix that too

invalid model file

Expected Behavior

Just works

Current Behavior

The model file might be corrupted... I downloaded it many times and also tried different browsers.

Steps to Reproduce

Create a new chat

[2023-04-09 23:48:55,297] {_internal.py:224} INFO - 127.0.0.1 - - [09/Apr/2023 23:48:55] "GET /new_discussion?title=test HTTP/1.1" 200 -
llama_model_load: loading model from './models/gpt4all-lora-quantized-ggml.bin' - please wait ...
./models/gpt4all-lora-quantized-ggml.bin: invalid model file (bad magic [got 0x67676d66 want 0x67676a74])
	you most likely need to regenerate your ggml files
	the benefit is you'll get 10-100x faster load times
	see https://github.com/ggerganov/llama.cpp/issues/91
	use convert-pth-to-ggml.py to regenerate from original pth
	use migrate-ggml-2023-03-30-pr613.py if you deleted originals
llama_init_from_file: failed to load model
./run.sh: line 43: 44097 Segmentation fault      (core dumped) python app.py
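The two magic numbers in the message are format tags written in the first four bytes of the file ("ggmf" is the older versioned ggml format, "ggjt" the newer mmap-able one this llama.cpp build expects). A small helper, not part of the project, to inspect a model file before loading it:

```python
import struct

def ggml_magic(path):
    """Read the 4-byte magic that llama.cpp checks first."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return {
        0x67676d6c: "ggml (unversioned, convert first)",
        0x67676d66: "ggmf (run migrate-ggml-2023-03-30-pr613.py)",
        0x67676a74: "ggjt (current, loads directly)",
    }.get(magic, hex(magic))
```

A file reporting "ggmf" here is exactly the "bad magic [got 0x67676d66 want 0x67676a74]" case above.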

Add ranking system for messages

Hi there.
Is anyone interested in adding an up-vote/down-vote ranking system to the UI and DB?
That would help with fine-tuning afterwards.

[Feature] Optional prefix for URL path

As my journey ended with the proxy-pass thing (#76), I went through the code but could not understand how to modify the Python parts to be able to define the /path/ for the base URL in the Flask app.

Later I was going through the .js parts, and the path is hardcoded for the static files :(.

It's not trivial, because people might be running this on a subdomain (gpt.example.com), but some people who don't have a free subdomain left might want to run this on a sub-path (example.com/gpt).

Something like this: https://stackoverflow.com/questions/18967441/add-a-prefix-to-all-flask-routes
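One generic way (not this project's code) to serve a Flask app under a sub-path is werkzeug's DispatcherMiddleware, which mounts the app behind a prefix without touching any view code; the hardcoded static-file paths in the .js would still need the same prefix:

```python
from flask import Flask
from werkzeug.middleware.dispatcher import DispatcherMiddleware
from werkzeug.wrappers import Response

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

# Mount the real app under /gpt; any other path gets a plain 404.
# The right-hand side captures the original wsgi_app before we
# replace it, so there is no recursion.
app.wsgi_app = DispatcherMiddleware(
    Response("Not found", status=404),
    {"/gpt": app.wsgi_app},
)
```

With this in place, /gpt/ serves the app and url_for-generated links pick up the prefix automatically via SCRIPT_NAME.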

Enhance UI quality

As Thanos said:
Fine! I'll do it myself.

I will upgrade the UI to make it more user-friendly.
I'll provide both light and dark modes.

Starting the work now. Stay tuned.

run after install - error

Expected Behavior

all is ok

Current Behavior

error on run

Steps to Reproduce

Checking discussions database...
llama_model_load: loading model from './models/gpt4all-lora-quantized-ggml.bin' - please wait ...
llama_model_load: GPTQ model detected - are you sure n_parts should be 2? we normally expect it to be 1
llama_model_load: use '--n_parts 1' if necessary
llama_model_load: n_vocab = 32001
llama_model_load: n_ctx   = 512
llama_model_load: n_embd  = 5120
llama_model_load: n_mult  = 256
llama_model_load: n_head  = 40
llama_model_load: n_layer = 40
llama_model_load: n_rot   = 128
llama_model_load: f16     = 4
llama_model_load: n_ff    = 13824
llama_model_load: n_parts = 2
llama_model_load: type    = 2
llama_model_load: ggml map size = 9702.04 MB
llama_model_load: ggml ctx size = 101.25 KB
llama_model_load: mem required  = 11750.14 MB (+ 3216.00 MB per state)
llama_model_load: loading tensors from './models/gpt4all-lora-quantized-ggml.bin'
llama_model_load: model size =  9701.60 MB / num tensors = 363
llama_init_from_file: kv self size  =  800.00 MB
Traceback (most recent call last):
  File "D:\gpt4all\gpt4all-ui\app.py", line 468, in <module>
    bot = Gpt4AllWebUI(app, args)
  File "D:\gpt4all\gpt4all-ui\app.py", line 98, in __init__
    self.prepare_a_new_chatbot()
  File "D:\gpt4all\gpt4all-ui\app.py", line 114, in prepare_a_new_chatbot
    self.condition_chatbot()
  File "D:\gpt4all\gpt4all-ui\app.py", line 130, in condition_chatbot
    if self.db.does_last_discussion_have_messages():
  File "D:\gpt4all\gpt4all-ui\db.py", line 162, in does_last_discussion_have_messages
    last_message = self.select("SELECT * FROM message WHERE discussion_id=?", (last_discussion_id,), fetch_all=False)
  File "D:\gpt4all\gpt4all-ui\db.py", line 86, in select
    cursor = conn.execute(query, params)
sqlite3.InterfaceError: Error binding parameter 0 - probably unsupported type.

Possible Solution

fix sql code
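A guess at the cause, reproduced in isolation: sqlite3 raises exactly this InterfaceError (ProgrammingError on newer Pythons) when the bound value is not a supported scalar, for example when a whole fetched row is passed instead of its id column. A sketch, not the project's code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message (id INTEGER, discussion_id INTEGER)")

# What fetchone() on "SELECT id FROM discussion ..." returns: a tuple,
# not an int. Binding the tuple itself reproduces the error.
row = (42,)
try:
    conn.execute("SELECT * FROM message WHERE discussion_id=?", (row,))
except (sqlite3.InterfaceError, sqlite3.ProgrammingError) as e:
    print("reproduced:", e)

# Unpacking the scalar out of the row binds fine.
conn.execute("SELECT * FROM message WHERE discussion_id=?", (row[0],))
```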

Context

none

Screenshots

none

Update Model Params triggers traceback and returns http 500

Expected Behavior

Model parameters should be modifiable in the UI

Current Behavior

An exception is thrown and returns http 500

Steps to Reproduce

  1. Modify the settings in the UI
  2. Click on Update Parameters
  3. See terminal

Possible Solution

The key top_k must be sent with the POST request (or the handler should fall back to a default when it is absent).
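Since the handler indexes data["top_k"] directly, any POST body missing that key becomes an HTTP 500. Reading the key with a fallback keeps partial updates working; the function name and the default of 40 below are illustrative, not the project's actual code:

```python
def update_top_k(data, current=40):
    """Parse top_k from a request body, keeping the current
    value when the key is absent."""
    return float(data.get("top_k", current))
```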

Context

[2023-04-08 10:02:39,351] {app.py:1744} ERROR - Exception on /update_model_params [POST]
Traceback (most recent call last):
  File "T:\gpt4all-ui\env\lib\site-packages\flask\app.py", line 2528, in wsgi_app
    response = self.full_dispatch_request()
  File "T:\gpt4all-ui\env\lib\site-packages\flask\app.py", line 1825, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "T:\gpt4all-ui\env\lib\site-packages\flask\app.py", line 1823, in full_dispatch_request
    rv = self.dispatch_request()
  File "T:\gpt4all-ui\env\lib\site-packages\flask\app.py", line 1799, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "T:\gpt4all-ui\app.py", line 462, in update_model_params
    self.args.top_k = float(data["top_k"])
KeyError: 'top_k'
[2023-04-08 10:02:39,351] {_internal.py:224} INFO - 127.0.0.1 - - [08/Apr/2023 10:02:39] "POST /update_model_params HTTP/1.1" 500 -

run.bat doesn't work on Windows.

When running "run.bat", the command prompt window opens briefly and then closes immediately. This prevents the webpage for the UI from being created.

It is unclear whether the issue is with "activate.bat" or "app.py".

I tried to make a log, but it's still not clear to me.

Windows install script error message

Checking for python...OK
Checking for pip...OK
Checking for venv...OK
Creating virtual environment ...OK
Activating virtual environment ...OK
Installing requirements ...
Requirement already satisfied: pip in c:\windows\system32\env\lib\site-packages (23.0.1)
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
Failed to install required packages. Please check your internet connection and try again.
Press any key to continue . . .

Where is the venv folder?

run.sh: line 43: 79546 Illegal instruction (core dumped) python app.py

Expected Behavior

It seems like I have installed everything successfully, but when I start run.sh it is unable to start.

Current Behavior

"run.sh: line 43: 79546 Illegal instruction (core dumped) python app.py"

Steps to Reproduce

Please provide detailed steps to reproduce the issue.

  1. I started with https://github.com/nomic-ai/gpt4all-ui instructions of git clone and install

  2. I received an error and went to an issue that already had it #20 and ran "sudo apt install python3.11-dev" which worked.

  3. I then went here and followed the instructions: https://github.com/nomic-ai/gpt4all-ui/blob/main/docs/Linux_Osx_Install.md
    and installed successfully; it told me to run run.sh.

  4. Now when I use bash run.sh I receive the line 43: 79546 error above.

Screenshots

Screenshot 2023-04-08 at 5 18 39 PM

Install.bat is broken

Expected Behavior

I expected it to run.

Current Behavior

It gave me an error.

Steps to Reproduce

Please provide detailed steps to reproduce the issue.

  1. Install gpt4all-ui with git
  2. Run install.bat
  3. Look at the error

Context

I have pip in my cmd path, and Python too.

Screenshots

image

Enhance renaming discussions ui

Hi there. I built the rename-discussion feature in a hurry. Can someone please change the aesthetics to make it fit with the overall theme?
Thanks

KeyError: 'add_automatic_return'

Checking discussions database...
llama_model_load: loading model from './models/gpt4all-lora-quantized-ggml.bin' - please wait ...
llama_model_load: n_vocab = 32001
llama_model_load: n_ctx = 512
llama_model_load: n_embd = 4096
llama_model_load: n_mult = 256
llama_model_load: n_head = 32
llama_model_load: n_layer = 32
llama_model_load: n_rot = 128
llama_model_load: f16 = 2
llama_model_load: n_ff = 11008
llama_model_load: n_parts = 1
llama_model_load: type = 1
llama_model_load: ggml map size = 4017.70 MB
llama_model_load: ggml ctx size = 81.25 KB
llama_model_load: mem required = 5809.78 MB (+ 2052.00 MB per state)
llama_model_load: loading tensors from './models/gpt4all-lora-quantized-ggml.bin'
llama_model_load: model size = 4017.27 MB / num tensors = 291
llama_init_from_file: kv self size = 512.00 MB
Chatbot created successfully

 * Serving Flask app 'GPT4All-WebUI'
 * Debug mode: off
[2023-04-16 14:42:34,117] {_internal.py:224} INFO - WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://localhost:9600
[2023-04-16 14:42:34,117] {_internal.py:224} INFO - Press CTRL+C to quit
Received message : hi
[2023-04-16 14:42:47,260] {_internal.py:224} INFO - 127.0.0.1 - - [16/Apr/2023 14:42:47] "POST /bot HTTP/1.1" 200 -
[2023-04-16 14:42:47,405] {_internal.py:224} ERROR - Error on request:
Traceback (most recent call last):
  File "/mnt/d/WSL/NAT-WORKDIR/chatgpt/NAT_WORKOUT/GPT4ALL/gpt4all-ui/env/lib/python3.11/site-packages/werkzeug/serving.py", line 333, in run_wsgi
    execute(self.server.app)
  File "/mnt/d/WSL/NAT-WORKDIR/chatgpt/NAT_WORKOUT/GPT4ALL/gpt4all-ui/env/lib/python3.11/site-packages/werkzeug/serving.py", line 322, in execute
    for data in application_iter:
  File "/mnt/d/WSL/NAT-WORKDIR/chatgpt/NAT_WORKOUT/GPT4ALL/gpt4all-ui/env/lib/python3.11/site-packages/werkzeug/wsgi.py", line 500, in __next__
    return self._next()
           ^^^^^^^^^^^^
  File "/mnt/d/WSL/NAT-WORKDIR/chatgpt/NAT_WORKOUT/GPT4ALL/gpt4all-ui/env/lib/python3.11/site-packages/werkzeug/wrappers/response.py", line 50, in _iter_encoded
    for item in iterable:
  File "/mnt/d/WSL/NAT-WORKDIR/chatgpt/NAT_WORKOUT/GPT4ALL/gpt4all-ui/env/lib/python3.11/site-packages/flask/helpers.py", line 149, in generator
    yield from gen
  File "/mnt/d/WSL/NAT-WORKDIR/chatgpt/NAT_WORKOUT/GPT4ALL/gpt4all-ui/env/lib/python3.11/site-packages/flask/helpers.py", line 149, in generator
    yield from gen
  File "/mnt/d/WSL/NAT-WORKDIR/chatgpt/NAT_WORKOUT/GPT4ALL/gpt4all-ui/app.py", line 207, in parse_to_prompt_stream
    self.discussion_messages = self.prepare_query(message_id)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/d/WSL/NAT-WORKDIR/chatgpt/NAT_WORKOUT/GPT4ALL/gpt4all-ui/pyGpt4All/api.py", line 125, in prepare_query
    if self.personality["add_automatic_return"]:
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'add_automatic_return'
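The personality file evidently lacks the key "add_automatic_return", so every request crashes on the lookup. A minimal way to make the lookup tolerant; the helper name and the fallback value False are illustrative, not the project's actual defaults:

```python
def add_automatic_return(personality: dict) -> bool:
    """Read the optional flag without raising KeyError when an
    older personality file omits it."""
    return bool(personality.get("add_automatic_return", False))
```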

Error when starting: "llama_init_from_file: failed to load model"

I'm getting an error when starting:

(venv) sweet gpt4all-ui % python app.py
llama_model_load: loading model from './models/gpt4all-lora-quantized-ggml.bin' - please wait ...
./models/gpt4all-lora-quantized-ggml.bin: invalid model file (bad magic [got 0x67676d66 want 0x67676a74])
	you most likely need to regenerate your ggml files
	the benefit is you'll get 10-100x faster load times
	see https://github.com/ggerganov/llama.cpp/issues/91
	use convert-pth-to-ggml.py to regenerate from original pth
	use migrate-ggml-2023-03-30-pr613.py if you deleted originals
llama_init_from_file: failed to load model
Checking discussions database...
Ok
llama_generate: seed = 1680843169

system_info: n_threads = 8 / 16 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
zsh: segmentation fault  python app.py

I have validated the md5sum matches:

sweet gpt4all-ui % md5sum models/gpt4all-lora-quantized-ggml.bin
387eeb7cba52aaa278ebc2fe386649b1  models/gpt4all-lora-quantized-ggml.bin
sweet gpt4all-ui % cat models/gpt4all-lora-quantized-ggml.bin.md5
387eeb7cba52aaa278ebc2fe386649b1
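For completeness, the same checksum can be computed from Python, hashing the file in chunks so a multi-gigabyte model never needs to fit in memory (md5_of is an illustrative helper, not part of the project):

```python
import hashlib

def md5_of(path, chunk=1 << 20):
    """MD5 of a file, streamed in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()
```

A matching checksum with a failing load points at a format mismatch (the "bad magic" above), not corruption.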

install.sh and app.py look for different model names

The install.sh file runs a wget to clone the model to the following folder:

model/gpt4all-lora-quantized-ggml.bin

The app.py file looks for a model in the models folder rather than model, and the name is slightly different:

    chatbot_bindings = Model(ggml_model='./models/gpt4all-converted.bin', n_ctx=512)

When I move the downloaded file over to the new location with the new name, though, I see this error:

llama_model_load: loading model from './models/gpt4all-converted.bin' - please wait ...
./models/gpt4all-converted.bin: invalid model file (bad magic [got 0x67676d66 want 0x67676a74])
	you most likely need to regenerate your ggml files
	the benefit is you'll get 10-100x faster load times
	see https://github.com/ggerganov/llama.cpp/issues/91
	use convert-pth-to-ggml.py to regenerate from original pth
	use migrate-ggml-2023-03-30-pr613.py if you deleted originals

Is there a missing conversion step that should be occurring here, and that's why it is failing? I was trying to run the two recommended llama.cpp scripts but couldn't quite figure them out. I'm on a Mac, for what it's worth.

Model conversion error

Expected Behavior

Conversion of model gpt4all-lora-quantized-ggml.bin

Current Behavior

Do you want to convert the selected model to the new format? [Y,N]?Y

Converting the model to the new format...
Cloning into 'tmp\llama.cpp'...
remote: Enumerating objects: 1707, done.
remote: Counting objects: 100% (1707/1707), done.
remote: Compressing objects: 100% (623/623), done.
remote: Total 1707 (delta 1088), reused 1629 (delta 1050), pack-reused 0Receiving objects: 100% (1707/1707), 1.52 MiB | Receiving objects: 100% (1707/1707), 1.87 MiB | 3.10 MiB/s, done.

Resolving deltas: 100% (1088/1088), done.
1 file(s) moved.
C:\Users\jtone\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\python.exe: can't open file 'C:\gpt4all-ui\tmp\llama.cpp\migrate-ggml-2023-03-30-pr613.py': [Errno 2] No such file or directory

Error during model conversion. Restarting...
1 file(s) moved.

Steps to Reproduce

run install.bat
press B
Select Y option
Select Y option

Possible Solution

I manually copied the missing file into the tmp folder that was created, and then it worked as expected.

Context

Os Windows 11
Cpu AMD Ryzen 7 5700G
Ram 32.0 GB (27.9 GB usable)

pyllamacpp does not support M1-chip MacBooks

Current Behavior

Please describe the behavior you are currently experiencing.

Traceback (most recent call last):
  File "/Users/laihenyi/Documents/GitHub/gpt4all-ui/app.py", line 29, in <module>
    from pyllamacpp.model import Model
  File "/Users/laihenyi/Documents/GitHub/gpt4all-ui/env/lib/python3.11/site-packages/pyllamacpp/model.py", line 21, in <module>
    import _pyllamacpp as pp
ImportError: dlopen(/Users/laihenyi/Documents/GitHub/gpt4all-ui/env/lib/python3.11/site-packages/_pyllamacpp.cpython-311-darwin.so, 0x0002): tried: '/Users/laihenyi/Documents/GitHub/gpt4all-ui/env/lib/python3.11/site-packages/_pyllamacpp.cpython-311-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/laihenyi/Documents/GitHub/gpt4all-ui/env/lib/python3.11/site-packages/_pyllamacpp.cpython-311-darwin.so' (no such file), '/Users/laihenyi/Documents/GitHub/gpt4all-ui/env/lib/python3.11/site-packages/_pyllamacpp.cpython-311-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))
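The dlopen message says the installed _pyllamacpp extension is an x86_64 build while the running Python needs arm64. A quick way to see which architecture the interpreter itself reports (and therefore which wheels pip will select):

```python
import platform

# "arm64" for a native Apple Silicon Python; "x86_64" when Python
# itself runs under Rosetta, in which case x86_64 wheels are expected.
print(platform.machine())
```

If it prints arm64 and the x86_64 wheel was still installed, `pip install --force-reinstall --no-binary :all: pyllamacpp` would attempt a native source build instead (assuming the package can be built from source on that machine).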

Steps to Reproduce

Please provide detailed steps to reproduce the issue.

  1. Step 1
  2. Step 2
  3. Step 3

Possible Solution

If you have any suggestions on how to fix the issue, please describe them here.

Context

Please provide any additional context about the issue.

Screenshots

If applicable, add screenshots to help explain the issue.

Run.bat stuck on logo - windows10

Everything installed fine, no problems, and I converted the model. Then when I run run.bat it gets stuck on this:

E:\VBA_PROJECTS\Git\gpt4all-ui>echo off
[the ASCII-art logo banner prints here; nothing further happens]

Testing this on a different PC.

Windows 10
CPU 3770 intel
GPU p2000 nvidia
RAM 32gb

On my other PC it ran with no problem, though that one has a different CPU, RAM, GPU and OS :)

Not sure how to troubleshoot this, as it gives me no error.

When running install.bat I get an error

[garbled ASCII-art logo banner]
Checking for python...OK
Checking for pip...OK
Checking for venv...OK
Creating virtual environment ...OK
Activating virtual environment ...OK
Installing requirements ...
Requirement already satisfied: pip in c:\users\areleh\desktop\gpt4all-ui\env\lib\site-packages (23.0.1)
Collecting flask
Using cached Flask-2.2.3-py3-none-any.whl (101 kB)
Collecting nomic
Using cached nomic-1.1.6.tar.gz (29 kB)
Preparing metadata (setup.py) ... done
Collecting pytest
Using cached pytest-7.3.0-py3-none-any.whl (320 kB)
Collecting pyllamacpp
Using cached pyllamacpp-1.0.6-cp38-cp38-win32.whl (160 kB)
Collecting click>=8.0
Using cached click-8.1.3-py3-none-any.whl (96 kB)
Collecting importlib-metadata>=3.6.0
Using cached importlib_metadata-6.3.0-py3-none-any.whl (22 kB)
Collecting itsdangerous>=2.0
Using cached itsdangerous-2.1.2-py3-none-any.whl (15 kB)
Collecting Jinja2>=3.0
Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting Werkzeug>=2.2.2
Using cached Werkzeug-2.2.3-py3-none-any.whl (233 kB)
Collecting jsonlines
Using cached jsonlines-3.1.0-py3-none-any.whl (8.6 kB)
Collecting loguru
Using cached loguru-0.6.0-py3-none-any.whl (58 kB)
Collecting rich
Using cached rich-13.3.3-py3-none-any.whl (238 kB)
Collecting requests
Using cached requests-2.28.2-py3-none-any.whl (62 kB)
Collecting numpy
Using cached numpy-1.24.2-cp38-cp38-win32.whl (12.5 MB)
Collecting pydantic
Using cached pydantic-1.10.7-py3-none-any.whl (157 kB)
Collecting wonderwords
Using cached wonderwords-2.2.0-py3-none-any.whl (44 kB)
Collecting tqdm
Using cached tqdm-4.65.0-py3-none-any.whl (77 kB)
Collecting cohere
Using cached cohere-4.1.4-py3-none-any.whl (28 kB)
Collecting pyarrow
Using cached pyarrow-11.0.0.tar.gz (1.0 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting packaging
Using cached packaging-23.0-py3-none-any.whl (42 kB)
Collecting tomli>=1.0.0
Using cached tomli-2.0.1-py3-none-any.whl (12 kB)
Collecting exceptiongroup>=1.0.0rc8
Using cached exceptiongroup-1.1.1-py3-none-any.whl (14 kB)
Collecting pluggy<2.0,>=0.12
Using cached pluggy-1.0.0-py2.py3-none-any.whl (13 kB)
Collecting colorama
Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting iniconfig
Using cached iniconfig-2.0.0-py3-none-any.whl (5.9 kB)
Collecting streamlit
Using cached streamlit-1.21.0-py2.py3-none-any.whl (9.7 MB)
Collecting sentencepiece
Using cached sentencepiece-0.1.97-cp38-cp38-win32.whl (1.1 MB)
Collecting streamlit-ace
Using cached streamlit_ace-0.1.1-py3-none-any.whl (3.6 MB)
Collecting pyllamacpp
Using cached pyllamacpp-1.0.5-cp38-cp38-win32.whl (160 kB)
Using cached pyllamacpp-1.0.3-cp38-cp38-win32.whl (162 kB)
Using cached pyllamacpp-1.0.2-cp38-cp38-win32.whl (162 kB)
Using cached pyllamacpp-1.0.1-cp38-cp38-win32.whl (162 kB)
Collecting zipp>=0.5
Using cached zipp-3.15.0-py3-none-any.whl (6.8 kB)
Collecting MarkupSafe>=2.0
Using cached MarkupSafe-2.1.2-cp38-cp38-win32.whl (16 kB)
Collecting aiohttp<4.0,>=3.0
Using cached aiohttp-3.8.4-cp38-cp38-win32.whl (308 kB)
Collecting backoff<3.0,>=2.0
Using cached backoff-2.2.1-py3-none-any.whl (15 kB)
Collecting urllib3<1.27,>=1.21.1
Using cached urllib3-1.26.15-py2.py3-none-any.whl (140 kB)
Collecting certifi>=2017.4.17
Using cached certifi-2022.12.7-py3-none-any.whl (155 kB)
Collecting idna<4,>=2.5
Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting charset-normalizer<4,>=2
Using cached charset_normalizer-3.1.0-cp38-cp38-win32.whl (89 kB)
Collecting attrs>=19.2.0
Using cached attrs-22.2.0-py3-none-any.whl (60 kB)
Collecting win32-setctime>=1.0.0
Using cached win32_setctime-1.1.0-py3-none-any.whl (3.6 kB)
Collecting typing-extensions>=4.2.0
Using cached typing_extensions-4.5.0-py3-none-any.whl (27 kB)
Collecting pygments<3.0.0,>=2.13.0
Using cached Pygments-2.14.0-py3-none-any.whl (1.1 MB)
Collecting markdown-it-py<3.0.0,>=2.2.0
Using cached markdown_it_py-2.2.0-py3-none-any.whl (84 kB)
Collecting cachetools>=4.0
Using cached cachetools-5.3.0-py3-none-any.whl (9.3 kB)
Collecting pandas<2,>=0.25
Using cached pandas-1.5.3-cp38-cp38-win32.whl (9.8 MB)
Collecting validators>=0.2
Using cached validators-0.20.0.tar.gz (30 kB)
Preparing metadata (setup.py) ... done
Collecting toml
Using cached toml-0.10.2-py2.py3-none-any.whl (16 kB)
Collecting tornado>=6.0.3
Using cached tornado-6.2-cp37-abi3-win32.whl (424 kB)
Collecting tzlocal>=1.1
Using cached tzlocal-4.3-py3-none-any.whl (20 kB)
Collecting gitpython!=3.1.19
Using cached GitPython-3.1.31-py3-none-any.whl (184 kB)
Collecting pydeck>=0.1.dev5
Using cached pydeck-0.8.0-py2.py3-none-any.whl (4.7 MB)
Collecting pympler>=0.9
Using cached Pympler-1.0.1-py3-none-any.whl (164 kB)
Collecting protobuf<4,>=3.12
Using cached protobuf-3.20.3-cp38-cp38-win32.whl (780 kB)
Collecting python-dateutil
Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting altair<5,>=3.2.0
Using cached altair-4.2.2-py3-none-any.whl (813 kB)
Collecting watchdog
Using cached watchdog-3.0.0-py3-none-win32.whl (82 kB)
Collecting blinker>=1.0.0
Using cached blinker-1.6.1-py3-none-any.whl (13 kB)
Collecting pillow>=6.2.0
Using cached Pillow-9.5.0-cp38-cp38-win32.whl (2.2 MB)
Collecting async-timeout<5.0,>=4.0.0a3
Using cached async_timeout-4.0.2-py3-none-any.whl (5.8 kB)
Collecting multidict<7.0,>=4.5
Using cached multidict-6.0.4-cp38-cp38-win32.whl (25 kB)
Collecting yarl<2.0,>=1.0
Using cached yarl-1.8.2-cp38-cp38-win32.whl (53 kB)
Collecting frozenlist>=1.1.1
Using cached frozenlist-1.3.3-cp38-cp38-win32.whl (31 kB)
Collecting aiosignal>=1.1.2
Using cached aiosignal-1.3.1-py3-none-any.whl (7.6 kB)
Collecting jsonschema>=3.0
Using cached jsonschema-4.17.3-py3-none-any.whl (90 kB)
Collecting toolz
Using cached toolz-0.12.0-py3-none-any.whl (55 kB)
Collecting entrypoints
Using cached entrypoints-0.4-py3-none-any.whl (5.3 kB)
Collecting gitdb<5,>=4.0.1
Using cached gitdb-4.0.10-py3-none-any.whl (62 kB)
Collecting mdurl~=0.1
Using cached mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Collecting pytz>=2020.1
Using cached pytz-2023.3-py2.py3-none-any.whl (502 kB)
Collecting six>=1.5
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting tzdata
Using cached tzdata-2023.3-py2.py3-none-any.whl (341 kB)
Collecting backports.zoneinfo
Using cached backports.zoneinfo-0.2.1-cp38-cp38-win32.whl (36 kB)
Collecting pytz-deprecation-shim
Using cached pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl (15 kB)
Collecting decorator>=3.4.0
Using cached decorator-5.1.1-py3-none-any.whl (9.1 kB)
Collecting smmap<6,>=3.0.1
Using cached smmap-5.0.0-py3-none-any.whl (24 kB)
Collecting importlib-resources>=1.4.0
Using cached importlib_resources-5.12.0-py3-none-any.whl (36 kB)
Collecting pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0
Using cached pyrsistent-0.19.3-cp38-cp38-win32.whl (60 kB)
Collecting pkgutil-resolve-name>=1.3.10
Using cached pkgutil_resolve_name-1.3.10-py3-none-any.whl (4.7 kB)
Building wheels for collected packages: pyarrow
Building wheel for pyarrow (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for pyarrow (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [336 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win32-cpython-38
creating build\lib.win32-cpython-38\pyarrow
copying pyarrow\benchmark.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\cffi.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\compute.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\conftest.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\csv.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\cuda.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\dataset.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\feather.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\filesystem.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\flight.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\fs.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\hdfs.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\ipc.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\json.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\jvm.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\orc.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\pandas_compat.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\plasma.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\serialization.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\substrait.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\types.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\util.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_compute_docstrings.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_generated_version.py -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\__init__.py -> build\lib.win32-cpython-38\pyarrow
creating build\lib.win32-cpython-38\pyarrow\interchange
copying pyarrow\interchange\buffer.py -> build\lib.win32-cpython-38\pyarrow\interchange
copying pyarrow\interchange\column.py -> build\lib.win32-cpython-38\pyarrow\interchange
copying pyarrow\interchange\dataframe.py -> build\lib.win32-cpython-38\pyarrow\interchange
copying pyarrow\interchange\from_dataframe.py -> build\lib.win32-cpython-38\pyarrow\interchange
copying pyarrow\interchange\__init__.py -> build\lib.win32-cpython-38\pyarrow\interchange
creating build\lib.win32-cpython-38\pyarrow\parquet
copying pyarrow\parquet\core.py -> build\lib.win32-cpython-38\pyarrow\parquet
copying pyarrow\parquet\encryption.py -> build\lib.win32-cpython-38\pyarrow\parquet
copying pyarrow\parquet\__init__.py -> build\lib.win32-cpython-38\pyarrow\parquet
creating build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\arrow_16597.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\arrow_7980.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\conftest.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\deserialize_buffer.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\pandas_examples.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\pandas_threaded_import.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\read_record_batch.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\strategies.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_adhoc_memory_leak.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_array.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_builder.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_cffi.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_compute.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_convert_builtin.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_cpp_internals.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_csv.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_cuda.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_cuda_numba_interop.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_cython.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_dataset.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_deprecations.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_exec_plan.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_extension_type.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_feather.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_filesystem.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_flight.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_fs.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_gandiva.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_gdb.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_hdfs.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_io.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_ipc.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_json.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_jvm.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_memory.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_misc.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_orc.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_pandas.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_plasma.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_plasma_tf_op.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_scalars.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_schema.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_serialization.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_serialization_deprecated.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_sparse_tensor.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_strategies.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_substrait.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_table.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_tensor.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_types.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_udf.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\test_util.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\util.py -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\__init__.py -> build\lib.win32-cpython-38\pyarrow\tests
creating build\lib.win32-cpython-38\pyarrow\vendored
copying pyarrow\vendored\docscrape.py -> build\lib.win32-cpython-38\pyarrow\vendored
copying pyarrow\vendored\version.py -> build\lib.win32-cpython-38\pyarrow\vendored
copying pyarrow\vendored\__init__.py -> build\lib.win32-cpython-38\pyarrow\vendored
creating build\lib.win32-cpython-38\pyarrow\tests\interchange
copying pyarrow\tests\interchange\test_conversion.py -> build\lib.win32-cpython-38\pyarrow\tests\interchange
copying pyarrow\tests\interchange\test_interchange_spec.py -> build\lib.win32-cpython-38\pyarrow\tests\interchange
copying pyarrow\tests\interchange\__init__.py -> build\lib.win32-cpython-38\pyarrow\tests\interchange
creating build\lib.win32-cpython-38\pyarrow\tests\parquet
copying pyarrow\tests\parquet\common.py -> build\lib.win32-cpython-38\pyarrow\tests\parquet
copying pyarrow\tests\parquet\conftest.py -> build\lib.win32-cpython-38\pyarrow\tests\parquet
copying pyarrow\tests\parquet\encryption.py -> build\lib.win32-cpython-38\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_basic.py -> build\lib.win32-cpython-38\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_compliant_nested_type.py -> build\lib.win32-cpython-38\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_dataset.py -> build\lib.win32-cpython-38\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_data_types.py -> build\lib.win32-cpython-38\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_datetime.py -> build\lib.win32-cpython-38\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_encryption.py -> build\lib.win32-cpython-38\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_metadata.py -> build\lib.win32-cpython-38\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_pandas.py -> build\lib.win32-cpython-38\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_parquet_file.py -> build\lib.win32-cpython-38\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_parquet_writer.py -> build\lib.win32-cpython-38\pyarrow\tests\parquet
copying pyarrow\tests\parquet\__init__.py -> build\lib.win32-cpython-38\pyarrow\tests\parquet
running egg_info
writing pyarrow.egg-info\PKG-INFO
writing dependency_links to pyarrow.egg-info\dependency_links.txt
writing entry points to pyarrow.egg-info\entry_points.txt
writing requirements to pyarrow.egg-info\requires.txt
writing top-level names to pyarrow.egg-info\top_level.txt
listing git files failed - pretending there aren't any
reading manifest file 'pyarrow.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '..\LICENSE.txt'
warning: no files found matching '..\NOTICE.txt'
warning: no previously-included files matching '*.so' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*~' found anywhere in distribution
warning: no previously-included files matching '#*' found anywhere in distribution
warning: no previously-included files matching '.git*' found anywhere in distribution
warning: no previously-included files matching '.DS_Store' found anywhere in distribution
no previously-included directories found matching '.asv'
writing manifest file 'pyarrow.egg-info\SOURCES.txt'
copying pyarrow\__init__.pxd -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_compute.pxd -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_compute.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_csv.pxd -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_csv.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_cuda.pxd -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_cuda.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_dataset.pxd -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_dataset.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_dataset_orc.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_dataset_parquet.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_exec_plan.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_feather.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_flight.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_fs.pxd -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_fs.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_gcsfs.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_hdfs.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_hdfsio.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_json.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_orc.pxd -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_orc.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_parquet.pxd -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_parquet.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_parquet_encryption.pxd -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_parquet_encryption.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_plasma.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_pyarrow_cpp_tests.pxd -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_pyarrow_cpp_tests.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_s3fs.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\_substrait.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\array.pxi -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\benchmark.pxi -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\builder.pxi -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\compat.pxi -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\config.pxi -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\error.pxi -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\gandiva.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\io.pxi -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\ipc.pxi -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\lib.pxd -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\lib.pyx -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\memory.pxi -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\pandas-shim.pxi -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\public-api.pxi -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\scalar.pxi -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\serialization.pxi -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\table.pxi -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\tensor.pxi -> build\lib.win32-cpython-38\pyarrow
copying pyarrow\types.pxi -> build\lib.win32-cpython-38\pyarrow
creating build\lib.win32-cpython-38\pyarrow\includes
copying pyarrow\includes\common.pxd -> build\lib.win32-cpython-38\pyarrow\includes
copying pyarrow\includes\libarrow.pxd -> build\lib.win32-cpython-38\pyarrow\includes
copying pyarrow\includes\libarrow_cuda.pxd -> build\lib.win32-cpython-38\pyarrow\includes
copying pyarrow\includes\libarrow_dataset.pxd -> build\lib.win32-cpython-38\pyarrow\includes
copying pyarrow\includes\libarrow_dataset_parquet.pxd -> build\lib.win32-cpython-38\pyarrow\includes
copying pyarrow\includes\libarrow_feather.pxd -> build\lib.win32-cpython-38\pyarrow\includes
copying pyarrow\includes\libarrow_flight.pxd -> build\lib.win32-cpython-38\pyarrow\includes
copying pyarrow\includes\libarrow_fs.pxd -> build\lib.win32-cpython-38\pyarrow\includes
copying pyarrow\includes\libarrow_python.pxd -> build\lib.win32-cpython-38\pyarrow\includes
copying pyarrow\includes\libarrow_substrait.pxd -> build\lib.win32-cpython-38\pyarrow\includes
copying pyarrow\includes\libgandiva.pxd -> build\lib.win32-cpython-38\pyarrow\includes
copying pyarrow\includes\libplasma.pxd -> build\lib.win32-cpython-38\pyarrow\includes
copying pyarrow\includes\__init__.pxd -> build\lib.win32-cpython-38\pyarrow\includes
creating build\lib.win32-cpython-38\pyarrow\tensorflow
copying pyarrow\tensorflow\plasma_op.cc -> build\lib.win32-cpython-38\pyarrow\tensorflow
copying pyarrow\tests\bound_function_visit_strings.pyx -> build\lib.win32-cpython-38\pyarrow\tests
copying pyarrow\tests\pyarrow_cython_example.pyx -> build\lib.win32-cpython-38\pyarrow\tests
creating build\lib.win32-cpython-38\pyarrow\src
creating build\lib.win32-cpython-38\pyarrow\src\arrow
creating build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\CMakeLists.txt -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\api.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\arrow_to_pandas.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\arrow_to_pandas.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\arrow_to_python_internal.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\benchmark.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\benchmark.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\common.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\common.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\csv.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\csv.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\datetime.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\datetime.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\decimal.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\decimal.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\deserialize.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\deserialize.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\extension_type.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\extension_type.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\filesystem.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\filesystem.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\flight.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\flight.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\gdb.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\gdb.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\helpers.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\helpers.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\inference.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\inference.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\init.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\init.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\io.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\io.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\ipc.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\ipc.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\iterators.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\numpy_convert.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\numpy_convert.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\numpy_internal.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\numpy_interop.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\numpy_to_arrow.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\numpy_to_arrow.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\parquet_encryption.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\parquet_encryption.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\pch.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\platform.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\pyarrow.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\pyarrow.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\pyarrow_api.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\pyarrow_lib.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\python_test.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\python_test.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\python_to_arrow.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\python_to_arrow.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\serialize.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\serialize.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\type_traits.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\udf.cc -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\udf.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\visibility.h -> build\lib.win32-cpython-38\pyarrow\src\arrow\python
creating build\lib.win32-cpython-38\pyarrow\tests\data
creating build\lib.win32-cpython-38\pyarrow\tests\data\feather
copying pyarrow\tests\data\feather\v0.17.0.version.2-compression.lz4.feather -> build\lib.win32-cpython-38\pyarrow\tests\data\feather
creating build\lib.win32-cpython-38\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\README.md -> build\lib.win32-cpython-38\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\TestOrcFile.emptyFile.jsn.gz -> build\lib.win32-cpython-38\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\TestOrcFile.emptyFile.orc -> build\lib.win32-cpython-38\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\TestOrcFile.test1.jsn.gz -> build\lib.win32-cpython-38\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\TestOrcFile.test1.orc -> build\lib.win32-cpython-38\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\TestOrcFile.testDate1900.jsn.gz -> build\lib.win32-cpython-38\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\TestOrcFile.testDate1900.orc -> build\lib.win32-cpython-38\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\decimal.jsn.gz -> build\lib.win32-cpython-38\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\decimal.orc -> build\lib.win32-cpython-38\pyarrow\tests\data\orc
creating build\lib.win32-cpython-38\pyarrow\tests\data\parquet
copying pyarrow\tests\data\parquet\v0.7.1.all-named-index.parquet -> build\lib.win32-cpython-38\pyarrow\tests\data\parquet
copying pyarrow\tests\data\parquet\v0.7.1.column-metadata-handling.parquet -> build\lib.win32-cpython-38\pyarrow\tests\data\parquet
copying pyarrow\tests\data\parquet\v0.7.1.parquet -> build\lib.win32-cpython-38\pyarrow\tests\data\parquet
copying pyarrow\tests\data\parquet\v0.7.1.some-named-index.parquet -> build\lib.win32-cpython-38\pyarrow\tests\data\parquet
running build_ext
creating C:\Users\areleh\AppData\Local\Temp\pip-install-y5ug8gd5\pyarrow_5157ded39da74e2bb194e993669fee23\build\temp.win32-cpython-38
Traceback (most recent call last):
File "C:\Users\areleh\Desktop\gpt4all-ui\env\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\areleh\Desktop\gpt4all-ui\env\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "C:\Users\areleh\Desktop\gpt4all-ui\env\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 251, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\build_meta.py", line 413, in build_wheel
return self._build_with_temp_dir(['bdist_wheel'], '.whl',
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\build_meta.py", line 398, in _build_with_temp_dir
self.run_setup()
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\build_meta.py", line 484, in run_setup
super(BuildMetaLegacyBackend,
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\build_meta.py", line 335, in run_setup
exec(code, locals())
File "<string>", line 498, in <module>
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\__init__.py", line 108, in setup
return distutils.core.setup(**attrs)
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 185, in setup
return run_commands(dist)
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands
dist.run_commands()
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands
self.run_command(cmd)
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\dist.py", line 1221, in run_command
super().run_command(command)
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\wheel\bdist_wheel.py", line 343, in run
self.run_command("build")
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\dist.py", line 1221, in run_command
super().run_command(command)
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\_distutils\command\build.py", line 131, in run
self.run_command(cmd_name)
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\dist.py", line 1221, in run_command
super().run_command(command)
File "C:\Users\areleh\AppData\Local\Temp\pip-build-env-pg4e9zum\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "<string>", line 94, in run
File "<string>", line 325, in _run_cmake
RuntimeError: Not supported on 32-bit Windows
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pyarrow
Failed to build pyarrow
ERROR: Could not build wheels for pyarrow, which is required to install pyproject.toml-based projects
Failed to install required packages. Please check your internet connection and try again.
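The root cause is in the last traceback line: `RuntimeError: Not supported on 32-bit Windows`. pyarrow publishes no 32-bit Windows wheels, so pip fell back to a source build that pyarrow's CMake step refuses on a 32-bit interpreter. A quick sketch to confirm whether your interpreter is the 32-bit build (standard-library calls only; no pyarrow needed):

```python
import platform
import struct

# Pointer size reveals the interpreter's bitness: 4 bytes -> 32-bit,
# 8 bytes -> 64-bit. A 32-bit Python on Windows cannot get a prebuilt
# pyarrow wheel, which is what triggered the failed source build above.
bits = struct.calcsize("P") * 8
print(f"Python {platform.python_version()} is a {bits}-bit build")
if bits == 32:
    print("Install 64-bit Python and recreate the virtual environment "
          "so pip can download a prebuilt pyarrow wheel instead.")
```

If the check prints 32, reinstalling the 64-bit Python from python.org and recreating `env` should let `pip install` resolve pyarrow from a wheel rather than building it.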

Can't run: invalid model file

Got Error

llama_model_load: loading model from './models/gpt4all-lora-quantized-ggml.bin' - please wait ...
./models/gpt4all-lora-quantized-ggml.bin: invalid model file (bad magic [got 0x67676d66 want 0x67676a74])
you most likely need to regenerate your ggml files
the benefit is you'll get 10-100x faster load times
see ggerganov/llama.cpp#91
use convert-pth-to-ggml.py to regenerate from original pth
use migrate-ggml-2023-03-30-pr613.py if you deleted originals
llama_init_from_file: failed to load model
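The "bad magic" pair in the error identifies the mismatch: the file on disk is the older ggmf container (magic 0x67676d66) while this llama.cpp build expects the newer ggjt layout (0x67676a74). A small sketch, using only the magic values quoted in the log, that reads the first four bytes of a model file to report which container it is:

```python
import struct

# ggml container magics, as quoted in the error message above. llama.cpp
# reads the magic as a little-endian uint32 and compares it to these.
MAGICS = {
    0x67676D6C: "ggml",  # oldest, unversioned format
    0x67676D66: "ggmf",  # versioned format (what this file is)
    0x67676A74: "ggjt",  # mmap-able format (what this build wants)
}

def ggml_magic(path):
    """Return the name of the model file's container format."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return MAGICS.get(magic, f"unknown (0x{magic:08x})")
```

A file that reports `ggmf` can be converted with the `migrate-ggml-2023-03-30-pr613.py` script the error message points to; as noted there, the migrated `ggjt` file also loads much faster.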

Environment

M1 Pro Mac
The model file's md5 is 387eeb7cba52aaa278ebc2fe386649b1, which matches the md5 listed on the website.
However, I can run the original gpt4all clone from GitHub: just clone, download the model file into the chat folder, and run.

gpt4all-ui: load model onto GPU, not CPU

Expected Behavior

Ability to invoke the ggml model in GPU mode using gpt4all-ui.

Current Behavior

It is unclear how to pass the parameters, or which file to modify, to make the model calls use the GPU.

Steps to Reproduce

Install gpt4all-ui
Run app.py
The model is loaded via CPU only

Possible Solution

Pass the GPU parameters to the script, or edit the underlying config files (which ones?).

Context

I have gpt4all running nicely with the ggml model via GPU on a Linux GPU server. I am trying to use the fantastic gpt4all-ui application, but I am struggling to figure out how to have the UI app invoke the model on the server's GPU. It is stunningly slow with CPU-based loading. Many thanks to any advice providers!
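For anyone comparing notes on this issue: gpt4all-ui does not document a GPU switch here, but if the active backend happens to be llama-cpp-python, GPU offload is normally requested through that library's `n_gpu_layers` constructor argument. A hypothetical sketch (the parameter names come from llama-cpp-python, not from any documented gpt4all-ui setting; whether the UI forwards them is exactly the open question above):

```python
# Hypothetical: the keyword arguments llama-cpp-python uses to request
# GPU offload. Whether gpt4all-ui exposes or forwards these is the open
# question in this issue.
gpu_kwargs = {
    "model_path": "./models/gpt4all-lora-quantized-ggml.bin",
    "n_gpu_layers": -1,  # -1 asks llama-cpp-python to offload all layers
    "n_ctx": 2048,       # context window size
}

# Requires llama-cpp-python compiled with GPU support (e.g. cuBLAS):
# from llama_cpp import Llama
# llm = Llama(**gpu_kwargs)
```

If the backend binding is something else (e.g. the original llama.cpp Python wrapper), the equivalent knob, where one exists, will have a different name.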
