Comments (7)
See huggingface/transformers#30334, this is not our script and sadly we can't provide support for it.
from llama.cpp.
You'll want to use convert-hf-to-gguf.py. The Llama3 BPE pretokenizer is supported by default in convert-hf-to-gguf-update.py.
How do I convert the original Meta-Llama-3 weights to gguf?
❯ ls models/Meta-Llama-3-8B
checklist.chk consolidated.00.pth params.json tokenizer.model
Convert the .pth weights yourself using https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py, or download the weights from Hugging Face. Then you can use convert-hf-to-gguf.py.
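The two-step conversion described above can be sketched as a pair of commands. This is a rough sketch, not taken from the thread: the --llama_version flag, the output paths, and the working directories are illustrative assumptions, and the flags may differ between transformers versions.

```shell
# Step 1 (assumption: run from a transformers checkout): convert the Meta
# .pth checkpoint into a Hugging Face-format directory.
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir models/Meta-Llama-3-8B \
    --llama_version 3 \
    --output_dir models/Meta-Llama-3-8B-hf

# Step 2 (assumption: run from a llama.cpp checkout): convert the Hugging
# Face directory to a GGUF file.
python convert-hf-to-gguf.py models/Meta-Llama-3-8B-hf \
    --outfile models/Meta-Llama-3-8B.gguf
```

The intermediate Hugging Face directory can be deleted once the GGUF file is produced, or kept if you also want to run the model with transformers directly.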
> Convert the .pth weights yourself using https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py, or download the weights from Hugging Face. Then you can use convert-hf-to-gguf.py.
I have tried it:
python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir ../llama.cpp/models/Meta-Llama-3-8B --output_dir converted
but it produces an error:
.venv/lib/python3.10/site-packages/sentencepiece/__init__.py", line 316, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: could not parse ModelProto from ../llama.cpp/models/Meta-Llama-3-8B/tokenizer.model
Thanks a lot!
If you update to the latest version of transformers, the script supports the conversion.
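A sketch of the suggested fix, under the assumption that the SentencePiece parse failure comes from an older converter that predates Llama 3's BPE-style tokenizer.model. The paths below just reuse the ones from the failing command; nothing else is taken from the thread.

```shell
# Upgrade transformers so the converter recognizes the Llama 3 tokenizer format.
pip install --upgrade transformers

# Re-run the same conversion command that previously failed.
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir ../llama.cpp/models/Meta-Llama-3-8B \
    --output_dir converted
```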