
Comments (14)

mlabonne commented on May 18, 2024

This problem might come from the fact that Microsoft changed the architecture after phi-2's release. The models that were fine-tuned still use the old one. It might work if you find a copy of the old base model. See the difference in mergekit:
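One way to spot such an architecture mismatch is to compare what the two checkpoints declare in their `config.json`. Here is a minimal sketch of that kind of check; the config dicts below are illustrative stand-ins, not the real files (in practice you would first download each model's `config.json`, e.g. with `huggingface_hub`):

```python
import json

# Illustrative stand-ins for two downloaded config.json files; the values are
# assumptions for demonstration, not the actual contents of any checkpoint.
base_config = json.loads('{"architectures": ["PhiForCausalLM"], "model_type": "phi"}')
finetune_config = json.loads('{"architectures": ["PhiForCausalLM"], "model_type": "phi-msft"}')

# mergekit expects every input model to share one architecture definition,
# so any mismatch on these keys is a red flag before merging.
for key in ("architectures", "model_type"):
    if base_config.get(key) != finetune_config.get(key):
        print(f"mismatch on {key!r}: {base_config.get(key)} vs {finetune_config.get(key)}")
```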

from llm-course.

Venkman42 commented on May 18, 2024

I got it working now, thanks to some help from another friendly Hugging Face user. I had to use an older version of llama.cpp, run `convert-hf-to-gguf.py` first, and then quantize the result with `quantize` (https://huggingface.co/brittlewis12/phi-2-orange-GGUF/discussions/1).

Thanks again for all your help. I finally have my first working merge now thanks to you :)
Feel free to check out my little Phiter https://huggingface.co/Venkman42/Phiter

I gave it a test run and so far I'm quite satisfied with the results. At least it doesn't seem to perform worse than the base models.

Would you kindly do me the honor of running an eval on it for YALL?


mlabonne commented on May 18, 2024

Haha well done! Sure, running the eval now :)


Venkman42 commented on May 18, 2024

Thanks, I'll try it with the old version. But I somehow got the same error when attempting a passthrough merge between phi-2 and deepseek, except that time the error was for the deepseek model. Is it generally not possible to merge LLMs with different architectures using passthrough? Is there a blog post where you already cover this that I haven't seen?


Venkman42 commented on May 18, 2024

> This problem might come from the fact that Microsoft changed the architecture after phi-2's release. The models that were fine-tuned still use the old one. It might work if you find a copy of the old base model. See the difference in mergekit:

I just tried it again with amgadhasan/phi-2 as the base model, which should be the old phi-2, but now I got this error:
`RuntimeError: Tensor lm_head.ln.weight required but not present in model amgadhasan/phi-2`

Do I need to change a setting in LazyMergekit so it pulls the configuration for the old phi-2?


mlabonne commented on May 18, 2024

Looks like you still don't have the same tensors in all of your models. You can quickly check the names of your layers on the model card by clicking on the arrow next to "safetensors".
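A programmatic version of that check is to diff the tensor-name sets of the models before merging. A rough sketch with made-up names (a real script would read each checkpoint's `model.safetensors.index.json` or the safetensors headers instead of hard-coding them):

```python
# Tensor names from two hypothetical checkpoints; real ones would come from
# each model's safetensors header or its model.safetensors.index.json weight_map.
model_a = {"transformer.h.0.mlp.fc1.weight", "transformer.h.0.mlp.fc1.bias", "lm_head.ln.weight"}
model_b = {"model.layers.0.mlp.fc1.weight", "model.layers.0.mlp.fc1.bias", "lm_head.weight"}

# Any asymmetric difference means mergekit will look for tensors that
# simply don't exist in one of the models.
only_in_a = sorted(model_a - model_b)
only_in_b = sorted(model_b - model_a)
if only_in_a or only_in_b:
    print("tensor sets differ:")
    print("  only in A:", only_in_a)
    print("  only in B:", only_in_b)
```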


Venkman42 commented on May 18, 2024

> Looks like you still don't have the same tensors in all of your models. You can quickly check the names of your layers on the model card by clicking on the arrow next to "safetensors".

Thanks for the hint, I'll check that on my next attempt :)


Venkman42 commented on May 18, 2024

> Looks like you still don't have the same tensors in all of your models. You can quickly check the names of your layers on the model card by clicking on the arrow next to "safetensors".

I got it to work with another Phi-2 model. I think you were right: my merge models used a different tensor name prefix (`transformer.`...) than Microsoft's phi-2 (`model.`...).

The merge succeeded; now I'm trying to create GGUF files using your notebook.
But now I get the following error:
```
Filtering content: 100% (3/3), 5.17 GiB | 59.74 MiB/s, done.
Loading model file Phiter/model-00001-of-00003.safetensors
Loading model file Phiter/model-00001-of-00003.safetensors
Loading model file Phiter/model-00002-of-00003.safetensors
Loading model file Phiter/model-00003-of-00003.safetensors
Traceback (most recent call last):
  File "/content/llama.cpp/convert.py", line 1483, in <module>
    main()
  File "/content/llama.cpp/convert.py", line 1419, in main
    model_plus = load_some_model(args.model)
  File "/content/llama.cpp/convert.py", line 1280, in load_some_model
    model_plus = merge_multifile_models(models_plus)
  File "/content/llama.cpp/convert.py", line 730, in merge_multifile_models
    model = merge_sharded([mp.model for mp in models_plus])
  File "/content/llama.cpp/convert.py", line 709, in merge_sharded
    return {name: convert(name) for name in names}
  File "/content/llama.cpp/convert.py", line 709, in <dictcomp>
    return {name: convert(name) for name in names}
  File "/content/llama.cpp/convert.py", line 684, in convert
    lazy_tensors: list[LazyTensor] = [model[name] for model in models]
  File "/content/llama.cpp/convert.py", line 684, in <listcomp>
    lazy_tensors: list[LazyTensor] = [model[name] for model in models]
KeyError: 'transformer.h.17.mlp.fc1.bias'
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: Tesla T4, compute capability 7.5, VMM: yes
main: build = 2270 (4804215c)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: quantizing 'Phiter/phiter.fp16.bin' to 'Phiter/phiter.Q4_K_M.gguf' as Q4_K_M
llama_model_quantize: failed to quantize: failed to open Phiter/phiter.fp16.bin: No such file or directory
main: failed to quantize model from 'Phiter/phiter.fp16.bin'
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
  Device 0: Tesla T4, compute capability 7.5, VMM: yes
main: build = 2270 (4804215c)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: quantizing 'Phiter/phiter.fp16.bin' to 'Phiter/phiter.Q5_K_M.gguf' as Q5_K_M
llama_model_quantize: failed to quantize: failed to open Phiter/phiter.fp16.bin: No such file or directory
main: failed to quantize model from 'Phiter/phiter.fp16.bin'
```

Do you see what the problem is here?
The fp16.bin couldn't be created, but why?

I used the Colab from this blog article of yours:
https://mlabonne.github.io/blog/posts/Quantize_Llama_2_models_using_ggml.html
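For what it's worth, the `KeyError` in that traceback comes from llama.cpp's shard-merging step: `merge_sharded` looks every expected tensor name up in every shard, so a name that exists under one naming scheme but not the other fails exactly like this. A toy reproduction of the failure mode (the shard contents are made up):

```python
# Toy reproduction of convert.py's merge_sharded failure mode: the expected
# tensor list contains a name that the loaded shards don't actually hold.
shards = [
    {"transformer.h.17.mlp.fc1.weight": [0.1, 0.2]},  # shard missing the .bias tensor
]
expected_names = ["transformer.h.17.mlp.fc1.weight", "transformer.h.17.mlp.fc1.bias"]

try:
    merged = {name: [shard[name] for shard in shards] for name in expected_names}
except KeyError as err:
    print("missing tensor:", err)  # conversion aborts here, so no fp16 file is written
```

That would also explain the second half of the log: since convert.py aborted, `phiter.fp16.bin` was never written, and both `quantize` runs then fail to open it.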


mlabonne commented on May 18, 2024

Cool! You should be able to make GGUF versions of the model. Once again, maybe a problem with the old architecture? I can't really help you with that, unfortunately.


Venkman42 commented on May 18, 2024

Oh okay, I guess I'll try my luck by asking the people who made the GGUFs for DolphinPhi and PhiOrange how they did it.

Thank you a lot for helping me troubleshoot :) Information for these kinds of tasks is still hard to find, so I really appreciate you answering my rookie questions :)


Venkman42 commented on May 18, 2024

Thank you :) I'm curious how it will score.
Btw, how long do these evals usually take for smaller models? And what hardware do you run them on?


mlabonne commented on May 18, 2024

Congrats, new SOTA among phi-2 fine-tunes: https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard 🎉

I just use LLM AutoEval. It took 2 hours and 18 minutes to evaluate Phiter on an RTX 3090.


Venkman42 commented on May 18, 2024

Damn, I had a feeling it was good, but I didn't think it would beat both base models on all benchmarks and even outperform the phixtral models.

Oh okay, so it doesn't take that much compute. Maybe I'll try running it myself sometime.
Thanks again for taking the time 😊

I'm gonna close this issue now 😁


Venkman42 commented on May 18, 2024

I credited you on my model card for helping me troubleshoot, I hope that's okay :)

