
Comments (14)

matteoserva commented on July 17, 2024

There is now a PR that fixes the soft capping problem: #8197
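
For reference, the "soft capping" in question is just a tanh squashing of the logits. A minimal sketch, assuming the cap values published in the Gemma 2 config (50.0 for attention logits, 30.0 for final logits):

    import numpy as np

    # Gemma 2 soft-caps both attention logits and final logits:
    # logits = cap * tanh(logits / cap), which keeps values in (-cap, cap).
    def soft_cap(logits: np.ndarray, cap: float) -> np.ndarray:
        return cap * np.tanh(logits / cap)

    scores = np.array([-120.0, -10.0, 0.0, 10.0, 120.0])
    print(soft_cap(scores, 50.0))  # large magnitudes saturate near +/-50

Without the capping, logits can grow far outside the range the weights were trained for, which would be consistent with the degradation reported here.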

Another issue that might be relevant is that Gemma 2 uses sliding-window attention instead of global attention on every other layer. That could be missing, which means the usable context is currently limited to 4096 tokens. See the last comment in that issue: #3377
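
A rough sketch of what the interleaved pattern means, under the assumption (from the Gemma 2 report) that layers alternate between a 4096-token sliding window and global attention; this is an illustration, not llama.cpp's actual masking code:

    import numpy as np

    def gemma2_mask(seq_len: int, layer_idx: int, window: int = 4096) -> np.ndarray:
        """Causal mask; alternating layers additionally restrict each query
        to the previous `window` key positions (sliding-window attention)."""
        i = np.arange(seq_len)[:, None]  # query positions
        j = np.arange(seq_len)[None, :]  # key positions
        mask = j <= i                    # causal
        if layer_idx % 2 == 0:           # assumed: even layers use SWA
            mask &= (i - j) < window
        return mask

If a runtime applies the 4096 window to every layer (or to none), long-context behavior breaks even when short prompts look fine.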

This might also solve the Phi3 issue: #7709

> P.S. Paste your full llama.cpp command to replicate the issue. I'm curious.

@0wwafa the simplest command you can run is the following:
    ./llama-cli -m gemma-2-27b-it-Q6_K.gguf -p "<bos><start_of_turn>user\nRepeat the question and then answer it: Matteo has 20 apples, he buys 20 oranges. Then he discards half of his fruits equally. Then he discards a quarter of his fruits equally between apples and oranges. How many apples remain?<end_of_turn>\n<start_of_turn>model\n"

EDIT

After #8197, the output improved a lot.
The local model still explodes on simple prompts that are no issue for the AI Studio version.

The simplest prompt that completely breaks the local model is:

    Completa la frase: tanto va la gatta al lardo che...

(In English: "Complete the sentence: the cat goes so often to the lard that...")

The AI Studio model answers:

    tanto va la gatta al lardo che ci lascia lo zampino.

(English: "the cat goes so often to the lard that she leaves her paw in it", the proverb's standard ending.) This is the only correct completion.

The local model starts rambling about fat pigs and then comments on its own answer in Spanish:

    <bos><start_of_turn>user
    Completa la frase: tanto va la gatta al lardo che...<end_of_turn>
    <start_of_turn>model
    ... **se la scrofa la ingrassa**.

    Esta es una frase hecha italiana que significa que si alguien insiste mucho en algo, al final lo conseguirá, aunque sea por casualidad o por la ayuda de alguien más.

(The model's ending means "...if the sow fattens it", and its Spanish commentary translates to: "This is an Italian set phrase meaning that if someone insists a lot on something, they will eventually get it, even if by chance or with someone else's help.")

The model was requantized from the HF repo version after updating both the HF repo and transformers, and after merging the soft capping PR. Quant used: Q8_0.


foldl commented on July 17, 2024

I have implemented both "soft capping" and the interleaved SWA/full attention in chatllm.cpp, and Q8_0-quantized Gemma-2 can solve this fruit problem with greedy sampling (while Q4_1 fails):

    ________          __  __    __    __  ___ 
   / ____/ /_  ____ _/ /_/ /   / /   /  |/  /_________  ____
  / /   / __ \/ __ `/ __/ /   / /   / /|_/ // ___/ __ \/ __ \
 / /___/ / / / /_/ / /_/ /___/ /___/ /  / // /__/ /_/ / /_/ /
 \____/_/ /_/\__,_/\__/_____/_____/_/  /_(_)___/ .___/ .___/
You are served by Gemma-2,                    /_/   /_/
with 27227128320 (27.2B) parameters.

You  > Matteo has 20 apples, he buys 20 oranges. Then he discards half of his fruits equally. Then he discards a quarter of his fruits equally between apples and oranges. How many apples remain?
A.I. > Here's how to solve the problem step-by-step:

1. **Total Fruits:** Matteo starts with 20 apples + 20 oranges = 40 fruits.

2. **First Discard:** He discards half, which is 40 fruits / 2 = 20 fruits. This leaves him with 40 fruits - 20 fruits = 20 fruits.

3. **Fruits After First Discard:** He now has 10 apples and 10 oranges.

4. **Second Discard:** He discards a quarter of his fruits, which is 20 fruits / 4 = 5 fruits.

5. **Final Apple Count:** Since he discards 5 fruits equally between apples and oranges, he loses 5 fruits / 2 = 2.5 apples. Since you can't have half an apple, we'll round down. This leaves him with 10 apples - 2 apples = 8 apples.


**Answer:** Matteo has 8 apples remaining. 


matteoserva commented on July 17, 2024

> Soft capping might be missing, see huggingface/transformers#31698.

They talk about it in the paper: they say that soft capping was temporarily disabled to make the model compatible with existing flash attention implementations, and that the performance hit is negligible.

Apparently it was not negligible.
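
A quick way to see why the hit need not be negligible: once logits grow past the cap, the capped and uncapped softmax distributions diverge sharply. Toy numbers, purely illustrative:

    import numpy as np

    def softmax(x: np.ndarray) -> np.ndarray:
        e = np.exp(x - x.max())
        return e / e.sum()

    logits = np.array([80.0, 60.0, 0.0])    # large pre-softmax scores
    capped = 50.0 * np.tanh(logits / 50.0)  # Gemma 2 attention cap

    print(softmax(logits))  # ~[1.0, 2e-9, 2e-35]: winner takes all
    print(softmax(capped))  # ~[0.988, 0.012, 1e-20]: visibly flatter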


foldl commented on July 17, 2024

All you need is to go deeper.

I would like to report that a self-merged (or self-stacked) Gemma-2 9B (Q8_0) can solve this math problem, too.

Here, layers 8/9/16/17/24/25/32/33 are repeated (resulting in a 10.8B model):

    main.exe --temp 0 -m gemma-2-9b.bin --layer_spec "0:10,8:18,16:26,24:34,32:42" -i
    ________          __  __    __    __  ___ 
   / ____/ /_  ____ _/ /_/ /   / /   /  |/  /_________  ____
  / /   / __ \/ __ `/ __/ /   / /   / /|_/ // ___/ __ \/ __ \
 / /___/ / / / /_/ / /_/ /___/ /___/ /  / // /__/ /_/ / /_/ /
 \____/_/ /_/\__,_/\__/_____/_____/_/  /_(_)___/ .___/ .___/
You are served by Gemma-2,                    /_/   /_/
with 10827267584 (10.8B) parameters.

You  > Matteo has 20 apples, he buys 20 oranges. Then he discards half of his fruits equally. Then he discards a quarter of his fruits equally between apples and oranges. How many apples remain?
A.I. > Here's how to solve this problem step-by-step:

1. **Total Fruit:** Matteo starts with 20 apples + 20 oranges = 40 fruits.

2. **Discarding Half:** He discards half his fruit, so he has 40 fruits / 2 = 20 fruits left.

3. **Apples and Oranges Left:**  He now has 20 fruits, which is an equal mix of apples and oranges (since he discarded them equally). So he has 20 fruits / 2 = 10 apples and 10 oranges left.

4. **Discarding a Quarter:** He discards a quarter of his remaining fruit equally between apples and oranges. 
   *  For apples: 10 apples / 4 = 2.5 apples. Since he can't have half an apple, we'll say he discards 2 apples.
   *  For oranges: 10 oranges / 4 = 2.5 oranges.  We'll say he discards 2 oranges.

5. **Final Count:**  Matteo has 10 apples - 2 apples = 8 apples left.



**Answer:** Matteo has 8 apples remaining. 

An even deeper one (--layer_spec "0:12,6:18,12:24,18:30,24:36,30:42", 15.2B) can solve this, too.
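
For anyone curious how those specs map to layers: my reading is that each start:end range selects layers [start, end) and the ranges are concatenated, so overlapping ranges duplicate layers. A tiny illustration (this parser is hypothetical, not chatllm.cpp's actual code):

    def expand_layer_spec(spec: str) -> list[int]:
        """Expand "a:b,c:d,..." into a concatenated list of layer indices;
        overlapping ranges repeat layers, deepening the stacked model."""
        layers: list[int] = []
        for part in spec.split(","):
            start, end = map(int, part.split(":"))
            layers.extend(range(start, end))
        return layers

    layers = expand_layer_spec("0:10,8:18,16:26,24:34,32:42")
    print(len(layers))  # 50 layers instead of the original 42
    print(sorted({x for x in layers if layers.count(x) > 1}))
    # [8, 9, 16, 17, 24, 25, 32, 33] -- the repeated layers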


EliEron commented on July 17, 2024

I'm having the exact same experience. I've been testing Gemma-2 for data extraction: the 9B model gets the answers almost perfect, whereas the 27B model only understands that it has to output JSON and gets literally everything else (including the JSON key names) wrong. It's a night-and-day difference between them.

I've tested the full model using Nvidia's NIM service (you get 1,000 requests for signing up), and the 27B model has zero issues with any of the tasks there.

I am running a Q8 quant, so the quality loss should be minimal. I am therefore very confident that something is wrong with the quantized 27B model.


Rotatingxenomorph commented on July 17, 2024

Seems like Google broke something:

https://huggingface.co/google/gemma-2-27b-it/discussions/10


0wwafa commented on July 17, 2024

@matteoserva check my quantizations: https://huggingface.co/RobertSinclair
If you want, we can discuss this further (Facebook/WhatsApp/Discord). I speak Italian too.


0wwafa commented on July 17, 2024

AI Studio? Are you confusing Gemma with Gemini?


matteoserva commented on July 17, 2024

Gemma 2 has been available in AI Studio since yesterday. I live in Italy; I don't know if it's available everywhere.


matteoserva commented on July 17, 2024

I can also confirm that the 9B is less affected by this.

I tried the same prompt with it. It outputs the wrong numeric solution, but it was able to repeat the question word for word as requested.

The prompt was: Repeat the question and then answer it: [my question]


MoonRide303 commented on July 17, 2024

Soft capping might be missing, see huggingface/transformers#31698.


0wwafa commented on July 17, 2024

> Gemma 2 has been available in AI Studio since yesterday. I live in Italy; I don't know if it's available everywhere.

I didn't notice! I will try it.

P.S.
Paste your full llama.cpp command to replicate the issue. I'm curious.


matteoserva commented on July 17, 2024

> I have implemented both "soft capping" and the interleaved SWA/full attention in chatllm.cpp, and Q8_0-quantized Gemma-2 can solve this fruit problem with greedy sampling (while Q4_1 fails).

I tested your implementation at Q8_0 with my benchmarks, and the output exactly matches Google's reference implementation (to clarify: I mean the Gemma 2 model on AI Studio).
My congratulations! You did a really good job.


matteoserva commented on July 17, 2024

Closing this and continuing in #8240.

