I'd like to automate a few tests to make sure a model works, with llama.cpp as a baseline.
Currently I can't match llama.cpp's answer... llama-cpp-rs answers incorrectly,
whether I use the stock llama-cpp-rs example or my modified version (see below).
--
As a reference, Oobabooga with the same model gets the correct answer
(not exactly the same wording, but logically correct, like llama.cpp).
--
I presume this is down to llama-cpp-rs not yet having the same sampling chain?
(We don't seem to have CFG, or maybe I'm using sample_token_greedy / the sample stages / something else the wrong way.)
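One difference I can see (going by llama.cpp's common/sampling.cpp, so treat this as a guess): with mirostat = 0 and temp > 0, llama.cpp applies temperature and then draws the next token from the resulting distribution with the seeded RNG, while my loop below always takes the argmax via sample_token_greedy. A standalone toy of the difference (not the crate API - the rand dependency and the probabilities are made up purely for illustration):

use rand::distributions::{Distribution, WeightedIndex};
use rand::rngs::StdRng;
use rand::SeedableRng;

fn main() {
    // toy post-temperature probabilities for three candidate tokens
    let probs = vec![0.90_f32, 0.07, 0.03];

    // greedy: always take the argmax (what sample_token_greedy does)
    let mut greedy = 0;
    for (i, p) in probs.iter().enumerate() {
        if *p > probs[greedy] {
            greedy = i;
        }
    }

    // distribution sampling: draw a token index according to the probabilities
    // (what llama.cpp's main appears to do when mirostat = 0 and temp > 0)
    let mut rng = StdRng::seed_from_u64(1234);
    let dist = WeightedIndex::new(&probs).unwrap();
    let sampled = dist.sample(&mut rng);

    println!("greedy = {greedy}, sampled = {sampled}");
}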
--
That said...
Question...
Blueberries cost more than strawberries. Blueberries cost less than raspberries. Raspberries cost more than strawberries and blueberries. If the first two statements are true, the third statement is?
Llama-cpp-rs answer... ...close but incorrect
Let's compare the cost of each type of berry:
1. Blueberries cost more than strawberries.
2. Blueberries cost less than raspberries.
From the first statement, we know that blueberries are more expensive than strawberries. From the second statement, we know that blueberries are cheaper than raspberries.
To determine if the third statement, "Raspberries cost more than strawberries and blueberries," is true, we need to compare the cost of raspberries to both strawberries and blueberries.
Since blueberries are cheaper than raspberries, but more expensive than strawberries, and we don't have enough information to compare the cost of raspberries to strawberries directly, we cannot definitively say whether the third statement is true or false based on the given information.
---> Therefore, the answer is: Insufficient information to determine.
Llama.cpp answer... ...correct
Let's compare the prices of each type of berry:
1. Blueberries cost more than strawberries.
2. Blueberries cost less than raspberries.
To determine if the third statement "Raspberries cost more than strawberries and blueberries" is true, we need to compare the price of raspberries with both strawberries and blueberries:
1. Raspberries cost more than strawberries: This is not stated directly in the given information, but it can be inferred from statement 1 (blueberries cost less than raspberries, and blueberries cost more than strawberries).
2. Raspberries cost more than blueberries: This is stated directly in the second statement.
Therefore, based on the given information,
---> the third statement "Raspberries cost more than strawberries and blueberries" is true. [end of text]
The model
TheBloke/Mistral-7B-Instruct-v0.2-GGUF --> mistral-7b-instruct-v0.2.Q4_K_S.gguf
Default Llama.cpp sample order...
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
Sample settings...
repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.100
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
The code... (please forgive my Rust - I've only been Rusting for two months...)
let model = init_model()?;
let backend = LlamaBackend::init()?;
let ctx_params = init_context()?;
run_prompt("Blueberries cost more than strawberries. Blueberries cost less than raspberries. Raspberries cost more than strawberries and blueberries. If the first two statements are true, the third statement is?", &model, &backend, &ctx_params)?;
Those functions are implemented as follows...
//! This is a translation of simple.cpp in llama.cpp using llama-cpp-2 -- with additional sample stages
#![allow(
    clippy::cast_possible_wrap,
    clippy::cast_possible_truncation,
    clippy::cast_precision_loss,
    clippy::cast_sign_loss
)]
use anyhow::{/* anyhow,*/ bail, Context, Result};
use llama_cpp_2::context::params::LlamaContextParams;
use llama_cpp_2::ggml_time_us;
use llama_cpp_2::llama_backend::LlamaBackend;
use llama_cpp_2::llama_batch::LlamaBatch;
use llama_cpp_2::model::params::LlamaModelParams;
use llama_cpp_2::model::AddBos;
use llama_cpp_2::model::LlamaModel;
use llama_cpp_2::token::data_array::LlamaTokenDataArray;
use llama_cpp_2::token::LlamaToken;
use std::io::Write;
use std::num::NonZeroU32;
use std::time::Duration;
pub fn init_model() -> Result<LlamaModel> {
    let backend = LlamaBackend::init()?;

    let model_params = LlamaModelParams::default()
        .with_n_gpu_layers(33)
        .with_use_mlock(false);
    // .with_use_mlock(true);

    let model_path = std::env::current_exe()
        .expect("Failed to get current executable path")
        .parent()
        .expect("Failed to get executable directory")
        .read_dir()
        .expect("Failed to read directory contents")
        .filter_map(|entry| entry.ok())
        .find(|entry| entry.path().extension().and_then(std::ffi::OsStr::to_str) == Some("gguf"))
        .expect("No .gguf file found in the current directory")
        .path();

    let model = LlamaModel::load_from_file(&backend, &model_path, &model_params)
        .with_context(|| "unable to load model")?;

    Ok(model)
}
pub fn init_context() -> Result<LlamaContextParams> {
    let ctx_params = LlamaContextParams::default()
        .with_n_ctx(NonZeroU32::new(2048))
        .with_seed(1234);

    Ok(ctx_params)
}
pub fn run_prompt(prompt: &str, model: &LlamaModel, backend: &LlamaBackend, ctx_params: &LlamaContextParams) -> Result<()> {
    let n_len = 512;

    let mut ctx = model
        .new_context(backend, ctx_params.clone())
        .with_context(|| "unable to create the llama_context")?;

    let tokens_list = model
        .str_to_token(prompt, AddBos::Always)
        .with_context(|| format!("failed to tokenize {prompt}"))?;

    let n_cxt = ctx.n_ctx() as i32;
    let n_kv_req = tokens_list.len() as i32 + (n_len - tokens_list.len() as i32);

    eprintln!("n_len = {n_len}, n_ctx = {n_cxt}, k_kv_req = {n_kv_req}");

    if n_kv_req > n_cxt {
        bail!(
            "n_kv_req > n_ctx, the required kv cache size is not big enough
either reduce n_len or increase n_ctx"
        )
    }

    if tokens_list.len() >= usize::try_from(n_len)? {
        bail!("the prompt is too long, it has more tokens than n_len")
    }

    // print the prompt token-by-token
    eprintln!();
    for token in &tokens_list {
        eprint!("{}", model.token_to_str(*token)?);
    }
    std::io::stderr().flush()?;

    // create a llama_batch with size 512
    // we use this object to submit token data for decoding
    let mut batch = LlamaBatch::new(512, 1);

    let last_index: i32 = (tokens_list.len() - 1) as i32;
    for (i, token) in (0_i32..).zip(tokens_list.into_iter()) {
        // llama_decode will output logits only for the last token of the prompt
        let is_last = i == last_index;
        batch.add(token, i, &[0], is_last)?;
    }

    ctx.decode(&mut batch)
        .with_context(|| "llama_decode() failed")?;

    // main loop
    let mut n_cur = batch.n_tokens();
    let mut n_decode = 0;

    let t_main_start = ggml_time_us();

    while n_cur <= n_len {
        let candidates = ctx.candidates_ith(batch.n_tokens() - 1);
        let mut candidates_p = LlamaTokenDataArray::from_iter(candidates, false);

        // Llama.cpp default sample order...
        // CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
        // --------------------------------------------------------------------------------
        // Sample settings...
        // repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
        // top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.100
        // mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000

        // CFG: seems we don't have it ?? (only in llama.cpp)

        // Penalties
        let history = vec![
            LlamaToken::new(2),
            LlamaToken::new(1),
            LlamaToken::new(0),
        ];
        ctx.sample_repetition_penalty(&mut candidates_p, &history, 64, 1.1, 0.0, 0.0);

        ctx.sample_top_k(&mut candidates_p, 40, 1);
        ctx.sample_tail_free(&mut candidates_p, 1.0, 1);
        ctx.sample_typical(&mut candidates_p, 1.0, 1);
        ctx.sample_top_p(&mut candidates_p, 0.950, 1);
        ctx.sample_min_p(&mut candidates_p, 0.05, 1);
        ctx.sample_temp(&mut candidates_p, 0.1);

        let new_token_id = ctx.sample_token_greedy(candidates_p);

        if new_token_id == model.token_eos() {
            eprintln!();
            break;
        }

        print!("{}", model.token_to_str(new_token_id)?);
        std::io::stdout().flush()?;

        batch.clear();
        batch.add(new_token_id, n_cur, &[0], true)?;

        n_cur += 1;

        ctx.decode(&mut batch).with_context(|| "failed to eval")?;

        n_decode += 1;
    }

    eprintln!("\n");

    let t_main_end = ggml_time_us();
    let duration = Duration::from_micros((t_main_end - t_main_start) as u64);

    eprintln!(
        "decoded {} tokens in {:.2} s, speed {:.2} t/s\n",
        n_decode,
        duration.as_secs_f32(),
        n_decode as f32 / duration.as_secs_f32()
    );

    println!("{}", ctx.timings());

    Ok(())
}
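One thing I'm suspicious of in the loop above: llama.cpp applies repeat_penalty over the last repeat_last_n = 64 tokens of the real context, while I pass sample_repetition_penalty a fixed placeholder history (tokens 2 / 1 / 0). Keeping a rolling history of the actual tokens (prompt plus everything generated so far) would look roughly like this - a sketch of the change only, using the same crate calls as above; I'm not sure whether the wrapper windows the slice itself, so I slice to the last 64 explicitly:

// clone the prompt tokens before the batch loop consumes tokens_list
let mut history: Vec<LlamaToken> = tokens_list.clone();

while n_cur <= n_len {
    let candidates = ctx.candidates_ith(batch.n_tokens() - 1);
    let mut candidates_p = LlamaTokenDataArray::from_iter(candidates, false);

    // Penalties over the last repeat_last_n = 64 tokens actually seen
    let start = history.len().saturating_sub(64);
    ctx.sample_repetition_penalty(&mut candidates_p, &history[start..], 64, 1.1, 0.0, 0.0);

    ctx.sample_top_k(&mut candidates_p, 40, 1);
    ctx.sample_tail_free(&mut candidates_p, 1.0, 1);
    ctx.sample_typical(&mut candidates_p, 1.0, 1);
    ctx.sample_top_p(&mut candidates_p, 0.950, 1);
    ctx.sample_min_p(&mut candidates_p, 0.05, 1);
    ctx.sample_temp(&mut candidates_p, 0.1);

    let new_token_id = ctx.sample_token_greedy(candidates_p);

    if new_token_id == model.token_eos() {
        break;
    }
    history.push(new_token_id);

    // ...print the token, batch.clear(), batch.add(), ctx.decode() - same as above...
    n_cur += 1;
}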
Llama.cpp full log
./main -p "Blueberries cost more than strawberries. Blueberries cost less than raspberries. Raspberries cost more than strawberries and blueberries. If the first two statements are true, the third statement is?" -m mistral-7b-instruct-v0.2.Q4_K_S.gguf -n 512 -ngl 33 --threads 8 --temp 0.1
Log start
main: build = 2409 (306d34be)
main: built with Apple clang version 15.0.0 (clang-1500.1.0.2.5) for arm64-apple-darwin23.3.0
main: seed = 1710284006
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /Users/odd/Documents/odd_LLM_rust/llama-cpp-rs-mod-odd/target/release/mistral-7b-instruct-v0.2.Q4_K_S.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = mistralai_mistral-7b-instruct-v0.2
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 14
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_K: 217 tensors
llama_model_loader: - type q5_K: 8 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attm = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_K - Small
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 3.86 GiB (4.57 BPW)
llm_load_print_meta: general.name = mistralai_mistral-7b-instruct-v0.2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.22 MiB
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 3877.58 MiB, ( 3877.64 / 49152.00)
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: Metal buffer size = 3877.57 MiB
llm_load_tensors: CPU buffer size = 70.31 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Max
ggml_metal_init: picking default device: Apple M1 Max
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/odd/Documents/odd_LLM_rust/llama.cpp/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M1 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction support = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 51539.61 MB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 64.00 MiB, ( 3943.45 / 49152.00)
llama_kv_cache_init: Metal KV buffer size = 64.00 MiB
llama_new_context_with_model: KV self size = 64.00 MiB, K (f16): 32.00 MiB, V (f16): 32.00 MiB
llama_new_context_with_model: CPU input buffer size = 10.01 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 73.02 MiB, ( 4016.47 / 49152.00)
llama_new_context_with_model: Metal compute buffer size = 73.00 MiB
llama_new_context_with_model: CPU compute buffer size = 8.00 MiB
llama_new_context_with_model: graph splits (measure): 2
system_info: n_threads = 8 / 10 | AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 |
sampling:
repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.100
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 512, n_batch = 512, n_predict = 512, n_keep = 1
Blueberries cost more than strawberries. Blueberries cost less than raspberries. Raspberries cost more than strawberries and blueberries. If the first two statements are true, the third statement is?
Let's compare the prices of each type of berry:
- Blueberries cost more than strawberries.
- Blueberries cost less than raspberries.
To determine if the third statement "Raspberries cost more than strawberries and blueberries" is true, we need to compare the price of raspberries with both strawberries and blueberries:
- Raspberries cost more than strawberries: This is not stated directly in the given information, but it can be inferred from statement 1 (blueberries cost less than raspberries, and blueberries cost more than strawberries).
- Raspberries cost more than blueberries: This is stated directly in the second statement.
Therefore, based on the given information, the third statement "Raspberries cost more than strawberries and blueberries" is true. [end of text]
llama_print_timings: load time = 252.89 ms
llama_print_timings: sample time = 15.60 ms / 184 runs ( 0.08 ms per token, 11797.14 tokens per second)
llama_print_timings: prompt eval time = 164.76 ms / 43 tokens ( 3.83 ms per token, 260.98 tokens per second)
llama_print_timings: eval time = 3579.66 ms / 183 runs ( 19.56 ms per token, 51.12 tokens per second)
llama_print_timings: total time = 3782.13 ms / 226 tokens
ggml_metal_free: deallocating
Log end
LLAMA-CPP-RS - original example - full log
./llama-cpp-rs --n-len 512 "Blueberries cost more than strawberries. Blueberries cost less than raspberries. Raspberries cost more than strawberries and blueberries. If the first two statements are true, the third statement is?" local mistral-7b-instruct-v0.2.Q4_K_S.gguf
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /Users/odd/Documents/odd_LLM_rust/llama-cpp-rs-odd/target/release/mistral-7b-instruct-v0.2.Q4_K_S.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = mistralai_mistral-7b-instruct-v0.2
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 14
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_K: 217 tensors
llama_model_loader: - type q5_K: 8 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_K - Small
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 3.86 GiB (4.57 BPW)
llm_load_print_meta: general.name = mistralai_mistral-7b-instruct-v0.2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.22 MiB
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 3877.58 MiB, ( 3877.64 / 49152.00)
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: Metal buffer size = 3877.57 MiB
llm_load_tensors: CPU buffer size = 70.31 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Max
ggml_metal_init: picking default device: Apple M1 Max
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: error: could not use bundle path to find ggml-metal.metal, falling back to trying cwd
ggml_metal_init: loading 'ggml-metal.metal'
ggml_metal_init: GPU name: Apple M1 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction support = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 51539.61 MB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 256.00 MiB, ( 4135.45 / 49152.00)
llama_kv_cache_init: Metal KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CPU input buffer size = 13.02 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 164.02 MiB, ( 4299.47 / 49152.00)
llama_new_context_with_model: Metal compute buffer size = 164.00 MiB
llama_new_context_with_model: CPU compute buffer size = 8.00 MiB
llama_new_context_with_model: graph splits (measure): 2
n_len = 512, n_ctx = 2048, k_kv_req = 512
Blueberries cost more than strawberries. Blueberries cost less than raspberries. Raspberries cost more than strawberries and blueberries. If the first two statements are true, the third statement is?
Let's compare the cost of each type of berry:
- Blueberries cost more than strawberries.
- Blueberries cost less than raspberries.
From the first statement, we know that blueberries are more expensive than strawberries. From the second statement, we know that blueberries are cheaper than raspberries.
To determine if the third statement, "Raspberries cost more than strawberries and blueberries," is true, we need to compare the cost of raspberries to both strawberries and blueberries.
Since blueberries are cheaper than raspberries, but more expensive than strawberries, and we don't have enough information to compare the cost of raspberries to strawberries directly, we cannot definitively say whether the third statement is true or false based on the given information.
decoded 177 tokens in 3.46 s, speed 51.15 t/s
load time = 350.73 ms
sample time = 20.21 ms / 178 runs (0.11 ms per token, 8805.34 tokens per second)
prompt eval time = 291.05 ms / 43 tokens (6.77 ms per token, 147.74 tokens per second)
eval time = 3437.89 ms / 177 runs (19.42 ms per token, 51.49 tokens per second)
total time = 3810.63 ms
ggml_metal_free: deallocating
LLAMA-CPP-RS - modified example - full log
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /Users/odd/Documents/odd_LLM_rust/llama-cpp-rs-mod-odd/target/release/mistral-7b-instruct-v0.2.Q4_K_S.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = mistralai_mistral-7b-instruct-v0.2
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 11: general.file_type u32 = 14
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_K: 217 tensors
llama_model_loader: - type q5_K: 8 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_K - Small
llm_load_print_meta: model params = 7.24 B
llm_load_print_meta: model size = 3.86 GiB (4.57 BPW)
llm_load_print_meta: general.name = mistralai_mistral-7b-instruct-v0.2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.22 MiB
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 3877.58 MiB, ( 3877.64 / 49152.00)
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: Metal buffer size = 3877.57 MiB
llm_load_tensors: CPU buffer size = 70.31 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Max
ggml_metal_init: picking default device: Apple M1 Max
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: error: could not use bundle path to find ggml-metal.metal, falling back to trying cwd
ggml_metal_init: loading 'ggml-metal.metal'
ggml_metal_init: GPU name: Apple M1 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction support = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 51539.61 MB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 256.00 MiB, ( 4135.45 / 49152.00)
llama_kv_cache_init: Metal KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CPU input buffer size = 13.02 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 164.02 MiB, ( 4299.47 / 49152.00)
llama_new_context_with_model: Metal compute buffer size = 164.00 MiB
llama_new_context_with_model: CPU compute buffer size = 8.00 MiB
llama_new_context_with_model: graph splits (measure): 2
n_len = 512, n_ctx = 2048, k_kv_req = 512
Blueberries cost more than strawberries. Blueberries cost less than raspberries. Raspberries cost more than strawberries and blueberries. If the first two statements are true, the third statement is?
Let's compare the cost of each type of berry:
- Blueberries cost more than strawberries.
- Blueberries cost less than raspberries.
From the first statement, we know that blueberries are more expensive than strawberries. From the second statement, we know that blueberries are cheaper than raspberries.
To determine if the third statement, "Raspberries cost more than strawberries and blueberries," is true, we need to compare the cost of raspberries to both strawberries and blueberries.
Since blueberries are cheaper than raspberries, but more expensive than strawberries, and we don't have enough information to compare the cost of raspberries to strawberries directly, we cannot definitively say whether the third statement is true or false based on the given information.
Therefore, the answer is: Insufficient information to determine.
decoded 192 tokens in 3.74 s, speed 51.36 t/s
load time = 379.33 ms
sample time = 14.58 ms / 193 runs (0.08 ms per token, 13238.22 tokens per second)
prompt eval time = 293.73 ms / 43 tokens (6.83 ms per token, 146.39 tokens per second)
eval time = 3720.89 ms / 192 runs (19.38 ms per token, 51.60 tokens per second)
total time = 4116.71 ms
ggml_metal_free: deallocating