ollama's Introduction

Ollama

Get up and running with large language models.

macOS

Download

Windows preview

Download

Linux

curl -fsSL https://ollama.com/install.sh | sh

Manual install instructions

Docker

The official Ollama Docker image ollama/ollama is available on Docker Hub.
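
As a quick sketch (the volume name and port mapping below are the commonly documented defaults, shown here for illustration), you can start the container and then run a model inside it:

# start the server, persisting models in a named volume and exposing the API port
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# run a model inside the running container
docker exec -it ollama ollama run llama3.1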

Libraries

Quickstart

To run and chat with Llama 3.1:

ollama run llama3.1

Model library

Ollama supports a list of models available on ollama.com/library

Here are some example models that can be downloaded:

| Model              | Parameters | Size  | Download                     |
| ------------------ | ---------- | ----- | ---------------------------- |
| Llama 3.1          | 8B         | 4.7GB | ollama run llama3.1          |
| Llama 3.1          | 70B        | 40GB  | ollama run llama3.1:70b      |
| Llama 3.1          | 405B       | 231GB | ollama run llama3.1:405b     |
| Phi 3 Mini         | 3.8B       | 2.3GB | ollama run phi3              |
| Phi 3 Medium       | 14B        | 7.9GB | ollama run phi3:medium       |
| Gemma 2            | 2B         | 1.6GB | ollama run gemma2:2b         |
| Gemma 2            | 9B         | 5.5GB | ollama run gemma2            |
| Gemma 2            | 27B        | 16GB  | ollama run gemma2:27b        |
| Mistral            | 7B         | 4.1GB | ollama run mistral           |
| Moondream 2        | 1.4B       | 829MB | ollama run moondream         |
| Neural Chat        | 7B         | 4.1GB | ollama run neural-chat       |
| Starling           | 7B         | 4.1GB | ollama run starling-lm       |
| Code Llama         | 7B         | 3.8GB | ollama run codellama         |
| Llama 2 Uncensored | 7B         | 3.8GB | ollama run llama2-uncensored |
| LLaVA              | 7B         | 4.5GB | ollama run llava             |
| Solar              | 10.7B      | 6.1GB | ollama run solar             |

Note

You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

Customize a model

Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

  1. Create a file named Modelfile, with a FROM instruction with the local filepath to the model you want to import.

    FROM ./vicuna-33b.Q4_0.gguf
    
  2. Create the model in Ollama

    ollama create example -f Modelfile
    
  3. Run the model

    ollama run example
    

Import from PyTorch or Safetensors

See the guide on importing models for more information.

Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the llama3.1 model:

ollama pull llama3.1

Create a Modelfile:

FROM llama3.1

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""

Next, create and run the model:

ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.

For more examples, see the examples directory. For more information on working with a Modelfile, see the Modelfile documentation.

CLI Reference

Create a model

ollama create is used to create a model from a Modelfile.

ollama create mymodel -f ./Modelfile

Pull a model

ollama pull llama3.1

This command can also be used to update a local model. Only the diff will be pulled.

Remove a model

ollama rm llama3.1

Copy a model

ollama cp llama3.1 my-model

Multiline input

For multiline input, you can wrap text with """:

>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.

Multimodal models

ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
The image features a yellow smiley face, which is likely the central focus of the picture.

Pass the prompt as an argument

$ ollama run llama3.1 "Summarize this file: $(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.

Show model information

ollama show llama3.1

List models on your computer

ollama list

Start Ollama

ollama serve is used when you want to start ollama without running the desktop application.
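
For example (a sketch; OLLAMA_HOST is the same environment variable used in the service configuration shown in the issues below, and binding to 0.0.0.0 exposes the API on all interfaces):

# start the server in the foreground, optionally listening on all interfaces
OLLAMA_HOST=0.0.0.0 ollama serve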

Building

See the developer guide.

Running local builds
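
First, build the ollama binary (see the developer guide for prerequisites; the issues below use the same two commands):

go generate ./...
go build .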

Next, start the server:

./ollama serve

Finally, in a separate shell, run a model:

./ollama run llama3.1

REST API

Ollama has a REST API for running and managing models.

Generate a response

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt":"Why is the sky blue?"
}'
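
By default the response is streamed back as a series of JSON objects; assuming the stream parameter described in the API documentation, a single non-streamed response can be requested like this:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?",
  "stream": false
}'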

Chat with a model

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
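
For multimodal models such as llava, images can be attached to a chat message. A sketch, assuming the base64-encoded images field from the API documentation (the placeholder below stands in for real image data):

curl http://localhost:11434/api/chat -d '{
  "model": "llava",
  "messages": [
    { "role": "user", "content": "What is in this image?", "images": ["<base64-encoded image data>"] }
  ]
}'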

See the API documentation for all endpoints.

Community Integrations

Web & Desktop

Terminal

Database

Package managers

Libraries

Mobile

Extensions & Plugins

Supported backends

  • llama.cpp project founded by Georgi Gerganov.

ollama's People

Contributors

mxyng, jmorganca, dhiltgen, brucemacd, technovangelist, pdevine, mchiang0610, bmizerany, joshyan1, royjhan, remy415, danemadsen, jessegross, hoyyeva, markward0110, jamesbraza, deichbewohner, hmartinez82, eltociear, tjbck, slouffka, eliben, coolljt0725, tusharhero, brycereitano, dansreis, alwqx, xyproto, mraiser, sqs

ollama's Issues

MSBUILD : error MSB1009: Project file does not exist.

What is the issue?

go generate ./...
Already on 'minicpm-v2.5'
Your branch is up to date with 'origin/minicpm-v2.5'.
Submodule path '../llama.cpp': checked out 'd8974b8ea61e1268a4cad27f4f6e2cde3c5d1370'
Checking for MinGW...

CommandType Name Version Source


Application gcc.exe 0.0.0.0 C:\w64devkit\bin\gcc.exe
Application mingw32-make.exe 0.0.0.0 C:\w64devkit\bin\mingw32-make.exe
Building static library
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64_static -G MinGW Makefiles -DCMAKE_C_COMPILER=gcc.exe -DCMAKE_CXX_COMPILER=g++.exe -DBUILD_SHARED_LIBS=off -DLLAMA_NATIVE=off -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_F16C=off -DLLAMA_FMA=off
cmake version 3.29.4

CMake suite maintained and supported by Kitware (kitware.com/cmake).
-- The C compiler identification is GNU 13.2.0
-- The CXX compiler identification is GNU 13.2.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/w64devkit/bin/gcc.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/w64devkit/bin/g++.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.45.1.windows.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- x86 detected
-- Configuring done (1.7s)
-- Generating done (0.7s)
-- Build files have been written to: D:/projects/ollama/llm/build/windows/amd64_static
building with: cmake --build ../build/windows/amd64_static --config Release --target llama --target ggml
[ 16%] Building C object CMakeFiles/ggml.dir/ggml.c.obj
[ 16%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.obj
[ 33%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.obj
[ 50%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.obj
[ 50%] Building CXX object CMakeFiles/ggml.dir/sgemm.cpp.obj
[ 50%] Built target ggml
[ 66%] Building CXX object CMakeFiles/llama.dir/llama.cpp.obj
D:\projects\ollama\llm\llama.cpp\llama.cpp: In constructor 'llama_mmap::llama_mmap(llama_file*, size_t, bool)':
D:\projects\ollama\llm\llama.cpp\llama.cpp:1428:38: warning: cast between incompatible function types from 'FARPROC' {aka 'long long int ()()'} to 'BOOL ()(HANDLE, ULONG_PTR, PWIN32_MEMORY_RANGE_ENTRY, ULONG)' {aka 'int ()(void, long long unsigned int, _WIN32_MEMORY_RANGE_ENTRY*, long unsigned int)'} [-Wcast-function-type]
1428 | pPrefetchVirtualMemory = reinterpret_cast<decltype(pPrefetchVirtualMemory)> (GetProcAddress(hKernel32, "PrefetchVirtualMemory"));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
D:\projects\ollama\llm\llama.cpp\llama.cpp: In function 'float* llama_get_logits_ith(llama_context*, int32_t)':
D:\projects\ollama\llm\llama.cpp\llama.cpp:17331:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector::size_type' {aka 'long long unsigned int'} [-Wformat=]
17331 | throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
| ~~^ ~~~~~~~~~~~~~~~~~~~~~~
| | |
| long unsigned int std::vector::size_type {aka long long unsigned int}
| %llu
D:\projects\ollama\llm\llama.cpp\llama.cpp: In function 'float* llama_get_embeddings_ith(llama_context*, int32_t)':
D:\projects\ollama\llm\llama.cpp\llama.cpp:17376:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector::size_type' {aka 'long long unsigned int'} [-Wformat=]
17376 | throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
| ~~^ ~~~~~~~~~~~~~~~~~~~~~~
| | |
| long unsigned int std::vector::size_type {aka long long unsigned int}
| %llu
[ 83%] Building CXX object CMakeFiles/llama.dir/unicode.cpp.obj
[ 83%] Building CXX object CMakeFiles/llama.dir/unicode-data.cpp.obj
[100%] Linking CXX static library libllama.a
[100%] Built target llama
[100%] Built target ggml
Building LCD CPU
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64/cpu -DCMAKE_POSITION_INDEPENDENT_CODE=on -A x64 -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DBUILD_SHARED_LIBS=on -DLLAMA_NATIVE=off -DLLAMA_SERVER_VERBOSE=off -DCMAKE_BUILD_TYPE=Release
cmake version 3.29.4

CMake suite maintained and supported by Kitware (kitware.com/cmake).
-- Building for: Visual Studio 17 2022
-- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.22631.
-- The C compiler identification is MSVC 19.40.33811.0
-- The CXX compiler identification is MSVC 19.40.33811.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.40.33807/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/VC/Tools/MSVC/14.40.33807/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.45.1.windows.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- CMAKE_GENERATOR_PLATFORM: x64
-- x86 detected
-- Configuring done (3.8s)
-- Generating done (0.5s)
CMake Warning:
Manually-specified variables were not used by the project:

LLAMA_F16C

-- Build files have been written to: D:/projects/ollama/llm/build/windows/amd64/cpu
building with: cmake --build ../build/windows/amd64/cpu --config Release --target ollama_llama_server
MSBuild version 17.10.4+10fbfbf2e for .NET Framework
MSBUILD : error MSB1009: Project file does not exist.
Switch: ollama_llama_server.vcxproj
llm\generate\generate_windows.go:3: running "powershell": exit status 1

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

No response

Issue with compiling OpenBMB's ollama fork

What is the issue?

Building on Ubuntu 20.04.6 LTS (GNU/Linux 5.4.0-169-generic x86_64):

go generate ./... seems to work, but go build . gives:

# github.com/ollama/ollama/gpu
gpu/amd_linux.go:162:19: undefined: RocmComputeMin
gpu/amd_linux.go:232:20: undefined: IGPUMemLimit
gpu/amd_linux.go:253:19: undefined: rocmMinimumMemory

nvcc -V gives:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Jun__8_16:49:14_PDT_2022
Cuda compilation tools, release 11.7, V11.7.99
Build cuda_11.7.r11.7/compiler.31442593_0

Attaching the full terminal output in a txt file for:

  • go generate ./...
  • go build .

terminaloutput.txt

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

No response

Packaging with a Modelfile: error when running the model

What is the issue?

FROM ./MiniCPM-V-2_5/model/ggml-model-Q4_K_M.gguf
FROM ./MiniCPM-V-2_5/mmproj-model-f16.gguf

TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""

PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
PARAMETER num_keep 4
PARAMETER num_ctx 2048

The model file used is the CPU GGUF. Originally the error was that files on the external filesystem could not be accessed; after switching files, running the model fails with Error: llama runner process has terminated: exit status 0xc0000409

OS

No response

GPU

No response

CPU

Intel

Ollama version

No response

Degraded performance with ollama built from source on macOS

What is the issue?

I built ollama on macOS from source by following the instructions (step 3, "Rebuild ./ollama binary file"). The build works, but the model is not able to extract information correctly.

result on macOS

extract product information from this image, /Users/jack.wu/Desktop/breville1.png, provide product name, description, price and list price
Added image '/Users/jack.wu/Desktop/breville1.png'
Product: Nespresso Coffee Machine & Accessories
Description: The image shows a range of Nespresso coffee machines and accessories that include the machine, capsules, and possibly related products. These
items are likely displayed in an online retail environment.
Price: The price information is not visible in the image provided.
List Price: Similarly, there is no list price visible in the image.

do you see any number in the image?
Yes, there are numbers visible in the image. The number "4.2" is seen next to a star rating graphic, which indicates that this product has a 4.2-star rating
based on 240 ratings. Additionally, there is a price of "$699.95" displayed for one of the Nespresso products shown in the image.

is this a discount of the original price? what is the original price?
Yes, this appears to be a discount of the original price. The original list price is "$699.95," and there's a strike-through indicating that the

result on Hugging Face (see attached screenshot: Screenshot 2024-06-07 at 5 17 21 PM)

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

No response

Deploying 2.6 on macOS: 'cblas.h' file not found

What is the issue?

Deploying version 2.6 on a Mac: when running go generate ./..., even though OpenBLAS is already installed, it keeps failing with ollama/llm/llama.cpp/ggml-blas.cpp:12:13: fatal error: 'cblas.h' file not found. No matter how I install it, it doesn't work. How can I resolve this?

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

No response

Error: an unknown error was encountered while running the model

What is the issue?

I checked out the git branch as described in the docs and built it successfully, then created a model from a Modelfile. When chatting with it, the error Error: an unknown error was encountered while running the model is returned (screenshots attached).

Below is the error log from the serve side:
2024/05/27 23:22:34 routes.go:1028: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
time=2024-05-27T23:22:34.090+08:00 level=INFO source=images.go:729 msg="total blobs: 8"
time=2024-05-27T23:22:34.090+08:00 level=INFO source=images.go:736 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.

  • using env: export GIN_MODE=release
  • using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullModelHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateModelHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushModelHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyModelHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteModelHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowModelHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).ProcessHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListModelsHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-05-27T23:22:34.091+08:00 level=INFO source=routes.go:1074 msg="Listening on 127.0.0.1:11434 (version 0.0.0)"
time=2024-05-27T23:22:34.091+08:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama3477894997/runners
time=2024-05-27T23:22:34.166+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2]"
time=2024-05-27T23:22:35.854+08:00 level=INFO source=types.go:71 msg="inference compute" id=GPU-a93d53d5-add0-d73c-9800-83ba35515332 library=cuda compute=8.6 driver=12.4 name="NVIDIA GeForce RTX 3060 Ti" total="8.0 GiB" available="7.0 GiB"
[GIN] 2024/05/27 - 23:22:40 | 200 | 5.063803ms | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/27 - 23:22:40 | 200 | 2.428835ms | 127.0.0.1 | GET "/api/tags"
[GIN] 2024/05/27 - 23:22:50 | 200 | 33.81µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/27 - 23:22:50 | 200 | 3.816911ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/05/27 - 23:22:50 | 200 | 223.798µs | 127.0.0.1 | POST "/api/show"

time=2024-05-27T23:22:52.868+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=33 memory.available="7.0 GiB" memory.required.full="5.3 GiB" memory.required.partial="5.3 GiB" memory.required.kv="256.0 MiB" memory.weights.total="4.3 GiB" memory.weights.repeating="3.9 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-05-27T23:22:52.868+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=33 memory.available="7.0 GiB" memory.required.full="5.3 GiB" memory.required.partial="5.3 GiB" memory.required.kv="256.0 MiB" memory.weights.total="4.3 GiB" memory.weights.repeating="3.9 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="164.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-05-27T23:22:52.869+08:00 level=INFO source=server.go:338 msg="starting llama server" cmd="/tmp/ollama3477894997/runners/cpu_avx2/ollama_llama_server --model /usr/share/ollama/.ollama/models/blobs/sha256-010ec3ba94cb5ad2d9c8f95f46f01c6d80f83deab9df0a0831334ea45afff3e2 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 1 --port 60783"
time=2024-05-27T23:22:52.869+08:00 level=INFO source=sched.go:338 msg="loaded runners" count=1
WARN [server_params_parse] Not compiled with GPU offload support, --n-gpu-layers option will be ignored. See main README.md for information on enabling GPU BLAS support | n_gpu_layers=-1 tid="139932059617152" timestamp=1716823372
INFO [main] build info | build=2994 commit="8541e996" tid="139932059617152" timestamp=1716823372
time=2024-05-27T23:22:52.871+08:00 level=INFO source=server.go:525 msg="waiting for llama runner to start responding"
INFO [main] system info | n_threads=6 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139932059617152" timestamp=1716823372 total_threads=12
time=2024-05-27T23:22:52.874+08:00 level=INFO source=server.go:562 msg="waiting for server to become available" status="llm server error"
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="11" port="60783" tid="139932059617152" timestamp=1716823372
llama_model_loader: loaded meta data with 24 key-value pairs and 291 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-010ec3ba94cb5ad2d9c8f95f46f01c6d80f83deab9df0a0831334ea45afff3e2 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = model
llama_model_loader: - kv 2: llama.vocab_size u32 = 128256
llama_model_loader: - kv 3: llama.context_length u32 = 8192
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.block_count u32 = 32
llama_model_loader: - kv 6: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 7: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 8: llama.attention.head_count u32 = 32
llama_model_loader: - kv 9: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 10: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 12: general.file_type u32 = 15
llama_model_loader: - kv 13: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,128256] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 15: tokenizer.ggml.scores arr[f32,128256] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 17: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 128001
llama_model_loader: - kv 20: tokenizer.ggml.unknown_token_id u32 = 128002
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 22: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_K: 193 tensors
llama_model_loader: - type q6_K: 33 tensors
time=2024-05-27T23:22:53.126+08:00 level=INFO source=server.go:562 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: missing pre-tokenizer type, using: 'default'
llm_load_vocab:
llm_load_vocab: ************************************
llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
llm_load_vocab: CONSIDER REGENERATING THE MODEL
llm_load_vocab: ************************************
llm_load_vocab:
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: n_ctx_train = 8192
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 8192
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.58 GiB (4.89 BPW)
llm_load_print_meta: general.name = model
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
llm_load_print_meta: UNK token = 128002 ''
llm_load_print_meta: PAD token = 0 '!'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_tensors: ggml ctx size = 0.15 MiB
llm_load_tensors: CPU buffer size = 4685.30 MiB
........................................................................................
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.50 MiB
llama_new_context_with_model: CPU compute buffer size = 258.50 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 1
INFO [main] model loaded | tid="139932059617152" timestamp=1716823373
time=2024-05-27T23:22:53.629+08:00 level=INFO source=server.go:567 msg="llama runner started in 0.76 seconds"
[GIN] 2024/05/27 - 23:22:53 | 200 | 2.832875282s | 127.0.0.1 | POST "/api/chat"
[GIN] 2024/05/27 - 23:22:55 | 200 | 98.939441ms | 127.0.0.1 | POST "/api/chat"
time=2024-05-27T23:28:01.188+08:00 level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=6.156460594
time=2024-05-27T23:28:02.666+08:00 level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=7.634409778
time=2024-05-27T23:28:04.142+08:00 level=WARN source=sched.go:512 msg="gpu VRAM usage didn't recover within timeout" seconds=9.11041182
[GIN] 2024/05/27 - 23:29:13 | 200 | 6.560773ms | 127.0.0.1 | HEAD "/"
[GIN] 2024/05/27 - 23:29:13 | 200 | 23.035463ms | 127.0.0.1 | GET "/api/tags"

OS

WSL2

GPU

Nvidia

CPU

AMD

Ollama version

0.0.0

Does CPM support function calling?

Hi,

Would you consider creating a model with visual function-calling abilities (such as cropping images, indexing items, and so on) beyond visual QA?

Thanks!

How do I add an image to the prompt?

First of all, thank you very much for your work.

I'm still not quite sure how to deploy and use this model with Ollama. After running the model normally per the README, I followed Ollama's multimodal chat convention to ask a question with an image, but MiniCPM-V 2.5 does not seem to receive the image correctly.
Environment: Win10, Ollama 0.1.38

(screenshot attached)

Uploading the image through Open WebUI gives the same result (screenshot attached).

Apart from the model path, I did not change anything in the Modelfile or make any other settings; perhaps the input format is not fully compatible with other multimodal models? Clearer guidance here would be much appreciated, thank you!

ollama run model error

What is the issue?

I built ollama and it can pull and create models normally, but when I run any model it gives the error: an unknown error was encountered while running the model.

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.0.0, built myself following the tutorial

The Docker version of ollama won't run

What is the issue?

docker exec -it ollama ollama run minicpm-v2.5
Error: llama runner process has terminated: signal: aborted (core dumped)

OS

Docker

GPU

No response

CPU

Intel

Ollama version

0.1.43

Ollama model has significantly lower answer quality than the online demo

What is the issue?

I created an Ollama model (for the fp16 GGUF) based on this: https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5

When testing one of my sample forms images, I get bad/wrong results when running the model locally via Ollama.

./ollama run minicpm-v2.5
>>> How is the Ending Balance? ./credit-card-statement.jpg
Added image './credit-card-statement.jpg'
The Ending Balance is 8,010.

I get the perfect and correct answers when using the same forms image in the online demo: https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5

What can we do to get the same quality here locally?

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

Latest git commit (367ec3f)

Ollama was previously installed; after reinstalling the OpenBMB/ollama fork per the instructions, it cannot read images

What is the issue?

talos:~/code/models/MiniCPM-Llama3-V-2_5-int4$ ollama run minicpm-llama3-v2.5-q6_k:latest
>>> Please describe this image: ./a3.jpg
Added image './a3.jpg'
>>>
Error: no slots available after 10 retries

After entering an image it waits for a very long time and finally returns an error. This problem troubled me for a long time. But I then found that running ./ollama serve from inside the cloned OpenBMB/ollama directory reads the image and produces output normally.

After countless retries, I found that the ollama command has to run from the directory where you built the new binary, otherwise it cannot read images correctly. The cause is confirmed, but what if Ollama is started as a system service?

Searching with Kimi for "how to make an ubuntu service enter a specific directory before running a command" gives:

Set WorkingDirectory:
In the service unit file, add or modify the [Service] section to include a WorkingDirectory directive, followed by the directory you want the service to run in.

OK, my ollama.service configuration file ends up as follows:

[Unit]
Description=Ollama Service
After=network-online.target

[Service]
WorkingDirectory=/usr/local/ollama # the directory where you built the new ollama
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
Environment="OLLAMA_HOST=0.0.0.0"
Environment="OLLAMA_FLASH_ATTENTION=1" #这个用于解决 Qwen2 执行不正常(回答异常,返回重复字符)

[Install]
WantedBy=default.target
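
After editing the unit file, reload systemd and restart the service so the WorkingDirectory change takes effect (standard systemd commands; the service name follows the unit above):

# reload unit files and restart the service so WorkingDirectory takes effect
sudo systemctl daemon-reload
sudo systemctl restart ollama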

Hope this helps other developers who run into the same problem!

OS

ubuntu 22.04

GPU

No response

CPU

No response

Ollama version

Latest version

Deploying version 2.5 with ollama on a Mac: chat keeps failing

What is the issue?

Deploying version 2.5 with ollama on a Mac, chatting always fails with the error: llama_get_logits_ith: invalid logits id 12, reason: no logits (screenshot attached)

OS

macOS

GPU

Apple

CPU

Apple

Ollama version

No response

can't read file ./examples/minicpm-v2.5/slice_token_for_ollama.raw

What is the issue?

Hi:

I built Ollama locally following the docs and symlinked the resulting ollama executable into /usr/local/bin.

Ollama starts normally and the model can be imported normally.

I see that the file examples/minicpm-v2.5/slice_token_for_ollama.raw exists in the project; where should I put this file?

But during chat inference, the following error is reported:

llm_load_vocab: missing pre-tokenizer type, using: 'default'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab:
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab: ************************************
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab: CONSIDER REGENERATING THE MODEL
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab: ************************************
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab:
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_vocab: special tokens definition check successful ( 256/128256 ).
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: format = GGUF V3 (latest)
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: arch = llama
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: vocab type = BPE
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_vocab = 128256
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_merges = 280147
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_ctx_train = 8192
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_embd = 4096
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_head = 32
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_head_kv = 8
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_layer = 32
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_rot = 128
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_embd_head_k = 128
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_embd_head_v = 128
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_gqa = 4
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_embd_k_gqa = 1024
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_embd_v_gqa = 1024
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: f_norm_eps = 0.0e+00
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: f_logit_scale = 0.0e+00
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_ff = 14336
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_expert = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_expert_used = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: causal attn = 1
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: pooling type = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: rope type = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: rope scaling = linear
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: freq_base_train = 500000.0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: freq_scale_train = 1
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: n_yarn_orig_ctx = 8192
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: rope_finetuned = unknown
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: ssm_d_conv = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: ssm_d_inner = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: ssm_d_state = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: ssm_dt_rank = 0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: model type = 8B
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: model ftype = Q4_0
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: model params = 8.03 B
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: general.name = model
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: EOS token = 128001 '<|end_of_text|>'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: UNK token = 128002 ''
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: PAD token = 0 '!'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: LF token = 128 'Ä'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
May 30 14:34:52 wbs-desktop ollama[651389]: llm_load_tensors: ggml ctx size = 0.30 MiB
May 30 14:34:54 wbs-desktop ollama[651389]: llm_load_tensors: offloading 13 repeating layers to GPU
May 30 14:34:54 wbs-desktop ollama[651389]: llm_load_tensors: offloaded 13/33 layers to GPU
May 30 14:34:54 wbs-desktop ollama[651389]: llm_load_tensors: CPU buffer size = 4437.80 MiB
May 30 14:34:54 wbs-desktop ollama[651389]: llm_load_tensors: CUDA0 buffer size = 1521.41 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: .......................................................................................
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: n_ctx = 2048
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: n_batch = 512
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: n_ubatch = 512
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: flash_attn = 0
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: freq_base = 500000.0
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: freq_scale = 1
May 30 14:34:55 wbs-desktop ollama[651389]: llama_kv_cache_init: CUDA_Host KV buffer size = 152.00 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: llama_kv_cache_init: CUDA0 KV buffer size = 104.00 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: CUDA_Host output buffer size = 0.50 MiB
May 30 14:34:55 wbs-desktop ollama[656763]: [1717050895] warming up the model with an empty run
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: CUDA0 compute buffer size = 677.48 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: CUDA_Host compute buffer size = 12.01 MiB
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: graph nodes = 1030
May 30 14:34:55 wbs-desktop ollama[651389]: llama_new_context_with_model: graph splits = 213
May 30 14:34:56 wbs-desktop ollama[656763]: INFO [main] model loaded | tid="139943102652416" timestamp=1717050896
May 30 14:34:56 wbs-desktop ollama[651389]: time=2024-05-30T14:34:56.488+08:00 level=INFO source=server.go:567 msg="llama runner started in 6.06 seconds"
May 30 14:34:56 wbs-desktop ollama[656763]: [1717050896] slice_image: multiple 1
May 30 14:34:56 wbs-desktop ollama[656763]: [1717050896]
May 30 14:34:56 wbs-desktop ollama[656763]: encode_image_with_clip: image encoded in 9.25 ms by clip_image_preprocess.
May 30 14:34:56 wbs-desktop ollama[656763]: [1717050896]
May 30 14:34:56 wbs-desktop ollama[656763]: encode_image_with_clip: mm_patch_merge_type is flat.
May 30 14:34:56 wbs-desktop ollama[656763]: [1717050896] clip_image_build_graph: ctx->buf_compute_meta.size(): 884880
May 30 14:34:56 wbs-desktop ollama[656763]: [1717050896] clip_image_build_graph: load_image_size: 462 434
May 30 14:34:57 wbs-desktop ollama[656763]: [1717050897] encode_image_with_clip: image embedding created: 96 tokens
May 30 14:34:57 wbs-desktop ollama[656763]: [1717050897]
May 30 14:34:57 wbs-desktop ollama[656763]: encode_image_with_clip: image encoded in 1025.53 ms by CLIP ( 10.68 ms per image patch)
May 30 14:34:57 wbs-desktop ollama[656763]: [1717050897] llava_image_embed_make_with_clip_img_ollama: can't read file ./examples/minicpm-v2.5/slice_token_for_ollama.raw

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

No response

Build failure: MSBUILD : error MSB1009: Project file does not exist. Switch: ollama_llama_server.vcxproj (ollama_llama_server.vcxproj is not generated under llm)

What is the issue?

An exception occurs when running go generate ./... . Below is the actual output, including the dependencies used:
go generate ./...
Already on 'minicpm-v2.5'
Your branch is up to date with 'origin/minicpm-v2.5'.
Submodule path '../llama.cpp': checked out 'd8974b8ea61e1268a4cad27f4f6e2cde3c5d1370'
Checking for MinGW...

CommandType Name Version Source


Application gcc.exe 0.0.0.0 w64devkit\bin\gcc.exe
Application mingw32-make.exe 0.0.0.0 w64devkit\bin\mingw32-make.exe
Building static library
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64_static -G MinGW Makefiles -DCMAKE_C_COMPILER=gcc.exe -DCMAKE_CXX_COMPILER=g++.exe -DBUILD_SHARED_LIBS=off -DLLAMA_NATIVE=off -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_F16C=off -DLLAMA_FMA=off
cmake version 3.29.3

CMake suite maintained and supported by Kitware (kitware.com/cmake).
-- The C compiler identification is GNU 14.1.0
-- The CXX compiler identification is GNU 14.1.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: xxx/w64devkit/bin/gcc.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: xxx/w64devkit/bin/g++.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: xxx/Git/bin/git.exe (found version "2.30.0.windows.2")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- x86 detected
-- Configuring done (3.7s)
-- Generating done (1.4s)
-- Build files have been written to: xxx/llm/build/windows/amd64_static
building with: cmake --build ../build/windows/amd64_static --config Release --target llama --target ggml
[ 16%] Building C object CMakeFiles/ggml.dir/ggml.c.obj
[ 16%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.obj
[ 33%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.obj
[ 50%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.obj
[ 50%] Building CXX object CMakeFiles/ggml.dir/sgemm.cpp.obj
[ 50%] Built target ggml
[ 66%] Building CXX object CMakeFiles/llama.dir/llama.cpp.obj
xxx\llm\llama.cpp\llama.cpp: In constructor 'llama_mmap::llama_mmap(llama_file*, size_t, bool)':
xxx\llm\llama.cpp\llama.cpp:1428:38: warning: cast between incompatible function types from 'FARPROC' {aka 'long long int ()()'} to 'BOOL ()(HANDLE, ULONG_PTR, PWIN32_MEMORY_RANGE_ENTRY, ULONG)' {aka 'int ()(void, long long unsigned int, _WIN32_MEMORY_RANGE_ENTRY*, long unsigned int)'} [-Wcast-function-type]
1428 | pPrefetchVirtualMemory = reinterpret_cast<decltype(pPrefetchVirtualMemory)> (GetProcAddress(hKernel32, "PrefetchVirtualMemory"));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
xxx\llm\llama.cpp\llama.cpp: In function 'float* llama_get_logits_ith(llama_context*, int32_t)':
xxx\llm\llama.cpp\llama.cpp:17331:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector::size_type' {aka 'long long unsigned int'} [-Wformat=]
17331 | throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
| ~~^ ~~~~~~~~~~~~~~~~~~~~~~
| | |
| long unsigned int std::vector::size_type {aka long long unsigned int}
| %llu
xxx\llm\llama.cpp\llama.cpp: In function 'float* llama_get_embeddings_ith(llama_context*, int32_t)':
xxx\llm\llama.cpp\llama.cpp:17376:65: warning: format '%lu' expects argument of type 'long unsigned int', but argument 2 has type 'std::vector::size_type' {aka 'long long unsigned int'} [-Wformat=]
17376 | throw std::runtime_error(format("out of range [0, %lu)", ctx->output_ids.size()));
| ~~^ ~~~~~~~~~~~~~~~~~~~~~~
| | |
| long unsigned int std::vector::size_type {aka long long unsigned int}
| %llu
[ 83%] Building CXX object CMakeFiles/llama.dir/unicode.cpp.obj
[ 83%] Building CXX object CMakeFiles/llama.dir/unicode-data.cpp.obj
[100%] Linking CXX static library libllama.a
[100%] Built target llama
[100%] Built target ggml
Building LCD CPU
generating config with: cmake -S ../llama.cpp -B ../build/windows/amd64/cpu -DCMAKE_POSITION_INDEPENDENT_CODE=on -A x64 -DLLAMA_AVX=off -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DBUILD_SHARED_LIBS=on -DLLAMA_NATIVE=off -DLLAMA_SERVER_VERBOSE=off -DCMAKE_BUILD_TYPE=Release
cmake version 3.29.3

CMake suite maintained and supported by Kitware (kitware.com/cmake).
-- Building for: Visual Studio 16 2019
-- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.26217.
-- The C compiler identification is MSVC 19.29.30148.0
-- The CXX compiler identification is MSVC 19.29.30148.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: xxx//Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: xxx//Microsoft Visual Studio/2019/BuildTools/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: xxx/Git/bin/git.exe (found version "2.30.0.windows.2")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- CMAKE_GENERATOR_PLATFORM: x64
-- x86 detected
-- Configuring done (9.7s)
-- Generating done (1.2s)
CMake Warning:
Manually-specified variables were not used by the project:

LLAMA_F16C

-- Build files have been written to: xxx/ollama/llm/build/windows/amd64/cpu
building with: cmake --build ../build/windows/amd64/cpu --config Release --target ollama_llama_server
Microsoft (R) Build Engine version 16.11.2+f32259642 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.

MSBUILD : error MSB1009: Project file does not exist.
Switch: ollama_llama_server.vcxproj
llm\generate\generate_windows.go:3: running "powershell": exit status 1

OS

Windows

GPU

Nvidia

CPU

Intel

Ollama version

OpenBMB/ollama/tree/minicpm-v2.5
