Comments (3)
Can you be more specific about the issue you're actually facing?
I tested Windows builds from the MSYS2 CLANG64 and UCRT64 environments, and I do not see any immediate issues.
przemoc@NUC11PHKi7C002 CLANG64 /d/git/github.com/ggerganov/whisper.cpp
$ rm -rf build && cmake -B build && cmake --build build -j $(nproc)
...
przemoc@NUC11PHKi7C002 CLANG64 /d/git/github.com/ggerganov/whisper.cpp
$ ./build/bin/main.exe -m models/ggml-large-v3.bin -f samples/jfk.wav
whisper_init_from_file_with_params_no_state: loading model from 'models/ggml-large-v3.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab = 51866
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 1280
whisper_model_load: n_audio_head = 20
whisper_model_load: n_audio_layer = 32
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 1280
whisper_model_load: n_text_head = 20
whisper_model_load: n_text_layer = 32
whisper_model_load: n_mels = 128
whisper_model_load: ftype = 1
whisper_model_load: qntvr = 0
whisper_model_load: type = 5 (large v3)
whisper_model_load: adding 1609 extra tokens
whisper_model_load: n_langs = 100
whisper_model_load: CPU total size = 3094.36 MB
whisper_model_load: model size = 3094.36 MB
whisper_init_state: kv self size = 220.20 MB
whisper_init_state: kv cross size = 245.76 MB
whisper_init_state: compute buffer (conv) = 36.26 MB
whisper_init_state: compute buffer (encode) = 926.66 MB
whisper_init_state: compute buffer (cross) = 9.38 MB
whisper_init_state: compute buffer (decode) = 209.26 MB
system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 0 | COREML = 0 | OPENVINO = 0
main: processing 'samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, 5 beams + best of 5, lang = en, task = transcribe, timestamps = 1 ...
[00:00:00.300 --> 00:00:09.000] And so, my fellow Americans, ask not what your country can do for you, ask what you
[00:00:09.000 --> 00:00:11.000] can do for your country.
whisper_print_timings: load time = 1932.32 ms
whisper_print_timings: fallbacks = 0 p / 0 h
whisper_print_timings: mel time = 13.86 ms
whisper_print_timings: sample time = 95.37 ms / 147 runs ( 0.65 ms per run)
whisper_print_timings: encode time = 32488.18 ms / 1 runs (32488.18 ms per run)
whisper_print_timings: decode time = 0.00 ms / 1 runs ( 0.00 ms per run)
whisper_print_timings: batchd time = 4972.74 ms / 145 runs ( 34.29 ms per run)
whisper_print_timings: prompt time = 0.00 ms / 1 runs ( 0.00 ms per run)
whisper_print_timings: total time = 39508.21 ms
przemoc@NUC11PHKi7C002 UCRT64 /d/git/github.com/ggerganov/whisper.cpp
$ rm -rf build && cmake -B build && cmake --build build -j $(nproc)
...
przemoc@NUC11PHKi7C002 UCRT64 /d/git/github.com/ggerganov/whisper.cpp
$ ./build/bin/main.exe -m models/ggml-large-v3.bin -f samples/jfk.wav
whisper_init_from_file_with_params_no_state: loading model from 'models/ggml-large-v3.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab = 51866
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 1280
whisper_model_load: n_audio_head = 20
whisper_model_load: n_audio_layer = 32
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 1280
whisper_model_load: n_text_head = 20
whisper_model_load: n_text_layer = 32
whisper_model_load: n_mels = 128
whisper_model_load: ftype = 1
whisper_model_load: qntvr = 0
whisper_model_load: type = 5 (large v3)
whisper_model_load: adding 1609 extra tokens
whisper_model_load: n_langs = 100
whisper_model_load: CPU total size = 3094.36 MB
whisper_model_load: model size = 3094.36 MB
whisper_init_state: kv self size = 220.20 MB
whisper_init_state: kv cross size = 245.76 MB
whisper_init_state: compute buffer (conv) = 36.26 MB
whisper_init_state: compute buffer (encode) = 926.66 MB
whisper_init_state: compute buffer (cross) = 9.38 MB
whisper_init_state: compute buffer (decode) = 209.26 MB
system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 0 | COREML = 0 | OPENVINO = 0
main: processing 'samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, 5 beams + best of 5, lang = en, task = transcribe, timestamps = 1 ...
[00:00:00.300 --> 00:00:09.000] And so, my fellow Americans, ask not what your country can do for you, ask what you
[00:00:09.000 --> 00:00:11.000] can do for your country.
whisper_print_timings: load time = 1010.02 ms
whisper_print_timings: fallbacks = 0 p / 0 h
whisper_print_timings: mel time = 21.38 ms
whisper_print_timings: sample time = 85.51 ms / 147 runs ( 0.58 ms per run)
whisper_print_timings: encode time = 32120.25 ms / 1 runs (32120.25 ms per run)
whisper_print_timings: decode time = 0.00 ms / 1 runs ( 0.00 ms per run)
whisper_print_timings: batchd time = 4907.33 ms / 145 runs ( 33.84 ms per run)
whisper_print_timings: prompt time = 0.00 ms / 1 runs ( 0.00 ms per run)
whisper_print_timings: total time = 38150.42 ms
Hi. On Windows, no matter how much memory your machine has, you cannot get large-v3 to run, because malloc on Windows has a 4 GB limit. Would this be something you can test and fix? Cheers
Are you still using a 32-bit Intel CPU and an old desktop version of Windows? If not, this link explains it well: https://stackoverflow.com/questions/181050/can-you-allocate-a-very-large-single-chunk-of-memory-4gb-in-c-or-c?noredirect=1&lq=1
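For reference, here is a minimal sketch (a standalone hypothetical test program, not part of whisper.cpp) that checks whether a single malloc larger than 4 GB succeeds. On a 64-bit build such as MSYS2 CLANG64 or UCRT64 it should succeed given enough RAM, while a 32-bit build cannot by construction:

/* hypothetical test: can this build allocate a single block over 4 GiB? */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* a 32-bit address space caps any single allocation below 4 GiB */
    if (sizeof(size_t) < 8) {
        printf("32-bit build: a single allocation over 4 GiB is impossible\n");
        return 1;
    }
    size_t sz = 5ULL * 1024 * 1024 * 1024; /* 5 GiB, above the 32-bit ceiling */
    unsigned char *p = malloc(sz);
    if (p == NULL) {
        printf("malloc of %zu bytes failed\n", sz);
        return 1;
    }
    memset(p, 0, sz); /* touch the memory so it is actually committed */
    printf("malloc of %zu bytes succeeded\n", sz);
    free(p);
    return 0;
}

If this succeeds on your machine, a 4 GB malloc limit is unlikely to be what blocks large-v3; note also that the logs above report the model itself at only about 3.1 GB (CPU total size = 3094.36 MB).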
Hey, I tried the new version and it seems to be fine with the latest build. Sorry, I no longer have the build where this happened.