
Comments (7)

csukuangfj commented on July 19, 2024

Did you train the model yourself?
If yes, have you followed our doc exactly to export it to ncnn?

Does our provided model work?
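
For example, you can sanity-check a provided model like this. This is only a sketch: it assumes sherpa-ncnn is already built in ./build and uses the small conv-emformer model mentioned later in this thread; the file names and the test wave are the usual ones in that repo, so adjust them if yours differ.

# download a provided, pre-exported model
repo=sherpa-ncnn-conv-emformer-transducer-small-2023-01-09
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/marcoyang/$repo
(cd $repo && git lfs pull --include "*.bin")

# argument order: tokens, encoder param/bin, decoder param/bin, joiner param/bin, wave file
./build/bin/sherpa-ncnn \
  $repo/tokens.txt \
  $repo/encoder_jit_trace-pnnx.ncnn.param $repo/encoder_jit_trace-pnnx.ncnn.bin \
  $repo/decoder_jit_trace-pnnx.ncnn.param $repo/decoder_jit_trace-pnnx.ncnn.bin \
  $repo/joiner_jit_trace-pnnx.ncnn.param $repo/joiner_jit_trace-pnnx.ncnn.bin \
  $repo/test_wavs/0.wav

If this decodes the test wave correctly, the problem is most likely in the export of your own model rather than in sherpa-ncnn itself.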


dirkstark commented on July 19, 2024

Thank you for your fast answer. Yes, I trained the model myself. I wrote that "streaming-ncnn-decode works", which was wrong; I meant that decode.py and streaming_decode.py work. I followed this doc: https://icefall.readthedocs.io/en/latest/model-export/export-ncnn-conv-emformer.html

  • built ncnn from https://github.com/csukuangfj/ncnn
  • verified with decode.py that the model works
  • exported with export-for-ncnn.py (I checked the parameters beforehand)
  • ran pnnx on the joiner, decoder and encoder (a sketch of this step follows below)

I don't know if the log is okay ... the mentions of zipformer and 'open failed' look wrong to me, but I do get the ncnn .bin and .param files

  • added SherpaMetaData

  • tested ./streaming-ncnn-decode.py:

2024-05-03 11:33:11,783 INFO [streaming-ncnn-decode.py:349] Constructing Fbank computer
2024-05-03 11:33:11,783 INFO [streaming-ncnn-decode.py:352] Reading sound files: ./exp/test.wav
2024-05-03 11:33:11,789 INFO [streaming-ncnn-decode.py:357] torch.Size([106560])
Segmentation fault

The error occurs in encoder_out, states = model.run_encoder(frames, states), more precisely at ret, ncnn_out0 = ex.extract("out0").
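
For reference, the pnnx step from the list above was this. It is only a sketch, assuming the default file names that export-for-ncnn.py writes into the exp directory and that the pnnx binary built from the ncnn fork above is on the PATH:

cd conv_emformer_transducer_stateless2/exp

# pnnx turns each TorchScript file into a matching *.ncnn.param / *.ncnn.bin pair
pnnx ./encoder_jit_trace-pnnx.pt
pnnx ./decoder_jit_trace-pnnx.pt
pnnx ./joiner_jit_trace-pnnx.pt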


csukuangfj commented on July 19, 2024

exported with export-for-ncnn.py (I checked the parameters beforehand)

Please describe how you checked that.

Also, please confirm whether you have followed the following doc exactly:
https://icefall.readthedocs.io/en/latest/model-export/export-ncnn-conv-emformer.html

Hint:

  • You don't need to modify the exported files when running streaming-ncnn-decode.py.
  • You must modify them according to the doc if you want to run the model with sherpa-ncnn (see the sketch below).
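
For concreteness, the modification described in the doc is adding a SherpaMetaData line to encoder_jit_trace-pnnx.ncnn.param and increasing the layer count in its header by one (the blob count stays the same). The sketch below only shows the shape of the change; the key=value attributes encode the model type and your encoder hyper-parameters (--num-encoder-layers, --memory-size, and so on), and the exact key numbering and values must be taken from the doc:

7767517
<original layer count + 1> <blob count>
SherpaMetaData  sherpa_ncnn_meta_data  0 0  0=... 1=... 2=... 3=... 4=... 5=... 6=... 7=...
Input           in0                    0 1 in0
...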


dirkstark commented on July 19, 2024

Please describe how you checked that.

I trained with these parameters:

./conv_emformer_transducer_stateless2/train.py \
  --world-size 1 \
  --num-epochs 30 \
  --start-epoch 1 \
  --exp-dir conv_emformer_transducer_stateless2/exp \
  --max-duration 420 \
  --master-port 12321 \
  --num-encoder-layers 16 \
  --chunk-length 32 \
  --cnn-module-kernel 31 \
  --left-context-length 32 \
  --right-context-length 8 \
  --memory-size 32 \
  --encoder-dim 144 \
  --dim-feedforward 576 \
  --nhead 4

I tried to use the params from the small model: https://huggingface.co/marcoyang/sherpa-ncnn-conv-emformer-transducer-small-2023-01-09/blob/main/export-ncnn.sh

It seems that there is no "--bpe-model" option, so I used "tokens.txt" as described in the documentation:

./conv_emformer_transducer_stateless2/export-for-ncnn.py \
  --exp-dir conv_emformer_transducer_stateless2/exp \
  --tokens data/lang_bpe_500/tokens.txt \
  --epoch 2 \
  --avg 1 \
  --use-averaged-model 0 \
  --num-encoder-layers 16 \
  --chunk-length 32 \
  --cnn-module-kernel 31 \
  --left-context-length 32 \
  --right-context-length 8 \
  --memory-size 32 \
  --encoder-dim 144 \
  --dim-feedforward 576 \
  --nhead 4

The output was similar to the documentation, except for these warnings:

emformer2.py:614: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attention.shape == (B * self.nhead, Q, self.head_dim)
emformer2.py:405: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert cache.shape == (B, D, self.cache_size), cache.shape
_trace.py:1065: TracerWarning: Encountering a list at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for list, use a tuple instead. for dict, use a NamedTuple instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.
module._c._create_method_from_trace(

The files appear to have been created. When running pnnx on the encoder I get this output:

fp16 = 1
optlevel = 2
device = cpu
inputshape =
inputshape2 =
customop =
moduleop = scaling_converter.PoolingModuleNoProj,zipformer.AttentionDownsampleUnsqueeze,zipformer_for_ncnn_export_only.AttentionDownsampleUnsqueeze
############# pass_level0
inline module = emformer2.Conv2dSubsampling
inline module = scaling.DoubleSwish
inline module = scaling_converter.NonScaledNorm
inline module = torch.nn.modules.linear.Identity
############# pass_level1
############# pass_level2
############# pass_level3
open failed
############# pass_level4
############# pass_level5
[...]
make_slice_expression input 1157
pnnx build without onnx-zero support, skip saving onnx
############# pass_ncnn
[...]
fallback batch axis 233 for operand pnnx_expr_126_mul(1117,1.666667e-01)
[...]
reshape tensor with batch index 1 is not supported yet!
[...]
unsqueeze batch dim 1 is not supported yet!


The missing "--bpe-model" like in your sherpa-ncnn-conv-emformer-transducer-small-2023-01-09 isn't a problem?
The output "moduleop = scaling_converter.PoolingModuleNoProj,zipformer.AttentionDownsampleUnsqueeze,zipformer_for_ncnn_export_only.AttentionDownsampleUnsqueeze" is also okay?
Is there any verbose-mode to check what's wrong?

Also, please confirm whether you have followed the following doc exactly

I tried to, but I can't guarantee it. I'll retry in a few days.

Thank you for the hint and your help. In case it's of interest: I tested with sherpa-ncnn and streaming-ncnn-decode.py on different systems and got the same error, but your "sherpa-ncnn-conv-emformer-transducer-small-2023-01-09" works well.


csukuangfj commented on July 19, 2024

If you follow the doc exactly, there should not be any issues.

Please try to export with our provided PyTorch checkpoint and make sure you can reproduce it.


dirkstark commented on July 19, 2024

PyTorch checkpoint? The documentation states: "We are using Ubuntu 18.04, Torch 1.13 and Python 3.8 for testing" and "Please use a newer version of PyTorch". I am using "2.1.1+cu121".

I'm not sure whether pnnx uses the same CUDA version, but if I rebuild everything on a clean system, it should be no problem to use 2.1.1, right?
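
In case it matters, this is how I compare the Torch builds in the environments used for training, export and pnnx (just the standard version checks, nothing icefall-specific):

# PyTorch version seen by the export scripts
python3 -c "import torch; print(torch.__version__)"

# CUDA version PyTorch was built with (prints None for CPU-only builds)
python3 -c "import torch; print(torch.version.cuda)"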


csukuangfj commented on July 19, 2024

A PyTorch checkpoint is a .pt file.

I suggest you follow the doc step by step using our provided checkpoint.
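
Roughly, that means something like the following. It is only a sketch: the repository URL, checkpoint file name, tokens path and encoder parameters are the ones given in the doc, and the <...> placeholders must be replaced accordingly.

# download the pretrained model referenced in the doc
GIT_LFS_SKIP_SMUDGE=1 git clone <pretrained-model-repo-url-from-the-doc>
(cd <pretrained-model-repo> && git lfs pull --include "*.pt")

# the doc symlinks the provided checkpoint so that --epoch/--avg can find it, e.g.
(cd <pretrained-model-repo>/exp && ln -s <provided-checkpoint>.pt epoch-30.pt)

# then export exactly as in the doc, pointing --exp-dir at that directory
./conv_emformer_transducer_stateless2/export-for-ncnn.py \
  --exp-dir <pretrained-model-repo>/exp \
  --tokens <path-to-tokens.txt-in-that-repo> \
  --epoch 30 \
  --avg 1 \
  --use-averaged-model 0 \
  <encoder parameters exactly as listed in the doc>

If this export runs cleanly and the resulting files work with streaming-ncnn-decode.py, compare those parameters with the ones you used for your own model.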

