
Modulation Features for Automatic Speech Recognition

This repo contains an implementation of

FDLP-spectrogram

The implementation allows fast batch computation of the FDLP-spectrogram and can even be used on the fly during DNN training.

To compute an FDLP-spectrogram:

Python

from fdlp import FDLP
fdlp = FDLP()
# speech (batch x signal length) : padded speech signals formed into a batch
# lens (batch) : lengths of each padded speech signal in the batch
# set lens=None if you are computing features one utterance at a time and not as a batch
feats, olens = fdlp.extract_feats(speech, lens)
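As a minimal sketch, the batch and length arrays above can be formed by zero-padding the raw waveforms to a common length. The NumPy array types used here are an assumption and may need to match whatever your installed version of extract_feats expects.

import numpy as np
from fdlp import FDLP

fdlp = FDLP()

# Two dummy mono waveforms of different lengths at 16 kHz (illustrative only).
utt1 = np.random.randn(16000).astype(np.float32)   # 1.0 s
utt2 = np.random.randn(24000).astype(np.float32)   # 1.5 s

# Zero-pad to the longest utterance and stack into (batch x signal length).
max_len = max(len(utt1), len(utt2))
speech = np.stack([np.pad(u, (0, max_len - len(u))) for u in (utt1, utt2)])
lens = np.array([len(utt1), len(utt2)])

feats, olens = fdlp.extract_feats(speech, lens)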

The FDLP class takes the following important parameters, which are set to reasonable default values:

 n_filters: int = 80, # Number of filters
 coeff_num: int = 100, # Number of modulation coefficients to compute
 coeff_range: str = '1,100', # Range of modulation coefficients to preserve 
 order: int = 150, # Order of FDLP model
 fduration: float = 1.5, # Duration of window in seconds
 frate: int = 100, # Frame rate
 overlap_fraction: float = 0.25,  # Overlap fraction in Overlap-Add
 srate: int = 16000    # Sample rate of the speech signal
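As an illustrative sketch, any of these defaults can be overridden by passing keyword arguments to the constructor; the specific values below are arbitrary examples, not recommendations.

from fdlp import FDLP

# Hypothetical configuration: fewer filters and a longer analysis window than the defaults.
fdlp = FDLP(n_filters=40, fduration=2.0, frate=100, srate=16000)

# speech and lens prepared as in the batching sketch above
feats, olens = fdlp.extract_feats(speech, lens)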

CLI

# Kaldi-like features
make-fdlp kaldi wav.scp "ark:| copy-feats ark:- ark,scp:/path/to/storage/make_fdlp.ark,data/feats.scp" [data/utt2num_frames]

For more info type:

make-fdlp kaldi --help
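Once the features are written in Kaldi ark/scp format as above, they can be read back in Python. The sketch below assumes the kaldiio package is installed and that the command shown earlier produced data/feats.scp.

import kaldiio

# Lazily open the scp index written by the feature-extraction command above.
feats_scp = kaldiio.load_scp('data/feats.scp')

for utt_id, feats in feats_scp.items():
    print(utt_id, feats.shape)  # e.g. (num_frames, n_filters)
    break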

Results

The performance of an end-to-end ASR system with these features is reported in https://arxiv.org/abs/2103.14129 and summarized below (word error rates, %):

Data set | mel-spectrogram | FDLP-spectrogram
WSJ (test_eval92) | 5.1 | 4.8
REVERB (et_real_1ch / et_real_1ch_wpe / et_real_8ch_beamformit) | 23.2 / 20.7 / 9.2 | 19.4 / 18.0 / 7.2
CHIME4 (et05_real_isolated_1ch_track / et05_real_beamformit_2mics / et05_real_beamformit_5mics) | 23.7 / 20.4 / 16.8 | 23.4 / 19.5 / 15.8

Modulation vector (M-vector)

from fdlp import FDLP
fdlp = FDLP(lfr=10, return_mvector=True)
# speech (batch x signal length) : padded speech signals formed into a batch
# lens (batch) : lengths of each padded speech signal in the batch
feats, olens = fdlp.extract_feats(speech, lens)

The FDLP class takes the following important parameters for M-vector computation:

 n_filters: int = 80, # Number of filters
 coeff_num: int = 100, # Number of modulation coefficients to compute
 coeff_range: str = '1,100', # Range of modulation coefficients to preserve 
 order: int = 150, # Order of FDLP model
 fduration: float = 1.5, # Duration of window in seconds
 frate: int = 100, # Frame rate
 lfr: int = 10, # M-vectors are computed at this frame-rate and then interpolated to frate
 overlap_fraction: float = 0.25,  # Overlap fraction in Overlap-Add
 srate: int = 16000    # Sample rate of the speech signal
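The sketch below is a hypothetical check of the lfr behaviour: although M-vectors are computed at the lower rate lfr, the returned features are interpolated back to frate, so a T-second utterance should yield roughly frate * T frames. The random input is purely illustrative.

import numpy as np
from fdlp import FDLP

fdlp = FDLP(lfr=10, return_mvector=True)

# One 2-second dummy utterance at 16 kHz, with a batch dimension of 1.
speech = np.random.randn(1, 32000).astype(np.float32)
feats, olens = fdlp.extract_feats(speech, lens=None)  # lens=None for a single utterance

# Roughly frate * duration = 100 * 2 = 200 frames expected after interpolation to frate.
print(feats.shape, olens)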

Results with these features for Kaldi TDNN models on the REVERB data set can be found in "Modulation Vectors as Robust Feature Representation for ASR in Domain Mismatched Conditions" (https://www.isca-speech.org/archive_v0/Interspeech_2019/pdfs/2723.pdf).

Complex Frequency Domain Linear Prediction

paper: https://arxiv.org/pdf/2203.13216.pdf

This work modifies the conventional FDLP model. The M-vectors computed using complex FDLP correspond exactly to the modulation spectrum of speech in different frequency sub-bands.

from fdlp import FDLP
fdlp = FDLP(lfr=10, return_mvector=True, complex_mvectors=True)
# speech (batch x signal length) : padded speech signals formed into a batch
# lens (batch) : lengths of each padded speech signal in the batch
feats, olens = fdlp.extract_feats(speech, lens)

Installation

Pip

To install the latest, unreleased version, do:

pip install git+https://github.com/sadhusamik/fdlp_spectrogram
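A quick way to confirm that the package imports correctly after installation (a sketch, not an official check):

python -c "from fdlp import FDLP; print(FDLP)"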


fdlp_spectrogram's Issues

fdlp extract

I found that the input to fdlp.extract only accepts NumPy arrays, but using NumPy during training is too slow. Is there a version where the input can be placed on CUDA?

NaN/Inf if the waveform contains bursts

Hi Samik, the extractor generates Inf/NaN values if the waveform contains these amplitude jumps. Would you have time to look at it? The file where the issue occurs can be downloaded from https://www.uschovna.cz/zasilka/ZQAUTIKSPNR9AN9M-BG9 (until 14 July).
Thanks,
Martin
