
A Real-time Implementation of a Nested U-Net-based Speech Enhancement

https://doi.org/10.7840/kics.2023.48.9.1064

Abstract: Speech enhancement is a technology that removes noise and improves speech intelligibility; it is used in many fields such as voice recognition and video conferencing. Recently, DNN-based speech enhancement has been actively researched, and a nested U-Net-based speech enhancement model that effectively uses local and global information of the speech signal shows excellent performance. To extend the use of speech enhancement, real-time execution should be possible on various edge devices such as smartphones. In this paper, NUNet-TLS, one of the latest nested U-Net-based models, was implemented in real time in a smartphone app environment. We first analyzed the model's operation time per block, showing that the dilated convolution block at the bottleneck of NUNet-TLS requires a relatively long operation time because of frequent memory access. Based on this, we replaced the dilated convolution with LSTM (Long Short-Term Memory), significantly reducing the operation time. We implemented the proposed model as a smartphone app after changing the data input/output to a frame-based form; compared to the previous model, the app used less memory when executed in real time while achieving equivalent speech enhancement performance.

Demo

Real-time speech enhancement app UI and description.

The app provides recording, audio upload, audio download, audio playback, and real-time speech enhancement on/off. Speech is input in real time through recording or audio upload, and enhanced speech is output in real time when the speech enhancement button is activated. The output speech can later be downloaded and compared with the input speech.

Version 1 (2022.11.23)

real_time_speech_enhancement_android_app.mp4

Version 2 (2023.06.07)

Screen_Recording_20230607_214841_rtse_demo.mp4
Screen_Recording_20230607_214731_rtse_demo.mp4

Real-time Implementation

A general DNN-based speech enhancement system processes speech in batches by receiving clips of a specific length. Since this method is not suitable for real-time implementation, this paper uses the data input/output method shown in the figure. The real-time system processes a continuous speech input by dividing it into short frames obtained through overlapping windows; the final enhanced speech signal is then obtained from the output frames through inter-frame overlap-add. Frame-to-frame overlap is allowed around the input frame at time t, and the number of overlapping samples between frames is directly related to the amount of computation in a real-time system. In this paper, a 512-sample Hann window with 50% overlap between frames was used. Therefore, at a sampling frequency of 16 kHz, the permissible computation time per input frame is 16 ms (256 samples). Meanwhile, in the input/output structure shown in the figure, a time delay of half a frame (16 ms) occurs between the input and output of the model, which is an acceptable range for a general speech communication system.
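A minimal sketch of this frame-based input/output (assumptions: plain NumPy, a periodic 512-sample Hann window with a 256-sample hop, and `enhance_frame` as a hypothetical placeholder for the model call; this is not the app's actual code):

```python
import numpy as np

SAMPLE_RATE = 16000
FRAME_LEN = 512                 # 32 ms at 16 kHz
HOP = FRAME_LEN // 2            # 50% overlap -> 16 ms of new samples per frame
# Periodic Hann window: satisfies the constant-overlap-add condition at a 50% hop.
window = 0.5 * (1.0 - np.cos(2.0 * np.pi * np.arange(FRAME_LEN) / FRAME_LEN))

def enhance_frame(frame):
    """Placeholder for the speech enhancement model (identity here)."""
    return frame

def process_stream(speech):
    """Split the signal into overlapping windowed frames, enhance each frame,
    and reconstruct the output by overlap-add."""
    out = np.zeros(len(speech))
    for start in range(0, len(speech) - FRAME_LEN + 1, HOP):
        frame = window * speech[start:start + FRAME_LEN]
        out[start:start + FRAME_LEN] += enhance_frame(frame)
    return out

enhanced = process_stream(np.random.randn(SAMPLE_RATE))  # 1 s of dummy audio
```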

Nested U-Net TLS using LSTM

NUNet-TLS achieves excellent speech enhancement performance by adding two-level skip connections and CTFA (Causal Time-Frequency Attention) to nested U-Nets. The figure shows the structure of NUNet-TLS. Downsampling (DS) and upsampling (US) layers sit between the encoder and decoder stages, and the bottleneck block is a dilated dense block. The U-Net nested in each encoder/decoder stage consists of an input layer (IL), an encoder layer (EL), a bottleneck block (BB), and a decoder layer (DL).
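The following is only an illustrative sketch of the nested-U-Net idea, not the NUNet-TLS implementation: each outer encoder stage contains its own small U-Net (IL, EL, BB, DL) with an inner skip connection. The (2, 3) kernels follow the encoder/decoder description in the next section; the channel counts, activations, and the UpSampling2D-based DL are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inner_unet(x, ch=32):
    """One nested (inner) U-Net: IL -> EL -> BB -> DL with an inner skip."""
    il = layers.Conv2D(ch, (1, 1), padding="same", activation="elu")(x)   # IL
    el = layers.Conv2D(ch, (2, 3), strides=(1, 2), padding="same",
                       activation="elu")(il)                              # EL (downsample frequency)
    bb = layers.Conv2D(ch, (2, 3), padding="same", activation="elu")(el)  # BB
    dl = layers.UpSampling2D(size=(1, 2))(bb)                             # DL (upsample frequency)
    dl = layers.Conv2D(ch, (2, 3), padding="same", activation="elu")(dl)
    return layers.Concatenate()([il, dl])                                 # inner skip connection

# Outer structure: encoder stage -> DS layer -> ... -> bottleneck -> ... -> US -> decoder stage
inp = layers.Input(shape=(None, 256, 1))                                  # (time, frequency, channel)
enc1 = inner_unet(inp)                                                    # outer encoder stage 1
ds1 = layers.Conv2D(32, (1, 3), strides=(1, 2), padding="same")(enc1)     # DS layer
# ... further encoder stages, the dilated dense bottleneck, and decoder stages with US layers
tf.keras.Model(inp, ds1).summary()
```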

Real-time processing limitations of Nested U-Net

The figure shows the operation of the dilated convolution used in the bottleneck block of the model. Dilated convolution uses (2, 3) convolution modules just like the encoder and decoder blocks. However, unlike a conventional convolution module, the filter operation is performed at intervals equal to the dilation rate, so the receptive field is very wide. As a result, dilated convolution adds more processing time than conventional convolution. Since the dilated dense block used in the baseline stacks a large number of dilated convolution layers, the processing time grows with the dilation rate and the number of layers.
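A hedged sketch of a dilated dense block of this kind (the depth, channel count, doubling dilation rates, and causality handling are assumptions, not the paper's exact configuration; only the (2, 3) kernel and the dense use of dilated layers follow the description above):

```python
import tensorflow as tf
from tensorflow.keras import layers

def dilated_dense_block(x, ch=32, num_layers=5):
    """Stack of dilated (2, 3) convolutions with dense (concatenative) skips."""
    outputs = [x]
    for i in range(num_layers):
        h = layers.Concatenate()(outputs) if len(outputs) > 1 else x
        h = layers.Conv2D(ch, (2, 3), padding="same",
                          dilation_rate=(2 ** i, 1),       # dilation grows along the time axis
                          activation="elu")(h)
        outputs.append(h)
    return outputs[-1]

inp = layers.Input(shape=(None, 16, 32))   # assumed (time, frequency, channels) at the bottleneck
out = dilated_dense_block(inp)
tf.keras.Model(inp, out).summary()
```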

Optimization

On the other hand, LSTM (Long Short-Term Memory) is another module that, like dilated convolution, exploits information over time. An LSTM decides what to keep and what to discard according to the importance of the data through its gating mechanism, and since only one state needs to be passed to carry previous information to the next input, the time required to buffer and process a large amount of past data can be reduced. In other words, an LSTM can operate more efficiently than dilated convolution in terms of real-time processing. We therefore compared processing times after changing the dilated convolution in the bottleneck block of the baseline model to an LSTM. In this paper, the number of LSTM units was fixed to 21 to keep the number of parameters similar to that of the baseline model.
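A sketch of this replacement (the 21 LSTM units come from the text; the bottleneck feature shape and the flatten/projection/reshape around the LSTM are assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

def lstm_bottleneck(x, units=21):
    """Replace the dilated dense bottleneck with a single LSTM over time."""
    f, c = x.shape[2], x.shape[3]                      # frequency bins, channels
    h = layers.TimeDistributed(layers.Flatten())(x)    # (batch, time, f * c)
    h = layers.LSTM(units, return_sequences=True)(h)   # causal by construction, 21 units
    h = layers.Dense(f * c)(h)                         # project back to the original feature size
    return layers.TimeDistributed(layers.Reshape((f, c)))(h)

inp = layers.Input(shape=(None, 16, 32))               # assumed bottleneck feature shape
out = lstm_bottleneck(inp)
tf.keras.Model(inp, out).summary()
```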

Experimental Setting

  • Dataset

The training speech data consisted of 22 hours and 30 minutes of Wall Street Journal (WSJ0) recordings in which 81 English speakers read news articles. The input speech was mixed at SNRs of 0, 5, 10, and 15 dB, using CHiME-2, CHiME-3, and NOISEX-92 data as noise. The sampling rate is 16 kHz. For testing, speech and ETSI noise data that were not used for training were employed, and the test SNRs were also set to 0, 5, 10, and 15 dB.
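For illustration, a minimal sketch of mixing speech and noise at the SNRs listed above (dummy signals are used here; this is not the repository's actual data pipeline):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals `snr_db`."""
    noise = noise[:len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + gain * noise

rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)   # 1 s of dummy "speech" at 16 kHz
noise = rng.standard_normal(16000)   # dummy noise
for snr in (0, 5, 10, 15):           # SNRs used for training and testing
    noisy = mix_at_snr(clean, noise, snr)
```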

  • Loss

  • Hyperparameter

The deep learning model was trained with the TensorFlow framework using a learning rate of 0.001, a batch size of 3, 100 epochs, and the Adam optimizer. The FFT length of the STFT used during training was 32 ms, and a Hann window was used.
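A sketch of these settings in TensorFlow (only the optimizer, learning rate, batch size, epochs, and the 32 ms Hann-window STFT come from the text; everything else here, including the 50% hop, follows the frame setup described earlier):

```python
import tensorflow as tf

FFT_LEN = 512      # 32 ms at 16 kHz
HOP = 256          # 50% overlap

def stft(signal):
    """STFT with a 32 ms Hann window, as used during training."""
    return tf.signal.stft(signal, frame_length=FFT_LEN, frame_step=HOP,
                          fft_length=FFT_LEN, window_fn=tf.signal.hann_window)

optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
BATCH_SIZE = 3
EPOCHS = 100
```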

  • Settings

Experimental Results

  • Performance
  • RTF (Real-time Factor) and Memory Usage

Real-time factor (RTF) was measured to check whether the speech enhancement system runs in real time. RTF is the system's inference time divided by the input time; if RTF is less than 1, real-time processing is possible. RTF is shown in Figure (a). When the baseline NUNet-TLS was implemented in real time without modification, its RTF was 2.708, far exceeding 1, so real-time processing was impossible, whereas the proposed model with the bottleneck block changed to an LSTM achieved an RTF of 0.897, i.e., processing within the real-time budget.
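For reference, a minimal sketch of how RTF can be measured (the 16 ms hop follows the frame setup above; `run_inference` is a hypothetical stand-in for the TFLite interpreter call):

```python
import time
import numpy as np

HOP_SECONDS = 256 / 16000      # 16 ms of new input samples per frame

def measure_rtf(run_inference, frame, num_frames=100):
    """RTF = average inference time per frame / input duration per frame."""
    start = time.perf_counter()
    for _ in range(num_frames):
        run_inference(frame)
    elapsed = time.perf_counter() - start
    return (elapsed / num_frames) / HOP_SECONDS   # < 1 means real-time capable

# Example with a dummy inference function standing in for the model:
rtf = measure_rtf(lambda x: x * 0.5, np.zeros(512, dtype=np.float32))
```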

Figure (b) compares the memory usage of the TFLite model installed in the app. Memory usage was divided into model-loading memory and real-time execution memory. The model-loading memory is the memory increase after the speech enhancement model is loaded onto the device, and the real-time execution memory is the memory increase when the loaded model is executed in real time. Memory usage was measured with the CPU profiler provided by Android Studio, based on the maximum memory usage. Looking at Figure (b), the loading memory of the baseline and the proposed model is almost the same, but the baseline model uses about 10 MB more real-time execution memory. Memory usage varies depending on the amount of computation and the parameters of the model. The difference in memory usage during real-time execution, even though the parameters and MACs of the two models are similar, is due to the memory allocation and access of dilated convolution. The roughly threefold reduction in RTF is also possible because memory usage is reduced by replacing dilated convolution with LSTM.

  • Spectrogram

(a) Spectrogram of clean target speech without noise; (b) spectrogram of input speech mixed with noise; (c) spectrogram of enhanced speech obtained through file-to-file inference on a PC; (d) spectrogram of enhanced speech obtained through frame-by-frame inference of the TFLite model in the app.

Requirements

This repo is tested with Ubuntu 20.04, TensorFlow 2.9, Python 3.7, and CUDA 11.2. For package dependencies, see the repository.

Reference

Monoaural Speech Enhancement Using a Nested U-Net with Two-Level Skip Connections
S. Hwang, S. W. Park, and Y. Park
[paper] [code]

Update

  • 2023.06.07: uploaded code


