Audio Denoiser
This is an implementation of A Fully Convolutional Neural Network for Speech Enhancement. The paper presents a speech enhancement method that can improve speech intelligibility for hearing aid users. As input, the model takes 8 frames of a noisy spectrogram and outputs the corresponding clean frame. This way the clean frame captures the time and frequency dependencies of the current frame and the previous 7.
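The 8-frame windowing described above can be sketched as follows. This is an illustrative helper, not code from the repo; the window length of 8 comes from the paper, while the function name and array shapes are assumptions.

```python
import numpy as np

def make_training_pairs(noisy_spec, clean_spec, context=8):
    """Pair each window of `context` noisy frames with the clean frame
    corresponding to the window's last (current) frame.

    noisy_spec, clean_spec: (num_freq_bins, num_frames) magnitude spectrograms.
    Returns inputs of shape (N, num_freq_bins, context) and
    targets of shape (N, num_freq_bins), where N = num_frames - context + 1.
    """
    inputs, targets = [], []
    for t in range(context - 1, noisy_spec.shape[1]):
        # the current frame t plus the 7 frames before it
        inputs.append(noisy_spec[:, t - context + 1 : t + 1])
        targets.append(clean_spec[:, t])
    return np.stack(inputs), np.stack(targets)
```

For example, a 20-frame spectrogram with 129 frequency bins yields 13 input windows of shape (129, 8), each paired with one clean target frame.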
The biggest challenge I faced in this project was understanding how to train and evaluate a model efficiently. As a college student, I do not have access to 100 GPUs, so I had to be very methodical in my approach. The method I found most useful was to purposely overfit the model on a single training example; once it could do that, I knew the model worked as it should. From there I incrementally increased the amount of data until I found the setup that gave the best model loss.
The model I trained in this repo was also used in a web app for speech enhancement in my senior design class and you can see that here.
Here you can see spectrograms of the clean audio, the clean audio with noise added onto it, and finally the denoised audio. In the denoised audio, the regions with the highest dB are reconstructed well; however, the lower the dB, the worse the reconstruction gets. You can compare the noisy and denoised audio files here.
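For reference, a dB-scaled magnitude spectrogram like the ones compared above can be computed with a simple framed FFT. This is a minimal sketch with assumed parameters (frame length, hop size); the repo's actual STFT settings may differ.

```python
import numpy as np

def spectrogram_db(audio, n_fft=256, hop=128):
    """Magnitude spectrogram in dB via a Hann-windowed framed FFT.
    Returns an array of shape (n_fft // 2 + 1, num_frames)."""
    window = np.hanning(n_fft)
    frames = [audio[i:i + n_fft] * window
              for i in range(0, len(audio) - n_fft + 1, hop)]
    mag = np.abs(np.fft.rfft(np.stack(frames), axis=1)).T  # (freq, time)
    return 20 * np.log10(mag + 1e-8)  # small floor avoids log(0)
```

Plotting the output for the clean, noisy, and denoised signals side by side makes it easy to see where the high-energy (high-dB) regions are reconstructed and where the low-energy ones degrade.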