Comments (14)
Hi,
thanks for your questions.
I hope these diagrams will help you:
Yes, I mix the classes: I add them up to create one large set of labels, each representing a single speaker.
from broadcast-news-videos-dataset.
I am the developer of pyannote.audio, which is also based on Yaafe for (MFCC) feature extraction and Keras for (LSTM-based) embedding.
It would be great if you could share both your Yaafe featureplan and your Keras model so that anyone can easily reproduce your great work. Is this something you'd be willing to do?
To clarify my thoughts: I think your architecture is a good candidate for the triplet loss paradigm used in https://github.com/hbredin/TristouNet.
I'd be happy to collaborate with you on this if you are interested.
Hi @hbredin
I am really glad that you wrote. I know your work and admire it.
I am okay with opening the code, or a portion of it.
I am currently working on extending the paper and method for ICASSP, so I'll publish something after submitting it.
Let's be in touch.
I'll write you an email regarding collaboration.
Hello, @cyrta
I would like to ask some questions about the paper:
- What do you mean by SFTF? Is that Short-Time Fourier Transform?
- "3.072 seconds (96 frames of 512 audio samples)": doesn't this mean the audio is already at 16 kHz? But the paper does the downsampling right after that step.
Thank you in advance.
Hi @dieka13
- Yes, it is the Short-Time Fourier Transform (STFT).
- The audio preprocessing sentence unfortunately has some ambiguity. Let me explain it:
a. We downsample the input audio stream to 16 kHz,
b. then we segment it into frames of 512 samples every 256 samples (50% hop).
c. Each frame is then multiplied by a Hamming window,
d. and each frame goes through the STFT, giving a spectral representation.
e. This output is put into a "spectrum data" buffer.
f. We take 96 frames from this "spectrum data" buffer as input to the network.
g. We shift by 8 frames in the stream (256 ms) and feed the next 96-frame portion of the buffer as input.
h. Repeat until the stream ends.
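A minimal NumPy sketch of steps (a)-(h), assuming the parameters above (16 kHz, 512-sample frames, 256-sample hop, Hamming window, 96-frame buffer, 8-frame shift); the function names are mine, not from the paper's code, and the actual downsampling and streaming are left out:

```python
import numpy as np

SR = 16_000      # (a) target sample rate
FRAME = 512      # (b) frame length in samples
HOP = 256        # (b) 50% hop
N_FRAMES = 96    # (f) frames per network input
SHIFT = 8        # (g) frame shift between consecutive inputs

def stft_frames(audio):
    """(b)-(d): frame the signal, apply a Hamming window, take the STFT."""
    window = np.hamming(FRAME)
    n = 1 + (len(audio) - FRAME) // HOP
    frames = np.stack([audio[i * HOP:i * HOP + FRAME] for i in range(n)])
    return np.abs(np.fft.rfft(frames * window))   # magnitude spectra

def network_inputs(spectra):
    """(e)-(h): slide a 96-frame buffer over the spectrum stream."""
    for start in range(0, len(spectra) - N_FRAMES + 1, SHIFT):
        yield spectra[start:start + N_FRAMES]

audio = np.random.default_rng(0).standard_normal(SR * 4)  # 4 s of dummy audio
spectra = stft_frames(audio)
batch = list(network_inputs(spectra))
print(spectra.shape, batch[0].shape)  # (249, 257) (96, 257)
```

Each yielded (96, 257) slice would still need to be reduced to the 96-bin (e.g. mel) representation discussed later in the thread before entering the network.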
Ah, I see, it's clear now.
I hope you don't mind if I ask additional questions:
- Where does the resulting size of N × 1 × 15 come from?
- How do you apply the CQT one?
Thanks again, @cyrta
@cyrta what is the input shape of the network ?
@venkatesh-1729 Mine is (96, 96): 96 mel bins, 96 frames, when using the mel spectrogram feature.
@dieka13 But using 96x96 gives only Nx1x1 after pooling 4 times. You can't get the sequence of Nx1x15. The only way to get a sequence length of 15 is if the input shape is 96x1440. But this doesn't make sense either, as it would then contain a receptive field of about 23 seconds. It's rare that anyone talks that long in AMI.
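A quick sketch of the arithmetic behind those two figures; the total time-axis downsampling factor of 96 is inferred from the comment itself (96 frames collapse to 1 step, 1440 frames to 15), not from the paper:

```python
HOP, FRAME, SR = 256, 512, 16_000   # framing parameters from earlier in the thread
TOTAL_DOWNSAMPLE = 96               # implied by: 1440 frames -> 15 sequence steps

print(96 // TOTAL_DOWNSAMPLE)       # 1: a 96-frame input collapses to a single step
print(1440 // TOTAL_DOWNSAMPLE)     # 15: 1440 frames would be needed for length 15

# Receptive field of 1440 overlapping frames at 50% hop:
print(((1440 - 1) * HOP + FRAME) / SR)  # 23.056 -> the "about 23 seconds" above
```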
@dieka13 @venkatesh-1729 However, based on one of his replies above, I think he does input 96x96. But to get a sequence length of 15, he shifts by 8 frames 15 times. This gives a receptive field of about 3.42 seconds, which makes more sense.
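A rough check of that figure, assuming the 50% hop (256 samples at 16 kHz) described earlier in the thread; the small gap to 3.42 s may just be a difference in how the shifts are counted:

```python
SR, FRAME, HOP = 16_000, 512, 256   # framing parameters from earlier in the thread

def span_seconds(n_frames):
    """Audio duration covered by n_frames overlapping STFT frames."""
    return ((n_frames - 1) * HOP + FRAME) / SR

print(span_seconds(96))           # 1.552 -> one 96-frame input
print(span_seconds(96 + 14 * 8))  # 3.344 -> 15 inputs, each shifted by 8 frames
```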
@leonardltk Yes, to get around that I use 3x3 pooling in the last CNN layer, so there will be some sequence to pass to the RNN layers. I'm in the middle of completing the evaluation phase, so if my approach doesn't turn out satisfactory I'll try yours. I hope the author gives more information regarding this input size.
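A sketch of the shape arithmetic behind this workaround; the four 2x2 poolings in the earlier layers are my assumption, not stated in the thread:

```python
time_axis = 96              # frames in the (96, 96) input mentioned above
for _ in range(4):          # assumed: four 2x2 poolings in the earlier CNN layers
    time_axis //= 2         # 96 -> 48 -> 24 -> 12 -> 6
time_axis //= 3             # the 3x3 pooling in the last CNN layer
print(time_axis)            # 2: a short sequence is left to feed the RNN
```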
@dieka13 I think even if you use (3,3) pooling on the last CNN layer, you only get a sequence length of 2, right? It might be difficult for the RNN to learn much from that. But do let me know your results! I managed to build my method. Will test it soon.
May I know how you get 150 speaker classes from AMI? From http://groups.inf.ed.ac.uk/ami/corpus/participantids.shtml & http://groups.inf.ed.ac.uk/ami/corpus/signals.shtml I could only get 186 unique speakers.
@cyrta Could you shed some light on how you got the 150 unique speakers from the AMI dataset?