Comments (40)
There are two simple and commonly implemented ways of handling this:
- Bucketing and Padding
- Separate input samples into buckets of similar length, ideally such that each bucket holds a number of samples that is a multiple of the mini-batch size
- For each bucket, pad the samples to the length of the longest sample in that bucket with a neutral value. Zeros are common, but for something like speech data a representation of silence is used, which is often not zeros (e.g. the FFT of a silent portion of audio serves as the neutral padding).
- Bucketing
- Separate input samples into buckets of exactly the same length
- Removes the need to determine what a neutral padding value is
- However, the bucket sizes will frequently not be a multiple of the mini-batch size, so in each epoch several updates will not be based on a full mini-batch.
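The exact-length bucketing scheme above can be sketched in a few lines of plain Python (the function names are illustrative, not from any library):

```python
from collections import defaultdict

def bucket_by_length(sequences):
    """Group sequences so each bucket holds only sequences of one exact length."""
    buckets = defaultdict(list)
    for seq in sequences:
        buckets[len(seq)].append(seq)
    return dict(buckets)

def batches_from_buckets(buckets, batch_size):
    """Yield mini-batches drawn from one bucket at a time; the last batch of a
    bucket may be smaller than batch_size, which is the drawback noted above."""
    for length in sorted(buckets):
        seqs = buckets[length]
        for i in range(0, len(seqs), batch_size):
            yield seqs[i:i + batch_size]

sequences = [[1, 2], [3, 4], [5, 6, 7], [8, 9], [0, 1, 2]]
buckets = bucket_by_length(sequences)
batches = list(batches_from_buckets(buckets, batch_size=2))
# The length-2 bucket yields two batches (one partial), the length-3 bucket one.
```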
from keras.
You can, but you would have to pad the shorter sequences with zeros, since all inputs to Keras models must be tensors. Here's an example of how to do it: https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py#L46
Another solution would be to feed sequences to your model one sequence at a time (batch_size=1). Then differences in sequence lengths would be irrelevant.
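The zero-padding option is easy to sketch in plain NumPy (a minimal stand-in for what keras.preprocessing.sequence.pad_sequences does; note that pad_sequences pads at the start of the sequence by default, while this sketch pads at the end, and the helper name is made up):

```python
import numpy as np

def pad_to_max(sequences, value=0.0):
    """Right-pad variable-length sequences with `value` to a common length."""
    maxlen = max(len(s) for s in sequences)
    batch = np.full((len(sequences), maxlen), value, dtype=np.float32)
    for i, seq in enumerate(sequences):
        batch[i, :len(seq)] = seq  # original values first, padding after
    return batch

x = pad_to_max([[1.0, 2.0, 3.0], [4.0, 5.0]])
# x is now a proper (2, 3) tensor; the second row ends with a 0.0 pad
```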
I was not able to get anything to work, but I believe that to allow for arbitrary sequence lengths you must supply None for the dimension that should be arbitrary. So in the above case you should replace maxlen with None.
It doesn't know. But it learns to ignore them: in practice, sequence padding won't noticeably impact training. But if you're worried about it, you can always use batches of size 1.
Hi, I'm planning to use the approach of "batch_size = 1" to allow for arbitrary input lengths. However, what dimensions should I use for the input_shape argument? For example:
model.add(LSTM(512, return_sequences=True, input_shape=(maxlen, len(chars))))
What should I replace "maxlen" with?
A timestep is one step/element of a sequence. For example, each frame in a video is a timestep; the data for that timestep is the RGB picture at that frame.
@Kevinpsk You cannot batch sequences of different length together.
@Binteislam it would look like (batch_input_shape, time_length, input_dim) = (1, None, input_dim)
Then you can give any sequence to your model, as long as you feed them one by one.
Other possibilities are:
- pad them with zeros
- group sequences of the same length together inside a batch
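The feed-them-one-by-one option can be illustrated in NumPy (the train_on_batch call is commented out because it assumes a compiled model named `model`, which is not shown here):

```python
import numpy as np

input_dim = 3  # illustrative feature count per timestep

# Three sequences with different numbers of timesteps.
sequences = [np.random.rand(t, input_dim) for t in (5, 8, 2)]

# With batch_size = 1, each sequence becomes its own batch of shape
# (1, timesteps, input_dim); the varying timesteps axis is what the
# None in (1, None, input_dim) expresses.
batches = [seq[np.newaxis, ...] for seq in sequences]
# for batch in batches:
#     model.train_on_batch(batch, target)  # hypothetical call
```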
@LeZhengThu I'm not sure if this is still helpful or relevant, but I had the same problem, so I wrote a generic bucketed Sequence class for exactly this purpose. It is used as a generator and sped up training for me by orders of magnitude (~100x faster, since some sequences were very long but did not reflect the median sequence length).
https://github.com/tbennun/keras-bucketed-sequence
Hi everyone,
I am working with video frames; my training data consists of video files of variable length, so the number of timesteps (i.e. the number of frames) varies per file. Can you please help me use an LSTM in this scenario?
Thanks
@fchollet Actually I have a similar (maybe stupid) question: when I check the imdb data, e.g. X_train[1] after applying pad_sequences(nb_timesteps = 100), I notice that the result preserves only the very last 100 words to make the sequence vector length 100. My question is: why not the first 100 words?
@haoqi batch_size=1 in model.fit
@paipai880429 It's a matter of perspective. Usually the stronger information is stored at the end of the comment rather than at the beginning. Either way, you have to make a choice.
@Binteislam convert them to the same width and height. That will be the simplest for you.
Or you can convert all the images to max(widths) x height and pad the rest with blank.
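That padding suggestion might look like the following NumPy sketch (assuming single-channel images of equal height; the helper name is made up):

```python
import numpy as np

def pad_widths(images, blank=0.0):
    """Right-pad images of equal height but varying width to the max width."""
    height = images[0].shape[0]
    max_w = max(img.shape[1] for img in images)
    out = np.full((len(images), height, max_w), blank, dtype=np.float32)
    for i, img in enumerate(images):
        out[i, :, :img.shape[1]] = img  # image on the left, blank on the right
    return out

imgs = [np.ones((46, 30)), np.ones((46, 50))]
batch = pad_widths(imgs)
# batch.shape == (2, 46, 50); columns 30..49 of the first image are blank
```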
@philipperemy
It worked; the problem was in the batch generator.
input_shape = (None, 78) worked for me.
How can we use a variable number of timesteps per sample? The number of features for each timestep remains the same.
E.g.:
x_train = [
[[0.41668948], #1
[0.38072783], #2
[0.70242528]], #3
[[0.65911036],#1
[0.01740353],#2
[0.03617037],#3
[0.04617037]]#4
]
People are saying to use padding, but I don't know how padding would solve this. What would the shape of the input array be then?
I have tried using None in the input shape but it doesn't work.
@Deltaidiots, following your example, padding means adding a value that is not present in your data as a marker of no data. For example, add a zero at the end of your first sequence:
x_train = [
[[0.41668948], [0.38072783], [0.70242528], [0.0]],
[[0.65911036], [0.01740353], [0.03617037], [0.04617037]]
]
This obviously works if you can assume 0.0 is not a real value in your data. For example, if your values cannot be negative, you could use a negative number, or add one to all values and use zero as padding. If there is no simple transformation that guarantees a value you can use as padding, you can increase the dimensionality of your data and use the extra dimension to create a value that is not in your data domain. For example:
x_train = [
[[1.0, 0.41668948], [1.0, 0.38072783], [1.0, 0.70242528], [0.0, 0.0]],
[[1.0, 0.65911036], [1.0, 0.01740353], [1.0, 0.03617037], [1.0, 0.04617037]]
]
Then, Keras has a layer to tell the network this explicitly:
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Masking
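What the Masking layer does can be sketched in NumPy: a timestep is skipped when every feature at that timestep equals mask_value. Using the indicator-dimension example above with mask_value = 0.0:

```python
import numpy as np

x = np.array([
    [[1.0, 0.41668948], [1.0, 0.38072783], [1.0, 0.70242528], [0.0, 0.0]],
    [[1.0, 0.65911036], [1.0, 0.01740353], [1.0, 0.03617037], [1.0, 0.04617037]],
])

mask_value = 0.0
# A timestep survives if *any* of its features differs from mask_value;
# this mirrors the rule keras.layers.Masking(mask_value=0.0) applies.
mask = np.any(x != mask_value, axis=-1)  # shape (batch, timesteps)
# mask[0] is [True, True, True, False]: the padded timestep is masked out
```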
Thank you for the quick answer. So after padding, how does the library know to ignore the padded values (and not use them in training)?
In my own implementations I have been experimenting with output masks that manually set the error gradients to 0 for datapoints you don't want to train on, so they don't receive any updates. It would be a nice feature to have.
I faced the same problem. If I want to train with a batch size of 1, which function should I use? Thanks.
@fchollet Instead of using batch_size = 1, you could presort your data by sequence length, group sequences of the same length into batches, and call train_on_batch on each batch, right?
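That presorting scheme can be sketched in pure Python (only the batching logic; each emitted batch would then go to a train_on_batch call, which is not shown here):

```python
def length_batches(sequences, batch_size):
    """Sort by length, then emit batches whose members all share one length."""
    batch = []
    for seq in sorted(sequences, key=len):
        # Start a new batch when the length changes or the batch is full.
        if batch and (len(seq) != len(batch[0]) or len(batch) == batch_size):
            yield batch
            batch = []
        batch.append(seq)
    if batch:
        yield batch

seqs = [[1], [2, 3], [4], [5, 6], [7, 8]]
batches = list(length_batches(seqs, batch_size=2))
# -> [[[1], [4]], [[2, 3], [5, 6]], [[7, 8]]]
```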
@aabversteeg Have you figured out what to do in your case? I am also facing this problem.
Hi everyone.
I am starting to learn LSTMs and I have a small doubt: what is a "timestep"? Is a timestep the length of the sequence? Thanks in advance for your answer.
@fchollet
In the imdb_lstm example, the use_bias argument of the layers is True.
So even though we have padded the sequence, the bias still acts on the padded timesteps.
Would this cause any problems?
I have images of different widths and a fixed height of 46px, with 1000 samples in total. How should I define the input shape of my data for an LSTM layer using the functional API?
@philipperemy Is there no other solution? If I convert them to a fixed width it destroys my data, since I am working on OCR, whereas padding them with black wastes a huge amount of memory.
@Binteislam I'm not aware of a better solution. Or you can feed them one by one (batch size = 1), but that increases the computation time.
@philipperemy If I keep batch_size = 1, how do I define the input shape?
@patyork could you please suggest a solution?
@Binteislam Hi, I think you just need to set the timestep dimension to None. But I am not sure how you should format the training data in this case. Add each image to a list?
@philipperemy Hi, yeah, I understood that after reading other people's posts. But in the case of video processing, each image at each timestep would be 2D; how shall I specify the input shape then? Something like (batch_size, timestep, input_dim) = (1, None, (height, width))? In that case, would I be training video files of variable length one by one?
@Kevinpsk In your case I would advise you to have a look at Conv3D:
https://keras.io/layers/convolutional/
It's specifically designed for handling videos the same way a regular ConvNet handles images.
If you still want to stick with your LSTM, then input_dim = height * width; just flatten the last two dimensions. You will have (batch_size, timestep, input_dim) = (1, None, height * width).
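The flattening trick can be shown with NumPy (the frame dimensions here are illustrative):

```python
import numpy as np

height, width = 32, 24                     # illustrative frame dimensions
video = np.random.rand(17, height, width)  # one video with 17 frames

# Flatten each frame so the LSTM input becomes
# (batch_size, timestep, input_dim) = (1, None, height * width);
# the timestep axis (17 here) may differ from video to video.
batch = video.reshape(1, video.shape[0], height * width)
# batch.shape == (1, 17, 768)
```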
@philipperemy
Is it possible to feed batches with varying numbers of timesteps?
(The sequence length within each batch is kept the same.)
For example, for a batch size of 100:
(100,250,78)
(100,300,78)
(100,167,78)
If yes, what would the input shape be? Setting it to (100, None, 78) gives an error.
@habdullah Yes, it's possible and should work. The number of parameters of a recurrent network does not depend on the sequence length; it's only batching that prevents mixing different lengths within one batch. In your case it should work well.
@habdullah ok cool.
@patyork Is that because the batch_size you pass as an argument is not the same as each bucket's size?
How do you deal with batch_size, e.g. when you process the articles inside one bucket and then move on to another bunch of articles inside the same bucket?
Wouldn't that make the programming work very complex?
@lemuriandezapada Hi, I'm facing the same problem and think your idea is the way to solve this. Can you kindly show how to code your idea? Thanks.
Hi @fchollet, using batch size = 1 has some performance issues on GPU. It takes really long to train on sequences of varying length.
Could you please advise?
/Ashima
Sorry to bring this up again @fchollet, but I am having a problem with how to present the training data. I also want to analyze video. Let's say I have 100 videos with 5 frames each, so 500 frames total. How do I build the training data so I can feed a 5D tensor to my neural network? I suppose the input shape should be (nb of frames, nb of sequence, rows, cols, channels), where nb of frames is 500 (?) and nb of sequence is between 1 and 5 depending on the order of the frame within each video. Am I thinking about this correctly?
Thank you
@habdullah I'm doing an LSTM encoder-decoder. If I set input shape = (None, 78), do you have any idea how to use RepeatVector(n) so that n matches the actual shape[0] of the input dynamically?
Hello, if you are using batch_size = 1 and return_sequences = True, I think I read somewhere that the cell state is reset at every batch.