
Comments (40)

patyork avatar patyork commented on May 2, 2024 63

There are two simple and commonly implemented ways of handling this (a short sketch follows the list):

  1. Bucketing and Padding
    1. Separate input samples into buckets of similar length, ideally such that each bucket contains a number of samples that is a multiple of the mini-batch size
    2. For each bucket, pad the samples to the length of the longest sample in that bucket with a neutral value. Zeros are common, but for something like speech data a representation of silence is used, which is often not zero (e.g. the FFT of a silent portion of audio is used as neutral padding).
  2. Bucketing
    1. Separate input samples into buckets of exactly the same length
      • removes the need for determining what a neutral padding is
      • however, the bucket sizes will frequently not be a multiple of the mini-batch size, so several updates per epoch will not be based on a full mini-batch.
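A minimal sketch of option 1, not from the thread: it assumes plain Python lists of (timesteps, features) NumPy arrays, and the helper name bucket_and_pad and the neutral pad_value are illustrative.

import numpy as np

def bucket_and_pad(sequences, bucket_size=32, pad_value=0.0):
    """Group sequences of similar length and pad each bucket to its longest member."""
    # Sort indices by sequence length so neighbouring sequences have similar lengths.
    order = sorted(range(len(sequences)), key=lambda i: len(sequences[i]))
    buckets = []
    for start in range(0, len(order), bucket_size):
        idx = order[start:start + bucket_size]
        max_len = max(len(sequences[i]) for i in idx)
        n_features = sequences[idx[0]].shape[-1]
        # Fill with the neutral value, then copy each real sequence over the padding.
        batch = np.full((len(idx), max_len, n_features), pad_value, dtype="float32")
        for row, i in enumerate(idx):
            batch[row, :len(sequences[i])] = sequences[i]
        buckets.append(batch)
    return buckets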


fchollet avatar fchollet commented on May 2, 2024 38

You can, but you would have to pad the shorter sequences with zeros, since all inputs to Keras models must be tensors. Here's an example of how to do it: https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py#L46

Another solution would be to feed sequences to your model one sequence at a time (batch_size=1). Then differences in sequence lengths would be irrelevant.
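For reference, the padding step from the linked example looks roughly like this (paraphrased, so treat it as a sketch rather than an exact quote):

from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences

maxlen = 100  # cut every review off (or pad it out) at 100 timesteps

(X_train, y_train), (X_test, y_test) = imdb.load_data()
# Shorter sequences are padded with zeros and longer ones truncated, so every
# input becomes a tensor of shape (nb_samples, maxlen).
X_train = pad_sequences(X_train, maxlen=maxlen)
X_test = pad_sequences(X_test, maxlen=maxlen)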


aabversteeg avatar aabversteeg commented on May 2, 2024 24

I was not able to get anything to work, but I believe that to allow for arbitrary sequence lengths you must supply None for the dimension that should be arbitrary. So in the above case you would replace maxlen with None.


fchollet avatar fchollet commented on May 2, 2024 16

It doesn't know. But it learns to ignore them: in practice, sequence padding won't noticeably impact training. But if you're worried about it, you can always use batches of size 1.


aabversteeg avatar aabversteeg commented on May 2, 2024 15

Hi, I'm planning to use the approach of "batch_size = 1" to allow for arbitrary input lengths. However, what dimensions should I use for the input_shape argument? For example:
model.add(LSTM(512, return_sequences=True, input_shape=(maxlen, len(chars))))
What should I replace "maxlen" with?


patyork avatar patyork commented on May 2, 2024 13

A timestep is one step/element of a sequence. For example, each frame in a video is a timestep; the data for that timestep is the RGB picture at that frame.


philipperemy avatar philipperemy commented on May 2, 2024 8

@Kevinpsk You cannot batch sequences of different lengths together.
@Binteislam it would look like batch_input_shape = (batch_size, time_length, input_dim) = (1, None, input_dim).
Then you can give any sequence to your model, as long as you feed them one by one (a short sketch follows the list below).

Other possibilities are:

  • pad them with zeros
  • group sequences of the same length together inside a batch
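A minimal sketch of the batch_size = 1 setup described above (not from the thread; the layer sizes and input_dim are illustrative):

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

input_dim = 78  # features per timestep (illustrative)

model = Sequential()
# None in the timestep position lets every sequence have its own length.
model.add(LSTM(64, input_shape=(None, input_dim)))
model.add(Dense(1))
model.compile(loss="mse", optimizer="adam")

# Feed sequences one at a time; the timestep count may differ on every call.
for timesteps in (5, 9, 3):
    x = np.random.rand(1, timesteps, input_dim).astype("float32")
    y = np.random.rand(1, 1).astype("float32")
    model.train_on_batch(x, y)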


tbennun avatar tbennun commented on May 2, 2024 5

@LeZhengThu I'm not sure if this is still helpful or relevant, but I had the same problem, so I wrote a generic bucketed Sequence class for exactly this purpose. It is used as a generator and sped up training for me by orders of magnitude (~100x faster, since some sequences were very long, which did not reflect the median sequence length).

https://github.com/tbennun/keras-bucketed-sequence


anirudhgupta22 avatar anirudhgupta22 commented on May 2, 2024 3

Hi everyone,

I am working with video frames; my training data consists of video files of variable length, so the number of timesteps (i.e. the number of frames) varies between files. Can you please help me use an LSTM in such a scenario?

Thanks


pengpaiSH avatar pengpaiSH commented on May 2, 2024 2

@fchollet Actually I have a similar (maybe stupid-looking) question: when I check the imdb data, e.g. X_train[1] after applying pad_sequences(nb_timesteps = 100), I notice that the result preserves only the very last 100 words to make the sequence vector length 100. My question is: why not the first 100 words?


philipperemy avatar philipperemy commented on May 2, 2024 1

@haoqi batch_size=1 in model.fit


philipperemy avatar philipperemy commented on May 2, 2024 1

@paipai880429 It's a matter of point of view. Usually the stronger information is at the end of the comment rather than at the beginning. Either way, you have to make a choice.
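For what it's worth, that choice is exposed by pad_sequences itself through its truncating argument (truncating='pre', the default, keeps the last maxlen elements; 'post' keeps the first ones):

from keras.preprocessing.sequence import pad_sequences

seq = [[1, 2, 3, 4, 5]]
pad_sequences(seq, maxlen=3)                     # [[3 4 5]] -- keeps the end (default)
pad_sequences(seq, maxlen=3, truncating="post")  # [[1 2 3]] -- keeps the beginning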


philipperemy avatar philipperemy commented on May 2, 2024 1

@Binteislam Convert them to the same width and height; that will be the simplest for you.
Or you can bring all the images to max(widths) x height and pad the rest with blank pixels.


habdullah avatar habdullah commented on May 2, 2024 1

@philipperemy
It worked; the problem was in the batch generator.
input_shape = (None, 78) worked for me.


Deltaidiots avatar Deltaidiots commented on May 2, 2024 1

How can we use a variable number of timesteps per sample? The number of features per timestep remains the same.

e.g.:

x_train = [
  [[0.41668948],   # timestep 1
   [0.38072783],   # timestep 2
   [0.70242528]],  # timestep 3

  [[0.65911036],   # timestep 1
   [0.01740353],   # timestep 2
   [0.03617037],   # timestep 3
   [0.04617037]]   # timestep 4
]

People are saying to use padding, but I don't know how padding will solve this. What would the shape of the input array be then?

I have tried using None in the input shape, but it doesn't work.


rzilleruelo avatar rzilleruelo commented on May 2, 2024 1

@Deltaidiots, following your example, padding means adding a value that is not present in your data as a marker of "no data". For example, add a zero at the end of your first sequence:

x_train = [
  [[0.41668948], [0.38072783], [0.70242528], [0.0]],
  [[0.65911036], [0.01740353], [0.03617037], [0.04617037]]
]

This obviously works only if you can assume 0.0 is not a real value in your data. For example, if your values cannot be negative you could use a negative number as padding, or add one to all values and use zero. If there is no simple transformation that guarantees a value you can use as padding, you can add an extra dimension to your data and use it to create a value that is outside your data domain. For example:

x_train = [
  [[1.0, 0.41668948], [1.0, 0.38072783], [1.0, 0.70242528], [0.0, 0.0]],
  [[1.0, 0.65911036], [1.0, 0.01740353], [1.0, 0.03617037], [1.0, 0.04617037]]
]

Then Keras has a layer, Masking, to tell the network about this explicitly:
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Masking
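A minimal sketch of that masking setup, using the padded x_train above (layer sizes are illustrative, and mask_value must match whatever padding value you chose):

import numpy as np
from keras.models import Sequential
from keras.layers import Masking, LSTM, Dense

x_train = np.array([
    [[0.41668948], [0.38072783], [0.70242528], [0.0]],          # padded with 0.0
    [[0.65911036], [0.01740353], [0.03617037], [0.04617037]],
], dtype="float32")
y_train = np.array([[0.0], [1.0]])

model = Sequential([
    # Timesteps whose features all equal mask_value are skipped by the LSTM.
    Masking(mask_value=0.0, input_shape=(4, 1)),
    LSTM(8),
    Dense(1),
])
model.compile(loss="mse", optimizer="adam")
model.fit(x_train, y_train, batch_size=2, epochs=1)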


benjaminklein avatar benjaminklein commented on May 2, 2024

Thank you for the quick answer. So after padding, how does the library know to ignore the padded values (and not use them in training)?


lemuriandezapada avatar lemuriandezapada commented on May 2, 2024

I have been experimenting in my own implementations with output masks that manually set the error gradients to 0 for datapoints you don't want to train on, so they don't receive any updates. It would be a nice feature to have.


haoqi avatar haoqi commented on May 2, 2024

I faced the same problem. If I want to train with a batch size of 1, which function should I use? Thanks.


farizrahman4u avatar farizrahman4u commented on May 2, 2024

@fchollet Instead of using batch_size = 1, you could presort your data by sequence length, group sequences of the same length into batches, and call train_on_batch on each batch, right?
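A rough sketch of that grouping idea, assuming x_train / y_train are Python lists of variable-length samples and model was built with input_shape=(None, n_features):

from collections import defaultdict
import numpy as np

# Group samples by exact sequence length, then train on one group at a time.
by_length = defaultdict(list)
for x, y in zip(x_train, y_train):          # x has shape (timesteps, n_features)
    by_length[len(x)].append((x, y))

for length, pairs in by_length.items():
    xb = np.stack([x for x, _ in pairs])    # (n_samples, length, n_features)
    yb = np.array([y for _, y in pairs])
    model.train_on_batch(xb, yb)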


wangpichao avatar wangpichao commented on May 2, 2024

@aabversteeg Have you figured out what to do in your case? I am also facing this problem.


FernandoLpz avatar FernandoLpz commented on May 2, 2024

Hi everyone.

I am starting to learn LSTMs and I have a small doubt: what is a "time step"? Is a time step the length of the sequence? I would really appreciate your answer.


QuantumLiu avatar QuantumLiu commented on May 2, 2024

@fchollet
In the imdb_lstm example, the use_bias argument of the layers is True.
So, since the sequences are padded, the bias still acts at the padded timesteps.
Would that cause any problems?


Binteislam avatar Binteislam commented on May 2, 2024

I have images of different widths and a fixed height of 46px, and 1000 samples in total. How should I define the input shape of my data for an LSTM layer using the functional API?


Binteislam avatar Binteislam commented on May 2, 2024

@philipperemy Is there no other solution? If I convert them to a fixed width it destroys my data, as I am working on OCR, whereas if I pad them with black it wastes a huge amount of memory.


philipperemy avatar philipperemy commented on May 2, 2024

@Binteislam I'm not aware of a better solution. Or you can feed them one by one (batch size = 1), but that increases the computational time.


Binteislam avatar Binteislam commented on May 2, 2024

@philipperemy If I keep batch_size=1, how do I define the input shape?
@patyork Could you please suggest a solution?


Kevinpsk avatar Kevinpsk commented on May 2, 2024

@Binteislam Hi, I think you just need to set the timestep dimension to None. But I am not sure how you should format the training data in that case. Add each image to a list?


Kevinpsk avatar Kevinpsk commented on May 2, 2024

@philipperemy Hi, yeah, I understand that after reading other people's posts. But in the case of video processing, the image at each timestep would be 2D, so how should I specify the input shape then? Something like (batch_size, timestep, input_dim) = (1, None, (height, width))? In this case, would I be training on video files of variable length one by one?


philipperemy avatar philipperemy commented on May 2, 2024

@Kevinpsk In your case I would advise you to have a look at Conv3D

https://keras.io/layers/convolutional/

It's specifically done for handling videos the same way a regular Conv Net handles images.

If you still want to stick with your LSTM, then input_dim = height * width: just flatten the last two dimensions. You will have (batch_size, timestep, input_dim) = (1, None, height * width).
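A small sketch of that flattening approach (the frame size and layer width are illustrative, not from the thread):

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

height, width = 46, 120                    # illustrative frame size

model = Sequential()
model.add(LSTM(64, input_shape=(None, height * width)))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam")

# One video at a time: each frame is flattened to a vector of height*width values.
video = np.random.rand(1, 37, height * width).astype("float32")   # 37 frames
label = np.array([[1.0]])
model.train_on_batch(video, label)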


habdullah avatar habdullah commented on May 2, 2024

@philipperemy
Is it possible to feed batches with a varying number of timesteps?
(The sequence length within each batch is kept the same.)
For example, for a batch size of 100:
(100, 250, 78)
(100, 300, 78)
(100, 167, 78)
If yes, what would the input shape be? Setting it to (100, None, 78) gives an error.


philipperemy avatar philipperemy commented on May 2, 2024

@habdullah Yes, it's possible and should work. The number of parameters of a recurrent network does not depend on the sequence length; it is only batching that prevents mixing different lengths within one batch. In your case it should work well.
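A minimal sketch of that setup with made-up data; the key point is that input_shape=(None, 78) leaves the timestep dimension free, so each batch can use a different length:

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(32, input_shape=(None, 78)))   # timesteps unspecified, 78 features
model.add(Dense(1))
model.compile(loss="mse", optimizer="adam")

# Lengths only have to match within a batch, not across batches.
for timesteps in (250, 300, 167):
    x = np.random.rand(100, timesteps, 78).astype("float32")
    y = np.random.rand(100, 1).astype("float32")
    model.train_on_batch(x, y)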


philipperemy avatar philipperemy commented on May 2, 2024

@habdullah ok cool.


machanic avatar machanic commented on May 2, 2024

@patyork Is that because the batch_size you pass as an argument is not the same as each bucket's maximum length?
How do you deal with batch_size? E.g. do you process the articles inside one bucket, and then move on to another group of articles in the next bucket?
Won't that programming work be very complex?


LeZhengThu avatar LeZhengThu commented on May 2, 2024

@lemuriandezapada Hi, I'm facing the same problem and think your idea is the way to solve this. Can you kindly show how to code your idea? Thanks.


AshimaChawla avatar AshimaChawla commented on May 2, 2024

Hi @fchollet, using batch_size=1 has some performance issues on GPU. It takes a really long time to train on sequences of variable length.

Could you please advise?

/Ashima


dbsousa01 avatar dbsousa01 commented on May 2, 2024

Sorry to bring this up again @fchollet, but I am having a problem with how to present the training data. I also want to analyse video. Let's say I have 100 videos with 5 frames each, so 500 frames in total. How do I build the training data so I can feed a 5D tensor to my neural network? I suppose the input shape should be (nb of frames, nb of sequence, rows, cols, channels), where nb of frames is 500 (?) and nb of sequence is between 1 and 5 depending on the order of the frame within each video. Am I thinking correctly?
Thank you


NookLook2014 avatar NookLook2014 commented on May 2, 2024

@habdullah I'm doing an LSTM encoder-decoder. If I set input_shape = (None, 78), do you have any idea how to use RepeatVector(n) so that n matches the actual shape[0] of the input dynamically?


MasterHansCoding avatar MasterHansCoding commented on May 2, 2024

Hello, if you are using batch_size = 1 and return_sequences = True, I think I read somewhere that the cell state is reset at every batch.

