guillaume-chevalier / lstm-human-activity-recognition


Human Activity Recognition example using TensorFlow on smartphone sensors dataset and an LSTM RNN. Classifying the type of movement amongst six activity categories - Guillaume Chevalier

License: MIT License

Languages: Jupyter Notebook 99.57%, Python 0.43%
Topics: machine-learning, deep-learning, lstm, human-activity-recognition, neural-network, rnn, recurrent-neural-networks, tensorflow, activity-recognition

lstm-human-activity-recognition's Issues

LSTM + KNN

Hey,

Do you know if it is possible to use one LSTM and one KNN instead of two LSTMs?

Thanks in advance,

R.F.

How to know the values of all kinds of weights

Hello,
I am really impressed by your work, but I have run into some issues.
I used tf.train.Saver to save the model along with all of its ops. However, when I print out tf.global_variables(), I can't tell what each individual array means. Is there a way to know the details of the variables that I saved?
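A minimal TF 1.x sketch of how the variables can be listed by name, shape, and value for inspection (not the repository's own code; it assumes the notebook's graph and a live session):

import tensorflow as tf

# List every global variable's name and shape, then fetch its value to look at
# the underlying weight array (run this with the notebook's graph and session).
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for var in tf.global_variables():
        value = sess.run(var)  # the numpy array behind this variable
        print(var.name, value.shape)

If the model was saved to a checkpoint, tf.train.list_variables("<checkpoint_prefix>") can also list the (name, shape) pairs without rebuilding the graph.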

Shape of input signal (7352, 128, 9)

Hi Guillaume,

First of all, thanks a lot for this wonderful walkthrough of Human Activity Recognition using LSTMs. Your code worked like a charm on my CPU. I'm new to deep learning and don't have a lot of practical experience with RNN/LSTM. Your code has provided an excellent guide to see what's happening under the hood.

This isn't really an issue, but I was wondering how the number 128 came about. I know it's the number of time steps per series. Does 128 refer to the number of columns in each of the 9 input text files? More abstractly, I am not sure how to choose the number of time steps. Is there any guideline for doing this?

Thanks a lot,

  • Madhav
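For context: in the UCI HAR dataset, each of the 9 "Inertial Signals" text files has one row per window and 128 columns, where each row is a fixed-length window of 128 readings (a 2.56 s window sampled at 50 Hz, with 50% overlap between consecutive windows). So 128 is indeed the per-file column count and, physically, the window length. A shapes-only sketch of how the (7352, 128, 9) array is assembled (zeros as placeholder data):

import numpy as np

n_windows, n_steps, n_signals = 7352, 128, 9  # windows, readings per window, signal files

# One (n_windows, 128) matrix per Inertial Signals file, stacked along the last axis.
per_signal = [np.zeros((n_windows, n_steps), dtype=np.float32) for _ in range(n_signals)]
X_train = np.stack(per_signal, axis=-1)

print(X_train.shape)  # (7352, 128, 9)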

Live activity recognition

Hello, thanks for the great GitHub repo 👍
Is this model capable of detecting and recognizing activity live from a laptop's camera?

IndexError when n_classes < 6

Hello,

First, thank you very much for your code. It has helped me out a lot.

I tried to change n_classes to 2, as I am only classifying between two states. However, I receive an IndexError whenever I reduce n_classes below 6. The error message is below:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-15-581f85d3f7ff> in <module>()
     43             feed_dict={
     44                 x: X_test,
---> 45                 y: one_hot(y_test)
     46             }
     47         )

<ipython-input-13-7d65b978d73d> in one_hot(y_, n_classes)
     50     # Function to encode output labels from number indexes
     51     y_ = y_.reshape(len(y_))
---> 52     return np.eye(n_classes)[np.array(y_, dtype=np.int32)]  # Returns FLOATS

IndexError: index 2 is out of bounds for axis 0 with size 2

I'm not sure if I'm just misunderstanding what n_classes is supposed to represent or if there is a bug when it is reduced below 6 (any number I set it to that is greater than 6 still works).

My data has the shape:

X_train: (6312, 50, 9)
y_train: (6312, 1)
X_test: (1578, 50, 9)
y_test: (1578, 1) 

where the sole feature in the y arrays is labelled either 1 or 2 for my two classes.

My hyperparameters are currently set to:

training_data_count = len(X_train)
test_data_count = len(X_test)
n_steps = len(X_train[0])
n_input = len(X_train[0][0])

# NN Internal Structure

n_hidden = 32
n_classes = 2

# Training

learning_rate = 0.001
lambda_loss_amount = 0.0015
training_iters = training_data_count * 300  # Loop 300 times on the dataset
batch_size = 1500
display_iter = 30000  # To show test set accuracy during training

and the one_hot function that I'm using is a fix that you suggested in another issue

def one_hot(y_, n_classes=n_classes):
    # Function to encode output labels from number indexes 
    y_ = y_.reshape(len(y_))
    return np.eye(n_classes)[np.array(y_, dtype=np.int32)]  # Returns FLOATS

Any help with this is much appreciated.

Best,
Sean
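Not an official fix, but the traceback points at the labels rather than at n_classes itself: with classes labelled 1 and 2, np.eye(2) has no row at index 2, while a larger n_classes hides the off-by-one. A sketch of a zero-based encoder under that assumption:

import numpy as np

def one_hot(y_, n_classes):
    # Shift 1-based labels to 0-based before indexing the identity matrix.
    y_ = np.asarray(y_, dtype=np.int32).reshape(-1) - 1
    return np.eye(n_classes)[y_]

print(one_hot([1, 2, 2, 1], n_classes=2))

Equivalently, relabelling y_train and y_test once (y = y - 1) and keeping the original one_hot should work as well.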

When I run train_and_save.py, I get this error

Traceback (most recent call last):
File "D:/GraduationCode/LSTM-HAR-latest/train_and_save.py", line 11, in
from neuraxle_tensorflow.tensorflow_v1 import TensorflowV1ModelStep
ModuleNotFoundError: No module named 'neuraxle_tensorflow'

Thanks.

Changing batch size

Hello,
I tried to change the batch size from 1500 to 100, since I am using different features.
The input dimension of my features is 4096 instead of your 128.
This caused a "resource exhausted" error, so I tried to reduce the batch size, but now I am getting this error:
ValueError: Cannot feed value of shape (100, 4) for Tensor u'Placeholder_1:0', which has shape '(?, 14)'
Thank you in advance for the help.
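The shapes in the error suggest the one-hot batch is sized by the largest label it happens to contain (width 4), while the y placeholder was built with n_classes = 14. A sketch of an encoder with a fixed width matching the placeholder (n_classes = 14 here is taken from the error message, not from the repo):

import numpy as np

N_CLASSES = 14  # must match the y placeholder: tf.placeholder(tf.float32, [None, n_classes])

def one_hot(y_, n_classes=N_CLASSES):
    # Always produce n_classes columns so every batch matches the placeholder.
    y_ = np.asarray(y_, dtype=np.int32).reshape(-1)
    return np.eye(n_classes)[y_]

print(one_hot([0, 3, 13]).shape)  # (3, 14)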

Less data

I used your LSTM as a black box on a small training dataset and got very poor results. Could you suggest any ideas?

Model application on Android

Hi, @guillaume-chevalier
Can this work be applied on Android? I have looked at other open-source projects as well, but encountered many problems in the process. Can you give me some suggestions? Thank you.

Something about the video

Hi, thank you for your wonderful tutorial. There are three lines in the video; they may represent the three data sets in your code, and at every moment each line has only one value. I wonder how you get the 3-dimensional data? Would you help me understand it? Thank you.

ValueError: Variable rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel already exists

when I run this code

# Graph input/output
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

# Graph weights
weights = {
    'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])), # Hidden layer weights
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes], mean=1.0))
}
biases = {
    'hidden': tf.Variable(tf.random_normal([n_hidden])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

pred = LSTM_RNN(x, weights, biases)

# Loss, optimizer and evaluation
l2 = lambda_loss_amount * sum(
    tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables()
) # L2 loss prevents this overkill neural network to overfit the data
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred)) + l2 # Softmax loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Adam Optimizer

correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

I get this error

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-20-7963db4edbf4> in <module>()
     14 }
     15 
---> 16 pred = LSTM_RNN(x, weights, biases)
     17 
     18 # Loss, optimizer and evaluation

<ipython-input-13-1da1ce9bcbd5> in LSTM_RNN(_X, _weights, _biases)
     24     lstm_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell_1, lstm_cell_2], state_is_tuple=True)
     25     # Get LSTM cell output
---> 26     outputs, states = tf.contrib.rnn.static_rnn(lstm_cells, _X, dtype=tf.float32)
     27 
     28     # Get last time step's output feature for a "many to one" style classifier,

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn.py in static_rnn(cell, inputs, initial_state, dtype, sequence_length, scope)
   1235             state_size=cell.state_size)
   1236       else:
-> 1237         (output, state) = call_cell()
   1238 
   1239       outputs.append(output)

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn.py in <lambda>()
   1222         varscope.reuse_variables()
   1223       # pylint: disable=cell-var-from-loop
-> 1224       call_cell = lambda: cell(input_, state)
   1225       # pylint: enable=cell-var-from-loop
   1226       if sequence_length is not None:

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in __call__(self, inputs, state, scope)
    178       with vs.variable_scope(vs.get_variable_scope(),
    179                              custom_getter=self._rnn_get_variable):
--> 180         return super(RNNCell, self).__call__(inputs, state)
    181 
    182   def _rnn_get_variable(self, getter, *args, **kwargs):

E:\Anaconda\lib\site-packages\tensorflow\python\layers\base.py in __call__(self, inputs, *args, **kwargs)
    448         # Check input assumptions set after layer building, e.g. input shape.
    449         self._assert_input_compatibility(inputs)
--> 450         outputs = self.call(inputs, *args, **kwargs)
    451 
    452         # Apply activity regularization.

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in call(self, inputs, state)
    936                                       [-1, cell.state_size])
    937           cur_state_pos += cell.state_size
--> 938         cur_inp, new_state = cell(cur_inp, cur_state)
    939         new_states.append(new_state)
    940 

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in __call__(self, inputs, state, scope)
    178       with vs.variable_scope(vs.get_variable_scope(),
    179                              custom_getter=self._rnn_get_variable):
--> 180         return super(RNNCell, self).__call__(inputs, state)
    181 
    182   def _rnn_get_variable(self, getter, *args, **kwargs):

E:\Anaconda\lib\site-packages\tensorflow\python\layers\base.py in __call__(self, inputs, *args, **kwargs)
    448         # Check input assumptions set after layer building, e.g. input shape.
    449         self._assert_input_compatibility(inputs)
--> 450         outputs = self.call(inputs, *args, **kwargs)
    451 
    452         # Apply activity regularization.

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in call(self, inputs, state)
    399       c, h = array_ops.split(value=state, num_or_size_splits=2, axis=1)
    400 
--> 401     concat = _linear([inputs, h], 4 * self._num_units, True)
    402 
    403     # i = input_gate, j = new_input, f = forget_gate, o = output_gate

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in _linear(args, output_size, bias, bias_initializer, kernel_initializer)
   1037         _WEIGHTS_VARIABLE_NAME, [total_arg_size, output_size],
   1038         dtype=dtype,
-> 1039         initializer=kernel_initializer)
   1040     if len(args) == 1:
   1041       res = math_ops.matmul(args[0], weights)

E:\Anaconda\lib\site-packages\tensorflow\python\ops\variable_scope.py in get_variable(name, shape, dtype, initializer, regularizer, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
   1063       collections=collections, caching_device=caching_device,
   1064       partitioner=partitioner, validate_shape=validate_shape,
-> 1065       use_resource=use_resource, custom_getter=custom_getter)
   1066 get_variable_or_local_docstring = (
   1067     """%s

E:\Anaconda\lib\site-packages\tensorflow\python\ops\variable_scope.py in get_variable(self, var_store, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
    960           collections=collections, caching_device=caching_device,
    961           partitioner=partitioner, validate_shape=validate_shape,
--> 962           use_resource=use_resource, custom_getter=custom_getter)
    963 
    964   def _get_partitioned_variable(self,

E:\Anaconda\lib\site-packages\tensorflow\python\ops\variable_scope.py in get_variable(self, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
    358           reuse=reuse, trainable=trainable, collections=collections,
    359           caching_device=caching_device, partitioner=partitioner,
--> 360           validate_shape=validate_shape, use_resource=use_resource)
    361     else:
    362       return _true_getter(

E:\Anaconda\lib\site-packages\tensorflow\python\ops\variable_scope.py in wrapped_custom_getter(getter, *args, **kwargs)
   1403     return custom_getter(
   1404         functools.partial(old_getter, getter),
-> 1405         *args, **kwargs)
   1406   return wrapped_custom_getter
   1407 

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in _rnn_get_variable(self, getter, *args, **kwargs)
    181 
    182   def _rnn_get_variable(self, getter, *args, **kwargs):
--> 183     variable = getter(*args, **kwargs)
    184     trainable = (variable in tf_variables.trainable_variables() or
    185                  (isinstance(variable, tf_variables.PartitionedVariable) and

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in _rnn_get_variable(self, getter, *args, **kwargs)
    181 
    182   def _rnn_get_variable(self, getter, *args, **kwargs):
--> 183     variable = getter(*args, **kwargs)
    184     trainable = (variable in tf_variables.trainable_variables() or
    185                  (isinstance(variable, tf_variables.PartitionedVariable) and

E:\Anaconda\lib\site-packages\tensorflow\python\ops\variable_scope.py in _true_getter(name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource)
    350           trainable=trainable, collections=collections,
    351           caching_device=caching_device, validate_shape=validate_shape,
--> 352           use_resource=use_resource)
    353 
    354     if custom_getter is not None:

E:\Anaconda\lib\site-packages\tensorflow\python\ops\variable_scope.py in _get_single_variable(self, name, shape, dtype, initializer, regularizer, partition_info, reuse, trainable, collections, caching_device, validate_shape, use_resource)
    662                          " Did you mean to set reuse=True in VarScope? "
    663                          "Originally defined at:\n\n%s" % (
--> 664                              name, "".join(traceback.format_list(tb))))
    665       found_var = self._vars[name]
    666       if not shape.is_compatible_with(found_var.get_shape()):

ValueError: Variable rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

  File "E:\Anaconda\lib\site-packages\tensorflow\python\framework\ops.py", line 1204, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access
  File "E:\Anaconda\lib\site-packages\tensorflow\python\framework\ops.py", line 2630, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "E:\Anaconda\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
    op_def=op_def)
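This "variable already exists" error typically shows up when the cell that builds the LSTM graph is run more than once in the same notebook kernel, so the second run tries to recreate rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel. A minimal TF 1.x workaround sketch (alternatively, restart the kernel and run the cells once, top to bottom):

import tensorflow as tf

# Drop everything from the default graph before rebuilding the model, so the
# LSTM kernel variables are created fresh instead of colliding with the ones
# left over from the previous run of this cell.
tf.reset_default_graph()

# ...then re-run the cell that defines x, y, weights, biases and calls LSTM_RNN(x, weights, biases).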

Different Performance Using the Current Version of lstm.py with TensorFlow r1.0

To fit the current code to the newly released TensorFlow r1.0, I made several modifications to the code.

In the Loading Function

#line 25:
file = open(signal_type_path, 'rb')    ===>>>     file = open(signal_type_path, 'r')

#line 40:
file = open(y_path, 'rb')     ===>>>    file = open(y_path, 'r')

In the LSTM_NETWORK() Function

#line 110:     
hidden = tf.split(0, config.n_steps, hidden)     ===>>>    hidden = tf.split(hidden, config.n_steps, 0)

#line 114    
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(config.n_hidden, forget_bias=1.0)    ===>>>    lstm_cell = tf.contrib.rnn.BasicLSTMCell(config.n_hidden, forget_bias=1.0)

#line 117 
lsmt_layers = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * 2)    ===>>>    lsmt_layers = tf.contrib.rnn.MultiRNNCell([lstm_cell] * 2)

#line 120
outputs, _ = tf.nn.rnn(lsmt_layers, hidden, dtype=tf.float32)    ===>>>    outputs, _ = tf.contrib.rnn.static_rnn(lsmt_layers, hidden, dtype=tf.float32)

In main()

#line 216         
tf.nn.softmax_cross_entropy_with_logits(pred_Y, Y)) + l2    ===>>>    tf.nn.softmax_cross_entropy_with_logits(labels=pred_Y,logits= Y)) + l2

#line 228
tf.initialize_all_variables().run()    ===>>>    tf.global_variables_initializer().run()

However, when I ran the code, the performance does not seem as good as what is shown in the README file. I want to know whether my modifications contain mistakes. The results are shown below:

traing iter: 0, test accuracy : 0.34781134128570557, loss : 1.3058252334594727
traing iter: 1, test accuracy : 0.3338988721370697, loss : 1.5186371803283691
traing iter: 2, test accuracy : 0.287750244140625, loss : 1.7945531606674194
traing iter: 3, test accuracy : 0.2789277136325836, loss : 2.190826416015625
traing iter: 4, test accuracy : 0.36274176836013794, loss : 2.607555866241455
traing iter: 5, test accuracy : 0.3366135060787201, loss : 2.898186206817627
traing iter: 6, test accuracy : 0.235154390335083, loss : 3.007314443588257
traing iter: 7, test accuracy : 0.18154054880142212, loss : 3.0111827850341797
traing iter: 8, test accuracy : 0.18052256107330322, loss : 2.9800398349761963
traing iter: 9, test accuracy : 0.18052256107330322, loss : 2.953343391418457
traing iter: 10, test accuracy : 0.18052256107330322, loss : 2.934436559677124
traing iter: 11, test accuracy : 0.18052256107330322, loss : 2.927518844604492
traing iter: 12, test accuracy : 0.18052256107330322, loss : 2.9316229820251465
traing iter: 13, test accuracy : 0.18052256107330322, loss : 2.935426712036133
traing iter: 14, test accuracy : 0.18052256107330322, loss : 2.9258742332458496
traing iter: 15, test accuracy : 0.18052256107330322, loss : 2.9044976234436035
traing iter: 16, test accuracy : 0.18052256107330322, loss : 2.878373622894287
traing iter: 17, test accuracy : 0.18052256107330322, loss : 2.850264310836792
traing iter: 18, test accuracy : 0.18052256107330322, loss : 2.820138454437256
traing iter: 19, test accuracy : 0.18052256107330322, loss : 2.787750244140625
traing iter: 20, test accuracy : 0.18052256107330322, loss : 2.753265380859375
traing iter: 21, test accuracy : 0.18052256107330322, loss : 2.717087507247925
traing iter: 22, test accuracy : 0.18052256107330322, loss : 2.6796491146087646
traing iter: 23, test accuracy : 0.18052256107330322, loss : 2.6416709423065186
traing iter: 24, test accuracy : 0.18052256107330322, loss : 2.6035842895507812
traing iter: 25, test accuracy : 0.18052256107330322, loss : 2.5656495094299316
traing iter: 26, test accuracy : 0.18052256107330322, loss : 2.5279884338378906
traing iter: 27, test accuracy : 0.18052256107330322, loss : 2.4905736446380615
traing iter: 28, test accuracy : 0.18052256107330322, loss : 2.453395366668701
traing iter: 29, test accuracy : 0.18052256107330322, loss : 2.416445732116699
traing iter: 30, test accuracy : 0.18052256107330322, loss : 2.3797318935394287
traing iter: 31, test accuracy : 0.18052256107330322, loss : 2.3432376384735107
traing iter: 32, test accuracy : 0.18052256107330322, loss : 2.3069679737091064
traing iter: 33, test accuracy : 0.18052256107330322, loss : 2.27091646194458
traing iter: 34, test accuracy : 0.18052256107330322, loss : 2.235081911087036
traing iter: 35, test accuracy : 0.18052256107330322, loss : 2.1994683742523193
traing iter: 36, test accuracy : 0.18052256107330322, loss : 2.164074182510376
traing iter: 37, test accuracy : 0.18052256107330322, loss : 2.1289024353027344
traing iter: 38, test accuracy : 0.18052256107330322, loss : 2.0939483642578125
traing iter: 39, test accuracy : 0.18052256107330322, loss : 2.059211492538452
traing iter: 40, test accuracy : 0.18052256107330322, loss : 2.0247159004211426
traing iter: 41, test accuracy : 0.18052256107330322, loss : 1.9904437065124512
traing iter: 42, test accuracy : 0.18052256107330322, loss : 1.9563994407653809
traing iter: 43, test accuracy : 0.18052256107330322, loss : 1.9225943088531494
traing iter: 44, test accuracy : 0.18052256107330322, loss : 1.889019250869751
traing iter: 45, test accuracy : 0.18052256107330322, loss : 1.8556859493255615
traing iter: 46, test accuracy : 0.18052256107330322, loss : 1.8225984573364258
traing iter: 47, test accuracy : 0.18052256107330322, loss : 1.7897469997406006
traing iter: 48, test accuracy : 0.18052256107330322, loss : 1.757143259048462
traing iter: 49, test accuracy : 0.18052256107330322, loss : 1.7247881889343262
traing iter: 50, test accuracy : 0.18052256107330322, loss : 1.6926804780960083
traing iter: 51, test accuracy : 0.18052256107330322, loss : 1.6608327627182007
traing iter: 52, test accuracy : 0.18052256107330322, loss : 1.6292425394058228
traing iter: 53, test accuracy : 0.18052256107330322, loss : 1.5979050397872925
traing iter: 54, test accuracy : 0.18052256107330322, loss : 1.566849946975708
traing iter: 55, test accuracy : 0.18052256107330322, loss : 1.536041498184204
traing iter: 56, test accuracy : 0.18052256107330322, loss : 1.5055114030838013
traing iter: 57, test accuracy : 0.18052256107330322, loss : 1.4752501249313354
traing iter: 58, test accuracy : 0.18052256107330322, loss : 1.4452615976333618
traing iter: 59, test accuracy : 0.18052256107330322, loss : 1.4155560731887817
traing iter: 60, test accuracy : 0.18052256107330322, loss : 1.386133074760437
traing iter: 61, test accuracy : 0.18052256107330322, loss : 1.3569962978363037
traing iter: 62, test accuracy : 0.18052256107330322, loss : 1.3281437158584595
traing iter: 63, test accuracy : 0.18052256107330322, loss : 1.299586534500122
traing iter: 64, test accuracy : 0.18052256107330322, loss : 1.2713215351104736
traing iter: 65, test accuracy : 0.18052256107330322, loss : 1.2433592081069946
traing iter: 66, test accuracy : 0.18052256107330322, loss : 1.2156893014907837
traing iter: 67, test accuracy : 0.18052256107330322, loss : 1.1883275508880615
traing iter: 68, test accuracy : 0.18052256107330322, loss : 1.1612651348114014
traing iter: 69, test accuracy : 0.18052256107330322, loss : 1.134517788887024
traing iter: 70, test accuracy : 0.18052256107330322, loss : 1.108081340789795
traing iter: 71, test accuracy : 0.18052256107330322, loss : 1.0819562673568726
traing iter: 72, test accuracy : 0.18052256107330322, loss : 1.0561437606811523
traing iter: 73, test accuracy : 0.18052256107330322, loss : 1.030653953552246
traing iter: 74, test accuracy : 0.18052256107330322, loss : 1.0054810047149658
traing iter: 75, test accuracy : 0.18052256107330322, loss : 0.9806308746337891
traing iter: 76, test accuracy : 0.18052256107330322, loss : 0.9561023712158203
traing iter: 77, test accuracy : 0.18052256107330322, loss : 0.9319024085998535
traing iter: 78, test accuracy : 0.18052256107330322, loss : 0.9080308079719543
traing iter: 79, test accuracy : 0.18052256107330322, loss : 0.8844877481460571
traing iter: 80, test accuracy : 0.18052256107330322, loss : 0.8612725138664246
traing iter: 81, test accuracy : 0.18052256107330322, loss : 0.8383844494819641
traing iter: 82, test accuracy : 0.18052256107330322, loss : 0.8158326148986816
traing iter: 83, test accuracy : 0.18052256107330322, loss : 0.7936134934425354
traing iter: 84, test accuracy : 0.18052256107330322, loss : 0.7717282772064209
traing iter: 85, test accuracy : 0.18052256107330322, loss : 0.750174880027771
traing iter: 86, test accuracy : 0.18052256107330322, loss : 0.7289565801620483
traing iter: 87, test accuracy : 0.18052256107330322, loss : 0.7080764770507812
traing iter: 88, test accuracy : 0.18052256107330322, loss : 0.6875315308570862
traing iter: 89, test accuracy : 0.18052256107330322, loss : 0.667317271232605
traing iter: 90, test accuracy : 0.18052256107330322, loss : 0.6474432945251465
traing iter: 91, test accuracy : 0.18052256107330322, loss : 0.6279003024101257
traing iter: 92, test accuracy : 0.18052256107330322, loss : 0.6086910367012024
traing iter: 93, test accuracy : 0.18052256107330322, loss : 0.5898177623748779
traing iter: 94, test accuracy : 0.18052256107330322, loss : 0.5712740421295166
traing iter: 95, test accuracy : 0.18052256107330322, loss : 0.5530636310577393
traing iter: 96, test accuracy : 0.18052256107330322, loss : 0.5351837277412415
traing iter: 97, test accuracy : 0.18052256107330322, loss : 0.517633318901062
traing iter: 98, test accuracy : 0.18052256107330322, loss : 0.5004111528396606
traing iter: 99, test accuracy : 0.18052256107330322, loss : 0.48351573944091797
traing iter: 100, test accuracy : 0.18052256107330322, loss : 0.46694350242614746
traing iter: 101, test accuracy : 0.18052256107330322, loss : 0.45069605112075806
traing iter: 102, test accuracy : 0.18052256107330322, loss : 0.4347696900367737
traing iter: 103, test accuracy : 0.18052256107330322, loss : 0.4191637635231018
traing iter: 104, test accuracy : 0.18052256107330322, loss : 0.403874009847641
traing iter: 105, test accuracy : 0.18052256107330322, loss : 0.3889009356498718
traing iter: 106, test accuracy : 0.18052256107330322, loss : 0.37423935532569885
traing iter: 107, test accuracy : 0.18052256107330322, loss : 0.35988837480545044
traing iter: 108, test accuracy : 0.18052256107330322, loss : 0.34584617614746094
traing iter: 109, test accuracy : 0.18052256107330322, loss : 0.33210957050323486
traing iter: 110, test accuracy : 0.18052256107330322, loss : 0.31867480278015137
traing iter: 111, test accuracy : 0.18052256107330322, loss : 0.3055408000946045
traing iter: 112, test accuracy : 0.18052256107330322, loss : 0.2927030920982361
traing iter: 113, test accuracy : 0.18052256107330322, loss : 0.28015977144241333
traing iter: 114, test accuracy : 0.18052256107330322, loss : 0.26790836453437805
traing iter: 115, test accuracy : 0.18052256107330322, loss : 0.2559434473514557
traing iter: 116, test accuracy : 0.18052256107330322, loss : 0.24426409602165222
traing iter: 117, test accuracy : 0.18052256107330322, loss : 0.2328660935163498
traing iter: 118, test accuracy : 0.18052256107330322, loss : 0.2217465490102768
traing iter: 119, test accuracy : 0.18052256107330322, loss : 0.21090266108512878
traing iter: 120, test accuracy : 0.18052256107330322, loss : 0.20032905042171478
traing iter: 121, test accuracy : 0.18052256107330322, loss : 0.1900242269039154
traing iter: 122, test accuracy : 0.18052256107330322, loss : 0.17998453974723816
traing iter: 123, test accuracy : 0.18052256107330322, loss : 0.17020505666732788
traing iter: 124, test accuracy : 0.18052256107330322, loss : 0.16068293154239655
traing iter: 125, test accuracy : 0.18052256107330322, loss : 0.15141479671001434
traing iter: 126, test accuracy : 0.18052256107330322, loss : 0.14239707589149475
traing iter: 127, test accuracy : 0.18052256107330322, loss : 0.13362593948841095
traing iter: 128, test accuracy : 0.18052256107330322, loss : 0.12509757280349731
traing iter: 129, test accuracy : 0.18052256107330322, loss : 0.11680810153484344
traing iter: 130, test accuracy : 0.18052256107330322, loss : 0.10875467956066132
traing iter: 131, test accuracy : 0.18052256107330322, loss : 0.10093227028846741
traing iter: 132, test accuracy : 0.18052256107330322, loss : 0.09333805739879608
traing iter: 133, test accuracy : 0.18052256107330322, loss : 0.08596782386302948
traing iter: 134, test accuracy : 0.18052256107330322, loss : 0.07881791889667511
traing iter: 135, test accuracy : 0.18052256107330322, loss : 0.07188472896814346
traing iter: 136, test accuracy : 0.18052256107330322, loss : 0.06516419351100922
traing iter: 137, test accuracy : 0.18052256107330322, loss : 0.058652739971876144
traing iter: 138, test accuracy : 0.18052256107330322, loss : 0.05234657600522041
traing iter: 139, test accuracy : 0.18052256107330322, loss : 0.04624189808964729
traing iter: 140, test accuracy : 0.18052256107330322, loss : 0.04033491760492325
traing iter: 141, test accuracy : 0.18052256107330322, loss : 0.034621983766555786
traing iter: 142, test accuracy : 0.18052256107330322, loss : 0.029099291190505028
traing iter: 143, test accuracy : 0.18052256107330322, loss : 0.023763025179505348
traing iter: 144, test accuracy : 0.18052256107330322, loss : 0.018609726801514626
traing iter: 145, test accuracy : 0.18052256107330322, loss : 0.013635683804750443
traing iter: 146, test accuracy : 0.18052256107330322, loss : 0.0088372603058815
traing iter: 147, test accuracy : 0.18052256107330322, loss : 0.004210382699966431
traing iter: 148, test accuracy : 0.18052256107330322, loss : -0.0002478770911693573
traing iter: 149, test accuracy : 0.18052256107330322, loss : -0.004541546106338501
traing iter: 150, test accuracy : 0.18052256107330322, loss : -0.008673999458551407
traing iter: 151, test accuracy : 0.18052256107330322, loss : -0.012648768723011017
traing iter: 152, test accuracy : 0.18052256107330322, loss : -0.01646951586008072
traing iter: 153, test accuracy : 0.18052256107330322, loss : -0.020139258354902267
traing iter: 154, test accuracy : 0.18052256107330322, loss : -0.02366192266345024
traing iter: 155, test accuracy : 0.18052256107330322, loss : -0.027040652930736542
traing iter: 156, test accuracy : 0.18052256107330322, loss : -0.03027883544564247
traing iter: 157, test accuracy : 0.18052256107330322, loss : -0.03337998315691948
traing iter: 158, test accuracy : 0.18052256107330322, loss : -0.036346666514873505
traing iter: 159, test accuracy : 0.18052256107330322, loss : -0.03918309509754181
traing iter: 160, test accuracy : 0.18052256107330322, loss : -0.04189173877239227
traing iter: 161, test accuracy : 0.18052256107330322, loss : -0.04447639361023903
traing iter: 162, test accuracy : 0.18052256107330322, loss : -0.04693935066461563
traing iter: 163, test accuracy : 0.18052256107330322, loss : -0.049284275621175766
traing iter: 164, test accuracy : 0.18052256107330322, loss : -0.051514316350221634
traing iter: 165, test accuracy : 0.18052256107330322, loss : -0.05363213270902634
traing iter: 166, test accuracy : 0.18052256107330322, loss : -0.055640846490859985
traing iter: 167, test accuracy : 0.18052256107330322, loss : -0.05754372850060463
traing iter: 168, test accuracy : 0.18052256107330322, loss : -0.059342704713344574
traing iter: 169, test accuracy : 0.18052256107330322, loss : -0.0610412135720253
traing iter: 170, test accuracy : 0.18052256107330322, loss : -0.06264205276966095
traing iter: 171, test accuracy : 0.18052256107330322, loss : -0.06414808332920074
traing iter: 172, test accuracy : 0.18052256107330322, loss : -0.06556138396263123
traing iter: 173, test accuracy : 0.18052256107330322, loss : -0.06688489019870758
traing iter: 174, test accuracy : 0.18052256107330322, loss : -0.06812205910682678
traing iter: 175, test accuracy : 0.18052256107330322, loss : -0.06927430629730225
traing iter: 176, test accuracy : 0.18052256107330322, loss : -0.07034479826688766
traing iter: 177, test accuracy : 0.18052256107330322, loss : -0.07133537530899048
traing iter: 178, test accuracy : 0.18052256107330322, loss : -0.07224904000759125
traing iter: 179, test accuracy : 0.18052256107330322, loss : -0.07308772951364517
traing iter: 180, test accuracy : 0.18052256107330322, loss : -0.07385437935590744
traing iter: 181, test accuracy : 0.18052256107330322, loss : -0.07455061376094818
traing iter: 182, test accuracy : 0.18052256107330322, loss : -0.07517953217029572
traing iter: 183, test accuracy : 0.18052256107330322, loss : -0.07574253529310226
traing iter: 184, test accuracy : 0.18052256107330322, loss : -0.07624218612909317
traing iter: 185, test accuracy : 0.18052256107330322, loss : -0.07668038457632065
traing iter: 186, test accuracy : 0.18052256107330322, loss : -0.07705892622470856
traing iter: 187, test accuracy : 0.18052256107330322, loss : -0.07738093286752701
traing iter: 188, test accuracy : 0.18052256107330322, loss : -0.07764744758605957
traing iter: 189, test accuracy : 0.18052256107330322, loss : -0.07786049693822861
traing iter: 190, test accuracy : 0.18052256107330322, loss : -0.078022301197052
traing iter: 191, test accuracy : 0.18052256107330322, loss : -0.07813508808612823
traing iter: 192, test accuracy : 0.18052256107330322, loss : -0.07819987088441849
traing iter: 193, test accuracy : 0.18052256107330322, loss : -0.07821857929229736
traing iter: 194, test accuracy : 0.18052256107330322, loss : -0.07819265872240067
traing iter: 195, test accuracy : 0.18052256107330322, loss : -0.07812502235174179
traing iter: 196, test accuracy : 0.18052256107330322, loss : -0.07801615446805954
traing iter: 197, test accuracy : 0.18052256107330322, loss : -0.07786814868450165
traing iter: 198, test accuracy : 0.18052256107330322, loss : -0.07768278568983078
traing iter: 199, test accuracy : 0.18052256107330322, loss : -0.07746139913797379
traing iter: 200, test accuracy : 0.18052256107330322, loss : -0.07720571756362915
traing iter: 201, test accuracy : 0.18052256107330322, loss : -0.07691645622253418
traing iter: 202, test accuracy : 0.18052256107330322, loss : -0.07659582793712616
traing iter: 203, test accuracy : 0.18052256107330322, loss : -0.07624495029449463
traing iter: 204, test accuracy : 0.18052256107330322, loss : -0.07586495578289032
traing iter: 205, test accuracy : 0.18052256107330322, loss : -0.0754581168293953
traing iter: 206, test accuracy : 0.18052256107330322, loss : -0.07502477616071701
traing iter: 207, test accuracy : 0.18052256107330322, loss : -0.0745663046836853
traing iter: 208, test accuracy : 0.18052256107330322, loss : -0.07408446073532104
traing iter: 209, test accuracy : 0.18052256107330322, loss : -0.07357922941446304
traing iter: 210, test accuracy : 0.18052256107330322, loss : -0.07305324822664261
traing iter: 211, test accuracy : 0.18052256107330322, loss : -0.07250723987817764
traing iter: 212, test accuracy : 0.18052256107330322, loss : -0.0719418153166771
traing iter: 213, test accuracy : 0.18052256107330322, loss : -0.07135853171348572
traing iter: 214, test accuracy : 0.18052256107330322, loss : -0.07075759023427963
traing iter: 215, test accuracy : 0.18052256107330322, loss : -0.07014109939336777
traing iter: 216, test accuracy : 0.18052256107330322, loss : -0.06950978189706802
traing iter: 217, test accuracy : 0.18052256107330322, loss : -0.06886371970176697
traing iter: 218, test accuracy : 0.18052256107330322, loss : -0.06820454448461533
traing iter: 219, test accuracy : 0.18052256107330322, loss : -0.06753383576869965
traing iter: 220, test accuracy : 0.18052256107330322, loss : -0.0668511614203453
traing iter: 221, test accuracy : 0.18052256107330322, loss : -0.06615811586380005
traing iter: 222, test accuracy : 0.18052256107330322, loss : -0.06545504927635193
traing iter: 223, test accuracy : 0.18052256107330322, loss : -0.06474266946315765
traing iter: 224, test accuracy : 0.18052256107330322, loss : -0.06402260065078735
traing iter: 225, test accuracy : 0.18052256107330322, loss : -0.06329485028982162
traing iter: 226, test accuracy : 0.18052256107330322, loss : -0.06256052106618881
traing iter: 227, test accuracy : 0.18052256107330322, loss : -0.06181925907731056
traing iter: 228, test accuracy : 0.18052256107330322, loss : -0.06107352674007416
traing iter: 229, test accuracy : 0.18052256107330322, loss : -0.06032247841358185
traing iter: 230, test accuracy : 0.18052256107330322, loss : -0.05956796929240227
traing iter: 231, test accuracy : 0.18052256107330322, loss : -0.05880892276763916
traing iter: 232, test accuracy : 0.18052256107330322, loss : -0.0580473430454731
traing iter: 233, test accuracy : 0.18052256107330322, loss : -0.057283416390419006
traing iter: 234, test accuracy : 0.18052256107330322, loss : -0.05651719868183136
traing iter: 235, test accuracy : 0.18052256107330322, loss : -0.05574985221028328
traing iter: 236, test accuracy : 0.18052256107330322, loss : -0.05498150736093521
traing iter: 237, test accuracy : 0.18052256107330322, loss : -0.05421300232410431
traing iter: 238, test accuracy : 0.18052256107330322, loss : -0.05344397947192192
traing iter: 239, test accuracy : 0.18052256107330322, loss : -0.05267596244812012
traing iter: 240, test accuracy : 0.18052256107330322, loss : -0.051908738911151886
traing iter: 241, test accuracy : 0.18052256107330322, loss : -0.05114242434501648
traing iter: 242, test accuracy : 0.18052256107330322, loss : -0.05037837475538254
traing iter: 243, test accuracy : 0.18052256107330322, loss : -0.04961588233709335
traing iter: 244, test accuracy : 0.18052256107330322, loss : -0.048856236040592194
traing iter: 245, test accuracy : 0.18052256107330322, loss : -0.04809919744729996
traing iter: 246, test accuracy : 0.18052256107330322, loss : -0.04734491556882858
traing iter: 247, test accuracy : 0.18052256107330322, loss : -0.046594373881816864
traing iter: 248, test accuracy : 0.18052256107330322, loss : -0.045847661793231964
traing iter: 249, test accuracy : 0.18052256107330322, loss : -0.04510471224784851
traing iter: 250, test accuracy : 0.18052256107330322, loss : -0.04436592012643814
traing iter: 251, test accuracy : 0.18052256107330322, loss : -0.04363199323415756
traing iter: 252, test accuracy : 0.18052256107330322, loss : -0.04290255159139633
traing iter: 253, test accuracy : 0.18052256107330322, loss : -0.04217810183763504
traing iter: 254, test accuracy : 0.18052256107330322, loss : -0.04145902022719383
traing iter: 255, test accuracy : 0.18052256107330322, loss : -0.040745168924331665
traing iter: 256, test accuracy : 0.18052256107330322, loss : -0.04003699868917465
traing iter: 257, test accuracy : 0.18052256107330322, loss : -0.03933443874120712
traing iter: 258, test accuracy : 0.18052256107330322, loss : -0.038638122379779816
traing iter: 259, test accuracy : 0.18052256107330322, loss : -0.03794777765870094
traing iter: 260, test accuracy : 0.18052256107330322, loss : -0.03726353123784065
traing iter: 261, test accuracy : 0.18052256107330322, loss : -0.036586061120033264
traing iter: 262, test accuracy : 0.18052256107330322, loss : -0.035915084183216095
traing iter: 263, test accuracy : 0.18052256107330322, loss : -0.035250455141067505
traing iter: 264, test accuracy : 0.18052256107330322, loss : -0.03459298610687256
traing iter: 265, test accuracy : 0.18052256107330322, loss : -0.03394236043095589
traing iter: 266, test accuracy : 0.18052256107330322, loss : -0.033298444002866745
traing iter: 267, test accuracy : 0.18052256107330322, loss : -0.03266187384724617
traing iter: 268, test accuracy : 0.18052256107330322, loss : -0.03203270584344864
traing iter: 269, test accuracy : 0.18052256107330322, loss : -0.031410589814186096
traing iter: 270, test accuracy : 0.18052256107330322, loss : -0.030795607715845108
traing iter: 271, test accuracy : 0.18052256107330322, loss : -0.030188273638486862
traing iter: 272, test accuracy : 0.18052256107330322, loss : -0.02958841621875763
traing iter: 273, test accuracy : 0.18052256107330322, loss : -0.028995685279369354
traing iter: 274, test accuracy : 0.18052256107330322, loss : -0.028410688042640686
traing iter: 275, test accuracy : 0.18052256107330322, loss : -0.027833130210638046
traing iter: 276, test accuracy : 0.18052256107330322, loss : -0.02726338803768158
traing iter: 277, test accuracy : 0.18052256107330322, loss : -0.02670123055577278
traing iter: 278, test accuracy : 0.18052256107330322, loss : -0.02614673227071762
traing iter: 279, test accuracy : 0.18052256107330322, loss : -0.025599848479032516
traing iter: 280, test accuracy : 0.18052256107330322, loss : -0.025060418993234634
traing iter: 281, test accuracy : 0.18052256107330322, loss : -0.024528808891773224
traing iter: 282, test accuracy : 0.18052256107330322, loss : -0.02400490641593933
traing iter: 283, test accuracy : 0.18052256107330322, loss : -0.023488491773605347
traing iter: 284, test accuracy : 0.18052256107330322, loss : -0.022979963570833206
traing iter: 285, test accuracy : 0.18052256107330322, loss : -0.02247888222336769
traing iter: 286, test accuracy : 0.18052256107330322, loss : -0.021985376253724098
traing iter: 287, test accuracy : 0.18052256107330322, loss : -0.02149956300854683
traing iter: 288, test accuracy : 0.18052256107330322, loss : -0.02102125622332096
traing iter: 289, test accuracy : 0.18052256107330322, loss : -0.02055053971707821
traing iter: 290, test accuracy : 0.18052256107330322, loss : -0.020087242126464844
traing iter: 291, test accuracy : 0.18052256107330322, loss : -0.01963147521018982
traing iter: 292, test accuracy : 0.18052256107330322, loss : -0.019183173775672913
traing iter: 293, test accuracy : 0.18052256107330322, loss : -0.018742157146334648
traing iter: 294, test accuracy : 0.18052256107330322, loss : -0.018308615311980247
traing iter: 295, test accuracy : 0.18052256107330322, loss : -0.017882268875837326
traing iter: 296, test accuracy : 0.18052256107330322, loss : -0.017463278025388718
traing iter: 297, test accuracy : 0.18052256107330322, loss : -0.017051348462700844
traing iter: 298, test accuracy : 0.18052256107330322, loss : -0.016646670177578926
traing iter: 299, test accuracy : 0.18052256107330322, loss : -0.01624903827905655

final test accuracy: 0.18052256107330322
best epoch's test accuracy: 0.36274176836013794
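One thing worth double-checking in the migration above: the pre-r1.0 positional order was (logits, labels), so softmax_cross_entropy_with_logits(pred_Y, Y) passed pred_Y as the logits. The r1.0 keyword form should therefore keep pred_Y as the logits and Y as the labels; swapping them would also explain the loss drifting negative while the accuracy stays flat. A sketch of the intended call, assuming pred_Y is the network output and Y the one-hot targets as in lstm.py:

# labels = one-hot ground-truth targets, logits = raw (unnormalized) network outputs
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=pred_Y)
) + l2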

Can I use any smartphone sensors to test it?

I just wanted to test the model with a real smartphone. The data was gathered from a Samsung Galaxy S2, but I have a Huawei P20. Is the dataset suitable for other types of phones?

How to set the number of epochs

I want to control the number of times the algorithm sees the entire dataset by setting the number of epochs.
Where can I set this in the code?
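For reference, the notebook has no explicit epoch counter; the number of passes over the training set is controlled indirectly through training_iters in the hyperparameter cell. A sketch of the relationship, using the notebook's own variable names:

# The training loop runs while step * batch_size <= training_iters, so the
# effective number of epochs is training_iters / training_data_count.
n_epochs = 300                                    # desired passes over the dataset
training_iters = training_data_count * n_epochs   # originally "Loop 300 times on the dataset"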

How to use this model

I was able to download and run this code, but could you please provide steps to use it for identifying human activity from a video or an image?

LSTM model gives a ValueError when predicting on X_test data

Hi, I need help solving a ValueError while running an LSTM. Everything seems to work fine on the training data, but the prediction generates fewer dimensions than expected.
My x_train shape is (846, 30, 3), my y_train shape is (846,), my x_test shape is (363, 30, 3), and my y_test shape is (363,).
yhat = modell.predict(test_X) generates shape (363, 100).

part of the code

reshape input to be 3D [samples, timesteps, features]

train_X = tr_xval.reshape((tr_xval.shape[0], 30, 3))
test_X = ts_xval.reshape((ts_xval.shape[0], 30, 3))
train_y=tr_yval
test_y=ts_yval
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)

design network

modell = Sequential()
modell.add(LSTM(200, activation='relu',input_shape=(train_X.shape[1], train_X.shape[2]),return_sequences=False,stateful=False))

#model.add(LSTM(neurons, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
modell.add(Dense(100, activation='relu'))
modell.compile(loss='mae', optimizer='adam',metrics=['accuracy'])
modell.summary()

fit network

history = modell.fit(train_X, train_y, epochs=200, batch_size=72, validation_data=(test_X, test_y), verbose=2, shuffle=False)

# works fine until here, but then
yhat = modell.predict(test_X)

from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
values=lnorm.values.astype('float32')
scaled = scaler.fit_transform(values)

invert scaling for forecast

inv_yhat = np.concatenate((yhat, test_X[:, -2:]), axis=1)
inv_yhat = scaler.inverse_transform(inv_yhat)
inv_yhat = inv_yhat[:,0]

invert scaling for actual

test_y = test_y.reshape((len(test_y), 1))
inv_y = np.concatenate((test_y, test_X[:, -2:]), axis=1)
inv_y = scaler.inverse_transform(inv_y)
inv_y = inv_y[:,0]


ValueError Traceback (most recent call last)
C:\Users\M55F1~1.AYU\AppData\Local\Temp/ipykernel_7572/1751946881.py in
5
6 # invert scaling for forecast
----> 7 inv_yhat = np.concatenate((yhat, test_X[:, -2:]), axis=1)
8 inv_yhat = scaler.inverse_transform(inv_yhat)
9 inv_yhat = inv_yhat[:,0]

<__array_function__ internals> in concatenate(*args, **kwargs)

ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 1 has 3 dimension(s)
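The dimensions in the traceback match the shapes reported above: yhat from the Dense(100) head is 2-D (363, 100), while test_X[:, -2:] is still 3-D (363, 2, 3). A sketch of one way to make both operands 2-D before concatenating (placeholder arrays with the reported shapes; whether concatenating these particular quantities is meaningful for the inverse scaling is a separate question):

import numpy as np

yhat = np.zeros((363, 100), dtype=np.float32)      # model output, 2-D
test_X = np.zeros((363, 30, 3), dtype=np.float32)  # test inputs, 3-D

# Flatten the last two time steps (2 x 3 features) into one axis so both
# arrays have the same number of dimensions before np.concatenate.
tail = test_X[:, -2:, :].reshape(test_X.shape[0], -1)  # (363, 6)
inv_yhat = np.concatenate((yhat, tail), axis=1)        # (363, 106)
print(inv_yhat.shape)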

Something about the coordinate of IMU

Hi, thank you for your great work!
I have a question about the IMU's coordinate frame in your dataset. When you collected the acc and gyro data, what was the IMU's coordinate frame? That is, which directions are the x-axis, y-axis, and z-axis, respectively?
Looking forward to your reply!
Thank you!

IndexError: index 3 is out of bounds for axis 0 with size 3

Hello,

I'm trying to run your code with a smaller Dataset. I have :
X_train.shape (61,100,75)
y_train.shape (61,1)
X_test.shape (27,100,75)
y_test.shape (27,1)

I changed the number of classes to 3, but I still get this error:

[screenshot of the IndexError traceback]

Do you know where my problem is?

Thanks a lot for your help and thank you for your code !!

Robin Fays

Realtime classification

Hi!

Thanks for posting this code. Do you think this approach would work for real-time classification of human activity?

How to keep the best performance model

Hi Guillaume,
Thanks for your great post; it helps me a lot.
When training the RNN, the model reaches very high performance in the middle of the training process, but after all the iterations the final performance is not the best. I am not sure whether this is normal. Is there any way to keep the best-performing model from the training process, instead of the final model after all the iterations?
Thanks!
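Not the repository's own code, but a common TF 1.x pattern is to keep a tf.train.Saver and write a checkpoint only when the evaluated test accuracy improves, then restore that checkpoint after training. A sketch meant to sit inside the notebook's training loop (the session, the evaluated accuracy, and the checkpoint path are assumptions):

import tensorflow as tf

def keep_if_best(sess, saver, acc, best_acc, path="./best_lstm_model.ckpt"):
    # Write a checkpoint only when the evaluated accuracy improves; return the new best.
    if acc > best_acc:
        saver.save(sess, path)
        return acc
    return best_acc

# Usage sketch inside the training loop:
#     saver = tf.train.Saver(max_to_keep=1)   # keep only the single best checkpoint
#     best_acc = 0.0
#     ... every display_iter, after computing the test accuracy `acc`:
#     best_acc = keep_if_best(sess, saver, acc, best_acc)
# After training, saver.restore(sess, "./best_lstm_model.ckpt") loads the best weights.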

Validation on Saved Checkpoint

I want to evaluate the validation data on saved checkpoints, but I am unable to load the saved checkpoints. Can you provide a validation script for running validation on saved checkpoints? Thank you.
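A minimal TF 1.x restore sketch, assuming the graph is rebuilt exactly as at training time (the same cells defining x, y, pred, accuracy, and one_hot) and that the checkpoint prefix is "./model.ckpt" (a hypothetical path):

import tensorflow as tf

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "./model.ckpt")  # no variable init needed; restore fills the values
    # val_accuracy = sess.run(accuracy, feed_dict={x: X_val, y: one_hot(y_val)})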

Upgrade to the latest TensorFlow?

The TensorFlow version used here is out of date; the latest TensorFlow has changed things in rnn and rnn_cell. So it would be great if someone could upgrade this project.

Creating chunks of segments

Hi,

Thank you for putting these immense resources in one place.

  1. May I ask why fixed-sized chunks/blocks of the training series are created?

  2. Suppose I choose to use a classical machine learning classifier, such as a random forest; would you also recommend creating these fixed-sized chunks first?

Regards.

Issue with building the neural network on the current version of TensorFlow

# Graph input/output
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

# Graph weights
weights = {
    'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])), # Hidden layer weights
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes], mean=1.0))
}
biases = {
    'hidden': tf.Variable(tf.random_normal([n_hidden])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

pred = LSTM_RNN(x, weights, biases)

# Loss, optimizer and evaluation
l2 = lambda_loss_amount * sum(
    tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables()
) # L2 loss prevents this overkill neural network to overfit the data
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred)) + l2 # Softmax loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Adam Optimizer

correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

The issue occurs for this part of the code.
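If "current version" means TensorFlow 2.x, tf.placeholder, tf.Session, and tf.contrib no longer exist, so this graph-mode code will not build as-is. One stopgap that often works is running the notebook through the v1 compatibility layer; a sketch (tf.contrib.rnn still needs replacing):

# Run the old graph-mode code through TF 2.x's v1 compatibility layer.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# tf.placeholder, tf.Session, tf.train.AdamOptimizer, etc. are available again,
# but tf.contrib is gone: use tf.compat.v1.nn.rnn_cell.BasicLSTMCell / MultiRNNCell
# and tf.compat.v1.nn.static_rnn in place of their tf.contrib.rnn counterparts.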

Query regarding data collection

Hi @guillaume-chevalier, your code is very useful. I wanted to know whether the data was collected from multiple users or from a single user. And if it was from multiple users, are all users present in both the train and test sets (overlapping), or do train and test contain different users?

Clean pipeline using Neuraxle

We should do something that looks like this to clean up the project using Neuraxle:

deep_learning_seq_classif_pipeline = EpochRepeater(Pipeline([
    TrainOnlyWrapper(DataShuffler(seed=42)),
    MiniBatchSequentialPipeline([
        ForEachDataInput(Pipeline([
            ToNumpy(np_dtype=np.float32),
            DefaultValuesFiller(0.0),
        ])),
        ClassificationLSTM(n_stacked=2, n_residual=3),
    ], batch_size=32),
]), epochs=200, fit_only=True)

where the ClassificationLSTM class contains the actual TensorFlow code.

Why change the dimensionality of the inputs from n_input to n_hidden?

I'm interested in this brilliant project but I have some doubts about it. Hopefully someone can help me a little, thank you very much!
I have understood num_units (config.n_hidden = 32) of BasicLSTMCell to be the dimension of the hidden cell state, while the input size (config.n_inputs = 9) is the dimension of one input vector, i.e. one set of 9-axis sensor data at one time step of a time series, in one sliding window, out of a batch.
Could somebody tell me if I'm getting this right?

Also, in

# Linear activation
_X = tf.nn.relu(tf.matmul(_X, config.W['hidden']) + config.biases['hidden'])
# Split data because rnn cell needs a list of inputs for the RNN inner loop
_X = tf.split(_X, config.n_steps, 0)
# new shape: n_steps * (batch_size, n_hidden)

Why transform the input size from 9 to 32? Is it a necessary step to make the model mathematically right? My guess is that it intentionally makes the input, output, and hidden state of a cell all the same dimension. If so, how do we benefit from it? As far as I know, the LSTM cell concatenates the input _X with the hidden state h_{t-1}, so with or without this ReLU transformation the only difference is in the weight mapping inside the LSTM cell: (9 + 32) -> 32 versus (32 + 32) -> 32.

I would like to know for sure how tf.contrib.rnn.static_rnn() combines "inputs + hidden state" in the forward propagation, i.e. what the dimensions of the weight matrix inside the LSTM are.

Or maybe it is just a "fully connected input layer" or something like that, which helps the overall learning process? I have actually tried removing it: the code still worked, but the accuracy dropped a little, so I'm even more confused. Please help me with this!
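To make the shape bookkeeping concrete, here is a NumPy-only walkthrough of that preprocessing step (zeros as placeholder data; batch_size, n_steps, n_input, and n_hidden are the notebook's defaults). The ReLU projection acts as a fully connected input layer applied per time step, lifting each 9-dimensional sensor vector to n_hidden = 32 dimensions before the LSTM sees it; the cell itself would accept 9-dimensional inputs just as well, so this is a modelling choice rather than a mathematical requirement:

import numpy as np

batch_size, n_steps, n_input, n_hidden = 1500, 128, 9, 32

# (batch_size, n_steps, n_input) -> (n_steps * batch_size, n_input),
# mirroring the tf.transpose + tf.reshape at the top of LSTM_RNN.
X = np.zeros((batch_size, n_steps, n_input), dtype=np.float32)
X = X.transpose(1, 0, 2).reshape(-1, n_input)

# ReLU "hidden" projection: each 9-dim sensor vector becomes a 32-dim vector,
# so every LSTM time step receives an n_hidden-sized input.
W_hidden = np.zeros((n_input, n_hidden), dtype=np.float32)
b_hidden = np.zeros((n_hidden,), dtype=np.float32)
hidden = np.maximum(X @ W_hidden + b_hidden, 0.0)   # (n_steps * batch_size, n_hidden)

# Split into the list of n_steps arrays of shape (batch_size, n_hidden) that
# static_rnn expects; inside the cell, the kernel then maps the concatenation
# [input, h_{t-1}] of size (32 + 32) to the 4 * n_hidden gate pre-activations.
hidden_steps = np.split(hidden, n_steps, axis=0)
print(len(hidden_steps), hidden_steps[0].shape)     # 128 (1500, 32)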
