
dlaicourse's Introduction

Hi there, welcome to my GitHub page 👋

  • 💬 Ask me about Artificial Intelligence or Google
  • 📫 How to reach me: [email protected]
  • 😄 Pronouns: he/him
  • ⚡ Fun fact: Father to Chris and Claudia Moroney

Learn more about what I do by visiting my website!

Laurence's GitHub stats

dlaicourse's People

Contributors

barnardb, ghchinoy, himalayanzephyr, italopontes, jakevdp, juan-coursera, lmoroney, markdaoust, sabhi-29, saksham291, utopiah


dlaicourse's Issues

Possible unintended switch from fashion_mnist to mnist in same notebook

In the notebook below,
https://github.com/lmoroney/dlaicourse/blob/master/Course%201%20-%20Part%204%20-%20Lesson%202%20-%20Notebook.ipynb
Exercise #1 is based on fashion_mnist, but Exercise #2 is based on mnist.
This may be unintended, since the author asks for a comparison after updating the network somewhat. Please re-check; Exercise #2 was probably intended to keep fashion_mnist as its dataset rather than switch to mnist.

Loss vs. Accuracy In Course 1 - Part 4 - Lesson 2, Exercise 8

In Course 1 - Part 4 - Lesson 2 - Notebook.ipynb, Exercise 8 suggests it will use an accuracy cutoff, and then includes the following code:

    if(logs.get('loss')<0.4):
      print("\nReached 60% accuracy so cancelling training!")

This seems to assume that loss and accuracy sum to 1, which is not the case. (Or am I very confused?)

To illustrate the problem, replace the second-to-last line

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

with

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

and rerun the code example.

For me, the accuracy is above 0.6 long before the loss drops below 0.4.
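
For comparison, a minimal sketch of a callback keyed on the reported accuracy rather than on loss (assuming the accuracy metric is enabled at compile time; the log key varies between 'acc' and 'accuracy' across TF versions):

    import tensorflow as tf

    class AccuracyCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs=None):
            logs = logs or {}
            # The key is 'acc' on older TF releases, 'accuracy' on newer ones.
            acc = logs.get('accuracy', logs.get('acc'))
            if acc is not None and acc > 0.6:
                print("\nReached 60% accuracy so cancelling training!")
                self.model.stop_training = True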

(There is some related confusion in the text just before the exercises, where following a result showing a loss of 0.3490932399511337 and accuracy of 0.8754 there is text reading "For me, that returned a loss of about .8838, which means it was about 88% accurate." I raised PR #7 to address this.)

tfjs@latest:2 Uncaught (in promise) Error: Argument 'b' passed to 'mul' must be a Tensor or TensorLike, but got 'null'

I am currently taking the Browser-based Models with TensorFlow.js course and am stuck on the Week 1 exercise.

Here's the code:

<html>
<head></head>
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@latest"></script>
    <script lang="js">
        async function run(){
            const trainingUrl = 'wdbc-train.csv';
            const trainingData = tf.data.csv(trainingUrl, {
                columnConfigs: {
                    diagnosis: {
                        isLabel: true
                    }
                }
            });

            const convertedTrainingData = 
                trainingData.map(({xs, ys}) => {
                      // console.log(trainingData);
                      return{ xs: Object.values(xs), ys: Object.values(ys)};
                  }).batch(10);
                  
            // const testingUrl = 'wdbc-test.csv';
            
            // const testingData = tf.data.csv(testingUrl, {
            //     columnConfigs: {
            //         diagnosis: {
            //             isLabel: true
            //         }
            //     }
                
            // });

            // const convertedTestingData = 
            //     testingData.map(({xs, ys}) => {
            //           return{ xs: Object.values(xs), ys: Object.values(ys)};
            //       }).batch(10);
            
            const numOfFeatures = 30;
            // console.log(numOfFeatures);
            
            const model = tf.sequential();
            model.add(tf.layers.dense({inputShape: [numOfFeatures], activation: "relu", units: 20}))
            model.add(tf.layers.dense({activation: "relu", units: 20}))
            model.add(tf.layers.dense({activation: "relu", units: 10}))
            model.add(tf.layers.dense({activation: "relu", units: 5}))
            model.add(tf.layers.dense({activation: "sigmoid", units: 1}));
            
            model.compile({loss: "binaryCrossentropy", optimizer: tf.train.rmsprop(), metrics: ["accuracy"]});

            model.summary();

            //console.log(convertedTrainingData);

            await model.fitDataset(convertedTrainingData, 
                             {epochs:100,
                              callbacks:{
                                  onEpochEnd: async(epoch, logs) =>{
                                      console.log("Epoch: " + epoch + " Loss: " + logs.loss);
                                  }
                              }});
            
            // await model.fitDataset(convertedTrainingData, 
            //                  {epochs:100,
            //                   validationData: convertedTestingData,
            //                   callbacks:{
            //                       onEpochEnd: async(epoch, logs) =>{
            //                           console.log("Epoch: " + epoch + " Loss: " + logs.loss + " Accuracy: " + logs.acc);
            //                       }
            //                   }});
            // await model.save('downloads://my_model');
            
            
        }
        run();
    </script>
<body>
</body>
</html>

And here's the console log:

tfjs@latest:2 _________________________________________________________________
tfjs@latest:2 Layer (type)                 Output shape              Param #   
tfjs@latest:2 =================================================================
tfjs@latest:2 dense_Dense1 (Dense)         [null,20]                 620       
tfjs@latest:2 _________________________________________________________________
tfjs@latest:2 dense_Dense2 (Dense)         [null,20]                 420       
tfjs@latest:2 _________________________________________________________________
tfjs@latest:2 dense_Dense3 (Dense)         [null,10]                 210       
tfjs@latest:2 _________________________________________________________________
tfjs@latest:2 dense_Dense4 (Dense)         [null,5]                  55        
tfjs@latest:2 _________________________________________________________________
tfjs@latest:2 dense_Dense5 (Dense)         [null,1]                  6         
tfjs@latest:2 =================================================================
tfjs@latest:2 Total params: 1311
tfjs@latest:2 Trainable params: 1311
tfjs@latest:2 Non-trainable params: 0
tfjs@latest:2 _________________________________________________________________
tfjs@latest:2 Uncaught (in promise) Error: Argument 'b' passed to 'mul' must be a Tensor or TensorLike, but got 'null'
    at Ke (tfjs@latest:2)
    at mul_ (tfjs@latest:2)
    at Object.mul (tfjs@latest:2)
    at t.mul (tfjs@latest:2)
    at tfjs@latest:2
    at tfjs@latest:2
    at t.scopedRun (tfjs@latest:2)
    at t.tidy (tfjs@latest:2)
    at We (tfjs@latest:2)
    at tfjs@latest:2
Ke @ tfjs@latest:2
mul_ @ tfjs@latest:2
mul @ tfjs@latest:2
t.mul @ tfjs@latest:2
(anonymous) @ tfjs@latest:2
(anonymous) @ tfjs@latest:2
t.scopedRun @ tfjs@latest:2
t.tidy @ tfjs@latest:2
We @ tfjs@latest:2
(anonymous) @ tfjs@latest:2
e.applyGradients @ tfjs@latest:2
e.minimize @ tfjs@latest:2
(anonymous) @ tfjs@latest:2
(anonymous) @ tfjs@latest:2
(anonymous) @ tfjs@latest:2
(anonymous) @ tfjs@latest:2
o @ tfjs@latest:2
async function (async)
run @ wdbc_exercise.html:53
(anonymous) @ wdbc_exercise.html:73

I am absolutely running out of options to debug this. Help would be appreciated. Thanks!
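
One possible culprit, offered as an assumption rather than a confirmed diagnosis: in TensorFlow.js, tf.train.rmsprop() takes the learning rate as its first argument, and constructing the optimizer without one can surface a null multiplication inside applyGradients, much like the trace above. A minimal sketch:

    // Sketch: pass an explicit learning rate to the optimizer.
    model.compile({
        loss: "binaryCrossentropy",
        optimizer: tf.train.rmsprop(0.005),
        metrics: ["accuracy"]
    });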

Course 3 - Week 2 - Lesson 3 model.fit raises ValueError

During model fitting I get the error ValueError: Shapes () and (None, 1) must have the same rank, and further errors are raised while it is being handled; I reproduce the stack trace below. However, if I peek into the next lesson and do the following

BATCH_SIZE = 64
train_data = train_data.padded_batch(BATCH_SIZE, train_data.output_shapes)
test_data = test_data.padded_batch(BATCH_SIZE, test_data.output_shapes)

before the fit, then it works. I understand the issue is that the inputs do not all have the same shape, but I do not understand why this is not a problem in the course video, nor what the best fix is.

Epoch 1/10
1/Unknown - 0s 61ms/step


ValueError Traceback (most recent call last)
~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_shape.py in merge_with(self, other)
927 try:
--> 928 self.assert_same_rank(other)
929 new_dims = []

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_shape.py in assert_same_rank(self, other)
982 raise ValueError("Shapes %s and %s must have the same rank" %
--> 983 (self, other))
984

ValueError: Shapes () and (None, 1) must have the same rank

During handling of the above exception, another exception occurred:

ValueError Traceback (most recent call last)
~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/nn_impl.py in sigmoid_cross_entropy_with_logits(_sentinel, labels, logits, name)
167 try:
--> 168 labels.get_shape().merge_with(logits.get_shape())
169 except ValueError:

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_shape.py in merge_with(self, other)
933 except ValueError:
--> 934 raise ValueError("Shapes %s and %s are not compatible" % (self, other))
935

ValueError: Shapes () and (None, 1) are not compatible

During handling of the above exception, another exception occurred:

ValueError Traceback (most recent call last)
in
3 model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
4
----> 5 history = model.fit(train_data, epochs=num_epochs, validation_data=test_data)

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
732 max_queue_size=max_queue_size,
733 workers=workers,
--> 734 use_multiprocessing=use_multiprocessing)
735
736 def evaluate(self,

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
322 mode=ModeKeys.TRAIN,
323 training_context=training_context,
--> 324 total_epochs=epochs)
325 cbks.make_logs(model, epoch_logs, training_result, ModeKeys.TRAIN)
326

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in run_one_epoch(model, iterator, execution_function, dataset_size, batch_size, strategy, steps_per_epoch, num_samples, mode, training_context, total_epochs)
121 step=step, mode=mode, size=current_batch_size) as batch_logs:
122 try:
--> 123 batch_outs = execution_function(iterator)
124 except (StopIteration, errors.OutOfRangeError):
125 # TODO(kaftan): File bug about tf function and errors.OutOfRangeError?

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in execution_function(input_fn)
84 # numpy translates Tensors to values in Eager mode.
85 return nest.map_structure(_non_none_constant_value,
---> 86 distributed_function(input_fn))
87
88 return execution_function

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in call(self, *args, **kwds)
425 # This is the first call of call, so we have to initialize.
426 initializer_map = object_identity.ObjectIdentityDictionary()
--> 427 self._initialize(args, kwds, add_initializers_to=initializer_map)
428 if self._created_variables:
429 try:

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
368 self._concrete_stateful_fn = (
369 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 370 *args, **kwds))
371
372 def invalid_creator_scope(*unused_args, **unused_kwds):

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
1845 if self.input_signature:
1846 args, kwargs = None, None
-> 1847 graph_function, _, _ = self._maybe_define_function(args, kwargs)
1848 return graph_function
1849

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _maybe_define_function(self, args, kwargs)
2145 graph_function = self._function_cache.primary.get(cache_key, None)
2146 if graph_function is None:
-> 2147 graph_function = self._create_graph_function(args, kwargs)
2148 self._function_cache.primary[cache_key] = graph_function
2149 return graph_function, args, kwargs

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
2036 arg_names=arg_names,
2037 override_flat_arg_shapes=override_flat_arg_shapes,
-> 2038 capture_by_value=self._capture_by_value),
2039 self._function_attributes,
2040 # Tell the ConcreteFunction to clean up its graph once it goes out of

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
913 converted_func)
914
--> 915 func_outputs = python_func(*func_args, **func_kwargs)
916
917 # invariant: func_outputs contains only Tensors, CompositeTensors,

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in wrapped_fn(*args, **kwds)
318 # wrapped allows AutoGraph to swap in a converted function. We give
319 # the function a weak reference to itself to avoid a reference cycle.
--> 320 return weak_wrapped_fn().wrapped(*args, **kwds)
321 weak_wrapped_fn = weakref.ref(wrapped_fn)
322

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in distributed_function(input_iterator)
71 strategy = distribution_strategy_context.get_strategy()
72 outputs = strategy.experimental_run_v2(
---> 73 per_replica_function, args=(model, x, y, sample_weights))
74 # Out of PerReplica outputs reduce or pick values to return.
75 all_outputs = dist_utils.unwrap_output_dict(

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/distribute/distribute_lib.py in experimental_run_v2(self, fn, args, kwargs)
758 fn = autograph.tf_convert(fn, ag_ctx.control_status_ctx(),
759 convert_by_default=False)
--> 760 return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
761
762 def reduce(self, reduce_op, value, axis):

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/distribute/distribute_lib.py in call_for_each_replica(self, fn, args, kwargs)
1785 kwargs = {}
1786 with self._container_strategy().scope():
-> 1787 return self._call_for_each_replica(fn, args, kwargs)
1788
1789 def _call_for_each_replica(self, fn, args, kwargs):

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/distribute/distribute_lib.py in _call_for_each_replica(self, fn, args, kwargs)
2130 self._container_strategy(),
2131 replica_id_in_sync_group=constant_op.constant(0, dtypes.int32)):
-> 2132 return fn(*args, **kwargs)
2133
2134 def _reduce_to(self, reduce_op, value, destinations):

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
290 def wrapper(*args, **kwargs):
291 with ag_ctx.ControlStatusCtx(status=ag_ctx.Status.DISABLED):
--> 292 return func(*args, **kwargs)
293
294 if inspect.isfunction(func) or inspect.ismethod(func):

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in train_on_batch(model, x, y, sample_weight, class_weight, reset_metrics)
262 y,
263 sample_weights=sample_weights,
--> 264 output_loss_metrics=model._output_loss_metrics)
265
266 if reset_metrics:

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_eager.py in train_on_batch(model, inputs, targets, sample_weights, output_loss_metrics)
309 sample_weights=sample_weights,
310 training=True,
--> 311 output_loss_metrics=output_loss_metrics))
312 if not isinstance(outs, list):
313 outs = [outs]

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_eager.py in _process_single_batch(model, inputs, targets, output_loss_metrics, sample_weights, training)
250 output_loss_metrics=output_loss_metrics,
251 sample_weights=sample_weights,
--> 252 training=training))
253 if total_loss is None:
254 raise ValueError('The model cannot be run '

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_eager.py in _model_loss(model, inputs, targets, output_loss_metrics, sample_weights, training)
164
165 if hasattr(loss_fn, 'reduction'):
--> 166 per_sample_losses = loss_fn.call(targets[i], outs[i])
167 weighted_losses = losses_utils.compute_weighted_loss(
168 per_sample_losses,

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/losses.py in call(self, y_true, y_pred)
214 Loss values per sample.
215 """
--> 216 return self.fn(y_true, y_pred, **self._fn_kwargs)
217
218 def get_config(self):

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/losses.py in binary_crossentropy(y_true, y_pred, from_logits, label_smoothing)
987 _smooth_labels, lambda: y_true)
988 return K.mean(
--> 989 K.binary_crossentropy(y_true, y_pred, from_logits=from_logits), axis=-1)
990
991

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/backend.py in binary_crossentropy(target, output, from_logits)
4471 assert len(output.op.inputs) == 1
4472 output = output.op.inputs[0]
-> 4473 return nn.sigmoid_cross_entropy_with_logits(labels=target, logits=output)
4474
4475

~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/nn_impl.py in sigmoid_cross_entropy_with_logits(_sentinel, labels, logits, name)
169 except ValueError:
170 raise ValueError("logits and labels must have the same shape (%s vs %s)" %
--> 171 (logits.get_shape(), labels.get_shape()))
172
173 # The logistic loss formula from above is

ValueError: logits and labels must have the same shape ((None, 1) vs ())
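
For what it's worth, a sketch of the same batching on more recent TF 2.x releases, where Dataset.output_shapes no longer exists (assuming TF 2.2 or later, where padded_shapes is optional):

    BATCH_SIZE = 64
    # padded_shapes is inferred: each dimension pads to the longest
    # element in the batch.
    train_data = train_data.padded_batch(BATCH_SIZE)
    test_data = test_data.padded_batch(BATCH_SIZE)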

Explanation seems wrong in Course 1 - Part 4 - Lesson 2 Exercise 1

For Exercise 1:

For the 7, the probability was .999+, i.e. the neural network is telling us that it's almost certainly a 7.

(screenshot of the model's output)

I know the probability result will be slightly different each time, but the 10th element (index 9) is the biggest. So shouldn't this be changed to the following? Thanks

For the 9, the probability was .999+, i.e. the neural network is telling us that it's almost certainly a 9.

stopword appears in word_index

for row in reader:
    labels.append(row[0])
    sentence = row[1]
    for word in stopwords:
        token = " " + word + " "
        sentence = sentence.replace(token, " ")

This code is buggy: a stopword at the end of a sentence is never removed, because the token " " + word + " " requires a trailing space that isn't there.

Checking word_index confirms it; "to", for example, appears in the index. Please fix the issue.
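
A minimal sketch of one possible fix, assuming the intent is to drop stopwords regardless of their position: split on whitespace instead of matching space-padded substrings.

    words = sentence.split()
    sentence = " ".join(word for word in words if word not in stopwords)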

TF 2.0 compatibility, course 1 - part 4 - lesson 2

I have reworked the notebook with TF 2.0.0-alpha0.

model.compile(optimizer = tf.train.AdamOptimizer(),
              loss = 'sparse_categorical_crossentropy',
              metrics=['accuracy'])

throws

AttributeError: module 'tensorflow._api.v2.train' has no attribute 'AdamOptimizer'

model.compile(optimizer = tf.optimizers.Adam(),
              loss = 'sparse_categorical_crossentropy',
              metrics=['accuracy'])

seems to be one of the new ways of passing an optimizer.

predict

Why is this giving an error when I try to predict:

print(model.predict(x_test[5]))

ValueError: Error when checking input: expected flatten_3_input to have 3 dimensions, but got array with shape (28, 28)
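
A sketch of a likely fix: Keras expects a batch dimension, so a single 28x28 image needs to be reshaped to (1, 28, 28) before calling predict.

    import numpy as np

    # Add a batch dimension so the input has shape (1, 28, 28).
    print(model.predict(np.expand_dims(x_test[5], axis=0)))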

Course 2 exercise 7 (missing question and a few more problems)

First off, thank you for this course and the free exercises; I have learned a lot.

Exercise 7 links to Answer 7, so the "exercise" notebook already includes the answers.
In code block 13, the following code:

train_dir = '/tmp/training'
validation_dir = '/tmp/validation'

should be moved to the top of code block 12 so that block 12 can run.

Course 4 Week 4 Exercise Answer - windowed_dataset

The cell that defines the function windowed_dataset in S+P Week 4 Exercise Answer is as follows:

def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
    series = tf.expand_dims(series, axis=-1)
    ds = tf.data.Dataset.from_tensor_slices(series)
    ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window_size + 1))
    ds = ds.shuffle(shuffle_buffer)
    ds = ds.map(lambda w: (w[:-1], w[1:]))
    return ds.batch(batch_size).prefetch(1)

The second-to-last line,

    ds = ds.map(lambda w: (w[:-1], w[1:]))

seems wrong. Should it not be the following?

    ds = ds.map(lambda w: (w[:-1], w[-1]))

As it stands, if the window is

1 2 3 4 5 6

then we're getting a tuple:

([1 2 3 4 5],[2 3 4 5 6])

However, it appears there's another issue that lets the whole thing run: the data isn't flattened after it's convolved. To my mind, that means the network still has to figure out the 6, but it also gets to claim to have 'predicted' 2-5. Wouldn't that falsely inflate accuracy?

Please add a license to this repo

Thanks for sharing these notebooks with us!

Could you please add an explicit LICENSE file to the repo so that it's clear under what terms the content is provided, and under what terms user contributions are licensed?

Per GitHub docs on licensing:

[...] without a license, the default copyright laws apply, meaning that you retain all rights to your source code and no one may reproduce, distribute, or create derivative works from your work. If you're creating an open source project, we strongly encourage you to include an open source license.

Thanks!

TFLite_Week2_Exercise

I am facing the following issue whenever I run this exercise from the TensorFlow Deployment course:

splits = tfds.Split.ALL.subsplit(weighted=(80, 10, 10))

splits, info = tfds.load('rock_paper_scissors', with_info=True, as_supervised=True, split = splits)

(train_examples, validation_examples, test_examples) = splits

num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes

AssertionError: Unrecognized instruction format: NamedSplitAll()(tfds.percent[0:80])

Does anyone know what the problem is?
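
For anyone hitting this: Split.ALL.subsplit was removed in later tensorflow_datasets releases. A sketch using the newer slicing API instead (an assumption on my part that an 80/10/10 partition of the train split is acceptable; adjust the split strings to taste):

    import tensorflow_datasets as tfds

    splits = ['train[:80%]', 'train[80%:90%]', 'train[90%:]']
    (train_examples, validation_examples, test_examples), info = tfds.load(
        'rock_paper_scissors', with_info=True, as_supervised=True, split=splits)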

Typo in Course 1 - Part 6 - Lesson 2

Just a small typo in Course 1 - Part 6 - Lesson 2, in the Visualizing the Convolutions and Pooling section:

This code will show us the convolutions graphically. The print (test_labels[;100]) shows ...

test_labels[;100] should be test_labels[:100]

Compilation error when submitted (Week1-Housing Prices)

When I run the code in Jupyter Notebook, I get a prediction of 4.008...
But when I submit, the grader's output shows the error below:

Can't compile the student's code. invalid syntax (student_solution.py, line 22)

I'm clueless. Can you please look into it?

One mistake in Exercise 5 - Answer.ipynb

For the Exercise 5 in Course 2, there is one mistake in the split_data function.

    ...
    training_set = shuffled_set[0:training_length]
    testing_set = shuffled_set[:testing_length]
    ...
  • As we can see from the code above, the training set and the testing set overlap on the same leading slice of the data, which means the testing data are no longer unseen.
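
A sketch of the presumably intended split, taking the testing set from the tail of the shuffled data so the two sets are disjoint:

    training_set = shuffled_set[:training_length]
    testing_set = shuffled_set[-testing_length:]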

License/readme requested for guidance on forking/creating standalone repos

Hi Laurence,

I like to store my course workbooks on my GitHub for reference, accessibility, and backup. Now that I've forked your repo and have solved code sitting in it, I want to disconnect the fork and create a standalone repo as per GitHub best practices: https://hisaac.net/2016/11/12/why-commits-to-forks-on-github-dont-count-toward-contributions/

However, I don't know how you would like users like me to go about this (esp. how to credit you) or if you want to enable this behaviour. Could you please provide guidance via a license or a readme?

Thanks very much!

Course 4 Week 3 Lesson 2: wrong learning rate value

As stated in the videos, and as the plot of loss vs. learning rate shows, the optimal learning rate lies between 1e-6 and 1e-5.

However, the notebook sets it to 5e-5, which is larger than 1e-5.

Instead, it should be set to a value within the range [1e-6, 1e-5], e.g. 5e-6.

Result of "Running the Model" in Course 1 - Part 8 - Lesson 3 - Notebook

Mr. Moroney, I always get a lot of help from your great courses. Thank you. ^^

The following is the result of the "Running the Model" section of Course 1 - Part 8 - Lesson 3, which I modified and executed.
I modified the source code a little because I wanted to check the results on my local PC.
The execution result is MISS CLASSIFICATION RATIO: 14.0625.
This is the rate at which a horse image is judged to be a human, or a human image is judged to be a horse.
I think this misclassification rate is higher than expected.

How do I get this ratio close to zero percent?

------ modified source code: "Running the Model" in Course 1 - Part 8 - Lesson 3 -------

import os
import numpy as np
#from google.colab import files
from tensorflow.keras.preprocessing import image

#uploaded = files.upload()
#for fn in uploaded.keys():

valid_image_list = [{'path': validation_horse_dir, 'images': os.listdir(validation_horse_dir)},
                    {'path': validation_human_dir, 'images': os.listdir(validation_human_dir)}]

total_image_cnt = len(os.listdir(validation_horse_dir)) + len(os.listdir(validation_human_dir))
miss_classification_count = 0
for valid_image in valid_image_list:
    for image_name in valid_image['images']:
        # predicting images
        path = '{}/{}'.format(valid_image['path'], image_name)
        #print("path:{}".format(path))
        img = image.load_img(path, target_size=(300, 300))
        x = image.img_to_array(img)
        x = np.expand_dims(x, axis=0)

        images = np.vstack([x])
        classes = model.predict(images, batch_size=10)
        #print(classes[0])
        if classes[0] > 0.5:
            if image_name.find("horse") > -1:
                miss_classification_count += 1
                res = "WRONG"
            else:
                res = "CORRECT"
            print("{} {} is a human : {}".format(classes[0], path, res))
        else:
            if image_name.find("human") > -1:
                miss_classification_count += 1
                res = "WRONG"
            else:
                res = "CORRECT"
            print("{} {} is a horse : {}".format(classes[0], path, res))

print("MISS CLASSIFICATION RATIO : {}".format((miss_classification_count / total_image_cnt) * 100))

Course 1 (TensorFlow.js) - Week 1 - TF Deployment

There is a function defined in FirstHTML.html in the Examples as follows:

async function doTraining(model) {
    const history = await model.fit(xs, ys, {
        epochs: 500,
        callbacks: {
            onEpochEnd: async (epoch, logs) => {
                console.log("Epoch:" + epoch + " Loss:" + logs.loss);
            }
        }
    });
}

The issue is that xs and ys are globally defined variables, so this function is not self-contained as written. Shouldn't xs and ys be passed in as function arguments rather than read from globals?
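
A minimal sketch of the suggested signature (assuming the caller holds xs and ys):

    async function doTraining(model, xs, ys) {
        return model.fit(xs, ys, {
            epochs: 500,
            callbacks: {
                onEpochEnd: async (epoch, logs) => {
                    console.log("Epoch:" + epoch + " Loss:" + logs.loss);
                }
            }
        });
    }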

Incorrect Convolution Multiplication in Course 1 - Part 6 - Lesson 3 - Notebook.ipynb

The convolution multiplication statement looks like this:

convolution = convolution + (i[x, y-1] * filter[0][1])

i[x, y-1] is the pixel horizontally to the left of the center:
(screenshot of the pixel grid)

The corresponding filter value should be filter[1][0], which is horizontally left of the center:
(screenshot of the filter grid)

In light of this, that code block should be changed to the following:

convolution = 0.0
convolution = convolution + (i[x - 1][y - 1] * filter[0][0])
convolution = convolution + (i[x - 1][y    ] * filter[0][1])
convolution = convolution + (i[x - 1][y + 1] * filter[0][2])
convolution = convolution + (i[x    ][y - 1] * filter[1][0])
convolution = convolution + (i[x    ][y    ] * filter[1][1])
convolution = convolution + (i[x    ][y + 1] * filter[1][2])
convolution = convolution + (i[x + 1][y - 1] * filter[2][0])
convolution = convolution + (i[x + 1][y    ] * filter[2][1])
convolution = convolution + (i[x + 1][y + 1] * filter[2][2])
convolution = convolution * weight

Tokenize new sentences will change the word_index because of the word frequency

I ran into a new case: I tokenize one set of sentences and then tokenize a second set, and the word_index changes because the word frequencies change. For instance:
sentences = [
'i love my dog',
'I, love my cat',
'You love my dog!',
'Do you think my dog is amazing?',
]
tokenizer = Tokenizer(num_words = 100, oov_token="<OOV>")
tokenizer.fit_on_texts(sentences)
word_index = tokenizer.word_index
print(word_index)
================================
{'<OOV>': 1, 'my': 2, 'love': 3, 'dog': 4, 'i': 5, 'you': 6, 'cat': 7, 'do': 8, 'think': 9, 'is': 10, 'amazing': 11}
================================

sen = [
'there is a big car',
'there is a big cat',
'there is a big dog',
]
tokenizer.fit_on_texts(sen)
word_index = tokenizer.word_index
================================
{'<OOV>': 1, 'my': 2, 'dog': 3, 'is': 4, 'love': 5, 'there': 6, 'a': 7, 'big': 8, 'i': 9, 'cat': 10, 'you': 11, 'do': 12, 'think': 13, 'amazing': 14, 'car': 15}
================================

This raises a concern: to train a model on updated news I need to tokenize the new sentences, and because the word_index changes I would have to re-train on all the old news with the new word_index. Is there an easier way to keep the old word_index and only add new indexes for new words?
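
One workaround, sketched under the assumption that mapping unseen words to <OOV> is acceptable: skip the second fit_on_texts call so the original vocabulary stays frozen.

    # Reuse the existing word_index; words the tokenizer has never seen
    # map to the <OOV> index instead of shifting existing ids.
    sequences = tokenizer.texts_to_sequences(sen)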

Update for Adam optimizer compatibility on TF 2.0

When running Course 1 - Part 4 - Lesson 2 - Notebook with TensorFlow 2.0, I ran into an issue with the compile method when setting up the optimizer.

I think the appropriate fix is to pass the string 'adam' instead of tf.train.Adam().

Error I ran into:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-17-3863a6d90e8a> in <module>
----> 1 model.compile(optimizer = tf.train.Adam(),
      2               loss = 'sparse_categorical_crossentropy',
      3               metrics=['accuracy'])
      4 
      5 model.fit(training_images, training_labels, epochs=5)

AttributeError: module 'tensorflow_core._api.v2.train' has no attribute 'Adam'
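
For reference, a sketch of the string-based form that works on TF 2.0 (tf.keras.optimizers.Adam() is the equivalent object form):

    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])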

@lmoroney , I'll open a PR for this :)

Error in Course 1 - Part 8 - Lesson 2 - Notebook.ipynb

successive_outputs starts at index 1, i.e. the max-pooling layer, while
successive_feature_maps starts at index 0, i.e. the first convolution layer.

I guess the author might have wanted to skip an input layer, but in fact the first layer is exactly the convolution layer, and it should be shown.

-successive_outputs = [layer.output for layer in model.layers[1:]]
+successive_outputs = [layer.output for layer in model.layers]

Ambiguous instructions, guess work to pass (House Prices - Week 1 assignment)

Great course so far - thank you! - but:

It is unclear from the instructions for House Prices whether the output must be in units of hundreds of thousands, or not. Please be crystal clear!

To get a pass on my fifth attempt(!), I had to (by trial and error):

  1. Comment out the last two lines of JavaScript
  2. Add "hundreds of thousands" to the print statement

Surely, in this day and age, the function and prediction should be evaluated programmatically? I.e. unit-tested by the "grader" and the prediction assessed as being within some tolerance.

Type Error: Python Notebook Course 1 - Part 2 - Lesson 2

A TypeError is raised while defining the model. The error:
TypeError: The added layer must be an instance of class Layer.

Code:
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])

Python: 3.6.5
Numpy: 1.16.1
Tensorflow: 1.12.0
Keras: 2.2.4
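
A sketch of a likely fix: use tf.keras layers consistently rather than mixing standalone keras layers into a tf.keras model, a combination known to raise this error on these versions.

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(units=1, input_shape=[1])
    ])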

NLP Course - Week 3 Exercise

I ran this code with tf == '2.2.0-rc1' and it raises a ValueError.
Would you help me solve this, please?
ValueError Traceback (most recent call last)
in ()
11
12 num_epochs = 50
---> 13 history = model.fit(training_sequences, training_labels, epochs=num_epochs, validation_data=(test_sequences, test_labels), verbose=2)
14
15 print("Training Complete")

3 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
63 def _method_wrapper(self, *args, **kwargs):
64 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
---> 65 return method(self, *args, **kwargs)
66
67 # Running inside run_distribute_coordinator already.

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
748 workers=workers,
749 use_multiprocessing=use_multiprocessing,
--> 750 model=self)
751
752 # Container that configures and calls tf.keras.Callbacks.

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in init(self, x, y, sample_weight, batch_size, steps_per_epoch, initial_epoch, epochs, shuffle, class_weight, max_queue_size, workers, use_multiprocessing, model)
1094 self._insufficient_data = False
1095
-> 1096 adapter_cls = select_data_adapter(x, y)
1097 self._adapter = adapter_cls(
1098 x,

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in select_data_adapter(x, y)
958 "Failed to find data adapter that can handle "
959 "input: {}, {}".format(
--> 960 _type_name(x), _type_name(y)))
961 elif len(adapter_cls) > 1:
962 raise RuntimeError(

ValueError: Failed to find data adapter that can handle input: <class 'numpy.ndarray'>, (<class 'list'> containing values of types {"<class 'int'>"})
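
A sketch of a likely fix: on TF 2.x, fit() cannot ingest a plain Python list of labels, so convert them to NumPy arrays first.

    import numpy as np

    training_labels = np.array(training_labels)
    test_labels = np.array(test_labels)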

Error uploading week 1 assignment submissions

I'm getting the error:
Sorry, your submission was incorrect. Please try again. [Errno 2] No such file or directory: '/shared/submission/submission.zip'

I even tried renaming my zip to submission.zip and uploading again, but I still get the same error!

Exercise2-Answer.ipynb: it should get acc instead of accuracy

class myCallback(tf.keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs={}):
    if(logs.get('accuracy')>0.6):
      print("\nReached 60% accuracy so cancelling training!")
      self.model.stop_training = True

This does not work. logs.get('accuracy') always returns None.
It should be

class myCallback(tf.keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs={}):
    if(logs.get('acc')>0.6):
      print("\nReached 60% accuracy so cancelling training!")
      self.model.stop_training = True

TF 2.0 compatibility - callbacks

The following code does not work with TF 2.0:

accuracy = logs.get('acc')

instead, the following code should be used.

accuracy = logs.get('accuracy')

Complete working example code:

def on_epoch_end(self, epoch, logs={}):
        acc_threshold = 0.99
        accuracy = logs.get('accuracy')
        if accuracy > acc_threshold:
            print('\nReached {0}% accuracy so cancelling training!\n'.format(acc_threshold * 100))
            self.model.stop_training = True

small syntax bug

There is a small bug in the Colab notebook: plot_graphs(history, "accuracy") raises a KeyError; it should be plot_graphs(history, "acc").

how to save and load models that was trained using fit_generator

Hello,
I have been trying to reproduce the code (human-or-horse classification) from the TensorFlow in Practice course (Course 1 - Part 8 - Lesson 4 - Notebook.ipynb) locally on my PC.
It works so far, but I save the model like this:

history = model.fit_generator(
    train_generator,
    steps_per_epoch=8,
    epochs=15,
    verbose=1,
    validation_data=validation_generator,
    validation_steps=8)

try:
    model.save_weights('human_horse1.h5')
except Exception as e:
    print("error in saving model: ", e)

and load the saved model with:
new_model = tf.keras.models.load_model('human_horse3.h5')

If I try to use the saved model to predict with this code:

new_model.predict('./Users/macbook/Desktop/image_classifier/human01-04.png')

I normally get this error:
AttributeError: 'str' object has no attribute 'shape'

How do I properly save and load models, and use the model to predict locally on my PC?
Kind regards.
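
A sketch of one consistent save/load/predict round trip (assumptions on my part: the whole model should be persisted, the same file name is used for saving and loading, and the model was trained on 300x300 inputs rescaled by 1/255):

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.preprocessing import image

    # save_weights() stores only weights; save() stores architecture plus
    # weights, which is what load_model() expects.
    model.save('human_horse.h5')
    new_model = tf.keras.models.load_model('human_horse.h5')

    # predict() takes an array, not a file path.
    img = image.load_img('human01-04.png', target_size=(300, 300))
    x = np.expand_dims(image.img_to_array(img), axis=0) / 255.0
    print(new_model.predict(x))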

Many errors in Course 1 - Part 4 - Lesson 2 - Notebook.ipynb

Hello,

There are a large number of errors in Course 1 - Part 4 - Lesson 2 - Notebook.ipynb; it needs to be reviewed and updated. Or perhaps the wrong version was uploaded?

  • Cell output should not be shown when initially opening the notebook, it should be cleared.
  • The notebook begins by saying that it will cover Fashion MNIST. Exercise 1 does, but the remaining exercises use MNIST with no explanation.
  • "For me, that returned a loss of about .8838, which means it was about 88% accurate." -> confusion of loss vs. accuracy, which are two different metrics and will confuse new learners.
  • Exercise 1 erroneously has a set of answers that reference MNIST: "For the 7, the probability was .999+, i.e. the neural network is telling us that it's almost certainly a 7." -> makes no sense as the data is Fashion MNIST. (Also noted in #12)
  • Exercise 2: "Experiment with different values for the dense layer with 512 neurons." -> the code is already set to 1024 neurons.
  • Exercise 5: the output layer is incorrectly set to 5 neurons, which exercise 4 showed generates an error. The same problem appears in exercise 6.
  • Exercise 8: Incorrect use of loss metric when explanation claims an accuracy cutoff. (Also noted in #9). I'm wondering if the author doesn't understand that loss is on a different scale from accuracy.
  • Most of the exercises are missing the additional reporting of accuracy metric.

I've been enjoying the course so far but was really struck by how many errors are in this notebook - not something I've seen on Coursera before.

Thanks,
Chris

TensorFlowDeployment-Course1-TensorFlowJS-Week 2-Exercise

I am trying to solve the exercise, but every time I run it the loss and acc never change.
I tried just changing the example's data files (images and labels) from MNIST to Fashion MNIST, but the loss and acc still never change.
Any tips on what I am doing wrong?

Fashion-Script Exercise JS File:

import {FMnistData} from './fashion-data.js';
var canvas, ctx, saveButton, clearButton;
var pos = {x:0, y:0};
var rawImage;
var model;

function getModel() {

// In the space below create a convolutional neural network that can classify the 
// images of articles of clothing in the Fashion MNIST dataset. Your convolutional
// neural network should only use the following layers: conv2d, maxPooling2d,
// flatten, and dense. Since the Fashion MNIST has 10 classes, your output layer
// should have 10 units and a softmax activation function. You are free to use as
// many layers, filters, and neurons as you like.  
// HINT: Take a look at the MNIST example.
model = tf.sequential();

model.add(tf.layers.conv2d({inputShape: [28 , 28 , 1] , kernelSize: 3 , filters: 8 , activation: 'sigmoid'}));
model.add(tf.layers.maxPooling2d({poolSize: [2 , 2]}));
model.add(tf.layers.conv2d({filters: 16 , kernelSize: 3 , activation: 'sigmoid'}));
model.add(tf.layers.maxPooling2d({poolSize: [2 , 2]}));
model.add(tf.layers.flatten());
model.add(tf.layers.dense({units: 128 , activation: 'sigmoid'}));
model.add(tf.layers.dense({units: 10 , activation: 'softmax'}));

// Compile the model using the categoricalCrossentropy loss,
// the tf.train.adam() optimizer, and accuracy for your metrics.
model.compile({optimizer: tf.train.adam() , loss: 'categoricalCrossentropy' , metrics: ['accuracy']});

return model;

}

async function train(model, data) {

// Set the following metrics for the callback: 'loss', 'val_loss', 'acc', 'val_acc'.
const metrics = ['loss' , 'val_loss' , 'acc' , 'val_acc'];
const container = {name: 'Model Training' , styles: {height: '1000px' }};
const fitCallbacks = tfvis.show.fitCallbacks(container , metrics);

const BATCH_SIZE = 512;
const TRAIN_DATA_SIZE = 6000;
const TEST_DATA_SIZE = 1000;

const [trainXs , trainYs] = tf.tidy(() =>{
    const d = data.nextTrainBatch(TRAIN_DATA_SIZE);
    return[
        d.xs.reshape([TRAIN_DATA_SIZE , 28 , 28 , 1]),
        d.labels
    ];
});

const [testXs , testYs] = tf.tidy(() =>{
    const d = data.nextTestBatch(TEST_DATA_SIZE);
    return [
        d.xs.reshape([TEST_DATA_SIZE , 28 , 28 , 1]),
        d.labels
    ];
});

return model.fit(trainXs, trainYs, {
    batchSize: BATCH_SIZE,
    validationData: [testXs, testYs],
    epochs: 10,
    shuffle: true,
    callbacks: fitCallbacks
});

}

function setPosition(e){
pos.x = e.clientX-100;
pos.y = e.clientY-100;
}

function draw(e) {
if(e.buttons!=1) return;
ctx.beginPath();
ctx.lineWidth = 24;
ctx.lineCap = 'round';
ctx.strokeStyle = 'white';
ctx.moveTo(pos.x, pos.y);
setPosition(e);
ctx.lineTo(pos.x, pos.y);
ctx.stroke();
rawImage.src = canvas.toDataURL('image/png');
}

function erase() {
ctx.fillStyle = "black";
ctx.fillRect(0,0,280,280);
}

function save() {
var raw = tf.browser.fromPixels(rawImage,1);
var resized = tf.image.resizeBilinear(raw, [28,28]);
var tensor = resized.expandDims(0);

var prediction = model.predict(tensor);
var pIndex = tf.argMax(prediction, 1).dataSync();

var classNames = ["T-shirt/top", "Trouser", "Pullover", 
                  "Dress", "Coat", "Sandal", "Shirt",
                  "Sneaker",  "Bag", "Ankle boot"];
        
        
alert(classNames[pIndex]);

}

function init() {
canvas = document.getElementById('canvas');
rawImage = document.getElementById('canvasimg');
ctx = canvas.getContext("2d");
ctx.fillStyle = "black";
ctx.fillRect(0,0,280,280);
canvas.addEventListener("mousemove", draw);
canvas.addEventListener("mousedown", setPosition);
canvas.addEventListener("mouseenter", setPosition);
saveButton = document.getElementById('sb');
saveButton.addEventListener("click", save);
clearButton = document.getElementById('cb');
clearButton.addEventListener("click", erase);
}

async function run() {
const data = new FMnistData();
await data.load();
const model = getModel();
tfvis.show.modelSummary({name: 'Model Architecture'}, model);
await train(model, data);
await model.save('downloads://my_model');
init();
alert("Training is done, try classifying your drawings!");
}

document.addEventListener('DOMContentLoaded', run);

Fashion Data
/**
 * @license
 * Copyright 2018 Google LLC. All Rights Reserved.
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 * =============================================================================
 */

const IMAGE_SIZE = 784;
const NUM_CLASSES = 10;
const NUM_DATASET_ELEMENTS = 70000;

const TRAIN_TEST_RATIO = 1 / 7;

const NUM_TRAIN_ELEMENTS = Math.floor(TRAIN_TEST_RATIO * NUM_DATASET_ELEMENTS);
const NUM_TEST_ELEMENTS = NUM_DATASET_ELEMENTS - NUM_TRAIN_ELEMENTS;

const MNIST_IMAGES_SPRITE_PATH =
'https://storage.googleapis.com/learnjs-data/model-builder/fashion_mnist_images.png';
const MNIST_LABELS_PATH =
'https://storage.googleapis.com/learnjs-data/model-builder/fashion_mnist_labels_uint8';

/**
 * A class that fetches the sprited MNIST dataset and returns shuffled batches.
 *
 * NOTE: This will get much easier. For now, we do data fetching and
 * manipulation manually.
 */
export class FMnistData {
    constructor() {
        this.shuffledTrainIndex = 0;
        this.shuffledTestIndex = 0;
    }

async load() {
// Make a request for the MNIST sprited image.
const img = new Image();
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');
const imgRequest = new Promise((resolve, reject) => {
img.crossOrigin = '';
img.onload = () => {
img.width = img.naturalWidth;
img.height = img.naturalHeight;

    const datasetBytesBuffer =
        new ArrayBuffer(NUM_DATASET_ELEMENTS * IMAGE_SIZE * 4);

    const chunkSize = 5000;
    canvas.width = img.width;
    canvas.height = chunkSize;

    for (let i = 0; i < NUM_DATASET_ELEMENTS / chunkSize; i++) {
      const datasetBytesView = new Float32Array(
          datasetBytesBuffer, i * IMAGE_SIZE * chunkSize * 4,
          IMAGE_SIZE * chunkSize);
      ctx.drawImage(
          img, 0, i * chunkSize, img.width, chunkSize, 0, 0, img.width,
          chunkSize);

      const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);

      for (let j = 0; j < imageData.data.length / 4; j++) {
        // All channels hold an equal value since the image is grayscale, so
        // just read the red channel.
        datasetBytesView[j] = imageData.data[j * 4] / 255;
      }
    }
    this.datasetImages = new Float32Array(datasetBytesBuffer);

    resolve();
  };
  img.src = MNIST_IMAGES_SPRITE_PATH;
});

const labelsRequest = fetch(MNIST_LABELS_PATH);
const [imgResponse, labelsResponse] =
    await Promise.all([imgRequest, labelsRequest]);

this.datasetLabels = new Uint8Array(await labelsResponse.arrayBuffer());

// Create shuffled indices into the train/test set for when we select a
// random dataset element for training / validation.
this.trainIndices = tf.util.createShuffledIndices(NUM_TRAIN_ELEMENTS);
this.testIndices = tf.util.createShuffledIndices(NUM_TEST_ELEMENTS);

// Slice the images and labels into train and test sets.
this.trainImages =
    this.datasetImages.slice(0, IMAGE_SIZE * NUM_TRAIN_ELEMENTS);
this.testImages = this.datasetImages.slice(IMAGE_SIZE * NUM_TRAIN_ELEMENTS);
this.trainLabels =
    this.datasetLabels.slice(0, NUM_CLASSES * NUM_TRAIN_ELEMENTS);
this.testLabels =
    this.datasetLabels.slice(NUM_CLASSES * NUM_TRAIN_ELEMENTS);

}

nextTrainBatch(batchSize) {
return this.nextBatch(
batchSize, [this.trainImages, this.trainLabels], () => {
this.shuffledTrainIndex =
(this.shuffledTrainIndex + 1) % this.trainIndices.length;
return this.trainIndices[this.shuffledTrainIndex];
});
}

nextTestBatch(batchSize) {
return this.nextBatch(batchSize, [this.testImages, this.testLabels], () => {
this.shuffledTestIndex =
(this.shuffledTestIndex + 1) % this.testIndices.length;
return this.testIndices[this.shuffledTestIndex];
});
}

nextBatch(batchSize, data, index) {
const batchImagesArray = new Float32Array(batchSize * IMAGE_SIZE);
const batchLabelsArray = new Uint8Array(batchSize * NUM_CLASSES);

for (let i = 0; i < batchSize; i++) {
  const idx = index();

  const image =
      data[0].slice(idx * IMAGE_SIZE, idx * IMAGE_SIZE + IMAGE_SIZE);
  batchImagesArray.set(image, i * IMAGE_SIZE);

  const label =
      data[1].slice(idx * NUM_CLASSES, idx * NUM_CLASSES + NUM_CLASSES);
  batchLabelsArray.set(label, i * NUM_CLASSES);
}

const xs = tf.tensor2d(batchImagesArray, [batchSize, IMAGE_SIZE]);
const labels = tf.tensor2d(batchLabelsArray, [batchSize, NUM_CLASSES]);

return {xs, labels};

}
}
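
Two things may be worth checking, offered as assumptions rather than confirmed fixes. First, sigmoid activations in the convolutional and hidden dense layers train far more slowly than relu on this task and can leave the loss looking flat. Second, with TRAIN_TEST_RATIO = 1 / 7, only 10,000 of the 70,000 sprite images end up in the training set, whereas Fashion MNIST's usual split is 60,000 train / 10,000 test. A sketch of the relu variant of getModel:

    model.add(tf.layers.conv2d({inputShape: [28, 28, 1], kernelSize: 3, filters: 8, activation: 'relu'}));
    model.add(tf.layers.maxPooling2d({poolSize: [2, 2]}));
    model.add(tf.layers.conv2d({filters: 16, kernelSize: 3, activation: 'relu'}));
    model.add(tf.layers.maxPooling2d({poolSize: [2, 2]}));
    model.add(tf.layers.flatten());
    model.add(tf.layers.dense({units: 128, activation: 'relu'}));
    model.add(tf.layers.dense({units: 10, activation: 'softmax'}));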
