keras-tqdm

Keras integration with TQDM progress bars.

  • Keras is an awesome machine learning library for Theano or TensorFlow.
  • TQDM is a progress bar library with good support for nested loops and Jupyter/IPython notebooks.

Key features

  • TQDM supports nested progress bars. If you have Keras fit and predict loops within an outer TQDM loop, the nested loops will display properly.
  • TQDM supports Jupyter/IPython notebooks.
  • TQDM looks great!

TQDMNotebookCallback with leave_inner=False (default)

[screenshot: Keras TQDM with leave_inner=False]

TQDMNotebookCallback with leave_inner=True

[screenshot: Keras TQDM with leave_inner=True]

TQDMCallback for command-line scripts

[screenshot: Keras TQDM on the command line]

Installation

Stable release

pip install keras-tqdm

Development release

pip install git+https://github.com/bstriner/keras-tqdm.git --upgrade --no-deps

Development mode (changes to source take effect without reinstalling)

git clone https://github.com/bstriner/keras-tqdm.git
cd keras-tqdm
python setup.py develop

Basic usage

Keras TQDM is easy to use. The only required changes are to suppress Keras's default output (verbose=0) and add a callback to model.fit; the rest happens automatically. For a Jupyter notebook, the required modification is as simple as:

from keras_tqdm import TQDMNotebookCallback
# keras, model definition...
model.fit(X_train, Y_train, verbose=0, callbacks=[TQDMNotebookCallback()])

For plain-text mode (e.g. Python run from the command line):

from keras_tqdm import TQDMCallback
# keras, model definition...
model.fit(X_train, Y_train, verbose=0, callbacks=[TQDMCallback()])

Advanced usage

Use keras_tqdm to display TQDM progress bars for Keras fit loops. keras_tqdm loops can be nested inside TQDM loops to display nested progress bars (you can also use them inside ordinary for loops). Set verbose=0 to suppress the default Keras progress bar.

from keras_tqdm import TQDMCallback
from tqdm import tqdm
for model in tqdm(models, desc="Training several models"):
    model.fit(x, y, verbose=0, callbacks=[TQDMCallback()])

For IPython and Jupyter notebooks, use TQDMNotebookCallback instead of TQDMCallback, and tqdm_notebook instead of tqdm in your own code. Formatting is controlled by Python format strings. The default metric_format is "{name}: {value:0.3f}". For example, use TQDMCallback(metric_format="{name}: {value:0.6f}") for 6 decimal places, or "{name}: {value:e}" for scientific notation.
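
For instance, reusing the fit call from the basic usage above:

from keras_tqdm import TQDMCallback
# keras, model definition...
# 6 decimal places; use "{name}: {value:e}" for scientific notation instead
model.fit(X_train, Y_train, verbose=0,
          callbacks=[TQDMCallback(metric_format="{name}: {value:0.6f}")])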

Questions?

Please feel free to submit PRs and issues. Comments, questions, and requests are welcome. If you need more control, subclass TQDMCallback and override the tqdm function.
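
For example, a minimal sketch of that extension point (the desc, total, and leave arguments match the signature visible in the tracebacks further down; any extra keyword arguments are ordinary tqdm options):

from tqdm import tqdm
from keras_tqdm import TQDMCallback

class NarrowTQDMCallback(TQDMCallback):
    # Hypothetical subclass: fix the bar width to 80 columns
    def tqdm(self, desc, total, leave):
        return tqdm(desc=desc, total=total, leave=leave, ncols=80)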

Contributors

bnaul, bstriner, gabobert, github-bot-bot, rohitrawat, stared

Issues

can't deepcopy TQDMCallback object

from keras_tqdm import TQDMCallback
from copy import deepcopy
deepcopy(TQDMCallback())

doesn't work because stderr can't be pickled.

A quick fix is to set output_file=None. Maybe this should be the default?
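
A sketch of that workaround (output_file is an existing TQDMCallback argument; with None, tqdm falls back to its own default stream at display time, so nothing unpicklable is captured on the instance):

from copy import deepcopy
from keras_tqdm import TQDMCallback

callback = TQDMCallback(output_file=None)  # no stderr handle stored
copied = deepcopy(callback)                # should now succeed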

install issue?

I'm using Python 3.5.2 from Anaconda on a Windows 7 machine in a Jupyter notebook.

'3.5.2 |Continuum Analytics, Inc.| (default, Jul 5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)]'

After installing (via git clone and then python setup.py install), an .egg file appeared in the expected place in my conda tensorflow environment, and from keras_tqdm import TQDMCallback looks there for the installation but fails to resolve it correctly.

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-3-6e076365bc4a> in <module>()
      1 # using better progress bars using https://github.com/bstriner/keras-tqdm/
      2 
----> 3 from keras_tqdm import TQDMCallback, TQDMNotebookCallback

C:\Anaconda2\envs\tensorflow\lib\site-packages\keras_tqdm-0.0.1-py3.5.egg\keras_tqdm\__init__.py in <module>()
----> 1 from tqdm_callback import TQDMCallback
      2 from tqdm_notebook_callback import TQDMNotebookCallback

ImportError: No module named 'tqdm_callback'

This can be worked around with sys.path.insert(1, 'C:/<PATH>/keras-tqdm/keras_tqdm/') to put the correct directory ahead of the broken one in sys.path, but that isn't ideal.

Note that this is after pulling the updates from yesterday.

All in all, this is a minor problem to a very nice solution to the keras progbars freezing in Jupyter problem. Thank you!

KeyError: 'metrics'

OS: Ubuntu 18.04
Python: 3.6.9
TF: 2.6.2
Keras: 2.6.0
CUDA: 11.4
Running a Jupyter notebook in Visual Studio.

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-517-732094f0204b> in <module>
     79             validation_data = gen_val,
     80             verbose=0,
---> 81             callbacks=[TQDMCallback()])

~/.local/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
   1187               logs = tmp_logs  # No error, now safe to assign to logs.
   1188               end_step = step + data_handler.step_increment
-> 1189               callbacks.on_train_batch_end(end_step, logs)
   1190               if self.stop_training:
   1191                 break

~/.local/lib/python3.6/site-packages/keras/callbacks.py in on_train_batch_end(self, batch, logs)
    433     """
    434     if self._should_call_train_batch_hooks:
--> 435       self._call_batch_hook(ModeKeys.TRAIN, 'end', batch, logs=logs)
    436 
    437   def on_test_batch_begin(self, batch, logs=None):

~/.local/lib/python3.6/site-packages/keras/callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
    293       self._call_batch_begin_hook(mode, batch, logs)
    294     elif hook == 'end':
--> 295       self._call_batch_end_hook(mode, batch, logs)
    296     else:
    297       raise ValueError('Unrecognized hook: {}'.format(hook))

~/.local/lib/python3.6/site-packages/keras/callbacks.py in _call_batch_end_hook(self, mode, batch, logs)
    313       self._batch_times.append(batch_time)
    314 
--> 315     self._call_batch_hook_helper(hook_name, batch, logs)
    316 
    317     if len(self._batch_times) >= self._num_batches_for_timing_check:

~/.local/lib/python3.6/site-packages/keras/callbacks.py in _call_batch_hook_helper(self, hook_name, batch, logs)
    351     for callback in self.callbacks:
    352       hook = getattr(callback, hook_name)
--> 353       hook(batch, logs)
    354 
    355     if self._check_timing:

~/.local/lib/python3.6/site-packages/keras/callbacks.py in on_train_batch_end(self, batch, logs)
    718     """
    719     # For backwards compatibility.
--> 720     self.on_batch_end(batch, logs=logs)
    721 
    722   @doc_controls.for_subclass_implementers

~/.local/lib/python3.6/site-packages/keras_tqdm/tqdm_callback.py in on_batch_end(self, batch, logs)
    115         self.inner_count += update
    116         if self.inner_count < self.inner_total:
--> 117             self.append_logs(logs)
    118             metrics = self.format_metrics(self.running_logs)
    119             desc = self.inner_description_update.format(epoch=self.epoch, metrics=metrics)

~/.local/lib/python3.6/site-packages/keras_tqdm/tqdm_callback.py in append_logs(self, logs)
    134 
    135     def append_logs(self, logs):
--> 136         metrics = self.params['metrics']
    137         for metric, value in six.iteritems(logs):
    138             if metric in metrics:

KeyError: 'metrics'

on_batch_end incredibly slow

When I run an epoch with about 2 million samples and a batch size of 1, Keras/Tensorflow finishes in about 45 minutes.

When I run that same epoch with TQDMCallback, it takes over 13 hours.

It seems like there's logging going on during batches by default, which shouldn't be default behaviour. Not sure if that entirely explains it, though.
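
If the overhead is dominated by display updates, the subclassing hook from the README can throttle redraws; a sketch (mininterval and miniters are standard tqdm rate limits, and the values here are arbitrary). Note this cannot remove the per-batch Python callback cost itself:

from tqdm import tqdm
from keras_tqdm import TQDMCallback

class ThrottledTQDMCallback(TQDMCallback):
    def tqdm(self, desc, total, leave):
        # redraw at most every 5 seconds and at least 10000 iterations apart
        return tqdm(desc=desc, total=total, leave=leave,
                    mininterval=5.0, miniters=10000)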

Progress bar is stuck at almost finishing an epoch

OS: Ubuntu 18.04 LTS

@bnaul Hello, really love your elegant package!
However, I've recently run into this bug several times, as shown below.

[screenshots: progress bar frozen near the end of an epoch]

The progress bar is simply stuck, but I've confirmed that Keras is still running because the GPUs are still busy. Sometimes it gets stuck not only at the final point but also at intermediate points.
Thanks a lot, looking forward to your reply!

Support for fit_generator()

Unlike the fit() method, fit_generator() does not have a 'batch_size' parameter defined. This produces a KeyError: 'batch_size' on line 81 of tqdm_callback.py:

self.batch_count = int(ceil(self.params['nb_sample'] / self.params['batch_size']))

fit_generator() is used instead of fit() when reading image directories and augmenting images on the fly.

Here is a minimal working example that works when not using verbose=0, callbacks=[TQDMCallback()], but immediately fails when used with it:

'''Trains a simple convnet on the MNIST dataset.
Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
'''

from __future__ import print_function
import numpy as np
np.random.seed(1337)  # for reproducibility

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K

from keras.preprocessing.image import ImageDataGenerator

from keras_tqdm import TQDMCallback, TQDMNotebookCallback

batch_size = 128
nb_classes = 10
nb_epoch = 1

# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
pool_size = (2, 2)
# convolution kernel size
kernel_size = (3, 3)

# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()

if K.image_dim_ordering() == 'th':
    X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
    X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
    X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

model = Sequential()

model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
                        border_mode='valid',
                        input_shape=input_shape))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='adadelta',
              metrics=['accuracy'])

# fit() works well with or without TQDMCallback():
# model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
#          validation_data=(X_test, Y_test), verbose=0, callbacks=[TQDMCallback()])

# Using an ImageDataGenerator
datagen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True)

# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(X_train)

# fits the model on batches with real-time data augmentation:

#####>>> This works well without using TQDMCallback():
#model.fit_generator(datagen.flow(X_train, Y_train, batch_size=32),
#                    samples_per_epoch=len(X_train), nb_epoch=nb_epoch)

#####>>> but fails when using TQDMCallback():
model.fit_generator(datagen.flow(X_train, Y_train, batch_size=32),
                    samples_per_epoch=len(X_train), nb_epoch=nb_epoch, verbose=0, callbacks=[TQDMCallback()])

score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])

keras-tqdm in Anaconda

I tried the following:

  • creating a keras-tqdm folder under site-packages in an Anaconda environment
  • running python setup.py develop

But I get errors. I don't suppose there's a nice cookbook way to install this in a conda environment, is there?
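
For what it's worth, the stable release from the Installation section above generally installs cleanly with pip inside an activated conda environment (the environment name here is hypothetical):

conda activate my-keras-env
pip install keras-tqdm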

Getting a "metric function identifier" error

I get the following error in a notebook, and a similar error when running the corresponding code at the command line:

ValueError: ('Could not interpret metric function identifier:', <keras_tqdm.tqdm_notebook_callback.TQDMNotebookCallback object at 0x7eff8b8872b0>)

The code I am using is:

from keras_tqdm import TQDMNotebookCallback
...
model.compile(loss='binary_crossentropy', optimizer='adam', verbose=0, metrics=[TQDMNotebookCallback()])
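
Per the basic usage section above, the callback belongs in model.fit's callbacks argument rather than in compile's metrics (which expects metric functions); a corrected sketch with the same loss and optimizer:

from keras_tqdm import TQDMNotebookCallback
# keras, model definition...
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, Y_train, verbose=0, callbacks=[TQDMNotebookCallback()])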

Cannot serialize socket object

Hello

I'm trying to use the TQDMNotebookCallback when training a keras model with pipelining.
However, I'm unable to get it working.

My code is as follows:

========================

from keras_tqdm import TQDMCallback, TQDMNotebookCallback
from tqdm import tqdm

seed = 108

# Function to create model, required for KerasClassifier
def create_large_wide_model():
    # create model
    model = Sequential()
    model.add(Dense(800, input_dim=784, kernel_initializer='uniform', activation='relu'))
    model.add(Dense(64, kernel_initializer='uniform', activation='relu'))
    model.add(Dense(10, kernel_initializer='uniform', activation='softmax'))

    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

from numpy.random import seed
seed(1)

from tensorflow import set_random_seed
set_random_seed(108)

estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasClassifier(build_fn=create_large_wide_model, nb_epoch=250,
validation_split=0.15,batch_size=25, verbose=0,callbacks=[TQDMNotebookCallback()])))

pipeline = Pipeline(estimators)

%time results = pipeline.fit(X_train, y_train)

========================

The error message is as follows:

C:\Anaconda\lib\socket.py in __getstate__(self)
    183 
    184     def __getstate__(self):
--> 185         raise TypeError("Cannot serialize socket object")
    186 
    187     def dup(self):

TypeError: Cannot serialize socket object

How do I use TQDMNotebookCallback when pipelining?
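
The deepcopy issue above suggests a possible workaround, since this is the same kind of unpicklable stream being copied; assuming the notebook callback forwards output_file to TQDMCallback, constructing it without a captured stream might avoid the serialization:

estimators.append(('mlp', KerasClassifier(build_fn=create_large_wide_model, nb_epoch=250,
                   validation_split=0.15, batch_size=25, verbose=0,
                   callbacks=[TQDMNotebookCallback(output_file=None)])))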

Space between lines keeps increasing

When I use TQDM with Keras it works well, but each time an epoch finishes, the space between the training-loop progress bar and the epoch-loop progress bar keeps increasing. It is annoying when I have several epochs. Here is an example:

[screenshot: growing vertical gaps between the progress bars]

epoch tracking for fit_generator

Great package, really useful. However, it does not update the epoch bar when using fit_generator. It looks like the code has started to be adapted for that but isn't quite finished; is that right?

Can I just lower the number of samples per epoch to get more frequent updates? Or will that have side effects?

precision of display

@bstriner Really love your elegant package.

Would you please let me know if it's possible to customize the display precision for the "loss"? Currently it defaults to 3 digits after the decimal point, but I would like around 6 digits, or scientific notation.

Thanks in advance !

Error: global name 'IntProgress' is not defined

Hello. Good job.

I tried to use :
model.fit(X, y, epochs=100, verbose=0, callbacks=[TQDMNotebookCallback()])

I get :
NameError: global name 'IntProgress' is not defined

I'm using Python 2.7 and Keras 2.0.1; is it a compatibility problem?

Error sequence:

/home/argante/anaconda2/envs/keratina/lib/python2.7/site-packages/keras/models.pyc in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, **kwargs)
    854                               class_weight=class_weight,
    855                               sample_weight=sample_weight,
--> 856                               initial_epoch=initial_epoch)
    857 
    858     def evaluate(self, x, y, batch_size=32, verbose=1,

/home/argante/anaconda2/envs/keratina/lib/python2.7/site-packages/keras/engine/training.pyc in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, **kwargs)
   1496                               val_f=val_f, val_ins=val_ins, shuffle=shuffle,
   1497                               callback_metrics=callback_metrics,
-> 1498                               initial_epoch=initial_epoch)
   1499 
   1500     def evaluate(self, x, y, batch_size=32, verbose=1, sample_weight=None):

/home/argante/anaconda2/envs/keratina/lib/python2.7/site-packages/keras/engine/training.pyc in _fit_loop(self, f, ins, out_labels, batch_size, epochs, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch)
   1120             'metrics': callback_metrics or [],
   1121         })
-> 1122         callbacks.on_train_begin()
   1123         callback_model.stop_training = False
   1124         for cbk in callbacks:

/home/argante/anaconda2/envs/keratina/lib/python2.7/site-packages/keras/callbacks.pyc in on_train_begin(self, logs)
    128         logs = logs or {}
    129         for callback in self.callbacks:
--> 130             callback.on_train_begin(logs)
    131 
    132     def on_train_end(self, logs=None):

/home/argante/anaconda2/envs/keratina/lib/python2.7/site-packages/keras_tqdm/tqdm_callback.pyc in on_train_begin(self, logs)
    127                       else self.params['nb_epoch'])
    128             self.tqdm_outer = self.build_tqdm_outer(desc=self.outer_description,
--> 129                                                     total=epochs)
    130 
    131     def on_train_end(self, logs={}):

/home/argante/anaconda2/envs/keratina/lib/python2.7/site-packages/keras_tqdm/tqdm_callback.pyc in build_tqdm_outer(self, desc, total)
     65         :return: new progress bar
     66         """
---> 67         return self.tqdm(desc=desc, total=total, leave=self.leave_outer)
     68 
     69     def build_tqdm_inner(self, desc, total):

/home/argante/anaconda2/envs/keratina/lib/python2.7/site-packages/keras_tqdm/tqdm_notebook_callback.pyc in tqdm(self, desc, total, leave)
     31         :return: new progress bar
     32         """
---> 33         return tqdm_notebook(desc=desc, total=total, leave=leave)

/home/argante/anaconda2/envs/keratina/lib/python2.7/site-packages/tqdm/__init__.pyc in tqdm_notebook(*args, **kwargs)
     17     """See tqdm._tqdm_notebook.tqdm_notebook for full documentation"""
     18     from ._tqdm_notebook import tqdm_notebook as _tqdm_notebook
---> 19     return _tqdm_notebook(*args, **kwargs)
     20 
     21 

/home/argante/anaconda2/envs/keratina/lib/python2.7/site-packages/tqdm/_tqdm_notebook.pyc in __init__(self, *args, **kwargs)
    178         # self.sp('', close=True)
    179         # Replace with IPython progress bar display (with correct total)
--> 180         self.sp = self.status_printer(self.fp, self.total, self.desc)
    181         self.desc = None  # trick to place description before the bar
    182 

/home/argante/anaconda2/envs/keratina/lib/python2.7/site-packages/tqdm/_tqdm_notebook.pyc in status_printer(_, total, desc)
     94         # Prepare IPython progress bar
     95         if total:
---> 96             pbar = IntProgress(min=0, max=total)
     97         else:  # No total? Show info style bar with no progress tqdm status
     98             pbar = IntProgress(min=0, max=1)
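
For reference, this NameError usually means ipywidgets is missing or its notebook extension is not enabled; the standard fix from the ipywidgets documentation is:

pip install ipywidgets
jupyter nbextension enable --py widgetsnbextension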

How to make it not print on multiple lines?

I really like the promise of this library, but I can't seem to make it work as I would like. How can I make it not print several lines? I just want one progress bar for the full training run.

I am adding this to Keras callbacks:
TQDMCallback() # I can add leave_inner=False, leave_outer=False with seemingly no effect.

Here is an example of the output in my Jupyter notebook when I do model.fit(...). Tqdm works fine, i.e. I can do a loop and have a progress bar that displays the progress on a single line.

Training: 0%| | 0/9 [00:00<?, ?it/s]
Epoch: 4: 0%| | 0/94267 [00:00<?, ?it/s]
Epoch: 4 - loss: 3.725 30%|███ | 28672/94267 [00:00<00:00, 246462.35it/s]
Epoch: 4 - loss: 3.724 56%|█████▋ | 53248/94267 [00:00<00:00, 244136.09it/s]
Epoch: 4 - loss: 3.723 83%|████████▎ | 77824/94267 [00:00<00:00, 235065.89it/s]
Epoch: 4 - loss: 3.724, val_loss: 3.722100%|██████████| 94267/94267 [00:00<00:00, 158220.92it/s]
Training: 11%|█ | 1/9 [00:00<00:04, 1.93it/s]
Epoch: 5: 0%| | 0/94267 [00:00<?, ?it/s]
Epoch: 5 - loss: 3.724 30%|███ | 28672/94267 [00:00<00:00, 253709.54it/s]
Epoch: 5 - loss: 3.723 61%|██████ | 57344/94267 [00:00<00:00, 258542.12it/s]
Epoch: 5 - loss: 3.723 91%|█████████ | 86016/94267 [00:00<00:00, 260625.83it/s]
Epoch: 5 - loss: 3.724, val_loss: 3.722100%|██████████| 94267/94267 [00:00<00:00, 125086.89it/s]
Training: 22%|██▏ | 2/9 [00:00<00:03, 1.98it/s]
Epoch: 6: 0%| | 0/94267 [00:00<?, ?it/s]
Epoch: 6 - loss: 3.724 26%|██▌ | 24576/94267 [00:00<00:00, 242534.19it/s]
Epoch: 6 - loss: 3.724 56%|█████▋ | 53248/94267 [00:00<00:00, 243923.50it/s]
Epoch: 6 - loss: 3.724 83%|████████▎ | 77824/94267 [00:00<00:00, 243747.74it/s]
Epoch: 6 - loss: 3.724, val_loss: 3.722100%|██████████| 94267/94267 [00:00<00:00, 163744.60it/s]
Training: 33%|███▎ | 3/9 [00:01<00:03, 1.99it/s]
Epoch: 7: 0%| | 0/94267 [00:00<?, ?it/s]
Epoch: 7 - loss: 3.723 30%|███ | 28672/94267 [00:00<00:00, 249677.85it/s]
Epoch: 7 - loss: 3.724 56%|█████▋ | 53248/94267 [00:00<00:00, 242249.26it/s]
Epoch: 7 - loss: 3.724 78%|███████▊ | 73728/94267 [00:00<00:00, 222114.27it/s]
Epoch: 7 - loss: 3.724100%|█████████▉| 94208/94267 [00:00<00:00, 214223.28it/s]
Epoch: 7 - loss: 3.724, val_loss: 3.722100%|██████████| 94267/94267 [00:00<00:00, 1461.41it/s]
Training: 44%|████▍ | 4/9 [00:02<00:02, 1.91it/s]
Epoch: 8: 0%| | 0/94267 [00:00<?, ?it/s]
Epoch: 8 - loss: 3.724 22%|██▏ | 20480/94267 [00:00<00:00, 201277.37it/s]
Epoch: 8 - loss: 3.724 52%|█████▏ | 49152/94267 [00:00<00:00, 213755.21it/s]
Epoch: 8 - loss: 3.724 78%|███████▊ | 73728/94267 [00:00<00:00, 211569.82it/s]
Epoch: 8 - loss: 3.724, val_loss: 3.722100%|██████████| 94267/94267 [00:00<00:00, 164307.50it/s]

Relevant package versions in my conda environment:

ipykernel==4.6.1
ipython==6.1.0
ipython-genutils==0.2.0
ipywidgets==6.0.0
jupyter==1.0.0
jupyter-client==5.1.0
jupyter-console==5.1.0
Keras==2.0.5
keras-tqdm==2.0.1
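
Given the plain-text bars in the output above, a likely cause is using the command-line TQDMCallback inside Jupyter; the notebook variant from the README renders each bar as a single updating widget instead:

from keras_tqdm import TQDMNotebookCallback
model.fit(X_train, Y_train, verbose=0, callbacks=[TQDMNotebookCallback()])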

'metrics' might not be present in params

Usually I fit my models without any metrics, so params['metrics'] raises a KeyError in my case.
Also, some variables are used only under the show_inner clause but are initialized in the outer scope.
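
A minimal defensive sketch of a fix (safe_metrics is a hypothetical helper; append_logs mirrors the method visible in the tracebacks above, and format_metrics is assumed to read params['metrics'] the same way, as in the repository source):

import numpy as np
import six
from keras_tqdm import TQDMCallback

class SafeTQDMCallback(TQDMCallback):
    def safe_metrics(self, logs):
        # Fall back to treating every logged key as a metric when newer
        # Keras/TF versions omit params['metrics'].
        return self.params.get('metrics', list(logs))

    def append_logs(self, logs):
        metrics = self.safe_metrics(logs)
        for metric, value in six.iteritems(logs):
            if metric in metrics:
                self.running_logs.setdefault(metric, []).append(value)

    def format_metrics(self, logs):
        metrics = self.safe_metrics(logs)
        return ", ".join(self.metric_format.format(name=metric, value=np.mean(logs[metric], axis=None))
                         for metric in metrics if metric in logs)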

Failed to display Jupyter Widget of type HBox.

Using the TQDMNotebookCallback doesn't seem to work. I have updated jupyter, notebook, and ipywidgets, and rerun jupyter nbextension enable --py widgetsnbextension, but when using the callback in the fit function I get the following message:

Failed to display Jupyter Widget of type HBox.
If you're reading this message in Jupyter Notebook or JupyterLab, it may mean that the widgets JavaScript is still loading. If this message persists, it likely means that the widgets JavaScript library is either not installed or not enabled. See the Jupyter Widgets Documentation for setup instructions.
If you're reading this message in another notebook frontend (for example, a static rendering on GitHub or NBViewer), it may mean that your frontend doesn't currently support widgets.

Please support starting at a non-zero epoch

Keras allows you to begin training at a specific epoch, which is useful if you want to continue training a model and keep the history data clean. Can you make sure that the total length is epochs - initial_epoch, to keep the time estimates accurate and prevent it from looking broken when training ends before the bar is full?
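
In code, what is being requested (initial_epoch is a standard model.fit argument in Keras):

# resumes at epoch 10 and trains through epoch 19; the outer bar's total
# should then be epochs - initial_epoch = 10 rather than 20
model.fit(X_train, Y_train, initial_epoch=10, epochs=20,
          verbose=0, callbacks=[TQDMCallback()])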

Thanks!

Metrics Error in categorical_embedder

Code:

X_train, X_test, y_train, y_test = train_test_split(X_encoded, y)

embeddings = ce.get_embeddings(X_train, y_train, categorical_embedding_info=embedding_info,
                               is_classification=True, epochs=100, batch_size=256)

Error:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input> in <module>
      3 
      4 embeddings = ce.get_embeddings(X_train, y_train, categorical_embedding_info=embedding_info,
----> 5                                is_classification=True, epochs=100, batch_size=256
      6                                )

/opt/conda/lib/python3.6/site-packages/categorical_embedder/__init__.py in get_embeddings(X_train, y_train, categorical_embedding_info, is_classification, epochs, batch_size)
    173 
    174     nnet.compile(loss=loss, optimizer='adam', metrics=[metrics])
--> 175     nnet.fit(x_inputs, y_train.values, batch_size=batch_size, epochs=epochs, validation_split=0.2, callbacks=[TQDMNotebookCallback()], verbose=0)
    176 
    177     embs = list(map(lambda x: x.get_weights()[0], [x for x in nnet.layers if 'Embedding' in str(x)]))

/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
     64   def _method_wrapper(self, *args, **kwargs):
     65     if not self._in_multi_worker_mode():  # pylint: disable=protected-access
---> 66       return method(self, *args, **kwargs)
     67 
     68     # Running inside run_distribute_coordinator already.

/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
    853               context.async_wait()
    854             logs = tmp_logs  # No error, now safe to assign to logs.
--> 855             callbacks.on_train_batch_end(step, logs)
    856           epoch_logs = copy.copy(logs)
    857 

/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/callbacks.py in on_train_batch_end(self, batch, logs)
    388     if self._should_call_train_batch_hooks:
    389       logs = self._process_logs(logs)
--> 390       self._call_batch_hook(ModeKeys.TRAIN, 'end', batch, logs=logs)
    391 
    392   def on_test_batch_begin(self, batch, logs=None):

/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/callbacks.py in _call_batch_hook(self, mode, hook, batch, logs)
    296     for callback in self.callbacks:
    297       batch_hook = getattr(callback, hook_name)
--> 298       batch_hook(batch, logs)
    299     self._delta_ts[hook_name].append(time.time() - t_before_callbacks)
    300 

/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/callbacks.py in on_train_batch_end(self, batch, logs)
    613     """
    614     # For backwards compatibility.
--> 615     self.on_batch_end(batch, logs=logs)
    616 
    617   @doc_controls.for_subclass_implementers

/opt/conda/lib/python3.6/site-packages/keras_tqdm/tqdm_callback.py in on_batch_end(self, batch, logs)
    115         self.inner_count += update
    116         if self.inner_count < self.inner_total:
--> 117             self.append_logs(logs)
    118             metrics = self.format_metrics(self.running_logs)
    119             desc = self.inner_description_update.format(epoch=self.epoch, metrics=metrics)

/opt/conda/lib/python3.6/site-packages/keras_tqdm/tqdm_callback.py in append_logs(self, logs)
    134 
    135     def append_logs(self, logs):
--> 136         metrics = self.params['metrics']
    137         for metric, value in six.iteritems(logs):
    138             if metric in metrics:

KeyError: 'metrics'

no need for extra loop

There is no need for an extra loop, i.e. for i in range(10): epochs already work as expected.

(A side remark: I would default to a single progress bar, as below; this is important for many-epoch training. Or at least explain that it is an option.)

[screenshot from 2017-01-10 17:37:09: a single progress bar]

The update of the progress bar is not correct for tensorflow 2

The progress bar is updated by the batch size instead of by 1.

This is due to the mode being 0 when it should be 1. The mode is set here: https://github.com/bstriner/keras-tqdm/blob/master/keras_tqdm/tqdm_callback.py#L85-L92, but this technique doesn't work for TensorFlow 2, where the params might look like this:

{'batch_size': None, 'epochs': 200, 'steps': 90, 'samples': 90, 'verbose': 2, 'do_validation': True, 'metrics': ['loss',]}

I will provide a minimal example to highlight this, and make a PR if this repo is still maintained (which it doesn't look like it is, @bstriner?).
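
Until then, a heavily hedged workaround sketch: assuming (per the linked lines) that the callback chooses sample-based counting whenever params contains a 'samples' key, hiding that key during on_epoch_begin forces the step-based mode this issue asks for:

from keras_tqdm import TQDMCallback

class StepTQDMCallback(TQDMCallback):
    # Hypothetical workaround for the TF2-style params shown above:
    # temporarily hide 'samples' so the base class falls through to 'steps'.
    def on_epoch_begin(self, epoch, logs=None):
        samples = self.params.pop('samples', None)
        try:
            super(StepTQDMCallback, self).on_epoch_begin(epoch, logs or {})
        finally:
            if samples is not None:
                self.params['samples'] = samples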

Could we get ascii parameter

There is an issue with some consoles that causes new lines to be printed repeatedly. This is well documented on the tqdm repo (gitlink).

Would it be possible to allow passing ascii=True to tqdm, which is the currently recommended fix?
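
In the meantime, the subclassing route the README describes can pass it through; a minimal sketch:

from tqdm import tqdm
from keras_tqdm import TQDMCallback

class AsciiTQDMCallback(TQDMCallback):
    def tqdm(self, desc, total, leave):
        # ascii=True is the standard tqdm option recommended for such consoles
        return tqdm(desc=desc, total=total, leave=leave, ascii=True)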

Is it possible to keep the progress bars?

Why do the progress bars disappear when you copy the notebook, or close it and reopen it? Usually all output from Jupyter notebooks is saved with the notebook. Sometimes it is useful to look back and see the progress for each epoch.
