
machine-learning-projects's Introduction

Machine-Learning-Projects

Machine Learning Experiments and Work

machine-learning-projects's People

Contributors

willkoehrsen


machine-learning-projects's Issues

ValueError: number sections must be larger than 0

Hi!
I was reading your post and found it very interesting.
But when I tried to run it, I got this error:

/usr/bin/python3.6 /home/carlos/PycharmProjects/tfg/cross-validation/main.py
Fitting 3 folds for each of 50 candidates, totalling 150 fits
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
2018-04-14 16:08:10.580413: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 1.1s
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 1.1s remaining: 0.0s
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 1.2s
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 1.1s
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.5s
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.5s
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.5s
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 1.4s
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 1.3s
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 1.4s
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.9s
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.8s
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.8s
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 6.2s
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 5.5s
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 5.6s
[CV] n_neurons=50, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.005, dropout_rate=0.5, batch_size=128, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
Traceback (most recent call last):
File "/home/carlos/.local/lib/python3.6/site-packages/numpy/lib/shape_base.py", line 463, in array_split
Nsections = len(indices_or_sections) + 1
TypeError: object of type 'int' has no len()

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/carlos/PycharmProjects/tfg/cross-validation/main.py", line 54, in
random_search.fit(X_train, y_train)
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/model_selection/_search.py", line 639, in fit
cv.split(X, y, groups)))
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 779, in __call__
while self.dispatch_one_batch(iterator):
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 625, in dispatch_one_batch
self._dispatch(tasks)
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 588, in _dispatch
job = self._backend.apply_async(batch, callback=cb)
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 111, in apply_async
result = ImmediateResult(func)
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 332, in __init__
self.results = batch()
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 131, in __call__
return [func(*args, **kwargs) for func, args, kwargs in self.items]
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 131, in
return [func(*args, **kwargs) for func, args, kwargs in self.items]
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/model_selection/_validation.py", line 458, in _fit_and_score
estimator.fit(X_train, y_train, **fit_params)
File "/home/carlos/PycharmProjects/tfg/cross-validation/dnn_classifier.py", line 194, in fit
for rnd_indices in np.array_split(rnd_idx, num_instances // self.batch_size):
File "/home/carlos/.local/lib/python3.6/site-packages/numpy/lib/shape_base.py", line 469, in array_split
raise ValueError('number sections must be larger than 0.')
ValueError: number sections must be larger than 0.

Process finished with exit code 1

The DNN_Classifier code is here: dnn_classifier.py
My main script, where I run everything: main.py
And I'm using the iris dataset: iris.xls
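For what it's worth, a likely cause (an assumption from the traceback, not confirmed against dnn_classifier.py): iris has only 150 rows, so each 3-fold training split holds about 100 instances, and once the search samples batch_size=128, the expression `num_instances // self.batch_size` evaluates to 0 — and `np.array_split` refuses 0 sections. A minimal sketch of the failure and a guard, using hypothetical sizes:

```python
# Hypothetical numbers reproducing the error: a ~100-row training split
# combined with the sampled batch_size=128 from the search space.
import numpy as np

num_instances, batch_size = 100, 128
idx = np.random.permutation(num_instances)

try:
    # 100 // 128 == 0 sections -> ValueError
    np.array_split(idx, num_instances // batch_size)
except ValueError as e:
    print(e)

# Guard: always form at least one batch, even when batch_size > num_instances
batches = np.array_split(idx, max(1, num_instances // batch_size))
```

With the `max(1, ...)` guard, oversized batch sizes simply degrade to full-batch training instead of crashing.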

Suggested changes

  1. I suggest that you modify the following functions to include an additional argument; i.e.
RandomizedSearchCV(...,return_train_score=True,...)
GridSearchCV(...,return_train_score=True,...)

This should eliminate many irritating warning messages when using more recent versions of the imported Python packages.

  2. The following needs to be edited:

Training Curves

We can perform grid search over only one parameter to observe the effects of changing that parameter on performance. We will look at training time, training set accuracy, and teseting set accuracy.

  3. In: http://localhost:8889/notebooks/random_forest_explained/Random%20Forest%20Explained.ipynb

where you have code for:

Data Preparation
One-Hot Encoding

IMHO you should make clear that you are using 7 days, not 5 days, for the analysis.
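The first suggestion above can be sketched as follows. This is a hedged illustration, not the repo's actual search setup: the estimator, grid, and data are all hypothetical stand-ins; the point is only the `return_train_score=True` argument.

```python
# Hedged sketch of suggestion 1: pass return_train_score=True so recent
# scikit-learn versions stop emitting the deprecation warning and include
# train scores in cv_results_. The grid and data below are hypothetical.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X = np.random.RandomState(42).rand(60, 4)
y = np.arange(60) % 2

search = GridSearchCV(
    DecisionTreeClassifier(random_state=42),
    param_grid={"max_depth": [1, 2]},   # hypothetical grid
    cv=3,
    return_train_score=True,            # the suggested extra argument
)
search.fit(X, y)
# search.cv_results_ now contains mean_train_score alongside mean_test_score
```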

RandomizedSearchCV freezes computer

I have adapted your code for a larger dataset and it froze my computer when I ran the line

rf_random = RandomizedSearchCV(estimator=rf, param_distributions=random_grid, n_iter = 100, scoring='neg_mean_absolute_error', cv = 3, verbose=2, random_state=42, n_jobs=-1, return_train_score=True)
I think this is because you assigned n_jobs=-1. Keeping the default (or manually setting it to 1, or to some value less than your total number of cores) should prevent the freeze, though it will increase computation time.
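One way to pick an explicit worker count instead of -1, as a minimal sketch (the "leave one core free" heuristic is my assumption, not anything from the repo):

```python
# Hedged sketch: choose an n_jobs value that leaves headroom for the OS/UI
# rather than saturating every core with n_jobs=-1.
import os

# os.cpu_count() can return None, hence the fallback to 2
n_jobs = max(1, (os.cpu_count() or 2) - 1)

# Then pass it to the search, e.g.:
# rf_random = RandomizedSearchCV(..., n_jobs=n_jobs, ...)
```

This trades some wall-clock time for a machine that stays responsive during the search.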
