Machine Learning Experiments and Work
Hi!
I was reading your post and found it very interesting, but when I tried to run it I got this error:
/usr/bin/python3.6 /home/carlos/PycharmProjects/tfg/cross-validation/main.py
Fitting 3 folds for each of 50 candidates, totalling 150 fits
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
2018-04-14 16:08:10.580413: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 1.1s
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 1.1s remaining: 0.0s
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 1.2s
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 1.1s
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.5s
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.5s
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.5s
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 1.4s
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 1.3s
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 1.4s
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.9s
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.8s
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.8s
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 6.2s
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 5.5s
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 5.6s
[CV] n_neurons=50, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.005, dropout_rate=0.5, batch_size=128, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
Traceback (most recent call last):
File "/home/carlos/.local/lib/python3.6/site-packages/numpy/lib/shape_base.py", line 463, in array_split
Nsections = len(indices_or_sections) + 1
TypeError: object of type 'int' has no len()
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/carlos/PycharmProjects/tfg/cross-validation/main.py", line 54, in <module>
random_search.fit(X_train, y_train)
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/model_selection/_search.py", line 639, in fit
cv.split(X, y, groups)))
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 779, in __call__
while self.dispatch_one_batch(iterator):
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 625, in dispatch_one_batch
self._dispatch(tasks)
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 588, in _dispatch
job = self._backend.apply_async(batch, callback=cb)
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 111, in apply_async
result = ImmediateResult(func)
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 332, in __init__
self.results = batch()
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 131, in __call__
return [func(*args, **kwargs) for func, args, kwargs in self.items]
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 131, in <listcomp>
return [func(*args, **kwargs) for func, args, kwargs in self.items]
File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/model_selection/_validation.py", line 458, in _fit_and_score
estimator.fit(X_train, y_train, **fit_params)
File "/home/carlos/PycharmProjects/tfg/cross-validation/dnn_classifier.py", line 194, in fit
for rnd_indices in np.array_split(rnd_idx, num_instances // self.batch_size):
File "/home/carlos/.local/lib/python3.6/site-packages/numpy/lib/shape_base.py", line 469, in array_split
raise ValueError('number sections must be larger than 0.')
ValueError: number sections must be larger than 0.
Process finished with exit code 1
The DNN_Classifier code is here: dnn_classifier.py
My main where I run things: main.py
And I'm using the Iris dataset: iris.xls
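Judging from the traceback, the failure happens in the batching loop `np.array_split(rnd_idx, num_instances // self.batch_size)`: the failing candidate uses batch_size=128, and with the small Iris dataset a 3-fold training split has fewer than 128 rows, so the integer division yields 0 sections and `np.array_split` raises. A minimal sketch of a guard (the helper name and signature are illustrative, not the actual `dnn_classifier.py` code):

```python
import numpy as np

def batch_indices(num_instances, batch_size, rng=np.random.default_rng(42)):
    """Yield shuffled index batches, guarding against batch_size > num_instances."""
    rnd_idx = rng.permutation(num_instances)
    # max(1, ...) guarantees at least one section, avoiding the ValueError
    n_sections = max(1, num_instances // batch_size)
    return np.array_split(rnd_idx, n_sections)

# 100 training rows with batch_size=128 previously raised ValueError;
# now it simply produces a single (smaller) batch.
batches = batch_indices(100, 128)
```

Alternatively, cap the sampled batch sizes in the search space at the size of the smallest training fold.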
RandomizedSearchCV(...,return_train_score=True,...)
GridSearchCV(...,return_train_score=True,...)
This should eliminate many irritating warning messages when using more recent versions of the imported Python packages under Python 3+.
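A small self-contained example of passing `return_train_score=True` (the estimator and parameter grid here are placeholders, not the code from the post):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
search = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": [1, 3, 5]},
    cv=3,
    return_train_score=True,  # record train scores and silence the deprecation warning
)
search.fit(X, y)
print("mean_train_score" in search.cv_results_)  # True
```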
Training Curves
We can perform a grid search over a single parameter to observe the effect of changing that parameter on performance. We will look at training time, training set accuracy, and testing set accuracy.
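A sketch of how such a one-parameter sweep might look, pulling fit time, training accuracy, and cross-validated test accuracy out of `cv_results_` (the estimator and the swept parameter are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [10, 50, 100]},  # vary only one parameter
    cv=3,
    return_train_score=True,
)
search.fit(X, y)

# One row per candidate: training time, train accuracy, validation accuracy
for params, fit_t, train_acc, test_acc in zip(
    search.cv_results_["params"],
    search.cv_results_["mean_fit_time"],
    search.cv_results_["mean_train_score"],
    search.cv_results_["mean_test_score"],
):
    print(params, f"fit={fit_t:.2f}s train={train_acc:.3f} cv={test_acc:.3f}")
```

Plotting these three series against the parameter values gives the training curves described above.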
where you have code for:
Data Preparation
One-Hot Encoding
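For the one-hot encoding step, a minimal sketch with `pandas.get_dummies` (the toy frame and column names are invented for illustration):

```python
import pandas as pd

# Toy frame standing in for the prepared data
df = pd.DataFrame({"day": ["Mon", "Tue", "Mon"], "temp": [58, 62, 60]})

# One binary column per category; non-categorical columns pass through
encoded = pd.get_dummies(df, columns=["day"])
print(list(encoded.columns))  # ['temp', 'day_Mon', 'day_Tue']
```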
IMHO you should show that you are using 7 days, not 5 days, for the analysis.
I have adapted your code for a larger dataset and had it freeze when I ran the line
rf_random = RandomizedSearchCV(estimator=rf, param_distributions=random_grid, n_iter = 100, scoring='neg_mean_absolute_error', cv = 3, verbose=2, random_state=42, n_jobs=-1, return_train_score=True)
I think this is because you assigned n_jobs=-1. Keeping the default (None), or manually setting it to 1 or some value less than the total number of cores you have, should prevent the freeze, although it will increase computation time.
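A more conservative configuration might look like this (the estimator, distributions, and dataset are placeholders; only the n_jobs handling is the point):

```python
import os
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

# Leave one core free instead of n_jobs=-1, which claims every core
n_jobs = max(1, (os.cpu_count() or 2) - 1)

rf_random = RandomizedSearchCV(
    estimator=RandomForestRegressor(random_state=42),
    param_distributions={"n_estimators": [10, 50, 100]},
    n_iter=3,
    scoring="neg_mean_absolute_error",
    cv=3,
    random_state=42,
    n_jobs=n_jobs,
    return_train_score=True,
)
rf_random.fit(X, y)
```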