Comments (9)
@rasbt, I don't think there's much we can do here wrt early termination from running out of time on a PBS job. AFAIK those terminations immediately kill the job, and there's no way to gracefully exit the program at that point.
However, we can certainly catch a keyboard interrupt and store the best discovered pipeline so far.
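A minimal sketch of what that could look like, assuming a hypothetical per-generation loop (evolve_one_generation, export_pipeline, num_generations, and population are placeholders here, not existing TPOT functions):

# catch Ctrl+C during the evolutionary loop and keep the best pipeline found so far
best_pipeline = None
try:
    for generation in range(num_generations):
        best_pipeline = evolve_one_generation(population)
except KeyboardInterrupt:
    print('Interrupted -- exporting the best pipeline discovered so far.')
if best_pipeline is not None:
    export_pipeline(best_pipeline, 'best_pipeline.py')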
@rhiever Yes, that's a bit tricky. I would suggest adding a default option for writing two files, model_param.yaml and model_param.yaml.tmp, on the fly. After each iteration, we write the current parameters to model_param.yaml.tmp; then we use model_param.yaml.tmp to overwrite model_param.yaml. I think this 2-step approach is safer (accounting for rare scenarios where the job quits while writing one of the files).
Okay, this sounds pretty complex, right? However, writing small files in Python is pretty quick (especially compared to one iteration of the DEAP algorithm), so it wouldn't really impact the computational efficiency.
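A minimal sketch of that two-step write, assuming PyYAML is available and that params is a plain dict of the state we want to keep (write_params_safely is just a hypothetical helper name):

import os
import yaml

def write_params_safely(params, path='model_param.yaml'):
    # write the .tmp file first, then swap it in, so a crash mid-write
    # never leaves a truncated model_param.yaml behind
    tmp_path = path + '.tmp'
    with open(tmp_path, 'w') as f:
        yaml.safe_dump(params, f)
    os.replace(tmp_path, path)  # atomic rename; replaces the old file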
The idea is to store in these yaml files only the parameters that are essential for reconstructing the last state of the model. I can see several reasons why this is useful:
- avoid starting from scratch if the job crashed
- run additional iterations if the results are not satisfactory
- re-use parameters from other models that may come in handy in related projects
- keep a record of the experiment
I would therefore suggest implementing a dump_params method (or function), e.g. model.dump_params('current_state.yaml'), that can be called in each iteration of the pipeline evaluation by default:
from shutil import copyfile

# run experiment
for x in range(something):  # `something` = number of iterations/generations
    evolve_model()
    model.dump_params('current_state.yaml.tmp')
    copyfile('current_state.yaml.tmp', 'current_state.yaml')
And the load method:
new_model = XXX()
new_model.load_params('current_state.yaml')
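As a rough sketch of that dump/load pair, assuming PyYAML and a scikit-learn-style get_params/set_params interface (neither function exists in TPOT today):

import yaml

def dump_params(model, path):
    # persist only the constructor parameters, in a human-readable form
    with open(path, 'w') as f:
        yaml.safe_dump({'parameters': model.get_params()}, f)

def load_params(model, path):
    # re-apply the stored constructor parameters to a fresh model object
    with open(path) as f:
        state = yaml.safe_load(f)
    model.set_params(**state['parameters'])
    return model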
What about just pickling the model? Haven't tested pickle with DEAP, but in theory that would make life easier.
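For comparison, the pickle route would be just a couple of lines, assuming the model object (and whatever DEAP attaches to it) is actually picklable, which is the open question:

import pickle

# save the whole model object in one shot
with open('current_state.pkl', 'wb') as f:
    pickle.dump(model, f)

# restore it later (same Python version / pickle protocol assumed)
with open('current_state.pkl', 'rb') as f:
    model = pickle.load(f)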
Yes, I think pickle would generally be more convenient since you wouldn't have to worry about the structure of the parameter files, etc. However, I think having a parameter file would be better for compatibility (e.g., Python 2 vs. 3, different pickle protocols, etc.). I think that pickle is fine if you are working only on one machine, but for record keeping, reproducibility, and sharing, a parameter file in a simple, human-readable format like yaml would be much better.
I'd love to see a demo of this if you have a good solution in mind. There certainly is an issue with model persistence in the command-line version: once the Python call ends, the model is gone.
My one request is that we try to avoid adding more external dependencies. We already have two major external dependencies (scikit-learn and DEAP), and I'm wary of adding more.
For example, to construct a "clone," we could first initialize a new object with the same parameter settings. Assume that we wrote the contents of lr.get_params() to a yaml file, so that lr.get_params() below stands in for yaml.load(file_stream)['parameters']:
>>> lr = LogisticRegression()
>>> lr.get_params()
{'tol': 0.0001, 'max_iter': 100, 'warm_start': False, 'solver': 'liblinear', 'C': 1.0, 'dual': False, 'fit_intercept': True, 'random_state': None, 'n_jobs': 1, 'multi_class': 'ovr', 'verbose': 0, 'class_weight': None, 'intercept_scaling': 1, 'penalty': 'l2'}
we could then initialize the new object as
>>> lr2 = LogisticRegression(**lr.get_params())
and to set the "fitted" parameters, we just use setattr:
yaml_cont = yaml.load(file_stream)
for a in yaml_cont['attributes']:
    setattr(lr2, a, yaml_cont['attributes'][a])
Practical example:
>>> lr.fit([[1], [2], [3]], [0, 1, 1])
>>> setattr(lr, 'coef_', 99.9)
>>> getattr(lr, 'coef_')
99.9
Of course, we need to go a few levels further since we have multiple nested objects in a pipeline, but I think this should not be too difficult.
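A rough sketch of pushing the same get_params/setattr idea one level down for a scikit-learn Pipeline, assuming every step follows the estimator interface (this sidesteps how the fitted numpy arrays would be serialized to yaml, and ignores the custom-code operators mentioned below):

def dump_pipeline_state(pipeline):
    # per step: record the constructor parameters and the fitted attributes
    state = {}
    for name, est in pipeline.named_steps.items():
        fitted = {a: v for a, v in vars(est).items() if a.endswith('_')}
        state[name] = {'parameters': est.get_params(), 'attributes': fitted}
    return state

def restore_pipeline_state(pipeline, state):
    # assumes `pipeline` was rebuilt with the same steps and step names
    for name, est in pipeline.named_steps.items():
        est.set_params(**state[name]['parameters'])
        for a, v in state[name]['attributes'].items():
            setattr(est, a, v)
    return pipeline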
Yeah - doing this with nested pipeline objects is going to be a challenge, especially because some of those pipeline objects are functions of custom code. In fact, all of the pipelines are nested functions. I think the saved state should represent that.
Keep in mind the usability of the temp output and what might be possible... It's no big deal if it's just a log of the current state of the model upon exit, but what if you wanted to use that log as some sort of parameterized object for specifying a new model run based on the last attempt, or (in some blue-sky-thinking way) to use the temp output from the previous model to train the new model and skip certain generations...
@MichaelMarkieta I agree with you; maybe we should start with a simple log file to get it going, and we can later come up with a "parameter file" to directly initialize and parameterize the model as I mentioned above.