
Comments (9)

rhiever avatar rhiever commented on May 10, 2024

@rasbt, I don't think there's much we can do here wrt early termination from running out on time on a PBS job. AFAIK those terminations immediately kill the job, and there's no way to gracefully exit the program at that point.

However, we can certainly catch a keyboard interrupt and store the best discovered pipeline so far.

from tpot.

rasbt avatar rasbt commented on May 10, 2024

@rhiever Yes, that's a bit tricky. I would suggest adding a default option for writing two files

  • model_param.yaml
  • and model_param.yaml.tmp

on the fly. After each iteration, we write the current parameters to the model_param.yaml.tmp; then we use model_param.yaml.tmp to overwrite model_param.yaml; I think this 2-step approach is safer (accounting for rare scenarios where the job quits while writing one of the files).

Okay, this sounds pretty complex, right? However, writing small files in Python is pretty quick (especially compared to one iteration in the deap algo), so it wouldn't really impact the computational efficiency.
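The two-step write described above can be sketched as a small helper. This is a hypothetical function (`atomic_write` is not part of tpot); it uses `os.replace`, which renames atomically on the same filesystem, so the effect is the same as the tmp-then-overwrite scheme:

```python
import os

def atomic_write(path, text):
    # Write to a sibling .tmp file first, then rename it over the target.
    # os.replace is atomic on the same filesystem, so a crash mid-write
    # can never leave a half-written model_param.yaml behind.
    tmp_path = path + '.tmp'
    with open(tmp_path, 'w') as f:
        f.write(text)
    os.replace(tmp_path, path)
```

Calling this once per iteration keeps `model_param.yaml` always readable, even if the job is killed while the `.tmp` file is being written.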

The idea is to store only the parameters that are essential for reconstructing the last state of the model in these yaml files. I can see several reasons why this is useful:

  • avoid starting from scratch if the job crashed
  • run additional iterations if the results are not satisfactory
  • re-use parameters from other models that may come in handy in related projects
  • having a record of the experiment

I would therefore suggest implementing a dump_params method (or function)

 model.dump_params('current_state.yaml')

that can be called in each iteration in the pipeline evaluation by default.

 # run experiment
 from shutil import copyfile

 for x in range(something):
     evolve_model()
     model.dump_params('current_state.yaml.tmp')
     copyfile('current_state.yaml.tmp', 'current_state.yaml')

And the load method

new_model = XXX()
new_model.load_params('current_state.yaml')
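A minimal sketch of what such a `dump_params` / `load_params` pair could look like. `EvolvingModel` is a hypothetical stand-in for the tpot model, and json is used here instead of yaml only so the sketch needs nothing outside the standard library; the structure would be the same with PyYAML:

```python
import json

class EvolvingModel:
    # Hypothetical stand-in for the tpot model object.
    def __init__(self, **params):
        self.params = params

    def dump_params(self, path):
        # Serialize only the state needed to reconstruct the model.
        with open(path, 'w') as f:
            json.dump(self.params, f)

    def load_params(self, path):
        # Restore the parameters written by dump_params.
        with open(path) as f:
            self.params = json.load(f)
        return self
```

The parameter file stays human readable, so it doubles as the experiment record mentioned above.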


rhiever avatar rhiever commented on May 10, 2024

What about just pickling the model? I haven't tested pickle with DEAP, but in theory that would make life easier.
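For comparison, the pickle route is just a pair of wrappers. This is a generic sketch (not tpot code): it works for any picklable object, with the caveat that DEAP individuals generally need their `creator` classes defined before unpickling:

```python
import pickle

def save_model(model, path):
    # Serialize the whole object graph in one go.
    with open(path, 'wb') as f:
        pickle.dump(model, f)

def load_model(path):
    # Restore it; the classes involved must be importable at load time.
    with open(path, 'rb') as f:
        return pickle.load(f)
```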


rasbt avatar rasbt commented on May 10, 2024

Yes, I think pickle would generally be more convenient since you wouldn't have to worry about the structure of the parameter files etc. However, I think having a parameter file would be better for compatibility (e.g., python 2 vs 3, different pickle protocols etc.). I think that pickle is fine if you are working only on one machine, but for record keeping, reproducibility, and sharing, a parameter file in a simple, human readable format like yaml would be much better.


rhiever avatar rhiever commented on May 10, 2024

I'd love to see a demo of this if you have a good solution in mind. There certainly is an issue with model persistency in the command-line version: once the Python call ends, the model is gone.

My one request is that we try to avoid adding more external dependencies. We already have two major external dependencies (scikit-learn and DEAP), and I'm wary of adding more.


rasbt avatar rasbt commented on May 10, 2024

For example, to construct a "clone," we could first initialize a new object with the same parameter settings. Assume we wrote the output of lr.get_params() to a yaml file, so that yaml.load(file_stream)['parameters'] returns the same dictionary that lr.get_params() does:

>>> lr = LogisticRegression()
>>> lr.get_params()
{'tol': 0.0001, 'max_iter': 100, 'warm_start': False, 'solver': 'liblinear', 'C': 1.0, 'dual': False, 'fit_intercept': True, 'random_state': None, 'n_jobs': 1, 'multi_class': 'ovr', 'verbose': 0, 'class_weight': None, 'intercept_scaling': 1, 'penalty': 'l2'}

we could then initialize the new object as

>>> lr2 = LogisticRegression(**lr.get_params())

and to set the "fitted" parameters, we just use setattr:

yaml_cont = yaml.load(file_stream)
for a in yaml_cont['attributes']:
    setattr(lr2, a, yaml_cont['attributes'][a])

Practical example:

>>> lr.fit([[1], [2], [3]], [0, 1, 1])
>>> setattr(lr, 'coef_', 99.9)
>>> getattr(lr, 'coef_')
99.9

Of course, we need to go a few levels further since we have multiple nested objects in a pipeline, but I think this should not be too difficult.
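Putting the two halves together, the full round trip looks like this. `TinyEstimator` is a hypothetical stand-in for a scikit-learn estimator (its `get_params` plays the same role), and the `'parameters'` / `'attributes'` keys mirror the proposed yaml layout:

```python
class TinyEstimator:
    # Hypothetical estimator: constructor args are the hyperparameters,
    # fitted attributes carry a trailing underscore, as in scikit-learn.
    def __init__(self, C=1.0, tol=0.0001):
        self.C = C
        self.tol = tol

    def get_params(self):
        return {'C': self.C, 'tol': self.tol}

    def fit(self):
        self.coef_ = [0.5]  # stands in for a learned coefficient
        return self

original = TinyEstimator(C=2.0).fit()
state = {'parameters': original.get_params(),
         'attributes': {'coef_': original.coef_}}

# rebuild: constructor kwargs first, then setattr for the fitted attributes
clone = TinyEstimator(**state['parameters'])
for name, value in state['attributes'].items():
    setattr(clone, name, value)
```

Serializing `state` to yaml and reading it back is then the only missing step.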


rhiever avatar rhiever commented on May 10, 2024

Yeah - doing this with nested pipeline objects is going to be a challenge, especially because some of those pipeline objects are functions of custom code. In fact, all of the pipelines are nested functions. I think the saved state should represent that.


MichaelMarkieta avatar MichaelMarkieta commented on May 10, 2024

Keep in mind the usability of the temp output and what might be possible. It's no big deal if it's just a log of the current state of the model upon exit, but you might also want to use that log as some sort of parameterized object for specifying a new model run based on the last attempt, or (in some blue-sky-thinking way) use the temp output from the previous model to train the new model and skip certain generations.


rasbt avatar rasbt commented on May 10, 2024

@MichaelMarkieta I agree with you; maybe we should start with a simple log file to get it going, and we can later come up with a "parameter file" to directly initialize and parameterize the model as I mentioned above.

