
Comments (9)

dietmarwo avatar dietmarwo commented on September 27, 2024 1

Indeed, in SMAC3 the "Scenario" supports an "n_jobs" parameter with which you can also execute multiple runs in parallel. It may indeed be a good solution. I will also have a look at this survey.

from alns.

N-Wouda avatar N-Wouda commented on September 27, 2024

Hi @dietmarwo! 👋 How does this compare to the ideas laid out in #109? I think we can combine things, but I'm hesitant to build a custom tuner just for ALNS. It might be easier to hook into existing tuning software instead, but that needs to be explored further.


dietmarwo avatar dietmarwo commented on September 27, 2024

I am not sure whether ML hyperparameter tuning is optimal here. In ML we usually use dedicated hardware (TPUs/GPUs), which limits parallel evaluation of hyperparameter settings. ALNS is often applied to objective functions that can be evaluated single-threaded, so on a modern CPU you could run >= 16 experiments in parallel. Nevertheless, you are right that applying these existing tools is better than nothing. But there are alternatives that avoid creating a custom tuner:

  • Start with a simple example section similar to "Tuning the destroy rate"
  • You could apply a parallel optimizer that is not specific to hyperparameters but can evaluate an objective Python function in parallel - see for instance https://github.com/dietmarwo/fast-cma-es/blob/master/tutorials/Sweep.adoc .
    In your case this "objective Python function" would be alns.iterate for specific meta-parameters chosen by the (meta-)optimization algorithm. With a time restriction of 60 seconds per run, on a 16-core machine (32 hardware threads) you could execute 60 × 32 = 1920 runs per hour. So even when optimizing multiple meta-parameters at once, you could expect good results after a few hours.

keras-tuner looks very ML-specific to me, but
ray.tune may indeed be a valid choice, since it supports parallel parameter testing and can even utilize more than one CPU in a cluster.
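The parallel-sweep idea can be sketched with nothing but the standard library. `alns_objective` below is a hypothetical stand-in for one time-limited ALNS run; a real version would call `alns.iterate` with the chosen meta-parameters and return the best objective value found:

```python
from multiprocessing import Pool


def alns_objective(params):
    """Hypothetical stand-in for one time-limited ALNS run: takes a
    meta-parameter setting and returns the best objective value found.
    A toy surrogate with its optimum at (0.3, 0.1) stands in here."""
    destroy_rate, reaction_factor = params
    return (destroy_rate - 0.3) ** 2 + (reaction_factor - 0.1) ** 2


def parallel_sweep(candidates, workers=16):
    """Evaluate candidate meta-parameter settings in parallel and
    return (score, setting) pairs sorted from best to worst."""
    with Pool(processes=workers) as pool:
        scores = pool.map(alns_objective, candidates)
    return sorted(zip(scores, candidates))


if __name__ == "__main__":
    grid = [(d / 10, r / 10) for d in range(1, 6) for r in range(1, 4)]
    print(parallel_sweep(grid, workers=4)[0])  # best (score, setting) pair
```

With 60-second runs, one `pool.map` call over 32 candidates corresponds to one "minute slot" of the 60 × 32 = 1920 runs-per-hour arithmetic above.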


N-Wouda avatar N-Wouda commented on September 27, 2024

I have had some success applying SMAC3 to tune an ALNS before - some example code for setting that up is available here, in src/tune.py. That's now a few years old so it might no longer work, but it should not be too hard to adapt if needed.

Alternatively:

  • We could add more to the docs about how to run these experiments using a hand-rolled tuner (like we did for the flow shop example). This'd have to be fast, showing just the general ideas but not performing a complete, many-hours-long optimisation run.
  • Find some other tuner? I have used SMAC for algorithm configuration, and also briefly looked at irace and ParamILS in the past. There might be alternatives that suit ALNS better: this survey seems like a promising start to finding those.
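A hand-rolled tuner in the spirit of the flow shop example can be very small. Everything here is illustrative: `run_alns` is a hypothetical wrapper around one seeded ALNS run, with a toy objective standing in for the real thing:

```python
import random


def run_alns(seed, destroy_rate):
    """Hypothetical wrapper around one ALNS run; a real version would
    construct the ALNS object with this seed, iterate, and return the
    best objective value found. A toy surrogate favouring destroy
    rates near 0.25 stands in here."""
    rng = random.Random(seed)
    return (destroy_rate - 0.25) ** 2 + 1e-3 * rng.random()


def random_search(num_trials=20, seed=0):
    """Minimal hand-rolled tuner: sample settings at random and keep
    the best. Fast enough to show the general idea in the docs."""
    rng = random.Random(seed)
    best_score, best_rate = float("inf"), None
    for trial in range(num_trials):
        destroy_rate = rng.uniform(0.05, 0.5)
        score = run_alns(seed=trial, destroy_rate=destroy_rate)
        if score < best_score:
            best_score, best_rate = score, destroy_rate
    return best_score, best_rate
```

Swapping the loop body for a call into a real tuner (SMAC, irace, ...) keeps the same shape: the tuner proposes settings, `run_alns` scores them.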


N-Wouda avatar N-Wouda commented on September 27, 2024

Great, let me know what you find! This could be a cool addition to ALNS.


dietmarwo avatar dietmarwo commented on September 27, 2024

Related to the parallelization topic:
I am currently thinking about how to add parallelization to ALNS itself. For instance, I could implement a "parallel_repair" operator that applies several repairs in parallel and submits only the one giving the best objective value. But this would probably be inferior to real parallelism implemented in the algorithm itself: parallel destroys/repairs would use all repair results, not just the best one. Do you have any thoughts / plans about this?
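For concreteness, the "parallel_repair" idea might look like the sketch below. All names are illustrative rather than part of the alns API; repairs are plain callables from destroyed state to repaired state:

```python
from concurrent.futures import ThreadPoolExecutor


def parallel_repair(destroyed, repair_ops, objective, workers=4):
    """Apply several repair operators to the same destroyed state
    concurrently and keep only the repaired state with the best
    (lowest) objective value. Threads suffice if the repairs release
    the GIL (e.g. call into C code); pure-Python repairs would need a
    process pool instead."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        repaired = list(pool.map(lambda op: op(destroyed), repair_ops))
    return min(repaired, key=objective)
```

As noted above, this discards all but the winning repair, which is exactly the limitation compared to parallelism inside the algorithm itself.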


N-Wouda avatar N-Wouda commented on September 27, 2024

We used to have an issue open about just that, here: #59. I think this is the most basic parallel ALNS paper out there: PALNS. Whether that's the best way to do it is an open question. There are some papers from the past few years that describe parallel ALNS implementations, but I've not seen any that really convinced me they are significantly better than the obvious parallelisation of doing $N$ independent ALNS runs in parallel and returning the best solution out of those. I don't think I can help you much in this direction.
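That obvious baseline is easy to state precisely. `single_run` is again a hypothetical stand-in for one seeded ALNS run:

```python
import random
from multiprocessing import Pool


def single_run(seed):
    """Hypothetical single ALNS run: a real version would build the
    ALNS object with this seed, iterate, and return the best objective
    value found. A seeded toy value stands in here."""
    return random.Random(seed).random()


def best_of_n(num_runs=16, workers=4):
    """Run N independent (differently seeded) ALNS instances in
    parallel and return the best result across all of them."""
    with Pool(processes=workers) as pool:
        return min(pool.map(single_run, range(num_runs)))
```

Any parallel ALNS variant would have to beat this embarrassingly parallel baseline to justify its extra complexity.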


dietmarwo avatar dietmarwo commented on September 27, 2024

> obvious parallelisation of doing independent ALNS runs in parallel

With evolutionary optimization - which I am more familiar with - we face the same problem. Sometimes the "parallel runs" idea works very well, but sometimes it is better to parallelize the evaluation of a population. Population-based EAs can support this by providing an ask/tell interface; the user can then apply parallel evaluation themselves. The question is whether ALNS could support a similar idea: ask the user to apply a list of destroy/repair pairs and return the resulting states. The advantage is that ALNS doesn't have to perform any parallelization itself. So you could "do just ALNS really well" as planned, but still support parallelization. The only internal change would be that you would no longer update the internal state of the algorithm (its weights) for each destroy/repair pair separately, but would perform a kind of "batch update" for a list of destroy/repair pairs.
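A minimal sketch of that ask/tell interface, with hypothetical names throughout (none of this is the alns API): `ask` hands out a weighted batch of (destroy, repair) pairs, the caller evaluates them in parallel however it likes, and `tell` performs the batch weight update:

```python
import itertools
import random


class BatchALNS:
    """Sketch of an ask/tell-style ALNS: the caller evaluates batches
    of (destroy, repair) pairs, in parallel if desired, and reports
    rewards back for a single batch update of the operator weights."""

    def __init__(self, destroy_ops, repair_ops, decay=0.8, seed=0):
        self.pairs = list(itertools.product(destroy_ops, repair_ops))
        self.weights = {pair: 1.0 for pair in self.pairs}
        self.decay = decay
        self.rng = random.Random(seed)

    def ask(self, batch_size):
        """Select a batch of (destroy, repair) pairs, sampled
        proportionally to the current operator weights."""
        probs = [self.weights[p] for p in self.pairs]
        return self.rng.choices(self.pairs, weights=probs, k=batch_size)

    def tell(self, batch, rewards):
        """Batch update: blend each selected pair's weight with the
        reward its run earned, instead of updating after every run."""
        for pair, reward in zip(batch, rewards):
            self.weights[pair] = (self.decay * self.weights[pair]
                                  + (1 - self.decay) * reward)
```

The exponential blending mirrors the usual ALNS weight decay, just deferred until the whole batch has been evaluated.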


N-Wouda avatar N-Wouda commented on September 27, 2024

I think something like that should be possible, but I don't have the time right now to mock something up. It might be tricky to do this without breaking existing interfaces. Would you be willing to have a go at this if you're interested?

