
Comments (3)

Ssukriti commented on July 25, 2024

Thanks for the findings. We are working on the Kubernetes solution in the platform team next week.

> The "PyTorchJob" operator/CR from the standard Kubeflow training operator allows us to run multiple processes within a single container in a pod (Master pod)

We will be testing it with the Kubeflow training operator and will update when the work is done as part of issue #88.

from fms-hf-tuning.

kmehant commented on July 25, 2024

> There is also the third option where the processes are distributed across multiple Kube pods, but this may be over-complex. This would be the standard Kubeflow training operator approach.

The "PyTorchJob" operator/CR from the standard Kubeflow training operator allows us to run multiple processes within a single container in a pod (the Master pod), like option 1, when we just want to run a multi-GPU, single-node training job. When we wish to spawn a multi-node, multi-GPU job, we would leverage the worker pods, where distributed environment variables (node rank, master address, port, etc.) are automatically injected by the operator. We simply replicate accelerate launch in the worker pod, and the node rank from the operator determines whether the pod is a worker pod or not. There is also a local rank, created by torch.distributed, that differentiates between all the processes on a node.
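A minimal sketch of the pattern described above: a launcher script inspecting the operator-injected environment variables to decide its role. The variable names (NODE_RANK, MASTER_ADDR, MASTER_PORT, LOCAL_RANK) follow common torch.distributed/torchrun conventions, but the exact names injected can vary by operator version and launcher, so treat this as illustrative:

```python
import os

def describe_role() -> str:
    """Classify this pod from launcher/operator-injected env vars.

    NOTE: variable names are illustrative; the exact names depend on the
    operator version and launcher (torchrun vs. accelerate launch).
    """
    node_rank = int(os.environ.get("NODE_RANK", "0"))     # injected per pod
    master_addr = os.environ.get("MASTER_ADDR", "localhost")
    master_port = os.environ.get("MASTER_PORT", "29500")
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))   # per-process, set by the launcher
    role = "master" if node_rank == 0 else "worker"
    return (f"{role} (node_rank={node_rank}, local_rank={local_rank}, "
            f"rendezvous={master_addr}:{master_port})")

# Simulate the env a worker pod (node rank 1) would see:
os.environ.update({"NODE_RANK": "1", "MASTER_ADDR": "10.0.0.5",
                   "MASTER_PORT": "29500", "LOCAL_RANK": "0"})
print(describe_role())
```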

In option 1, AFAIK, most of the popular container runtimes are multiprocess-friendly, and on the resource side, resource requests and limits are set at the container level in Kubernetes.


fabianlim commented on July 25, 2024

Consider the constrained problem of running GPU jobs within a single pod, where each GPU is handled by a single process. There are the following options for running multiple processes:

  1. all together in a single container, within a single Kube pod
  2. individually, each within its own container, with all containers housed within a single Kube pod

There is also the third option where the processes are distributed across multiple Kube pods, but this may be over-complex. This would be the standard Kubeflow training operator approach.

Hugging Face's recommendation is to run distributed training jobs using accelerate:

  • accelerate builds on top of torchrun. The main process spawns multiple worker processes for distributed training.
  • torchrun has a watchdog agent that handles things like fault tolerance.
  • processes communicate via various rendezvous backends (e.g., static or c10d).
  • GPUs communicate over GPU network interfaces (e.g., NCCL).
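The bullet points above rely on each process knowing its identity in the job. A sketch of the standard arithmetic a torchrun-style launcher uses to derive those identities (this omits the real launcher's rendezvous, elastic restarts, and backend setup):

```python
# How a torchrun-style launcher derives per-process identities.
# Sketch only: real torchrun also handles rendezvous, fault-tolerant
# restarts (its watchdog agent), and backend setup (e.g., NCCL).

def process_identities(num_nodes: int, nproc_per_node: int, node_rank: int):
    world_size = num_nodes * nproc_per_node
    return [
        {
            "local_rank": local_rank,                         # index on this node
            "rank": node_rank * nproc_per_node + local_rank,  # global index
            "world_size": world_size,
        }
        for local_rank in range(nproc_per_node)
    ]

# Identities of the processes spawned on node 1 of a 2-node x 4-GPU job:
for p in process_identities(num_nodes=2, nproc_per_node=4, node_rank=1):
    print(p)
```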

Option 1:

  • is Docker-compliant: running multiple processes in one container to parallelize work (e.g., running workers to parallelize an SQL query) is an accepted pattern. This is very much like a master process spawning many child processes for distributed training.
  • has some fault tolerance capabilities already built in.
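A toy sketch of the option-1 pattern, with Python's multiprocessing standing in for real GPU training processes: one master process inside a single container spawns one child per "GPU" and waits for them, much as a launcher does:

```python
import multiprocessing as mp

def worker(local_rank: int, world_size: int, results) -> None:
    # Stand-in for a training process bound to one GPU.
    results[local_rank] = f"rank {local_rank}/{world_size} done"

def launch(world_size: int) -> dict:
    # Master process spawns one child per "GPU", as a launcher would
    # do inside a single container (option 1).
    with mp.Manager() as mgr:
        results = mgr.dict()
        procs = [mp.Process(target=worker, args=(r, world_size, results))
                 for r in range(world_size)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        return dict(results)

if __name__ == "__main__":
    print(launch(4))
```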

Option 2:

  • although the HF blog seems to claim that accelerate is backward compatible with torchrun, it is not clear how much of that is true. Certainly, in the code there are a lot of accelerate-specific flags that will only be set by accelerate launch.
  • for Kube jobs it is mostly recommended to use the PyTorch job controller; here is an example for distributed CPU jobs that can be extrapolated to GPUs. They use torchrun, but it probably also works with accelerate launch (due to the similarities in the API). (Correction: this does not fit option 2, as it will launch each worker in its own pod.)
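For reference, the general shape of the PyTorchJob manifest being discussed, expressed here as a Python dict. The top-level field names follow the Kubeflow training operator's v1 API; the image, command, and replica counts are placeholders, not values from this thread:

```python
# Sketch of a PyTorchJob manifest as a Python dict. Field names follow
# the Kubeflow training operator v1 API; image, command, and replica
# counts below are illustrative placeholders.
pytorch_job = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "PyTorchJob",
    "metadata": {"name": "multinode-tuning-job"},  # placeholder name
    "spec": {
        "pytorchReplicaSpecs": {
            "Master": {
                "replicas": 1,
                "template": {"spec": {"containers": [{
                    "name": "pytorch",
                    "image": "example/trainer:latest",  # placeholder image
                    "command": ["torchrun", "--nproc-per-node=8", "train.py"],
                }]}},
            },
            "Worker": {
                # Each worker replica is its own pod; the operator injects
                # the rendezvous env vars (master address, rank, etc.).
                "replicas": 3,
                "template": {"spec": {"containers": [{
                    "name": "pytorch",
                    "image": "example/trainer:latest",
                    "command": ["torchrun", "--nproc-per-node=8", "train.py"],
                }]}},
            },
        }
    },
}
print(pytorch_job["kind"],
      "workers:", pytorch_job["spec"]["pytorchReplicaSpecs"]["Worker"]["replicas"])
```

Note how this makes the correction above concrete: the Worker replicas are separate pods, which is the multi-pod third option rather than option 2.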

