Comments (21)
@Tomcli I ran the example pytorch-launch-dist on two servers with 3 learners (but only 2 processes show up): 1 learner is on one server and 2 learners are on the other. The log only contains "node_rank=0" and "node_rank=1"; there is no "node_rank=2". This is the same as my issue above.
Could it be that there is no communication between the two servers?
Hi @Eric-Zhang1990, something may have timed out while you were initializing your process group, due to low bandwidth. Which backend did you use to initialize your process group: TCP, GLOO, or NCCL?
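In case it helps, here is a minimal sketch of initializing the process group explicitly with a longer timeout, assuming the env:// rendezvous that torch.distributed.launch sets up (MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE come from environment variables; the 60-minute timeout is an illustrative value, not a recommendation from this thread):

```python
from datetime import timedelta

import torch.distributed as dist

# env:// reads MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE from the environment,
# which torch.distributed.launch exports for each worker process.
dist.init_process_group(
    backend="gloo",                 # or "nccl" for GPU collectives
    init_method="env://",
    timeout=timedelta(minutes=60),  # raise this on slow links
)
print("rank %d of %d joined" % (dist.get_rank(), dist.get_world_size()))
```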
@Tomcli I have tried both NCCL and GLOO; both give the same result as above.
I used the tool "iperf" to test the bandwidth between the two servers; it is about 94.0 Mbits/sec.
Could this bandwidth cause the problem above? Thank you.
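For a rough sense of scale, here is a back-of-the-envelope sketch of what a 94 Mbit/s link means for gradient exchange, assuming a hypothetical model with 25M fp32 parameters (the model size is an assumption, not a number from this thread):

```python
# Hypothetical numbers: 25M fp32 parameters is an assumed model size.
params = 25_000_000
grad_bytes = params * 4          # fp32 gradients, 4 bytes each
link_bytes_per_s = 94e6 / 8      # 94 Mbit/s measured with iperf

# Time to ship one full copy of the gradients over the link, ignoring
# protocol overhead and any overlap with computation.
print(grad_bytes / link_bytes_per_s)  # ~8.5 seconds per exchange
```

At several seconds per gradient exchange, the link, not the GPUs, would dominate step time.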
@Tomcli We now use a 1000 Mbit link, which improves the training speed.
May I ask which bandwidth you use? Thank you.
@Tomcli Now that we use 1000 Mbit bandwidth, I ran a test comparing the speed of FfDL against the plain system env (using conda) on the same server. With the same settings, training with the system env is faster than training with FfDL. I don't know why; do you? Thank you.
Using FfDL:
Using system env (using conda):
There is a bandwidth cost to pay if you spread across two servers. What would be interesting is to see whether, with both GPUs on the same server, it is faster than the system env or not. Also, I would say the training has to be distributed across enough GPUs to offset the bandwidth cost and the extra communication overhead.
@animeshsingh I have done some tests comparing the speed of FfDL against the plain system env (using conda) on the same server. The comparisons follow:
2-GPU comparison:
(a) Using system env, 2 GPUs:
(b) Using FfDL, 2 learners, each learner with 1 GPU:
(c) Using FfDL, 1 learner with 2 GPUs:
RESULT (> means faster): speed of (a) > speed of (b) > speed of (c).
4-GPU comparison:
(d) Using system env, 4 GPUs:
(e) Using FfDL, 4 learners, each learner with 1 GPU:
(f) Using FfDL, 1 learner with 4 GPUs:
RESULT (> means faster): speed of (d) > speed of (e) > speed of (f).
(All runs are on the same server.) When using the system env, 4 GPUs are faster than 2 GPUs, which seems normal. However, when using FfDL, 2 learners with 1 GPU each ((b) above) are faster than 1 learner with 2 GPUs ((c) above), and for 4 GPUs, 4 learners with 1 GPU each ((e) above) are faster than 1 learner with 4 GPUs ((f) above).
But 2 learners with 1 GPU each ((b) above) are also faster than 4 learners with 1 GPU each ((e) above), which does not seem normal.
Can you help me explain these results? Thank you.
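For reference, a hypothetical helper like the following (the function and its parameters are illustrative, not taken from these tests) is one way to put all six runs on a common samples-per-second footing instead of comparing wall-clock times:

```python
import time

def throughput(train_step, steps=100, batch_size=32):
    """Run `steps` iterations of `train_step` and return samples/sec."""
    start = time.time()
    for _ in range(steps):
        train_step()
    return steps * batch_size / (time.time() - start)
```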
@Tomcli @animeshsingh I am running the maskrcnn-benchmark project (https://github.com/facebookresearch/maskrcnn-benchmark) on 3 nodes with 16 GPUs (2 nodes with 4 GPUs each, 1 node with 8 GPUs). The original code uses "torch.distributed.reduce()", but since I am on multiple nodes with multiple GPUs, should I use "torch.distributed.reduce_multigpu", "torch.distributed.all_reduce_multigpu", or something else? And will the choice affect the training speed? My training speed is very slow on 3 nodes and 16 GPUs, just like the comparison above.
Thank you.
Thanks @Eric-Zhang1990 for testing this thoroughly. With the same number of GPUs, running on bare metal directly, without the overhead of containers, is going to be faster.
The idea behind FfDL is to distribute training over multiple containers that can be spawned and killed on demand. This allows multiple users to share the same hardware backend and makes it possible to provide capabilities like batch scheduling, job queuing, monitoring, etc., which we are working towards by integrating with kube-batch. Users don't need to log in to individual machines and set things up; they are offered this as a service. Also, the user journey remains the same whether they are using PyTorch, TensorFlow, etc.
> But 2 learners with 1 GPU each ((b) above) are also faster than 4 learners with 1 GPU each ((e) above), which does not seem normal.
> Can you help me explain these results? Thank you.
It definitely doesn't seem normal, and we would like to reproduce and test more on our end. Are the two GPUs in the first case on the same machine, while the 4-GPU run is spread across two machines?
> @Tomcli @animeshsingh I am running the maskrcnn-benchmark project (https://github.com/facebookresearch/maskrcnn-benchmark) on 3 nodes with 16 GPUs (2 nodes with 4 GPUs each, 1 node with 8 GPUs). The original code uses "torch.distributed.reduce()", but since I am on multiple nodes with multiple GPUs, should I use "torch.distributed.reduce_multigpu", "torch.distributed.all_reduce_multigpu", or something else? And will the choice affect the training speed? My training speed is very slow on 3 nodes and 16 GPUs, just like the comparison above.
> Thank you.
I would assume reduce should be faster, given that only the process with rank dst is going to receive the final result.
https://pytorch.org/docs/stable/distributed.html
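To make the difference concrete, here is a minimal sketch of the two collectives, assuming the process group is already initialized (see the linked docs for the full semantics):

```python
import torch
import torch.distributed as dist

# Assumes dist.init_process_group(...) has already been called.
loss = torch.tensor([1.0])

# reduce: the summed result lands only on rank dst; the other ranks keep
# their local value, so nothing needs to be sent back to every worker.
dist.reduce(loss, dst=0, op=dist.ReduceOp.SUM)

# all_reduce: every rank ends up with the summed result, at the cost of
# the extra step of distributing it back to all workers.
dist.all_reduce(loss, op=dist.ReduceOp.SUM)
```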
> But 2 learners with 1 GPU each ((b) above) are also faster than 4 learners with 1 GPU each ((e) above), which does not seem normal.
> Can you help me explain these results? Thank you.

> It definitely doesn't seem normal, and we would like to reproduce and test more on our end. Are the two GPUs in the first case on the same machine, while the 4-GPU run is spread across two machines?
Thanks @animeshsingh for the kind reply. I tested them on the same machine (it has 4 GPUs). I also think it is not normal, but I don't know what could cause this phenomenon. As I described above, when I use 3 nodes with 16 GPUs (16 learners on 3 machines), it is much slower than 4 learners on the same machine.
We expected more GPUs to be faster than fewer GPUs, but it is the reverse.
@animeshsingh @Tomcli Reading the doc "FfDL/docs/gpu-guide.md" again, it says we should deploy FfDL with "helm install --set lcm.device_plugin=false .", but I didn't pass the parameter "lcm.device_plugin=false". Does that affect the training speed?
Thank you.
> Thanks @animeshsingh for the kind reply. I tested them on the same machine (it has 4 GPUs). I also think it is not normal, but I don't know what could cause this phenomenon. As I described above, when I use 3 nodes with 16 GPUs (16 learners on 3 machines), it is much slower than 4 learners on the same machine.
> We expected more GPUs to be faster than fewer GPUs, but it is the reverse.
On the same machine, more GPUs should definitely be faster. When going across machines, it depends on choosing the right backend for your hardware, as described here:
https://pytorch.org/docs/stable/distributed.html#which-backend-to-use
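A minimal sketch of the rule of thumb from that page, i.e. NCCL for distributed GPU training and GLOO for CPU training or as a fallback (the selection logic here is illustrative):

```python
import torch
import torch.distributed as dist

# Rule of thumb from the linked docs: NCCL for GPU collectives,
# GLOO for CPU tensors or when NCCL is unavailable.
backend = "nccl" if torch.cuda.is_available() else "gloo"
dist.init_process_group(backend=backend, init_method="env://")
```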
@animeshsingh I use the 'nccl' backend and the 'reduce' call that the original maskrcnn-benchmark provides. When I run maskrcnn-benchmark on 2 machines with plain PyTorch distributed training (not FfDL), it runs correctly with the 'gloo' backend, but an error occurs with 'nccl'. I am still trying to find a solution.
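One common way to narrow down an NCCL-only failure is to pin the network interface and turn on NCCL's own logging before initializing the process group. NCCL_SOCKET_IFNAME and NCCL_DEBUG are standard NCCL environment variables; the interface name "eno1" below is taken from the launch commands later in this thread:

```python
import os

import torch.distributed as dist

# Pin NCCL to the interface that actually routes between the machines;
# "eno1" is the master's interface from the launch commands below.
os.environ.setdefault("NCCL_SOCKET_IFNAME", "eno1")
# Make NCCL print its transport and ring setup, so failures become visible.
os.environ.setdefault("NCCL_DEBUG", "INFO")

dist.init_process_group(backend="nccl", init_method="env://")
```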
@animeshsingh Have you run the original maskrcnn-benchmark on FfDL? How does the speed compare between multiple GPUs on one machine and multiple GPUs on two or more machines? Thank you.
Hi @Eric-Zhang1990, sorry for the late reply. Can you show us the commands and specs you used for running the maskrcnn-benchmark on FfDL? Is it similar to
And did you specify 4 GPUs and 3 learners? Thanks.
@Tomcli My manifest .yml is similar to 'FfDL/etc/examples/pytorch-launch-dist/manifest.yml',
and my setup.sh is the same as 'FfDL/etc/examples/pytorch-launch-dist/setup.sh'.
Can you help me find where the problem is?
Thank you.
@Tomcli I did a test: I ran the same code on FfDL and with PyTorch's distributed training directly. PyTorch's distributed training is almost 2 times faster than FfDL; both runs use the same machines and GPUs.
FfDL:
PyTorch's distributed training directly:
Is it because of the microservices running on FfDL?
Thank you.
@Eric-Zhang1990 When running directly, are you on bare metal?
@animeshsingh Yes, I run the maskrcnn-benchmark in a conda env; the code and parameters are all the same. The commands I use are as follows (2 machines, each with 4 GPUs):
Master:
NCCL_SOCKET_IFNAME=eno1 python -m torch.distributed.launch --nproc_per_node=4 --nnodes=2 --node_rank=0 --master_addr="192.168.110.25" --master_port=1234 train_net.py
Node 1:
NCCL_SOCKET_IFNAME=enp129s0f0 python -m torch.distributed.launch --nproc_per_node=4 --nnodes=2 --node_rank=1 --master_addr="192.168.110.25" --master_port=1234 train_net.py
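For context, torch.distributed.launch starts one process per GPU on each node and passes each process a --local_rank argument. Here is a minimal sketch of the per-process setup a script like train_net.py is expected to perform (maskrcnn-benchmark's actual argument handling may differ):

```python
import argparse

import torch
import torch.distributed as dist

# torch.distributed.launch passes --local_rank to every worker process and
# exports MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE for env:// init.
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)
args = parser.parse_args()

torch.cuda.set_device(args.local_rank)  # bind this process to one GPU
dist.init_process_group(backend="nccl", init_method="env://")
```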