Comments (5)
Maybe it's a resource problem of some kind? Are you monitoring the logs?
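If you aren't already, tailing the learner pod's logs with plain kubectl is a quick way to watch for stalls. A minimal sketch, assuming the pods carry the `training_id` label used in the commands below; substitute your own job ID:

```bash
# Find the learner pod for a given FfDL training job and follow its logs.
# The training_id label value is an example; use your own job's ID.
POD=$(kubectl get pods -l training_id=training-BzIb89qzR -o name | head -n 1)
kubectl logs -f "$POD"
```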
The CLI tool has a `delete` command; that would be the best way to stop a job. The second-best way is to try something like `kubectl delete pod,pvc,deploy,svc,statefulset,secrets,configmap --selector training_id=training-BzIb89qzR`.
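For reference, a sketch of both approaches. The `ffdl` binary name and the exact `delete` syntax are assumptions; check your CLI's help output for the exact form:

```bash
# Preferred: stop the job through the FfDL CLI.
# (Assumes the binary is named `ffdl` and `delete` accepts the training ID.)
ffdl delete training-BzIb89qzR

# Fallback: delete the job's Kubernetes resources directly by label.
kubectl delete pod,pvc,deploy,svc,statefulset,secrets,configmap \
  --selector training_id=training-BzIb89qzR
```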
Can you ssh into the learner container with `kubectl exec -it <name-of-learner-pod> -- bash` and look around? Maybe do an `ls -l $JOB_STATE_DIR`? And the `$LOG_DIR` too.
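Spelled out, that debugging session might look like this (the pod name is a placeholder; `JOB_STATE_DIR` and `LOG_DIR` are the directory variables referenced above):

```bash
# Open a shell inside the learner pod (replace the placeholder name).
kubectl exec -it <name-of-learner-pod> -- bash

# Inside the container: check that the job-state and log directories
# exist, are writable, and are actually accumulating files.
ls -l "$JOB_STATE_DIR"
ls -l "$LOG_DIR"
df -h "$JOB_STATE_DIR"   # a full or unmounted volume would stall checkpointing
```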
I ssh'ed into the learner container and nothing looks suspicious on this side. Also, `kubectl describe pod <name-of-learner-pod>` doesn't really show anything unusual.
I am constantly monitoring the logs, and the training still hangs at a certain step (now 1100, because I lowered the frequency of saving .ckpt files) and has not proceeded for hours. The tail of the log:
```
0.021782178, DetectionBoxes_Precision/mAP@.50IOU = 0.070287526, DetectionBoxes_Precision/mAP@.75IOU = 0.01808199, DetectionBoxes_Recall/AR@1 = 0.064615384, DetectionBoxes_Recall/AR@10 = 0.124615386, DetectionBoxes_Recall/AR@100 = 0.16307692, DetectionBoxes_Recall/AR@100 (large) = 0.29642856, DetectionBoxes_Recall/AR@100 (medium) = 0.06666667, DetectionBoxes_Recall/AR@100 (small) = 0.05, Loss/BoxClassifierLoss/classification_loss = 0.048917063, Loss/BoxClassifierLoss/localization_loss = 0.047601078, Loss/RPNLoss/localization_loss = 0.19058931, Loss/RPNLoss/objectness_loss = 0.21819682, Loss/total_loss = 0.5053042, global_step = 1100, learning_rate = 0.0003, loss = 0.5053042
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 1100: /mnt/results/tuev-od-output/training-LU2Xd7PmR/training/model.ckpt-1100
I1206 21:27:24.838608 140339569706752 tf_logging.py:115] Saving 'checkpoint_path' summary for global step 1100: /mnt/results/tuev-od-output/training-LU2Xd7PmR/training/model.ckpt-1100
INFO:tensorflow:global_step/sec: 0.199728
I1206 21:27:25.806301 140339569706752 tf_logging.py:115] global_step/sec: 0.199728
INFO:tensorflow:loss = 0.12553665, step = 1100 (125.171 sec)
I1206 21:27:25.807365 140339569706752 tf_logging.py:115] loss = 0.12553665, step = 1100 (125.171 sec)
```
It could be because the default open source version of FfDL sets very small resource requirements for our helper pods. You can modify the helm chart values at https://github.com/IBM/FfDL/blob/master/values.yaml#L30-L32 to set `milli_cpu = 500` and `mem_in_mb = 1500`, then perform `helm upgrade <your ffdl chart name> .`
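As a sketch, the same change via `--set` flags. This assumes `milli_cpu` and `mem_in_mb` are top-level keys in values.yaml; if they are nested under another key, use the dotted path to them instead:

```bash
# Raise the helper-pod resource requests and roll the release in place.
# Key names are assumed to be top-level, per the linked values.yaml lines.
helm upgrade <your-ffdl-release> . \
  --set milli_cpu=500 \
  --set mem_in_mb=1500
```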
I updated the helm charts and it seems to work - my jobs are running through again. But if I start to increase the save_checkpoint_step (anything greater than every 50th step), the pod seems to require too many resources and the job gets killed, so the setup is not optimal. If I increase milli_cpu and mem_in_mb, the same thing happens.
We are running the following bare metal server from IBM Cloud, so I doubt it's a resource problem of the cluster itself, but rather a resource allocation problem in the configs.
```
16 Cores 128GB RAM
Bare Metal
mg1c.16x128
1 K80 GPU card
2TB SATA primary disk
960GB SSD secondary disk
10Gbps bonded network speed
```
Any thoughts that could help?
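One way to check whether Kubernetes is killing the learner for exceeding its memory limit, rather than the node running out of capacity (the `training_id` label follows the convention used earlier in this thread):

```bash
# Print each matching pod's last termination reason; "OOMKilled" means the
# container hit its memory limit, pointing at allocation, not the node.
kubectl get pods -l training_id=training-LU2Xd7PmR \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.containerStatuses[*].lastState.terminated.reason}{"\n"}{end}'

# Recent cluster events also record evictions and OOM kills.
kubectl get events --sort-by=.metadata.creationTimestamp | tail -n 20
```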
Closing this issue, since it was an IKS issue, not a FfDL issue. Thanks for your help!