Comments (6)
Evolving from our discussion today, here's the final schema I propose:

```yaml
storage:
  - name: imagenet-bucket
    source: ~/imagenet/
    force_backend: [s3, gcs]  # Could be [s3] or [gcs]; default: None
    persistent: True

storage_mounts:
  - storage: imagenet-bucket
    mount_path: /imagenet/
```
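To make the decoupling concrete, here is a minimal sketch of how a loader could join `storage_mounts` entries back to their `storage` definitions by name. The function and field names are illustrative only, not SkyPilot's actual internals:

```python
# Hypothetical resolver for the proposed schema; names are illustrative,
# not SkyPilot's real implementation.

def resolve_mounts(config):
    """Join each storage_mounts entry to its storage definition by name."""
    stores = {s["name"]: s for s in config.get("storage", [])}
    resolved = []
    for mount in config.get("storage_mounts", []):
        name = mount["storage"]
        if name not in stores:
            raise ValueError(
                f"storage_mounts references unknown storage: {name}")
        resolved.append((stores[name], mount["mount_path"]))
    return resolved

# The YAML above, as a parser would load it:
config = {
    "storage": [
        {"name": "imagenet-bucket", "source": "~/imagenet/",
         "force_backend": ["s3", "gcs"], "persistent": True},
    ],
    "storage_mounts": [
        {"storage": "imagenet-bucket", "mount_path": "/imagenet/"},
    ],
}
```

One benefit of this shape: a single storage object can be mounted at several paths by adding more `storage_mounts` entries, without duplicating the storage definition.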
- Thinking a little more about it, I propose we change `force_upload` to `force_backend: [s3, gcs, blob]`. The reason we changed from `StorageBackend` to `AbstractStore` in our code was that a "backend" class implied a singleton class for accessing a storage service, but we were instantiating multiple instances of the "backend", which didn't make semantic sense. Here, however, "backend" seems appropriate because it is indeed selecting a backing service that provides object storage.
- I would keep the key `persistent` (instead of `persist_on_cloud`), since that's the only notion of persistence (files can persist or not persist only on the cloud; local persistence is under the user's control).
- `storage_mounts: {imagenet-bucket: /myimagenet/}` still feels a little iffy to me. I feel keys in a YAML should indicate a property name instead of a value. Borrowing ideas from k8s, our `storage_mounts` should look like:

  ```yaml
  storage_mounts:
    - storage: imagenet-bucket
      mount_path: /imagenet/
  ```

  I agree this adds an additional line, but it allows us to have a well-defined and consistent schema. My opinion on this is influenced by k8s, so please feel free to push back if you feel this is too much boilerplate to write.
- I see `sky.Storage` as a lower-level implementation of `file_mounts`. We can (and should) replace the internals of `file_mounts` to use `sky.Storage` instead of the Ray autoscaler/rsync. For instance, behind the scenes we should be able to convert a YAML with `file_mounts: {/imagenet/: /home/romilb/imagenet/}` into a `Storage` object plus a storage_mount in the task.
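The conversion described in that last point could be sketched roughly as below; all names here are hypothetical stand-ins, not SkyPilot's real internals:

```python
# Hedged sketch: rewrite a file_mounts dict into a storage definition plus
# a storage_mount entry. The "filemount-{i}" naming is an assumption made
# purely for illustration.

def file_mounts_to_storage(file_mounts):
    storage, storage_mounts = [], []
    for i, (mount_path, source) in enumerate(sorted(file_mounts.items())):
        name = f"filemount-{i}"  # auto-generated storage name (assumption)
        storage.append({"name": name, "source": source, "persistent": False})
        storage_mounts.append({"storage": name, "mount_path": mount_path})
    return storage, storage_mounts

storage, mounts = file_mounts_to_storage(
    {"/imagenet/": "/home/romilb/imagenet/"})
```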
from skypilot.
@romilbhardwaj @concretevitamin Since `mount_path` is no longer an argument to `Storage`, what are your suggestions for 1) where to put `mount_path` in the config, and 2) how to scale the YAML to multiple `Storage` objects?
Good point. We could take the Kubernetes approach and define storage objects as a list of storages, each with a unique id (e.g., see `volumes` in k8s Pods), and then specify this id in the `storage_mounts` field like so:

```yaml
name: resnet-app
workdir: ~/Downloads/tpu

resources:
  cloud: aws
  instance_type: p3.2xlarge

storage:
  - name: imagenet-bucket
    source_path: s3://imagenet-bucket
  - name: mscoco-bucket
    source_path: s3://mscoco-bucket

storage_mounts:
  - storage: imagenet-bucket
    mount: /imagenet/
  - storage: mscoco-bucket
    mount: /mscoco/
```
If we want to stay true to our Python API, then we can define storage objects inline in `storage_mounts`:

```yaml
storage_mounts:
  - storage:
      name: imagenet-bucket
      source_path: s3://imagenet-bucket
    mount: /imagenet/
```

What do you guys think?
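If both forms were allowed, a loader would need to accept `storage:` either as a string id referencing a top-level definition or as an inline definition. A minimal sketch of that dispatch (names are illustrative, not real SkyPilot code):

```python
# Hypothetical normalizer: `storage` may be a string reference into the
# top-level `storage:` list, or an inline definition dict.

def normalize_mount(entry, named_stores):
    storage = entry["storage"]
    if isinstance(storage, str):
        store = named_stores[storage]      # reference form
    else:
        store = storage                    # inline form
    return store, entry["mount"]

named = {"imagenet-bucket":
         {"name": "imagenet-bucket", "source_path": "s3://imagenet-bucket"}}

ref = {"storage": "imagenet-bucket", "mount": "/imagenet/"}
inline = {"storage": {"name": "mscoco-bucket",
                      "source_path": "s3://mscoco-bucket"},
          "mount": "/mscoco/"}
```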
How should we express `get_or_copy_to_s3()`? A direct attempt:

```yaml
storage_mounts:
  - storage:
      name: imagenet-bucket
      source_path: /local/data/imagenet
      get_or_copy_to_s3: true
    mount: /imagenet
```
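The intended semantics of that flag, as I understand them, would be roughly "reuse the bucket if it already exists, otherwise upload the local source first". A hedged sketch, where `bucket_exists` and `upload` are hypothetical callables injected for illustration:

```python
# Hedged sketch of get_or_copy_to_s3 semantics; not SkyPilot's actual code.

def get_or_copy_to_s3(name, source_path, bucket_exists, upload):
    if not bucket_exists(name):
        upload(source_path, name)   # copy: first-time upload of local data
    return f"s3://{name}"           # get: URI of the (now) existing bucket

uploads = []
uri = get_or_copy_to_s3(
    "imagenet-bucket", "/local/data/imagenet",
    bucket_exists=lambda n: False,
    upload=lambda src, n: uploads.append((src, n)))
```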
As we discussed today, here's the schema:

```yaml
name: resnet-app
workdir: ~/Downloads/tpu

resources:
  cloud: aws
  instance_type: p3.2xlarge

storage:
  - name: imagenet-bucket
    source: ~/imagenet/
    force_upload: [s3, gcs]  # Could be [s3] or [gcs]; default: None
    persistent: True

storage_mounts: {imagenet-bucket: /myimagenet/}
```
This has landed in #121 by @michaelzhiluo.