
Distributed-OMEZarrCreator

Run encapsulated Docker containers that use the BioFormats2Raw converter on Amazon Web Services infrastructure. BioFormats2Raw converts images to the next-generation file format (NGFF), .ome.zarr.

This code is an example of how to use distributed AWS infrastructure to run BioFormats2Raw. The AWS resources are configured using boto3 and the awscli. The worker is written in Python and is encapsulated in a Docker container. Four AWS components are minimally needed to run distributed jobs:

  1. An SQS queue
  2. An ECS cluster
  3. An S3 bucket
  4. A spot fleet of EC2 instances

All of them can be managed through the AWS Management Console. However, this code helps you get started quickly and, once everything is configured correctly, run a job autonomously. The code runs a script that links all these components and prepares the infrastructure to run a distributed job. When the job is completed, the code can also stop resources and clean up components. It also adds logging and alarms via CloudWatch, helping the user troubleshoot runs and destroy stuck machines.
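The wiring between the four components usually follows a single naming contract: one application name, set in the configuration, from which every resource name is derived. The sketch below is purely illustrative (the real naming convention lives in config.py), but it shows how one app name fans out into the four resources:

```python
def resource_names(app_name: str) -> dict:
    """Illustrative naming scheme for the four AWS components this tool
    links together. These names are placeholders, not the tool's actual
    convention; consult config.py for the real settings."""
    return {
        "sqs_queue": f"{app_name}Queue",            # holds one message per task
        "ecs_cluster": f"{app_name}Cluster",        # runs the worker containers
        "s3_bucket": f"{app_name.lower()}-bucket",  # input images and .ome.zarr output
        "spot_fleet": f"{app_name}SpotFleetRequestId",  # EC2 compute capacity
    }

names = resource_names("MyZarrJob")
```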

Running the code

Step 1

Edit the config.py file with all the relevant information for your job. Then, start creating the basic AWS resources by running the following script:

$ python run.py setup

This script initializes the resources in AWS. Note that the Docker registry is built separately, and you can modify the worker code to build your own. Any time you modify the worker code, you need to update the Docker registry using the Makefile inside the worker directory.

Step 2

After the first script runs successfully, the job can be submitted with the following command:

$ python run.py submitJob files/exampleJob.json

Running the script uploads the tasks that are configured in the JSON file.
You have to customize the exampleJob.json file with information that makes sense for your project. You'll want to figure out which information is shared across all tasks and which information makes each task unique.
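To illustrate that split between shared and per-task information, a job file for a queue-based setup like this one generally holds a block of shared settings plus a list of task groups, where each group becomes one message in the SQS queue. The structure and field names below are a hypothetical sketch, not the actual schema of exampleJob.json:

```json
{
  "shared_settings": {
    "output_bucket": "my-zarr-bucket",
    "output_prefix": "converted/"
  },
  "groups": [
    { "input": "plates/Plate1/Images/Index.idx.xml" },
    { "input": "plates/Plate2/Images/Index.idx.xml" }
  ]
}
```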

Step 3

After submitting the job to the queue, we can add computing power to process all tasks in AWS. This code starts a fleet of spot EC2 instances that will run the worker code. The worker code is encapsulated in Docker containers, and the code uses ECS services to place them on the EC2 instances. All of this is automated with the following command:

$ python run.py startCluster files/exampleFleet.json

After the cluster is ready, the code informs you that everything is set up and saves the spot fleet identifier in a file for further reference.
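The fleet file follows the standard AWS SpotFleetRequestConfigData schema. A trimmed example is shown below; the ARNs, AMI, key pair, and subnet are account-specific placeholders, and the real exampleFleet.json may set additional fields:

```json
{
  "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
  "AllocationStrategy": "lowestPrice",
  "TargetCapacity": 4,
  "SpotPrice": "0.10",
  "TerminateInstancesWithExpiration": true,
  "LaunchSpecifications": [
    {
      "ImageId": "ami-xxxxxxxx",
      "InstanceType": "m5.xlarge",
      "KeyName": "my-key-pair",
      "IamInstanceProfile": { "Arn": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole" },
      "SubnetId": "subnet-xxxxxxxx"
    }
  ]
}
```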

Step 4

When the cluster is up and running, you can monitor progress using the following command:

$ python run.py monitor files/APP_NAMESpotFleetRequestId.json

The file APP_NAMESpotFleetRequestId.json is created after the cluster is set up in Step 3. It is important to keep this monitor running if you want to automatically shut down computing resources when there are no more tasks in the queue (recommended).
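The monitor's shutdown decision comes down to two SQS queue attributes: ApproximateNumberOfMessages (tasks waiting) and ApproximateNumberOfMessagesNotVisible (tasks currently being processed). A minimal sketch of that decision logic, with the boto3 calls indicated only in comments so the function stays self-contained:

```python
def queue_is_drained(attributes: dict) -> bool:
    """True when no tasks are waiting and none are in flight.

    `attributes` is the dict SQS returns from GetQueueAttributes, e.g.:
        sqs.get_queue_attributes(
            QueueUrl=url,
            AttributeNames=["ApproximateNumberOfMessages",
                            "ApproximateNumberOfMessagesNotVisible"],
        )["Attributes"]
    (SQS returns the counts as strings.)
    """
    waiting = int(attributes["ApproximateNumberOfMessages"])
    in_flight = int(attributes["ApproximateNumberOfMessagesNotVisible"])
    return waiting == 0 and in_flight == 0

# Once queue_is_drained(...) is True, a monitor can cancel the spot fleet
# (ec2.cancel_spot_fleet_requests) and scale the ECS service down to zero.
```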

Documentation

See our full documentation website for more information about each step of the process.

Distributed-OMEZarrCreator schematic

Contributors

erinweisbart, will-moore


distributed-omezarrcreator's Issues

Determine BioFormats2Raw computational requirements

Trying two methods to determine the optimal setup/computational requirements:

  1. Download all files onto an EBS volume
  2. Use S3FS to pull images from S3

Starting with 1):
I have an EBS volume mounted as /ebs_tmp, sized at 2.5x the plate, with all images downloaded to it.

Enter a shell in the Docker container, allowing access to /ebs_tmp:

$ sudo docker run -it --rm --entrypoint /bin/sh -v ~/ebs_tmp:/ebs_tmp openmicroscopy/bioformats2raw:latest

Run bioformats2raw:

$ sh /opt/bioformats2raw/bin/bioformats2raw /ebs_tmp/PLATE/Images/Index.idx.xml /ebs_tmp/images_zarr/PLATE.ome.zarr

Fix job done status to plate key in .zattrs

"In your case, the top-level .zattrs will be created early in the process (with the bioformats2raw.layout version) but it will only be updated at the end with the plate metadata. So the presence of a plate key would likely be a good indicator that the conversion process ran to completion."
