
CDPHE Pipelines

The pipelines included in this repository are meant for identification, characterization, and quality assessment of bacterial paired-end short-read sequencing data. They are designed to accommodate fastq files produced by Illumina instruments, as well as fastq files downloaded from NCBI's SRA database. Previous versions of the pipelines targeted specific virtual machines built on the Google Cloud Platform, but we are actively working to make the current versions platform agnostic.

INSTALLATION

The repository can be cloned with the following command:

git clone https://github.com/StaPH-B/CDPHE.git

The path to the repository should then be added to the $PATH variable on your Linux distribution.
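For example, a minimal sketch assuming a bash shell and a clone in your home directory (adjust the path and startup file to your setup):

echo 'export PATH="$PATH:$HOME/CDPHE"' >> ~/.bashrc
source ~/.bashrc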

PREREQUISITES

The versions of the pipelines that will be supported going forward require a single dependency: Docker. To install Docker, follow the instructions in the Docker User Guide. Docker is a platform for creating "containers", self-contained environments that should run identically on any system or compute environment.
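A quick sanity check that Docker is installed and its daemon is running (standard Docker commands, independent of these pipelines):

docker --version
docker run hello-world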

USE AND SPECIFIC EXAMPLES

There are a few different pipelines that can be run in tandem or independently of one another.

type_pipe

This pipeline trims and cleans reads, assembles them into contigs, searches the reads against a database to check for contamination, performs a quick comparison to RefSeq with Mash to produce a probable species identification, and then further serotypes E. coli and Salmonella spp. The pipeline also runs Abricate to identify possible antibiotic resistance and virulence markers.

The usage is as follows:

#Run the following command from the same directory where all forward and reverse fastq files are located, or where there is a
#file named "SRR" containing the Sequence Read Archive identifiers for each organism on separate lines

run_type_pipe_[version]-dockerized.sh

Flags available for this pipeline: -l [approximate_genome_length, default: 5000000]
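For example, a hypothetical invocation using version 2.5 (the version referenced in the issues below) and a smaller approximate genome length:

run_type_pipe_2.5-dockerized.sh -l 2800000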

pipeline_non-ref_tree_build

This pipeline trims and cleans reads, assembles them into contigs, then annotates the assemblies with Prokka, builds a core genome alignment with Roary, and infers a phylogenetic tree with RAxML. This approach is similar to the URF pipeline developed in Utah.

The usage is as follows:

#Run the following command from the same directory where all forward and reverse fastq files are located, or where there is a
#file named "SRR" containing the Sequence Read Archive identifiers for each organism on separate lines

run_pipeline_non-ref_tree_build_[version]-dockerized.sh

Flags available for this pipeline: -l [approximate_genome_length, default: 5000000]
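For example, a hypothetical run driven by an SRR file; the accessions are placeholders, and [version] should be replaced with the installed script version:

printf 'SRR1234567\nSRR7654321\n' > SRR
run_pipeline_non-ref_tree_build_[version]-dockerized.sh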


KNOWN ISSUES

type_pipe_2.5-dockerized.sh shuffle reads wildcard bug

In line 314 of type_pipe_2.5-dockerized.sh, a * wildcard is used that, on my end, leads to an apparent error when one file name is a substring of another (e.g., 22 and 220): the resulting files for the two distinct isolates are identical. Removing the * corrected the issue in my local job script.
Also (disclaimer: I have not looked into this directly), a similar problem may arise in the Mash, Kraken, and SPAdes steps, as they appear to use wildcards in a similar way.
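A minimal illustration of the over-match, assuming hypothetical read files named by isolate ID:

# given files 22_R1.fastq.gz and 220_R1.fastq.gz
ls 22*_R1.fastq.gz    # the glob matches BOTH isolates
ls 22_R1.fastq.gz     # the exact name matches only the intended isolate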

create a USAGE section on the README

Would be nice to have a USAGE section on the README.md so that if others want to use the pipelines, they have the instructions to do so.

Could include:

  • Requirements / dependencies
    • local installs of tools for non-dockerized scripts
    • dockerized pipelines: docker, pigz
  • Details of each pipeline (what program is run?, what results should I expect to see?)
    • type_pipe
    • pipeline_non-ref_tree_build
    • lyveset
    • nanopore scripts (not really a full-fledged pipeline yet)
  • Usage examples - run X script in this directory, receive Y files out, example commands to run the scripts
  • Known issues and/or a to-do list
  • Links to bioinformatics training videos?

Add checks for docker image versions and whether they have been pulled or not

Would be good to have a check near the beginning of the dockerized scripts to see whether each docker image has already been pulled; if not, pull it.

I would prefer to avoid checking by running docker pull for all programs, because even if the image is already present on the machine, Docker will still download the most up-to-date image, which may not actually have changed. Docker/Docker Hub just thinks the image has changed because a push to the master branch of staph-b/docker-auto-builds currently triggers a rebuild of ALL images.

# just for illustration purposes - a sketch of a local-image check for SPAdes
if docker image inspect staphb/spades:3.12.0 >/dev/null 2>&1; then
    # image already present locally; capture and print its version
    SPADESVER=$(docker run --rm staphb/spades:3.12.0 spades.py -v)
    echo "$SPADESVER"
else
    docker pull staphb/spades:3.12.0
fi

Checks like this would be needed for each program used by each dockerized script, e.g. by looping over the images as sketched below.
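A hedged sketch of such a loop; the image names and tags are illustrative, not a definitive list of what the scripts use:

for IMG in staphb/spades:3.12.0 staphb/mash:2.1 staphb/prokka:1.13; do
    # pull only if the image is not already present locally
    docker image inspect "$IMG" >/dev/null 2>&1 || docker pull "$IMG"
done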
