
HPCell's Issues

comments to add to instructions

Some comments that may be useful in the instructions:

  1. If no metadata.rds is available: create a data frame with 2 columns (sample | batch), with the samples matching the sample names in the input files. Set batch equal to the sample name (see the sketch after this list).
sample batch
spleen spleen
liver  liver
  2. Add module load R/4.2.1 before calling the renv or RStudio … function.
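
A minimal sketch of the metadata fallback in point 1, assuming the sample names can be derived from the input file names (the file names below are hypothetical):

# No metadata.rds: build a two-column data frame (sample | batch),
# setting batch equal to the sample name.
library(dplyr)

input_files <- c("spleen.rds", "liver.rds")   # hypothetical input paths

metadata <-
  tibble(sample = tools::file_path_sans_ext(basename(input_files))) |>
  mutate(batch = sample)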

if filter = NULL, test whether the data has already been filtered, and skip the filtering in that case

@susansjy22 Sometimes we don't know whether the data has been filtered, so we can set the default filter argument to NULL, and within the filter function you can test whether the minimum RNA count per cell is X and, if so, assume that the data has been filtered already.

That X threshold can be found in the Seurat tutorial

e.g.

nFeature_RNA > 200 # This is the RNA feature threshold

from https://satijalab.org/seurat/archive/v3.0/pbmc3k_tutorial.html

and

lower = 100 # This is the RNA count threshold

from https://rdrr.io/github/MarioniLab/DropletUtils/man/emptyDrops.html

This will be done within the empty-droplet filtering function, so some samples could have been filtered and some could have not. The reports will show how many droplets were filtered out, so the user will be able to tell.
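
A rough sketch of that heuristic, using the thresholds from the tutorials above (the helper name is_already_filtered and the filter = NULL convention are assumptions, not the current HPCell API):

# Decide whether a per-sample Seurat object looks like it has already
# been filtered, based on minimum counts and features per cell.
is_already_filtered <- function(seurat_object,
                                min_counts   = 100,   # emptyDrops lower bound
                                min_features = 200) { # Seurat nFeature_RNA cutoff
  min(seurat_object$nCount_RNA) >= min_counts &&
    min(seurat_object$nFeature_RNA) >= min_features
}

# Inside the empty-droplet filtering step:
# if (is.null(filter)) filter <- !is_already_filtered(seurat_object)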

Start the pipeline from target_file

@susansjy22 as we discussed at the meeting, we should start the pipeline from target_file (with the paths provided by the user) so it will track input files rather than duplicating them.

The user will likely use this input format:

dir(MY_DIRECTORY) |>
   execute_pipeline()

Hopefully, with this strategy, if the directory MY_DIRECTORY gets updated, the pipeline will run just for the new files.
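
One way this could be set up with targets/tarchetypes, as a hedged sketch (the target names and read_one_sample are illustrative, not the HPCell API):

library(targets)
library(tarchetypes)

list(
  # Register each user file as its own file target, so changes are tracked
  # per file instead of duplicating the data into the store
  tar_files(raw_files, dir(MY_DIRECTORY, full.names = TRUE)),

  # One branch per input file; only new or changed files are (re)processed
  tar_target(per_sample, read_one_sample(raw_files), pattern = map(raw_files))
)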

add documentation of the backend algorithms to README

Hello @susansjy22 ,
 
Could you please add the documentation for each pipeline step to the README on GitHub?
 
The document should include the methods used and snippets of the code showing how we use those methods and with which parameters. The goal is for a beginner to understand what we are doing and why we are doing it.

Add microbulk to the pipeline

This task involves creating a function that accepts a Seurat object from one sample and creates a microbulk version of that data. The function also takes a parameter that indicates the level of summarisation, for example the total number of cells we are aiming for.

Some investigation into the best microbulk tool should be done.
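
A possible shape for such a function, as a sketch only (the name microbulk, the random grouping strategy, and the target_cells parameter are assumptions, not a decided implementation):

library(Seurat)

# Summarise one sample's Seurat object into ~target_cells microbulk profiles
microbulk <- function(seurat_object, target_cells = 50) {
  counts <- GetAssayData(seurat_object, assay = "RNA", slot = "counts")

  # Randomly assign each cell to one of target_cells groups
  groups <- sample(rep_len(seq_len(target_cells), ncol(counts)))

  # Sum counts within each group to obtain the microbulk profiles
  sapply(
    split(seq_len(ncol(counts)), groups),
    function(cells) Matrix::rowSums(counts[, cells, drop = FALSE])
  )
}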

Improve README

  • you can drop this

Clone the repository

git clone git@github.com:susansjy22/HPCell.git

  • you should point to the main repo in my GitHub

replace this

remote::install_github("git@github.com:susansjy22/HPCell.git")

with

remotes::install_github("stemangiola/HPCell")
  • Move this below, to where you give another example of execution with a reference. The base example should not include this:
# Load reference data
library(SeuratDisk)  # provides LoadH5Seurat()

input_reference_path <- "reference_azimuth.rds"
reference_url <- "https://atlas.fredhutch.org/data/nygc/multimodal/pbmc_multimodal.h5seurat"
download.file(reference_url, input_reference_path)
LoadH5Seurat(input_reference_path) |> saveRDS(input_reference_path)
  • store should be "./", because this is already the default; you can drop the store variable
store <- "~/HPCell/pipeline_store"
  • Drop the default arguments, let's keep it simple
preprocessed_seurat = run_targets_pipeline(
    input_data = file_path, 
    tissue = tissue,
    filter_input = TRUE, 
    RNA_assay_name 
)

branches explained & current status of the pipeline

Here are some additional explanations regarding the last branches I created:

  1. renv-optional: in this branch the renv library was deactivated (renv::deactivate). This was done due to the many problems running the Jascap pipeline for the Jian/Ajith project. The following branches were also created based on the renv::deactivate mode.

  2. Jian_no azimuth_annotation: the Jian project contains mouse data, so the Azimuth reference was not useful. Even though we left the other human annotations in, Azimuth was giving problems, so the command lines containing it were temporarily removed. In this branch something went wrong, and it can therefore be deleted! The correct branch without the Azimuth reference is the one below: Jian_2.

  3. Jian_2: the commands in this branch work without stopping up to the pre-processing script. The next step is the pseudobulk script, and there are still some bugs to correct.
    This branch doesn't use the renv library (as mentioned above, it was deactivated) and doesn't contain any Azimuth reference.

Next steps are:

  • fixing bugs from the pseudobulk R script
  • adding reports to pipeline

split pseudobulk per sample

It is better to split the pseudobulk generation by sample, so that it can be run in parallel, and leave an overall step just for the merging.
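
A hedged sketch of how this could look with targets dynamic branching (the target and function names per_sample_seurat and pseudobulk_one_sample are illustrative, not the current HPCell API):

library(targets)

list(
  tar_target(
    pseudobulk_per_sample,
    pseudobulk_one_sample(per_sample_seurat),
    pattern = map(per_sample_seurat),  # one branch per sample, runs in parallel
    iteration = "list"
  ),
  tar_target(
    pseudobulk_merged,
    do.call(cbind, pseudobulk_per_sample)  # overall step just for the merging
  )
)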

Improve unit tests

Rather than only testing that a function has run and returned an S3 object, each test should check specific properties of the function's output. For example:

  • The annotation step produces a tibble output with 3 columns if seurat_reference is set
  • The annotation step produces a tibble output with 2 columns if seurat_reference is NOT set
  • The empty-droplet filtering step produces an output with fewer cells (rows) than the Seurat input (columns)
  • and so on..
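
A sketch of what such property checks could look like with testthat (the object names annotation_tbl, filtered, and seurat_input, and the exact column counts, are assumptions following the bullets above):

library(testthat)

test_that("annotation output has the expected shape", {
  expect_s3_class(annotation_tbl, "tbl_df")
  expect_equal(ncol(annotation_tbl), 3)  # 2 columns if seurat_reference is NOT set
})

test_that("empty-droplet filtering removes cells", {
  expect_lt(nrow(filtered), ncol(seurat_input))
})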
