
Deduper

Part 1

Fork this repo - you should do all your work in your fork of this repository.

Write up a strategy for writing a Reference Based PCR Duplicate Removal tool. That is, given a SAM file of uniquely mapped reads, remove all PCR duplicates (retain only a single copy of each read). Develop a strategy that avoids loading everything into memory. You should not write any code for this portion of the assignment. Be sure to:

  • Define the problem
  • Write examples:
    • Include a properly formatted input SAM file
    • Include a properly formatted expected output SAM file
  • Develop your algorithm using pseudocode
  • Determine high level functions
    • Description
    • Function headers
    • Test examples (for individual functions)
    • Return statement

For this portion of the assignment, you should design your algorithm for single-end data, with 96 UMIs. UMI information will be in the QNAME, like so: NS500451:154:HWKTMBGXX:1:11101:15364:1139:GAACAGGT. Discard any UMIs with errors (or error correct, if you're feeling ambitious).
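As an illustrative sketch (the function name, set contents, and file handling here are assumptions, not part of the assignment spec), the UMI can be pulled from the final colon-separated field of the QNAME and checked against the known set:

```python
# Sketch: the UMI is the final colon-separated field of the QNAME.
# The known-UMI set shown here is illustrative; in practice it would
# be loaded from the 96-UMI list.

def get_umi(qname: str) -> str:
    """Return the UMI appended after the last colon of the QNAME."""
    return qname.rsplit(":", 1)[1]

known_umis = {"GAACAGGT", "AACGCCAT"}  # placeholder contents

qname = "NS500451:154:HWKTMBGXX:1:11101:15364:1139:GAACAGGT"
if get_umi(qname) not in known_umis:
    pass  # discard the read (or attempt error correction)
```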

Part 2

An important part of writing code is reviewing code - both your own and others'. In this portion of the assignment, you will be assigned 3 students' algorithms to review. Be sure to evaluate the following points:

  • Does the proposed algorithm make sense to you? Can you follow the logic?
  • Does the algorithm do everything it's supposed to do? (see part 1)
  • Are proposed functions reasonable? Are they "standalone" pieces of code?

You can find your assigned reviewees on Canvas. You can find your fellow students' repositories at

github.com/<user>/Deduper

Be sure to leave comments on their repositories by creating issues or by commenting on the pull request.

Part 3

Write your deduper function!

Given a SAM file of uniquely mapped reads, remove all PCR duplicates (retain only a single copy of each read). Remember:

  • Samtools sort
  • Adjust for soft clipping
  • Single-end reads
  • Known UMIs
  • Considerations:
    • Millions of reads – avoid loading everything into memory!
    • Be sure to utilize functions appropriately
    • Appropriately comment code and include doc strings
  • CHALLENGE: Include options for
    • Single-end vs paired-end
    • Known UMIs vs randomers
    • Choice of duplicate written to file
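The soft-clipping and strand considerations above can be sketched as follows. This is one possible convention, not the required implementation; the function name and the exact off-by-one handling are my own choices, and any consistent convention works for duplicate detection:

```python
import re

def adjusted_start(pos: int, cigar: str, minus_strand: bool) -> int:
    """Return a soft-clip-corrected 5' start position (sketch).

    Plus strand: the 5' end is leftmost, so subtract a leading soft clip.
    Minus strand: the 5' end is rightmost, so add every reference-consuming
    op (M/D/N/=/X) plus a trailing soft clip.
    """
    ops = re.findall(r"(\d+)([MIDNSHP=X])", cigar)
    if not minus_strand:
        if ops and ops[0][1] == "S":
            pos -= int(ops[0][0])
        return pos
    for length, op in ops:
        if op in "MDN=X":
            pos += int(length)
    if ops and ops[-1][1] == "S":
        pos += int(ops[-1][0])
    return pos
```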

You MUST:

  • Write Python 3 compatible code
  • Include the following argparse options
    • -f, --file: required arg, absolute file path
    • -p, --paired: optional arg, designates file is paired end (not single-end)
    • -u, --umi: optional arg, designates file containing the list of UMIs (unset if randomers instead of UMIs)
    • -h, --help: optional arg, prints a USEFUL help message (see argparse docs)
      • If your script is not capable of dealing with a particular option (ex: no paired-end functionality), your script should print an error message and quit
  • Output the first read encountered if duplicates are found
    • You may include an additional argument to designate output of a different read (highest quality or random or ???)
  • Output a properly formatted SAM file with “_deduped” appended to the filename
  • Name your python script <your_last_name>_deduper.py
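A minimal sketch of the required argparse interface (the option strings come from the spec above; the description text, example invocation, and paired-end error handling are illustrative):

```python
import argparse

def get_args(argv=None):
    """Build the required command-line interface (argparse supplies -h/--help)."""
    parser = argparse.ArgumentParser(
        description="Remove PCR duplicates from a SAM file of uniquely mapped reads.")
    parser.add_argument("-f", "--file", required=True,
                        help="absolute path to the sorted input SAM file")
    parser.add_argument("-p", "--paired", action="store_true",
                        help="input file is paired-end (default: single-end)")
    parser.add_argument("-u", "--umi",
                        help="file listing the known UMIs; leave unset for randomers")
    return parser.parse_args(argv)

args = get_args(["-f", "/data/test.sam"])  # example invocation, for illustration
if args.paired:  # a script without paired-end support should error and quit
    raise SystemExit("Error: paired-end functionality is not implemented.")
```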

Contributors

devarts5, leslie-c

deduper's Issues

Peer Review

  1. Does the proposed algorithm make sense to you? Can you follow the logic?

The logic seems sound, and the pseudocode is pleasantly thorough. In terms of what we're supposed to be checking for, I think you have a solid handle. I like that things are stepwise and laid out plainly. One thing I didn't see anywhere is a dictionary: using one to store information about your records might be helpful, and you could use it in a dictionary-checking loop to figure out whether you have duplicates. That's the approach I've heard other people chatting about as we try to pin down the deduping problem. I think you have a nice start here, and I anticipate it being a good foundation for your nested set of conditionals.

  2. Does the algorithm do everything it's supposed to do?

From what I can tell, yes. Referencing my above comment (1.) provides further detail on why I think this proposed algorithm bears the skeletal structure of what we're going to need to actually code over the next two weeks.

  3. Are proposed functions reasonable? Are they "standalone" pieces of code?

Generally, yes, you seem to have crafted some reasonable functions for use. It appears you plan to check for everything we've been instructed to check for -- good. I am a little confused by your CIGAR string function, though. Make sure that for 'I' you are subtracting from and for 'D' you are adding to the position (maybe use a counter here in your actual code). I have some brainstorming to do myself on how exactly I'm going to write my own algorithm; especially when considering directionality, where we add or subtract from becomes a concern given the plus or minus strand. That said, your functions, once coded, will definitely have the capacity to run as standalone pieces of code.
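To make the add/subtract question above concrete: for the reference coordinate, only M, D, N (and =/X) consume reference bases, while I and S consume the query only, so they are neither added to nor subtracted from the reference position. A small sketch (the names here are mine, not from the reviewed algorithm):

```python
import re

# CIGAR operations that advance the reference coordinate. 'I' and 'S'
# consume the query only, so they do not move the reference position.
REF_CONSUMING = {"M", "D", "N", "=", "X"}

def ref_length(cigar: str) -> int:
    """Total reference bases consumed by a CIGAR string (illustrative)."""
    return sum(int(n) for n, op in re.findall(r"(\d+)([MIDNSHP=X])", cigar)
               if op in REF_CONSUMING)
```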

Nice work, pal! I'm under the impression you're thinking the right thoughts here!

Peer Review

Hi David,

I think you defined the problem well and have good examples of what a properly de-duped file may look like. I appreciate the level of detail you include in your functions. Something I worked on this time was including more code snippets and pieces of code that I at least thought should work; I'm hoping this will jump-start my thinking when I actually sit down to code, and it helps me think about the problem in a more algorithmic manner while writing my pseudocode. It's also good practice for thinking in Python and getting to know the different data structures and variables you will need to solve your problem.

In addition, I think you should review some of the logic late in your main loop. I'm not sure this problem can be solved with sets of conditionals that filter records that way. I would look into dictionaries/sets, or some other way to reference records you know are unique, so you can ask whether the current record is a member of that collection and decide whether to write it out. But overall, nice job.

Peer Review

  • Does the proposed algorithm make sense to you? Can you follow the logic?

This algorithm is a little confusing. Here are some adjustments that I think might help.

"Use Grep to isolate SAM alignment column 2, the bitwise flag. Using Grep, acquire the number in column 3 of the alignment section of the SAM."

Going back and forth between bash (grep) and Python is likely awkward and inefficient; maybe you should stick to Python only.

"If the flag + 16 = 16, the read is a minus read."

You have to use bitwise AND (not +) to check the flags.
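A quick illustration of the bitwise check (in Python):

```python
# Bit 0x10 (decimal 16) of the FLAG marks a reverse-complemented read.
# Addition can't test a single bit (0 + 16 == 16 too); bitwise AND can.
def is_minus_strand(flag: int) -> bool:
    return (flag & 16) == 16
```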

"For read copied into out file, do the following."

If every read is copied into the out file, how do you skip the duplicates?

Lastly, I think you didn't need to make that many variables (but that's not important).

  • Does the algorithm do everything it's supposed to do? (see part 1)

I think this algorithm is flawed because it only checks whether a read matches the next few consecutive reads. Suppose the first read matches the 2nd and 4th reads: your algorithm will recognize the 2nd as a duplicate, but not the 4th, since the 3rd and 4th reads don't match. I recommend doing this with a dictionary keyed on the things you want to check (strand, chromosome, starting position, UMI).
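A sketch of that dictionary/set idea (the names and key fields here are illustrative, and the position is assumed to already be soft-clip corrected):

```python
# Key each read on everything that defines a PCR duplicate and keep
# only the first occurrence seen.
seen = set()

def is_duplicate(umi, chrom, pos, minus_strand):
    """Return True if this read's key has been seen before (sketch)."""
    key = (umi, chrom, pos, minus_strand)
    if key in seen:
        return True
    seen.add(key)
    return False
```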

  • Are proposed functions reasonable? Are they "standalone" pieces of code?

The Plus_adjust_start and Minus_adjust_start functions are very clear!

Maybe you could add some code to check UMI against UMI_List in your Umi_Checker function.

In all, this is a good start! You demonstrate a good understanding of what a duplicate is and how to detect one; however, the logic doesn't fully check out.
