apeltzer / dedup
A merged read deduplication tool capable of performing merged read deduplication on single-end data.
License: GNU General Public License v3.0
Specifying an output directory that does not yet exist produces an error saying the user specified a file, not a directory.
This behaviour is atypical for CLI tools, which usually create the output directory if it doesn't exist.
(samtools) sos@dat4903339:09:30:bwa:(doc-improvements)$ java -jar ~/Downloads/DeDup-0.12.6.jar -i JK2782_PE.mapped.bam -o ./point6/ -m
DeDup v0.12.6
The output folder should be a folder and not a file!
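The behaviour most CLI tools implement can be sketched as follows (a hypothetical helper, not DeDup's actual code):

```python
import os

def ensure_output_dir(path):
    """Create the output directory if it is missing; only fail when the
    path already exists as a regular file."""
    if os.path.isfile(path):
        raise ValueError(f"Output path {path!r} is a file, not a directory")
    os.makedirs(path, exist_ok=True)  # no-op if the directory already exists
    return path
```

With this behaviour, `-o ./point6/` would simply create the directory instead of aborting.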
It should be possible to read CRAM input (it comes via htsjdk anyway), write CRAM directly, AND provide an option to read a reference FASTA file for that purpose.
x-ref: nf-core/eager#181
DeDup currently calculates the deduplication rate using the total number of reads in the BAM file as the denominator. This is incorrect: if the BAM file includes unmapped reads, the rate is vastly deflated by the inflated denominator.
The denominator should instead be the number of mapped reads prior to deduplication, as the deduplication rate can only meaningfully be calculated over reads to which deduplication can be applied.
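The corrected calculation can be sketched as follows (a minimal sketch; field names mirror DeDup's JSON report, and the clusterfactor formula, reads before deduplication divided by reads after, is an assumption, not confirmed from the source):

```python
def dedup_stats(mapped_reads, total_removed):
    """Deduplication metrics with mapped reads (not total reads) as the
    denominator; guards against a BAM with zero mapped reads."""
    if mapped_reads == 0:
        # Nothing deduplication could apply to: report neutral values
        # instead of dividing by zero (which yields NaN in the report).
        return {"dup_rate": 0.0, "clusterfactor": 1.0}
    return {
        "dup_rate": total_removed / mapped_reads,
        "clusterfactor": mapped_reads / (mapped_reads - total_removed),
    }
```

For example, 80 mapped reads with 20 duplicates removed gives a dup_rate of 0.25 and a clusterfactor of about 1.33.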
{
  "metadata": {
    "tool_name": "DeDup",
    "version": "0.12.7",
    "sample_name": "SKO719A174.bam"
  },
  "metrics": {
    "total_reads": 26,
    "mapped_reads": 0,
    "merged_removed": 0,
    "forward_removed": 0,
    "total_removed": 0,
    "reverse_removed": 0,
    "dup_rate": "�",
    "clusterfactor": "�"
  }
}
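The "�" values are likely NaN (from a division by zero when mapped_reads is 0) written straight into the output, which also makes the report invalid JSON. A minimal sketch of a serialisation guard (function and field names are illustrative, not DeDup's actual code):

```python
import json
import math

def safe_metric(value):
    """Replace non-finite floats (NaN, +/-Inf) with None so the
    JSON report stays standard-compliant and parseable."""
    if isinstance(value, float) and not math.isfinite(value):
        return None
    return value

metrics = {"dup_rate": float("nan"), "clusterfactor": float("nan")}
report = json.dumps({k: safe_metric(v) for k, v in metrics.items()})
```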
Reported by Alexandre Gilardet in the nf-core Slack:
I'm running a sample with EAGER and I want to get non-merged data, following the instructions below:
- For the input file, upload two FASTQ files (reads 1 and 2) and select Single-End Data
- General settings:
  FastQC analysis;
  Adapter RM/Merging: Clip&Merge (perform only adapter clipping);
  Mapping: CircularMapper;
  Complexity Estimation;
  Remove Duplicates with DeDup;
  Damage calculation with mapDamage;
But the analysis dies when trying to remove the duplicates. The .log file reports:
SamtoolsSortDeDup was executed with the following command line: "samtools sort -@ 10 -m 3G /../OUTPUT/5-DeDup/file_name.fastq.fq.MT_realigned.mappedonly.sorted.qF.sorted.cleaned_rmdup.bam -o /../OUTPUT/INPUT/5-DeDup/file_name.fastq.fq.MT_realigned.mappedonly.sorted.qF.sorted.cleaned_rmdup.sorted.bam"
"samtools sort: truncated file. Aborting"
Furthermore, I can still find the following 10 BAM files in the 5-DeDup directory, even though they should disappear once the final file (file_name.fastq.fq.MT_realigned.mappedonly.sorted.qF.sorted.cleaned_rmdup.sorted.bam) is created:
- fastq.fq.MT_realigned.mappedonly.sorted.qF.sorted.cleaned_rmdup.sorted.bam.tmp.0000.bam (this is an example of one of the 10 files)
How can I resolve this issue?
Should I restart the analysis from the beginning, or is there a way to skip the steps that completed successfully?
I was also wondering if there is a way to know whether the 10 files in the 5-DeDup directory are complete.
Lastly, I only manage to get the report file if I run several samples in a row; how can I get it when analysing only one file?
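On whether the intermediate BAMs are complete: a truncated BAM (the cause of the "samtools sort: truncated file" error above) is usually missing the 28-byte BGZF end-of-file block defined in the SAM/BAM specification. Running samtools quickcheck on the files tests exactly this; a minimal Python equivalent of that EOF check:

```python
# 28-byte BGZF EOF block from the SAM/BAM specification; an intact
# BAM file always ends with these exact bytes.
BGZF_EOF = bytes.fromhex(
    "1f8b08040000000000ff0600424302001b0003000000000000000000"
)

def bam_has_eof(path):
    """Return True if the file ends with the BGZF EOF block."""
    with open(path, "rb") as fh:
        fh.seek(0, 2)                      # seek to end of file
        if fh.tell() < len(BGZF_EOF):
            return False                   # shorter than the marker itself
        fh.seek(-len(BGZF_EOF), 2)
        return fh.read() == BGZF_EOF
```

Note this only detects truncation at the very end of the file; samtools quickcheck additionally validates the header.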
Hi Alex,
As I mentioned in nf-core/eager#209 (comment), I'm running into some performance issues with a BAM file containing ~786M merged paired-end reads. I first had to bump the heap size, as with default settings it ran out of memory fairly quickly. After bumping the max heap size to 48G it has been running for about 1.5 hours and has only processed around 3M reads so far, and has been sitting at that 3M mark for almost 30 min. Is there anything I might be able to do to increase throughput?
Here are the flagstats for the BAM:
786466969 + 0 in total (QC-passed reads + QC-failed reads)
0 + 0 secondary
4730606 + 0 supplementary
0 + 0 duplicates
609772993 + 0 mapped (77.53% : N/A)
0 + 0 paired in sequencing
0 + 0 read1
0 + 0 read2
0 + 0 properly paired (N/A : N/A)
0 + 0 with itself and mate mapped
0 + 0 singletons (N/A : N/A)
0 + 0 with mate mapped to a different chr
0 + 0 with mate mapped to a different chr (mapQ>=5)