
Comments (8)

vyepez88 commented on August 16, 2024

Hi Álvaro,
Thanks for noticing and reporting this. At some point the faculty of Informatics changed domain.
I added a working link to the Readme (currently in the dev branch only). Also here:
The website containing the different reports of the Geuvadis demo dataset described in the paper can be found here.
In the different Summary pages (e.g. https://cmm.in.tum.de/public/paper/drop_analysis/webDir/drop_analysis_index.html#Scripts_AberrantExpression_pipeline_OUTRIDER_Datasets.html), you can find the results tables that you can use to compare to your own.

from drop.

AEstebanMar commented on August 16, 2024

Hello Vicente,

Thank you for your response, everything seems to be working now! I had a follow-up question.

Our research group would greatly appreciate some clarification as to which anomalies should be observed in the results (e.g. which samples and genes were confirmed as aberrant in your original run, as described for the Kremer dataset in the original DROP article). As we understand it, there may be some slight variation in the obtained results (specifically in aberrant expression, due to how the OUTRIDER autoencoder works).

To clarify, we would like to know which samples and genes should always be identified as aberrant (be it by expression, splicing or MAE).


AEstebanMar commented on August 16, 2024

I noticed something rather strange. In the aberrant splicing module, your number of outliers vs sample rank graph looks like this:

[image: number of outliers vs. sample rank plot from the paper's run]

Whereas mine looks like this:

[image: number of outliers vs. sample rank plot from the local run]

I am using the config file linked in the Supplementary Information of the original article (Supplementary Data 2), and our local DROP version is 1.2.2. Is this not the config file I should be using, or is something else going wrong? I am not sure these differences can be attributed to using a different DROP version; they seem too extreme.


vyepez88 commented on August 16, 2024

Hi, can you share your config file here?
If you use the Kremer dataset, the samples that should come up as significant are the ones described in the paper, especially the two TIMMDC1 cases (expression and splicing) and the ALDH18A1 MAE case.


AEstebanMar commented on August 16, 2024

Sure, here it is:
config.txt

We are running DROP on the Geuvadis dataset, not Kremer.


vyepez88 commented on August 16, 2024

I see a discrepancy between the padj and deltaPsi cut-offs you're using and the ones that were used for the DROP demo dataset. We recommend:

    padjCutoff: 0.1 # or 0.05
    ### FRASER1 configuration
    FRASER_version: "FRASER"
    deltaPsiCutoff : 0.3
    quantileForFiltering: 0.95
    ### For FRASER2, use the following parameters instead of the 3 lines above:
    # FRASER_version: "FRASER2"
    # deltaPsiCutoff : 0.1
    # quantileForFiltering: 0.75

Running with those cut-offs should give you similar results.
Same for expression:

    padjCutoff: 0.05
    zScoreCutoff: 0
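As a rough illustration of how these cut-offs act on a results table (this is a sketch, not DROP code; the column names `padjust`, `deltaPsi`, and `zScore` are assumptions based on typical OUTRIDER/FRASER output, not a confirmed schema):

```python
# Hypothetical sketch: apply the recommended significance and effect-size
# cut-offs to rows of an exported results table.

def passes_cutoffs(row, padj_cutoff, effect_col, effect_cutoff):
    """True if a row passes both the adjusted p-value cut-off and the
    absolute effect-size cut-off for the given column."""
    return row["padjust"] <= padj_cutoff and abs(row[effect_col]) >= effect_cutoff

# Splicing (FRASER1): padjCutoff 0.1, |deltaPsi| >= 0.3
splice_row = {"gene": "TIMMDC1", "padjust": 0.02, "deltaPsi": -0.45}
print(passes_cutoffs(splice_row, 0.1, "deltaPsi", 0.3))  # True

# Expression (OUTRIDER): padjCutoff 0.05; a zScoreCutoff of 0 effectively
# disables the z-score filter, so only padjust decides.
expr_row = {"gene": "TIMMDC1", "padjust": 0.001, "zScore": -5.1}
print(passes_cutoffs(expr_row, 0.05, "zScore", 0))  # True
```

Note that with `zScoreCutoff: 0` every row trivially passes the effect-size check, which matches the recommendation above of filtering expression outliers by adjusted p-value alone.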


AEstebanMar commented on August 16, 2024

Thank you, I've fixed it in my config and re-run it. However, I have no "quantileForFiltering" field. I am running DROP version 1.2.2, was that field added in a more recent version? If so, would you mind providing the config file used for the DROP run shown in the article (DROP 0.9.2)? Also, could you confirm that the results linked in the original response to my issue correspond to that original DROP run?


vyepez88 commented on August 16, 2024

Yes, the quantileForFiltering parameter was added in DROP 1.3.0, where we introduced FRASER2. I highly recommend updating to FRASER2, as it provides more specific results than the original FRASER.
You can find the config file for the paper here: https://www.nature.com/articles/s41596-020-00462-5#Sec37 (File 2)
The results linked in my original response might not correspond to that original DROP run as DROP has been updated since. However, they should not be too different.

