
dynmethods's Introduction

ℹ️ Tutorials     ℹ️ Reference documentation



# A collection of 56 trajectory inference methods

This package contains wrappers for trajectory inference (TI) methods. The output of each method is transformed into a common trajectory model using dynwrap, which allows easy visualisation and comparison. All methods are wrapped inside a docker container, which avoids dependency issues and makes it easy to add a new method.

To run any of these methods, interpret the results and visualise the trajectory, see the dyno package.
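
For example, a typical dyno workflow looks roughly like the following sketch (hedged: `counts` and `expression` stand for your own cell-by-gene matrices, and `ti_slingshot()` is just one of the 56 wrapped methods):

```r
library(dyno)

# `counts` and `expression` are placeholders for your own cell-by-gene matrices
dataset <- wrap_expression(
  counts = counts,
  expression = expression
)

# run one of the wrapped TI methods inside its docker container
model <- infer_trajectory(dataset, ti_slingshot())

# plot the inferred trajectory on a dimensionality reduction
plot_dimred(model, expression_source = dataset$expression)
```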

To include your own method, feel free to send us a pull request or create an issue. The easiest way to add a new method is through a docker container, so that dependencies don’t pose any issues for other users, but we also welcome methods wrapped directly in R. The main benefit of adding your own method is that users can easily compare it with others and visualise/interpret the output. Moreover, your method will be compared to other methods within the TI method evaluation.
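
Whichever route you take, the output of your method has to be converted into dynwrap's common trajectory model. A minimal sketch for a method that only produces a pseudotime (the `pseudotime` vector below is a hypothetical placeholder, a named numeric vector with one value per cell):

```r
library(dynwrap)
library(magrittr)  # for %>%

# `pseudotime`: hypothetical named numeric vector, one value per cell
trajectory <- wrap_data(cell_ids = names(pseudotime)) %>%
  add_linear_trajectory(pseudotime = pseudotime)
```

dynwrap also provides helpers such as create_ti_method_r() and create_ti_method_container() to turn such a wrapper into a method definition that dyno can run.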

List of included methods

| Method | Authors |
| ------ | ------- |
| Angle | |
| CALISTA | Nan Papili Gao |
| CellRouter | Edroaldo Lummertz da Rocha, James J. Collins, George Q. Daley |
| CellTrails | Daniel Ellwanger |
| Component 1 | |
| DPT | Laleh Haghverdi, Philipp Angerer, Fabian Theis |
| ElPiGraph | Luca Albergante |
| ElPiGraph - Cycle | Luca Albergante |
| ElPiGraph - Linear | Luca Albergante |
| Embeddr | Kieran Campbell |
| FORKS | Mayank Sharma |
| FateID | Dominic Grün |
| GNG | Robrecht Cannoodt |
| GPfates | Valentine Svensson, Sarah A. Teichmann |
| GrandPrix | Sumon Ahmed |
| MATCHER | Joshua Welch, Jan Prins |
| MERLoT | Gonzalo Parra, Johannes Söding |
| MFA | Kieran Campbell, Christopher Yau |
| MST | |
| Monocle DDRTree | Xiaojie Qiu, Cole Trapnell |
| Monocle ICA | Xiaojie Qiu, Cole Trapnell |
| Mpath | Michael Poidinger, Jinmiao Chen |
| Oscope | Ning Leng |
| PAGA | Alexander Wolf, Fabian Theis |
| PAGA Tree | Alexander Wolf, Fabian Theis |
| Periodic PrinCurve | |
| PhenoPath | Kieran Campbell, Christopher Yau |
| Projected DPT | |
| Projected Monocle | |
| Projected PAGA | |
| Projected Slingshot | |
| Projected TSCAN | |
| RaceID / StemID | Dominic Grün, Alexander van Oudenaarden |
| SCIMITAR | Josh Stuart |
| SCORPIUS | Robrecht Cannoodt, Wouter Saelens, Yvan Saeys |
| SCOUP | Hirotaka Matsumoto |
| SCUBA | Eugenio Marco, Gregory Giecold, Guo-Cheng Yuan |
| SLICE | Yan Xu, Minzhe Guo |
| SLICER | Joshua Welch, Jan Prins |
| STEMNET | Lars Velten |
| Sincell | Antonio Rausell, Miguel Julia |
| Slingshot | Kelly Street, Sandrine Dudoit |
| TSCAN | Zhicheng Ji, Hongkai Ji |
| URD | Jeffrey A. Farrell |
| Wanderlust | Manu Setty, Dana Pe’er |
| Waterfall | Jaehoon Shin, Hongjun Song |
| Wishbone | Manu Setty, Dana Pe’er |
| cellTree Gibbs | David duVerle, Koji Tsuda |
| cellTree maptpx | David duVerle, Koji Tsuda |
| cellTree vem | David duVerle, Koji Tsuda |
| ouija | Kieran Campbell, Christopher Yau |
| ouijaflow | Kieran Campbell, Christopher Yau |
| pCreode | Charles A. Herring, Ken S. Lau |
| pseudogp | Kieran Campbell, Christopher Yau |
| reCAT | Riu Jian |
| topslam | Max Zwiessele |

Sources

We used the following resources to get a (hopefully exhaustive) list of all TI methods:

Anthony Gitter’s single-cell-pseudotime

Sean Davis’ awesome-single-cell

Luke Zappia’s scRNA-tools

New methods

Some methods are not wrapped (yet). Check out the issues for an overview.

Latest changes

Check out news(package = "dynmethods") or NEWS.md for a full list of changes.

Recent changes in dynmethods 1.1.0 (unreleased)

  • MAJOR CHANGE: Add functionality to switch between R wrappers and container wrappers.

  • MAJOR CHANGE: Add R wrappers for SCORPIUS.

  • BUG FIX: Do not install R packages if no version is specified and the package is already installed.

Recent changes in dynmethods 1.0.5 (03-07-2019)

  • SMALL CHANGES: Updates for scorpius, slingshot, paga, paga_tree and paga_projected

Dynverse dependencies

dynmethods's People

Contributors

dcellwanger, falexwolf, flying-sheep, manusetty, rcannood, szcf-weiya, zouter


dynmethods's Issues

MFA

Hello @kieranrcampbell and Christopher

This issue is for discussing the wrapper for your trajectory inference method, MFA, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.
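
For example, a quick end-to-end test of the wrapper could look roughly like this (a sketch, assuming dyno, dynmethods and dyntoy are installed and the MFA container can be pulled):

```r
library(dyno)

# small synthetic dataset; the "bifurcating" model id is an assumption about dyntoy's interface
dataset <- dyntoy::generate_dataset(model = "bifurcating", num_cells = 200)

# run the MFA wrapper and look at the result
model <- infer_trajectory(dataset, ti_mfa())
plot_dimred(model)
```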

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

p-Creode

Hello @herrinca and @KenLauLab

This issue is for discussing the wrapper for your trajectory inference method, p-Creode, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.py (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.py (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)
  • Wrapper script, see run.py (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

GrandPrix

Hello @sumonahmedUoM

This issue is for discussing the wrapper for your trajectory inference method, GrandPrix, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.py (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.py (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)
  • Wrapper script, see run.py (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter

GPfates

Hello @vals and @Teichlab

This issue is for discussing the wrapper for your trajectory inference method, GPfates, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.py (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.py (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)
  • Wrapper script, see run.py (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

SLICE

Hello @xu-lab and @minzheguo

This issue is for discussing the wrapper for your trajectory inference method, SLICE, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter

CellTrails

Hello @dcellwanger

This issue is for discussing the wrapper for your trajectory inference method, CellTrails, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

Libraries missing from standard R installation

Hi team,
when I tried to install your package, the following libraries were missing from my R installation:
magrittr
dynutils
It would probably make sense to add them to your Depends file ;-)

Best,
Stefan
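
Until the DESCRIPTION is updated, a simple workaround is to install the two missing packages manually (both are on CRAN):

```r
# both packages are on CRAN
install.packages(c("magrittr", "dynutils"))
```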

ElPiGraph.R

Hello @Albluca

This issue is for discussing the wrapper for your trajectory inference method, ElPiGraph.R, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in docker containers[1][2][3]. The way this container is structured is described in this vignette.

We created 3 separate wrappers:

  • ElPiGraph cyclic: ElPiGraph which always infers a cycle
  • ElPiGraph: ElPiGraph which infers a principal curve, circle or tree depending on a parameter
  • ElPiGraph linear: ElPiGraph which always infers a linear trajectory

We are creating this issue to ensure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter

SCUBA

Hello @eugeniomarco, @GGiecold and @gcyuan

This issue is for discussing the wrapper for your trajectory inference method, SCUBA, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.py (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.py (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)
  • Wrapper script, see run.py (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter

Mpath

Hello Michael and @jinmiaochen

This issue is for discussing the wrapper for your trajectory inference method, Mpath, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

scTDA

Given that dynwrap now includes time as prior information, this should be relatively easy to wrap.
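
For context, supplying time as a prior through dynwrap would look roughly like the sketch below (hedged: the exact prior id is assumed to be `timecourse_discrete`, and `timepoints`, `counts` and `expression` are hypothetical placeholders for the user's data):

```r
library(dynwrap)
library(magrittr)  # for %>%

# `timepoints`: hypothetical named vector of capture times per cell;
# the prior id `timecourse_discrete` is an assumption about dynwrap's prior names
dataset <- wrap_expression(counts = counts, expression = expression) %>%
  add_prior_information(timecourse_discrete = timepoints)
```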

SCIMITAR

Hello @dimenwarper

This issue is for discussing the wrapper for your trajectory inference method, SCIMITAR, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.py (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.py (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)
  • Wrapper script, see run.py (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter

Wishbone

Hello @ManuSetty and Dana

This issue is for discussing the wrapper for your trajectory inference method, Wishbone, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in docker containers[1][2]. The way this container is structured is described in this vignette.

We created 2 separate wrappers:

  • Wanderlust: Wishbone but with branch = False
  • Wishbone: Wishbone in which the branching parameter depends on the number of end states
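
From a user's perspective, the two wrappers can then be selected independently through dyno, roughly as follows (a sketch, assuming both containers are available locally):

```r
library(dyno)

# `dataset` is a dynwrap dataset, e.g. created with wrap_expression()
model_wanderlust <- infer_trajectory(dataset, ti_wanderlust())  # linear (branch = False)
model_wishbone   <- infer_trajectory(dataset, ti_wishbone())    # branching depends on the number of end states
```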

We are creating this issue to ensure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.py (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.py (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)
  • Wrapper script, see run.py (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

SCOUP

Hello @hmatsu1226

This issue is for discussing the wrapper for your trajectory inference method, SCOUP, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

SCORPIUS

Hello @rcannood, @zouter and @saeyslab

This issue is for discussing the wrapper for your trajectory inference method, SCORPIUS, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in docker containers[1][2]. The way this container is structured is described in this vignette.

We created 2 separate wrappers:

We are creating this issue to ensure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.
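
For reference, the native SCORPIUS workflow that run.R is meant to mirror looks roughly like this (a sketch; `expression` stands for a cells-by-genes matrix of normalised expression):

```r
library(SCORPIUS)

# `expression`: placeholder for a cells-by-genes matrix of normalised expression
space <- reduce_dimensionality(expression, dist = "spearman", ndim = 3)
trajectory <- infer_trajectory(space)  # SCORPIUS::infer_trajectory, not dynwrap's

# plot the cells together with the inferred path
draw_trajectory_plot(space, path = trajectory$path)
```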

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter

CellRouter

Hello @edroaldo, James J. and George Q.

This issue is for discussing the wrapper for your trajectory inference method, CellRouter, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

Topslam

Hello @mzwiessele

This issue is for discussing the wrapper for your trajectory inference method, Topslam, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.py (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.py (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)
  • Wrapper script, see run.py (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

Embeddr

Hello @kieranrcampbell and Caleb

This issue is for discussing the wrapper for your trajectory inference method, Embeddr, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.
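
As a concrete starting point for this testing workflow, the snippet below is a hedged sketch of running a wrapped method on one of the synthetic toy datasets. It assumes the dyno and dyntoy packages are installed and that the wrapper is exposed as ti_embeddr(); the generator arguments are purely illustrative.

```r
library(dyno)

# generate a small synthetic toy dataset (arguments are illustrative)
dataset <- dyntoy::generate_dataset(model = "linear", num_cells = 200, num_features = 100)

# run the wrapped method through dyno; this uses the method's docker container
model <- infer_trajectory(dataset, ti_embeddr())

# visualise the inferred trajectory on a dimensionality reduction
dynplot::plot_dimred(model, expression_source = dataset$expression)
```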

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter

Ouija

Hello @kieranrcampbell and Christopher

This issue is for discussing the wrapper for your trajectory inference method, Ouija, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)? A small illustration of declaring such a parameter is shown after this checklist.
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!
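
Purely as an illustration of what a sensible default and parameter space might look like, here is a hedged sketch using the dynparam syntax that dynwrap definitions build on. The parameter name (ndim), its default and its bounds are made up and are not taken from the actual Ouija definition.

```r
library(dynparam)

# a hypothetical integer parameter with a default and explicit lower/upper bounds
ndim <- integer_parameter(
  id = "ndim",
  default = 2,
  distribution = uniform_distribution(lower = 2, upper = 10),
  description = "Number of dimensions to use (illustrative only)"
)
```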

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter

Pseudogp

Hello @kieranrcampbell and Christopher

This issue is for discussing the wrapper for your trajectory inference method, Pseudogp, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)? A small example of providing both types is shown after this checklist.
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!
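
To make the counts-versus-normalised-expression question more tangible, the snippet below is a small, hedged example of how dyno users typically provide both matrices, so a wrapper can pick up whichever input type its definition.yml requests. The toy matrices are made up.

```r
library(dynwrap)

# toy cells-by-genes matrices (made up); real data would come from the user
counts <- matrix(rpois(200 * 10, lambda = 5), nrow = 200,
                 dimnames = list(paste0("cell", 1:200), paste0("gene", 1:10)))
expression <- log2(counts + 1)

# wrap both raw counts and normalised expression into a dataset object
dataset <- wrap_expression(counts = counts, expression = expression)
```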

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

SLICER

Hello @jw156605 and Jan

This issue is for discussing the wrapper for your trajectory inference method, SLICER, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data? A rough skeleton of such a script is shown after this checklist.
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!
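
As a rough idea of the wrapper-script shape the checklist refers to, here is a hypothetical run.R skeleton. The input/output paths and the placeholder "method" step are illustrative assumptions, not the actual SLICER wrapper.

```r
#!/usr/local/bin/Rscript
library(dynwrap)
library(readr)

# read the inputs declared in definition.yml (paths are hypothetical)
dataset <- read_rds("/ti/input/data.rds")
params <- jsonlite::read_json("/ti/input/params.json")

expression <- dataset$expression

# --- placeholder for the actual method call ---
# a stand-in pseudotime; a real wrapper would run the method here instead
pseudotime <- rank(rowMeans(expression)) / nrow(expression)

# convert to the common trajectory model and write the expected output file
output <- wrap_data(cell_ids = rownames(expression))
output <- add_linear_trajectory(output, pseudotime = pseudotime)
write_rds(output, "/ti/output/output.rds")
```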

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter

StemID

Hello @dgrun and @avolab

This issue is for discussing the wrapper for your trajectory inference method, StemID, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

DPT

Hello Laleh, @flying-sheep and @theislab

This issue is for discussing the wrapper for your trajectory inference method, DPT, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter

Slingshot

Hello @kstreet13 and @sandrinedudoit

This issue is for discussing the wrapper for your trajectory inference method, slingshot, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

RaceID / StemID

Hello @dgrun and @avolab

This issue is for discussing the wrapper for your trajectory inference method, RaceID / StemID, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

reCAT

Hello @louhzmaki

This issue is for discussing the wrapper for your trajectory inference method, reCAT, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

TSCAN

Hello @zji90 and Hongkai

This issue is for discussing the wrapper for your trajectory inference method, TSCAN, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter

Ouijaflow

Hello @kieranrcampbell and Christopher

This issue is for discussing the wrapper for your trajectory inference method, Ouijaflow, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.py (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.py (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.py (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

URD

Hello @farrellja

This issue is for discussing the wrapper for your trajectory inference method, URD, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in urd (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in urd (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see urd (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

Waterfall

Hello Jaehoon and Hongjun

This issue is for discussing the wrapper for your trajectory inference method, Waterfall, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

cellTree

Hello @david-duverle and @tsudalab

This issue is for discussing the wrappers for your trajectory inference method, cellTree, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created docker wrappers so that all methods can be easily run and compared. The code for these wrappers is located in docker containers[1][2][3]. The way these containers are structured is described in this vignette.

We created 3 separate wrappers:

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter

Sincell

Hello @Cortalak and Miguel

This issue is for discussing the wrapper for your trajectory inference method, Sincell, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

PAGA

Hello @falexwolf and @theislab

This issue is for discussing the wrappers for your trajectory inference method, PAGA, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created docker wrappers so that all methods can be easily run and compared. The code for these wrappers is located in docker containers[1][2]. The way these containers are structured is described in this vignette.

We created 2 separate wrappers:

  • PAGA: The regular version in which cells are assigned to clusters, and the network between these clusters is used as milestone network.
  • projected PAGA: Similar to regular PAGA, but cells are projected onto the edges between milestones (in the space of the dimensionality reduction). The difference between the two outputs is sketched just below.
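
Although the PAGA wrappers themselves are run.py scripts, the difference between the two outputs can be sketched in dynwrap's R interface. Everything below (milestone names, grouping, embeddings) is made up for illustration and is not the actual wrapper code.

```r
library(dynwrap)

cell_ids <- paste0("cell", 1:6)
grouping <- setNames(rep(c("M1", "M2", "M3"), each = 2), cell_ids)
milestone_network <- data.frame(
  from = c("M1", "M2"), to = c("M2", "M3"),
  length = 1, directed = FALSE
)

# regular PAGA: cells are assigned to clusters; the cluster graph is the trajectory
paga <- add_cluster_graph(
  wrap_data(cell_ids = cell_ids),
  milestone_network = milestone_network,
  grouping = grouping
)

# projected PAGA: cells are placed onto the edges, using a dimensionality reduction
dimred <- matrix(rnorm(12), ncol = 2, dimnames = list(cell_ids, c("comp_1", "comp_2")))
dimred_milestones <- matrix(rnorm(6), ncol = 2,
                            dimnames = list(c("M1", "M2", "M3"), c("comp_1", "comp_2")))
paga_projected <- add_dimred_projection(
  wrap_data(cell_ids = cell_ids),
  milestone_network = milestone_network,
  dimred = dimred,
  dimred_milestones = dimred_milestones,
  grouping = grouping
)
```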

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.py (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.py (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.py (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

MATCHER

Hello @jw156605 and Jan

This issue is for discussing the wrapper for your trajectory inference method, MATCHER, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.py (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.py (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.py (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@zouter and @rcannood

PhenoPath

Hello @kieranrcampbell and Christopher

This issue is for discussing the wrapper for your trajectory inference method, PhenoPath, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter

Monocle

Hello @Xiaojieqiu and @ctrapnell

This issue is for discussing the wrappers for your trajectory inference method, Monocle, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created docker wrappers so that all methods can be easily run and compared. The code for these wrappers is located in docker containers[1][2]. The way these containers are structured is described in this vignette.

We created 2 separate wrappers:

We are creating this issue to ensure your method is being evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method based on a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!
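
To make the checklist above a bit more concrete, the sketch below shows the kind of information a definition.yml captures: parameters with their defaults, bounds and descriptions, the requested inputs and prior information, and the produced outputs. The field names and values are illustrative assumptions, not the exact dynwrap schema; the linked vignette remains the authoritative reference.

```yaml
# Illustrative sketch only -- field names and values are assumptions,
# not the exact dynwrap schema (see the linked vignette for that).
method:
  id: monocle_ddrtree
  name: Monocle DDRTree

parameters:
  - id: max_components
    type: integer
    default: 2
    lower: 2            # proposed parameter space: are these bounds reasonable?
    upper: 20
    description: Dimensionality of the reduced space used by DDRTree.

wrapper:
  input_required:
    - counts            # raw counts vs. normalised expression: which does the method expect?
  input_optional:
    - start_id          # prior information, e.g. a known start cell
  output:
    - trajectory        # the common trajectory model
    - dimred            # dimensionality reduction, if available
```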

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter

STEMNET

Hello Lars

This issue is for discussing the wrapper for your trajectory inference method, STEMNET, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to make sure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info; a minimal sketch of such a script follows this checklist)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!
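
As a rough illustration of the wrapper script mentioned above, the sketch below reads the inputs and parameters, runs a placeholder ordering step, and converts the result into the common trajectory model with dynwrap. The file paths, the params$ndim parameter and the principal-component ordering are assumptions standing in for the real conventions and for the actual STEMNET call; definition.yml and the linked vignette define what really applies.

```r
# Illustrative sketch of a run.R wrapper -- paths, parameters and the
# placeholder ordering step are assumptions, not the real conventions.
library(dynwrap)

# Read the inputs and parameters handed to the container (assumed locations)
task   <- readRDS("/ti/input/data.rds")
params <- yaml::read_yaml("/ti/input/params.yml")

expression <- task$expression

# --- run the method -------------------------------------------------------
# Placeholder: order cells along the first principal component.
# A real wrapper would call the actual method (e.g. STEMNET) here.
# params$ndim is a hypothetical parameter that would be declared in definition.yml.
pca <- prcomp(expression, rank. = params$ndim)
pseudotime <- setNames(rank(pca$x[, 1]) / nrow(pca$x), rownames(expression))

# --- convert to the common trajectory model -------------------------------
model <- wrap_data(cell_ids = rownames(expression))
model <- add_linear_trajectory(model, pseudotime = pseudotime)
model <- add_dimred(model, dimred = pca$x)

# Write the output where the container expects it (assumed location)
saveRDS(model, "/ti/output/output.rds")
```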

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter

FateID

Hello @dgrun

This issue is for discussing the wrapper for your trajectory inference method, FateID, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to make sure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter

MERLoT

Hello @gonzaparra and @soedinglab

This issue is for discussing the wrapper for your trajectory inference method, MERLoT, which we wrapped for our benchmarking study (10.1101/276907). In our dynmethods framework, we collected some meta information about your method, and created a docker wrapper so that all methods can be easily run and compared. The code for this wrapper is located in a docker container. The way this container is structured is described in this vignette.

We are creating this issue to make sure your method is evaluated in the way it was designed to be used. The checklist below contains some important questions for you to have a look at.

  • Parameters, defined in definition.yml (more info)
    • Are all important parameters described in this file?
    • For each parameter, is the proposed default value reasonable?
    • For each parameter, is the proposed parameter space reasonable (e.g. lower and upper boundaries)?
    • Is the description of the parameters correct and up-to-date?
  • Input, defined in definition.yml and loaded in run.R (more info)
    • Is the correct type of expression requested (raw counts or normalised expression)?
    • Is all prior information (required or optional) requested?
    • Would some other type of prior information help the method?
  • Output, defined in definition.yml and saved in run.R (more info)
    • Is the output correctly processed towards the common trajectory model? Would some other postprocessing method make more sense?
    • Is all relevant output saved (dimensionality reduction, clustering/grouping, pseudotime, ...)?
  • Wrapper script, see run.R (more info)
    • This is a script that is executed upon starting the docker container. It will receive several input files as defined by definition.yml, and is expected to produce certain output files, also as defined by definition.yml.
    • Is the script a correct representation of the general workflow a user is expected to follow when they want to apply your method to their data?
  • Quality control, see the qc worksheet
    • We also evaluated the implementation of each method against a large checklist of good software development practices.
    • Are the answers we wrote down for your method correct and up to date? Do you disagree with certain answers? (Feel free to leave a comment in the worksheet)
      • You can improve the QC score of your method by implementing the required changes and letting us know. Do not gloss over this, as it is the easiest way to improve the overall ranking of your TI method in our study!

The most convenient way for you to test and adapt the wrapper is to install dyno, download and modify these files, and run your method on a dataset of interest or one of our synthetic toy datasets. This is further described in this vignette. Once finished, we prefer that you fork the dynmethods repository, make the necessary changes, and send us a pull request. Alternatively, you can also send us the files and we will make the necessary changes.

If you have any further questions or remarks, feel free to reply to this issue.

Kind regards,
@rcannood and @zouter
