
scipy_proceedings_2012's Introduction

SciPy Proceedings

Paper Format

Papers are formatted using reStructuredText and the compiled version should be no longer than 7 pages, including figures. Here are the steps to produce a paper:

  • Fork the 2012 branch of the `scipy_proceedings <https://github.com/scipy/scipy_proceedings>`__ repository on GitHub.

  • Check out the 2012 branch (git checkout 2012).

  • An example paper is provided in papers/00_vanderwalt. Create a new directory papers/firstname_surname, copy the example paper into it, and modify to your liking.

  • Run ./make_paper.sh papers/firstname_surname to compile your paper to PDF (requires LaTeX, docutils, Python--see below). The output appears in output/firstname_surname/paper.pdf.

  • Once you are ready to submit your paper, file a pull request on GitHub. Please ensure that you file against the correct branch--your branch should be named 2012, and the pull-request should be against our 2012 branch.

  • Please do not modify any files outside of your paper directory.

Pull requests are to be submitted by July 15th, but modifications may be pushed until August 12th.

General Guidelines

- All figures and tables should have captions.
- License conditions on images and figures must be respected (Creative Commons,
  etc.).
- Code snippets should be formatted to fit inside a single column without
  overflow.
- Avoid custom LaTeX markup where possible.

Other markup
------------
Please refer to the example paper in ``papers/00_vanderwalt`` for
examples of how to:

 - Label figures, equations and tables
 - Use math markup
 - Include code snippets

Requirements
------------
 - IEEEtran and AMSmath LaTeX classes
 - `docutils` 0.8 or later
 - `pygments` for code highlighting

scipy_proceedings_2012's People

Contributors

ahmadia, breuleux, dwf, gdesjardins, gpoore, jaberg, jarrodmillman, jhmeinke, jjhelmus, jrjohansson, jseabold, kadambarid, kcarnold, koverholt, lamblin, madhusudancs, minrk, pierre-haessig, prabhuramachandran, scopatz, stefanv, taldcroft, warrenweckesser, wesm


scipy_proceedings_2012's Issues

Review of "The Reference Model for Disease Progression"

Reviewer: Christine Choirat

Center: Institute for Quantitative Social Science

University: Harvard University

Field of interest / expertise: Statistics, Statistical Programming

Country: USA

Article reviewed: The Reference Model for Disease Progression

GENERAL EVALUATION

  • Quality of the approach:

    meets

  • Quality of the writing:

    meets

  • Quality of the figures/tables:

    meets

SPECIFIC EVALUATION

  • Is the code made publicly available and does the article sufficiently
    describe how to access it?

    yes

  • Does the article present the problem in an appropriate context?

    yes

    • explain why the problem is important,

      yes

    • describe in which situations it arises,

      yes

    • outline relevant previous work,

      yes

    • provide background information for non-experts

      yes

  • Is the content of the paper accessible to a computational scientist
    with no specific knowledge in the given field?

    yes

  • Does the paper describe a well-formulated scientific or technical
    achievement?

    yes

  • Are the technical and scientific decisions well-motivated and
    clearly explained?

    yes

  • Are the code examples (if any) sound, clear, and well-written?

    yes

  • Is the paper factually correct?

    AFAIK yes

  • Is the language and grammar of sufficient quality?

    yes. Minor typos:

    • Replace "among hypothesis of disease progression" with "among hypotheses of disease progression"
    • Replace "Never the less" with "Nevertheless"
    • Replace "Not only it depends on many factors" with "Not only does it depend on many factors"
    • Rephrase "Different risk equations found in the literature and parameters they use"
    • Replace "Such a combination of equations can include hypothesis" with "Such a combination of equations can include hypotheses"
    • Replace "the modeler can create several hypothesis" with "the modeler can create several hypotheses"
    • Replace "call it's state transition" with "call its state transition"
    • Replace "the use of the Python languange" with "the use of the Python language"
  • Are the conclusions justified?

    yes

  • Is prior work properly and fully cited?

    yes

  • Should any part of the article be shortened or expanded? Please explain.

    Suggestions:

    1. Section "Simulation language"
      Explain the benefits of creating a simulation language vs Python functions.
    2. Section "Population Generator", Table 2.
      How do you calibrate the Population Generator parameters?
  • In your view, is the paper fit for publication in the conference proceedings?
    Please suggest specific improvements and indicate whether you think the
    article needs a significant rewrite (rather than a minor revision).

    yes

Editor's Notes: A Computational Framework for Plasmonic Nanobiosensing

Reviews were conducted by @minrk in #8 and @satra in #16.

@minrk submitted minor corrections that were accepted in #7.

Reviewers were generally satisfied, some minor copy corrections were suggested (and implemented by @minrk). @hugadams noted in #7 that he lost some of the data from the original experiments and is unable to regenerate plot figures.

It is the decision of the editors that this paper has passed the peer review standards set by the SciPy 2012 Organizing Committee. Thanks to everyone for their contribution.

Review of Python's Role in VisIt

Independent Review Report

Reviewer: Aashish Chaudhary

Department/Center/Division: Scientific Visualization

Institution/University/Company: Kitware Inc.

Field of interest / expertise: Scientific Computing / Visualization

Country: USA

Article reviewed:

GENERAL EVALUATION

Please rate the paper using the following criteria (please use the abbreviation
to the right of the description)::

below doesn't meet standards for academic publication
meets meets or exceeds the standards for academic publication
n/a not applicable

  • Quality of the approach:
    meets
  • Quality of the writing:
    below. I rated the writing below standard because the author(s) have not clearly described, upfront, the challenges of using and implementing the various features in Python. The comparison with other similar toolkits could also have used more detail.
  • Quality of the figures/tables:
    meets

SPECIFIC EVALUATION

For the following questions, please respond with 'yes' or 'no'. If you
answer 'no', please provide a brief, one- to two-sentence explanation.

  • Is the code made publicly available and does the article sufficiently
    describe how to access it? We aim not to publish papers that essentially
    advertise proprietary software. Therefore, if the code is not publicly
    available, please provide a one- to two- sentence response to each of the
    following questions:
    yes the code is publicly available.
    • Does the article focus on a topic other than the features
      of the software itself?
      No. The article mostly focuses on the use of Python in VisIt.
    • Can the majority of statements made be externally validated
      (i.e., without the use of the software)?
      No; since the features described are not mathematical theories or proofs, they cannot be validated without the software.
    • Is the information presented of interest to readers other than
      those at whom the software is aimed?
      Yes. I think the paper describes the use of Python in a parallel computing
      environment really well.
    • Is there any other aspect of the article that would
      justify including it despite the fact that the code
      isn't available?
      N/A
    • Does the article discuss the reasons the software is closed?
      N/A; the software is open source.
  • Does the article present the problem in an appropriate context?
    Specifically, does it:
    • explain why the problem is important,
      Somewhat. It would have been nice to state the problem and challenges more clearly.
    • describe in which situations it arises,
      yes
    • outline relevant previous work,
      Somewhat.
    • provide background information for non-experts
      Not sufficient.
  • Is the content of the paper accessible to a computational scientist
    with no specific knowledge in the given field?
    Yes
  • Does the paper describe a well-formulated scientific or technical
    achievement?
    Yes (technical achievement)
  • Are the technical and scientific decisions well-motivated and
    clearly explained?
    Somewhat. It would have been nice to outline the reasons for picking Python.
  • Are the code examples (if any) sound, clear, and well-written?
    Yes
  • Is the paper factual correct?
    Yes
  • Is the language and grammar of sufficient quality?
    It could be improved a bit. In some places I found missing commas.
  • Are the conclusions justified?
    Yes
  • Is prior work properly and fully cited?
    Somewhat. It would have been nice if the author(s) had cited some more related
    work and compared it with their own in a bit more detail.
  • Should any part of the article be shortened or expanded? Please explain.
    I think the length of the paper is good.
  • In your view, is the paper fit for publication in the conference proceedings?
    Please suggest specific improvements and indicate whether you think the
    article needs a significant rewrite (rather than a minor revision).
    The paper presents the use of Python in various forms in VisIt for scientific computing and visualization. I believe implementing all the features described in this paper within Python is difficult, and I think the author(s) did a great job implementing them in the toolkit. Since this is a technical paper, it would have been nice to compare the work performed with prior work with respect to:
  • Performance: evaluate whether or not performance has improved compared to previous implementations.
  • Ease of use: did the API and the use of Python make it easier to use VisIt in an HPC environment?
    If yes, a user study would have been nice to confirm it.

Overall, I am pleased with the work described here and I am giving it a "maybe" for acceptance.

Editor's Notes: PythonTeX: Fast Access to Python from within LaTeX

Reviews were conducted by @hplgit in #13 and @aashish24 in #6 (although I think the second review may be missing).

@hplgit submitted minor corrections that were accepted in #24. The editors decided that a modern comparison to the IPython Notebook as of 2014 was not necessary, and some more references were added.

It is the decision of the editors that this paper has passed the peer review standards set by the SciPy 2012 Organizing Committee. Thanks to everyone for their contribution.

Review of PythonTeX: Fast Access to Python from within LaTeX

Reviewer: Hans Petter Langtangen

Department/Center/Division: Center for Biomedical Computing

Institution/University/Company: Simula Research Laboratory

Field of interest / expertise: Scientific Computing, Mathematical Modeling

Country: Norway

Article reviewed: See title.

Quality of the approach: meets
Quality of the writing: meets
Quality of the figures/tables: meets

SPECIFIC EVALUATION

For the following questions, please respond with 'yes' or 'no'. If you answer 'no', please provide a brief, one- to two-sentence explanation.

  • Does the article present the problem in an appropriate context? Specifically, does it:
    • explain why the problem is important, YES
    • describe in which situations it arises, YES
    • outline relevant previous work, NO, see comments below
    • provide background information for non-experts YES
  • Is the content of the paper accessible to a computational scientist with no specific knowledge in the given field? YES
  • Does the paper describe a well-formulated scientific or technical achievement? YES
  • Are the technical and scientific decisions well-motivated and clearly explained? YES
  • Are the code examples (if any) sound, clear, and well-written? YES
  • Is the paper factually correct? Seems so.
  • Is the language and grammar of sufficient quality? YES
  • Are the conclusions justified? YES
  • Is prior work properly and fully cited? NO, see comments
  • Should any part of the article be shortened or expanded? Please explain. NO
  • In your view, is the paper fit for publication in the conference proceedings? YES

Please suggest specific improvements and indicate whether you think the article needs a significant rewrite (rather than a minor revision).

Review

PythonTeX will definitely be highly welcome among LaTeX users who use Python for numerical and symbolic work. The article is well written and easy to follow. As such, it can be published as is, but there are two fundamental issues that must be dealt with:
the relation to some key references is not discussed, and much of the functionality can be achieved in other systems.

When people think about the desire to have executable code as part of a LaTeX document, they will probably claim that IPython notebooks already offer this functionality. IPython notebooks are not mentioned in the article. Clearly, PythonTeX has considerably more functionality than an IPython notebook; e.g., output and/or code can be suppressed, and the values of variables in the code can be accessed directly in the text. However, because of the strong momentum of IPython notebooks today, the author should discuss the pros and cons compared to this tool. Having said this, it is equally natural to mention the pros and cons compared to Sage notebooks, which offer much similar functionality.

It is also natural to discuss the relation to literate programming tools. Literate programming tools embed the document's text in the program, while PythonTeX and Doconce (below) embed the code in the text.

It appears that much of the functionality in PythonTeX is automatically achieved by using Mako as a preprocessor. This approach is taken in Doconce. For example, the following Doconce code runs the example given in the Step-by-step derivations ... section in the article, where the calculation of a double integral is shown step by step, using SymPy to perform all the mathematics:

# Execute Python code
<%
import sympy as sm
x, y, a = sm.symbols('x y a')
f = a*x + y**2*sm.sin(y)
step1 = sm.Integral(f, x, y)
step2 = sm.Integral(sm.Integral(f, x).doit(), y)
step3 = step2.doit()
%>

# Make use of results in the above block when writing LaTeX math
!bt
\begin{align*}
${sm.latex(step1)} &= ${sm.latex(step2)}\\
&= ${sm.latex(step3)}
\end{align*}
!et

Blocks between <% and %> are Python code that will be executed, and variables, functions, or modules can be used in the text through syntax like ${module.function(variable)} (which implies a function call, used above to create LaTeX formatting of SymPy expressions).

The result of the LaTeX block above, after Mako is run, becomes

\begin{align*}
\iint a x + y^{2} \sin{\left (y \right )}\, dx\, dy &=
\int \frac{a x^{2}}{2} + x y^{2} \sin{\left (y \right )}\, dy\\
&= \frac{a y}{2} x^{2} + x \left(- y^{2} \cos{\left (y \right )} +
2 y \sin{\left (y \right )} + 2 \cos{\left (y \right )}\right)
\end{align*}

Debugging Python code in Mako is less convenient than debugging Python files directly, so one may prefer to just include the Python code that Mako is supposed to run by

<%
# #include "src/ex1.py"
%>

This is the way this reviewer makes use of SymPy in LaTeX documents to automate the mathematical derivations (i.e., first the SymPy code files are developed and verified, then fragments are included and run in the document such that the text can access the results).
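
For reference, a standalone SymPy file of the kind described could look like the following. This is a minimal sketch, assuming only that SymPy is installed; it is not the reviewer's actual src/ex1.py:

import sympy as sm

x, y, a = sm.symbols('x y a')
f = a*x + y**2*sm.sin(y)

# Build the unevaluated double integral, then evaluate one level at a time
step1 = sm.Integral(f, x, y)
step2 = sm.Integral(sm.Integral(f, x).doit(), y)
step3 = step2.doit()

# Print LaTeX for each verified step so the results can be pulled
# into the document text
for step in (step1, step2, step3):
    print(sm.latex(step))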

Mako can be used in this way directly with LaTeX; the advantage of Doconce is that LaTeX is only one output format: one can equally well create a range of HTML formats, including Sphinx, as well as Markdown, IPython notebooks, etc.

Convinced LaTeX users will certainly feel at home with PythonTeX, and it is important for the community to have the key features of this tool documented. Therefore, I recommend publication of the article. However, it represents just one technical solution. There are several others, and with Mako one can extend LaTeX, as well as any other text format, with the capability of running embedded Python code and accessing the results directly in the text.

Review of A Computational Framework for Plasmonic Nanobiosensing

Reviewer: Min Ragan-Kelley

Department/Center/Division: Helen Wills Neuroscience Institute

Institution/University/Company: UC Berkeley

Field of interest / expertise: IPython

Country: USA

Article reviewed: A Computational Framework for Plasmonic Nanobiosensing (Adam Hughes)

GENERAL EVALUATION

Please rate the paper using the following criteria (please use the abbreviation
to the right of the description)::

below doesn't meet standards for academic publication
meets meets or exceeds the standards for academic publication
n/a not applicable

  • Quality of the approach: meets
  • Quality of the writing: meets
  • Quality of the figures/tables: below

The figures could definitely use some work.

  • Fig 1.
    • mixes at least three typefaces
    • the 's' in nanoparticles is clipped
    • text labels in the right diagram are too small to read easily
    • sketchy markings that look to be from MS Paint
  • Fig 2.
    • yet another typeface, not seen in Fig 1.
  • Fig. 2., 3., 5. all appear to be very low resolution
  • Fig. 5. includes several undefined variables
  • Fig. 6., 7. have several unlabeled or unit-less axes
  • Fig 7. has difficult-to-read labels and lines that overlap other lines or labels

SPECIFIC EVALUATION

For the following questions, please respond with 'yes' or 'no'. If you
answer 'no', please provide a brief, one- to two-sentence explanation.

  • Is the code made publicly available and does the article sufficiently
    describe how to access it?

    yes

  • Does the article present the problem in an appropriate context?
    Specifically, does it:

    • explain why the problem is important,
    • describe in which situations it arises,
    • outline relevant previous work,
    • provide background information for non-experts

    yes

  • Is the content of the paper accessible to a computational scientist
    with no specific knowledge in the given field?

    yes

  • Does the paper describe a well-formulated scientific or technical
    achievement?

    yes

  • Are the technical and scientific decisions well-motivated and
    clearly explained?

    yes

  • Are the code examples (if any) sound, clear, and well-written?

    yes

  • Is the paper factually correct?

    yes (as far as I know, not being an expert in the domain)

  • Is the language and grammar of sufficient quality?

    yes, pending some typos and grammar fixes in PR #7.

  • Are the conclusions justified?

    yes

  • Is prior work properly and fully cited?

    yes

  • Should any part of the article be shortened or expanded? Please explain.

    no

  • In your view, is the paper fit for publication in the conference proceedings?
    Please suggest specific improvements and indicate whether you think the
    article needs a significant rewrite (rather than a minor revision).

    yes, pending minor typo fixes.
    If there is any significant change warranted, it would be the figures,
    but I leave that decision to editors.

Review of "Total Recall: flmake and the Quest for Reproducibility"

Review of "Total Recall: flmake and the Quest for Reproducibility"

Reviewer: Kyle Mandli
Department: Institute for Computational Engineering and Science
Institution: University of Texas at Austin
Field: Applied and Computational Mathematics
Country: USA
Article Reviewed: Total Recall: flmake and the Quest for Reproducibility

General Evaluation

below doesn't meet standards for academic publication
meets meets or exceeds the standards for academic publication
n/a not applicable

  • Quality of the approach:

    Meets

  • Quality of the writing:

    Meets

  • Quality of the figures/tables:

    Meets

Specific Evaluation

  • Is the code made publicly available and does the article sufficiently describe how to access it?

    Yes

  • Does the article present the problem in an appropriate context? Specifically, does it:

    • explain why the problem is important,

      Yes

    • describe in which situations it arises,

      Yes

    • outline relevant previous work,

      Yes

    • provide background information for non-experts

    Some; there's a bit of jargon thrown in sporadically, but I do not think it significantly detracts from the topic.

  • Is the content of the paper accessible to a computational scientist
    with no specific knowledge in the given field?

    Yes

  • Does the paper describe a well-formulated scientific or technical
    achievement?

    Yes

  • Are the technical and scientific decisions well-motivated and
    clearly explained?

    Yes

  • Are the code examples (if any) sound, clear, and well-written?

    Yes, although I think a slight modification to make the CLI examples a bit more readable would be helpful (just use $> or something).

  • Is the paper factually correct?

    To my knowledge yes.

  • Is the language and grammar of sufficient quality?

    A few corrections have been suggested.

  • Are the conclusions justified?

    Yes

  • Is prior work properly and fully cited?

    Yes

  • Should any part of the article be shortened or expanded? Please explain.

    Yes. My major suggestion is that the article either concentrate on flmake, mentioning that one of its features is addressing the reproducibility problem, and shorten the section on reproducibility; or squarely address reproducibility and show how flmake in particular solves it. As it is, the article seems to have a bit of a split personality, with a long section that seems only vaguely related.

  • In your view, is the paper fit for publication in the conference proceedings?
    Please suggest specific improvements and indicate whether you think the
    article needs a significant rewrite (rather than a minor revision).

    Yes. I would strongly encourage the author to think about reorganizing the paper along the lines suggested above, but as a whole the article is worthy of publication in the SciPy 2012 proceedings.

Review for jacob_frelinger - *Fcm - A python library for flow cytometry*

Reviewer: Dav Clark
Department/Center/Division: D-Lab
Institution/University/Company: UC Berkeley
Field of interest / expertise: Computational Social Science / Neuroscience
Country: USA

Article reviewed: Fcm - A python library for flow cytometry

GENERAL EVALUATION

Please rate the paper using the following criteria (please use the abbreviation
to the right of the description)::

below doesn't meet standards for academic publication
meets meets or exceeds the standards for academic publication
n/a not applicable

  • Quality of the approach: meets
  • Quality of the writing: meets
  • Quality of the figures/tables: meets

SPECIFIC EVALUATION

For the following questions, please respond with 'yes' or 'no'. If you
answer 'no', please provide a brief, one- to two-sentence explanation.

  • Is the code made publicly available and does the article sufficiently
    describe how to access it?

    Yes. But a little more on navigating the code would be nice (along with code
    for figures in the paper, etc.)

  • Does the article present the problem in an appropriate context?
    Specifically, does it:

    • explain why the problem is important,

      Yes!

    • describe in which situations it arises,

      Yes!

    • outline relevant previous work,

      Yes!

    • provide background information for non-experts

      Yes!

  • Is the content of the paper accessible to a computational scientist
    with no specific knowledge in the given field?

    I think so. The editors expressed some concern about this, though. Even if you
    don't fully understand the biology, the methods are quite straightforward
    (single-parameter or quadrant-based "gates", or otherwise commonly used
    mixture and k-means models; a minimal illustration of gating follows this
    question list).

  • Does the paper describe a well-formulated scientific or technical
    achievement?

    Yes

  • Are the technical and scientific decisions well-motivated and
    clearly explained?

    Yes! Commendable in the clear explanation of the scientific problem. Extra
    points for explaining the importance of improving methodology / automation.

  • Are the code examples (if any) sound, clear, and well-written?

    Yes, though they are pretty thin.

    "Sensible defaults for hyperparameters have been chosen that in our experience
    perform satisfactorily on all FCS data samples we have analyzed." Might
    aggrivate some readers, but you can only put so much in such a paper... Can
    you refer readers to where they can find these hyperparameters in your code?

  • Is the paper factually correct?

    As far as I can tell.

  • Is the language and grammar of sufficient quality?

    Yes.

  • Are the conclusions justified?

    Yes.

  • Is prior work properly and fully cited?

    Other packages are mentioned (the proprietary ones and R Bioconductor) but
    not cited. Note, however, that mentioning other packages at all is already
    above average for SciPy 2012 (based on my limited sample ;)

  • Should any part of the article be shortened or expanded? Please explain.

    Yes - I'd like more code (or pointers to code) if it's not too much trouble.

  • In your view, is the paper fit for publication in the conference proceedings?
    Please suggest specific improvements and indicate whether you think the
    article needs a significant rewrite (rather than a minor revision).

    Yes

Review of OpenMG: A New Multigrid Implementation in Python

Reviewer: Hans Petter Langtangen

Department/Center/Division: Center for Biomedical Computing

Institution/University/Company: Simula Research Laboratory

Field of interest / expertise: Scientific Computing, Mathematical Modeling

Country: Norway

Article reviewed: See title.

Quality of the approach: meets
Quality of the writing: meets
Quality of the figures/tables: meets

SPECIFIC EVALUATION

For the following questions, please respond with 'yes' or 'no'. If you answer 'no', please provide a brief, one- to two-sentence explanation.

  • Does the article present the problem in an appropriate context? Specifically, does it:
    • explain why the problem is important, YES
    • describe in which situations it arises, YES
    • outline relevant previous work, YES
    • provide background information for non-experts YES
  • Is the content of the paper accessible to a computational scientist with no specific knowledge in the given field? YES
  • Does the paper describe a well-formulated scientific or technical achievement? YES
  • Are the technical and scientific decisions well-motivated and clearly explained? YES
  • Are the code examples (if any) sound, clear, and well-written? YES
  • Is the paper factually correct? Seems so.
  • Is the language and grammar of sufficient quality? YES
  • Are the conclusions justified? YES
  • Is prior work properly and fully cited? YES
  • Should any part of the article be shortened or expanded? Please explain. NO
  • In your view, is the paper fit for publication in the conference proceedings? YES

Please suggest specific improvements and indicate whether you think the article needs a significant rewrite (rather than a minor revision).

Review

OpenMG is a very nice tool for understanding how the multigrid method works. The article is well written and easy to follow, and can be published after a minor revision.

My main criticism is that there should be a closer relationship between the code and the corresponding mathematical description. Make sure the variable names mimic the symbols in the mathematics. Also be more consistent internally in the code (problemshape vs shape, .size vs len()). The initialization of a list by list(range(n)) is misleading - the idea is to make a list of a fixed length with uninitialized elements, more clearly obtained by [None]*n.
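
To illustrate the last point, a minimal sketch (not OpenMG's actual code):

n = 5

# Misleading: builds the list [0, 1, ..., n-1], implying the values matter
placeholders = list(range(n))

# Clearer: a fixed-length list of uninitialized slots
placeholders = [None] * n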

Minor points

  • PyAMG is attributed to Bell et al.
  • Some more information about the differences between AMGlab and OpenMG (apart from the programming languages) would be interesting.
  • Many of the backslashes in the code snippets are redundant. Remove them to produce a prettier layout of the code.

frelinger review

@jfrelinger

Nice paper, thanks! Since I am not a domain expert, just a few editorial comments:

  • "few opensource tool" -> tools
  • Introduction contains quite a bit of jargon ("monoclonal antibodies are conjugated"): if possible, put some equivalent "English" in brackets that others may understand
  • What is on the x axis of the EQAPOL histogram?
  • Gallery is missing at the provided URL
  • caner -> cancer

johansson: qtip paper review

Disclosure: I am not an expert in quantum mechanics or simulation.

This paper is perfectly suited to publication in the proceedings. It gives a concise and well-written exposition of the QuTiP library for quantum simulation.

I only have some minor editorial feedback:

Link to http://qutip.org/ early in the paper (abstract, introduction?)
"as oppose to" -> "as opposed to"
"The main challenge in numerical simulation of quantum
system" -> systems?
"as oppose of wavefunctions" ?
"open quantum system" -> "open quantum systems"
"provides are: tensor, ....; ptrace, ..." (commas and semi-colons)"
"packages for python" -> "packages for Python"
"various time-evolutions solvers. We" -> "various time-evolution solvers, we"
Last page: "without for example" -> "without, for example"
spaces between paragraphs
use -> uses
Capitalize python -> Python (several instances)
"accessible through a superior" --> what does "superior" imply?

Review of A Tale of Four Libraries

Independent Review Report

Reviewer: Min Ragan-Kelley

Department/Center/Division: Helen Wills Neuroscience Institute

Institution/University/Company: UC Berkeley

Field of interest / expertise: IPython

Country: USA

Article reviewed: A Tale of Four Libraries (Alejandro Weinstein)

GENERAL EVALUATION

Please rate the paper using the following criteria (please use the abbreviation
to the right of the description):

below     doesn't meet standards for academic publication
meets     meets or exceeds the standards for academic publication
n/a       not applicable
  • Quality of the approach: meets
  • Quality of the writing: meets
  • Quality of the figures/tables: meets

SPECIFIC EVALUATION

For the following questions, please respond with 'yes' or 'no'. If you
answer 'no', please provide a brief, one- to two-sentence explanation.

  • Is the code made publicly available and does the article sufficiently
    describe how to access it?

    yes, though a link is given to a companion repo,
    which should contain examples, but is empty.
    However, the discussion is primarily about other publicly available libraries.

  • Does the article present the problem in an appropriate context?
    Specifically, does it:

    • explain why the problem is important,
    • describe in which situations it arises,
    • outline relevant previous work,
    • provide background information for non-experts

    yes

  • Is the content of the paper accessible to a computational scientist
    with no specific knowledge in the given field?

    yes

  • Does the paper describe a well-formulated scientific or technical
    achievement?

    yes

  • Are the technical and scientific decisions well-motivated and
    clearly explained?

    yes

  • Are the code examples (if any) sound, clear, and well-written?

    yes

  • Is the paper factually correct?

    yes (as far as I know, not being a machine learning expert)

  • Is the language and grammar of sufficient quality?

    yes, after some minor edits in PR #9

  • Are the conclusions justified?

    yes

  • Is prior work properly and fully cited?

    yes

  • Should any part of the article be shortened or expanded? Please explain.

    no

  • In your view, is the paper fit for publication in the conference proceedings?
    Please suggest specific improvements and indicate whether you think the
    article needs a significant rewrite (rather than a minor revision).

    yes

Review of "Self-driving Lego Mindstorms Robot"

Independent Review Report

.. note:: Please be aware that all reviews are made public including
the reviewer's name.

Reviewer: Jason K. Moore

Department/Center/Division: Human Motion and Control Lab, Mechanical Engineering Department

Institution/University/Company: Cleveland State University

Field of interest / expertise: multibody dynamics, biomechanics, control
systems, system identification

Country: USA

Article reviewed: Self-driving Lego Mindstorms Robot

INSTRUCTIONS

Please read the submitted article and fully complete this form. Since we don't
have a copy editor, we also request that you annotate the PDF [1]_ to highlight
typos, formatting issues, and grammatical mistakes.

The goal of the review process is two-fold. First, it guides authors in
improving their papers and, secondly, ensures that published works are of a
professional academic standard.

Research in science and engineering increasingly relies on software for
data processing and management as well as theoretical exploration. However,
the effort necessary to develop this software is rarely recognized as having
the same academic worth as other aspects of the research. These proceedings
are, at least in part, intended to address this shortcoming.

An article focused on software development necessarily differs from the
standard scientific article with respect to format. For instance, it is
unlikely to have the same sections (i.e., introduction, methods, results,
conclusion). You may therefore have to rely on other factors to decide whether
the paper sets a high enough standard as an academic publication.

Please note that, while reviewers' recommendations regarding a paper's
suitability for publication are seriously considered, the final decision rests
with the proceedings editors.

.. [1] We recommend the free version of `PDF XChange Viewer <http://www.tracker-software.com/product/pdf-xchange-viewer>`__ for
Linux (Wine) and Windows. Under OSX, annotation is provided by Preview
as well as `Skim <http://skim-app.sourceforge.net/>`__.

GENERAL EVALUATION

Please rate the paper using the following criteria (please use the abbreviation
to the right of the description)::

below doesn't meet standards for academic publication
meets meets or exceeds the standards for academic publication
n/a not applicable

  • Quality of the approach: below
  • Quality of the writing: below
  • Quality of the figures/tables: below

SPECIFIC EVALUATION

For the following questions, please respond with 'yes' or 'no'. If you
answer 'no', please provide a brief, one- to two-sentence explanation.

  • Is the code made publicly available and does the article sufficiently
    describe how to access it? We aim not to publish papers that essentially
    advertise proprietary software. Therefore, if the code is not publicly
    available, please provide a one- to two- sentence response to each of the
    following questions:
    • Does the article focus on a topic other than the features
      of the software itself?
    • Can the majority of statements made be externally validated
      (i.e., without the use of the software)?
    • Is the information presented of interest to readers other than
      those at whom the software is aimed?
    • Is there any other aspect of the article that would
      justify including it despite the fact that the code
      isn't available?
    • Does the article discuss the reasons the software is closed?

Yes, the code is publicly available and relies on a variety of open source
packages. The software can be downloaded from the github link provided in the
paper. The Android image capture library may not be open source.

  • Does the article present the problem in an appropriate context?

Yes.

Specifically, does it:

  • explain why the problem is important,

Yes, he gives background on self-driving car research to show the need.

  • describe in which situations it arises,
  • outline relevant previous work,

Maybe; only a couple of papers are cited about self-driving vehicles and
techniques. Much of the literature is not covered, and there is no detailed
commentary on how the method presented in the paper compares to other approaches.

  • provide background information for non-experts

Yes, the paper seems to be written with a non-expert audience in mind.

  • Is the content of the paper accessible to a computational scientist
    with no specific knowledge in the given field?

Yes.

  • Does the paper describe a well-formulated scientific or technical
    achievement?

No, the paper doesn't propose a hypothesis or follow the scientific method. It
is more like a report on how to use several software libraries to accomplish a
task rather than a well-formulated scientific achievement. I guess it can be
called a "technical achievement" because the author achieved his goal of
reproducing another's work with different software and hardware.

  • Are the technical and scientific decisions well-motivated and
    clearly explained?

No, the motivation only seems to be to replicate previous work. The reasons for
choosing the hardware, software, and parameters for both are not explained at
all. It seems as if the author just used informed guesses at values for the
neural network, for example.

  • Are the code examples (if any) sound, clear, and well-written?

Yes, but it'd be nice if they followed PEP8 standards for readability reasons.

  • Is the paper factually correct?

Yes, the method and results seem to be factually correct.

  • Is the language and grammar of sufficient quality?

Yes.

  • Are the conclusions justified?

No, because there are no conclusions. The paper simply describes how something
is done. Most scientific works explain what the conclusions are and make some
reasoning on why those conclusions are true. The conclusion here seems to be
simply that someone else's work can be replicated with different hardware and
software while using the same methods.

  • Is prior work properly and fully cited?

No. The blog post that described the work being replicated is cited
with a URL, and a few academic papers are cited, but the wide body of work on
self-driving vehicles and neural networks has been ignored.

  • Should any part of the article be shortened or expanded? Please explain.

Yes. The prior literature on the subject should be expanded, and comparisons
between this method and others are needed. Furthermore, there should be some
scientific discourse on the details of this method, along with quantitative
measures describing the performance of this technique so that comparisons can
be made to other software, hardware, and methods.

  • In your view, is the paper fit for publication in the conference proceedings?
    Please suggest specific improvements and indicate whether you think the
    article needs a significant rewrite (rather than a minor revision).

This paper presents the replication of a "self-driving" robot vehicle
implementation that utilizes a trained neural network to follow a specific
course, using visual inputs to control the vehicle's driving motors. From what
is written, it seems that prior work can be replicated using a Python-based
software stack and the Lego robot kit. But the article does not exhibit
the depth that other quality scientific articles on this subject offer. The
reader is simply instructed in the how, i.e. the method, of implementing this
system using a very specific selection of software packages. Little to no
information is provided that gives the reader technical information on the
capabilities of this method, particularly not for comparison purposes to other
methods. No hypothesis or claim is made, nor is any proof given to back one up.
The article seems more akin to an undergraduate lab tutorial that simply
shows the student how to do something, but misses the "why" portion that
generally makes a contribution interesting and publishable for the scientific
community. I think this article could be transformed into a valuable scientific
contribution if these things were changed/added:

  1. A proper literature review on other methods. At the minimum, this could
    detail other software libraries with these capabilities and at the maximum
    this could include comparison to other methods of autonomously controlling a
    vehicle.
  2. A statement, hypothesis, or claim about what makes this method
    special/different, and the proof to back it up. If this is simply a
    replication study of previous work using different methods, then the claims
    from the previous study should also be proved by the method presented in
    this paper along with detailed quantitative comparisons of how well the
    other method was replicated.
  3. More technical detail on the method. If comparing software, we need to know
    things like how easy it is to use, how fast it is, how robust it is, what
    are the limitations, etc., all with respect to other available software. The
    technical details of the hardware are also important so that we know its
    advantages and limitations with respect to other methods. If comparing
    algorithms (neural nets, etc) then we need to know more detail about the
    methods and why the parameters you chose are good and what they mean.
    Explanation of the neural net framework you chose and why would be helpful.
  4. Give accurate and precise results. Simply saying that your vehicle completes
    the course "about 2/3rds" of the time is not science. You also need
    dimensional descriptions of the course, the vehicle, and metrics on how bad
    or good it actually performs. No one can compare their vehicle and
    implementation to yours if this isn't provided. We also have no idea if you
    actually replicated the prior work because there are no quantitative
    measures.
  5. The tone and grammar of the paper resemble a blog post as opposed to a
    scientific article. I'm not opposed to having more personal writing, but it
    needs to be justified and it needs to contribute to the understanding of the
    article. As it stands, this would be a fine blog post but it is quite far
    from an average scientific article.

(Reviewed) Uncertainty Modeling with SymPy Stats

Reviewer:
Michael McKerns
the Uncertainty Quantification Foundation
Arcadia, California, USA

Area of Expertise: It's uncertain.

General Evaluation

  • Quality of the approach: meets
  • Quality of the writing: meets
  • Quality of the figures/tables: meets

Specific Evaluation

  • Is the code made publicly available and does the article sufficiently describe how to access it?

    Yes.

  • Does the article present the problem in an appropriate context?

    Yes.

  • Is the content of the paper accessible to a computational scientist with no specific knowledge in the given field?

    For as "math-y" as the topic is, the author makes it as accessible as possible.

  • Does the paper describe a well-formulated scientific or technical achievement?

    Yes.

  • Are the technical and scientific decisions well-motivated and clearly explained?

    Yes.

  • Are the code examples (if any) sound, clear, and well-written?

    Yes.

  • Is the paper factually correct?

    As far as I can tell.

  • Is the language and grammar of sufficient quality?

    Yes. It's actually excellently written.

  • Are the conclusions justified?

    Yes.

  • Is prior work properly and fully cited?

    This could use some improvement. There are similar ideas that exist both in Python code and in the published literature, but no references to them are included. For example, some of the capability presented exists in mystic (reviewer's shameless plug), as well as in some Bayesian Python codes and in tools outside of Python. Also, there are mentions of CUDA, BLAS, LAPACK, MPI, and so on that should have references. The point being, this work was not done in a bubble.

  • Should any part of the article be shortened or expanded? Please explain.

    No.

  • In your view, is the paper fit for publication in the conference proceedings?

    This is one of the better scientific articles I've read in quite a while, not just in scientific computing but in science in general. So, in a word, yes.

Review of "cphVB: A System for Automated Runtime Optimization and Parallelization of Vectorized Applications"

Review of "cphVB: A System for Automated Runtime Optimization and Parallelization of Vectorized Applications"

Reviewer: Kyle Mandli
Department: Institute for Computational Engineering and Science
Institution: University of Texas at Austin
Field: Applied and Computational Mathematics
Country: USA
Article Reviewed: cphVB: A System for Automated Runtime Optimization and Parallelization of Vectorized Applications

General Evaluation

below doesn't meet standards for academic publication
meets meets or exceeds the standards for academic publication
n/a not applicable

  • Quality of the approach:

    Meets with caveats (below).

  • Quality of the writing:

    Meets

  • Quality of the figures/tables:

    Meets

Specific Evaluation

  • Is the code made publicly available and does the article sufficiently describe how to access it?

    No, although at some point it was, I think. Googling the code led to a set of page links that did not seem to point to anything.

  • Does the article present the problem in an appropriate context? Specifically, does it:

    • explain why the problem is important,

      Yes

    • describe in which situations it arises,

      Yes

    • outline relevant previous work,

      Yes and no. Given the length of time that has passed between this review and the original submission, I think there is work more relevant today, but covering it would require a large rewrite of this part of the paper.

    • provide background information for non-experts

      Somewhat; there is terminology that is assumed known, but it is not egregious.

  • Is the content of the paper accessible to a computational scientist
    with no specific knowledge in the given field?

    Somewhat, it does assume a working knowledge of the problem being addressed and low-level memory management.

  • Does the paper describe a well-formulated scientific or technical
    achievement?

    Yes

  • Are the technical and scientific decisions well-motivated and
    clearly explained?

    Yes

  • Are the code examples (if any) sound, clear, and well-written?

    Yes.

  • Is the paper factually correct?

    To my knowledge yes.

  • Is the language and grammar of sufficient quality?

    A few corrections have been suggested in the marked up PDF.

  • Are the conclusions justified?

    Somewhat. The performance seems encouraging, but there are a number of issues (detailed below). I think the most egregious of these is the claim that this approach will work on clusters and supercomputers, which is definitely not clear to me. Issues such as communication and latency are not addressed at all and would be critical for these setups.

  • Is prior work properly and fully cited?

    Yes

  • Should any part of the article be shortened or expanded? Please explain.

    The performance study is the crux of the article and should be expanded upon with additional testing and explanations. Some of the design explanations could be condensed perhaps to make room for this.

  • In your view, is the paper fit for publication in the conference proceedings?
    Please suggest specific improvements and indicate whether you think the
    article needs a significant rewrite (rather than a minor revision).

    I think my largest qualm with the article as is involves the Performance Study section. Some specific comments:

    • The vector engine setups are never explained (although I think they can be inferred)
    • I think that most computational scientists would be pretty hard pressed to call this a strong-scaling experiment (going from 1 to 2 cores).
    • The code for the benchmarks has not been provided
    • The experiments should probably be longer, to test out other issues dealing with normal system operation that even ensembles of 3 will not catch without longer-term operation.

    Besides this, the scope of the work seems to be very limited (only to a single-node machine). As mentioned above, claiming this works on a supercomputer seems completely unsupported given the work in the article.

Other Comments/Questions

  • The Related Work section could be a lot better, with less of a laundry-list approach and more on how previous work shaped the design of cphVB (for instance, where certain decisions were made because of it).
  • The memory overhead may not be large (as was shown) for copying between CPU cores but what about discrete accelerators? This seems to be a much more difficult question and one that is not compellingly answered or mentioned.
  • Is using and catching segfaults a wise design decision? Addressing this would lead to a much more compelling article. As I read that section, a number of questions came up, including how fragile this is, whether it works on all kernels, what happens with code that calls other libraries, etc.

Review of cyrus_harrison - *Python's Role in VisIt*

Reviewer: Dav Clark
Department/Center/Division: D-Lab
Institution/University/Company: UC Berkeley
Field of interest / expertise: Computational Social Science / Neuroscience
Country: USA

Article reviewed: Python's Role in VisIt

GENERAL EVALUATION

Please rate the paper using the following criteria (please use the abbreviation
to the right of the description)::

below doesn't meet standards for academic publication
meets meets or exceeds the standards for academic publication
n/a not applicable

  • Quality of the approach: meets
  • Quality of the writing: meets
  • Quality of the figures/tables: meets

SPECIFIC EVALUATION

For the following questions, please respond with 'yes' or 'no'. If you
answer 'no', please provide a brief, one- to two-sentence explanation.

  • Is the code made publicly available and does the article sufficiently
    describe how to access it?

    Yes

  • Does the article present the problem in an appropriate context?
    Specifically, does it:

    • explain why the problem is important,

    No, though it's somewhat obvious. It would be great to have a clearly
    described explanation of the user models (e.g., a Qt UI developer building
    for a scientist end-user).

    • describe in which situations it arises,

    Yes (-ish). Somewhat implicit, could be more explicit.

    • outline relevant previous work,

    I don't know. The tools that this is built on are well articulated. There are
    other tools, even at LBNL, that take similar approaches (e.g., KBase), but I
    don't know when these projects were started relative to this paper being
    written. These other tools wouldn't be replacements (as they are implemented
    in different domains), but they would be worth referring to.

    • provide background information for non-experts

    Yes

  • Is the content of the paper accessible to a computational scientist
    with no specific knowledge in the given field?

    Yes

  • Does the paper describe a well-formulated scientific or technical
    achievement?

    Yes, it's clearly an awesome system.

  • Are the technical and scientific decisions well-motivated and
    clearly explained?

    No, I'd love to have a bit more about the problem domain and what kinds of
    solutions this system enables.

  • Are the code examples (if any) sound, clear, and well-written?

    Yes. Just the right amount of commenting. Maybe add a little whitespace to
    break up sections.

  • Is the paper factually correct?

    Yes (I think - haven't run any code, for example)

  • Is the language and grammar of sufficient quality?

    Yes.

  • Are the conclusions justified?

    Yes, but they could be stronger.

  • Is prior work properly and fully cited?

    Yes (but see above on "previous work")

  • Should any part of the article be shortened or expanded? Please explain.

    Per the above: more on users of the system, and the impacts of its solutions.

  • In your view, is the paper fit for publication in the conference proceedings?
    Please suggest specific improvements and indicate whether you think the
    article needs a significant rewrite (rather than a minor revision).

    Yes. Fine as is. Could be better.

COMMENTS

The overview.pdf is a bit out of sync with the text where the CLI is Python.
You may as well make the text parallel the graphic. The details are explained
later in the paper; this just sets up a period of potential confusion.

The approach using Qt designer UIs is definitely in the "moderately awesome"
category. I've seen the basic Qt / Python tech, but I rarely see examples like
this that make it seem really compelling. If you ever want to do a presentation
at UC Berkeley to the python folks, you're most welcome!

(Reviewed) Total Recall: flmake and the Quest for Reproducibility

Reviewer:
Michael McKerns
Center for Advanced Computing Research
Division of Engineering and Applied Science
California Institute of Technology
Pasadena, California, USA

Area of Expertise: Duh, everything.

General Evaluation

  • Quality of the approach: meets
  • Quality of the writing: below
  • Quality of the figures/tables: meets

Specific Evaluation

  • Is the code made publicly available and does the article sufficiently describe how to access it?

    Code is publicly available, and the article provides a link to the homepage as a publication reference. It would be better if the link were provided in the article text.

  • Does the article present the problem in an appropriate context?

    Yes.

  • Is the content of the paper accessible to a computational scientist with no specific knowledge in the given field?

    What the heck does this question mean, really? Yes, the paper doesn't use a lot of jargon, but it does assume that the reader is at least a little cognizant of standards and practices in scientific computing. I believe the article is at a good level for computational scientists who may be dabbling in scientific computing, as typified by users and erstwhile developers of scientific Python community software (i.e., hacks trying to get faculty positions).

  • Does the paper describe a well-formulated scientific or technical achievement?

    Yes.

  • Are the technical and scientific decisions well-motivated and clearly explained?

    Yes, extremely well-motivated. However, the technical and scientific decisions are only somewhat clearly explained, so on that half I'd have to say no. Portions of the article are meandering, confusing, and poorly written -- primarily the abstract and the introduction. Since the abstract and introduction are the primary locations for providing a clear picture of the technical and scientific decisions made in the paper, this is where the article falls flat. There are other sections in the text, such as "Why Reproducibility is Important" or "Conclusions and Future Work", that may serve better as a clear motivation for the decisions in this article. The abstract and introduction are a steaming pile of verbiage, and it seems to this reviewer they were put together post-haste and pasted in front of an otherwise very well-written flmake user manual.

  • Are the code examples (if any) sound, clear, and well-written?

    Mostly. See Detailed Notes below.

  • Is the paper factually correct?

    As far as I can tell.

  • Is the language and grammar of sufficient quality?

    No. The bulk of the paper is well-written; however, portions of the text are elliptical. See "Detailed Notes" below.

  • Are the conclusions justified?

    Yes.

  • Is prior work properly and fully cited?

    Yes.

  • Should any part of the article be shortened or expanded? Please explain.

    The paper somewhat suffers from a dissociative identity disorder, as portions of it are clearly a user manual for flmake and portions of it are a discussion on reproducibility in scientific computing. The two sides of this paper are not well integrated, in general. The section on reproducibility could use the same level of examples as the first half of the paper. Much of the introduction could actually be cut were the paper reorganized. The article should pick a central theme: is it an article on reproducibility, with flmake as a case study, or is it an article on the use of flmake, with an emphasis on the features of flmake that enable reproducibility?

  • In your view, is the paper fit for publication in the conference proceedings?

    This has the makings of an excellent paper, and contains some important work. However, in its current state, the paper is unfit for publication. Portions need a rewrite. Details follow.

Detailed Notes

Abstract

  • The tense is mixed.

    Best to pick past tense.

  • "Canonically, each of these tasks"

    which tasks? referring to the basic steps?

  • "However with the recent advent of flmake"

    oddly worded

  • "fully reproducible way"

    should define reproducibility in this context before using it

  • "to achieve such reproducibility a number of developments and abstractions were needed, some only enabled by Python"

    there is a lot wrong with this sentence. 'such reproducibility'... you didn't explain what reproducibility is, nor did you demonstrate such reproducibility. 'some only enabled by Python'... it dangles and is bad English.

  • "These methods were widely"

    Which methods?

  • "The process of writing flmake opens many questions"

    It wasn't likely the process. Maybe better "Writing flmake opened"

Introduction

  • "in a repeatable way [FLMAKE]"

    this is not defined, but sounds like 'automated'.

  • "none of the prior attempts have placed reproducibility as their primary concern"

    again, needs definition to have meaning here.

  • "This is in part because"

    What is 'This'?

  • "setup metadata required alterations to the build system"

    Yes, and? Why should I care? What's the big deal?

  • "The development of flmake... typically under its own version control"

    Because the build system works how? git? svn? Needs details.

  • "For each of the important tasks... stored directly in the description"

    Unclear if this is describing the 'old' way of doing things or the 'new' way.

  • "it fundamentally increases the scientific merit of FLASH simulations"

    I'd agree that a job builder and launcher that (1) captures metadata and parameters in a way that all information pertaining to executing a FLASH job is logged and fully available (the notebook concept), and (2) automates the workflow for FLASH simulations, is a huge benefit, and will likely increase the quality and reproducibility of work. This work is not only a nice feature, but possibly a significant advance for FLASH. The abstract and introduction do not clearly present it as such, and that is a major detriment to the paper. If I were not reviewing the article, I would have given up reading it before completing the introduction, and probably at the abstract.
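
    To make the notebook concept concrete, here is a hypothetical sketch of what logging launch-time metadata can look like (this is not flmake's actual API; all names and the log format are this reviewer's own):

        # Record enough metadata at launch time that a run can be audited
        # or re-examined later.
        import json
        import os
        import platform
        import sys
        import time

        def record_run(command, parameters, logfile="run_log.json"):
            """Append one launch record to a simple JSON-lines log."""
            entry = {
                "command": command,
                "parameters": parameters,
                "timestamp": time.time(),
                "cwd": os.getcwd(),
                "python": sys.version,
                "host": platform.node(),
            }
            with open(logfile, "a") as f:
                f.write(json.dumps(entry) + "\n")

        record_run("run", {"nend": 1000, "restart": False})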

  • "The methods described herein... the same reproducibility strategy... Thus flmake shows that reproducibility... command line utilites"

    What is this saying? What strategy? The lack of detail in this section makes it very confusing to the reader. Again, too much elliptical language where driving the point home is needed.

Source & Project Paths Searching

  • "classic Sedov problem"

    Cite?

Dynamic Run Control

  • "update the flash.par file"

    What is flash.par?

Example Workflow

  • "Oops, it died... clean 1"

    This doesn't correspond to the text, 'create and run the simulation'. The text should explain what is happening in the example, if the 'in-code' documentation is not sufficient.

Why Reproducibility is Important

  • "True to its part of speech"

    What does that mean? What does that refer to? Poor grammar.

  • "However, most scientists choose to not utilize these technologies. This is akin to a chemist not keeping a lab notebook."

    Excellent point. Poor grammar.

  • "this is in fact no greater than what is currently expected from scientists with regard to Statistics"

    Another good point; however, it is possibly a counter-example to your argument. Misuse of statistics is also a huge issue in reproducibility, and this reviewer would argue that a majority of scientific papers in the last 50 years contain misused and/or incorrect statistics, and thus their conclusions may also be suspect.

Command Time Machine

  • "Modules inside of... another in a manner relevant to reproducibility."

    Could use an explicit example to clarify.
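
    For instance, a hypothetical sketch of the kind of explicit example that would help (the stdlib csv module stands in for any imported dependency; the names are this reviewer's):

        # A command's behavior can depend on the source of a module it
        # imports, so reproducing the command later means recording that
        # imported state, e.g. by hashing the module source.
        import hashlib
        import inspect
        import csv

        def fingerprint(module):
            """Hash a module's source so its imported state can be logged."""
            source = inspect.getsource(module)
            return hashlib.sha1(source.encode()).hexdigest()

        print("csv module fingerprint:", fingerprint(csv))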

Conclusions and Future Work

  • "no previous system included a mechanism to non-destructively execute previous command incarnations similar to flmake reproduce"

    For FLASH or in general?

  • "software-in-science project"

    "project" should be plural

Review for alejandro_weinstein - *A Tale of Four Libraries*

Reviewer: Dav Clark
Department/Center/Division: D-Lab
Institution/University/Company: UC Berkeley
Field of interest / expertise: Computational Social Science / Neuroscience
Country: USA

Article reviewed: A Tale of Four Libraries

GENERAL EVALUATION

Please rate the paper using the following criteria (use the abbreviation to
the left of each description):

below: doesn't meet the standards for academic publication
meets: meets or exceeds the standards for academic publication
n/a:   not applicable

  • Quality of the approach: meets
  • Quality of the writing: meets
  • Quality of the figures/tables: meets (though I find the standards for networks
    to be low!)

SPECIFIC EVALUATION

For the following questions, please respond with 'yes' or 'no'. If you
answer 'no', please provide a brief, one- to two-sentence explanation.

  • Is the code made publicly available and does the article sufficiently
    describe how to access it?

    No: the GitHub repo (https://github.com/aweinstein/a_tale) contains only a
    README. I suspect this is an oversight.

  • Does the article present the problem in an appropriate context?
    Specifically, does it:

    • explain why the problem is important,

    Somewhat implicitly; see below, where I question whether "numpy arrays" in
    particular are what's important. The motivation for the science could be
    stronger.

    • describe in which situations it arises,

    Implicitly

    • outline relevant previous work,

    The theoretical background is outlined, but other approaches to similar
    theoretical concerns are not.

    • provide background information for non-experts

    Yes, but perhaps a bit technical (see below)

  • Is the content of the paper accessible to a computational scientist
    with no specific knowledge in the given field?

    Yes, though the description of RL is somewhat technical for a broad
    audience. A formulation in which there is a decision rule and an update
    rule is perhaps less general, but likely easier to grok (a minimal sketch
    of that framing follows below).

    Nice examples for similarity.
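
    By way of illustration, here is a minimal sketch of the decision-rule plus
    update-rule framing suggested above (a toy two-armed bandit; all names are
    this reviewer's, not the authors' code):

        import random

        Q = [0.0, 0.0]             # value estimates for a two-armed bandit
        alpha, epsilon = 0.1, 0.2  # learning rate, exploration probability

        def decide():
            """Decision rule: explore with probability epsilon, else exploit."""
            if random.random() < epsilon:
                return random.randrange(len(Q))
            return max(range(len(Q)), key=Q.__getitem__)

        def update(action, reward):
            """Update rule: move the estimate toward the observed reward."""
            Q[action] += alpha * (reward - Q[action])

        for _ in range(1000):
            a = decide()
            r = random.gauss(0.5 if a == 1 else 0.0, 1.0)  # arm 1 pays more
            update(a, r)

        print(Q)  # Q[1] should end up noticeably larger than Q[0]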

  • Does the paper describe a well-formulated scientific or technical
    achievement?

    Yes

  • Are the technical and scientific decisions well-motivated and
    clearly explained?

    Yes, but could be better motivated.

  • Are the code examples (if any) sound, clear, and well-written?

    There are no code examples, and no code is available (I think the authors
    forgot to populate their companion GitHub repo).

  • Is the paper factually correct?

    As far as I can tell.

  • Is the language and grammar of sufficient quality?

    Yes

  • Are the conclusions justified?

    Yes, but they are a bit odd. I imagine few individuals choosing whether or
    not to use a library based on whether numpy arrays are used. Indeed, as
    long as a library supports the more generic Python buffer interface, I
    suspect you'd have few problems compared to numpy arrays (e.g., with
    rpy2.robjects classes, PIL, etc.); see the small illustration below.
    Perhaps the strength comes from sparse numpy arrays?
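
    As a small illustration of the buffer-interface point (toy data, chosen by
    this reviewer, not taken from the paper):

        # Data exposing the generic buffer protocol can be consumed by NumPy
        # without having started life as an ndarray.
        import array
        import numpy as np

        buf = array.array('d', [1.0, 2.0, 3.0])  # stdlib buffer exporter
        arr = np.frombuffer(buf)                 # zero-copy float64 view
        assert arr.sum() == 6.0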

  • Is prior work properly and fully cited?

    Yes. It might be nice to see a comparison to alternate approaches, though.

  • Should any part of the article be shortened or expanded? Please explain.

    Yes, at least some code snippets should be provided.

  • In your view, is the paper fit for publication in the conference proceedings?
    Please suggest specific improvements and indicate whether you think the
    article needs a significant rewrite (rather than a minor revision).

    Yes

Review comments for "Uncertainty Modeling with SymPy Stats"

This paper describes a Python package that can create symbolic models residing between the low-level machine/programming language and the high-level abstract solution to a problem. The symbolic model, if I understand correctly, seems to play the same role as Java bytecode, aiming at abstracting a problem solution away from a machine- or code-specific context into a mathematical model, such that it is understandable in different computational environments. I am not sure I am the best person to review this paper, since the idea is very abstract. If the goal of this symbolic model is to enable the domain expert and the programmer to communicate and understand each other, I don't think the current work fulfills it, since the model is really hard for a non-mathematician to understand. I would still suggest accepting it, though, since the paper describes some real work and it deserves to be delivered to a broader audience.
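
For readers who have not seen the package, here is a minimal example of the kind of symbolic model the paper describes (the variable names are mine; Normal, P, and E are the sympy.stats entry points):

    # A random variable is declared symbolically; queries about it return
    # symbolic expressions rather than numbers.
    from sympy.stats import Normal, P, E

    X = Normal('X', 0, 1)   # a standard normal random variable
    print(P(X > 1))         # a symbolic probability, in terms of erf
    print(E(X**2))          # a symbolic expectation; simplifies to 1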

Some of my specific comments:

  1. Figures cannot be displayed correctly in the rst file.
  2. Typo in the conclusion section: "undertainty" --> "uncertainty".
