aimss's People

Contributors

ao99, dependabot[bot], jacquescarette, oluowoj, peter-michalski, smiths

aimss's Issues

LBM solution requires academic license

Hi @smiths and @JacquesCarette ,

One of the prospective solvers requires that I apply for an academic license through a form in order to access the code. The form requires the following:

"By submitting the Registration Form to STFC, you are confirming that:

  1. the institution whose name appears as the Licensee in the Registration Form agrees to the terms of the Licence Agreement;
  2. you have authority to agree to the terms of the Licence Agreement and to enter into a contract with STFC on behalf of that institution;
  3. the DL_MESO Software will be used only for Academic Purposes (as defined in the Licence Agreement);
  4. you will acknowledge use of DL_MESO when publishing any data obtained with its assistance by including a reference either to its website (www.ccp5.ac.uk/DL_MESO) or the article: 'M.A. Seaton, R.L. Anderson, S. Metz and W. Smith, Mol. Sim., 39, 796-821 (2013)';
  5. the information given on the Registration Form is true; and
  6. you agree to your personal data being used for the purpose of managing your institution's licence of the DL_MESO Software, including contacting you to tell you about changes to and error corrections for that software and generally about the software.

Please read the terms of the Licence Agreement carefully. If you do not agree to them, you should not submit the Registration Form.

A contract between the Licensee and STFC will come into existence when you click on the SUBMIT button at the end of the Registration Form."

The above information is from https://www.scd.stfc.ac.uk/Pages/DL_MESO-register.aspx

I am not in a position to agree to points 1 and 2. How do you suggest I proceed?
I have gathered information from quite a few solvers already. If it is easier I could add this one to the discard pile.

Measure 3 to 5 LB Solvers

Following the template at:

https://github.com/adamlazz/DomainX

Measure 3 to 5 LB Solvers. While doing this, critically evaluate the previous template. Think about what we can add and what we can take out.

This file explains the entries in the template:

https://github.com/adamlazz/DomainX/blob/master/TemplateToGradeSCSoft.pdf

You might also find this spreadsheet helpful:

https://gitlab.cas.mcmaster.ca/SEforSC/mmsc/-/blob/master/DomainX/ProcessForSoftwareReview/SoftwareGrading.xlsx

The DomainX repo has several examples:

https://github.com/adamlazz/DomainX

Task-based inspection of measurement template

As discussed in our meeting on May 14, 2020, we want to make decisions on the measurement template and then start completing our measures. Your latest edits (#24) are a great starting point. We want to get everyone on the project to review this document, but the reviewers should have some specific questions in mind while reviewing, to get the most impact from the review.

To start with, create a spreadsheet version of the template. You should keep the classifications (Maintainability, Reusability etc) as row separators. For columns, I suggest the following:

  • description of the metric (in a cell large enough for the reviewers to read without scrolling). You might want to set the cells to wrap text, so the column width doesn't have to be too large.
  • possible measurement values ({yes, no, unclear, etc})

We then want columns for the reviewers to fill in. This is where the reviewers focus on each individual metric by answering specific questions. I suggest the following, but feel free to add others:

  • Is this metric unambiguous? (yes, no*) (* if ambiguous, the reviewer should add text to say how)
  • Is this metric actually likely to be measurable? (yes, no*) (* if no, the reviewer should say why it is unlikely to be measurable)
  • Are there any missing possible measurement values? (yes*, no) (* if yes, the reviewer should say what is missing)

At the bottom of each section, you should leave space for the reviewer to add comments. The comments can cover any metrics they feel are missing, or any other thoughts they have.
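If it helps to bootstrap the file, here is a minimal sketch of the skeleton in Python, writing a CSV with the columns suggested above. The metrics listed are placeholders (the real ones come from the template), and wrap text and column widths would still be set in the spreadsheet tool afterwards.

```python
# Sketch only: generate a skeleton review sheet as CSV with the columns
# suggested above. The metrics below are placeholders, not the real template.
import csv

# Placeholder structure: {classification: [(metric description, possible values)]}
template = {
    "Maintainability": [
        ("Is there evidence that maintainability was considered in the design?",
         "{yes*, no}"),
    ],
    "Reusability": [
        ("How many code files are there?", "natural number"),
    ],
}

reviewer_columns = [
    "Is this metric unambiguous? (yes, no*)",
    "Is this metric actually likely to be measurable? (yes, no*)",
    "Are there any missing possible measurement values? (yes*, no)",
]

with open("review_template.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Metric description", "Possible measurement values"] + reviewer_columns)
    for classification, metrics in template.items():
        writer.writerow([classification])  # classification acts as a row separator
        for description, values in metrics:
            writer.writerow([description, values] + [""] * len(reviewer_columns))
        writer.writerow(["Reviewer comments for this section:"])
```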

Normally we do our reviews through issues, but I think we should do these reviews over e-mail to avoid unintentional bias. I know when I can see responses from other reviewers, it colours my own responses. Once the spreadsheet is created, you can assign me the issue of giving it a quick once over. You can then e-mail it to @oluowoj, @peter-michalski, @JacquesCarette and @smiths and ask the reviewers to send the results to you. Give everyone a deadline, so that it doesn't drag out too long. You can then collect all of the information and use it to focus the next revision of the template. My guess is that we'll drop the measures where there is a strong feeling that they aren't easy to measure. For any feedback that is contradictory, we can set up a focused meeting to discuss them.

Where to put these?

Sensitivity Analysis of Ranking Results

@peter-michalski, please re-run the AHP ranking for the LBM solvers, but with each of the summary "grades" (out of 10) for each of the projects for each of the qualities modified by a random number between -1 and 1. I don't think the AHP script cares whether the numbers are integers or floats, so you could have the random number be a float. Any code or spreadsheet that you make should be parameterized, so that we can modify the range. For instance, we might want to try the range from -2 to 2.

For now, it is fine if the experiment is done manually. We don't have to worry about programming it. If we like the results though, we'll want to automate as much as possible.
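A minimal sketch of how the perturbation could be applied is below; the grades dictionary is a made-up placeholder, and the clamp to [0, 10] is an assumption that we can drop if out-of-range grades are acceptable.

```python
# Sketch only: add uniform noise to the summary grades before re-running AHP.
# The noise range is a parameter so we can try [-1, 1], [-2, 2], etc.
import random

def perturb_grades(grades, low=-1.0, high=1.0, seed=None):
    """Return a copy of the grades with uniform noise added, clamped to [0, 10]."""
    rng = random.Random(seed)
    noisy = {}
    for project, qualities in grades.items():
        noisy[project] = {
            quality: min(10.0, max(0.0, score + rng.uniform(low, high)))
            for quality, score in qualities.items()
        }
    return noisy

# Placeholder input
grades = {"SolverA": {"Installability": 7, "Usability": 5},
          "SolverB": {"Installability": 6, "Usability": 8}}
print(perturb_grades(grades, low=-1, high=1, seed=42))
```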

To start with, we want to look at the final sorted list with and without noise. Hopefully the sorted order doesn't change by too much, especially in the "big picture" sense. That is, if the top 5 programs change relative position, but stay in the top 5, that would be fine.

Inconsistency between grading templates

There is inconsistency between the pdf and the spreadsheet of grading templates.

Examples:

Is there something in place to automate the installation?
Is there something in place to automate the installation (makefile, script, installer, etc.)?

Does the software handle garbage input reasonably?
Does the software handle garbage (as opposed to bad or malicious) input reasonably? (a reasonable response can include an appropriate error message)

  • @smiths Do we need to combine them? If yes, which one is better?

  • @peter-michalski When you were doing the measurements, which version worked better?

Record thoughts on improving installability

@peter-michalski, while your thoughts are still fresh, you should write up your advice on improving installability. We will incorporate these thoughts into our eventual document on the state of the practice for LBM solvers. We'll likely use the same ideas in our other state of the practice documents as well.

You can put your thoughts in your personal folder. We'll move them when we start working on a document that pulls everything together.

Update Section 3 (Overview of Steps) in our Methodology Document

So that our methodology is reusable, we need to document the steps that we are taking, from domain recognition to developer interviews. The text in Section 3 (Overview of Steps in Assessing Quality of the Domain Software) of Methodology.pdf in the StateOfPractice folder is rough and incomplete. Please update and then assign me to review.

Empirical Measures Template

As mentioned in #33, I would like an additional document related to the empirical measures. I would like a document like the measurement template we already have:

https://github.com/smiths/AIMSS/blob/master/StateOfPractice/Peter-Notes/RevisedGradingTemplate.tex

The empirical measures template would list the measures we want, followed by their type. In many cases the type of the measure will be a natural number. There are already some empirical measures in the measurement template, like counting the number of open and closed issues.
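As a sketch of what I have in mind for the easy-to-collect measures, counting open and closed issues could look something like this; the repository name is a placeholder, the query syntax may need adjusting, and unauthenticated requests to the GitHub API are rate-limited.

```python
# Sketch only: count open and closed issues for a project via the GitHub
# search API. The repository name is a placeholder.
import json
import urllib.request

def count_issues(repo, state):
    url = ("https://api.github.com/search/issues"
           f"?q=repo:{repo}+type:issue+state:{state}&per_page=1")
    with urllib.request.urlopen(url) as response:
        return json.load(response)["total_count"]

repo = "owner/lbm-solver"  # placeholder
print("open issues:", count_issues(repo, "open"))
print("closed issues:", count_issues(repo, "closed"))
```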

You have already started lists like this, but I'd like all of the information in one place. I also want the list to just be things that we can measure with the tools you have identified. We could always do a deeper dive into the data, but our goal is to get a sense of the maturity of each project based on relatively easy to find empirical measures.

We might end up combining the empirical measures template with the existing measures template, but for now we'll keep the discussions separate.

Check if anything interesting was previously measured under "evidence that maintainability was considered"

Related to revising the measurement template #24, please have a look at whether anything interesting was entered in the previous studies for "Is there evidence that maintainability was considered in the design?" (In the latest version of the template this was row 82.) If the previous batch of measurements didn't find anything interesting, we can safely drop this measure. If they did find something interesting, then we can add this to our revised template.

Revise measurement template

Revise the previous template following the discussions that we have had during our on-line meetings. You should incorporate the ideas from @Ao99 as well. Don't worry about making the list short now. We will do a pass later to reduce the scope.

Comment on Second Pass of Measurement Template

Hi @JacquesCarette,

I have uploaded a new version of the measurement template (4aa36b8) which incorporates consensus from today's meeting.

The upload has some orange highlights that still need to be addressed and we would like your thoughts:

  1. Row 67: We are considering if we should drop the Surface Performance section from the template and instead ask the domain expert to give us their impression of the software performance. The template currently has only 3 questions under Surface Performance.

  2. Row 74: We would like your thoughts on the current state of the Surface Usability section of the template, and on asking the domain experts for their impression.

  3. Row 83: In the Maintenance section there is a question from the old template, "Is there evidence that maintainability was considered in the design?" ({yes*, no}) - We discussed removing this question but are leaning towards keeping it as a "catch all". The old templates did not reveal any specific / interesting examples; they all simply have a yes or no. What are your thoughts on this question?

  4. Row 87: In the Reusability section we replaced "Is the system modularized?" with "How many code files are there?". What are your thoughts?

  5. Row 96: We are considering if we should drop the Portability section from the template as it is not providing much useful information in its current form. Thoughts?

  6. Row 112: We are considering if we should drop the Interoperability section from the template as it is not providing much useful information in its current form. Thoughts?

  7. Row 123: We are considering if the domain expert could do a reproducibility experiment. The Reproducibility section is also not providing much useful information in its current form. Thoughts?

Draft a Survey for the Short List Projects

What questions should we ask the project owners to learn about their development process? I suggest that you provide both a questionnaire and a list of oral interview questions. You should add the questionnaire and interview questions as an appendix to our Methodology document.

Review task selection criteria in Methodology document

@smiths , as discussed yesterday, please review the task selection criteria in section 7.6.
Some of them might not need to be stated explicitly, but based on the questionnaires that I looked through and Nielsen's Heuristics list, I listed all of those points.

The latest update is in commit 67f8691.

Trial Empirical Measurements: LB Solvers

Measure 3 to 5 LB solvers using the empirical measures available in GitStats. You should use the same 3 to 5 solvers as for #15. Think about the story that the data tells. Which of the metrics are meaningful? Which ones should we continue to collect for other projects?
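If it helps, a rough sketch of automating the runs is below. It assumes GitStats's usual `gitstats <repo> <output-dir>` command-line invocation and placeholder paths for the cloned solver repositories.

```python
# Sketch only: run GitStats over a few locally cloned solver repositories.
# Assumes the usual `gitstats <repo> <output-dir>` invocation; paths are placeholders.
import subprocess
from pathlib import Path

solvers = ["solvers/solverA", "solvers/solverB", "solvers/solverC"]  # placeholders

for repo in solvers:
    out_dir = Path("gitstats-reports") / Path(repo).name
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(["gitstats", repo, str(out_dir)], check=True)
```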

Write up the Methodology for our Usability Experiments

Add to the Methodology document a write-up of our usability experiments. It should look something like these steps:

- record the research question
a. start with the short list of projects
b. identify tasks for the study subject to do, at least one task to modify the software
c. survey the study subject
d. observe them doing the tasks, take notes, and time the duration
e. survey the study subject again
f. pair-wise comparisons
- compare to the ranking according to adherence to best practices

Repeated entry in ResearchProposal.bib

@smiths

Someone put repeated entries in ResearchProposal.bib, so QDefOfQualities.tex doesn't compile now. Could you please take a look and decide which entries you want to keep?

The errors show as follows:

Database file #1: ../../CommonFiles/ResearchProposal.bib
Repeated entry---line 1467 of file ../../CommonFiles/ResearchProposal.bib
: @inproceedings{fernandez2005model
: ,
I'm skipping whatever remains of this entry
Repeated entry---line 1476 of file ../../CommonFiles/ResearchProposal.bib
: @Article{mooney1990strategies
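In case it helps to track them down, here is a quick sketch for listing duplicate entry keys in the .bib file (the path is the one from the error message):

```python
# Sketch only: list duplicate BibTeX keys so we know which entries to reconcile.
import re
from collections import Counter

path = "../../CommonFiles/ResearchProposal.bib"
with open(path, encoding="utf-8") as f:
    keys = re.findall(r"@\w+\s*\{\s*([^,\s]+)\s*,", f.read())

for key, count in Counter(keys).items():
    if count > 1:
        print(f"{key}: {count} entries")
```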

Update AHP scripts to work with our updated SoP Methodology

The scripts (written in 2014) for the AHP algorithm have been copied to the StateOfPractice folder in a folder labelled AHP. We need to modify these to work with our current inputs. The former MEng students took care of all of the details on this, so neither @JacquesCarette nor I know how they did it. Hopefully they left the code in reasonable shape. :-)
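For reference, the core step those scripts implement is deriving priority weights from a reciprocal pairwise comparison matrix. A minimal sketch (not the 2014 code, just an illustration using the geometric-mean approximation):

```python
# Sketch only: derive AHP priority weights from a pairwise comparison matrix
# using the geometric-mean (row) approximation. Not the 2014 scripts.
import numpy as np

def ahp_weights(matrix):
    """Return normalized priority weights for a reciprocal pairwise comparison matrix."""
    a = np.asarray(matrix, dtype=float)
    geo_means = a.prod(axis=1) ** (1.0 / a.shape[0])
    return geo_means / geo_means.sum()

# Placeholder 3x3 comparison (e.g. installability vs usability vs maintainability)
pairwise = [[1,   3,   5],
            [1/3, 1,   2],
            [1/5, 1/2, 1]]
print(ahp_weights(pairwise))
```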
