smiths / AIMSS
Assessing the Impact of MDE (Model Driven Engineering) and code generation on the Sustainability of SCS (Scientific Computing Software)
@Ao99, please review the usability experiment sent to your email and provide your feedback. Check this off as completed when done.
Hi Peter,
Please review the usability experiment and provide your feedback.
Please add definitions to the document that summarizes our definitions of qualities:
https://github.com/smiths/AIMSS/tree/master/StateOfPractice/QDefOfQualities
You are assigned the following qualities:
You can check the boxes as you finish each quality, and close the issue once all of the qualities have been defined.
As requested via Aug 19th email
Hi @smiths and @JacquesCarette ,
One of the prospective solvers requires that I apply for an academic license through a form in order to access the code. The form requires the following:
"By submitting the Registration Form to STFC, you are confirming that:
Please read the terms of the Licence Agreement carefully. If you do not agree to them, you should not submit the Registration Form.
A contract between the Licensee and STFC will come into existence when you click on the SUBMIT button at the end of the Registration Form."
The above information is from https://www.scd.stfc.ac.uk/Pages/DL_MESO-register.aspx
I am not in a position to agree to points 1 and 2. How do you suggest I proceed?
I have gathered information from quite a few solvers already. If it is easier I could add this one to the discard pile.
Please review the questions to domain experts
The document is here
It might be a good idea to change numSolvers to something more common, such as numSoftware or numPackages.
Following the template at:
https://github.com/adamlazz/DomainX
Measure 3 to 5 LB Solvers. While doing this, critically evaluate the previous template. Think about what we can add and what we can take out.
This file explains the entries in the template:
https://github.com/adamlazz/DomainX/blob/master/TemplateToGradeSCSoft.pdf
You might also find this spreadsheet helpful:
The DomainX repo has several examples:
As discussed in our meeting on May 14, 2020, we want to make decisions on the measurement template and then start completing our measures. Your latest edits (#24) are a great starting point. We want to get everyone on the project to review this document, but the reviewers should have some specific questions in mind while reviewing, to get the most impact from the review.
To start with, create a spreadsheet version of the template. You should keep the classifications (Maintainability, Reusability etc) as row separators. For columns, I suggest the following:
We then want columns for the reviewers to fill in. This is where the reviewers focus on each individual metric by answering specific questions. I suggest the following, but feel free to add others:
At the bottom of each section, you should leave space for the reviewer to add comments. The comments can cover any metrics they feel are missing, or any other thoughts they have.
Normally we do our reviews through issues, but I think we should do these reviews over e-mail to avoid unintentional bias. I know when I can see responses from other reviewers, it colours my own responses. Once the spreadsheet is created, you can assign me the issue of giving it a quick once over. You can then e-mail it to @oluowoj, @peter-michalski, @JacquesCarette and @smiths and ask the reviewers to send the results to you. Give everyone a deadline, so that it doesn't drag out too long. You can then collect all of the information and use it to focus the next revision of the template. My guess is that we'll drop the measures where there is a strong feeling that they aren't easy to measure. For any feedback that is contradictory, we can set up a focused meeting to discuss them.
Please review "Overview of Steps in Assessing Quality of the Domain Software" in the Methodology document and provide feedback.
https://github.com/smiths/AIMSS/blob/master/StateOfPractice/Methodology/Methodology.pdf
I was googling around and found some potentially interesting papers. Not sure the best place to put them, so here we go:
@peter-michalski, please re-run the AHP ranking for the LBM solvers, but with each of the summary "grades" (out of 10) for each of the projects for each of the qualities modified by a random number between -1 and 1. I don't think the AHP script cares whether the numbers are integers or floats, so you could have the random number be a float. Any code or spreadsheet that you make should be parameterized, so that we can modify the range. For instance, we might want to try the range from -2 to 2.
For now, it is fine if the experiment is done manually. We don't have to worry about programming it. If we like the results though, we'll want to automate as much as possible.
To start with, we want to look at the final sorted list with and without noise. Hopefully the sorted order doesn't change by too much, especially in the "big picture" sense. That is, if the top 5 programs change relative position, but stay in the top 5, that would be fine.
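If we do end up scripting the experiment, a minimal sketch might look like the following (hypothetical; the dict-of-grades data layout is an assumption, and the default range is the -1 to 1 from above):

# noisy_grades.py: perturb AHP summary grades with uniform random noise.
# Hypothetical sketch; assumes grades are a dict of project -> grade out of 10.
import random

def add_noise(grades, low=-1.0, high=1.0, seed=None):
    """Return a copy of grades with uniform noise in [low, high] added.

    The range is parameterized so we can also try, e.g., low=-2, high=2.
    """
    rng = random.Random(seed)
    return {project: grade + rng.uniform(low, high)
            for project, grade in grades.items()}

# Example: compare the sorted order with and without noise.
grades = {"solverA": 8.2, "solverB": 7.9, "solverC": 5.1}
noisy = add_noise(grades, low=-1.0, high=1.0, seed=42)
print(sorted(grades, key=grades.get, reverse=True))
print(sorted(noisy, key=noisy.get, reverse=True))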
Hi @oluowoj , please mark this issue as complete when you have commented on and emailed back the measurement template that was emailed to you on May 18th.
I've drafted our research objective and research questions at:
In commit 9a0e408.
Please review and let me know your feedback. You can check your name off of this list once you have done your review:
Once we have feedback from everyone, we can close this issue.
So far, 6 tools have been tested and this is the only one working well.
Please take a look at the results. Feedback would be appreciated.
Tool: git_stats
Target: 3D Slicer
Results: http://git-stats-slicer.ao9.io/ (not available since 2020-04-22). The results are output as webpages, so I hosted them for you to check. Data can be downloaded as spreadsheets.
The record of tested tools is in the Methodology document.
There is an inconsistency between the PDF and spreadsheet versions of the grading template.
Examples:
Is there something in place to automate the installation?
Is there something in place to automate the installation (makefile, script, installer, etc.)?
Does the software handle garbage input reasonably?
Does the software handle garbage (as opposed to bad or malicious) input reasonably? (a reasonable response can include an appropriate error message)
@smiths Do we need to combine them? If so, which one is better?
@peter-michalski When you were doing the measurements, which version worked better?
@peter-michalski, while your thoughts are still fresh, you should write up your advice on improving installability. We will incorporate these thoughts into our eventual document on the state of the practice for LBM solvers. We'll likely use the same ideas in our other state of the practice documents as well.
You can put your thoughts in your personal folder. We'll move them when we start working on a document that pulls everything together.
Please review "Measure Using Shallow Measurement Template" in the Methodology document and provide feedback.
https://github.com/smiths/AIMSS/blob/master/StateOfPractice/Methodology/Methodology.pdf
So that our methodology is reusable, we need to document the steps that we are taking, from domain recognition to developer interviews. The text in Section 3 (Overview of Steps in Assessing Quality of the Domain Software) of Methodology.pdf in the StateOfPractice folder is rough and incomplete. Please update and then assign me to review.
Please review "Identify Candidate Software" in the Methodology document and provide feedback.
https://github.com/smiths/AIMSS/blob/master/StateOfPractice/Methodology/Methodology.pdf
As mentioned in #33, I would like an additional document related to the empirical measures. I would like a document like the measurement template we already have:
https://github.com/smiths/AIMSS/blob/master/StateOfPractice/Peter-Notes/RevisedGradingTemplate.tex
The empirical measures template would list the measures we want, followed by their type. In many cases the type of the measure will be a natural number. There are already some empirical measures in the measurement template, like counting the number of open and closed issues.
You have already started lists like this, but I'd like all of the information in one place. I also want the list to just be things that we can measure with the tools you have identified. We could always do a deeper dive into the data, but our goal is to get a sense of the maturity of each project based on relatively easy to find empirical measures.
We might end up combining the empirical measures template with the existing measures template, but for now we'll keep the discussions separate.
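As an illustration of the kind of measure we mean, here is a hedged sketch that counts open and closed issues through the GitHub search API (the repository name is only an example, and unauthenticated requests are rate-limited):

# count_issues.py: count open and closed issues for a GitHub repository.
# Illustrative sketch only; "smiths/AIMSS" is just an example repository.
import requests

def issue_count(repo, state):
    """Return the number of issues in the given state ("open" or "closed")."""
    url = "https://api.github.com/search/issues"
    query = f"repo:{repo} type:issue state:{state}"
    resp = requests.get(url, params={"q": query})
    resp.raise_for_status()
    return resp.json()["total_count"]

print("open:", issue_count("smiths/AIMSS", "open"))
print("closed:", issue_count("smiths/AIMSS", "closed"))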
Please review Section 5 - Empirical Measures of the document Methodology as we discussed in our last meeting.
Related to revising the measurement template #24, please have a look at whether anything interesting was entered in the previous studies for "Is there evidence that maintainability was considered in the design?" (In the latest version of the template this was row 82.) If the previous batch of measurements didn't find anything interesting, we can safely drop this measure. If they did find something interesting, then we can add this to our revised template.
Revise the previous template following the discussions that we have had during our on-line meetings. You should incorporate the ideas from @Ao99 as well. Don't worry about making the list short now. We will do a pass later to reduce the scope.
Create a demo for empirical measures with the tools.
Hi @smiths , please mark this issue as complete when you have commented on and emailed back the measurement template that was emailed to you on May 18th.
Add Reliability from Ghezzi's book
5ddc85f
Please add definitions to the document that summarizes our definitions of qualities:
https://github.com/smiths/AIMSS/tree/master/StateOfPractice/QDefOfQualities
@peter-michalski, you are assigned the following qualities:
You can check the boxes as you finish each quality, and close the issue once all of the qualities have been defined.
Please add definitions to the document that summarizes our definitions of qualities:
https://github.com/smiths/AIMSS/tree/master/StateOfPractice/QDefOfQualities
You are assigned the following qualities:
You can check the boxes as you finish each quality, and close the issue once all of the qualities have been defined.
Hi @JacquesCarette , please mark this issue as complete when you have commented on and emailed back the measurement template that was emailed to you on May 18th.
Hi @Ao99 , please mark this issue as complete when you have commented on and emailed back the measurement template that was emailed to you on May 18th.
Please add definitions to the document that summarizes our definitions of qualities:
https://github.com/smiths/AIMSS/tree/master/StateOfPractice/QDefOfQualities
You are assigned the following qualities:
You can check the boxes as you finish each quality, and close the issue once all of the qualities have been defined.
Repeat #17, but for your domain.
Hi @JacquesCarette,
I have uploaded a new version of the measurement template (4aa36b8) which incorporates consensus from today's meeting.
The upload has some orange highlights that still need to be addressed and we would like your thoughts:
Row 67: We are considering if we should drop the Surface Performance section from the template and instead ask the domain expert to give us their impression of the software performance. The template currently has only 3 questions under Surface Performance.
Row 74: We would like your thoughts on the current state of the Surface Usability section of the template, and on asking the domain experts for their impression.
Row 83: In the Maintenance section there is a question from the old template, "Is there evidence that maintainability was considered in the design?" ({yes*, no}) - We discussed removing this question but are leaning towards keeping it as a "catch all". The old templates did not reveal any specific / interesting examples, they all simply have a yes or no. What are your thoughts on this question?
Row 87: In the Reusability section we replaced "Is the system modularized?" with "How many code files are there?". What are your thoughts?
Row 96: We are considering if we should drop the Portability section from the template as it is not providing much useful information in its current form. Thoughts?
Row 112: We are considering if we should drop the Interoperability section from the template as it is not providing much useful information in its current form. Thoughts?
Row 123: We are considering whether the domain expert could do a reproducibility experiment. The Reproducibility section also is not providing much useful information in its current form. Thoughts?
What questions should we ask the project owners to learn about their development process? I suggest that you provide both a questionnaire and a list of oral interview questions. You should add the questionnaire and interview questions as an appendix to our Methodology document.
While working on #34, have a look over the following paper (found by @JacquesCarette):
https://arxiv.org/pdf/2005.13474.pdf
There is a table in the paper that suggests many code related metrics. Some of these might make sense for our work.
Using the resources listed in #15, measure 3-5 medical image analysis programs.
Please review "How to Initially Filter the Software List" in the Methodology document and provide feedback.
https://github.com/smiths/AIMSS/blob/master/StateOfPractice/Methodology/Methodology.pdf
Measure 3 to 5 LB solvers using the empirical measures available in GitStats. You should use the same 3 to 5 solvers as for #15. Think about the story that the data tells. Which of the metrics are meaningful? Which ones should we continue to collect for other projects?
See if Doug or Alan have something to add.
Add to the Methodology document a write-up of our usability experiments. It should look something like these steps:
- record research question
a. start with short list of projects
b. identify tasks for the study subject to do, at least one task to modify the software
c. survey the study subject
d. observe them doing the tasks, take notes, time duration
e. survey the study subject
f. pair-wise comparisons
- compare to ranking according to adherence to best practices
Someone put repeated entries in ResearchProposal.bib, so QDefOfQualities.tex doesn't compile now. Could you please take a look at which entries you want to keep?
The errors show as follows:
Database file #1: ../../CommonFiles/ResearchProposal.bib
Repeated entry---line 1467 of file ../../CommonFiles/ResearchProposal.bib
: @inproceedings{fernandez2005model
: ,
I'm skipping whatever remains of this entry
Repeated entry---line 1476 of file ../../CommonFiles/ResearchProposal.bib
: @Article{mooney1990strategies
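One possible way to list the duplicated keys before deciding which entries to keep (a rough sketch; it assumes each entry opens at the start of a line as @type{key,):

# find_dup_bib_keys.py: report BibTeX keys that appear more than once.
import re
from collections import Counter

with open("../../CommonFiles/ResearchProposal.bib") as f:
    keys = re.findall(r"^@\w+\{([^,\s]+)", f.read(), flags=re.MULTILINE)

for key, count in Counter(keys).items():
    if count > 1:
        print(f"{key}: {count} occurrences")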
Please add definitions to the document that summarizes our definitions of qualities:
https://github.com/smiths/AIMSS/tree/master/StateOfPractice/QDefOfQualities
You are assigned the following qualities:
You can check the boxes as you finish each quality, and close the issue once all of the qualities have been defined.
The scripts (written in 2014) for the AHP algorithm have been copied to the StateOfPractice folder in a folder labelled AHP. We need to modify these to work with our current inputs. The former MEng students took care of all of the details on this, so neither @JacquesCarette nor I know how they did it. Hopefully they left the code in reasonable shape. :-)
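For reference, the core AHP priority computation is small. A minimal sketch follows (assuming a reciprocal pairwise comparison matrix as input; this is not necessarily how the 2014 scripts implement it):

# ahp_weights.py: derive priority weights from a pairwise comparison matrix.
# Minimal sketch; not taken from the 2014 scripts.
import numpy as np

def ahp_weights(A):
    """Return normalized priority weights: the principal eigenvector of A."""
    eigvals, eigvecs = np.linalg.eig(np.asarray(A, dtype=float))
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    return principal / principal.sum()

# Example: 3 alternatives compared pairwise on one quality.
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
print(ahp_weights(A))  # weights sum to 1, largest for the first alternative

The weights from each quality's comparison matrix can then be combined into the overall ranking that we perturb in the noise experiment above.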
Please review "How to Identify the Domain" in the Methodology document and provide feedback.
https://github.com/smiths/AIMSS/blob/master/StateOfPractice/Methodology/Methodology.pdf