diku-edu / remarks
A DSL for marking student work
License: BSD 3-Clause "New" or "Revised" License
remarks check and remarks show will likely show the same pretty-printing in the end (currently, remarks show doesn't perform a validate, but it probably should).
Perhaps let the CLI take a "depth" parameter, showing just the overall sum, sum per top-level judgement, or sums all the way down.
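Such a depth-limited rendering could be a small recursive function. A minimal sketch over a hypothetical judgement type (the actual remarks AST and pretty-printer differ):

```haskell
-- Hypothetical, simplified judgement tree; the real remarks AST differs.
data Judgement = Judgement
  { title :: String
  , pts   :: Int
  , maxP  :: Int
  , subs  :: [Judgement]
  }

-- Show sums down to the given depth: 0 = just the overall sum,
-- 1 = also the sum per top-level judgement, and so on.
showDepth :: Int -> Judgement -> [String]
showDepth d = go 1
  where
    go lvl (Judgement t p m ss) =
      (replicate lvl '#' ++ " " ++ t ++ ": " ++ show p ++ "/" ++ show m)
      : if lvl > d then [] else concatMap (go (lvl + 1)) ss
```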
CSV export would be useful in general
This is useful when templates are first developed. For instance, you might come up with some initial template, and then try to remark on a submission using the template. As you do so, you will inevitably end up extending the template. It would be nice to not have to manually modify the template, and instead just generate a new template from the remarks you end up with.
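Regenerating a template could amount to a single traversal of the judgement tree. A minimal sketch, assuming a simplified Judgement type (the actual remarks AST and field names differ):

```haskell
-- Hypothetical, simplified judgement tree; the real remarks AST differs.
data Judgement = Judgement
  { title    :: String
  , points   :: Double
  , maxPts   :: Double
  , comments :: [String]
  , subs     :: [Judgement]
  } deriving (Show, Eq)

-- Derive a fresh template from a filled-in remarks file: keep the
-- structure and maximum points, but zero the scores and drop the comments.
toTemplate :: Judgement -> Judgement
toTemplate j =
  j { points = 0, comments = [], subs = map toTemplate (subs j) }
```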
Currently it just prints a Haskell type.
Different students often make similar mistakes. To this end, I often use raw (as in, human brain) memory, and grep at best, to find a comment that I had previously given, and paste it in for another student. Clearly, both are imprecise. It would be nice to be able to query into remarks files to list just the remarks under a particular judgement and/or structural remark. This would make it easier to recall what sort of comments have been given under a particular point, and to find the comment one is thinking of.
I guess this calls for something like XPath or CSS selectors; just for remarks.
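As a rough sketch of what such a selector could look like over a hypothetical judgement tree (not the actual remarks AST): select all comments under a path of judgement titles, in the spirit of an XPath query.

```haskell
-- Hypothetical, simplified judgement tree; the real remarks AST differs.
data Judgement = Judgement
  { title    :: String
  , comments :: [String]
  , subs     :: [Judgement]
  } deriving (Show, Eq)

-- All comments in a subtree.
allComments :: Judgement -> [String]
allComments j = comments j ++ concatMap allComments (subs j)

-- Select the comments under a path of judgement titles,
-- e.g. select ["Assignment 1", "Task 1"].
select :: [String] -> Judgement -> [String]
select [t] j
  | title j == t = allComments j
select (t:ts) j
  | title j == t = concatMap (select ts) (subs j)
select _ _ = []
```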
This can even be extended to a small scripting language that can calculate grades and the like. We could even move the calculation of Total and MaxPoints to a predefined summation of the leaves.
The Parsec parser does allow providing a "Feedback judgement" like this:
# Assignment 1: 10/10
## Feedback:
* Good work!
## Task 1: 10/10
Then, we can extract the feedback using the CLI:
$ remarks feedback remarks.mrk
# Assignment 1
* Good work!
However, the README makes no mention of Feedback judgements, while still mentioning the feedback CLI command, which yields nothing if no Feedback judgements are present.
My immediate questions are:
Wrt. question 1, remarks currently does accept the following:
# Assignment 1: 0/10
## Feedback:
* Good work!
## Task 1: 10/10
# Assignment 2: 0/10
## Feedback:
* Bad work :-(
## Task 1: 0/10
However, it yields:
$ remarks feedback remarks.mrk
# Assignment 1
* Good work!# Assignment 2
* Bad work :-(
Which is not ideal ;-)
Wrt. question 2, the following yields a syntax error:
# Assignment 1: 0/10
## Feedback:
* Good work!
In particular,
$ remarks feedback remarks.mrk
"remarks.mrk" (line 4, column 1):
unexpected end of input
expecting white space or "##"
Sticking with Haskell makes remarks hard for second-year students to contribute to. A good start would be to write down the grammar and port the parser to Parsec.
The remarks parser is very particular about where it does and does not expect whitespace. The TAs who write these corrections generally are not. I think the parser could be made more lenient. Two cases I can think of involve the whitespace around point annotations like 2/2.

Some essential properties: student name, id, and grade.
It would be nice to extract just these properties into a CSV format.
Perhaps the format should be similar to org-mode:
# Total: 45/100
:Name: Donald E. Knuth
:Grade: 00
## Theory: 40/50
+ Excellent.
- But you need a CS degree to understand what's going on.
## Practice: 5/50
+ There is an implementation.
- Only proven correct, but not tested.
I think properties should be global.
The CSV export command syntax could then look something like this:
$ remarks export --format "Name;Grade;Total;Theory;Practice" <path>
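Rendering one row of such an export could look as follows, assuming the properties have already been collected into key-value pairs (exportRow is a hypothetical helper, not part of remarks):

```haskell
import Data.List (intercalate)
import Data.Maybe (fromMaybe)

-- Render one CSV row from collected properties, in the field order
-- given by the (proposed) --format flag. Missing properties become
-- empty fields.
exportRow :: [(String, String)] -> [String] -> String
exportRow props fields =
  intercalate ";" [ fromMaybe "" (lookup f props) | f <- fields ]
```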
This is a prerequisite for #3.
Write one in the README and add a help argument to the CLI.
Sometimes, you want to extend the template mid-way during grading. For instance, you discover an important structural comment missing from the original template. Some tool support for this would be nice. For instance,
Teaching assistants shouldn't be asked to sum things up correctly. It should be easy to sum up points starting from the bottom-most judgements.
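Bottom-up summation is a one-liner over a judgement tree. A minimal sketch, assuming a simplified type (the actual remarks AST differs): only the bottom-most judgements carry TA-entered points; everything above is computed.

```haskell
-- Hypothetical, simplified judgement tree; the real remarks AST differs.
data Judgement = Judgement
  { points :: Double
  , subs   :: [Judgement]
  }

-- Compute a judgement's total from its bottom-most sub-judgements,
-- so TAs only ever fill in the leaves.
total :: Judgement -> Double
total (Judgement p []) = p
total j = sum (map total (subs j))
```

For instance, a Theory judgement with sub-judgements scoring 5, 10, and 10 points sums to 25 regardless of what is written at the Theory line itself.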
This may require keeping track of line breaks when parsing.
As you mark student work, you might come to realise that your initial remarks template is missing a judgement, or has a wrong or ill-phrased one. A diff and patch pair of commands would allow computing the difference between two remarks files (e.g., a filled-in remarks file and a template), and patching in changes, as needed.
This relates to #13, but I am closing that issue now due to its age.
Histograms over:
That means the logic would be written in JavaScript, but there isn't much logic to remarks, especially since little parsing logic is needed given the availability of a DOM.
The benefit would be zero entry cost for new TAs, and it would be trivial to send out HTML versions to students or external examiners. The downside is that there won't be a file structure to separate out concerns and avoid git merge conflicts if several TAs grade the same submission.
Let's talk if you are interested, else I will hack something up soon enough.
@oleks comments?
Getting a list of which corrections are missing could give a good overview. I could have used it for the Assignments
This is a prerequisite for #2.
I suggest we allow this:
# Theory: /50
## Question 1: 5/10
## Question 2: 10/20
## Question 3: 10/20
The validator should allow for points to be missing in all but the bottom-most judgements, and report that points are missing if some bottom-most judgements lack them.
In general, the lack of points can be represented in the AST with 1/0, i.e., Infinity. This is because 1/0 == 1/0 ~> True.
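Concretely, with Double points, the 1/0 marker behaves as follows; the missingLeaves validator below is a sketch over a hypothetical judgement type, not the actual remarks implementation:

```haskell
-- Missing points represented as 1/0, i.e. IEEE-754 Infinity.
missing :: Double
missing = 1 / 0

-- Infinity compares equal to itself (unlike NaN), so missing points
-- are detectable with ordinary (==).
isMissing :: Double -> Bool
isMissing p = p == missing

-- Hypothetical, simplified judgement tree; the real remarks AST differs.
data Judgement = Judgement
  { title  :: String
  , points :: Double
  , subs   :: [Judgement]
  }

-- Report the bottom-most judgements that still lack points;
-- points may be missing anywhere above the leaves.
missingLeaves :: Judgement -> [String]
missingLeaves (Judgement t p [])
  | isMissing p = [t]
  | otherwise   = []
missingLeaves j = concatMap missingLeaves (subs j)
```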
I had this feature in my shell-scripted format. The idea was that the syntax was slightly different for "bonus judgements":
# Theory: /50
## Question 1: 5/10
## Question 2: 5/20
## Question 3: 5/20
## Bonus: +5
The template would then be
## Bonus: +0
Bonus points should be accounted separately to keep the overflow counts as they are, and to make bonus points selectively applicable (i.e., only if need be).
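One way to account for bonus separately, sketched over a hypothetical score type (not the actual remarks representation): the bonus sits next to, not inside, the earned points, so the ordinary totals stay as they are and the bonus can be applied only if need be.

```haskell
-- Hypothetical score: bonus is tracked alongside the earned points.
data Score = Score
  { earned :: Double
  , maxPts :: Double
  , bonus  :: Double
  } deriving (Show, Eq)

-- The ordinary total, unaffected by any bonus.
withoutBonus :: Score -> Double
withoutBonus = earned

-- The total with the bonus applied, e.g. when a student is borderline.
withBonus :: Score -> Double
withBonus s = earned s + bonus s
```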
Currently, judgements look like this:
# Theory: 0/50
## Question 1: 0/10
## Question 2: 0/20
## Question 3: 0/20
One suggestion is to change this to
* Theory: 0/50
** Question 1: 0/10
** Question 2: 0/20
** Question 3: 0/20
This would make remarks slightly more compatible with org-mode, making it slightly more convenient for Emacs users. However, .mrk files should be small, so folding/unfolding isn't expected to be in great demand.
The reason headers are written with # is to make files easy to grep. They would still be easy to grep if we used * instead of #, since neutral comments must always be indented at least once.