prairielearn / prairielearn
Online problem-driven learning system
Home Page: http://prairielearn.readthedocs.io/
License: Other
Add table-striped table-hover to homework panel tables once Bootstrap v3.1 is released: twbs/bootstrap#10492
We need to change:
var id = type.slice(0, type.length - 2) + item.seq;
to:
var id = type.slice(0, type.length - 2) + item.value.seq;
in server.js.
Should each question include a set of test data so we can run automatic tests against the full system, or at least part of the system?
Clicking "New attempt at this exam", then doing a question, then grading, then clicking "Exams" in the navbar, gives a list of exam attempts with the new exam at the bottom, not at the top where it should be.
Add a client page that shows all submissions for the current user.
Add API info for test and tInstance.
At the moment all code and HTML associated with a specific test is dynamically loaded, but perhaps this should just be hard-coded in the client, given that we don't really have all that many different types of tests?
Apparently the NetID can be passed through from Shibboleth in mixed case, so sometimes students end up with multiple UIDs (one lowercase, one uppercase, for example).
The Requester object in app.js should be removed. Instead, $.ajaxPrefilter should take care of the authentication headers, and we should use similar jQuery ajax methods to add error handlers.
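As a rough sketch of that approach (the header names and the authUID/authName variables here are assumptions, not existing PrairieLearn code):
$.ajaxPrefilter(function(options, originalOptions, jqXHR) {
    // attach whatever auth headers the API expects to every request
    jqXHR.setRequestHeader("X-Auth-UID", authUID);
    jqXHR.setRequestHeader("X-Auth-Name", authName);
});

$(document).ajaxError(function(event, jqXHR, settings, thrownError) {
    // one generic error handler for all requests (e.g., show an error div)
    console.log("Request failed:", settings.url, jqXHR.status, thrownError);
});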
Allow exam questions to be submitted for grading feedback multiple times.
Set up a hosted experience so that PrairieLearn can be used without having to set up dev and deployment instances for every class. This depends on prairielearn.engr.illinois.edu or prairielearn.illinois.edu or similar.
The server should always reload question modules when running in dev mode. This can be done by deleting the appropriate keys from require.cache just before doing require() on the question module.
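A minimal sketch of the dev-mode reload (questionFilePath and the devMode flag are hypothetical names):
function loadQuestionServer(questionFilePath, devMode) {
    if (devMode) {
        // drop only this module's cache entry so the next require()
        // re-reads the file from disk
        delete require.cache[require.resolve(questionFilePath)];
    }
    return require(questionFilePath);
}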
Question performance statistics need to be imported from external sources (e.g., randexam). For now we can just do this by hand.
There is currently code to differentiate the "effective user" from the "authenticated user", just like the Unix notion of UID/EUID. This allows instructors to act as if they were a particular student, which is an easy way to see user scores, etc., without adding a special instructor view of information that a student can already see about themselves. We need to fix:
Add more hooks where test code is called, such as for tInstance and qInstance creation. This should enable tests to do things like:
Currently PrairieDraw errors on missing images. Instead it should substitute a placeholder image, or just render nothing.
Add the concept of a "course" to the system, with a CID (course ID). Courses should contain lists of users, questions, and tests. On the client side there should be a "currently active course" concept, and the server API should be augmented to take a CID parameter to operations like GET on all questions or tests, so that we just get the appropriate tests and questions.
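One possible shape for the augmented API, assuming an in-memory questionDB and a cid field on each question (both assumptions, not the current code):
app.get("/questions", function(req, res) {
    var cid = req.query.cid; // currently active course, sent by the client
    var questions = _(questionDB).values().filter(function(q) {
        return q.cid === cid;
    });
    res.json(questions);
});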
Add an interface to the client to GET /export.csv and create a similar export.json endpoint. The main question here is how to handle authentication. The link needs to include auth information in the request headers, so it needs to be generated by JavaScript, but it then needs to trigger a download/file-save in the browser.
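A sketch of one way to do this, assuming jQuery for the request and a Blob-based file-save (the header names are placeholders):
$.ajax({
    url: "/export.csv",
    dataType: "text",
    headers: {"X-Auth-UID": authUID, "X-Auth-Name": authName},
    success: function(csvText) {
        // build a local file from the response and trigger a browser download
        var blob = new Blob([csvText], {type: "text/csv"});
        var link = document.createElement("a");
        link.href = URL.createObjectURL(blob);
        link.download = "export.csv";
        document.body.appendChild(link);
        link.click();
        document.body.removeChild(link);
    },
});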
At present we use too many different templating and data-binding engines: underscore, rivets, mustache (often all applied). Reduce this to just one (mustache?).
Add configuration to store questions/tests in external directories, to enable separate question/test private repositories.
At the moment we store dueDate per-test but availDate per-tInstance. These should always be stored in tests, but with optional overrides in tInstances. All code should try to get any date from the tInstance first and then fall back to the test. This would allow us to update the dates for all students at any time by changing the values in the test, but also permit per-student overrides when necessary.
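A sketch of the lookup, with a hypothetical helper name:
function getTestDate(field, tInstance, test) {
    // prefer a per-student override on the tInstance, fall back to the test
    if (tInstance && tInstance[field] != null) return tInstance[field];
    return test[field];
}

var availDate = getTestDate("availDate", tInstance, test);
var dueDate = getTestDate("dueDate", tInstance, test);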
Add user statistics incrementally on every submission or other API request and remove current 10-minute stats computation. This will be more scalable for large courses.
Also add more per-question statistics, such as median-time-to-complete.
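A sketch of what an incremental update might look like on each submission (the stats field names here are assumptions):
function updateUserStats(stats, submission) {
    stats.nSubmissions = (stats.nSubmissions || 0) + 1;
    if (submission.score != null) {
        var n = (stats.nGraded || 0) + 1;
        var prevMean = stats.meanScore || 0;
        stats.nGraded = n;
        // running mean: newMean = prevMean + (x - prevMean) / n
        stats.meanScore = prevMean + (submission.score - prevMean) / n;
    }
    return stats;
}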
Add a field for "related resources" in the info.json
for each question and display it somewhere on the question page (presumably in the sidebar).
Eventually this should augmented by recommendation-engine-generated resources and click-throughs should be tracked.
Add formatters to limit precision of exact answers.
Remove qScores/qInstances from QuestionDataModel.loadQuestion() at the $.ajax() call-site.
Don't pass params to server.gradeAnswer() for questions. Instead have a question class that implements this, or other things like automating the getData() pattern.
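One speculative shape for such a question class (every name here is hypothetical, not an existing API):
function Question(options) {
    this.options = options || {};
}

Question.prototype.getData = function(vid) {
    // default: no per-variant data; question code overrides this
    return {};
};

Question.prototype.gradeAnswer = function(vid, submittedAnswer) {
    throw new Error("gradeAnswer() must be implemented by the question");
};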
Make sure app.js can survive and accurately report:
Store current QIIDs in HW test and don't generate a new qInstance until the old one has been solved. One way to implement this would be to pass off more responsibilities to the test server code (including passing in an object that can make qInstances).
At the moment the server-side handlers do their own ad hoc input validation. Should we have some more systematic way of doing this?
Add a global handler for timeouts and errors on data load (JS, images, other files) and add a generic "retry" interface to the client. This should involve both automatic and manual retries, with generic user feedback (e.g., an error div at the top of the page).
At the moment the server runs out of open files after some amount of time. This can be mitigated by increasing the open-file-handle limits and periodically restarting the server, but we should see whether we can also force long-standing connections to be terminated more quickly.
/users by UID for non-superusers.
Add a separate div for questions to use as an "Answer Box" to display the true answer after grading is complete.
The names we have for various objects may not be very good. For example:
We should spawn helper node subprocesses and use them to run any instructor-written code (question and test server.js code). This would allow:
We should also do a better job of cleaning up data coming back from instructor-written code. For example, score should be forced to be a Number between 0 and 1.
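For example, a small sanitizer (hypothetical helper, not existing code) could look like:
function sanitizeScore(rawScore) {
    var score = Number(rawScore);
    if (!isFinite(score)) return 0; // NaN, Infinity, non-numeric all become 0
    return Math.max(0, Math.min(1, score));
}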
The PrairieRandom functions should all anti-correlate their return values, probably using stratified sampling. This will require VIDs to be generated incrementally, rather than completely randomly, for repeats of a given question.
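As a rough illustration of the idea (not the actual PrairieRandom API): instead of independent uniform draws, cycle through a shuffled pool so repeated draws for the same question anti-correlate.
function makeStratifiedChooser(options, rand) {
    var pool = [];
    return function() {
        if (pool.length === 0) {
            // refill and shuffle (Fisher-Yates) once the stratum is exhausted
            pool = options.slice();
            for (var i = pool.length - 1; i > 0; i--) {
                var j = Math.floor(rand() * (i + 1));
                var tmp = pool[i]; pool[i] = pool[j]; pool[j] = tmp;
            }
        }
        return pool.pop();
    };
}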
As an example:
var autoCreateTInstances = function(req, res, tInstances, autoCreateCallback) {
    var tiDB = _(tInstances).groupBy("tid");
    async.each(_(testDB).values(), function(test, callback) {
        ...
        writeTInstance(req, res, tInstance, function() {
            tiDB[tInstance.tid] = [tInstance];
            callback(null);
        });
        ...
    }, function(err) {
        if (err)
            return sendError(res, 500, "Error autoCreating tInstances", err);
        var tInstances = _.chain(tiDB).values().flatten(true).value();
        autoCreateCallback(tInstances);
    });
};
The problem occurs when two of the calls to writeTInstance() fail. Each failing writeTInstance() calls sendError(), and the second call to sendError() dies with:
Error: Can't set headers after they are sent.
We need to either be smarter inside sendError() and not try to send a second error for the same request, or stop doing error handling inside async loops entirely, and instead pass all errors back up to the top call-site (probably the app.get() handler) and only call sendError() once.
See http://stackoverflow.com/questions/7042340/node-js-error-cant-set-headers-after-they-are-sent
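The first option could be as simple as a guard in sendError() (a sketch, assuming an Express-style res; the exact response body format is an assumption):
var sendError = function(res, code, msg, err) {
    console.log("Error:", msg, err);
    if (res.headersSent) {
        // a response (possibly another error) has already been sent
        return;
    }
    res.status(code).json({error: msg});
};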
Add a web interface allowing online editing of questions, tests, etc. This could be done by shifting question/test code into the DB, or by just directly editing the on-disk files.
Secret keys need to be specified in config files somewhere, but not committed to the main source repository.
Students seem to expect that they can enter numeric answers in the form "3/2" instead of "1.5". This is probably due to familiarity with WebAssign (used for math classes at UIUC), which can accept this. We should probably change things to handle this.
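A sketch of the parsing (hypothetical helper, not existing code): accept either a plain number or a simple fraction.
function parseNumericAnswer(text) {
    var s = String(text).trim();
    var frac = s.match(/^([+-]?\d*\.?\d+)\s*\/\s*([+-]?\d*\.?\d+)$/);
    if (frac) {
        var denom = parseFloat(frac[2]);
        return denom === 0 ? NaN : parseFloat(frac[1]) / denom;
    }
    return /^[+-]?\d*\.?\d+$/.test(s) ? parseFloat(s) : NaN;
}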
Students frequently requested being able to find out how a given practice exam question mapped back to the homework question numbering. In general this is a problem when we re-use questions and users want to be able to see the back-mappings to tests from questions.
Make sure server.js can survive and accurately report:
This was partially implemented in c3876b1bfa058afe65d3c23afcdc202419f69871 in old-PrairieLearn (at least the main server should survive question server errors).
Instructors need to be able to easily access aggregate statistics, such as student score distributions on each homework, completion percentages for assessment items, etc. This should probably be done by "tests", as it is rather test-specific. We should probably also expose as much of this as possible to students, or at least allow this on a course-by-course basis.