autolabjs / autolabjs

Auto evaluation software for programming projects and assignments. This repository contains the AutolabJS server.

Home Page: https://autolabjs.github.io/

License: GNU General Public License v3.0

Java 3.21% Shell 47.54% JavaScript 45.28% CSS 0.58% C++ 0.75% Python 2.50% Dockerfile 0.14%
gitlab autolab evaluation educational-technology education laboratory-exercises programming-contests programming-exercises coding-challenge courses

autolabjs's Introduction

Autolab

Maintainability Codecov Build Status Chat CII Best Practices License: GPL v3

Autolab is auto evaluation software for programming labs. It currently supports automatic evaluation in the Java, Python 2, Python 3, C++ and C programming languages. Autolab uses GitLab as the component that provides the version control system: all student code submissions reside in GitLab, thus benefiting from the power of Git. The software is modular, with a separate microservice instance for each application component.

Please see

Features      Releases      Documentation     

Contribute by

Creating Issues      Writing Documentation      Making a PR that follows coding standards     


Respect others. Follow Code of Conduct     

autolabjs's People

Contributors

ankshitjain, coditva, efueger, gnarula, kashyapgajera, prasadtalasila, rajat503, sangoltejas, shivin7, vinamrabhatia, yash10p


autolabjs's Issues

time and commit hash columns on score board

There seems to be quite a bit of confusion about how the scores shown on the score board map to the corresponding commits. Would two extra columns, time and commit hash/label, help resolve this confusion?

What would be the difficulty of adding these two to the scoreboard?

Timing out an evaluation request

If, for any reason, a client computer (identified by IP) submits an evaluation that cannot be finished, Autolab will not allow any more submissions from that client. Right now, evaluation requests do not time out under certain conditions. We need to explore these conditions and come up with a way to time out in these scenarios.
One reason an evaluation can enter an infinite loop is that the files for the requested evaluation are missing from the lab_author / student repositories.
If, for any reason, the evaluation is not completed by Autolab within a certain time (say 20 seconds), we can time out the request at all the components.
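The second suggestion can be sketched as a timeout wrapper, assuming the evaluation is exposed as a promise-returning function (`runEvaluation` and the caller names below are hypothetical, not the project's actual code):

```javascript
// Sketch: reject an evaluation request that does not settle within `ms`.
// The 20-second limit comes from the issue text; everything else is assumed.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('evaluation timed out')), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Hypothetical usage at any component that handles an evaluation:
// withTimeout(runEvaluation(request), 20000)
//   .then(score => sendScore(score))
//   .catch(err => reportFailure(err));
```

Applying the same wrapper at the web application, load balancer and execution node would give the "time out at all the components" behaviour described above.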

non-terminating client script for evaluation

Under the following conditions, the client-side submit function goes into an infinite loop.

  1. The lab has been created in labs.json.
  2. The corresponding repositories do not exist in the 'lab_author' and 'studentID' namespaces.
  3. An evaluation request is submitted.

When this happens, the evaluation request does not terminate. The good news is that all the components (main server, load balancer and execution nodes) continue to work.

It might be interesting to see what really happens to the evaluation request under these conditions.

Execution faults due to existence of student_solution

The following issues have been observed in the execution nodes.

  1. If the lab_author directory contains a student_solution folder, it does not get overwritten with the current student's files. Everyone who requests an evaluation gets evaluated against the files in student_solution of the lab_author repository. We can take care of this by emptying the contents of the student_solution directory before cloning the user's files.

Making Docker Containers

  • Every execution node should be a separate container.
  • Load balancer, database and the main server should be in one machine.
  • See the tradeoff for GitLab

Parallel execution on re-submission

Scenario: There are multiple submissions of the same lab by the same student, and a Docker container receives the second evaluation request while the first request is being evaluated in another Docker container.

Suspected Reason: Same evaluation path in both the containers.

Suggested Solution: Use OverlayFS as the container File System.

programming language specific classpaths

At present, the classpaths for the different programming languages are not set in an organized manner. The execution.sh file contains a classpath for the Java language only (lines 50-51). Similar language paths must be set for all programming languages.
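One possible organization is a per-language environment table that the execution node applies before running a submission. The variable values below are illustrative assumptions, not the project's actual paths:

```javascript
// Sketch: per-language environment settings so execution.sh (or its
// Node.js caller) no longer hardcodes only the Java classpath.
const languageEnv = {
  java:    { CLASSPATH: 'lib/*:.' },
  python3: { PYTHONPATH: 'lib:.' },
  python2: { PYTHONPATH: 'lib:.' },
  cpp:     { CPLUS_INCLUDE_PATH: 'include' },
  c:       { C_INCLUDE_PATH: 'include' },
};

// Build the environment for a child process evaluating `language`.
function envFor(language) {
  return Object.assign({}, process.env, languageEnv[language] || {});
}
```

The evaluator would pass `envFor(language)` as the `env` option when spawning the compile/run step, keeping all language-specific paths in one place.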

Execution node crash due to incomplete lab setup

Execution nodes are known to crash under the following conditions.

  1. An administrator creates a lab on the web application.
  2. The corresponding solution repository is not in place on GitLab.
  3. A student submits an evaluation request.

When the evaluation request reaches the execution node, the node tries to pull the repository from lab_author. Since the code expects the repository to exist in GitLab, the execution node cannot handle this exception. Such a failure should return a distinct error to the student so that the problem can be taken care of immediately.

Suggestions:

  1. Exception handling in execution node.
  2. Update of the return codes in execution.sh script.

admin page on webapp

We discussed the possibility of an admin page on the web application. The admin page would allow a course administrator to perform the following tasks.

Permit a course administrator to log in using an API key. We can debate the strategies of third-party authentication vs. local authentication vs. API key authentication. API key-based authentication seems easiest to implement while providing the same level of security.

Once an administrator logs in, the client can use cookies to give the administrator continued access to the /admin page until logout.

Admin is expected to perform the following operations.

  1. Update the labs.json file through an HTML editor.
  2. Trigger a reevaluation request on a lab. We should debate the logic of triggering a reevaluation on an active lab; this is only for scenarios where the solution is updated during the lab itself.

When doing these things, we should keep in mind the future requirement of hosting multiple courses simultaneously.

@rajat503 can comment on all the suggested tasks.

contributors page

On the front-end web application, it would be better to have a page listing the contributors.

Fate of automation scripts

@tejas-sangol spent some time developing scripts in the misc directory of both the master and dev branches. Two issues pertain to this section of the code base.

  1. Should we continue to maintain this code base?
  2. If so, what would be the corresponding changes to the code and wiki pages to reflect the latest code commits?

pruning and labeling of branches

Currently there are too many branches with inappropriate names. It is good practice to have only two branches, namely master and dev. Please remove the other branches.

If certain features have been developed but are not being used at the moment, the relevant commits can be given appropriate labels and comments so that we can pull the code from those commits in the future.

Since the linting is complete, it is better to check the live status on a cloud machine and migrate the changes to the master branch.

show server time on web application

Since the server time can be significantly different from local machine time, it may be better to add the following features to the web application.

server time on the main page
Showing the server time on the main page below the top menu bar will alert users to complete their work in time.

remaining time for active labs
We can also add another item, remaining time, for each lab. For all active labs, the remaining time is shown in the days, hours:minutes format. For all inactive labs, the remaining time is shown as zero.
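The remaining-time display could be computed as follows; a sketch of the suggested days, hours:minutes format, with the function name assumed:

```javascript
// Sketch: remaining time until `endTime` in "D days, H:MM" format.
// Inactive (expired) labs report zero, as suggested above.
function remainingTime(endTime, now = new Date()) {
  const ms = endTime - now;
  if (ms <= 0) return '0 days, 0:00';
  const totalMinutes = Math.floor(ms / 60000);
  const days = Math.floor(totalMinutes / (24 * 60));
  const hours = Math.floor((totalMinutes % (24 * 60)) / 60);
  const mins = totalMinutes % 60;
  return `${days} days, ${hours}:${String(mins).padStart(2, '0')}`;
}
```

The web application would render this per lab, refreshing against the server clock rather than the browser clock.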

Changes on execute.sh

  1. Add the classpath: export CLASSPATH="lib/*:."
  2. Suppress the -Xlint warning.

Suggested fix (count only non-Note lines of the log as real errors):

    cat log.txt >> ../log.txt
    errors=$(wc -l < log.txt)
    errors2=$(grep -vc "^Note:" log.txt)
    if [ "$errors2" -eq 0 ]; then
        : # only -Xlint "Note:" lines present; treat compilation as clean
    fi

  3. Copy the student libraries into the working directory after moving the author solution:
    cp -r ./student_solution/lib working_dir/

Hard coded GitLab IP

The GitLab IP is hardcoded in the download bash scripts at the execution nodes and the load balancer.

Refactor load balancer

The load balancer can be broken down into two components (LB1 and LB2). One would handle communications with the main server, MySQL and the execution nodes.
The second would schedule the jobs and dynamically change the number of execution nodes.

Assumptions when devising a mechanism for the addition/removal of nodes:

  1. The time taken for the execution of one submission is 5 seconds.
  2. Out of all the pending jobs, each node gets a maximum of 5 submissions to execute.
  3. Hence the maximum waiting time for each submission is 25 seconds.
  4. The second component of the load balancer will have an array of unused ports (8081 to 8181) from which new nodes will be attached, a node_queue array with all the available nodes, and a job_queue array with all the pending jobs.
  5. When LB1 receives a request from the main server, it forwards the request to LB2:
        if a node is available:
            send the job with the scheduled node to LB1
        else if number_of_jobs >= number_of_nodes * 5:
            create a new node, send the job with the new node to LB1
        else:
            wait for a node to complete execution
  6. When LB1 receives a result from one of the nodes, it forwards the score to the main server and updates the database. The node details are sent to LB2:
        if number_of_nodes * 5 >= number_of_jobs:
            remove the node
        else if a job is pending:
            send the job with the node_details to LB1 for execution
        else:
            add the node to the node_queue
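The LB2 policy in points 5 and 6 can be modelled roughly as below. Node provisioning is stubbed out and all names are illustrative assumptions; the 5-job threshold comes from the assumptions listed above:

```javascript
// Minimal in-memory model of the LB2 scheduling decision for one job.
// state: { freeNodes: [], busyNodes: [], jobQueue: [], freePorts: [] }
const JOBS_PER_NODE = 5;

function scheduleJob(state, job) {
  if (state.freeNodes.length > 0) {
    const node = state.freeNodes.shift();      // a node is available: run now
    state.busyNodes.push(node);
    return { action: 'run', node, job };
  }
  if (state.jobQueue.length >= state.busyNodes.length * JOBS_PER_NODE
      && state.freePorts.length > 0) {
    const node = { port: state.freePorts.shift() }; // provision a new node
    state.busyNodes.push(node);
    return { action: 'run', node, job };
  }
  state.jobQueue.push(job);                    // otherwise queue and wait
  return { action: 'wait', job };
}
```

The symmetric path (a node finishing a job) would either pull the next entry from `jobQueue` or, when capacity exceeds demand, retire the node and return its port to `freePorts`.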

Load balancer down node scheduling

  • The load balancer should keep a status for every execution node and schedule accordingly. It should also account for cases where a submission brings a node down and the node comes back up again.

reevaluation failure in v0.2-beta

The reevaluation step of the web application does not complete in v0.2-beta. We need to automate the manual steps of the reevaluation process before the reevaluation request can succeed.

webapp UI improvements

The following improvements would be helpful.

  1. Time specification for the lab.
    Instead of specifying one single date, it is better to specify both start and end times in the following format.
    Start Time: 15:00, 9-8-2016
    End Time: 17:00, 9-8-2016
  2. Update to the submit page.
    If the lab is not active, we can show a message at the top:
    "Lab is no longer active. The result of this evaluation shall not be added to the score card."

Explicit specification of instructors repository in labs.json

At present, the lab_author GitLab user is assumed to be the instructor. In order to support multiple courses with a single GitLab deployment, it is better to provision an explicit instructor's repository in the labs.json file.

The suggested field is as follows.

    "lab_solution": "gitlab repo url"

Another way to reduce repetition of the URL is to specify the instructor's username in courses.json.
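For illustration, a lab entry carrying the proposed field might look like this; every value and the surrounding field names are assumptions, not the project's actual schema:

```javascript
// Hypothetical labs.json entry with an explicit solution repository.
const labEntry = {
  lab_no: 'lab_1',
  start_time: '15:00, 9-8-2016',
  end_time: '17:00, 9-8-2016',
  lab_solution: 'http://gitlab.example.com/instructor1/lab_1.git',
};
```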

Ansible errors

The following errors are known to occur during install/uninstall process.

  1. Only one execution node gets installed. We need multiple execution nodes in place; we can create five execution nodes by default.
  2. During uninstall, the keys of the main_server (JavaAutolab/deploy/keys/main_server/id_rsa*) are not removed. This leads to an error during the reinstall process.
  3. The install script needs to consider the possibility of a rerun and skip steps that have already been completed. We need to use the conditional execution feature of Ansible (the include ... when: clause of a task) to have this feature in place.

Returning log.txt to user

At present, we only give users the score for all test cases. We could provide the log.txt file as well, but that would require a file transfer from the execution node to the web application via (with the consent of) the load balancer.

Write Tests

  • Student workflow and submission correctness
  • Scalability

Add Prefix to Lab_No

  • Lab_No attribute for a lab should support prefixes like lab_1/assignment_1.
  • Make relevant UI changes

Installation Package

We really need an installation script for the software. The following steps may help with the creation of the installation script.

  1. Wherever files need to be changed, create sample files and copy these sample files over the existing files in the Docker container.
  2. Assume a default cache directory for installation, or let the user specify the cache directory through an installation config file.
  3. If IP addresses need to be specified, let the user specify them in a default config file for installation.

We should follow the standard open-source configure / make / make install cycle. Please see the relevant documentation to create this - 1, 2, 3.

GitLab API

From the email discussion:

Two kinds of scenarios.

  1. administrator creating / managing users (once a semester, may be)
  2. lab author performing the following weekly tasks
    a) creating new repository for lab statement and solutions
    b) creating a skeleton repository in all the students accounts

There are many GitLab API wrapper clients available. For smooth operation, it is better to choose a Node.js-based or Java-based package. For the first task, it may be better to choose a Node.js-based package; the administrator can perform the user-creation task with it.

For the second task, we need to use the available Java wrapper. We may need to write another wrapper that takes in a config file with the following information.

  1. authentication
  2. a user list in a separate file
  3. a local weekly repository with solutions
  4. a skeleton repository

A lab author provides the config file, which will be used to create an authoritative solution in the lab author's namespace and a skeleton repository in all the student namespaces.

Having this wrapper in Java would make it easy for lab authors to finish their weekly work. If the available Java wrapper does not work, it may be prudent to write a simple Java package containing the required set of classes to make the necessary API calls.

Code Review and Refactoring

As part of code review and refactoring work, we need to undertake the following improvements.

  1. Automated option to uninstall AutoLab. This would ideally be an Ansible package.
  2. Prune branches to master and dev. Create v0.1 and v0.2beta labels.
  3. Restructure wiki page for all releases.
  4. Single script for start/stop/restart the Autolab components on all machines.
  5. Protection against SQL injection attacks.
  6. Use ORM module in load balancer to interact with DB.
  7. JSHint, JSLint and JSLint Errors seem like good additions to our tool chain for quality control in the project.
  8. Decoupling of front-end web application from database. All database interactions happen through load balancer.
  9. Implementation of log framework using Winston logger library.
  10. Provision for environment variables wherever needed. No hard-coding of filenames. Even magic constants, if any, will become environment variables.
  11. one-step submission script for autolab ref
  12. Refactor load balancer into two sub-components. One is responsible for communication with other components. Another one is responsible for scheduling and dynamic provisioning of execution nodes. See #35
  13. Change the encryption library to libsodium (NaCl). All communication between the web application / load balancer / execution nodes must use a symmetric session key.
  14. Potential replacement of MySQL with a light-weight or NoSQL DB that is more scalable. See reddit discussion.
  15. Follow project checklist to make sure most of the items in the checklist are done.

Number of containers in execution node

Should each execution node have multiple Docker containers, each taking a maximum of one job, or only one container taking multiple jobs?
I'm more inclined towards multiple containers in one node, despite the overhead of running containers, for the following reasons:

  • It gives complete isolation to each job.
  • A malicious job can only affect its own container; the other containers can still take submissions.
  • The node doesn't delay the response of marks until all jobs at the node are finished, giving faster performance.

Cleanup

Appropriate directories need to be created in all microservices and the misc directory. SSL certificates, configuration, and start/stop scripts also need to be arranged in proper directories.

Add test case status

  • Make failure of a test case more verbose by providing description of failure.
  • Give memory and runtime statistics.

load balancer does not exclude failed execution nodes

Sometimes the load balancer receives the following JSON error response from an execution node.

    { [Error: connect ECONNREFUSED 127.0.0.1:8082]
      code: 'ECONNREFUSED',
      errno: 'ECONNREFUSED',
      syscall: 'connect',
      address: '127.0.0.1',
      port: 8082 }

Even after receiving the above error, the load balancer keeps the execution node running at port 8082 in the available execution nodes list. This needs to be changed.

Looking closely at the error, we can deduce that the load balancer is making an incorrect socket request to localhost on port 8082, whereas the execution node has been configured at the socket 10.0.0.5:8082. Oddly, this error occurs only for port 8082, every time.

The error is also not position dependent. I tried the following load balancer configuration.

  "Nodes": [
    {
      "hostname": "10.0.0.5",
      "port": "8084"
    },
    {
      "hostname": "10.0.0.5",
      "port": "8083"
    },
    {
      "hostname": "10.0.0.5",
      "port": "8082"
    }
  ]

Note that port 8082 is the last one in the list. Still, the error occurs only with the socket 10.0.0.5:8082; the sockets at ports 8083 and 8084 work fine.

If, for any genuine reason, an execution node is down or refusing evaluation requests, the load balancer must recognize this and respond. A preferred response is to move the node that is sending connection-refused messages to a down list with the status connection refused. Such a status can then be clearly shown on the appurl:9000/status page as well.
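The down-list handling could be sketched as follows; the node objects mirror the configuration shown above, but the function and field names are assumptions:

```javascript
// Sketch: on ECONNREFUSED, move the node from the available list to a
// down list and record why, for display on the /status page.
function markNodeDown(nodes, downNodes, err) {
  if (err.code !== 'ECONNREFUSED') return;
  const idx = nodes.findIndex(
    n => Number(n.port) === Number(err.port));
  if (idx === -1) return;
  const [node] = nodes.splice(idx, 1);   // exclude from scheduling
  node.status = 'connection refused';    // shown on appurl:9000/status
  downNodes.push(node);
}
```

A periodic health check could later probe the down list and move recovered nodes back into rotation.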

Course creation documentation after installation

Detailed documentation for creating a course and getting started with the course. Specifically, it requires details on:

  • user creation, specifically the lab_author and student accounts
  • default user (project) settings
  • any other relevant information

main_server does not read the latest config files

We have an admin portal that is used to update course and lab configurations. However, the main server loads the configuration files at startup and does not read them again.

We need to force main_server to re-read the configuration files after a configuration update in the admin portal.

Documentation for v0.2-dev and v0.1

Specific to v0.2-dev

  • On the front-end web application, it would be better to have a page listing the contributors.

Common to the v0.1 and v0.2-dev branches:

Detailed documentation for creating a course and getting started with the course. Specifically, it requires details on:

  • user creation, specifically the lab_author and student accounts
  • default user (project) settings
  • initial steps after the installation
  • any other relevant information
