evalai-cli's Introduction

EvalAI-CLI

Official Command Line utility to use EvalAI in your terminal.

EvalAI-CLI is designed to extend the functionality of the EvalAI web application to the command line, making the platform more accessible and terminal-friendly for its users.


Installation

EvalAI-CLI and its required dependencies can be installed using pip:

pip install evalai

Once EvalAI-CLI is installed, check out the usage documentation.

Contributing Guidelines

If you are interested in contributing to EvalAI-CLI, follow our contribution guidelines.

Development Setup

  1. Set up the development environment for EvalAI and make sure it is running.

  2. Clone the evalai-cli repository to your machine via git

    git clone https://github.com/Cloud-CV/evalai-cli.git evalai-cli
  3. Create a virtual environment

    cd evalai-cli
    virtualenv -p python3 venv
    source venv/bin/activate
  4. Install the package locally

    pip install -e .
  5. Point evalai-cli at the local EvalAI server running on http://localhost:8000 by running:

    evalai host -sh http://localhost:8000
  6. Log in to the CLI using the command evalai login. Two users are created by default:

    Host User - username: host, password: password
    Participant User - username: participant, password: password

evalai-cli's People

Contributors

abinav62, ayushr1, burnerlee, dependabot[bot], deshraj, drepram, gautamjajoo, gchhablani, guyandtheworld, hkmatsumoto, inishchith, jayaike, khalidrmb, krtkvrm, nikhilmudholkar, nikochiko, nileshprasad137, radiantly, ram81, rishabhjain2018, rohansreelesh, saiputravu, savish28, varunagrawal

evalai-cli's Issues

Google Summer of Code Tasks

Features to implement for the EvalAI CLI:

  • Get all challenges

    • command: evalai challenges list
    • data: name, id and short description
  • Show the number of challenges in which the participant has participated

    • command: evalai challenges list -participate true
    • data: Show the challenge id, challenge name, and short description
  • Show the number of challenges which the participant has hosted

    • command: evalai challenges list -host true
    • data: Show the challenge id, challenge name, and short description
  • Get current challenges

    • command: evalai challenges list ongoing
    • data: name, id and short description
  • Get past challenges

    • command: evalai challenges list past
    • data: name, id and short description
  • Get future challenges

    • command: evalai challenges list future
    • data: name, id and short description
  • Get details about the phases of a challenge

    • command: evalai challenges phases list -c 1
    • data: phase name, id and short description
  • Get details about a particular phase of a challenge

    • command: evalai challenges phases -c 1 -p 2
    • data: show all the details that the ChallengePhase serializer exposes in the Django app
  • Get all participant teams of the user

    • command: evalai teams list
    • data: show team name, team pk, members usernames in the team
  • Create a participant team

    • command: evalai teams create
    • data: show team name, team pk, member
  • Participate in a challenge

    • command: evalai challenges participate -c 1 -pt 123
    • data: success message with relevant metadata
  • Make submissions to the challenge phase

    • command: evalai challenges submit -c 1 -p 2 -file submission.json
    • data: show the status of the submission with all relevant information that we show in my-submissions
  • Get status of a particular submission

    • command: evalai submissions -s 1234
    • data: show the status of the submission with all relevant information that we show in my-submissions
  • Get all the challenge phase splits of a phase in a challenge

    • command: evalai challenges phases splits -c 1 -p 2
    • data: show the name, id of the challenge phase splits
  • Pretty print the leaderboard of a challenge

    • command: evalai challenges leaderboard -c 1 -cps 3
    • data: show the leaderboard as it is shown on EvalAI website
  • Show submission stats for a challenge (both for participants and hosts)

    • command: evalai challenges submissions stats -c 1
    • data: show participant-level stats if the user is a participant, or all submission stats if the user is a host
  • Show submission stats for a challenge phase (both for participants and hosts)

    • command: evalai challenges submissions stats -c 1 -p 2
    • data: show participant-level stats if the user is a participant, or all stats if the user is a host
  • Display the number of submissions made between two dates for a challenge phase

    • command: evalai challenges submissions -d1 mm/dd/yyyy -d2 mm/dd/yyyy
    • data: show the submission IDs, submission dates, and the other details shown in the my-submissions tab in EvalAI
  • Display the number of submissions made between two dates for a challenge phase, filtered by participant team

    • command: evalai challenges submissions -d1 mm/dd/yyyy -d2 mm/dd/yyyy -pt 12
    • data: show the submission IDs, submission dates, and the other details shown in the my-submissions tab in EvalAI, for a particular participant team
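The ongoing/past/future subcommands above boil down to classifying challenges by date on the client. A minimal sketch of that classification, assuming hypothetical "start"/"end" datetime keys rather than the real EvalAI API field names:

```python
from datetime import datetime

def filter_challenges(challenges, status, now=None):
    """Return the challenges whose dates match `status`.

    `challenges` is a list of dicts with "start" and "end" datetimes
    (illustrative keys, not the real EvalAI payload field names).
    """
    now = now or datetime.utcnow()
    if status == "ongoing":
        return [c for c in challenges if c["start"] <= now <= c["end"]]
    if status == "past":
        return [c for c in challenges if c["end"] < now]
    if status == "future":
        return [c for c in challenges if c["start"] > now]
    raise ValueError("status must be 'ongoing', 'past' or 'future'")
```

Each subcommand would then fetch the challenge list once and apply the matching filter before rendering.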

Add docs for auth token

  • How to add an auth token for EvalAI-CLI is not documented in the README
  • Add contribution guidelines

I am not sure if this is necessary, but when I tried running a few commands I saw a warning to add token.json, which is why I created this issue.

Development templates

We should have templates that describe the project's expectations to new contributors. This requires putting the files in a .github folder so that GitHub renders them appropriately. Below is a list of the files needed to close this issue.

  1. CONTRIBUTING.md
  2. ISSUE_TEMPLATE.md
  3. PULL_REQUEST_TEMPLATE.md

Token File Missing Message

Change the error message shown when the token file isn't present in the specified directory.

The message should be:

The authentication token json file doesn't exist at the required path.
Please download the file from the Profile section of the EvalAI webapp and place it at ~/.evalai/token.json

Create challenge configuration using CLI

Users should be able to create a challenge configuration zip using the CLI. The idea is to have a setup similar to the one Sphinx provides when generating a documentation configuration. It should work something like this:

Command: evalai create_challenge

$ evalai create_challenge

Enter the challenge name:
Test Challenge

Enter the Number of Phases:
2

Enter the Phase 1 name:
Test Phase

Enter the Phase 1 codename:
test-phase
.
.
.
.
.
.

CC: @isht3 @varunagrawal @RishabhJain2018

Multiple tokens

  • Create a file that stores multiple tokens so that a user can switch between host and participant accounts, similar to a login feature.
    Label - enhancement
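One possible layout for such a multi-token file, sketched below; the "profiles" structure and key names are assumptions for illustration, not an existing evalai-cli format:

```python
import json

def load_token(path, profile):
    """Read the token for a named profile (e.g. "host" or "participant")
    from a JSON file shaped like:
    {"profiles": {"host": {"token": "..."}, "participant": {"token": "..."}}}
    """
    with open(path) as f:
        store = json.load(f)
    return store["profiles"][profile]["token"]
```

Switching accounts then amounts to changing which profile name the CLI reads, rather than overwriting a single token.json.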

Add test cases to cover the complete codebase

The coverage is 72%, which is quite low, so we should add test cases to cover all the code before merging new PRs.

Deliverables: Increase the coverage to 90%.

Remove hard-coded token file name.

We need to figure out a better way to save the token file name.
One way could be to have a config.json that saves these values and is loaded at startup. This will allow us to not only save custom config file names but also custom locations.
We can prompt the user at startup to enter these values if they are missing, providing the current values as defaults (similar to how ssh keys are generated).
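A minimal sketch of the config-over-defaults idea; the config file name and keys here are assumptions for illustration:

```python
import json
import os

# Illustrative defaults; key names are assumptions, not an existing format.
DEFAULTS = {
    "host_url": "http://localhost:8000",
    "token_dir": os.path.expanduser("~/.evalai"),
    "token_file": "token.json",
}

def load_config(path):
    """Merge a saved config.json over the defaults at startup.

    Missing keys fall back to the defaults, so users only need to store
    the values they customized.
    """
    config = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            config.update(json.load(f))
    return config
```

Prompting with the current values as defaults (as with ssh key generation) would then just pre-fill each prompt from this merged dict.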

Modify docstrings in functions

Current scenario: docstrings describe a function's behavior in a single line.

Deliverables:
We should add docstrings answering the following points:

  1. Args: what arguments the function accepts
  2. Returns: what the function finally outputs
  3. Raises: what exceptions it raises
  4. Command (optional): the command used to invoke it
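An example of the proposed layout on a hypothetical function (the function itself and its command are illustrative, not part of evalai-cli):

```python
def download_file(url, destination):
    """Download a file from EvalAI and save it locally.

    Args:
        url: Absolute URL of the file to download.
        destination: Local path where the file is written.

    Returns:
        The path the file was written to.

    Raises:
        ValueError: If `url` is empty.

    Command:
        evalai download <url>  (hypothetical invocation)
    """
    if not url:
        raise ValueError("url must be non-empty")
    # ... fetch the file and write it to `destination` ...
    return destination
```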

Revert PR #2

Description: Revert pull request #2 and merge it again, squashing the changes into one merge commit with a proper commit message.

Abstracting the request pattern.

Currently, most of the request functions use the same HTTP request pattern, with a few exceptions.

    import sys

    import requests
    from click import echo, style

    # get_request_header, validate_token, and EVALAI_ERROR_CODES are
    # evalai-cli's own utilities.
    headers = get_request_header()

    try:
        response = requests.get(url, headers=headers)
        response.raise_for_status()
    except requests.exceptions.HTTPError as err:
        if (response.status_code in EVALAI_ERROR_CODES):
            validate_token(response.json())
            echo(style("Error: {}".format(response.json()["error"]), fg="red", bold=True))
        else:
            echo(err)
        sys.exit(1)
    except requests.exceptions.RequestException as err:
        echo(err)
        sys.exit(1)

This could easily be simplified by extracting the pattern into a shared helper function.
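A sketch of such a helper, keeping the same error-handling behavior as the snippet above; echo/style are replaced with plain print to keep it self-contained, and the EVALAI_ERROR_CODES list is an illustrative stand-in for the real one:

```python
import sys

import requests

# Illustrative stand-in; the real list lives in evalai-cli's utilities.
EVALAI_ERROR_CODES = [400, 401, 403, 406]

def make_request(url, headers=None):
    """GET `url` and apply the shared error-handling pattern in one place."""
    try:
        response = requests.get(url, headers=headers)
        response.raise_for_status()
    except requests.exceptions.HTTPError as err:
        if response.status_code in EVALAI_ERROR_CODES:
            print("Error: {}".format(response.json().get("error")))
        else:
            print(err)
        sys.exit(1)
    except requests.exceptions.RequestException as err:
        print(err)
        sys.exit(1)
    return response
```

Callers would then write `response = make_request(url, headers=get_request_header())` instead of repeating the try/except block in every command.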

Add a command for logging in to evalai using cli

Deliverables

  • Add a command evalai login <username> which will prompt for the password.
  • Once the password is entered, the command will create an auth token for the user via the EvalAI login API and save it in the token.json file at ~/.evalai/token.json on the user's system.
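A sketch of the proposed flow. The auth route and response shape are assumptions modeled on typical Django token-auth endpoints, and `post` is any callable `(url, data) -> parsed JSON dict` (e.g. a thin requests wrapper):

```python
import json
import os

def login(host, username, password, post):
    """Exchange credentials for an auth token via the login API (route
    is an assumption for illustration)."""
    body = post("{}/api/auth/login".format(host),
                {"username": username, "password": password})
    return body["token"]

def save_token(token, path=None):
    """Write the token to ~/.evalai/token.json (or a custom path)."""
    path = path or os.path.expanduser("~/.evalai/token.json")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump({"token": token}, f)
    return path
```

Injecting `post` keeps the flow testable without a running EvalAI server.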

Auth Token value gets reset

The token placed at ~/.evalai/token.json gets reset whenever a user runs the test cases on their local machine.

Steps to reproduce:

  1. Add a valid token at ~/.evalai/token.json.
  2. Run the test cases using py.test.
  3. Check the value of token at ~/.evalai/token.json

The token value at step 1 != value at step 3.
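One way to fix this is to make the token location overridable so the test suite can point it at a temporary directory instead of the real ~/.evalai. The EVALAI_TOKEN_DIR variable below is hypothetical, named here only for illustration:

```python
import os

def token_path():
    """Resolve the token file path, honoring an optional override so
    tests never touch the user's real ~/.evalai/token.json."""
    base = os.environ.get("EVALAI_TOKEN_DIR",
                          os.path.expanduser("~/.evalai"))
    return os.path.join(base, "token.json")
```

The test suite would set the override (e.g. via a pytest fixture with a tmp directory) before exercising any command that writes the token.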

Token Expiration Error

We should display an error that the token has expired and ask the user to generate a new one.

Storing the project settings in a config file.

Right now, there are some user-specific settings that need to be configured to use the CLI with different instances of EvalAI. Some of them are:

  • Host URL.
  • Token File Location
  • Token File Name

The default for Host URL would be http://localhost:8000, and the user could change it to their own host to use a different instance of EvalAI.

Fix bug while copying command

Bug:

Currently, when the command evalai set_token <auth_token> is copied to the clipboard, the content that is copied is evalai set_token &lt;auth_token&gt;.

Expected Behaviour

The copied content should be evalai set_token <auth_token>
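The stray &lt; and &gt; are HTML entities, so the likely fix is to unescape the string before handing it to the clipboard library, e.g. with the standard library's html module:

```python
import html

def clean_command(command):
    """Unescape HTML entities before copying a command to the clipboard."""
    return html.unescape(command)

# clean_command("evalai set_token &lt;auth_token&gt;")
# returns "evalai set_token <auth_token>"
```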

Check if docker is running on host machine before using evalai push

Current Scenario

If a user runs the evalai push command to make a submission while Docker isn't running on their machine, the error shown below is thrown, which will confuse the user.

Traceback (most recent call last):
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
    chunked=chunked)
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 354, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1239, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1285, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1234, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 964, in send
    self.connect()
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/docker/transport/unixconn.py", line 42, in connect
    sock.connect(self.unix_socket)
ConnectionRefusedError: [Errno 61] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
    timeout=timeout
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 638, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/urllib3/util/retry.py", line 367, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/urllib3/packages/six.py", line 685, in reraise
    raise value.with_traceback(tb)
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
    chunked=chunked)
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 354, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1239, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1285, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1234, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 964, in send
    self.connect()
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/docker/transport/unixconn.py", line 42, in connect
    sock.connect(self.unix_socket)
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionRefusedError(61, 'Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/bin/evalai", line 10, in <module>
    sys.exit(main())
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/evalai/submissions.py", line 53, in push
    docker_image = docker_client.images.get(image)
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/docker/models/images.py", line 312, in get
    return self.prepare_model(self.client.api.inspect_image(name))
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/docker/utils/decorators.py", line 19, in wrapped
    return f(self, resource_id, *args, **kwargs)
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/docker/api/image.py", line 245, in inspect_image
    self._get(self._url("/images/{0}/json", image)), True
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/docker/utils/decorators.py", line 46, in inner
    return f(self, *args, **kwargs)
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/docker/api/client.py", line 215, in _get
    return self.get(url, **self._set_request_timeout(kwargs))
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/requests/sessions.py", line 537, in get
    return self.request('GET', url, **kwargs)
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/requests/sessions.py", line 524, in request
    resp = self.send(prep, **send_kwargs)
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/requests/sessions.py", line 637, in send
    r = adapter.send(request, **kwargs)
  File "/Users/rishabhjain/Documents/Projects/evalai-cli/env/lib/python3.6/site-packages/requests/adapters.py", line 498, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionRefusedError(61, 'Connection refused'))

Expected Scenario

The error thrown should be Docker isn't running on your system. Please start docker before making submissions
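One possible guard, sketched with an injected client so it is easy to test; with the real Docker SDK the caller would pass `docker.from_env()`, whose `ping()` raises when the daemon is unreachable:

```python
def ensure_docker_running(client):
    """Return True if the Docker daemon answers a ping; otherwise print
    the friendly message and return False so the caller can abort."""
    try:
        client.ping()
        return True
    except Exception:
        print("Docker isn't running on your system. "
              "Please start docker before making submissions")
        return False
```

`evalai push` would call this before touching any image, converting the long traceback above into a single actionable line.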

Documentation milestones.

Write docs for:

  • Welcome page.
  • Installation.
  • Getting the token.
  • Connecting CLI to the host.
  • Basic commands overview.
  • View Challenge, Phase, Phase Splits.
  • Create a participant team.
  • Participate in a challenge.
  • Make a submission.
  • View status of a submission.
  • View all your submissions.
  • View the leaderboard.
  • Directory Structure.
  • Contributing guidelines.
  • Setting a theme for the docs.

Auto Docs Generation

By their very nature, CLI tools are harder to figure out than GUI tools. We will need really good docs to ensure the evalai-cli tool gains traction among users.

It would be great to have automatic documentation generation from doc strings as a starting point so we can add more detailed documentation later.

Redoing line-breaks.

Right now, line breaks are a continuous stream of hyphens; we should change them to something more aesthetically pleasing, such as '-' * N.

For more details, please refer to #29.

Display only the ongoing challenges with the participant flag.

Current Scenario.

Running the command evalai challenges --participant displays all the challenges the participant has participated in, both present and past.

Expected Behaviour.

The command evalai challenges --participant should display only the ongoing challenges that the participant has participated in.

Upcoming Command not working

$ evalai challenges upcoming

This doesn't work; the function that should be invoked is named future.

$ evalai challenges future

@RishabhJain2018 @isht3 This is just a typo-level issue. Let me know if I can propose a quick fix. 😄

Automated Testing & Release Pipeline

Once we have the project in a semi-working condition, it would be nice to have automatic testing and deployment to pypi.org via travis or a similar CI/CD tool.

Increase Coverage

We should now focus on covering the remaining minor test cases in order to increase the coverage to 95%.

Change docstrings.

Make docstrings more consistent by defining them with the following sections.

  • ARGS
  • PARSES
  • RAISES

Redesign get_challenge_count to accept both host and participant flags.

The function get_challenge_count, called when either participant or host is True, first checks host, and only if host is False does it check participant.

So when both flags are passed, only the host challenges are printed. We have to redesign get_challenge_count to handle both flags (and possibly rename it).
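A sketch of one way the redesigned function could honor both flags; the function name and return shape here are illustrative, not the existing API:

```python
def challenge_groups(participant=False, host=False):
    """Return which challenge groups to display, honoring both flags
    instead of short-circuiting on `host`."""
    groups = []
    if host:
        groups.append("host")
    if participant:
        groups.append("participant")
    return groups or ["all"]  # neither flag set: show everything
```

The caller would then fetch and print each group in turn, so `--host --participant` shows both sets rather than only the host challenges.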

#24
