

maintainer-tools

Workflows

Workflows for use by maintainers. These should be run from your fork of this repository, with an encrypted secret called ACCESS_TOKEN that is a personal access token with repo and workflow scopes.

PR Script

The PR Script Workflow allows you to make a commit against a PR as a maintainer without having to check out the PR locally and push the change. The manual workflow takes as its inputs a link to the PR and a comma-separated list of quoted commands to run. As a convenience, you can also type "True" for the option to run pre-commit against the PR to fix up any pre-commit errors.

Actions

Base Setup

Use this action to consolidate setup steps and caching in your workflows. You can control the versions of Python and Node used by setting matrix.python-version and matrix.node-version, respectively. An example workflow file would be:

name: Tests

on:
  push:
    branches: ["main"]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Base Setup
        uses: jupyterlab/maintainer-tools/.github/actions/base-setup@v1
      - name: Install
        shell: bash
        run: pip install -e ".[test]"
      - name: Test
        shell: bash
        run: pytest

If you want to test against your minimum dependencies, you can use the following option, which will create a constraints file and set the PIP_CONSTRAINT environment variable so that installations use that file. By default the Python version will be "3.7", which can be overridden with python_version. The environment variable also works if you use an environment manager such as hatch. Note: this does not work on Windows and will error.

  minimum_version:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Base Setup
        uses: jupyterlab/maintainer-tools/.github/actions/base-setup@v1
        with:
          dependency_type: minimum
      - name: Install
        run: pip install -e ".[test]"
      - name: Test
        run: pytest

If you want to test against prerelease versions of your dependencies, you can use the following option, which will create a constraints file and set the PIP_CONSTRAINT environment variable so that installations use that file. By default the Python version will be "3.12", which can be overridden with python_version. The environment variable also works if you use an environment manager such as hatch. Note: this does not work on Windows and will error.

  prereleases:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Base Setup
        uses: jupyterlab/maintainer-tools/.github/actions/base-setup@v1
        with:
          dependency_type: pre
      - name: Install
        run: pip install -e ".[test]"
      - name: Test
        run: pytest

Check Links

Use this action to check the links in your repo using pytest-check-links. It will ignore links to GitHub and cache links to save time.

When adding this to a repo, you may need to skip some files or links. If the build fails, copy the command shown after "Checking files with command" in the build log, add the appropriate --ignore-glob and --check-links-ignore options until the tests pass locally, and then provide those values as the ignore_glob and ignore_links inputs to the action, respectively.

name: Check Links

on:
  push:
    branches: ["main"]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: jupyterlab/maintainer-tools/.github/actions/base-setup@v1
      - uses: jupyterlab/maintainer-tools/.github/actions/check-links@v1

Enforce Labels

Use this action to require one of the triage labels (documentation, bug, enhancement, feature, maintenance) on PRs in your repo. An example workflow file would be:

name: Enforce PR label

on:
  pull_request:
    types: [labeled, unlabeled, opened, edited, synchronize]

jobs:
  enforce-label:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - name: enforce-triage-label
        uses: jupyterlab/maintainer-tools/.github/actions/enforce-label@v1

Pre-Commit Check

Use this action to run a pre-commit check with a manual stage. It will print a suitable error message on failure.

name: Pre-Commit Check

on:
  push:
    branches: ["main"]
  pull_request:

jobs:
  pre_commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: jupyterlab/maintainer-tools/.github/actions/base-setup@v1
      - uses: jupyterlab/maintainer-tools/.github/actions/pre-commit@v1

Test Downstream Libraries

Use this action to test a package against downstream libraries. This can be used to catch breaking changes prior to merging them. An example workflow file would be:

name: Downstream Tests

on:
  push:
    branches: ["main"]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Base Setup
        uses: jupyterlab/maintainer-tools/.github/actions/base-setup@v1
      - name: Test Against Foo
        uses: jupyterlab/maintainer-tools/.github/actions/downstream-test@v1
        with:
          package_name: foo
      - name: Test Against Bar
        uses: jupyterlab/maintainer-tools/.github/actions/downstream-test@v1
        with:
          package_name: bar
          env_values: "FIZZ=buzz NAME=snuffy"

To test against a prerelease, use package_download_extra_args: "--pre".

Test SDist

Use this pair of actions to build an sdist for your package, and then test it in an isolated environment.

name: Test Sdist
on:
  push:
    branches: ["main"]
  pull_request:

jobs:
  make_sdist:
    name: Make SDist
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@v2
      - uses: jupyterlab/maintainer-tools/.github/actions/base-setup@v1
      - uses: jupyterlab/maintainer-tools/.github/actions/make-sdist@v1

  test_sdist:
    runs-on: ubuntu-latest
    needs: [make_sdist]
    name: Install from SDist and Test
    timeout-minutes: 20
    steps:
      - uses: jupyterlab/maintainer-tools/.github/actions/base-setup@v1
      - uses: jupyterlab/maintainer-tools/.github/actions/test-sdist@v1

PR Binder Link

Use this action to add Binder links for testing PRs, which show up as a comment. You can use the optional url_path parameter to use a different URL path than the default (lab). An example workflow would be:

name: Binder Badge
on:
  pull_request_target:
    types: [opened]

jobs:
  binder:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - uses: jupyterlab/maintainer-tools/.github/actions/binder-link@v1
        with:
          github_token: ${{ secrets.github_token }}

PR Script

You can use the PR Script action in your repo along with pull-request-comment-trigger to enable maintainers to comment on PRs to run a script against a pull request. The script can only be run by an org member, collaborator, or repo owner if the association parameter is used (as in the examples below).

Note that the resulting commit will not trigger the workflows to run again. You will have to close/reopen the PR, or push another commit for the workflows to run again. If this behavior is not desirable, you can use a personal access token instead of the default GitHub token provided to the workflow. Make sure the token used is of as limited scope as possible (preferably a bot account token with access to the public_repo scope only).

This first example allows maintainers to run pre-commit by commenting "auto run pre-commit" on a Pull Request.

name: Trigger Pre-Commit on a PR
on:
  issue_comment:
    types: [created]

permissions:
  contents: write
  pull-requests: write

jobs:
  pr-script:
    runs-on: ubuntu-latest
    steps:
      - uses: khan/pull-request-comment-trigger@1.1.0
        id: check
        with:
          trigger: "auto run pre-commit"
      - if: steps.check.outputs.triggered == 'true'
        uses: jupyterlab/maintainer-tools/.github/actions/base-setup@v1
      - if: steps.check.outputs.triggered == 'true'
        uses: jupyterlab/maintainer-tools/.github/actions/pr-script@v1
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          pre_commit: true
          commit_message: "auto run pre-commit"
          target: ${{ github.event.issue.html_url }}
          association: ${{ github.event.comment.author_association }}

In this example, the repo has a custom script that should be run, which is triggered by a PR comment "auto run cleanup". Again, this can only be run by an org member, collaborator, or repo owner.

name: Trigger a Cleanup Script on a PR
on:
  issue_comment:
    types: [created]

permissions:
  contents: write
  pull-requests: write

jobs:
  pr-script:
    runs-on: ubuntu-latest
    steps:
      - uses: khan/pull-request-comment-trigger@1.1.0
        id: check
        with:
          trigger: "auto run cleanup"
      - if: steps.check.outputs.triggered == 'true'
        uses: jupyterlab/maintainer-tools/.github/actions/base-setup@v1
      - if: steps.check.outputs.triggered == 'true'
        uses: jupyterlab/maintainer-tools/.github/actions/pr-script@v1
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          script: '["jlpm run integrity", "jlpm run lint"]'
          commit_message: "auto run cleanup"
          target: ${{ github.event.issue.html_url }}
          association: ${{ github.event.comment.author_association }}

Upload Coverage and Report Coverage

These actions are meant to be used together, to combine coverage data and enforce a coverage threshold. A coverage snapshot will be included in the workflow summary. If coverage is below the threshold, the report-coverage action will fail and upload the HTML report.

name: Tests

on:
  push:
    branches: ["main"]
  pull_request:

jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    steps:
      - uses: actions/checkout@v2
      - uses: jupyterlab/maintainer-tools/.github/actions/base-setup@v1
      - run: |
          pip install -e ".[test]"
          python -m coverage run -m pytest
      - uses: jupyterlab/maintainer-tools/.github/actions/upload-coverage@v1
  coverage_report:
    name: Combine & check coverage
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: jupyterlab/maintainer-tools/.github/actions/report-coverage@v1
        with:
          fail_under: 90

Update snapshots

You can use the update snapshots action to commit updated Playwright snapshots on a branch.

The requirements and constraints are:

  • You must be on the branch to which the snapshots will be committed
  • You must install your project before calling the action
  • The action uses the yarn package manager by default; this can be configured with npm_client
  • The Playwright tests must be in TypeScript or JavaScript

An example workflow that is triggered when a PR comment contains "update playwright snapshots" would be:

name: Update Playwright Snapshots

on:
  issue_comment:
    types: [created, edited]

permissions:
  contents: write
  pull-requests: write

jobs:
  update-snapshots:
    if: ${{ github.event.issue.pull_request && contains(github.event.comment.body, 'update playwright snapshots') }}
    runs-on: ubuntu-latest

    steps:
      - name: React to the triggering comment
        run: |
          hub api repos/${{ github.repository }}/issues/comments/${{ github.event.comment.id }}/reactions --raw-field 'content=+1'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Checkout
        uses: actions/checkout@v2

      - name: Checkout the branch from the PR that triggered the job
        run: |
          # PR branch remote must be checked out using https URL
          git config --global hub.protocol https
          hub pr checkout ${{ github.event.issue.number }}
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Install your project
        run: |
          # Execute the required installation command

      - name: Update snapshots
        uses: jupyterlab/maintainer-tools/.github/actions/update-snapshots@v1
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          # Test folder within your repository
          test_folder: playwright-tests

          # Optional npm scripts (the default values are displayed)
          # Script to start the server or 'null' if Playwright is taking care of it
          #   If not `null`, you must provide a `server_url` to listen to.
          start_server_script: start
          # Server url to wait for before updating the snapshots
          #  See specification for https://github.com/iFaxity/wait-on-action `resource`
          server_url: http-get://localhost:8888
          update_script: test:update

Contributors

3coins, blink1073, brichet, dependabot[bot], fcollonval, gabalafou, github-actions[bot], jasonweill, jtpio, krassowski, max-schaefer, ophie200, pre-commit-ci[bot], tiagodepalves


maintainer-tools's Issues

Add the ability to do a shallow checkout

Problem

Checkouts can be slow for larger repos like JupyterLab.

Proposed Solution

We should add an option to do a shallow checkout and make it the default. You shouldn't need a full checkout unless you are manipulating tags or otherwise using git history.

Enforce Label Action Does Not Properly Delay

Description

The enforce label action is using the original github context to check the label, which means that our 60 second delay is having no effect. We need to write our own GitHub Script that gets the current set of labels.

Reproduce

  1. Create a ChangeLog PR using Jupyter Releaser.
  2. Releaser adds a "documentation" label after creating the PR.
  3. The Enforce Label action fails.

Expected behavior

The enforce label workflow should pass.

When snapshot update fails, not only assets but also report should be uploaded

Problem

Snapshot tests on my PR are failing before the snapshots are even taken, and I cannot reproduce this locally. The PR job contains both assets and snapshots, but the update job contains only assets; compare the run at https://github.com/jupyterlab/jupyterlab/actions/runs/4682777719?pr=14356 with https://github.com/jupyterlab/jupyterlab/actions/runs/4682788709 (screenshots omitted).

Proposed Solution

Upload both (assets and report) on failure.

Additional context

In the lab repo we use:

      - name: Upload Galata Test assets
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: jupyterlab-galata-test-assets
          path: |
            galata/test-jupyterlab-results
            galata/test-results

      - name: Upload Galata Test report
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: jupyterlab-galata-report
          path: |
            galata/playwright-report

test-sdist is run with BDist files

Description

The GH Action test-sdist currently installs the SDist package (line 30) before testing. However, this installation step generates a BDist (from the SDist) and installs that instead (you can see it happening in this failed job).

Problem

Testing a wheel is problematic because the project may remove test files from binaries, making the test step impossible. I found this issue when doing exactly that in jupyter/nbconvert#1822.

Possible Solution

I'm not too versed in pip, but installing in --editable/-e mode does keep the source files intact, so it could be inserted directly into the pip invocation (line 30). If forcing editable mode is not desired, only the default package_spec (line 6) could be modified.

New version 0.25.1 breaks JupyterLab CI

https://github.com/jupyterlab/jupyterlab/actions/runs/8538604184/job/23391600562?pr=16078

Install from SDist and Test:

+ uv pip compile pyproject.toml -o /home/runner/constraints.txt
error: failed to read from file `pyproject.toml`
  Caused by: No such file or directory (os error 2)

Test Minimum Versions:

Error: Can't find 'action.yml', 'action.yaml' or 'Dockerfile' for action 'jupyterlab/maintainer-tools/.github/actions/install-minimums@v1'.

Windows (integrity) and Windows (python):

Run .\scripts\ci_install

    Directory: C:\Users\runneradmin

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d----            4/3/2024 12:24 PM                .jupyter
ERROR: Could not open requirements file: [Errno 2] No such file or directory: '/c/Users/runneradmin/constraints.txt'
Exception: D:\a\jupyterlab\jupyterlab\scripts\ci_install.ps1:12
Line |
  12 |  … TCODE -ne 0) { throw "Command failed. See above errors for details" }
     |                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     | Command failed. See above errors for details
Error: Process completed with exit code 1.

Upvote comments asking for actions

When asking the bot to perform a snapshot update or backport, it would help if it reacted with a thumbs-up on the comment that triggered the action. This is what dependabot does when asked to rebase.
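
A minimal sketch of adding such a reaction through the GitHub REST API (the environment variable names here are placeholders for values a workflow would supply):

import os

import requests

token = os.environ["GITHUB_TOKEN"]
repo = os.environ["GITHUB_REPOSITORY"]  # e.g. "jupyterlab/maintainer-tools"
comment_id = os.environ["COMMENT_ID"]   # id of the comment that triggered the action

# React with a thumbs-up on the triggering comment.
response = requests.post(
    f"https://api.github.com/repos/{repo}/issues/comments/{comment_id}/reactions",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    },
    json={"content": "+1"},
)
response.raise_for_status()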

Use Jupyter Releaser

Problem

We will want to easily keep this repo and its v1 tag up to date.

Proposed Solution

We should adopt Jupyter Releaser and copy the pattern it uses to update its v1 tag.

Follow [extras] in install-minimums

Problem

Some packages put rather important functionality behind [extras], and these are not picked up by install-minimums, which can leave tests in a broken state if the extras are not specified directly.

Proposed Solution

Alternatives

  • document options for install-minimums users
    • avoid using [extras]
    • re-capture the upstream's [extra] as another extra

Additional context

  • example log for a PR that exists mostly to opt in to jsonschema[format-nongpl].

More verbose `check-links` to find broken links more easily

Problem

By default the check-links action fails without much information when there is a broken link.

Example run: https://github.com/jupyter/notebook/actions/runs/3570433572/jobs/6001397371


Proposed Solution

Ideally the broken link(s) should be printed in the logs so it's easier to find and fix them.

Additional context

Noticed in https://github.com/jupyter/notebook/actions/runs/3570433572/jobs/6001397371 on https://github.com/jupyter/notebook

Add a minimum version test

Problem

We would like to be able to test against the minimum versions we claim to support. There is discussion about adding this to pip, but it has stalled.

Proposed Solution

We can add an action that does the following:

  • Creates a wheel using python -m build --wheel
  • Uses pkginfo to parse the wheel file to get its requirements, e.g. (Wheel("./dist/jupyter_server-1.14.0.dev0-py3-none-any.whl").requires_dist)
  • Parses the requirements using packaging, looking for ~ or > specifiers, e.g.:

from packaging.requirements import Requirement

r = Requirement('anyio (>=3.1.0)')
if not r.extras:
    for specifier in r.specifier:
        if '~' in specifier.operator or '>' in specifier.operator:
            pass  # save as a constraint
  • Generate a constraints file
  • Install the current package using the constraints file, with optional extras (defaults to test). A rough sketch combining these steps follows this list.
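
A minimal sketch of those steps, assuming a wheel has already been built into ./dist (the output path and the handling of extras here are illustrative):

import glob

from packaging.requirements import Requirement
from pkginfo import Wheel

# Parse the requirements recorded in the freshly built wheel.
wheel = Wheel(glob.glob("dist/*.whl")[0])

constraints = []
for dist in wheel.requires_dist:
    req = Requirement(dist)
    if req.extras:
        continue
    for specifier in req.specifier:
        if "~" in specifier.operator or ">" in specifier.operator:
            # Pin the dependency to its declared floor version.
            constraints.append(f"{req.name}=={specifier.version}")

with open("constraints.txt", "w") as fh:
    fh.write("\n".join(constraints))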

Additional context

cf jupyter-server/jupyter_server#678 where we discussed adding this capability to jupyter_server itself.

Fix the way downstream tests work

Description

We are currently assuming that the tests ship with the package and can be run from site-packages.
We are also re-using the virtual environment.

Reproduce

cf https://github.com/jupyter-server/jupyter_server/runs/5527570612

Expected behavior

We should have test isolation and test the project in-place.

Context

We need to remove the virtual env directory between each run. We need to download the sdist, install it in editable mode, and run the tests from that folder.

Add ability to auto-merge PRs

Problem

We often approve PRs and then have to monitor until CI passes before merging.

Proposed Solution

Add an action and example workflow that waits for a check suite to finish, and also checks for an automerge label on the pull request. If true, it merges the PR. We would use the action on this repo as well. Note that for repos like JupyterLab, which allow anyone to apply labels by invoking meeseeksdev, branch protection rules should be in effect to ensure at least one approval, and potentially have approvals revoked when subsequent changes are pushed. In general repos should be using that 😄.

Additional context

Conda-forge has a similar action available using their bot. This could also be implemented as part of jupyterlab-probot, but then it is not as widely useful since not all orgs want to/can install that bot.

Make PR Script an Action and Show Its Usage

Problem

We should be able to trigger a PR script using a comment. That way it is transparent when it is triggered and is easier for the maintainer to invoke.

Proposed Solution

We can make the PR script into an action, and show an example of triggering it using https://github.com/Khan/pull-request-comment-trigger. The action is run using the commenter's credentials. This can also be a general pattern of issue comments that trigger actions.

Add Maintainability Audit Checklist for Repos

Problem

It is hard to standardize best practices for the large number of Jupyter repos.

Proposed Solution

Add maintainability audit checklist to maintainer-tools

  • Point to example implementation PRs and current config files as appropriate
  • Advise annual audit for updated list and current best practices - upgrade version tags, etc.

Ideas to include:

  • Use Jupyter Releaser
  • Triage Label enforcement
  • Use "Base Setup" GitHub Action
  • Cron job for daily builds
  • Precommit setup
    • Add minimal pre-commit file without auto-formatters
    • Enable and run the auto-formatters as a separate commit
    • Merge the PR
    • Add a new PR that adds the .git-blame-ignore-revs file
    • Add a new PR for flake8 and/or eslint
    • Run autoformatters on backport branches to make backporting easier
  • ReadTheDocs setup with PR builds
  • Pydata theme with MyST
  • Example binder and PR commenter using a workflow or jupyterlab-probot config
  • 4 kinds of docs
  • Coverage shown on PRs
  • Coverage thresholds enforced in pytest
  • Tests for downstream libraries
  • CI job minimum dependency versions
  • CI job against prerelease dependency versions
  • Test with warnings as errors
  • Test with warnings as errors in docs
  • No upper version constraints on dependencies
  • Include tests in sdist but not wheel - tests should be at the top level of the repo and excluded when using find_packages
  • Run the test suite on the conda-forge recipe
  • Run pre-commit autoupdate or use pre-commit.ci
  • Add .github/dependabot.yml file with weekly updates
  • Consider adding mypy - copy pip config, add py.typed file
  • Use dev requirements for anything that isn't strictly required to run tests, e.g. pre-commit, and recommend using pip install -e ".[dev,test]"

Perhaps tie this to an annual update of supported Python versions.

Support for other types of repos for the PR script

Problem

At the moment the PR script runs the following command which is specific to Python repos:

run("pip install -e '.[test]'")

Proposed Solution

This could be similar to jupyter_releaser, choosing which command to run based on the existence of pyproject.toml / setup.py / package.json.
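
A rough sketch of that kind of detection (the command strings here are illustrative, not the actual releaser logic):

from pathlib import Path


def install_command() -> str:
    # Choose an install command based on the files present in the repository.
    if Path("pyproject.toml").exists() or Path("setup.py").exists():
        return "pip install -e '.[test]'"
    if Path("package.json").exists():
        return "jlpm install"
    raise RuntimeError("Unrecognized repository type")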

Additional context

This would make the PR script more generic.

Add an Action for Workflow Prep

Problem

We have a lot of duplication across our workflow files for things like installing and configuring Python, caching, etc.

Proposed Solution

Provide a base action here that is similar to the one used in jupyter_releaser

Custom commit message when running a PR script

Problem

Right now the PR script workflow creates a new commit with "Run maintainer script" as the message.

It would be nice to be able to pass a custom commit message as part of the workflow inputs. For example "Lint" when running a lint script.

Proposed Solution

Probably we can pass the input down to the script, so it can be picked up here:

run(f"git commit -a -m 'Run maintainer script' -m 'by {maintainer}' -m '{json.dumps(script)}'")

Additional context

Example commit currently generated: jupyterlab/jupyterlab@6971c03

Add a Workflow to Create a PR and Run a Script

Follow up to #1.

We could also have a workflow that takes an arbitrary command and opens a new PR with the changes (could be under the maintainer GitHub name).

For example this would be useful to update dependencies with commands like jlpm run update:dependency --regex @lumino/ latest

Feedback on the new resolution mechanism

  1. It looks like this is going to break a few more packages. I just tried running CI on jupyter-server and see this:
+ '[' Linux == Windows ']'
+ '[' -f pyproject.toml ']'
+ uv pip compile --resolution=lowest-direct pyproject.toml -o /home/runner/constraints.txt
error: Failed to download and build: overrides==0.1
  Caused by: Failed to build: overrides==0.1
  Caused by: Build backend failed to determine extra requires with `build_wheel()` with exit status: 1
--- stdout:

Also many other jobs (e.g. https://github.com/krassowski/jupyter_server/actions/runs/8542681567/job/23404855225) seem to be failing with:

E       TypeError: websocket_connect() got an unexpected keyword argument 'body'

but this might be unrelated

  2. It installs the lowest version possible when no floor version is defined:
dependencies = [
    "async_lru>=1.0.0",
    "httpx>=0.25.0",
    "importlib-metadata>=4.8.3;python_version<\"3.10\"",
    "importlib-resources>=1.4;python_version<\"3.9\"",
    "ipykernel",
    "jinja2>=3.0.3",
    "jupyter_core",
    "jupyter_server>=2.4.0,<3",
    "jupyter-lsp>=2.0.0",
    "jupyterlab_server>=2.19.0,<3",
    "notebook_shim>=0.2",
    "packaging",
    "tomli;python_version<\"3.11\"",
    "tornado>=6.2.0",
    "traitlets",
]

Results in:

+ uv pip compile --resolution=lowest-direct pyproject.toml -o /home/runner/constraints.txt
error: Failed to download and build: traitlets==0.0.1
  Caused by: Failed to build: traitlets==0.0.1
  Caused by: Build backend failed to determine extra requires with `build_wheel()` with exit status: 1
  3. It pins to the oldest available version even if other dependencies of dependencies have a pin (after pinning traitlets to something modern):
+ uv pip compile --resolution=lowest-direct pyproject.toml -o /home/runner/constraints.txt
Resolved 92 packages in 1.07s
# This file was autogenerated by uv via the following command:
#    uv pip compile --resolution=lowest-direct pyproject.toml -o /home/runner/constraints.txt
anyio==4.3.0
    # via
    #   httpcore
    #   jupyter-server
argon2-cffi==23.1.0
    # via jupyter-server
argon2-cffi-bindings==21.2.0
    # via argon2-cffi
arrow==1.3.0
    # via isoduration
asttokens==2.4.1
    # via stack-data
async-lru==1.0.0
attrs==23.2.0
    # via
    #   jsonschema
    #   referencing
babel==2.14.0
    # via jupyterlab-server
backcall==0.2.0
    # via ipython
beautifulsoup4==4.12.3
    # via nbconvert
bleach==6.1.0
    # via nbconvert
certifi==2024.2.2
    # via
    #   httpcore
    #   httpx
    #   requests
cffi==1.16.0
    # via argon2-cffi-bindings
charset-normalizer==3.3.2
    # via requests
decorator==5.1.1
    # via ipython
defusedxml==0.7.1
    # via nbconvert
exceptiongroup==1.2.0
    # via anyio
executing==2.0.1
    # via stack-data
fastjsonschema==2.19.1
    # via nbformat
fqdn==1.5.1
    # via jsonschema
h11==0.14.0
    # via httpcore
httpcore==0.18.0
    # via httpx
httpx==0.25.0
idna==3.6
    # via
    #   anyio
    #   httpx
    #   jsonschema
    #   requests
importlib-metadata==4.8.3
    # via
    #   jupyter-client
    #   jupyter-lsp
    #   jupyterlab-server
    #   nbconvert
importlib-resources==1.4.0
    # via
    #   jsonschema
    #   jsonschema-specifications
ipykernel==4.0.1
ipython==8.12.3
    # via ipykernel
isoduration==20.11.0
    # via jsonschema
jedi==0.19.1
    # via ipython
jinja2==3.0.3
    # via
    #   jupyter-server
    #   jupyterlab-server
    #   nbconvert
json5==0.9.24
    # via jupyterlab-server
jsonpointer==2.4
    # via jsonschema
jsonschema==4.21.1
    # via
    #   jupyter-events
    #   jupyterlab-server
    #   nbformat
jsonschema-specifications==2023.12.1
    # via jsonschema
jupyter-client==8.6.1
    # via
    #   ipykernel
    #   jupyter-server
    #   nbclient
jupyter-core==4.12.0
    # via
    #   jupyter-client
    #   jupyter-server
    #   nbclient
    #   nbconvert
    #   nbformat
jupyter-events==0.10.0
    # via jupyter-server
jupyter-lsp==2.0.0
jupyter-server==2.4.0
    # via
    #   jupyter-lsp
    #   jupyterlab-server
    #   notebook-shim
jupyter-server-terminals==0.5.3
    # via jupyter-server
jupyterlab-pygments==0.3.0
    # via nbconvert
jupyterlab-server==2.19.0
markupsafe==2.1.5
    # via
    #   jinja2
    #   nbconvert
matplotlib-inline==0.1.6
    # via ipython
mistune==3.0.2
    # via nbconvert
nbclient==0.10.0
    # via nbconvert
nbconvert==7.16.3
    # via jupyter-server
nbformat==5.10.3
    # via
    #   jupyter-server
    #   nbclient
    #   nbconvert
notebook-shim==0.2.0
packaging==21.3
    # via
    #   jupyter-server
    #   jupyterlab-server
    #   nbconvert
pandocfilters==1.5.1
    # via nbconvert
parso==0.8.3
    # via jedi
pexpect==4.9.0
    # via ipython
pickleshare==0.7.5
    # via ipython
pkgutil-resolve-name==1.3.10
    # via jsonschema
prometheus-client==0.20.0
    # via jupyter-server
prompt-toolkit==3.0.43
    # via ipython
ptyprocess==0.7.0
    # via
    #   pexpect
    #   terminado
pure-eval==0.2.2
    # via stack-data
pycparser==2.22
    # via cffi
pygments==2.17.2
    # via
    #   ipython
    #   nbconvert
pyparsing==3.1.2
    # via packaging
python-dateutil==2.9.0.post0
    # via
    #   arrow
    #   jupyter-client
python-json-logger==2.0.7
    # via jupyter-events
pytz==2024.1
    # via babel
pyyaml==6.0.1
    # via jupyter-events
pyzmq==25.1.2
    # via
    #   jupyter-client
    #   jupyter-server
referencing==0.34.0
    # via
    #   jsonschema
    #   jsonschema-specifications
    #   jupyter-events
requests==2.31.0
    # via jupyterlab-server
rfc3339-validator==0.1.4
    # via
    #   jsonschema
    #   jupyter-events
rfc3986-validator==0.1.1
    # via
    #   jsonschema
    #   jupyter-events
rpds-py==0.18.0
    # via
    #   jsonschema
    #   referencing
send2trash==1.8.2
    # via jupyter-server
six==1.16.0
    # via
    #   asttokens
    #   bleach
    #   python-dateutil
    #   rfc3339-validator
sniffio==1.3.1
    # via
    #   anyio
    #   httpcore
    #   httpx
soupsieve==2.5
    # via beautifulsoup4
stack-data==0.6.3
    # via ipython
terminado==0.18.1
    # via
    #   jupyter-server
    #   jupyter-server-terminals
tinycss2==1.2.1
    # via nbconvert
tomli==0.2.0
tornado==6.2
    # via
    #   jupyter-client
    #   jupyter-server
    #   terminado
traitlets==5.6.0
    # via
    #   ipykernel
    #   ipython
    #   jupyter-client
    #   jupyter-core
    #   jupyter-events
    #   jupyter-server
    #   matplotlib-inline
    #   nbclient
    #   nbconvert
    #   nbformat
types-python-dateutil==2.9.0.20240316
    # via arrow
typing-extensions==4.10.0
    # via
    #   anyio
    #   ipython
uri-template==1.3.0
    # via jsonschema
urllib3==2.2.1
    # via requests
wcwidth==0.2.13
    # via prompt-toolkit
webcolors==1.13
    # via jsonschema
webencodings==0.5.1
    # via
    #   bleach
    #   tinycss2
websocket-client==1.7.0
    # via jupyter-server
zipp==3.18.1
    # via importlib-metadata
/scripts/ci_install.sh: line 19: yarn: command not found
+ mkdir -p /home/runner/.jupyter
+ git config --global user.name foo
+ git config --global user.email [email protected]
+ pip install -q --upgrade pip --user
+ pip --version
pip 24.0 from /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/pip (python 3.8)
+ pip install -e '.[dev,test]'
Obtaining file:///home/runner/work/jupyterlab/jupyterlab
  Installing build dependencies: started
  Installing build dependencies: finished with status 'error'
  error: subprocess-exited-with-error
  
  × pip subprocess to install build dependencies did not run successfully.
  │ exit code: 1
  ╰─> [29 lines of output]
      Collecting hatchling>=1.21.1
        Using cached hatchling-1.22.4-py3-none-any.whl.metadata (3.8 kB)
      Collecting packaging>=21.3 (from hatchling>=1.21.1)
        Downloading packaging-21.3-py3-none-any.whl.metadata (15 kB)
      Collecting pathspec>=0.10.1 (from hatchling>=1.21.1)
        Using cached pathspec-0.12.1-py3-none-any.whl.metadata (21 kB)
      Collecting pluggy>=1.0.0 (from hatchling>=1.21.1)
        Using cached pluggy-1.4.0-py3-none-any.whl.metadata (4.3 kB)
      INFO: pip is looking at multiple versions of hatchling to determine which version is compatible with other requirements. This could take a while.
      Collecting hatchling>=1.21.1
        Downloading hatchling-1.22.3-py3-none-any.whl.metadata (3.8 kB)
        Downloading hatchling-1.22.2-py3-none-any.whl.metadata (3.8 kB)
        Using cached hatchling-1.21.1-py3-none-any.whl.metadata (3.8 kB)
      Collecting editables>=0.3 (from hatchling>=1.21.1)
        Using cached editables-0.5-py3-none-any.whl.metadata (3.1 kB)
      ERROR: Cannot install hatchling==1.21.1, hatchling==1.22.2, hatchling==1.22.3 and hatchling==1.22.4 because these package versions have conflicting dependencies.
      
      The conflict is caused by:
          hatchling 1.22.4 depends on tomli>=1.2.2; python_version < "3.11"
          hatchling 1.22.3 depends on tomli>=1.2.2; python_version < "3.11"
          hatchling 1.22.2 depends on tomli>=1.2.2; python_version < "3.11"
          hatchling 1.21.1 depends on tomli>=1.2.2; python_version < "3.11"
          The user requested (constraint) tomli==0.2.0
      
      To fix this you could try to:
      1. loosen the range of package versions you've specified
      2. remove package versions to allow pip attempt to solve the dependency conflict
      
      ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
+ pip install -v -e '.[dev,test]'
Using pip 24.0 from /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/pip (python 3.8)
Obtaining file:///home/runner/work/jupyterlab/jupyterlab
  Installing build dependencies: started
  Running command pip subprocess to install build dependencies
  Collecting hatchling>=1.21.1
    Using cached hatchling-1.22.4-py3-none-any.whl.metadata (3.8 kB)
  Collecting packaging>=21.3 (from hatchling>=1.21.1)
    Using cached packaging-21.3-py3-none-any.whl.metadata (15 kB)
  Collecting pathspec>=0.10.1 (from hatchling>=1.21.1)
    Using cached pathspec-0.12.1-py3-none-any.whl.metadata (21 kB)
  Collecting pluggy>=1.0.0 (from hatchling>=1.21.1)
    Using cached pluggy-1.4.0-py3-none-any.whl.metadata (4.3 kB)
  INFO: pip is looking at multiple versions of hatchling to determine which version is compatible with other requirements. This could take a while.
  Collecting hatchling>=1.21.1
    Using cached hatchling-1.22.3-py3-none-any.whl.metadata (3.8 kB)
    Using cached hatchling-1.22.2-py3-none-any.whl.metadata (3.8 kB)
    Using cached hatchling-1.21.1-py3-none-any.whl.metadata (3.8 kB)
  Collecting editables>=0.3 (from hatchling>=1.21.1)
    Using cached editables-0.5-py3-none-any.whl.metadata (3.1 kB)
  ERROR: Cannot install hatchling==1.21.1, hatchling==1.22.2, hatchling==1.22.3 and hatchling==1.22.4 because these package versions have conflicting dependencies.
  The conflict is caused by:
      hatchling 1.22.4 depends on tomli>=1.2.2; python_version < "3.11"
      hatchling 1.22.3 depends on tomli>=1.2.2; python_version < "3.11"
      hatchling 1.22.2 depends on tomli>=1.2.2; python_version < "3.11"
      hatchling 1.21.1 depends on tomli>=1.2.2; python_version < "3.11"
      The user requested (constraint) tomli==0.2.0
  To fix this you could try to:
  1. loosen the range of package versions you've specified
  2. remove package versions to allow pip attempt to solve the dependency conflict
  ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
  error: subprocess-exited-with-error

Encode branch name when creating a Binder link

Currently the PR_HEAD_REF is taken as is and passed to the Binder link:

var PR_HEAD_REF = process.env.PR_HEAD_REF;
var URL_PATH = process.env.URL_PATH;
var BODY = `[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/${PR_HEAD_USERREPO}/${PR_HEAD_REF}?urlpath=${URL_PATH}) :point_left: Launch a Binder on branch _${PR_HEAD_USERREPO}/${PR_HEAD_REF}_`;

However this can create issues when a branch has a special character, for example #, as noticed in jupyter/notebook#7178. The Binder link becomes https://mybinder.org/v2/gh/haok1402/notebook/issue#7147?urlpath=tree, but Binder fails to build it:


However Binder can build the repo fine if the special character is URL encoded: https://mybinder.org/v2/gh/haok1402/notebook/issue%237147


So this binder-link action should probably try to encode the PR_HEAD_REF before creating the link.
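
In the action's JavaScript, encodeURIComponent would do this; the equivalent transformation, sketched in Python for clarity:

from urllib.parse import quote

pr_head_userrepo = "haok1402/notebook"
pr_head_ref = "issue#7147"

# Percent-encode the branch name so special characters such as "#" survive in the URL.
binder_url = (
    f"https://mybinder.org/v2/gh/{pr_head_userrepo}/{quote(pr_head_ref, safe='')}?urlpath=tree"
)
print(binder_url)  # https://mybinder.org/v2/gh/haok1402/notebook/issue%237147?urlpath=tree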

Add Action to Enforce Labels for PRs

Problem

We often have to go back and re-label PRs that are missing the appropriate labels.

Proposed Solution

We could include an action here that wraps enforce-label-action with the labels required by github-activity and document how to use it on other repos that use Jupyter Releaser.

Add Binder Badge Action

Problem

Not every org wants to use jupyterlab-probot to add binder links.

Proposed Solution

Provide an action here that accepts an optional urlpath.

Add Action to Test Downstream Package

Problem

We have a pattern of testing downstream packages, but it is currently brittle. We are installing in the same base environment, and the testing dependencies may differ between the downstream package(s) and/or the package under test.

Proposed Solution

We can provide an action here that has the following inputs:

package_name: required
package_version: optional
test_command: optional
env_values: optional

It will:

  • Create a virtual environment
  • Install the given package in the test environment with the given version specification, with the default being pip install --pre -U --upgrade-strategy=only-if-needed <package>
  • Install the package under test (not test dependencies) in the test environment using pip install . --force-reinstall
  • Run the test command, by default pytest --pyargs <package_name> with the given environment values

Min Version Check Should Handle Greater Than

Description

If a dependency includes the > directive, we currently convert it to == here. This is technically incorrect; we should instead find the next available version greater than that bound and constrain to that version.

Reproduce

Depend on a >, see example failure.

Expected behavior

We get the correct min version.

Context

We can use the PyPI JSON API to get the right version.

import requests
data = requests.get('https://pypi.org/pypi/notebook/json').json()
versions = list(data['releases'])
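
Building on that, a rough sketch of resolving a > bound to the next available release (the package name and specifier here are just examples):

import requests
from packaging.specifiers import SpecifierSet
from packaging.version import Version


def min_matching_version(package: str, spec: str) -> str:
    # Query the PyPI JSON API for every released version of the package.
    data = requests.get(f"https://pypi.org/pypi/{package}/json").json()
    versions = [Version(v) for v in data["releases"]]
    # Keep only the releases that satisfy the specifier, then take the smallest.
    matching = sorted(v for v in versions if v in SpecifierSet(spec))
    return str(matching[0])


print(min_matching_version("notebook", ">6.0"))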

Base Setup Should Accept Python and Node Versions

Problem

The only way to set the versions is using a matrix. For workflows that don't otherwise need a matrix, we should be able to provide these versions as inputs.

Proposed Solution

Add inputs to the base setup, add docs, and use the input in the minimum versions example.
