davhau / pypi-crawlers

Collection of crawlers to mine pypi packages metadata and their dependencies

License: MIT License

This project has been moved to mach-nix/pypi-crawlers
The URL crawler needs to be extended to also save the URL and sha256 of the wheel packages we are interested in. Currently only one release file per candidate is allowed. The data structure of the resulting json must be extended to allow multiple releases. Each candidate could contain another key-value mapping where the key is a release type (like cp38-cp38-manylinux1_x86_64.whl) and the values are url + sha256, as sketched below.
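A minimal sketch of what such a structure could look like, written as a Python dict (key names like "sdist" are illustrative assumptions, not a final schema):

```python
# Proposed per-candidate structure: one entry per release type,
# each carrying the url and sha256 of that release file.
candidate = {
    "sdist": {
        "url": "https://files.pythonhosted.org/packages/<hash>/scipy-1.4.1.tar.gz",
        "sha256": "<sha256 of the sdist>",
    },
    "cp38-cp38-manylinux1_x86_64.whl": {
        "url": "https://files.pythonhosted.org/packages/<hash>/scipy-1.4.1-cp38-cp38-manylinux1_x86_64.whl",
        "sha256": "<sha256 of the wheel>",
    },
}
```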
Interesting PEPs:
https://www.python.org/dev/peps/pep-0425/
https://www.python.org/dev/peps/pep-0427/
https://www.python.org/dev/peps/pep-0491/
An example package is available on pypi here: https://pypi.org/project/scanimage-tiff-reader/
We need to save this information to be able to decide whether a pypi none-any wheel package is suited for a specific python version.
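A rough sketch of how such a check could work, based on the PEP-425 tag format (the helper names are hypothetical, not existing crawler code):

```python
# Wheel filenames follow PEP 425/427:
# {distribution}-{version}(-{build})?-{python}-{abi}-{platform}.whl
def wheel_tags(filename: str):
    """Split a wheel filename into its (python, abi, platform) tags."""
    parts = filename[:-len(".whl")].split("-")
    return tuple(parts[-3:])

def none_any_suits(filename: str, major: int, minor: int) -> bool:
    """Decide whether a *-none-any wheel's python tag accepts CPython major.minor."""
    python_tag, abi_tag, platform_tag = wheel_tags(filename)
    if (abi_tag, platform_tag) != ("none", "any"):
        return False
    accepted = {f"py{major}", f"py{major}{minor}", f"cp{major}{minor}"}
    # a python tag may list several alternatives, e.g. "py2.py3"
    return bool(set(python_tag.split(".")) & accepted)

assert none_any_suits("requests-2.23.0-py2.py3-none-any.whl", 3, 7)
```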
```nix
let
  mach-nix = import (builtins.fetchGit {
    url = "https://github.com/DavHau/mach-nix/";
    ref = "refs/heads/master";
  });
  py_env = mach-nix.mkPython rec {
    python = mach-nix.nixpkgs.python37;
    requirements = ''
      mariadb
    '';
  };
in
(py_env.override (args: { ignoreCollisions = true; })).env
```
leads to:

```
The Package 'mariadb' is not available from any of the selected providers {'wheel', 'nixpkgs', 'sdist'}
```
To support wheels, the dependency crawler must be extended. It must be able to extract dependency information from wheel releases; there should be enough examples around on how to do that. Importantly, we do not yet evaluate the markers which might be included in the dependency definitions. Instead we want to save them (see the sketch below).
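A rough sketch of such an extraction (not the crawler's actual code): a wheel is a zip archive whose .dist-info/METADATA file lists one Requires-Dist header per dependency, with the environment marker, if any, after a semicolon:

```python
import zipfile
from email.parser import Parser

def wheel_requires_dist(wheel_path: str) -> list:
    """Return the raw Requires-Dist strings of a wheel, markers included."""
    with zipfile.ZipFile(wheel_path) as zf:
        meta_name = next(n for n in zf.namelist()
                         if n.endswith(".dist-info/METADATA"))
        metadata = Parser().parsestr(zf.read(meta_name).decode())
    # keep each requirement verbatim, e.g. "enum34 ; python_version < '3.4'",
    # so the markers are stored instead of being evaluated here
    return metadata.get_all("Requires-Dist") or []
```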
The SQL database schema defined in db.py must be adapted to allow differentiation between release types. I suggest simply adding another column, release_type or something similar.
dump_deps.py must be extended to use a new format for its json dump, allowing multiple releases per candidate (see the sketch below).
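One conceivable shape for the new dump, written as a Python dict (all key names and example values here are illustrative assumptions, not the final format):

```python
# Dependencies keyed by release type instead of one flat list per candidate;
# markers stay verbatim inside the requirement strings.
dump_entry = {
    "some-package": {
        "1.0.0": {
            "sdist": {
                "install_requires": ["requests"],
                "setup_requires": ["setuptools-scm"],
            },
            "py3-none-any.whl": {
                "requires_dist": ["requests ; python_version >= '3'"],
            },
        }
    }
}
```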
Interesting PEPs:
https://www.python.org/dev/peps/pep-0425/
https://www.python.org/dev/peps/pep-0427/
https://www.python.org/dev/peps/pep-0491/
URLs can be computed; they don't need to be stored, or at least not fully. Currently the URL crawler saves the full URLs in the dataset and therefore blows up the download size of nix-pypi-fetcher.
Example of a URL:
https://files.pythonhosted.org/packages/04/ab/e2eb3e3f90b9363040a3d885ccc5c79fe20c5b8a3caa8fe3bf47ff653260/scipy-1.4.1.tar.gz
The following URL is equivalent:
https://pypi.org/packages/source/s/scipy/scipy-1.4.1.tar.gz
The logic for computing URLs for sdist releases (like the one above) seems to be:
https://pypi.io/packages/source/{ name[0] }/{ name }/{ name }-{ version }.{ tar.gz | tgz | tar.bz2 | zip }
We can probably not compute the file ending and therefore need to store it. Also, I'm not sure whether the format of the whole filename is enforced by pypi at all. If we don't find a specification for this, we had better just store the whole filename.
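A minimal sketch of that scheme, assuming we store only the file ending (the function name is hypothetical):

```python
def sdist_url(name: str, version: str, ext: str) -> str:
    """Rebuild the sdist download URL from name, version and stored file ending."""
    return (f"https://pypi.io/packages/source/"
            f"{name[0]}/{name}/{name}-{version}.{ext}")

assert sdist_url("scipy", "1.4.1", "tar.gz") == \
    "https://pypi.io/packages/source/s/scipy/scipy-1.4.1.tar.gz"
```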
The logic for computing wheel URLs seems to be:
https://pypi.org/packages/{ python_version }/{ name[0] }/{ name }/{ fn }
Example of a wheel URL:
pypi.org/packages/cp37/s/scipy/scipy-1.4.1-cp37-cp37m-manylinux1_x86_64.whl
Here too the filename is non-trivial. We might be able to compute it, but only with the help of additional metadata which we would need to store. Therefore we might just store the filename itself.
PEP-427 specifies the filename format, but I'm not sure if it is enforced.
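A minimal sketch of the wheel URL scheme, assuming the full filename is stored as suggested above (the function name is hypothetical):

```python
def wheel_url(name: str, python_tag: str, filename: str) -> str:
    """Rebuild the wheel download URL from the stored filename and python tag."""
    return (f"https://pypi.org/packages/"
            f"{python_tag}/{name[0]}/{name}/{filename}")

assert wheel_url("scipy", "cp37",
                 "scipy-1.4.1-cp37-cp37m-manylinux1_x86_64.whl") == \
    ("https://pypi.org/packages/cp37/s/scipy/"
     "scipy-1.4.1-cp37-cp37m-manylinux1_x86_64.whl")
```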
In parallel, the nix expression of nix-pypi-fetcher itself must be modified to be able to generate the same URLs: DavHau/nix-pypi-fetcher#2
Some libraries declare their dependencies in a pyproject.toml file. The format is specified in PEP-518.
Currently this is not respected by the dependency extraction. It therefore fails on libraries like scipy (see issue DavHau/mach-nix#5), trying to execute their setup.py without the necessary minimum requirements installed.
The good thing about PEP-518 is that it is cheap to parse. The bad thing is that it doesn't necessarily give us the full list of setup+install requirements. According to the specification, only the minimum build system requirements need to be specified there, meaning that more requirements can be specified by the build backend which is executed later.
Therefore, to get the full list of requirements, we would need to:
1. Parse pyproject.toml to get the minimum build dependencies (see the sketch below).
2. Run the usual dependency extraction inside an environment containing these minimum build dependencies; mach-nix could be used to build the python environment with the minimum build dependencies.
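A minimal sketch of step 1, assuming the third-party toml package (the stdlib tomllib only exists on Python >= 3.11):

```python
import toml  # third-party package; PEP 518 files are plain TOML

def build_requires(pyproject_path: str) -> list:
    """Read the minimum build system requirements per PEP 518."""
    data = toml.load(pyproject_path)
    # PEP 518: [build-system] requires = ["setuptools>=40", "wheel", ...]
    return data.get("build-system", {}).get("requires", [])
```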
Currently the extraction process is defined as a single nix derivation in ./src/extractor/default.nix which takes the url and sha256 of an sdist package and dumps a json containing the dependencies to $out.
To support PEP-518 we would need to separate this into two distinct derivations:
1. One which extracts the minimum build requirements according to PEP-518 and returns them as an artifact.
2. One which runs the usual dependency extraction with these minimal build requirements added to buildInputs.