agdc's Issues

numpy version 1.9 or higher required

The installation documentation should state that numpy 1.9 or higher is required, because the nanpercentile function (used in agdc) is not available in earlier numpy releases.
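
For illustration (not part of the agdc codebase): np.nanpercentile first appeared in numpy 1.9, so on older releases the attribute simply does not exist and an AttributeError is raised.

    import numpy as np

    # Percentile computed while ignoring NaN values -- the behaviour agdc relies on.
    data = np.array([[1.0, np.nan, 3.0],
                     [4.0, 5.0, np.nan]])

    print(np.nanpercentile(data, 90, axis=0))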

Set up new workflow

There seems to be consensus about using the new 'gitflow' workflow, so I am going to set it up. I am writing some documentation for this, but further information can be found here and here.

As soon as Alex pushes his latest changes I am proposing to do the following:

  1. Tag the head of the master branch as 0.1.0 with message 'Initial Release'.
  2. Create a develop branch and push it to github.
  3. Rebase my example feature branch matthew-hoyles/get-tiles-testing onto the new develop branch.
  4. Delete the integeration-0 branch.
  5. Issue a pull request for get-tiles-testing. Josh has already done a quick code review on this branch, but that was before I found out about pull requests, so I want to go through the process to see how it works.

If anyone has comments on this, please add them to the issue, or send me an email, or both. If you don't want to receive notifications like this, go to the ga-datacube page on github, click the unwatch button, and change your status to 'not watching'.

Distributed database for the AGDC

This suggestion is motivated by our experience using the Datacube RDBMS on HPC (Raijin in this case, for the WOfS application).

Typical HPC jobs involve parallel execution of many (perhaps thousands of) independent UNIX tasks. Each task often acts as a database client. In this architecture, the central database is a potential bottleneck. If the database becomes overloaded, it impacts the job in question and all other AGDC users: throughput falls off and, if the database needs rebooting, large jobs can be lost.

Is the use of a distributed database worth considering?

Between ingestion runs, the AGDC database remains in a static, read-only state. The HPC analysis runs do not update the database.

So here's the suggestion.

a) We "publish" a new version of the AGDC database each time we do an ingestion. The database is "published" as a single DB content file.

b) When an HPC job starts it is allocated a cluster of one or more nodes. Each node has 16 CPUs and a 400GB solid state disk (SSD). When a Datacube analysis job starts, the first step is to copy the database content file to the SSD. A single lightweight DB server can then be started on the node, and each UNIX task running on the node communicates only with its local DB server. Alternatively, each UNIX task could use a lightweight embedded SQL engine to access the same DB content on the SSD (SQLite comes to mind; see the sketch after point (c)). All DB operations are performed "on-node". At the end of the HPC run, the DB on the SSD is discarded.

c) The current central DB server is maintained to service the non-HPC clients.
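
A minimal sketch of the embedded option in point (b), assuming the published content file is a SQLite database image (all paths and file names here are illustrative only):

    import shutil
    import sqlite3

    # Illustrative paths: the published DB content file on shared storage and
    # the node-local SSD scratch area.
    PUBLISHED_DB = "/g/data/agdc/published/agdc_v123.sqlite"
    LOCAL_DB = "/local/ssd/agdc_v123.sqlite"

    # One copy per node at job start-up; every task on the node then reads locally.
    shutil.copyfile(PUBLISHED_DB, LOCAL_DB)

    # The snapshot is treated as read-only and is discarded at the end of the run.
    conn = sqlite3.connect(LOCAL_DB)
    for (count,) in conn.execute("SELECT count(*) FROM sqlite_master"):
        print(count)
    conn.close()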

Advantages

  • No central database server to bottleneck
  • Little or no change to existing API code (we still use SQL)
  • The AGDC RDBMS becomes naturally "versioned" with each ingestion run creating a new version (hey, this sounds like a big win for provenance tracking and good data governance)
  • Speed improvement during task execution -- SSD I/O is very very FAST!!!
  • Memory savings and simplified API design (we can use generators to iterate over SQL result sets without fear of destroying the central server)

Disadvantages

  • Extra step during PBS job setup (one or two lines of BASH)
  • Additional I/O and runtime during DB->SSD copy (but Lustre is FAST and is designed for this type of copy operation)
  • Consumption of SSD (but our databases are not big, and we don't typically use the SSD anyway)

Thinking ahead a little to when databases are larger, we could further partition our databases along the lines of a "Landsat DB" and a "Modis DB" in addition to an "Everything DB". The DB could also be partitioned by year, or dynamically partitioned at the beginning of an HPC run before being distributed to the nodes ... a kind of AGDC map-reduce pattern.

But I'm getting ahead of myself.

How do you feel about distributing our AGDC DB to the nodes?

cheers
Steven

DB connections should not span API calls

More a suggestion than a question...

The query API currently returns a generator, inviting applications to iterate over Tiles and Cells. The generator will hold a database connection (and cursor) open until iteration is complete. Connections and cursors are heavyweight objects (particularly for the DB server). Application Tile processing may be long-running, so connections may be held open for long periods of time.

The concern is that this approach will not scale on HPC, where there may be many thousands of application instances each slowly iterating over sets of Tiles. The single DB server can become stressed by having to maintain many long-duration cursors. Connection limits may also be reached, resulting in denial of service to application instances.

Suggest using

    # Fetch the entire (modest) result set, then release the connection immediately.
    cursor.execute(sql, params)
    tiles = []

    for record in cursor:
        tiles.append(Tile.from_db_record(record))

    conn.close()
    return tiles

Instead of

    for record in result_generator(cursor):
        yield Tile.from_db_record(record)

The former has the advantage of ensuring that cursors and connections are explicitly closed and, more importantly, have very short life spans on the DB server. The memory and time cost of transporting the full result set to the application host ought to be modest given our Tile numbers and the lightweight nature of Tile instances.
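
If we adopt this pattern, psycopg2 (2.5 or later, already a requirement) also lets the cursor be scoped with a with-block so it is closed even on error. A minimal sketch, where the function name and dsn argument are illustrative and Tile.from_db_record is the existing factory method:

    import psycopg2

    def fetch_tiles(dsn, sql, params):
        conn = psycopg2.connect(dsn)
        try:
            # psycopg2 >= 2.5: cursors are context managers and close on exit,
            # so their life span on the DB server stays short.
            with conn.cursor() as cursor:
                cursor.execute(sql, params)
                return [Tile.from_db_record(record) for record in cursor]
        finally:
            conn.close()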

NBAR Landsat ingest fails - PQ works ok

Working on the develop branch on Ubuntu 64-bit.

Execution example:

python -m agdc.ingest.landsat -C agdc_default.conf --source /mnt/scenes/nbar/

2015-07-15 16:59:01,534 agdc.ingest.landsat INFO Searching for datasets in /mnt/scenes/nbar/
2015-07-15 16:59:01,619 agdc.ingest.landsat.landsat_dataset INFO Opening Dataset /mnt/scenes/nbar/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306
2015-07-15 16:59:02,203 eotools.drivers._scene_dataset INFO 78 values read from metadata
2015-07-15 16:59:02,526 agdc.ingest._core INFO Ingestion failed for dataset '/mnt/scenes/nbar/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306' in 0:00:00.907321:
2015-07-15 16:59:02,527 agdc.ingest._core INFO Unable to find unique match for file pattern .*_B10\..*
2015-07-15 16:59:02,530 agdc.ingest.landsat.landsat_dataset INFO Opening Dataset /mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330
2015-07-15 16:59:03,118 eotools.drivers._scene_dataset INFO 78 values read from metadata
2015-07-15 16:59:03,466 agdc.ingest._core INFO Ingestion failed for dataset '/mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330' in 0:00:00.936024:
2015-07-15 16:59:03,467 agdc.ingest._core INFO Unable to find unique match for file pattern .*_B1\..*
2015-07-15 16:59:03,471 agdc.ingest._core INFO Ingestion process complete for source directory '/mnt/scenes/nbar' in 0:00:01.937171.

This is what the inputs look like - note that PQ products ingest ok.

simonaoliver@simonaoliver-VirtualBox:~/datacube/agdc-develop/agdc$ find /mnt/scenes/nbar/
/mnt/scenes/nbar/
/mnt/scenes/nbar/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306
/mnt/scenes/nbar/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306.jpg
/mnt/scenes/nbar/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306_FR.jpg
/mnt/scenes/nbar/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306/md5sum.txt
/mnt/scenes/nbar/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306/metadata.xml
/mnt/scenes/nbar/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306/scene01
/mnt/scenes/nbar/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306/scene01/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306_B10.tif
/mnt/scenes/nbar/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306/scene01/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306_B10.tif.aux.xml
/mnt/scenes/nbar/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306/scene01/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306_B20.tif
/mnt/scenes/nbar/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306/scene01/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306_B30.tif
/mnt/scenes/nbar/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306/scene01/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306_B40.tif
/mnt/scenes/nbar/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306/scene01/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306_B50.tif
/mnt/scenes/nbar/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306/scene01/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306_B70.tif
/mnt/scenes/nbar/LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306/scene01/report.txt
/mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330
/mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330.jpg
/mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330_FR.jpg
/mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330/md5sum.txt
/mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330/metadata.xml
/mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330/scene01
/mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330/scene01/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330_B1.tif
/mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330/scene01/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330_B1.tif.aux.xml
/mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330/scene01/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330_B2.tif
/mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330/scene01/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330_B3.tif
/mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330/scene01/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330_B4.tif
/mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330/scene01/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330_B5.tif
/mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330/scene01/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330_B6.tif
/mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330/scene01/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330_B7.tif
/mnt/scenes/nbar/LS8_OLI_TIRS_NBAR_P54_GANBAR01-032_091_085_20150330/scene01/report.txt
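
One possible explanation (not confirmed): the band file pattern matches both the band TIFF and its .aux.xml sidecar, so no unique match is found. A quick check against the names listed above:

    import re

    # Pattern reported in the ingest log, tested against the files listed above.
    pattern = re.compile(r".*_B10\..*")
    names = [
        "LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306_B10.tif",
        "LS7_ETM_NBAR_P54_GANBAR01-002_091_085_20150306_B10.tif.aux.xml",
    ]
    print([n for n in names if pattern.match(n)])  # both match, so the match is not unique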

Add ability to query for "missing" datasets....

It would be useful, e.g. for both the fractional cover processing and potentially the WOfS processing, to have an API query which returns "missing" datasets.

That is: give me a list of NBAR datasets for which there is NOT an associated FC (or WOfS or ...) dataset.

Refer to #35
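
A rough client-side sketch of the idea, assuming DatasetType lives in datacube.api.model (as the tracebacks elsewhere on this page suggest) and that tile.datasets is a dict keyed by DatasetType; both are assumptions, and the function name is illustrative:

    from datacube.api.model import DatasetType

    def tiles_missing_derived(tiles, source=DatasetType.ARG25, derived=DatasetType.FC25):
        # Stop-gap until a server-side query exists: keep tiles that carry the
        # source dataset but lack the derived one (FC, WOfS, ...).
        return [tile for tile in tiles
                if source in tile.datasets and derived not in tile.datasets]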

Add ability to query for "DSM" datasets

It's not obvious how to obtain a DSM tile via the API and there are no examples.

    tiles = list_tiles_wkt(wkt, satellites=[Satellite.LS7], years=[2005, 2006, 2007],
                           datasets=[DatasetType.ARG25, DatasetType.PQ25,
                                     DatasetType.FC25, DatasetType.DSM],
                           database=config.get_db_database(), user=config.get_db_username(),
                           password=config.get_db_password(),
                           host=config.get_db_host(), port=config.get_db_port())

doesn't work.

no DB API config

A conn.rollback() exception occurs when executing the example ipynbs.

The default DB config is currently coded in config.py. Suggest moving it to ~/.datacube/config (and reflecting the flyway config default users etc.):

[DATABASE]
host: 130.56.244.225
port: 5432
database: hypercube_v0
username: cube_user
password: GAcube0
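
For illustration, such a file could be read with the standard library config parser; the function name is hypothetical, and the section/key names follow the example above:

    import os
    from ConfigParser import SafeConfigParser   # 'configparser' on Python 3

    def load_db_config(path="~/.datacube/config"):
        parser = SafeConfigParser()
        parser.read(os.path.expanduser(path))
        return {
            "host": parser.get("DATABASE", "host"),
            "port": parser.getint("DATABASE", "port"),
            "database": parser.get("DATABASE", "database"),
            "user": parser.get("DATABASE", "username"),
            "password": parser.get("DATABASE", "password"),
        }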

flyway configuration example - minor change suggested

Suggested for ease of install - make the following changes to the flyway example configuration (a combined sketch follows the list):

  • Change user to cube_admin
  • Change password to cube_admin
  • Add flyway.schemas=datacube
  • The flyway.locations value also implies an installation location for flyway; maybe replace it with /database/sql
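
Taken together, the example flyway.conf might look like the following sketch; the URL mirrors the [DATABASE] example above and is a placeholder rather than a confirmed value:

    flyway.url=jdbc:postgresql://130.56.244.225:5432/hypercube_v0
    flyway.user=cube_admin
    flyway.password=cube_admin
    flyway.schemas=datacube
    flyway.locations=filesystem:/database/sql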

modify the pixel time series tool to return a dataframe

The pixel time series tool could be made more functional by incorporating a function that returns a pandas.DataFrame rather than outputting directly to disk.

Something along the lines of:

dataframe = retrieve_pixel_time_series(coord=(x, y), dsets=[ARG25, FC25, PQ25], pq_flags=None)

where:
coord: a tuple of x, y coordinates
dsets: a list of DatasetTypes
pq_flags: If None then no PQ masking is applied.

A pandas.DataFrame is returned containing the same info as is currently output, but it is time-series aware: the dataset.start_datetime timestamps are set as the DataFrame index.

The current RetrievePixelTimeSeriesTool class could make use of the same return structure, i.e. a pandas.DataFrame, allowing multiple output formats (CSV, JSON, xls, ...) to be written to disk.
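
For illustration, the returned object might be assembled along these lines (band names and values are invented; the point is only the DatetimeIndex):

    import pandas as pd

    # Hypothetical per-acquisition records: (dataset.start_datetime, band values).
    records = [
        ("2006-01-08T23:45:10", {"BLUE": 432, "GREEN": 612, "RED": 580}),
        ("2006-01-24T23:45:02", {"BLUE": 401, "GREEN": 598, "RED": 566}),
    ]

    index = pd.to_datetime([timestamp for timestamp, _ in records])
    dataframe = pd.DataFrame([bands for _, bands in records], index=index)

    # Time-series aware: slice by date range, resample, etc.
    print(dataframe.loc["2006-01"])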

What do you think?
I can draft up an example soonish.

Allow NULL value to be specified for Retrieve Pixel Time Series tool

When using the --bands-all option with the Retrieve Pixel Time Series tool, it currently outputs NULL for those bands which are not applicable for a data set. See #24.

That is, an empty CSV field x,,y.

It would be desirable to allow the end user to specify the behaviour - for example, to elect to have the NO DATA VALUE output rather than NULL.

Implement Retrieve Dataset Stack tool

Implement a tool to retrieve a stack of (arbitrary) datasets with optional masking (pixel quality, water/wofs) and create a set of stacked - i.e. one band per acquisition - datasets.

Implement query API

Implement a query API to allow users to "discover" the holdings of the AGDC

psycopg2 version 2.5 or above required

The installation documentation should state that psycopg2 release 2.5 or higher is required, because older releases contain an issue that is fixed in release 2.5.

Durable/invariant acquisition identifier

Since its inception, WOfS has encountered problems with the lack of a stable identifier for "tiles/acquisitions". WOfS currently uses the AGDC convention of "start timestamp" which is available via the API and also encoded into the dataset filename.

One problem has been the fuzziness associated with this value. In the past, if a tile/acquisition was re-ingested, the "start timestamp" was not guaranteed to retain the same value, so any application running incremental updates may incorrectly interpret slightly changed start times as a new tile.

The microsecond resolution of these timestamps also causes problems, with the string representation of the timestamp changing depending on the value of (or absence of) the microsecond component. A general solution to this problem requires some regex gymnastics (e.g. https://github.com/smr547/ga-neo-nfrip/blob/luigi_api_refactor/wofs/timeparser.py).
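
To make the fragility concrete, here is a tolerant parse of the filename-style timestamp, which may or may not carry a microsecond component (a sketch only, not the wofs timeparser itself):

    from datetime import datetime

    def parse_acquisition_timestamp(value):
        # Filename-style timestamps, e.g. "1991-01-09T23-03-58.241019" or
        # "2013-04-24T01-46-06" (no microseconds) -- try both layouts.
        for fmt in ("%Y-%m-%dT%H-%M-%S.%f", "%Y-%m-%dT%H-%M-%S"):
            try:
                return datetime.strptime(value, fmt)
            except ValueError:
                pass
        raise ValueError("unrecognised timestamp: %r" % value)

    print(parse_acquisition_timestamp("1991-01-09T23-03-58.241019"))
    print(parse_acquisition_timestamp("2013-04-24T01-46-06"))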

Anyway, I'd like to suggest that we consider using a combination of satellite_id, "orbit number" and AGDC "cell_ID" as a durable acquisition identifier within any particular dataset type.

This ID has the highly desirable property of not changing between ingestions. The orbit number is readily available from TLEs and could be computed during pre-ingestion processing (if it is not already).

The identifier's value can also be easily predetermined, i.e. we can reliably state what acquisition IDs will be generated during the next (or any) orbit of LS8. This has significant benefits when developing QA procedures (i.e. identifying gaps in our data).

Obviously it also eliminates the possibility of data duplication.

As an example

LS5_TM_NBAR_150_-034_1991-01-09T23-03-58.241019.tif

might become

LS5_NBAR_150_-034_23456.tif

Test failure: Type error during NBAR ingestion.

Opening Dataset /g/data/v10/projects/ingest_test_data/input/NBAR/LS5_TM_NBAR_P54_GANBAR01-002_100_081_20100228
78 values read from metadata
9 tile footprints cover dataset
Unexpected error during path '/g/data/v10/projects/ingest_test_data/input/NBAR/LS5_TM_NBAR_P54_GANBAR01-002_100_081_20100228'
Traceback (most recent call last):
  File "/apps/python/2.7.3/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/apps/python/2.7.3/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/projects/u46/opt/modules/agdc/1.2.0rc/lib/python2.7/site-packages/agdc/landsat_ingester/__main__.py", line 39, in <mo#
    run_ingest(LandsatIngester)
  File "/projects/u46/opt/modules/agdc/1.2.0rc/lib/python2.7/site-packages/agdc/abstract_ingester/_core.py", line 590, in run_#
    ingester.ingest(ingester.args.source_dir)
  File "/projects/u46/opt/modules/agdc/1.2.0rc/lib/python2.7/site-packages/agdc/abstract_ingester/_core.py", line 191, in inge#
    self.ingest_individual_dataset(dataset_path)
  File "/projects/u46/opt/modules/agdc/1.2.0rc/lib/python2.7/site-packages/agdc/abstract_ingester/_core.py", line 212, in inge#
    self.tile(dataset_record, dataset)
  File "/projects/u46/opt/modules/agdc/1.2.0rc/lib/python2.7/site-packages/agdc/abstract_ingester/_core.py", line 286, in tile
    tile_list += dataset_record.make_tiles(tile_type_id, band_stack)
  File "/projects/u46/opt/modules/agdc/1.2.0rc/lib/python2.7/site-packages/agdc/abstract_ingester/dataset_record.py", line 218#
    tile_contents.reproject()
  File "/projects/u46/opt/modules/agdc/1.2.0rc/lib/python2.7/site-packages/agdc/abstract_ingester/tile_contents.py", line 119,#
    _reproject(self.tile_type_info, self.tile_footprint, self._band_stack, temp_tile_output_path)
  File "/projects/u46/opt/modules/agdc/1.2.0rc/lib/python2.7/site-packages/agdc/abstract_ingester/tile_contents.py", line 256,#
    command_string = ' '.join(reproject_cmd)
TypeError
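
The failing statement is command_string = ' '.join(reproject_cmd), so a plausible (unconfirmed) cause is a non-string element in reproject_cmd. The same bare TypeError is easy to reproduce; the command below is illustrative only, not the actual contents of reproject_cmd in tile_contents.py:

    # ' '.join() requires every element to be a string; a stray int (or None)
    # in the list raises TypeError.
    reproject_cmd = ["gdalwarp", "-ts", 4000, 4000, "input.vrt", "output.tif"]
    command_string = ' '.join(reproject_cmd)
    # TypeError: sequence item 2: expected string, int found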

Optional sort sequences for Tiles

The query API currently returns tiles sorted by end_date. Some apps may want to sort by satellite, others may want to sort by lat/long (cell_id). I'm wondering if a "sort_key" parameter on list_tiles_as_generator and list_tiles_as_list methods would be useful? Such a parameter would allow the DB to do the sorting and avoid in-memory sorting of Python objects. Of course, this is only an issue for large tile sets.

Not a firm suggestion at this stage, just a question.

Time series retrieval error

This is the command I executed to produce the error:

module load agdc-api/0.1.0-b20150512

$ retrieve_pixel_time_series.py --lon 150.2708 --lat -35.6355 --acq-min 1987-01 --acq-max 2014-12 --satellite LS7 LS5 LS8 --dataset-type ARG25 --mask-pqa-apply --mask-pqa-mask PQ_MASK_CLEAR --bands-all --hide-no-data --quiet --output-directory $PWD
2015-05-13 08:56:25,182 ERROR Caught exception invalid literal for int() with base 10: '2014-07-19T23-43-48.vrt'
Traceback (most recent call last):
File "/projects/el8/opt/modules/agdc-api/0.1.0-b20150512/bin/retrieve_pixel_time_series.py", line 5, in
pkg_resources.run_script('agdc-api==0.1.0-b20150512', 'retrieve_pixel_time_series.py')
File "build/bdist.linux-x86_64/egg/pkg_resources.py", line 492, in run_script

File "build/bdist.linux-x86_64/egg/pkg_resources.py", line 1350, in run_script

File "/projects/el8/opt/modules/agdc-api/0.1.0-b20150512/lib/python2.7/site-packages/agdc_api-0.1.0_b20150512-py2.7.egg/EGG-INFO/scripts/retrieve_pixel_time_series.py", line 422, in
RetrievePixelTimeSeriesTool("Retrieve Pixel Time Series").run()
File "/projects/el8/opt/modules/agdc-api/0.1.0-b20150512/lib/python2.7/site-packages/agdc_api-0.1.0_b20150512-py2.7.egg/datacube/api/tool/init.py", line 165, in run
self.go()
File "/projects/el8/opt/modules/agdc-api/0.1.0-b20150512/lib/python2.7/site-packages/agdc_api-0.1.0_b20150512-py2.7.egg/EGG-INFO/scripts/retrieve_pixel_time_series.py", line 217, in go
for tile in self.get_tiles(x=cell_x, y=cell_y):
File "/projects/el8/opt/modules/agdc-api/0.1.0-b20150512/lib/python2.7/site-packages/agdc_api-0.1.0_b20150512-py2.7.egg/EGG-INFO/scripts/retrieve_pixel_time_series.py", line 170, in get_tiles
return list(self.get_tiles_from_db(x=x, y=y))
File "/projects/el8/opt/modules/agdc-api/0.1.0-b20150512/lib/python2.7/site-packages/agdc_api-0.1.0_b20150512-py2.7.egg/EGG-INFO/scripts/retrieve_pixel_time_series.py", line 190, in get_tiles_from_db
dataset_types=dataset_types):
File "/projects/el8/opt/modules/agdc-api/0.1.0-b20150512/lib/python2.7/site-packages/agdc_api-0.1.0_b20150512-py2.7.egg/datacube/api/query.py", line 985, in list_tiles_as_generator
yield Tile.from_db_record(record)
File "/projects/el8/opt/modules/agdc-api/0.1.0-b20150512/lib/python2.7/site-packages/agdc_api-0.1.0_b20150512-py2.7.egg/datacube/api/model.py", line 369, in from_db_record
datasets=DatasetTile.from_db_array(record["satellite"], record["datasets"]))
File "/projects/el8/opt/modules/agdc-api/0.1.0-b20150512/lib/python2.7/site-packages/agdc_api-0.1.0_b20150512-py2.7.egg/datacube/api/model.py", line 236, in from_db_array
dst = make_wofs_dataset(satellite_id, out[DatasetType.ARG25])
File "/projects/el8/opt/modules/agdc-api/0.1.0-b20150512/lib/python2.7/site-packages/agdc_api-0.1.0_b20150512-py2.7.egg/datacube/api/model.py", line 450, in make_wofs_dataset
y = int(fields[5])
ValueError: invalid literal for int() with base 10: '2014-07-19T23-43-48.vrt'

I can replicate the error by executing:

    >>> f = 'LS8_OLI_TIRS_NBAR_123_-025_2013-04-24T01-46-06.vrt'
    >>> f.split('_')
    ['LS8', 'OLI', 'TIRS', 'NBAR', '123', '-025', '2013-04-24T01-46-06.vrt']
    >>> int('2014-07-19T23-43-48.vrt')
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ValueError: invalid literal for int() with base 10: '2014-07-19T23-43-48.vrt'

It would appear that the file being accessed didn't follow the standard naming convention?
