xskillscore's Introduction

xskillscore: Metrics for verifying forecasts

xskillscore is an open source project and Python package that provides verification metrics for deterministic (and, via properscoring, probabilistic) forecasts with xarray.

Installing

$ conda install -c conda-forge xskillscore

or

$ pip install xskillscore

or

$ pip install git+https://github.com/xarray-contrib/xskillscore

Documentation

Documentation can be found on readthedocs.

See also

  • If you are interested in using xskillscore for data science where your data is mostly in pandas.DataFrames, check out the xskillscore-tutorial
  • If you are interested in using xskillscore for climate prediction check out climpred.

History

xskillscore was originally developed to parallelize forecast metrics of the multi-model, multi-ensemble forecasts associated with the SubX project.

We are indebted to the xarray community for their advice in getting this package started.

xskillscore's People

Contributors

aaronspring, ahuang11, blackary, bradyrx, cheginit, dougiesquire, mcsitter, pre-commit-ci[bot], raybellwaves, zeitsperre

xskillscore's Issues

Discussion: Streamlining xskillscore

Along the lines of https://github.com/raybellwaves/xskillscore/issues/33: as the package grows, I wonder how we can streamline it. The code base is getting larger and larger, and consolidating might make it easier to navigate.

Some ideas:

  • Testing vectorized metrics: have a general function that checks whether _metric from np_deterministic gives the same results as metric from deterministic (see the sketch after this list)
  • Testing dask: have a general function that checks whether functions in deterministic give the same results lazily as after .load()
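
A minimal sketch of such a general test, assuming each vectorized core _metric(a, b, axis) in np_deterministic has an xarray-level counterpart metric(a, b, dim) in deterministic (the real signatures may carry extra arguments such as weights or skipna):

import numpy as np
import pytest
import xarray as xr

from xskillscore.core.deterministic import mae, mse, rmse
from xskillscore.core.np_deterministic import _mae, _mse, _rmse

METRIC_PAIRS = [(mae, _mae), (mse, _mse), (rmse, _rmse)]

@pytest.mark.parametrize("metric,np_metric", METRIC_PAIRS)
def test_xr_matches_np(metric, np_metric):
    # random toy data with a named dimension to reduce over
    a = xr.DataArray(np.random.rand(3, 4), dims=["time", "lon"])
    b = xr.DataArray(np.random.rand(3, 4), dims=["time", "lon"])
    expected = np_metric(a.values, b.values, 0)  # vectorized core, axis 0
    actual = metric(a, b, "time")                # xarray wrapper, dim "time"
    np.testing.assert_allclose(actual.values, expected)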

Challenges:

  • Can we think of a template for new metrics? Many metrics are very similar (the math is one line of code), but implementing a metric takes more than 100 lines of code

Reduce warnings

There are a ton of warnings. They appear in pytest and in the demo. So far I have mostly suppressed all warnings, but there are warning catchers and filters that should be implemented instead.

seeing warnings in pytest:

pytest -r w

I had to uninstall pytest-tldr and pytest-sugar.

how to reduce: https://docs.pytest.org/en/stable/warnings.html

Multiple metrics applied at once in parallel

I wonder if it would be good to have a util that allows the user to apply multiple metrics at once in parallel, like:
xs.score(a, b, 'time', ['rmse', 'mae', 'pearson_r', 'pearson_r_p_val'])
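
A minimal sketch of such a util (not an existing xskillscore API; the name score and the dict return type are illustrative). Parallel evaluation would then come from chunked dask inputs, which the individual metrics already support:

import xskillscore as xs

def score(a, b, dim, metrics):
    # dispatch each named metric; names must match public xskillscore functions
    return {name: getattr(xs, name)(a, b, dim) for name in metrics}

# usage: results = score(a, b, 'time', ['rmse', 'mae', 'pearson_r'])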

mu scalars / sig scalars in probabilistic fail

Docs state scalars are ok

    mu : Dataset, DataArray, GroupBy, Variable, numpy/dask arrays or
     scalars, Mix of labeled and/or unlabeled forecasts mean arrays.
    sig : Dataset, DataArray, GroupBy, Variable, numpy/dask arrays or
     scalars, Mix of labeled and/or unlabeled forecasts mean arrays.

However,

/mnt/c/Users/Solactus/GOOGLE~1/Bash/xskillscore/xskillscore/core/probabilistic.py in xr_crps_gaussian(observations, mu, sig)
     27     """
     28     # check if same dimensions
---> 29     if mu.dims != observations.dims:
     30         observations, mu = xr.broadcast(observations, mu)
     31     if sig.dims != observations.dims:

AttributeError: 'int' object has no attribute 'dims'

Probably we need to do something like if not isinstance(mu, xr.DataArray): mu = xr.DataArray(mu).
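
A sketch of that guard at the top of xr_crps_gaussian (the helper name is illustrative):

import xarray as xr

def _as_dataarray(x):
    # coerce scalars / plain arrays so that .dims is always available
    return x if isinstance(x, xr.DataArray) else xr.DataArray(x)

# mu, sig = _as_dataarray(mu), _as_dataarray(sig)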

check whether faster without moveaxis

Check whether the deterministic metrics would speed up when omitting np.moveaxis, or even when written as pure xarray functions.

see also http://xarray.pydata.org/en/stable/dask.html

import numpy as np
import xarray as xr
import bottleneck

def covariance_gufunc(x, y):
    return ((x - x.mean(axis=-1, keepdims=True))
            * (y - y.mean(axis=-1, keepdims=True))).mean(axis=-1)

def pearson_correlation_gufunc(x, y):
    return covariance_gufunc(x, y) / (x.std(axis=-1) * y.std(axis=-1))

def spearman_correlation_gufunc(x, y):
    x_ranks = bottleneck.rankdata(x, axis=-1)
    y_ranks = bottleneck.rankdata(y, axis=-1)
    return pearson_correlation_gufunc(x_ranks, y_ranks)

def spearman_correlation(x, y, dim):
    return xr.apply_ufunc(
        spearman_correlation_gufunc, x, y,
        input_core_dims=[[dim], [dim]],
        dask='parallelized',
        output_dtypes=[float])

Check that `a` and `b` are xarray objects

The docstrings for all deterministic functions claim that ndarrays (i.e. numpy arrays) can be passed through, but that isn't the case since a lot of the preprocessing and wrapper functions leverage xarray methods to set up the package. Given the prefix x for xskillscore, I think it would be reasonable to enforce that a and b have to be xarray objects.

We do this in climpred with a decorator:

https://github.com/bradyrx/climpred/blob/61e397bc07004e292af5b0798d4eca17be5dd263/climpred/checks.py#L19-L57

Which is then applied as a header to relevant functions:

https://github.com/bradyrx/climpred/blob/61e397bc07004e292af5b0798d4eca17be5dd263/climpred/prediction.py#L32-L41

This would be easy to port over to xskillscore and decorate all of the high-level functions with.
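
A minimal sketch of such a decorator (climpred's linked version is more thorough; the names here are illustrative):

from functools import wraps

import xarray as xr

def require_xarray(func):
    # raise early if a or b is not an xarray object
    @wraps(func)
    def wrapper(a, b, *args, **kwargs):
        for name, obj in (('a', a), ('b', b)):
            if not isinstance(obj, (xr.DataArray, xr.Dataset)):
                raise TypeError(
                    f'`{name}` must be an xr.DataArray or xr.Dataset, got {type(obj)}'
                )
        return func(a, b, *args, **kwargs)
    return wrapper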

CRPS implemented incorrectly

If forecast and obs have the same shape, the result is just the MAE. The forecast needs an extra ensemble dimension for CRPS to be computed probabilistically.
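
A small illustration with properscoring (which xskillscore wraps); when the forecast has the same shape as the observations it is treated as deterministic, and CRPS reduces to the absolute error:

import numpy as np
import properscoring as ps

obs = np.random.rand(10)
fct_det = np.random.rand(10)     # same shape as obs
fct_ens = np.random.rand(10, 5)  # extra trailing ensemble axis, 5 members

print(ps.crps_ensemble(obs, fct_det))  # identical to np.abs(fct_det - obs)
print(ps.crps_ensemble(obs, fct_ens))  # proper probabilistic CRPS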

Feature request: spearman correlation

I'd like to calculate the spearman correlation.

Why? To calculate correlations for non-Gaussian quantities like mixed layer depth.

Ref: Servonnat, Jérôme, Juliette Mignot, Eric Guilyardi, Didier Swingedouw, Roland Séférian, and Sonia Labetoulle. “Reconstructing the Subsurface Ocean Decadal Variability Using Surface Nudging in a Perfect Model Framework.” Climate Dynamics 44, no. 1–2 (January 1, 2015): 315–38. https://doi.org/10/f6v7kq.

Example code from the xarray docs: http://xarray.pydata.org/en/stable/dask.html#automatic-parallelization

Wiki: https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient

Ways to implement:

  • hack pearson_r with x_ranks = bottleneck.rankdata(x, axis=-1) and then rename to corr(type='pearson')
  • write a new function, spearman_r, implemented the same way as pearson_r

pearson_r position NaN doesn't look right / is misaligned

I think something is misaligned
[screenshot of the expected vs. actual output omitted]

Check mark means it matches my expectation.

The highlighted part / X mark is fishy.

import numpy as np
import xarray as xr
import xskillscore as xs

da = xr.DataArray([[[0, 1, 2], [np.nan, 4, 5]],
                   [[0, 1, 2], [3, np.nan, 5]],
                   [[0, 1, 2], [2, 4, 5]]
                  ], dims=['time', 'lat', 'lon'])
da.isel(time=0)
da.isel(time=1)
da.isel(time=2)

xs.pearson_r(da, da, dim='time', skipna=False)
xs.pearson_r(da, da, dim='time', skipna=True)

xs.pearson_r(da, da, dim=['lat', 'lon'], skipna=False)
xs.pearson_r(da, da, dim=['lat', 'lon'], skipna=True)

Allow optional kwargs to deterministic functions

I think it would be a nice feature to allow optional keyword arguments (kwargs) to

def mae(a, b, dim):

e.g. something like:

def mae(a, b, dim, **kwargs):

which would then be passed through to

    return xr.apply_ufunc(_mae, a, b,
                          input_core_dims=[dim, dim],
                          kwargs={'axis': axis},
                          dask='parallelized',
                          output_dtypes=[float])

e.g. something like:

    return xr.apply_ufunc(_mae, a, b,
                          input_core_dims=[dim, dim],
                          kwargs={'axis': axis, **kwargs},  # merge dicts; note dict.update() returns None
                          dask='parallelized',
                          output_dtypes=[float])

np.rollaxis needed ?

When playing around in https://github.com/raybellwaves/xskillscore/pull/66 I tried to get rid of the lines with np.rollaxis in the correlations. I think it might not be needed, and removing it might speed things up even more, but I cannot make the tests pass.

https://github.com/raybellwaves/xskillscore/blob/dd1400fd96afb82895cab881978d293942230bf4/xskillscore/core/np_deterministic.py#L96

Below are some observations:

  • somehow the tests pass either way: kwargs={"axis": -1, "skipna": skipna} and kwargs={"axis": 0, "skipna": skipna} make no difference in deterministic.py for correlations.
  • apply_ufunc: Core dimensions are automatically moved to the last axes of input variables before applying func, which facilitates using NumPy style generalized ufuncs. http://xarray.pydata.org/en/stable/generated/xarray.apply_ufunc.html
  • I would have guessed that we could spare the rollaxis call if axis=0:
a = np.ones((3,4,5,6))
np.rollaxis(a,-1).shape
# equivalent
np.rollaxis(a,-1,0).shape

# what we now do
%timeit np.sum(np.rollaxis(a,-1),axis=0).shape
8.59 µs ± 251 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

# what might be faster
%timeit np.sum(a,axis=-1).shape
5.9 µs ± 362 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

Do you see a solution to this potential speed-up?

Is median absolute deviation used in forecasting and written accurately?

xskillscore includes median absolute deviation (MAD). It seems to be returning different results than other packages, and is not implemented the way Wikipedia (https://en.wikipedia.org/wiki/Median_absolute_deviation) or scipy (https://scipy.github.io/devdocs/generated/scipy.stats.median_absolute_deviation.html) describe it. I also can't find any documentation of it in the forecasting literature.

import numpy as np
import pandas as pd
import xarray as xr
import xskillscore as xs
import astropy.stats
import scipy.stats

np.random.seed(12345)
def generate_data():
    time = pd.date_range("1/1/2000", "1/5/2000", freq="D")
    da = xr.DataArray(np.random.rand(len(time)), dims=["time"], coords=[time])
    return da

a = generate_data()
b = generate_data()

xs.mad(a, b, "time")
<xarray.DataArray ()>
array(0.46925829)

astropy.stats.median_absolute_deviation(a, b)
0.13518689535658013

scipy.stats.median_absolute_deviation(a, b)
0.2004280910556657

The scipy description defines MAD as the median of the absolute deviations from the median, i.e. np.median(np.abs(x - np.median(x))). It seems like a statistic for a single time series. Does it make sense to apply it to the absolute error of a and b?
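
A quick numpy check of the two definitions, using the a and b series from above (here x is the pairwise error; xskillscore appears to return the first quantity):

import numpy as np

x = a.values - b.values
print(np.median(np.abs(x)))                 # median absolute error
print(np.median(np.abs(x - np.median(x))))  # scipy/Wikipedia MAD of the error series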

Just checking that there's some evidence of this being used in the field.

CC @raybellwaves, @aaronspring

Add sphinx documentation

It would be nice to add proper sphinx documentation for xskillscore hosted on readthedocs. A really barebones version is hosted at https://github.com/bradyrx/esmtools/, which gives you the essentials. We have more expansive documentation at https://github.com/bradyrx/climpred, which shows you how to use notebooks in the docs.

I think just having the API hosted more formally plus some example notebooks would be great. @hdrake expressed interest in leading this. However, I think readthedocs needs to be set up by the owner of the repo (@raybellwaves). Perhaps the sphinx docs can be assembled by someone else and then the readthedocs settings configured by @raybellwaves?

pearson_r_p_value skipna not working

import xarray as xr
import xskillscore as xs

d1 = xr.tutorial.open_dataset('air_temperature').isel(time=slice(0, 5))
d2 = xr.tutorial.open_dataset('air_temperature').isel(time=slice(5, 10))
d2 = d2.where(d2['air'] > 279)
d2['time'] = d1['time']
xs.pearson_r_p_value(d1, d2, ['lat', 'lon'], skipna=True)
air      (time) float64 nan nan nan nan nan

This works though

xs.spearman_r_p_value(d1, d2, ['lat', 'lon'], skipna=True)

RMSE documentation typo showing correlation

Signature: xs.rmse(a, b, dim)
Docstring:
Root Mean Squared Error.

Parameters

a : Dataset, DataArray, GroupBy, Variable, numpy/dask arrays or scalars
Mix of labeled and/or unlabeled arrays to which to apply the function.
b : Dataset, DataArray, GroupBy, Variable, numpy/dask arrays or scalars
Mix of labeled and/or unlabeled arrays to which to apply the function.
dim : str
The dimension to apply the correlation along.

Returns

Single value or tuple of Dataset, DataArray, Variable, dask.array.Array or
numpy.ndarray, the first type on that list to appear on an input.
Root Mean Squared Error.

Broadcasting dimensions in _match_nans

Currently, the _match_nans function in https://github.com/raybellwaves/xskillscore/blob/master/xskillscore/core/np_deterministic.py requires that a and b have identical dimensions, while most operations on xarray.Dataset objects broadcast across non-common dimensions. This means that all dependent skill scores (e.g. _mae) are inconsistent between the skipna=True and skipna=False options.

def _match_nans(a, b, weights):
    """
    Considers missing values pairwise. If a value is missing
    in a, the corresponding value in b is turned to nan, and
    vice versa.
    Returns
    -------
    a, b, weights : ndarray
        a, b, and weights (if not None) with nans placed at
        pairwise locations.
    """
    if np.isnan(a).any() or np.isnan(b).any():
        # Find pairwise indices in a and b that have nans.
        idx = np.logical_or(np.isnan(a), np.isnan(b))
        a[idx], b[idx] = np.nan, np.nan
        if weights is not None:
            weights[idx] = np.nan
    return a, b, weights

I think the following modifications could work to address this:

def _match_nans(a, b, weights):
    """
    Considers missing values pairwise. If a value is missing
    in a, the corresponding value in b is turned to nan, and
    vice versa.
    Returns
    -------
    a, b, weights : ndarray
        a, b, and weights (if not None) with nans placed at
        pairwise locations.
    """
    if np.isnan(a).any() or np.isnan(b).any():
        a, b = xr.broadcast(a, b)
        # Find pairwise indices in a and b that have nans.
        idx = np.logical_or(np.isnan(a), np.isnan(b))
        a[idx], b[idx] = np.nan, np.nan
        if weights is not None:
            weights[idx] = np.nan
    return a, b, weights

I'll try to test this myself and put in a Pull Request sometime soon but just wanted to open this issue first.
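
One caveat: _match_nans lives in np_deterministic.py and receives plain ndarrays, so xr.broadcast may not apply at that level. A numpy-level variant (an untested sketch) could use np.broadcast_arrays instead:

import numpy as np

def _match_nans(a, b, weights):
    if np.isnan(a).any() or np.isnan(b).any():
        a, b = np.broadcast_arrays(a, b)  # align non-common dimensions
        a, b = a.copy(), b.copy()         # broadcast results are read-only views
        # Find pairwise indices in a and b that have nans.
        idx = np.logical_or(np.isnan(a), np.isnan(b))
        a[idx], b[idx] = np.nan, np.nan
        if weights is not None:
            weights[idx] = np.nan
    return a, b, weights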

Multiple axes fails with all axes

I removed ('time', 'lat', 'lon') from the AXES variable in xskillscore/tests/test_deterministic.py

because I hit this failure (the test aborts rather than running):

test_deterministic.py::test_pearson_r_xr_dask[dim4] Aborted (core dumped)

I'm guessing it has something to do with
https://docs.dask.org/en/latest/changelog.html#id1

I would argue that this probably isn't worth looking into, as there are other packages to do stats on vectors; this package focuses on ndarrays.

Check that weights is same size as dimension(s) the metric is being applied over

Probably need to provide the user with a warning.

>>> import xarray as xr
>>> import pandas as pd
>>> import numpy as np
>>> from scipy.stats import norm
>>> import xskillscore as xs
>>> obs = xr.DataArray(
...     np.random.rand(3, 4, 5),
...     coords=[
...         pd.date_range("1/1/2000", "1/3/2000", freq="D"),
...         np.arange(4),
...         np.arange(5),
...     ],
...     dims=["time", "lat", "lon"],
... )
>>> fct = obs.copy()
>>> fct.values = np.random.rand(3, 4, 5)

>>> weights = np.cos(np.deg2rad(obs.lat))
>>> _, weights = xr.broadcast(obs, weights)
>>> weights = weights.isel(time=0)
>>> r = xs.pearson_r(obs, fct, ["time", "lat", "lon"], weights=weights)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ray/Documents/PYTHON_dev/xskillscore/xskillscore/core/deterministic.py", line 120, in pearson_r
    weights = weights.stack(**{new_dim: dim})
  File "/home/ray/local/bin/anaconda3/envs/xss/lib/python3.6/site-packages/xarray/core/dataarray.py", line 1725, in stack
    ds = self._to_temp_dataset().stack(dimensions, **dimensions_kwargs)
  File "/home/ray/local/bin/anaconda3/envs/xss/lib/python3.6/site-packages/xarray/core/dataset.py", line 3234, in stack
    result = result._stack_once(dims, new_dim)
  File "/home/ray/local/bin/anaconda3/envs/xss/lib/python3.6/site-packages/xarray/core/dataset.py", line 3181, in _stack_once
    shape = [self.dims[d] for d in vdims]
  File "/home/ray/local/bin/anaconda3/envs/xss/lib/python3.6/site-packages/xarray/core/dataset.py", line 3181, in <listcomp>
    shape = [self.dims[d] for d in vdims]
  File "/home/ray/local/bin/anaconda3/envs/xss/lib/python3.6/site-packages/xarray/core/utils.py", line 386, in __getitem__
    return self.mapping[key]
  File "/home/ray/local/bin/anaconda3/envs/xss/lib/python3.6/site-packages/xarray/core/utils.py", line 417, in __getitem__
    return self.mapping[key]
KeyError: 'time'
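
A sketch of such a check (a hypothetical helper; it could raise or warn before the stack call that currently fails):

def _check_weights(weights, dim):
    # ensure weights spans every dimension the metric reduces over
    dims = [dim] if isinstance(dim, str) else dim
    missing = [d for d in dims if d not in weights.dims]
    if missing:
        raise ValueError(
            f'weights is missing dimension(s) {missing}; it must contain '
            'every dimension the metric is applied over'
        )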

DataArrays as integer fail

import numpy as np
import xarray as xr
import xskillscore as xs

da = xr.DataArray([0, 1, 2], dims=['time'])

print('-----')

xs.pearson_r(da, da, dim='time', skipna=True)

yields TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType', because integer arrays cannot hold NaN.
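
A sketch of a possible fix, casting integer inputs to float before any NaN handling (the helper name is illustrative):

import numpy as np

def _ensure_float(da):
    # integer arrays cannot hold NaN, so promote them to float
    return da.astype(float) if np.issubdtype(da.dtype, np.integer) else da

# a, b = _ensure_float(a), _ensure_float(b)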

List out available metrics upfront

Instead of digging through the growing README, I think the following should be displayed up front.

# deterministic metrics
all = [
'_pearson_r',
'_pearson_r_p_value',
'_rmse',
'_mse',
'_mae',
'_mad',
'_smape',
'_mape',
'_spearman_r',
'_spearman_r_p_value',
]
# probabilistic metrics
all = [
'brier_score',
'crps_ensemble',
'crps_gaussian',
'crps_quadrature',
'threshold_brier_score',
]

xs performance vs built-in xr performance

I don't want to be the party pooper, but do you get an increase in performance with an xs metric over an easily self-written xarray function? I don't on my laptop.
For comparison, I also wrapped MSE from sklearn with apply_ufunc.

# test bed for xs performance

import numpy as np
import pandas as pd
import sklearn
import xarray as xr
import xskillscore as xs
from sklearn.metrics import mean_squared_error

np.random.seed(12345)


def generate_small_data():
    time = pd.date_range("1/1/2000", "1/5/2000", freq="D")
    da = xr.DataArray(np.random.rand(len(time)), dims=["time"], coords=[time])
    return da


a = generate_small_data()
b = generate_small_data()


def generate_large_data():
    time = pd.date_range("1/1/2000", "1/5/2000", freq="D")
    nlon, nlat = 200, 200
    da = xr.DataArray(np.random.rand(len(time), nlon, nlat), dims=[
                      "time", "lon", "lat"], coords=[time, np.arange(nlon), np.arange(nlat)])
    return da


#a = generate_large_data()
#b = generate_large_data()


def xr_mse(a, b, dim):
    return ((a-b)**2).mean(dim)

def sk_mse(a, b, dim):
    # sklearn handles at most 2d input, so stack all non-core dims first
    if len(a.dims) > 2:
        other_dims = [d for d in a.dims if d != dim]
        a = a.stack(s=other_dims)
        b = b.stack(s=other_dims)
        stacked = True
    else:
        stacked = False
    res = xr.apply_ufunc(
        # sklearn.metrics.mean_squared_error
        mean_squared_error,
        a,
        b,
        # weights,
        input_core_dims=[[dim], [dim]],
        # kwargs={"axis": axis, "skipna": skipna},
        dask="parallelized",
        output_dtypes=[float],
    )
    if stacked:
        res = res.unstack()
    return res

%timeit xs.mse(a, b, "time")
723 µs ± 7.79 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit xr_mse(a, b, "time")
383 µs ± 13.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit sk_mse(a, b, 'time')
619 µs ± 59.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

I re-ran timeit several times to avoid cache or compilation effects. A better way to test this would be directly with asv. I get similar results for larger datasets.

Why is the xarray performance higher? (xr).mean, (xr).sum, ... are automatically vectorized and essentially thin wrappers around numpy.

no lazy computes anymore

When implementing properscoring.brier_score, I saw that we don't test lazy computation, i.e. whether we get chunked results for chunked inputs, therefore I added a few assert statements. But this error now also occurs for test_xr_threshold_brier_score_dask, which already had the line assert actual.chunks is not None before my changes.

    @pytest.mark.parametrize('dim', AXES)
    def test_pearson_r_p_value_xr_dask(a_dask, b_dask, dim):
        actual = pearson_r_p_value(a_dask, b_dask, dim)
>       assert actual.chunks is not None
E       assert None is not None

Maybe it has something to do with the stacking in _preprocess? @ahuang11

Furthermore, I get this warning; maybe there is a problem with np.rollaxis:

======================================= warnings summary ========================================
/Users/aaron.spring/anaconda3/envs/climpred-dev/lib/python3.6/site-packages/_pytest/mark/structures.py:332
  /Users/aaron.spring/anaconda3/envs/climpred-dev/lib/python3.6/site-packages/_pytest/mark/structures.py:332: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
    PytestUnknownMarkWarning,

/Users/aaron.spring/anaconda3/envs/climpred-dev/lib/python3.6/site-packages/_pytest/mark/structures.py:332
  /Users/aaron.spring/anaconda3/envs/climpred-dev/lib/python3.6/site-packages/_pytest/mark/structures.py:332: PytestUnknownMarkWarning: Unknown pytest.mark.network - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
    PytestUnknownMarkWarning,

xskillscore/tests/test_deterministic.py::test_pearson_r_xr_dask[time]
xskillscore/tests/test_deterministic.py::test_pearson_r_xr_dask[time]
xskillscore/tests/test_deterministic.py::test_pearson_r_xr_dask[lat]
xskillscore/tests/test_deterministic.py::test_pearson_r_xr_dask[lat]
xskillscore/tests/test_deterministic.py::test_pearson_r_xr_dask[lon]
xskillscore/tests/test_deterministic.py::test_pearson_r_xr_dask[lon]
xskillscore/tests/test_deterministic.py::test_pearson_r_xr_dask[dim3]
xskillscore/tests/test_deterministic.py::test_pearson_r_xr_dask[dim3]
xskillscore/tests/test_deterministic.py::test_pearson_r_p_value_xr_dask[time]
xskillscore/tests/test_deterministic.py::test_pearson_r_p_value_xr_dask[time]
xskillscore/tests/test_deterministic.py::test_pearson_r_p_value_xr_dask[time]
xskillscore/tests/test_deterministic.py::test_pearson_r_p_value_xr_dask[lat]
xskillscore/tests/test_deterministic.py::test_pearson_r_p_value_xr_dask[lat]
xskillscore/tests/test_deterministic.py::test_pearson_r_p_value_xr_dask[lat]
xskillscore/tests/test_deterministic.py::test_pearson_r_p_value_xr_dask[lon]
xskillscore/tests/test_deterministic.py::test_pearson_r_p_value_xr_dask[lon]
xskillscore/tests/test_deterministic.py::test_pearson_r_p_value_xr_dask[lon]
xskillscore/tests/test_deterministic.py::test_pearson_r_p_value_xr_dask[dim3]
xskillscore/tests/test_deterministic.py::test_pearson_r_p_value_xr_dask[dim3]
xskillscore/tests/test_deterministic.py::test_pearson_r_p_value_xr_dask[dim3]
  /Users/aaron.spring/anaconda3/envs/climpred-dev/lib/python3.6/site-packages/dask/array/core.py:1263: FutureWarning: The `numpy.rollaxis` function is not implemented by Dask array. You may want to use the da.map_blocks function or something similar to silence this warning. Your code may stop working in a future release.
    FutureWarning,

-- Docs: https://docs.pytest.org/en/latest/warnings.html

See https://github.com/raybellwaves/xskillscore/pull/18 for the new tests.
