plantphys / spectratrait

A tutorial R package illustrating how to fit, evaluate, and report spectra-trait PLSR models. The package provides functions that extend the base functionality of the R pls package, identify an optimal number of PLSR components, and standardize model validation, along with vignette examples that use datasets sourced from EcoSIS (ecosis.org).

License: GNU General Public License v3.0

Topics: plsr-modeling, lma, leaf-spectra, predicting-plant-traits, plsr-model, nitrogen, spectra, scripts-illustrating, best-practices, rpubs

spectratrait's Introduction

PLSR modeling for the estimation of plant functional traits

This repository contains example scripts illustrating best practices for fitting, evaluating, and reporting leaf-level spectra-trait PLSR models. These scripts encompass several situations that you may encounter when doing PLSR. Start by reading Burnett et al. (2021), then work through the scripts or vignettes.

Article citation:

Burnett AC, Anderson J, Davidson KD, Ely KS, Lamour J, Li Q, Morrison BD, Yang D, Rogers A, Serbin SP (2021) A best-practice guide to predicting plant traits from leaf-level hyperspectral data using partial least squares regression. Journal of Experimental Botany. https://doi.org/10.1093/jxb/erab295

Source code citation:

EcoSML: https://ecosml.org/package/github/TESTgroup-BNL/spectratrait

Getting started, tips and tricks:

Depends:

ggplot2 (>= 3.3.2), remotes (>= 2.2.0), devtools (>= 2.3.1), readr (>= 1.3.1), RCurl (>= 1.98-1.2), httr (>= 1.4.2), pls (>= 2.7-2), magrittr (>= 2.0.1), dplyr (>= 1.0.1), reshape2 (>= 1.4.4), here (>= 0.1), plotrix (>= 3.7-8), gridExtra (>= 2.3), scales (>= 1.1.1), knitr (>= 1.4.2)

INSTALL

spectratrait is not currently on CRAN, but you can install it from GitHub using the devtools package. First, make sure you have all of the package dependencies installed. You can do this either by 1) installing the packages individually using install.packages(), for example:

install.packages("pls")
install.packages("ggplot2")
...

and so forth until all of the dependencies (listed above in the "Depends" section) are installed. Note: pay careful attention at this stage to any R messages in your terminal alerting you to update existing R packages or install new ones. These messages usually appear after you run install.packages() and require a y/n or multiple-choice response in your terminal before the install can continue.
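The individual calls above can also be collapsed into a single vector-based call. A minimal sketch, with the package list copied from the "Depends" section above; it only installs packages that are not already present (version requirements are not checked here):

```r
# Install any missing dependencies in one call (package names copied from
# the Depends section; minimum versions are not enforced by this sketch).
deps <- c("ggplot2", "remotes", "devtools", "readr", "RCurl", "httr", "pls",
          "magrittr", "dplyr", "reshape2", "here", "plotrix", "gridExtra",
          "scales", "knitr")
missing <- setdiff(deps, rownames(installed.packages()))
if (length(missing) > 0) install.packages(missing)
```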

Or 2) you can run or source the "install_dependencies.R" script located in inst/scripts, which should also install all of the required dependencies. Note: again, you will need to watch for any R prompts to update packages in order for the install to proceed correctly.

Finally, to complete the installation you will also need to install the spectratrait package itself. You can do this by copying and pasting the appropriate command below into your R or RStudio (preferred) terminal.

# to install the master branch version
devtools::install_github(repo = "plantphys/spectratrait", dependencies=TRUE)

# to install the master branch version - with Vignettes (though slower)
devtools::install_github(repo = "plantphys/spectratrait", dependencies=TRUE, build_vignettes = TRUE)

# to install a specific release, for example release 1.0.5
devtools::install_github(repo = "plantphys/[email protected]", dependencies=TRUE)

# or a specific branch, e.g. a branch named devbranch
devtools::install_github(repo = "plantphys/spectratrait", ref = "devbranch", dependencies=TRUE)

Contains:

  1. Core package functions are located in the main "R" folder
  2. inst/scripts contains example PLSR workflows for fitting leaf- and canopy-level spectra-trait PLSR models for different leaf traits, including LMA and foliar nitrogen
  3. Example datasets that can be loaded in your R environment using the base load() function can be found in the data/ folder
  4. man - the manual pages that are accessible in R
  5. tests - package tests to check that functions are still operational and produce the expected results
  6. vignettes - example Rmarkdown and GitHub markdown vignettes illustrating the various PLSR model-fitting examples. These can be used to learn how to apply the PLSR workflow and associated functions to new applications
  7. spectratrait_X.X.X.pdf (where X.X.X is the current release number) is the pdf documentation

Linked dataset citations, DOIs, and EcoSIS IDs/URLs:

  1. Leaf spectra, structural and biochemical leaf traits of eight crop species (Ely et al., 2019)
    EcoSIS URL: https://ecosis.org/package/leaf-spectra--structural-and-biochemical-leaf-traits-of-eight-crop-species
    EcoSIS ID: 25770ad9-d47c-428b-bf99-d1543a4b0ec9
    DOI: https://doi.org/10.21232/C2GM2Z
    Rpubs LeafN bootstrap example output: https://rpubs.com/sserbin/spectratrait_ex1
    Rpubs LeafN bootstrap by group (species) example output: https://rpubs.com/sserbin/spectratrait_ex2

  2. Leaf reflectance plant functional gradient IFGG/KIT
    Target variable: SLA
    EcoSIS URL: https://ecosis.org/package/leaf-reflectance-plant-functional-gradient-ifgg-kit
    EcoSIS ID: 3cf6b27e-d80e-4bc7-b214-c95506e46daa
    Rpubs example output: https://rpubs.com/sserbin/spectratrait_ex3

  3. Fresh leaf spectra to estimate LMA over NEON domains in eastern United States
    Target variable: LMA
    EcoSIS URL: https://ecosis.org/package/fresh-leaf-spectra-to-estimate-lma-over-neon-domains-in-eastern-united-states
    EcoSIS ID: 5617da17-c925-49fb-b395-45a51291bd2d
    DOI: https://doi.org/10.21232/9831-rq60
    Rpubs example output: https://rpubs.com/sserbin/spectratrait_ex4
    Rpubs example showing Serbin et al. (2019) applied to NEON data: https://rpubs.com/sserbin/spectratrait_ex9

  4. Canopy spectra to map foliar functional traits over NEON domains in eastern United States
    Target variable: leaf nitrogen
    EcoSIS URL: https://ecosis.org/package/canopy-spectra-to-map-foliar-functional-traits-over-neon-domains-in-eastern-united-states
    EcoSIS ID: b9dbf3db-5b9c-4ab2-88c2-26c8b39d0903
    DOI: https://doi.org/10.21232/e2jt-5209
    Rpubs leaf nitrogen example output: https://rpubs.com/sserbin/spectratrait_ex5

  5. Leaf spectra of 36 species growing in Rosa rugosa invaded coastal grassland communities in Belgium
    Target variable: LMA, leaf nitrogen
    EcoSIS URL: https://ecosis.org/package/leaf-spectra-of-36-species-growing-in-rosa-rugosa-invaded-coastal-grassland-communities-in-belgium
    EcoSIS ID: 9db4c5a2-7eac-4e1e-8859-009233648e89
    DOI: https://doi.org/10.21232/9nr6-sq54
    Rpubs LeafN example output: https://rpubs.com/sserbin/spectratrait_ex6
    Rpubs LeafN bootstrap example output: https://rpubs.com/sserbin/spectratrait_ex7
    Rpubs LMA example output: https://rpubs.com/sserbin/spectratrait_ex8

Build status

Auto-run PLSR example: run_PLSR_example-auto
CI run PLSR example: ci-run_PLSR_example
CI OS and R Release Checks: R-CMD-check-OS-R
Weekly CI Checks: R-CMD-check-Weekly
EcoSIS API Check: run_ecosis_pull_example


spectratrait's Issues

Create new apply function that applies PLSR models to new data

apply_plsr(..) to take PLSR coefficients/permutation coefficients and an input spectra matrix and generate new values in a dataframe.

Options should be:
  • input coefficients as either an R object or a specific file(s)
  • input reflectance as either an R matrix or a specific file
  • wavelength prefix (e.g. "Wave_")
  • metadata_columns - those in the input spectra to keep in the output object

I don't see this as overly complicated, but we will want to have a check that the wavelengths match. We also need to indicate whether there is an intercept and where it is.

I'm sure other considerations will come up.

Use EcoSIS spectra and built-in coefficients as examples

As part of this, build in the Serbin et al. (2019) LMA coefficients.
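A minimal base-R sketch of what such an apply function could look like, assuming coefficients are stored as a named vector with the intercept first (the function name, object names, and values here are hypothetical, not the package's actual implementation):

```r
# Hypothetical sketch of apply_plsr(): predict trait values from a named
# coefficient vector (intercept first, then one coefficient per wavelength)
# and a reflectance table with matching wavelength columns.
apply_plsr_sketch <- function(coefs, spectra) {
  wl <- names(coefs)[-1]                      # coefficient wavelengths
  stopifnot(all(wl %in% colnames(spectra)))   # check that wavelengths match
  as.vector(coefs[1] + as.matrix(spectra[, wl]) %*% coefs[-1])
}

# Toy inputs for illustration
coefs <- c(Intercept = 1.5, Wave_500 = 0.2, Wave_510 = -0.1)
spectra <- data.frame(Wave_500 = c(0.4, 0.5), Wave_510 = c(0.3, 0.2))
apply_plsr_sketch(coefs, spectra)  # intercept + weighted reflectances
```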

Adding group stratification to pls_permutation and selection of the number of components

If ensuring that the validation data set has a similar distribution to the calibration data set is important for model validation, and is achieved by stratifying the random selection procedure used to split the data set, then I think it should be equally important when permuting the calibration data set to select the optimal number of components.
I suggest improving find_optimal_components by allowing it to stratify the random sampling in each permutation.
In case you find it interesting, although I am far from an expert, I have already modified pls_permutation and find_optimal_components to allow for the input of a groups object of the form c("var1", "var2"..., "varn"). I named these functions pls_permut_by_groups and find_optimal_comp_groups. pls_permut_by_groups now seems to perform well. I'm not sure about find_optimal_comp_groups, as I get the following error:
Error: 'pls_permut_by_goups' is not an exported object from 'namespace:spectratrait'
Called from: getExportedValue(pkg, name)
I understand the issue, but I don't know how to tell getExportedValue that the new function is not in spectratrait.
Of course, this should not be an issue if you find that the function could be included within spectratrait.

Manuscript resubmission checklist

  • Convert to package (shawn)
  • Remove unused depends (@neo0351)
  • Update README - revise links and update contains and other text (@neo0351 )
  • Update example scripts in inst/scripts (shawn)
  • Update vignettes (shawn)
  • Add tests (all)
  • TEST group testing sesh, squash bugs (all)
  • Create internal dataset to use in an example not dependent on EcoSIS
  • Create additional supplemental example that sources a package dataset (shawn)
  • Update example script to include supplemental figure captions
    ...
  • more here

Package load syntax is wrong - need to fix I think

> lapply(list.of.packages, library, character.only = TRUE)
[[1]]
 [1] "spectratrait" "gridExtra"    "ggplot2"      "plotrix"      "here"         "reshape2"    
 [7] "dplyr"        "pls"          "stats"        "graphics"     "grDevices"    "utils"       
[13] "datasets"     "methods"      "base"        

I don't think that's correct. Maybe at some point I screwed up that syntax. Clearly it's wrong, and it should be doing library(package), etc.
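For what it's worth, the printed list may simply be lapply() collecting the (normally invisible) return values of library(), rather than a syntax error. A small base-R illustration, using stand-in package names:

```r
# library() invisibly returns the search path; lapply() collects those
# return values, and R prints the resulting list at top level. Wrapping
# the call in invisible() keeps the loading behaviour while suppressing
# the printout.
list.of.packages <- c("stats", "utils")  # stand-in package names
invisible(lapply(list.of.packages, library, character.only = TRUE))
```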

Possible bug in create_data_split when working with groups

Apparently, the function performs a stratified random sampling for cal_data, but always takes the last rows of the dataset for val_data, even reusing cases already selected for cal_data:

plot <- rep(c("plot1", "plot2", "plot3"), each = 42)
season <- rep(1:6, 21)
disease <- c(rep(0, 84), rep(1, 42))
d <- seq(1:126)
df <- data.frame(plot, season, disease, d)
df <- df %>% mutate(id = row_number())
names(df)
[1] "plot" "season" "disease" "d" "id"
df$id
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
[19] 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
[37] 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
[55] 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
[73] 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
[91] 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
[109] 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126

split_data <- spectratrait::create_data_split(dataset=df, approach="dplyr", split_seed=7529075, prop=0.8, group_variables=c("plot", "season", "disease"))

split_data$cal_data$id
[1] 7 19 13 25 37 14 38 20 32 26 3 21 15 9 27 10 34 16
[19] 4 40 11 35 5 17 41 12 36 6 18 42 73 55 67 49 43 44
[37] 68 56 62 50 69 75 51 45 63 52 58 82 76 70 53 71 59 77
[55] 47 54 48 66 84 72 85 109 121 91 103 86 116 98 110 92 93 123
[73] 117 99 111 112 118 94 88 100 89 113 101 95 107 102 90 120 126 114
split_data$val_data$id
[1] 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
[19] 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126

table(split_data$cal_data$plot,split_data$cal_data$season, split_data$cal_data$disease)
, , = 0
1 2 3 4 5 6
plot1 5 5 5 5 5 5
plot2 5 5 5 5 5 5
plot3 0 0 0 0 0 0
, , = 1
1 2 3 4 5 6
plot1 0 0 0 0 0 0
plot2 0 0 0 0 0 0
plot3 5 5 5 5 5 5

table(split_data$val_data$plot,split_data$val_data$season, split_data$val_data$disease)
, , = 1
1 2 3 4 5 6
plot3 6 6 6 6 6 6

add internal example dataset

To address @alistairrogers' comment about examples that aren't dependent on external tools, we will add a built-in dataset example and an associated script. The plan is to load the data using the R data() command; the rest of the steps would look the same, that is, no reliance on EcoSIS.

R2 reporting bug

In our example scripts, particularly when we create the side-by-side cal/val plots

Nmass_mg_g_Cal_Val_Scatterplots

I found a bug where the R2 doesn't match the R2 in the final validation plot

Nmass_mg_g_PLSR_Validation_Scatterplot

That's because for the cal/val plotting we are computing the R2 using

val.R2 <- round(pls::R2(plsr.out,newdata=val.plsr.data)[[1]][nComps],2)

but that doesn't account for the fact that the intercept is included, so instead of, say, giving me the results for nComps=5 it's giving me the value at nComps=4. We need to do

val.R2 <- round(pls::R2(plsr.out,newdata=val.plsr.data)[[1]][nComps+1],2)

which is what we do for the validation plot. So we need to make a change in our scripts to address this and re-run the examples :/

@julien is this also what you saw? Or was that a different issue?
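The off-by-one can be illustrated with plain indexing (the R2 values below are made up; the point, as described above, is that pls::R2() reports the intercept-only model first, then one value per component):

```r
# Made-up R2 sequence as returned for a 5-component model:
# position 1 = intercept-only model, positions 2..6 = components 1..5.
r2_vals <- c(0.00, 0.61, 0.74, 0.79, 0.81, 0.83)
nComps <- 5
wrong <- r2_vals[nComps]      # actually the 4-component value
right <- r2_vals[nComps + 1]  # the intended 5-component value
```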

nComp selection seems not to work properly

I am trying to fit a PLSR model of hyperspectral (VNIR) data on SLA (mean of canopy values per tree) using pls and spectratrait, following the tutorial provided with it. But I find a lack of congruence between the results of spectratrait::find_optimal_components and what I see in the training, cross-validation, and validation results with different numbers of components.
find_optimal_components always selects 1 to 3 components (depending on the split proportion), but all three data sets seem to improve significantly at least until 8 components.
[Four screenshots omitted: training, cross-validation, and validation results for different numbers of components]

There is surely something I am missing, and I cannot work out the reason why, but there seems to be general agreement between training and validation results, while cross-validation results show a very different pattern.
My data come from a complex experimental design with factors plot, disease, and season. I use these factors for creating the validation data set... but it seems to me that find_optimal_components does not account for this stratification. Could this be the reason for the differing behaviour of cross-validation? Could I use spectratrait::create_data_split and then run the model iteratively for different splits to select the optimal number of components?
Even disregarding this difference, it seems that the optimal number of components should be somewhere around 8 (if we use cross-validation as a reference), but the procedure always gives values between 1 and 3 (tried with different methods).
Might the lack of stratification in cross-validation be behind this weird behaviour?
Best regards,
Asier

Unit tests

Started but we could use more

> devtools::test("PLSR_for_plant_trait_prediction")
Loading spectratrait
Testing spectratrait
✔ |  OK F W S | Context
⠏ |   0       | Grabbing data via the EcoSIS API works                                                                                                              [1] "**** Downloading Ecosis data ****"
✔ |   1       | Grabbing data via the EcoSIS API works [8.0 s]

══ Results ════════════════════════════════
Duration: 8.0 s

[ FAIL 0 | WARN 0 | SKIP 0 | PASS 1 ]

rename repo?

Should we just go ahead and rename this repo to the name of the R package it builds? The current name is a holdover from when this was just a repo of scripts. It may make sense to rename it to "spectratrait". Thoughts?

@JulienLamour @regnans @neo0351 @DavidsonKen

Deprecate use of reshape2

Replace use of reshape2::melt() with new suggested package data.table::melt()

reshape2 is apparently going the way of the dodo at some point soonish (or already has, or maybe no longer is?). We need to confirm and either address this issue or close it.

Refine model component selection

Both approaches are less than desirable for different reasons

  1. pls jackknife selection often results in more components than necessary

  2. T-test permutation requires a large enough number of iterations; otherwise the auto-select chooses far too few. Or, if there is a lot of variance in the data, it can result in the selection of a small number of components, for example with the https://ecosis.org/package/leaf-reflectance-plant-functional-gradient-ifgg-kit dataset. My tests keep selecting 2 components, which isn't correct. I wonder if we can refine how we find the minimum?

Build manual

Make sure to create the PDF manual

In the package folder, run:

devtools::build_manual(pkg = ".", path = NULL)

Issues Found by Ken 20200814 expanded_spectra-trait_plsr_example.R

line 117: val.pls.data<-plsr_data[!plsr_data$id %in% cal.plsr.data$id,] change to val.plsr.data<-plsr_data[!plsr_data$id %in% cal.plsr.data$id,]

line 153: object cal_hist_plot returns an error. I think this is because the object cal.plsr.data, which is used to make the plot, is an S3 grouped object and thus qplot doesn't know how to deal with it. I changed cal.plsr.data to a data.frame and it worked fine for the rest of the script.

line 193 and 195: Error in xy.coords(x, y, setLab = FALSE) : 'x' and 'y' lengths differ. Looks like there is an additional entry "id" in the cal.plsr.data$Spectra vector which is causing the differing lengths

lines 445, 449, 593, 595 and 599: Error in xy.coords(x, y, xlabel, ylabel, log) : 'x' and 'y' lengths differ. I think this is the same issue as above. The vectors "coefs" and "vips" have a length of 1902, which is 1 longer than the length of seq(Start.wave,End.wave,1). The last entry in both of these vectors is named id.

Replace old repo name in install example

Somehow we still have

devtools::install_github(repo = "TESTgroup-BNL/PLSR_for_plant_trait_prediction", dependencies=TRUE)

and this should be

devtools::install_github(repo = "TESTgroup-BNL/spectratrait", dependencies=TRUE)

Bug: dplyr data split approach assumptions

FYI @JulienLamour: when playing with another dataset from EcoSIS, I realized that your dplyr method for splitting data assumes the dataset contains "Sample_ID", which is based on the original example but is not universal.

    } else if (approach=="dplyr") {
      cal.plsr.data <- plsr_data %>% 
        group_by_at(vars(all_of(group_variables))) %>% 
        slice(sample(1:n(), prop*n())) %>% 
        data.frame()
      val.plsr.data <- plsr_data[!plsr_data$Sample_ID %in% cal.plsr.data$Sample_ID,]
    }

We will need to revise how we select out the non-calibration data so it doesn't depend on knowing which variables are in the dataset. Currently I am not sure how to do that with this approach, as it doesn't create a new variable to track which rows are part of the selected calibration dataset, unlike the original version as revised by @neo0351. We will need to fix this to make it more generic.
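One possible direction, sketched in base R with toy data: tag rows with a temporary row-index column before the grouped sampling, so the validation selection never depends on a dataset-specific ID column (all names and data here are illustrative, not the package's actual implementation):

```r
# Toy dataset with one grouping variable
plsr_data <- data.frame(group = rep(c("a", "b"), each = 10), y = rnorm(20))

# Track rows by a temporary index instead of assuming a "Sample_ID" column
plsr_data$.row_id <- seq_len(nrow(plsr_data))

# Stratified 80% calibration sample within each group
set.seed(7529075)
cal_ids <- unlist(lapply(split(plsr_data$.row_id, plsr_data$group),
                         function(ids) sample(ids, floor(0.8 * length(ids)))))

# Validation data is simply everything not selected for calibration
cal.plsr.data <- plsr_data[plsr_data$.row_id %in% cal_ids, ]
val.plsr.data <- plsr_data[!plsr_data$.row_id %in% cal_ids, ]
```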

Possible bugs detected by Angie #1

Originally posted by Angie on the wrong git: serbinsh/SSerbin_etal_2019_NewPhytologist#1

Reposted here

When running the script via the RPubs version, nComps was not generated automatically (pressDFres not found in the t.test script). I had to manually set nComps <- 14 (guided by the annotation) to move to the next step.

When running the script from GitHub using RStudio, I could see the graphics provided in the Rmd file but running the code chunks didn't generate my own graphics in the graphics window of RStudio. I expected I would see the graphics there.

I tried both of these scripts from GitHub:
run_lma_plsr_example.Rmd
run_lma_plsr_example_neon.Rmd

I also think the uninitiated user would benefit from a bit more description of what the code is doing, throughout.

Replace deprecated qplot in Vignettes

Warning: qplot() was deprecated in ggplot2 3.4.0.

e.g.

cal_hist_plot <- qplot(cal.plsr.data[,paste0(inVar)],geom="histogram",
                       main = paste0("Cal. Histogram for ",inVar),
                       xlab = paste0(inVar),ylab = "Count",fill=I("grey50"),col=I("black"),
                       alpha=I(.7))
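A possible ggplot()-based replacement for the deprecated call, with stand-in data since cal.plsr.data and inVar come from the vignette environment:

```r
library(ggplot2)

# Stand-in objects for illustration; in the vignette these already exist
inVar <- "LMA_g_m2"
cal.plsr.data <- data.frame(LMA_g_m2 = rnorm(100, 100, 15))

# Equivalent histogram built with ggplot() instead of qplot()
cal_hist_plot <- ggplot(cal.plsr.data, aes(x = .data[[inVar]])) +
  geom_histogram(fill = "grey50", colour = "black", alpha = 0.7) +
  labs(title = paste0("Cal. Histogram for ", inVar),
       x = inVar, y = "Count")
```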

Ecosis cannot download data

Cannot download data using read.csv or read_csv; output for both is provided below. However, if I paste the URL into Chrome, it downloads the file.

dat_raw <- read.csv(ecosis_file, na.strings = "")
Error in file(file, "rt") : 
  cannot open the connection to 'https://ecosis.org/api/package/5617da17-c925-49fb-b395-45a51291bd2d/export?metadata=true'
In addition: Warning message:
In file(file, "rt") :
  URL 'https://ecosis.org/api/package/5617da17-c925-49fb-b395-45a51291bd2d/export?metadata=true': status was 'SSL peer certificate or SSH remote key was not OK'
dat_raw <- readr::read_csv(ecosis_file)
Error in open.connection(con, "rb") : 
  SSL certificate problem: certificate has expired

baseR create_data_split tweaks

@JulienLamour pointed out here that we have hard-coded a minimum number of obs needed for the data split but then provide no error handling when the obs are too low. We should instead either stop and report that the groups are insufficient, or provide a solution, such as randomly putting the obs into either cal or val and reporting as such, since again this is really about selecting cal/val datasets.

https://github.com/TESTgroup-BNL/spectratrait/blob/4e472c6ae0e5833faae7d2f0f2b2fd9a2b48c196/R/create_data_split.R#L30

Testing from the POV of unskilled user

I'm trying to be "that reviewer" who has no idea what they are doing with GitHub or R (not that far from the truth).

Running spectra-trait_reseco_lma_plsr_example.R:
I had some issues getting started, but will make that a different issue.
Once I had the code running in RStudio (v1.4, with R 4.0.3), these are potential issues.

At Line 31: install devtools. This outputs a list of packages (n = 56). The instructions about "enter one or more numbers" seem to be repeated and confusing.


56: readr (1.3.1 -> 1.4.0 ) [CRAN]

Enter one or more numbers, or an empty line to skip updates:list.of.packages <- c("pls","dplyr","reshape2","here","plotrix","ggplot2","gridExtra","spectratrait")
Enter one or more numbers, or an empty line to skip updates:invisible(lapply(list.of.packages, library, character.only = TRUE))
Enter one or more numbers, or an empty line to skip updates:#--------------------------------------------------------------------------------------------------#
Enter one or more numbers, or an empty line to skip updates:

Entered “1”
Next output message:

There are binary versions available but the source versions are later:
binary source needs_compilation
waldo 0.2.3 0.2.4 FALSE
promises 1.1.1 1.2.0.1 TRUE

Do you want to install from sources the package which needs compilation? (Yes/no/cancel)

Entered “Yes” (maybe guidance to this should be given in instructions).

Continued running chunks, all working until Line 99.

Error in sample_info %>% select(Plant_Species = Latin Species, Species_Code = species code, :
could not find function "%>%"

Tried to load packages that might help with "%>%"

library(dplyr)
Error: package or namespace load failed for ‘dplyr’ in loadNamespace(i, c(lib.loc, .libPaths()), versionCheck = vI[[i]]):
namespace ‘rlang’ 0.4.7 is already loaded, but >= 0.4.9 is required

library(magrittr)
Error in value[3L] :
Package ‘magrittr’ version 1.5 cannot be unloaded:
Error in unloadNamespace(package) : namespace ‘magrittr’ is imported by ‘tibble’, ‘testthat’ so cannot be unloaded

Testing ended at this point. Unable to continue.
I have recently used dplyr in other code without an issue. Not sure what has changed.

Edits suggested by Angie 9/23

Change LMA to SLA in the filename etc. for expanded_spectra-trait_kit_lma_plsr_example.R.
Rename files without 'expanded' and delete the simple file.

Bug: need to generalize assumptions in base R data split

@neo0351

FYI -

> apply(plsr_data[, split_var], MARGIN = 1, FUN = function(x) paste(x, collapse = " "))
Error in apply(plsr_data[, split_var], MARGIN = 1, FUN = function(x) paste(x,  : 
  dim(X) must have a positive length

I found a bug in your split function when trying out a new dataset
https://ecosis.org/package/leaf-reflectance-plant-functional-gradient-ifgg-kit
ID: 3cf6b27e-d80e-4bc7-b214-c95506e46daa

Not yet sure what the issue is, but as coded it looks like it assumes there will be two grouping variables. We need this to be flexible enough to handle 1+ grouping variables.

And also FYI - if I try with two grouping vars I get this

> split_data <- create_data_split(approach=method, split_seed=2356812, prop=0.8, 
+                                 group_variables=c("Growth_Form","Plant_Species"))
NA   Cal: 79.9729364005413%
 Not enough observations

Again these functions need to be general to allow for flexibility. We will need to fix this to allow for different numbers of grouping variables with different numbers of obs.

Create a new "plsr_companion" R package to contain custom TEST group functions/approaches?

Given the increased complexity of, and interest in using, the example PLSR R script, the question now is whether we should create a small R package containing the functions developed for the scripts, in order to make it easier to find and use these functions in custom fitting scripts. The idea is that this package would be a companion to the standard pls package, containing custom functions to make the approach easier. It could also contain 1 or 2 example datasets.
