neuropsychology / psycho.R
An R package for experimental psychologists
Home Page: https://neuropsychology.github.io/psycho.R/
License: Other
Hello,
Do you support other languages? I'm thinking of something like psycho::analyze(..., lang = "TR").
If so, I may help in translating to Turkish.
Best wishes.
The get_means() function is not found; it throws an error. I am using the currently available version of psycho installed via install.packages(). I am using Microsoft R Open 3.5.1.
library(psycho)
require(lmerTest)
fit <- lmer(Subjective_Valence ~ Emotion_Condition * Participant_Sex + (1|Participant_ID), data=emotion)
anova(fit)
get_means(fit)
Error in get_means(fit) : could not find function "get_means"
Describe the problem.
Hi,
I am currently trying to use the analyze function with Blavaan, but I am getting errors, and I am not sure why.
I am using the following code:
fit <- bsem(bayesModelPB, cp = "srs", convergence = "auto", data=data)
FitSummary<- analyze(fit, CI = 90, standardize=T)
print(FitSummary)
However, I get the error
1: In log(z) : NaNs produced
2: In if (standardize == FALSE) { :
the condition has length > 1 and only the first element will be used
3:
the condition has length > 1 and only the first element will be used
When I print the results, only the fit statistics are presented.
I have also tried running the code as:
FitSummary<- analyze(fit)
And
FitSummary<- analyze(fit, standardize=FALSE)
When standardize = FALSE I get the error
Error in if (median >= 0) { : argument is of length zero
I have checked Blavaan and psycho, and reinstalled the most recent version of all packages and dependencies I could see. Is there anywhere I have gone wrong here?
Dear Dominique,
Thank you very much for your package, I think it will be very useful to me in the future.
I am notably interested in the find_best_model function, and I encountered the following bugs while exploring it.
find_best_model.stanreg gives the following error message when a random intercept is entered in the formula :
df <- psycho::emotion
fit <- stan_lmer(Autobiographical_Link ~ Emotion_Condition + Subjective_Valence + (1|Participant_ID), data=df)
best <- find_best_model(fit)
Error in f_i(data_i = data[i, , drop = FALSE], draws = draws, ...) :
unused argument (k_treshold = k_treshold)
Solution
It seems to come from the loo function of the rstanarm package. I added the following lines into find_best_model.stanreg to temporarily solve the problem :
if (!is.null(k_treshold)) {
loo <- rstanarm::loo(newfit, k_treshold = k_treshold)
} else {
loo <- rstanarm::loo(newfit)
}
Inside find_best_model.stanreg, warning messages are sent when accessing the loos estimates :
loo$elpd_loo
Warning message:
Accessing elpd_loo using '$' is deprecated and will be removed in a future release. Please extract the elpd_loo estimate from the 'estimates' component instead.
Solution
I replaced them with those lines:
Estimates <- loo[["estimates"]]
model <- data.frame(
formula = formula,
complexity = complexity - 1,
R2 = R2s[[formula]],
looic = Estimates["looic","Estimate"],
looic_se = Estimates["looic","SE"],
elpd_loo = Estimates["elpd_loo","Estimate"],
elpd_loo_se = Estimates["elpd_loo","SE"],
p_loo = Estimates["p_loo","Estimate"],
p_loo_se = Estimates["p_loo","SE"],
elpd_kfold = Estimates["p_loo","Estimate"],
elpd_kfold_se = Estimates["p_loo","SE"]
)
The find_best_model function works for the merModLmerTest class used in lmerTest v2.0-36 but not for the lmerModLmerTest class used in lmerTest v3.0.
df <- psycho::emotion
fit2 <- lmerTest::lmer(Autobiographical_Link ~ Emotion_Condition + Subjective_Valence + (1|Participant_ID), data=df)
best <- find_best_model(fit2)
Error in UseMethod("find_best_model") :
no applicable method for 'find_best_model' applied to an object of class "c('lmerModLmerTest', 'lmerMod', 'merMod')"
Solution
From what I understood, the problem seems to come from the find_combinations.formula function. However, I don't know how to simply resolve this problem.
I am new to github and I will try to push my find_best_model.stanreg enhancement in the dev branch immediately.
Thanks in advance !
Point 5 from JOSS Review:
I like the inclusion of partial correlations. Perhaps an option could be to estimate these using the glasso package rather than significance testing? the EBICglasso function from qgraph, which wraps around glasso from the glasso package, should easily be useable to this end in the backend.
Point 7 from JOSS Review:
The documentation should make clearer what parts are based on other packages. For example, n_factors uses qgraph's cor_auto in the background, which uses lavaan in the background, but both packages are not referenced. I think the psycho package also heavily relies on the psych package, but it is not referenced in the documentation at all.
Hi,
I am not sure, but calling analyze with index = "ROPE" returns the ROPE in the printed text but not in the summary table. For Bayesian correlation it includes both ROPE and overlap.
I had a quick look at the source and found that analyze.stanreg only includes the overlap. Is it possible to include the ROPE as well?
Cheers,
Tobi
I just tried to install the package with devtools::install_github("neuropsychology/psycho.R") in RStudio Version 1.1.447, and right at the end I was given the news that
Error in loadNamespace(i, c(lib.loc, .libPaths()), versionCheck = vI[[i]]) :
namespace 'loo' 1.1.0 is being loaded, but >= 2.0.0 is required
ERROR: lazy loading failed for package 'psycho'
* removing 'C:/Users/KM TRADING/Documents/R/win-library/3.4/psycho'
In R CMD INSTALL
Installation failed: Command failed (1)
So psycho.R has not been installed on my laptop.
Is there a solution to this problem and/or do you need more from me?
I tried to replicate the steps of this post (https://neuropsychology.github.io/psycho.R/2018/05/01/repeated_measure_anovas.html) but unfortunately the functions analyze() and get_contrasts() only produce error messages like this:
Error in UseMethod("analyze") :
no applicable method for 'analyze' applied to an object of class "c('merModLmerTest', 'lmerMod', 'merMod')"
What kind of problem might cause these errors?
This is the class of my model (the replicated model of the post above):
[1] "merModLmerTest"
attr(,"package")
[1] "lmerTest"
Concerning the find_best_model.lmerModLmerTest function, different orderings of the fixed effects in a fit's formula return different results when they shouldn't.
Example :
library(psycho)
library(lmerTest)
data <- affective
data <- data[complete.cases(data),]
Aff.full1 <- lmer(Life_Satisfaction ~ Salary + Birth_Season + (1|Age) ,data = data)
Affbest1 <- find_best_model(Aff.full1)
Affbest1$table
Aff.full2 <- lmer(Life_Satisfaction ~ Birth_Season + Salary + (1|Age) ,data = data)
Affbest2 <- find_best_model(Aff.full2)
Affbest2$table
# Manual anova to compare the criterions with the find_best_model function
Aff.Birth <- lmer(Life_Satisfaction ~ Birth_Season + (1|Age) ,data = data)
Aff.Salary <- lmer(Life_Satisfaction ~ Salary + (1|Age) ,data = data)
anova(Aff.Birth,Aff.Salary)
When running these lines, the displayed criteria are the same for both tables, but the formulas are ordered differently.
It seems that the table is right only for the model whose fixed factors are sorted in reverse alphabetical order.
This conclusion can be verified with any other dataset.
Inside find_best_model.lmerModLmerTest, I have added a line to sort the comparison data frame by its row names before adding the formulas to it.
# Model comparison
comparison <- as.data.frame(do.call("anova", models))
# Reordering the rows before adding the formulas
comparison <- comparison[order(row.names(comparison)), ]
comparison$formula <- combinations
It should now work as intended.
It would be nice to remove the dependency on MuMIn that is only used for GLMM's R2 and reimplement it using Nakagawa's (2017) most recent method.
When I try to run:
fit <- lmerTest::lmer(rt ~ cond + (1|subject), data=dt)
contrasts = get_contrasts(fit, formula="cond", adjust="tukey")
I get the error:
Error in .f(.x[[i]], ...) : object 'lower.CL' not found
I installed from the latest build of psycho from github.
And my related package versions are:
> packageDescription("lmerTest")$Version
[1] "3.0-1"
> packageDescription("lme4")$Version
[1] "1.1-19"
> packageDescription("emmeans")$Version
[1] "1.3.0"
Is this a known issue? Is there a fix/workaround?
Thanks!
I'm trying to compute d' and beta using the dprime function, and I receive the above error. I'm trying to run the function on aggregated data that does not contain NAs (see below), so I'm not sure why I receive this error. Do you have any suggestions as to why I'm receiving this error and of what I can do to ensure that the input is acceptable?
Subject Hit FA Miss CR
1 1 18 7 0 11
2 2 18 2 0 16
3 4 18 2 0 16
4 5 16 0 2 18
5 6 14 14 4 4
6 7 7 0 11 18
7 8 8 7 10 11
8 9 11 10 7 8
9 13 8 6 10 12
10 14 4 1 14 17
11 15 0 1 18 17
12 16 2 2 16 16
13 17 10 11 8 7
14 18 16 0 2 18
15 19 7 6 11 12
16 20 3 2 15 16
17 21 18 1 0 17
18 22 18 1 0 17
19 23 15 6 3 12
20 24 17 7 1 11
21 25 18 0 0 18
22 26 2 2 16 16
23 27 10 1 8 17
24 28 16 3 2 15
25 29 15 0 3 18
26 31 17 0 1 18
27 32 10 9 8 9
28 33 18 2 0 16
29 36 3 6 15 12
30 37 8 11 10 7
31 38 1 2 17 16
32 39 5 0 13 18
33 40 0 0 18 18
34 41 5 8 13 10
35 42 15 2 3 16
36 43 1 0 17 18
37 44 1 0 17 18
38 45 4 4 14 14
39 46 7 7 11 11
40 47 8 7 10 11
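For what it's worth, a common cause of such errors in signal-detection computations is qnorm() on exact 0 or 1 rates produced by the zero cells visible above (e.g. subjects with 0 misses). A minimal base-R sketch using the log-linear correction (the 0.5 constant is a standard convention, not necessarily psycho's default):

```r
# Hedged sketch: d' and beta with the log-linear correction, which keeps
# hit/false-alarm rates strictly inside (0, 1) even when a cell is zero.
# Column names follow the table above; this is illustrative, not psycho's code.
dprime_loglinear <- function(hit, fa, miss, cr) {
  hr  <- (hit + 0.5) / (hit + miss + 1)  # corrected hit rate
  far <- (fa + 0.5) / (fa + cr + 1)      # corrected false-alarm rate
  data.frame(
    dprime = qnorm(hr) - qnorm(far),
    beta   = exp((qnorm(far)^2 - qnorm(hr)^2) / 2)
  )
}

# Subject 1 has 0 misses, yet the corrected estimates stay finite:
dprime_loglinear(hit = 18, fa = 7, miss = 0, cr = 11)
```

Comparing such corrected values against the function's output may help locate where the NA/NaN is introduced.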
This is just a heads up about something that will need to be fixed once the next versions of loo and rstanarm are released. In the upcoming release of version 2.1.0 of the loo package we will be exporting a generic kfold function, and the next rstanarm will import that generic and then provide a kfold.stanreg method. This will result in a warning when checking the psycho package, because you are currently importing all of the loo package and also importing kfold from rstanarm. Once loo and rstanarm are released you can easily fix the problem by importing just kfold.stanreg from rstanarm instead of kfold. If you have any trouble let me know and I can help out.
Whenever I call the analyze() function on a lmerTest object I get:
Error in UseMethod("analyze") : no applicable method for 'analyze' applied to an object of class "c('merModLmerTest', 'lmerMod', 'merMod')"
id = 15
obs = 10
data<- tibble::tibble(id = rep(1:id, obs),
obs = rep(1:obs, 15),
y = rnorm(150),
x = rnorm(150))
fit<-lmer(y ~ x + (1|id), data=data)
analyze(fit)
Converting data to data.frame does not help.
Hello!,
First of all, nice work on this package! It would've saved me a lot of going back and forth with APA style if I had discovered it earlier.
I have just started trying it out, but I found that the output of the analyze function for a correlation returns a somewhat odd description:
The Pearson's product-moment correlation between a and b is significantly large and positive (r(8) = 10, 95% CI [10, 1], p < .001).
I suppose what is meant is "significant, large, and positive" (with Oxford comma included; see page 88 of APA's 6th Edition). The p-value and the "significant" refer to the test of the obtained correlation against a null correlation, while the inference that the sentence leads one to make ("significantly large") would only be valid with something more exotic, e.g., a test of the null hypothesis that the correlation is equal to or higher than 0.80 (the criterion for large), i.e., a one-sided, lower-tail test against the value of 0.80.
library(psycho)
a <- seq(1, 10)
b <- seq(10, 19)
cor_results <- cor.test(a, b)
psycho::analyze(cor_results)
Hello! Before posting my request, many thanks for the package!
I have used it successfully with simpler models. But with glmer I get the following error message:
"Error in glmer(formula = ....._ :
fitting model with the observation-level random effect term failed. Add the term manually
In addition: Warning message:
In KhatriRao(sm, t(mm)) : (p <- ncol(X)) == ncol(Y) is not TRUE"
[for the simpler model: one continuous predictor, one categorical predictor, one random term; standard glmer works fine]
Again, many thanks for your work!
Describe the problem.
> library(psycho)
Error : package or namespace load failed for ‘psycho’ in loadNamespace(i, c(lib.loc, .libPaths()), versionCheck = vI[[i]]):
no package ‘xts’ found
> install.packages("xls")
Warning in install.packages :
package ‘xls’ is not available (for R version 3.5.1)
When NA values are in a dataset, using the find_best_model function returns the following error:
data <- affective
fit <- lmer(Tolerating ~ Salary + Life_Satisfaction + Concealing + (1|Sex) + (1|Age), data=data)
best <- find_best_model(fit)
Error in anova.merMod(new("lmerModLmerTest", vcov_varpar = c(0.0686147429065321, :
models were not all fitted to the same size of dataset
Solution
Removing NA values only for the variables used in the formula
# Recreating the dataset without NA
dataComplete <- get_all_vars(fit)[complete.cases(get_all_vars(fit)), ]
# fit models
models <- c()
for (formula in combinations) {
newfit <- update(fit, formula, data = dataComplete)
models <- c(models, newfit)
}
Using the same function, warning messages are always displayed:
data <- affective
fit <- lmer(Tolerating ~ Salary + Life_Satisfaction + Concealing + (1|Sex) + (1|Age), data=data)
best <- find_best_model(fit)
Warning message:
In anova.merMod(new("lmerModLmerTest", vcov_varpar = c(0.0686147429065321, :
failed to find model names, assigning generic names
Solution
Hiding warnings when the anova are computed.
# No warnings for this part
options(warn = -1)
# Model comparison
comparison <- as.data.frame(do.call("anova", models))
comparison$formula <- combinations
# Re-displaying warning messages
options(warn = 0)
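As an alternative to toggling the global warn option, suppressWarnings() scopes the silencing to a single expression, so unrelated warnings elsewhere in find_best_model still surface. A sketch with a stand-in for the anova call:

```r
# noisy() stands in for the do.call("anova", models) step, which emits
# the "failed to find model names" warning; suppressWarnings() silences
# only this one expression, leaving the global warn option untouched.
noisy <- function() {
  warning("failed to find model names, assigning generic names")
  data.frame(AIC = c(101.2, 99.8))
}
comparison <- suppressWarnings(noisy())
```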
Point 2 from JOSS Review:
It is not clear where I can find documentation for, or whether there are, arguments to the print method for, say, the output of correlation().
lintr: local variable p.mat assigned but may not be used.
see https://github.com/neuropsychology/psycho.R/blob/master/R/correlation.R#L54
Hello Dominique,
I've been using mixed-effects models since 2010 in my scientific publications and I find your package very promising to promote these models in the psycho/neuroscience community (I've been trying myself to promote them among my colleagues and students). My concern is that the verbal explanations you give for instance with "print (results)" as shown in
https://neuropsychology.github.io/psycho.R//2018/05/01/repeated_measure_anovas.html
might be used by "quick" scientists without them trying to really understand what they're saying (mixed effects models are great but they can easily be misused).
My suggestion, in order to be more didactic, would be to add (include) accurate references to acknowledged statistical textbooks or papers.
For instance, still using your "repeated_measure_anovas.html" blog: when you say in the "effect of emotion" section: "the overall model predicting .... successfully converged and explained 56.73% of the variance of the endogen (the conditional R2)", it would be nice if you stated explicitly, along with these lines or at least somewhere in the blog, how you connect this sentence to a reference with didactic explanations: where does the number come from, what is a conditional R2, and so on.
Along the same lines, it would be nice if you could explain (with references) the links between the standard ANOVA model and mixed-effects models : my experience is that this issue is not trivial and there are not many clear explanations in the literature.
To conclude, I think that users of your package would feel more confident if they could relate the statements issued by your "print (results)" lines to statistical sources. I hope this message will be helpful, and don't hesitate to contact me if I haven't been clear enough. Eric Castet.
Hello Dominique!
Thanks for a cool package, the correlation functions are really good for my work. My question is regarding the r values and the printed output. I'm not sure if I'm missing something completely, but in your blog example, the r values in the table are reported as positive, but in the printed pairwise correlations the text output states it is a negative correlation.
For example, in the table Concealing and Age have a reported r of -0.05, but in the printed text it is " - Age / Concealing: Results of the Pearson correlation showed a non significant and weak positive association between Age and Concealing (r(1249) = -0.050, p > .1)." And the opposite is true for Life Satisfaction and Adjusting, with an r of 0.36 and the text stating " - Life_Satisfaction / Adjusting: Results of the Pearson correlation showed a significant and moderate negative association between Life_Satisfaction and Adjusting (r(1249) = 0.36, p < .001***)."
I'm not sure if I'm interpreting things correctly or if I have indeed been interpreting the correlations incorrectly?
Thank you for your help!
Marie
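The mismatch Marie describes looks like the direction label in the text being decoupled from the sign of r. A minimal sketch of sign-aware wording (the helper name is hypothetical, not psycho's actual function):

```r
# Derive the direction label from the sign of the coefficient itself,
# so the printed text and the table cannot disagree (hypothetical helper).
describe_direction <- function(r) {
  direction <- if (r < 0) "negative" else "positive"
  sprintf("a %s association (r = %.2f)", direction, r)
}

describe_direction(-0.05)  # "a negative association (r = -0.05)"
describe_direction(0.36)   # "a positive association (r = 0.36)"
```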
Point 3 from JOSS Review:
I don't really like the NA's printed in the correlation output, would be nicer if that was an empty string by default?
Hi all,
I have been using the psycho package to retrieve summary stats and publishable info of the t-test for my manuscripts, I wondered if this can be extended to include the Wilcoxon rank sum test, please?
Many thanks
Mark Thomas
the link is broken:
https://github.com/neuropsychology/psycho.R/blob/master/vignettes/overview.R
Heyo!
I was looking at your post here and wanted to offer a nice, reproducible, and programmatic way to confidently link releases with authors (and, beyond paper authors, contributors to the code repository!) who sometimes don't get the credit they deserve under a traditional publication model. If you add a .zenodo.json file to the repository to link to a DOI on Zenodo, you can programmatically generate in static docs a citation to copy-paste (or render it yourself where needed!). The best example I can show you is in nipype:
Point 4 from JOSS Review:
It would be nice if polychoric and polyserial correlations could be included. The qgraph function cor_auto detects values and uses the lavaan function lavCor to obtain polychoric/polyserial correlations. Perhaps something like this could be included in correlation( ) as well?
Hello,
I cannot install psycho on my Linux machine (Ubuntu 18.04). I get various error messages, such as
"try removing lock00-rstanam" or "lazy loading failed for package psycho".
or RStudio crashes on the attempt to install it.
I have tried to install it in the console, in RStudio (via the click-option, install.packages, and install.packages(..., INSTALL_opts = c('--no-lock')).
Do you know what I can do about it? Thank you very much
Point 10 from JOSS Review:
I like the normalize() function, but don't really like the name, as it implies it will make data normally distributed (as the huge.npn() function in huge for example). The function standardizes, not normalizes. Perhaps it could be renamed to standardize?
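The reviewer's distinction fits in two lines of base R: z-scoring changes location and scale but not shape, whereas a normalizing transform (like huge.npn()) changes the shape itself. A sketch:

```r
# Standardizing (z-scoring): mean 0, SD 1, but the skew of the original
# data is untouched -- the data are not made "more normal".
x <- c(1, 2, 3, 10)            # skewed toy data
z <- (x - mean(x)) / sd(x)     # what a standardize()-style function does
# mean(z) is ~0 and sd(z) is 1, yet z is exactly as skewed as x.
```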
correlation has become a bit messy. Should refactor correlation into smaller chunks.
Create a more systematic implementation of the various interpretation helpers with:
Hello,
I came across your package and found it very useful.
But I came across an issue.
I can't use the package.
I downloaded it from Git manually, pasted it into my work folder, and unzipped it like this:
install.packages('psycho.R-master.zip', lib='//spodwh02/Sas_dwh_p_datamining/90. Datamining Work/Team/Rlibrary', repos = NULL)
Then I started as in your example to check whether I get the same results as you,
but when calling the package I have this:
Error in library("psycho.R") : there is no package called ‘psycho.R’.
Error in library("psycho") : there is no package called ‘psycho’
This is the first time I get this.
I can't use devtools::install_github("neuropsychology/psycho.R") # Install the newest version
for security reasons.
I usually download things manually when I need the latest version, and it works;
I got the same error when installing packages the traditional way from repos.
Do you have an idea?
any help?
Thanks
Although the package is relatively recent, I'm coming (slowly) to the conclusion that it should undergo an important change. As I improved my R skills, broaden my development experience and bumped my general knowledge of statistics for psychology (and got a PhD 😉), I feel like psycho should be a more specific package, devoted to its primary aim (analyzing and giving a textual ready-to-use representation of statistical results), rather than a very large toolbox encompassing a large variety of miscellaneous functions; I'd like it to be clean, neat, light, quick and easy to maintain and use.
First of all, I'll release to CRAN a final stable version (the 0.4.0). Then, in the coming weeks/months, I'll start making some heavy cleaning. Out-of-scope functions will be moved to another package. I'll try to keep a backlog of changes in this issue, and, at the end of the refactoring process, I'll make a blogpost summarizing things. If you have any suggestions/ideas, please let me know.
After struggling a bit, I decided to move all the report-related functions to a new package named report, as its scope is more general than psychological science alone.
To summarise, psycho will be slowly replaced by several packages (report, maktools and bayestestR). I am not sure what the future of psycho is (i.e., what the set of tools specific to psychological science covered by the package could be). The future will tell. For now, I recommend you start switching to (and helping to improve 🙌) the new packages, which I hope will be, in the end, better at their respective roles.
This will probably all be done inside a new organisation, easystats. Check it out for more info.
- HDI -> hdi
- ROPE -> rope and rope_test
- MPE -> p_direction
- rnorm_perfect
- find_highest_density_point -> map_estimate
- overlap
- golden
- create_intervals
- find_season
- htest
- lm
- cite_packages
Point 12 from JOSS Review:
When printing the output of analyze( ), I would make use of the cat( ) function to present the output nicely instead of the print function. That is, instead of:
> print(results)
[1] "The overall model predicting ... successfully converged and explained 10.57% of the variance of the endogen (the conditional R2). The variance explained by the fixed effects was of 0.56% (the marginal R2) and the one explained by the random effects of 10.01%."
[2] "The effect of (Intercept) was [NOT] significant (beta = 0.092, SE = 0.22, t(5.81) = 0.42, p > .1) and can be considered as very small (std. beta = 0, std. SE = 0)."
[3] "The effect of ConditionB was [NOT] significant (beta = -0.15, SE = 0.20, t(95.00) = -0.78, p > .1) and can be considered as very small (std. beta = -0.076, std. SE = 0.097)."
I think the following would be nicer:
> print(results)
The overall model predicting ... successfully converged and explained 10.57% of the variance of the endogen (the conditional R2). The variance explained by the fixed effects was of 0.56% (the marginal R2) and the one explained by the random effects of 10.01%.
- The effect of (Intercept) was [NOT] significant (beta = 0.092, SE = 0.22, t(5.81) = 0.42, p > .1) and can be considered as very small (std. beta = 0, std. SE = 0).
- The effect of ConditionB was [NOT] significant (beta = -0.15, SE = 0.20, t(95.00) = -0.78, p > .1) and can be considered as very small (std. beta = -0.076, std. SE = 0.097).
Perhaps the invisible() function can be used to still return the character vector with these texts as well so that the function can be used in a .Rnw or .Rmd file to generate a report though.
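The two suggestions combine naturally; a sketch of such a print method (function and argument names are illustrative, not psycho's actual API):

```r
# cat() renders the text without the [1] indices and quotes of print(),
# while invisible() still returns the character vector so the function
# remains usable inside .Rmd/.Rnw reports.
print_report <- function(texts) {
  cat(paste(texts, collapse = "\n"))
  cat("\n")
  invisible(texts)
}

out <- print_report(c("The overall model ...", "- The effect of X ..."))
# `out` still holds the original character vector despite the pretty display.
```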
While trying to install the psycho package, R (3.5, Linux) returns a non-zero exit status. I suspect this is because of R's inability to install the 'd3Network' package. I tried to install 'd3Network' manually, but it also returns a non-zero exit status.
I was wondering if the issue is because
"Development has moved to networkD3. d3Network is no longer supported",
as described on http://christophergandrud.github.io/d3Network/. I tried installing 'networkD3' too; the package installs fine, but R still cannot install 'psycho'.
I would appreciate any help getting it to work.
When I try to install and use the 'psycho' package, it doesn't work, saying "installation of package ‘BDgraph’ had non-zero exit status".
install.packages("devtools")
library("devtools")
install_github("neuropsychology/psycho.R")
library("psycho")
Point 11 from JOSS Review:
The analyze() function is nice, but I don't really like the string labeling of effect sizes as "Very small", as I am always in favor of numeric effect size measures rather than labels based on arbitrary thresholds. Can't the numeric effect size be printed instead?
Point 1 from JOSS Review:
Just entering an object in the console does not print it:
data("bfi")
cres <- correlation(bfi[,1:25])
cres # does nothing
print(cres) # prints results
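Console auto-printing dispatches to a print method for the object's class, so the likely fix is registering an S3 print method. A minimal sketch ("psychobject" is a guessed class name, used only for illustration):

```r
# Once a print method exists for the object's class, typing the object's
# name alone at the console displays it via the same dispatch as print().
print.psychobject <- function(x, ...) {
  cat(x$text, "\n")
  invisible(x)
}

obj <- structure(list(text = "correlation results"), class = "psychobject")
print(obj)  # at an interactive console, `obj` alone now triggers this too
```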
Point 8 from JOSS Review:
I'd suggest setting the argument forcePD = FALSE in cor_auto. It was a bad design choice of mine to make that default to TRUE, and it has led to a lot of problems.
Point 6 from JOSS Review:
It's not clear from the documentation what arguments, if any, I can use in the plot method. For example, can I change the colors of the assess plot (which is really nice, btw)?
First of all, the sorting I proposed in issue #90 had a problem when there were more than nine models, as the names MODEL1, MODEL2, MODEL10 are sorted into MODEL1, MODEL10, MODEL2.
Thus, I changed the find_best_model.lmerModLmerTest function again, so that the formulas are well integrated into best$table.
# Model comparison
comparison <- as.data.frame(do.call("anova", models))
# Creating row names equivalent to the comparison data frame
combinations <- as.data.frame(combinations, row.names = paste0("MODEL",seq(1,length(combinations))))
# Reordering the rows in the same way for both combinations and comparison before implementing the formulas
comparison <- comparison[order(row.names(comparison)), ]
comparison$formula <- combinations[order(row.names(combinations)), ]
# Sorting the data frame by the AIC then BIC
comparison <- comparison[order(comparison$AIC,comparison$BIC),]
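The underlying string-sort behaviour can be reproduced in base R, along with a numeric-aware ordering that would sidestep the MODEL10-before-MODEL2 problem entirely:

```r
# Lexicographic sort misplaces MODEL10 between MODEL1 and MODEL2.
nms <- c("MODEL1", "MODEL2", "MODEL10")
sort(nms)                                    # "MODEL1" "MODEL10" "MODEL2"

# Ordering by the numeric suffix restores the intended sequence.
nms[order(as.numeric(sub("MODEL", "", nms)))]
```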
More importantly, the find_combinations function doesn't find all the possible models when interaction = TRUE and there are three or more fixed factors.
library(psycho)
as.data.frame(find_combinations(Y ~ A + B + C + (1 | X), interaction = TRUE, fixed = NULL))
- Y ~ A + (1 | X)
- Y ~ B + (1 | X)
- Y ~ C + (1 | X)
- Y ~ A + B + (1 | X)
- Y ~ A + C + (1 | X)
- Y ~ B + C + (1 | X)
- Y ~ A + B + C + (1 | X)
- Y ~ A * B + (1 | X)
- Y ~ A * C + (1 | X)
- Y ~ B * C + (1 | X)
- Y ~ A * B + C + (1 | X)
- Y ~ A * B * C + (1 | X)
In this example, A + B*C and A*C + B are missing.
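A quick base-R enumeration shows how many additive combinations alone should exist for three fixed effects, which can serve as a sanity check for find_combinations before interaction variants are even counted:

```r
# All non-empty additive subsets of three fixed effects: 2^3 - 1 = 7.
fixed <- c("A", "B", "C")
subsets <- unlist(lapply(seq_along(fixed), function(k) {
  combn(fixed, k, FUN = function(s) paste(s, collapse = " + "))
}))
# Interaction variants (A*B, A + B*C, A*C + B, ...) come on top of these,
# so missing A + B*C and A*C + B means the interaction expansion is short.
```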