
psycho.r's Introduction


Efficient and Publishing-Oriented Workflow for Psychological Science

psycho



⚠️ NOTE: This package is being deprecated in favour of the report package. Please check it out and ask for any missing features.


Goal

The main goal of the psycho package is to provide tools for psychologists, neuropsychologists and neuroscientists to facilitate and speed up data analysis. It aims to support best practices by providing tools that format the output of statistical methods so it can be pasted directly into a manuscript, ensuring standardized statistical reporting.

Contribute

psycho is a young package in need of affection. You can easily hop aboard the development of this open-source software and improve psychological science:

  • Need some help? Found a bug? Request a new feature? Just open an issue ☺️
  • Want to add a feature? Correct a bug? You're more than welcome to contribute!

Don't be shy: try to code and submit a pull request (PR). Even if it's imperfect, we will help you make it a great PR! All contributors will be very graciously rewarded. Someday.

Examples

Check examples in the following vignettes:

Or blog posts:

General Workflow

The package revolves around the psychobject. Main functions from the package return this type, and the analyze() function transforms other R objects into psychobjects. Four functions can then be applied to a psychobject: summary(), print(), plot() and values().
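
In practice the workflow looks like this (a sketch using the affective dataset bundled with the package; the exact output depends on the fitted model):

```r
library(psycho)

# Fit a model, then turn it into a psychobject with analyze()
fit <- lm(Tolerating ~ Adjusting, data = psycho::affective)
results <- analyze(fit)

# The four generic functions applicable to a psychobject
print(results)    # manuscript-ready textual description
summary(results)  # summary data frame of the coefficients
plot(results)     # graphical representation
values(results)   # underlying values as a list, for programmatic use
```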

Installation

  • To get the stable version from CRAN, run the following commands in your R console:
install.packages("psycho")
library("psycho")
  • To get the latest development version, run the following:
install.packages("devtools")
library("devtools")
install_github("neuropsychology/psycho.R")
library("psycho")

Credits

You can cite the package as follows:

  • Makowski, D. (2018). The psycho Package: An Efficient and Publishing-Oriented Workflow for Psychological Science. Journal of Open Source Software, 3(22), 470. https://doi.org/10.21105/joss.00470

psycho.r's People

Contributors

bigpas, dominiquemakowski, hugonjb, vsimko

psycho.r's Issues

Error on installing the package

Describe the problem.

> library(psycho)
Error: package or namespace load failed for ‘psycho’ in loadNamespace(i, c(lib.loc, .libPaths()), versionCheck = vI[[i]]):
 there is no package called ‘xts’

> install.packages("xls")
Warning in install.packages :
  package ‘xls’ is not available (for R version 3.5.1)

Import kfold.stanreg instead of kfold

This is just a heads up about something that will need to be fixed once the next versions of loo and rstanarm are released. In the upcoming release of version 2.1.0 of the loo package we will be exporting a generic kfold function and the next rstanarm will import that generic and then provide a kfold.stanreg method. This will result in a warning when checking the psycho package because you are currently importing all of the loo package and also importing kfold from rstanarm. Once loo and rstanarm are released you can easily fix the problem by importing just kfold.stanreg from rstanarm instead of kfold. If you have any trouble let me know and I can help out.
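
The suggested change would amount to something like the following in psycho's roxygen/NAMESPACE directives (a sketch; the actual tags in the package may differ):

```r
# Before: importing the kfold generic from rstanarm clashes with loo >= 2.1.0
#' @importFrom rstanarm kfold

# After: import only the stanreg method, not the generic
#' @importFrom rstanarm kfold.stanreg
```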

Issue with get_contrasts()

When I try to run:

fit <- lmerTest::lmer(rt ~ cond + (1|subject), data=dt)
contrasts = get_contrasts(fit, formula="cond", adjust="tukey")

I get the error:

Error in .f(.x[[i]], ...) : object 'lower.CL' not found

I installed from the latest build of psycho from github.

And my related package versions are:

> packageDescription("lmerTest")$Version
[1] "3.0-1"
> packageDescription("lme4")$Version
[1] "1.1-19"
> packageDescription("emmeans")$Version
[1] "1.3.0"

Is this a known issue? Is there a fix/workaround?

Thanks!

"Significantly large"

Hello!

First of all, nice work on this package! It would've saved me a lot of going back and forth with APA style had I discovered it sooner.

I have just started trying it out, but I found that the output of the analyze function for a correlation returns a somewhat odd description:

The Pearson's product-moment correlation between a and b is significantly large and positive (r(8) = 10, 95% CI [10, 1], p < .001).

I suppose what is meant is "significant, large, and positive" (with the Oxford comma included; see page 88 of the APA 6th Edition manual). The p-value and the word "significant" refer to the test of the obtained correlation against a null correlation, whereas the inference the sentence invites ("significantly large") would only be valid for something more exotic, e.g., a test of the null hypothesis that the correlation is equal to or higher than 0.80 (the criterion for large), i.e., a one-sided, lower-tail test against the value of 0.80.

library(psycho)

a <- seq(1, 10)
b <- seq(10, 19)

cor_results <- cor.test(a, b)

psycho::analyze(cor_results)

anova.merMod error with find_best_model.lmerModLmerTest

1

When NA values are present in the dataset, the find_best_model function returns the following error:

data <- affective
fit <- lmer(Tolerating ~ Salary + Life_Satisfaction + Concealing + (1|Sex) + (1|Age), data=data)
best <- find_best_model(fit)

Error in anova.merMod(new("lmerModLmerTest", vcov_varpar = c(0.0686147429065321, :
models were not all fitted to the same size of dataset

Solution
Removing NA values only for the variables used in the formula:

  # Recreating the dataset without NA
  dataComplete <- get_all_vars(fit)[complete.cases(get_all_vars(fit)), ]

  # fit models
  models <- c()
  for (formula in combinations) {
    newfit <- update(fit, formula, data = dataComplete)
    models <- c(models, newfit)
  }

2

Using the same function, warning messages are always displayed:

data <- affective
fit <- lmer(Tolerating ~ Salary + Life_Satisfaction + Concealing + (1|Sex) + (1|Age), data=data)
best <- find_best_model(fit)

Warning message:
In anova.merMod(new("lmerModLmerTest", vcov_varpar = c(0.0686147429065321, :
failed to find model names, assigning generic names

Solution
Hiding warnings while the ANOVAs are computed:

  # No warnings for this part
  options(warn = -1)
  
  # Model comparison
  comparison <- as.data.frame(do.call("anova", models))
  comparison$formula <- combinations

  # Re-displaying warning messages
  options(warn = 0)

find_best_model issues

Dear Dominique,
Thank you very much for your package; I think it will be very useful to me in the future.
I am particularly interested in the find_best_model function, and I encountered the following bugs while exploring it.

1

find_best_model.stanreg gives the following error message when a random intercept is entered in the formula:

df <- psycho::emotion
fit <- stan_lmer(Autobiographical_Link ~ Emotion_Condition + Subjective_Valence + (1|Participant_ID), data=df)
best <- find_best_model(fit)

Error in f_i(data_i = data[i, , drop = FALSE], draws = draws, ...) :
unused argument (k_treshold = k_treshold)

Solution
It seems to come from the loo function of the rstanarm package. I added the following lines to find_best_model.stanreg to temporarily solve the problem:

    if (!is.null(k_treshold)) {
      loo <- rstanarm::loo(newfit, k_treshold = k_treshold)
    } else {
      loo <- rstanarm::loo(newfit)
    }

2

Inside find_best_model.stanreg, warning messages are emitted when accessing the loo estimates:

loo$elpd_loo

Warning message:
Accessing elpd_loo using '$' is deprecated and will be removed in a future release. Please extract the elpd_loo estimate from the 'estimates' component instead.

Solution
I replaced them with those lines:

    Estimates <- loo[["estimates"]]
    model <- data.frame(
      formula = formula,
      complexity = complexity - 1,
      R2 = R2s[[formula]],
      looic = Estimates["looic","Estimate"],
      looic_se = Estimates["looic","SE"],
      elpd_loo = Estimates["elpd_loo","Estimate"],
      elpd_loo_se = Estimates["elpd_loo","SE"],
      p_loo = Estimates["p_loo","Estimate"],
      p_loo_se = Estimates["p_loo","SE"],
      elpd_kfold = Estimates["p_loo","Estimate"],
      elpd_kfold_se = Estimates["p_loo","SE"]
    )

3

find_best_model works for the merModLmerTest class used by lmerTest v2.0-36 but not for the lmerModLmerTest class used by lmerTest v3.0.

df <- psycho::emotion
fit2 <- lmerTest::lmer(Autobiographical_Link ~ Emotion_Condition + Subjective_Valence + (1|Participant_ID), data=df)
best <- find_best_model(fit2)

Error in UseMethod("find_best_model") :
no applicable method for 'find_best_model' applied to an object of class "c('lmerModLmerTest', 'lmerMod', 'merMod')"

Solution
From what I understand, the problem seems to come from the find_combinations.formula function. However, I don't know how to resolve it simply.


I am new to GitHub, and I will try to push my find_best_model.stanreg enhancement to the dev branch immediately.

Thanks in advance!

issue using the package

Hello,
I came across your package and found it very useful.
But I came across an issue.
I can't use the package.
I downloaded from Git manually and past it in my work folder then unzip it as this:
install.packages('psycho.R-master.zip', lib='//spodwh02/Sas_dwh_p_datamining/90. Datamining Work/Team/Rlibrary',repos = NULL)
Then start as in you example to check wether I get the same results as you.
but when calling the package I have this
Error in library("psycho.R") : there is no package called ‘psycho.R’.
Error in library("psycho") : there is no package called ‘psycho’
This is the first time I get this.
As I can't use devtools::install_github("neuropsychology/psycho.R") # Install the newest version
because of security reason .
I usually download things manually when i need the lastest version and it works;
I got the same error when installing packages from traditional way and repos.
Do you have an idea?
any help?
Thanks

NAs not allowed in subscripted assignments

I'm trying to compute d' and beta using the dprime function, and I receive the above error. I'm running the function on aggregated data that does not contain NAs (see below), so I'm not sure why this happens. Do you have any suggestions as to why I'm receiving this error and what I can do to ensure that the input is acceptable?

   Subject Hit FA Miss CR
1        1  18  7    0 11
2        2  18  2    0 16
3        4  18  2    0 16
4        5  16  0    2 18
5        6  14 14    4  4
6        7   7  0   11 18
7        8   8  7   10 11
8        9  11 10    7  8
9       13   8  6   10 12
10      14   4  1   14 17
11      15   0  1   18 17
12      16   2  2   16 16
13      17  10 11    8  7
14      18  16  0    2 18
15      19   7  6   11 12
16      20   3  2   15 16
17      21  18  1    0 17
18      22  18  1    0 17
19      23  15  6    3 12
20      24  17  7    1 11
21      25  18  0    0 18
22      26   2  2   16 16
23      27  10  1    8 17
24      28  16  3    2 15
25      29  15  0    3 18
26      31  17  0    1 18
27      32  10  9    8  9
28      33  18  2    0 16
29      36   3  6   15 12
30      37   8 11   10  7
31      38   1  2   17 16
32      39   5  0   13 18
33      40   0  0   18 18
34      41   5  8   13 10
35      42  15  2    3 16
36      43   1  0   17 18
37      44   1  0   17 18
38      45   4  4   14 14
39      46   7  7   11 11
40      47   8  7   10 11
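
For reference, the call on the first row of the table would look like this (argument names follow psycho's documented dprime() signature; verify against the installed version):

```r
library(psycho)

# Subject 1: 18 hits, 7 false alarms, 0 misses, 11 correct rejections
indices <- dprime(n_hit = 18, n_fa = 7, n_miss = 0, n_cr = 11)
indices$dprime
indices$beta
```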

find_best_model formula sorting

Concerning the find_best_model.lmerModLmerTest function, fits whose formulas differ only in the order of their fixed effects return different results when they shouldn't.

Example:

library(psycho)
library(lmerTest)
data <- affective
data <- data[complete.cases(data),]

Aff.full1 <- lmer(Life_Satisfaction ~ Salary + Birth_Season + (1|Age) ,data = data)
Affbest1  <- find_best_model(Aff.full1)
Affbest1$table

Aff.full2 <- lmer(Life_Satisfaction ~ Birth_Season + Salary + (1|Age) ,data = data)
Affbest2  <- find_best_model(Aff.full2)
Affbest2$table

# Manual anova to compare the criterions with the find_best_model function
Aff.Birth <- lmer(Life_Satisfaction ~ Birth_Season + (1|Age) ,data = data) 
Aff.Salary <- lmer(Life_Satisfaction ~ Salary + (1|Age) ,data = data) 
anova(Aff.Birth,Aff.Salary)

When running these lines, the displayed criteria are the same for both tables, but the formulas are ordered differently.
It seems that the table is correct only for the model whose fixed factors are sorted in reverse alphabetical order.
This can be verified with any other dataset.


Solution

Inside find_best_model.lmerModLmerTest, I have added a line to sort the comparison data frame based on the row names before adding the formulas to the data frame.

 # Model comparison
  comparison <- as.data.frame(do.call("anova", models))
  # Reordering the rows before implementing the combinations
  comparison <- comparison[order(row.names(comparison)), ]
  comparison$formula <- combinations

It should now work as intended.

JOSS Review - 12 - nicer analyze output

Point 12 from JOSS Review:


When printing the output of analyze( ), I would make use of the cat( ) function to present the output nicely instead of the print function. That is, instead of:

> print(results)
[1] "The overall model predicting ... successfully converged and explained 10.57% of the variance of the endogen (the conditional R2). The variance explained by the fixed effects was of 0.56% (the marginal R2) and the one explained by the random effects of 10.01%."
[2] "The effect of (Intercept) was [NOT] significant (beta = 0.092, SE = 0.22, t(5.81) = 0.42, p > .1) and can be considered as very small (std. beta = 0, std. SE = 0)."                                                                                                
[3] "The effect of ConditionB was [NOT] significant (beta = -0.15, SE = 0.20, t(95.00) = -0.78, p > .1) and can be considered as very small (std. beta = -0.076, std. SE = 0.097)." 

I think the following would be nicer:

> print(results)
The overall model predicting ... successfully converged and explained 10.57% of the variance of the endogen (the conditional R2). The variance explained by the fixed effects was of 0.56% (the marginal R2) and the one explained by the random effects of 10.01%.

	- The effect of (Intercept) was [NOT] significant (beta = 0.092, SE = 0.22, t(5.81) = 0.42, p > .1) and can be considered as very small (std. beta = 0, std. SE = 0).
	- The effect of ConditionB was [NOT] significant (beta = -0.15, SE = 0.20, t(95.00) = -0.78, p > .1) and can be considered as very small (std. beta = -0.076, std. SE = 0.097).

Perhaps the invisible() function can be used to still return the character vector with these texts as well so that the function can be used in a .Rnw or .Rmd file to generate a report though.
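
A print method combining both suggestions could look roughly like this (a hypothetical sketch; the class name psychobject_text and the exact formatting are illustrative assumptions, not psycho's actual implementation):

```r
# Hypothetical print method: pretty output via cat(), invisible return for reports
print.psychobject_text <- function(x, ...) {
  cat(x[1], "\n\n", sep = "")                                 # overall-model sentence
  cat(paste0("\t- ", x[-1], collapse = "\n"), "\n", sep = "") # one line per effect
  invisible(x)  # still return the character vector for use in .Rnw/.Rmd files
}
```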

Correlation: wrong direction interpretation (positive vs. negative)

Hello Dominique!

Thanks for a cool package; the correlation functions are really useful for my work. My question concerns the r values and the printed output. I'm not sure if I'm missing something, but in your blog example the r values in the table are reported as positive, while the printed pairwise correlations describe a negative correlation.

For example, in the table Concealing and Age have a reported r of -0.05, but the printed text reads " - Age / Concealing: Results of the Pearson correlation showed a non significant and weak positive association between Age and Concealing (r(1249) = -0.050, p > .1)." The opposite holds for Life_Satisfaction and Adjusting, with an r of 0.36 and the text stating " - Life_Satisfaction / Adjusting: Results of the Pearson correlation showed a significant and moderate negative association between Life_Satisfaction and Adjusting (r(1249) = 0.36, p < .001***)."

I'm not sure whether I'm misreading the output or whether I have been interpreting the correlations incorrectly?

Thank you for your help!

Marie

'BDgraph' problem

When I try to install the psycho package and use it, installation fails with "installation of package ‘BDgraph’ had non-zero exit status".

install.packages("devtools")
library("devtools")
install_github("neuropsychology/psycho.R")
library("psycho")

get_means() is not recognized

The get_means() function is not found; calling it throws an error. I am using the current version of psycho installed via install.packages(), on Microsoft R Open 3.5.1.

library(psycho)
require(lmerTest)

fit <- lmer(Subjective_Valence ~ Emotion_Condition * Participant_Sex + (1|Participant_ID), data=emotion)
anova(fit)
get_means(fit)

Error in get_means(fit) : could not find function "get_means"

refactor correlation

correlation has become a bit messy. Should refactor correlation into smaller chunks.

analyze() not working

Whenever I call the analyze() function on a lmerTest object I get:

Error in UseMethod("analyze") : no applicable method for 'analyze' applied to an object of class "c('merModLmerTest', 'lmerMod', 'merMod')"

library(lmerTest)
library(psycho)

id <- 15
obs <- 10
data <- tibble::tibble(id = rep(1:id, obs),
                       obs = rep(1:obs, id),
                       y = rnorm(id * obs),
                       x = rnorm(id * obs))

fit <- lmer(y ~ x + (1|id), data = data)
analyze(fit)

Converting data to data.frame does not help.

Package does not install

Hello,

I cannot install psycho on my Linux machine (Ubuntu 18.04). I get various error messages, such as
"try removing ‘00LOCK-rstanarm’" or "lazy loading failed for package ‘psycho’",
or RStudio crashes on the attempt to install it.

I have tried installing it from the console and in RStudio (via the menu option, install.packages(), and install.packages(..., INSTALL_opts = c('--no-lock'))).

Do you know what I can do about it? Thank you very much

error message in Analyze

Hello! Before posting my request, many thanks for the package!
I have used it successfully with simpler models, but with glmer I get the following error message:
"Error in glmer(formula = ....._ :
fitting model with the observation-level random effect term failed. Add the term manually
In addition: Warning message:
In KhatriRao(sm, t(mm)) : (p <- ncol(X)) == ncol(Y) is not TRUE"

[for the simpler model: one continuous predictor, one categorical predictor, one random term; standard glmer works fine]

Again, many thanks for your work!

One Sample t-test error

The one-sample t-test gives an error: it fails where a TRUE/FALSE value is expected. I think it cannot find the mu = 0 to compare against.

result <- t.test(df$Adjusting,
       mu = 0,
       conf.level = .90) %>%
      psycho::analyze()


Thanks for the package.

JOSS Review - 7 - Dependencies

Point 7 from JOSS Review:


The documentation should make clearer what parts are based on other packages. For example, n_factors uses qgraph's cor_auto in the background, which uses lavaan in the background, but both packages are not referenced. I think the psycho package also heavily relies on the psych package, but it is not referenced in the documentation at all.

find_combinations interaction

1

First of all, the sorting I proposed in issue #90 had a problem when there were more than nine models, as the names MODEL1, MODEL2, MODEL10 are sorted into MODEL1, MODEL10, MODEL2.
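
The pitfall is easy to reproduce in base R, since character sorting is lexicographic; zero-padding the index is one simple workaround (an illustration only, not necessarily the fix adopted here):

```r
# Lexicographic sort puts "MODEL10" right after "MODEL1"
sort(paste0("MODEL", 1:10))
# "MODEL1" "MODEL10" "MODEL2" "MODEL3" ... "MODEL9"

# Zero-padding makes lexicographic and numeric order agree
sort(paste0("MODEL", sprintf("%02d", 1:10)))
# "MODEL01" "MODEL02" ... "MODEL10"
```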

Thus, I changed the find_best_model.lmerModLmerTest function again so that the formulas are correctly matched to the rows of best$table.

  # Model comparison
  comparison <- as.data.frame(do.call("anova", models))

  # Creating row names equivalent to the comparison data frame
  combinations <- as.data.frame(combinations, row.names = paste0("MODEL",seq(1,length(combinations))))
  
  # Reordering the rows in the same way for both combinations and comparison before implementing the formulas
  comparison <- comparison[ order(row.names(comparison)),]
  comparison$formula <- combinations[order(row.names(combinations)),]
  
  # Sorting the data frame by the AIC then BIC
  comparison <- comparison[order(comparison$AIC,comparison$BIC),]

2

More importantly, the find_combinations function doesn't find all the possible models when interaction = TRUE and there are three or more fixed factors.

library(psycho)
as.data.frame(find_combinations(Y ~ A + B + C + (1 | X) ,interaction = TRUE, fixed = NULL))
  1. Y ~ A + (1 | X)
  2. Y ~ B + (1 | X)
  3. Y ~ C + (1 | X)
  4. Y ~ A + B + (1 | X)
  5. Y ~ A + C + (1 | X)
  6. Y ~ B + C + (1 | X)
  7. Y ~ A + B + C + (1 | X)
  8. Y ~ A * B + (1 | X)
  9. Y ~ A * C + (1 | X)
  10. Y ~ B * C + (1 | X)
  11. Y ~ A * B + C + (1 | X)
  12. Y ~ A * B * C + (1 | X)

In this example, A + B*C and A*C + B are missing.

Installation issue b/c of d3Network package dependency?

While trying to install the psycho package, R (3.5, Linux) returns a non-zero exit status. I suspect this is because of R's inability to install the 'd3Network' package; I tried to install 'd3Network' manually, but it also returns a non-zero exit status.

I was wondering if the issue is because of

"Development has moved to networkD3. d3Network is no longer supported"

, as described on http://christophergandrud.github.io/d3Network/. I tried installing 'networkD3' too; the package installs fine, but R still cannot install 'psycho'.

I would appreciate any help getting it to work.

suggestion: using .zenodo.json

Heyo!

I was looking at your post here and wanted to offer a nice, reproducible, programmatic way to confidently link releases with authors (and, beyond paper authors, contributors to the code repository!) who sometimes don't get the credit they deserve under a traditional publication model. If you add a .zenodo.json file to the repository to link to a DOI on Zenodo, you can programmatically generate a citation to copy-paste in static docs (or render it yourself where needed!). The best example I can show you is in nipype:

Problems to use functions analyze() and get_contrasts()

I tried to replicate the steps of this post (https://neuropsychology.github.io/psycho.R/2018/05/01/repeated_measure_anovas.html) but unfortunately the functions analyze() and get_contrasts() only produce error messages like this:

Error in UseMethod("analyze") : 
  no applicable method for 'analyze' applied to an object of class "c('merModLmerTest', 'lmerMod', 'merMod')"

What kind of problem might cause these errors?

This is the class of my model (the replicated model of the post above):

[1] "merModLmerTest"
attr(,"package")
[1] "lmerTest"

JOSS Review - 4 - polychoric and polyserial correlations

Point 4 from JOSS Review:


It would be nice if polychoric and polyserial correlations could be included. The qgraph function cor_auto detects values and uses the lavaan function lavCor to obtain polychoric/polyserial correlations. Perhaps something like this could be included in correlation( ) as well?

Toward 0.5.0 and beyond: Package refactoring, streamlining and scope narrowing

Although the package is relatively recent, I'm (slowly) coming to the conclusion that it should undergo an important change. As I improved my R skills, broadened my development experience and deepened my general knowledge of statistics for psychology (and got a PhD 😉), I feel that psycho should be a more focused package, devoted to its primary aim (analyzing and giving a textual, ready-to-use representation of statistical results), rather than a very large toolbox encompassing a variety of miscellaneous functions; I'd like it to be clean, neat, light, quick, and easy to maintain and use.

First of all, I'll release to CRAN a final stable version (the 0.4.0). Then, in the coming weeks/months, I'll start making some heavy cleaning. Out-of-scope functions will be moved to another package. I'll try to keep a backlog of changes in this issue, and, at the end of the refactoring process, I'll make a blogpost summarizing things. If you have any suggestions/ideas, please let me know.

EDIT

After struggling a bit, I decided to move all the reports related functions to a new package named report, as its scope appears as more general than to be used in psychological science only.

To summarise, psycho will slowly be replaced by several packages (report, maktools and bayestestR). I am not sure what the future of psycho is (i.e., what the set of tools specific to psychological science covered by the package could be). The future will tell. For now, I recommend you start switching to (and helping to improve 🙌) the new packages, which I hope will, in the end, be better at their respective roles.

This will probably all be done inside a new organisation, easystats. Check it out for more info.

0.5.0

  • Release 0.4.0 to CRAN
  • Freeze the 0.4.0 in a legacy repo (oldpsycho) or branch
  • Move Bayesian models related functions to bayestestR
    • HDI -> hdi
    • ROPE -> rope and rope_test
    • MPE -> p_direction
    • rnorm_perfect
    • find_highest_density_point -> map_estimate
  • Move miscellaneous functions to maktools
    • overlap
    • golden
    • create_intervals
    • find_season
  • Move reports/analyze to report
    • htest
    • lm
    • cite_packages
  • Do not remove / break anything, but add deprecation warnings leading to the new way
  • Move psycho blogposts to a new blog made with blogdown
  • Add a blogpost explaining the changes and new ways
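
The deprecation-warning point could be implemented with base R's .Deprecated(); a sketch for one of the moved functions (assuming HDI() ends up as bayestestR::hdi(), per the mapping above):

```r
# Hypothetical shim kept in psycho: warn once, then delegate to the new package
HDI <- function(x, ...) {
  .Deprecated("bayestestR::hdi", package = "psycho")
  bayestestR::hdi(x, ...)
}
```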

statistical references

Hello Dominique,
I've been using mixed-effects models in my scientific publications since 2010, and I find your package very promising for promoting these models in the psycho/neuroscience community (I've been trying to promote them myself among my colleagues and students). My concern is that the verbal explanations you give, for instance with print(results) as shown in
https://neuropsychology.github.io/psycho.R//2018/05/01/repeated_measure_anovas.html
might be used by "quick" scientists without them really trying to understand what they're saying (mixed-effects models are great, but they can easily be misused).
My suggestion, in order to be more didactic, would be to include accurate references to recognized statistical textbooks or papers.
For instance, still using your "repeated_measure_anovas" blog post: when you say in the "effect of emotion" section that "the overall model predicting .... successfully converged and explained 56.73% of the variance of the endogen (the conditional R2)", it would be nice if you stated explicitly, along these lines or at least somewhere in the blog, how this sentence connects to a reference with didactic explanations: where the number comes from, what a conditional R2 is, and so on.
Along the same lines, it would be nice if you could explain (with references) the links between the standard ANOVA model and mixed-effects models: in my experience this issue is not trivial, and there are not many clear explanations in the literature.
To conclude, I think users of your package would feel more confident if they could relate the statements issued by your print(results) lines to statistical sources. I hope this message is helpful; don't hesitate to contact me if I haven't been clear enough. Eric Castet.

Support for other languages

Hello,

Do you support other languages? I'm thinking of something like psycho::analyze(..., lang = "TR").

If so, I may help in translating to Turkish.

Best wishes.

Error in loadNamespace( ...

I just ran devtools::install_github("neuropsychology/psycho.R") in RStudio Version 1.1.447, and right at the end I was given the news that

Error in loadNamespace(i, c(lib.loc, .libPaths()), versionCheck = vI[[i]]) : 
  namespace 'loo' 1.1.0 is being loaded, but >= 2.0.0 is required
ERROR: lazy loading failed for package 'psycho'
* removing 'C:/Users/KM TRADING/Documents/R/win-library/3.4/psycho'
In R CMD INSTALL
Installation failed: Command failed (1)

So psycho.R has not been installed on my laptop.

Is there a solution to this problem and/or do you need more from me?

BLAVAAN Analyze

Describe the problem.
Hi,

I am currently trying to use the analyze function with Blavaan, but I am getting errors and I am not sure why.

I am using the following code:

fit <- bsem(bayesModelPB, cp = "srs", convergence = "auto", data=data)
FitSummary<- analyze(fit, CI = 90, standardize=T)
print(FitSummary)

However, I get the error

1: In log(z) : NaNs produced
2: In if (standardize == FALSE) { :
the condition has length > 1 and only the first element will be used
3:
the condition has length > 1 and only the first element will be used

When I print the results, only the fit statistics are presented.

I have also tried running the code as:
FitSummary<- analyze(fit)
And
FitSummary<- analyze(fit, standardize=FALSE)

When standardize = FALSE I get the error

Error in if (median >= 0) { : argument is of length zero

I have checked Blavaan and psycho, and reinstalled the most recent version of all packages and dependencies I could see. Is there anywhere I have gone wrong here?

Rope in summary of stanreg

Hi,

I am not sure, but running analyze with index = "ROPE" returns the ROPE in the printed text but not in the summary table. For Bayesian correlations it includes both the ROPE and the overlap.

I had a quick look at the source and found that analyze.stanreg only includes the overlap. Would it be possible to include the ROPE as well?

Cheers,
Tobi

JOSS Review - 10 - normalize to standardize

Point 10 from JOSS Review:


I like the normalize() function, but don't really like the name, as it implies it will make data normally distributed (as the huge.npn() function in huge for example). The function standardizes, not normalizes. Perhaps it could be renamed to standardize?

JOSS Review - 5 - partial correlations with glasso

Point 5 from JOSS Review:


I like the inclusion of partial correlations. Perhaps an option could be to estimate these using the glasso package rather than significance testing? the EBICglasso function from qgraph, which wraps around glasso from the glasso package, should easily be useable to this end in the backend.
