
metalab's Introduction

Developing for Metalab

Building MetaLab locally will allow you to make changes, including text or data updates, adding new pages, or enhancing Shiny applications.

Software Requirements (one-time setup)

One easy way to install Hugo Extended is to use the blogdown package.

install.packages("blogdown") # only required if you do not already have it
blogdown::install_hugo()
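
To confirm the installation worked, blogdown can report the Hugo version it finds:

blogdown::hugo_version()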

Install the required packages (one-time setup)

To install all the required R packages on your system, run the following command after opening this project in RStudio:

renv::restore()

Updating to latest data (optional)

If the spreadsheets have been updated and you want to try the latest available data, you can run:

source(here::here("build", "update-metalab-data.R"))

If you do not run this command, your build will use the dataset that the current MetaLab site uses.

Building MetaLab (each time you want to make changes)

You are now ready to build MetaLab. The commands below will serve the site locally on your computer. The build script may take a few minutes to run; when it completes, a local copy of the MetaLab site will be served at http://localhost:4321/metalab

source(here::here("build", "build-metalab-site.R"))
blogdown::serve_site()

Editing content

You can now try editing existing content in the content directory. Your changes will automatically reload in your web browser.

metalab's People

Contributors

alecristia, amsan7, anjiecao, christinabergmann, cissard, daattali, erikriverson, juliacarbajal, kvonholzen, kylehamilton, lottiegasp, mcfrank, mllewis, roddalben, saraelshawa, shotsuji

metalab's Issues

Add ESMARConf2022 presentation to website

Under Documentation > Data validator, Power analysis, and Visualization (so on 3 separate pages), could we include the text "Here is a tutorial on how to use the visualization, power analysis and data validator tools (Video); (Slides)"?

Expand metalab format to accommodate longitudinal datasets

I have been talking to @alecristia about longitudinal datasets, e.g. an early predictor of later language outcome, and adding these to MetaLab.

Alex started another website, inVarInf, prior to MetaLab, but it is now out of date, so in future she thinks it would be preferable to add meta-analyses to MetaLab rather than to inVarInf.

  • There is already one longitudinal dataset on MetaLab, however Alex pointed out that the MetaLab structure perhaps is not quite compatible as is for longitudinal data, i.e. it doesn't have a column for age at outcome. So a task would be to look into making these adjustments. I might try to do so since it's a topic related to my PhD, but wanted to add it here in case I don't manage and if someone else comes along and wants to!
  • If we make these changes, Alex shared a meta-analysis which could be added to MetaLab which I have added to the Google Drive

No visualizations for some datasets

For me, visualizations only partially show up: the N conditions and effect sizes appear, as does the funnel plot, but not the forest or bubble (effect by age) plots.
I've found this to be the case for word segmentation and familiar word recognition, but there might be other affected datasets.

Visualizations don't load

This is a really odd one that keeps popping up. On my laptop (Windows 10), all is fine. On my PC (also Windows 10, Chrome or Firefox), the visualizations do not appear. I have tried reloading and different browsers. Any idea what might be happening?

bug in number of meta-analyses

The count doesn't match up across various places on the website (it currently says 29 total, but the sum of the counts in the two domains is 26).

Estimating corr/dependencies in the data

Following on from #68 to find a long-term optimal method for estimating corr/dependencies in datasets.

The MetaLab scripts' current method for imputing missing corr values is to randomly select from other existing values in the dataset (perhaps limited to the same method or a similar age). In langdiscrim we imputed from a normal distribution using the median and variance of the existing corr values, then adjusted the range to [-1, 1]. Rabagliati et al. (2018) imputed the mean weighted by sample size.
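
For concreteness, here is a minimal R sketch of the three strategies; the function and argument names are invented for illustration, and this is not MetaLab's actual code:

# corr: numeric vector with NAs to fill; n: matching sample sizes
impute_corr <- function(corr, n) {
  missing <- is.na(corr)
  observed <- corr[!missing]

  # 1. Resample from observed values (MetaLab's current approach)
  resampled <- corr
  resampled[missing] <- sample(observed, sum(missing), replace = TRUE)

  # 2. Draw from a normal distribution centred on the median of the
  #    observed values, clamped to [-1, 1] (roughly the langdiscrim approach)
  drawn <- corr
  drawn[missing] <- pmin(pmax(rnorm(sum(missing),
                                    mean = median(observed),
                                    sd = sd(observed)), -1), 1)

  # 3. Impute the sample-size-weighted mean (Rabagliati et al., 2018)
  weighted <- corr
  weighted[missing] <- weighted.mean(observed, w = n[!missing])

  list(resampled = resampled, normal = drawn, weighted = weighted)
}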

We will work on a short report (and, in the long run, a paper) in which we review the literature for other methods, compare the methods for the site, and then decide on the best option.

Error in R updating metalab data

@erikriverson I get an error when I try to run source(here::here("build", "update-metalab-data.R")) in R on my machine after adding a new dataset. All datasets up to Mutual exclusivity work (I haven't included the R output for these as they look straightforward), then I get the error shown at the end.

...
Getting raw MetaLab data from Google Sheets for dataset: Mutual exclusivity
tibble [146 x 50] (S3: spec_tbl_df/tbl_df/tbl/data.frame)
$ study_ID : chr [1:146] "allen2010" "bedford2013" "bedford2013" "beverly2003" ...
$ short_cite : chr [1:146] "Allen & Scofield (2010)T" "Beford et al (2013)" "Beford et al (2013)" "Beverly & Estis (2003)" ...
$ long_cite : chr [1:146] "Allen, R., & Scofield, J. (2010). Word learning form videos: More evidence from 2-year-olds. Infant and Child D"| truncated "Bedford, R., Gliga, T., Frame, K., Hudry, K., Chandler, S., Johnson, M. H., ... & BASIS TEAM. (2013). Failure t"| truncated "Bedford, R., Gliga, T., Frame, K., Hudry, K., Chandler, S., Johnson, M. H., ... & BASIS TEAM. (2013). Failure t"| truncated "Beverly, B. L; Estis, J. M. Journal of Speech-Language Pathology and Audiology.Vol. 27, Iss. 3, (October 2003): 163-171." ...
$ doi : logi [1:146] NA NA NA NA NA NA ...
$ peer_reviewed : chr [1:146] "yes" "yes" "yes" "yes" ...
$ coder : chr [1:146] "Molly Lewis" "Molly Lewis" "Molly Lewis" "Molly Lewis" ...
$ expt_num : num [1:146] 2 1 1 1 1 1 2 2 2 1 ...
$ response_mode : chr [1:146] "behavior" "behavior" "behavior" "behavior" ...
$ response_mode2 : chr [1:146] "behavior" "behavior" "behavior" "behavior" ...
$ dependent_measure : chr [1:146] "target_selection" "target_selection" "target_selection" "target_selection" ...
$ native_lang : chr [1:146] NA NA NA NA ...
$ infant_type : chr [1:146] "typical" "NT" "typical" "NT" ...
$ infant_type2 : chr [1:146] "typical" "ASD" "typical" "SLI" ...
$ group_name_1 : chr [1:146] "experimental" "ASD- high risk" "typical" "experimental" ...
$ n_1 : num [1:146] 32 31 31 5 5 5 20 22 25 16 ...
$ mean_age_1 : num [1:146] 1004 749 740 1765 1796 ...
$ same_infant : chr [1:146] "6" "7" "8" "9" ...
$ x_1 : num [1:146] 0.76 NA NA 0.65 0.95 0.9 0.68 0.52 0.65 0.08 ...
$ x_2 : num [1:146] 0.5 NA NA 0.5 0.5 0.5 0.5 0.5 0.5 0 ...
$ SD_1 : num [1:146] NA NA NA 0.058 0.1 0.082 0.14 0.14 0.13 0.19 ...
$ t : num [1:146] 6.25 3.88 3.41 NA NA NA 5.9 0.59 5.34 1.69 ...
$ F : logi [1:146] NA NA NA NA NA NA ...
$ d : num [1:146] NA NA NA NA NA ...
$ d_var : num [1:146] NA NA NA NA NA ...
$ age_range_1 : num [1:146] NA NA NA NA NA ...
$ age_range_2 : logi [1:146] NA NA NA NA NA NA ...
$ n_excluded_1 : num [1:146] NA NA NA NA NA NA 5 10 8 NA ...
$ gender_1 : num [1:146] NA NA NA NA NA NA NA NA NA NA ...
$ num_trials : num [1:146] 4 4 4 10 10 10 8 8 8 6 ...
$ object_stimulus : chr [1:146] "digital" "objects" "objects" "objects" ...
$ N_AFC : chr [1:146] "N_AFC-2" "N_AFC-3" "N_AFC-3" "N_AFC-2" ...
$ mean_comprehension_vocab: num [1:146] NA 335 449 NA NA ...
$ mean_production_vocab : num [1:146] NA NA NA NA NA NA 534 NA NA 35 ...
$ ME_trial_type : chr [1:146] "NN" "FN" "FN" "FN" ...
$ lab_group : chr [1:146] "scofield" "charman" "charman" "estis" ...
$ data_source : chr [1:146] "paper" "paper" "paper" "paper" ...
$ expt_condition : chr [1:146] "experimental" "experimental" "experimental" "experimental" ...
$ exposure_phase : chr [1:146] "test_only" "test_only" "test_only" "test_only" ...
$ method : chr [1:146] "FC" "FC" "FC" "FC" ...
$ participant_design : chr [1:146] "within_one" "within_one" "within_one" "within_one" ...
$ group_name_2 : logi [1:146] NA NA NA NA NA NA ...
$ n_2 : logi [1:146] NA NA NA NA NA NA ...
$ mean_age_2 : logi [1:146] NA NA NA NA NA NA ...
$ SD_2 : logi [1:146] NA NA NA NA NA NA ...
$ r : logi [1:146] NA NA NA NA NA NA ...
$ corr : logi [1:146] NA NA NA NA NA NA ...
$ n_excluded_2 : logi [1:146] NA NA NA NA NA NA ...
$ gender_2 : logi [1:146] NA NA NA NA NA NA ...
$ N_Langs : logi [1:146] NA NA NA NA NA NA ...
$ d_notes : chr [1:146] NA NA NA NA ...

  • attr(*, "spec")=
    .. cols(
    .. study_ID = col_character(),
    .. short_cite = col_character(),
    .. long_cite = col_character(),
    .. doi = col_logical(),
    .. peer_reviewed = col_character(),
    .. coder = col_character(),
    .. expt_num = col_double(),
    .. response_mode = col_character(),
    .. response_mode2 = col_character(),
    .. dependent_measure = col_character(),
    .. native_lang = col_character(),
    .. infant_type = col_character(),
    .. infant_type2 = col_character(),
    .. group_name_1 = col_character(),
    .. n_1 = col_double(),
    .. mean_age_1 = col_double(),
    .. same_infant = col_character(),
    .. x_1 = col_double(),
    .. x_2 = col_double(),
    .. SD_1 = col_double(),
    .. t = col_double(),
    .. F = col_logical(),
    .. d = col_double(),
    .. d_var = col_double(),
    .. age_range_1 = col_double(),
    .. age_range_2 = col_logical(),
    .. n_excluded_1 = col_double(),
    .. gender_1 = col_double(),
    .. num_trials = col_double(),
    .. object_stimulus = col_character(),
    .. N_AFC = col_character(),
    .. mean_comprehension_vocab = col_double(),
    .. mean_production_vocab = col_double(),
    .. ME_trial_type = col_character(),
    .. lab_group = col_character(),
    .. data_source = col_character(),
    .. expt_condition = col_character(),
    .. exposure_phase = col_character(),
    .. method = col_character(),
    .. participant_design = col_character(),
    .. group_name_2 = col_logical(),
    .. n_2 = col_logical(),
    .. mean_age_2 = col_logical(),
    .. SD_2 = col_logical(),
    .. r = col_logical(),
    .. corr = col_logical(),
    .. n_excluded_2 = col_logical(),
    .. gender_2 = col_logical(),
    .. N_Langs = col_logical(),
    .. d_notes = col_character()
    .. )
    Error in if (field$type == "string") { : argument is of length zero

search not working

Search results only contain one result. The JS console shows an error related to the jquery.mark function.

Effect sizes not calculated from F (and t?) values

It looks like rows on the website only show up with effect sizes if those were calculated from means and SDs, not from F or t values. (I have checked that this is the case in Fleur's and my datasets.)

Not sure if this is a problem with the script content or its deployment, so assigning @christinabergmann and @erikriverson. Let me know if I can help.
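
For reference, these are the textbook conversions from t to Cohen's d (standard formulas, not necessarily the exact ones the MetaLab scripts use; for F from a two-group comparison, t = sqrt(F) and the same conversions apply):

# Standard t-to-d conversions, for illustration only
d_from_t_within  <- function(t, n)      t / sqrt(n)
d_from_t_between <- function(t, n1, n2) t * sqrt(1 / n1 + 1 / n2)

# e.g. the allen2010 row in the log above (t = 6.25, n_1 = 32)
d_from_t_within(6.25, 32)  # ~1.10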

Adding team members on MetaLab

Hi @erikriverson, can you help me add a team member on MetaLab? I added Fleur under content > people in what I thought was the same structure as the other members, but she isn't showing up on the website.

Also, I think the old way of doing it (adding team members to metadata > people.yaml) is now redundant, as I see that you, myself, and Sara aren't in that file. Can it be deleted now to avoid confusion?

Changes not updating on website

I have tried to 1) add new levels to columns by editing spec.yaml, and 2) add the language-discrimination dataset, but I am not seeing these changes on the website. Namely, 1) when I try to use the data validator with the language-discrimination URL, it still tells me the dependent_measure column is not validated, and when I check dependent_measure under "Fields information" I can see that the new levels I added are not there yet; and 2) the language-discrimination dataset is not showing up under "Datasets by domain".

Am I missing some steps?

Missing or confusing links to meta-analysis publications

We received feedback that the links to meta-analysis publications are missing or confusing. I suggest all datasets have a line indicating the publication status, so users aren't confused when a link is missing.

e.g. Unpublished: Repository{with link}
Preprint: Paper{with link}, Repository{with link to osf etc}
Published: Paper{with link to article}, Repository{with link to osf etc}

I'm listing below all those that need changes made:

  • Abstract rule learning - link to Rabagliati et al. (2018) doesn't open. Add link to the journal article.
  • Categorization bias - no link to a paper. Is there a paper in progress, and if so, can we link to the repository?
  • Cross-situational word learning - no link to Dal Ben et al. (2019)
  • Familiar word recognition - no link to Carbajal (2018)
  • Function word segmentation - "Published: Bergmann & Cristia (2015), Database"
  • Gaze following - link to Frank, Lewis, & MacDonald (2016) no longer works. Find and add a new link
  • Label advantage in concept learning - add link to the repository of Lewis & Long (unpublished)?
  • Mispronunciation sensitivity - links to a drive without a paper, so add the label "Unpublished: Repository{with link}"
  • Mutual exclusivity - add the label "Published: Lewis et al. (2020)"
  • Natural speech preference - links to a spreadsheet, so add the label "Unpublished: Repository{with link}"
  • Phonotactic learning - find out status and link to Cristia (2018)
  • Prosocial agents - "Published: Margoni & Surian (2018)"
  • Sound symbolism - link doesn't work; find link to Lammertink et al. (2016)
  • Statistical sound category learning - find out status and link to Cristia (2018)
  • Statistical word segmentation - links to GitHub. Add link to the paper or indicate Unpublished
  • Switch Task - links to a drive. Add link to the paper or indicate Unpublished
  • Vowel discrimination (native) - add "Published: Tsuji & Cristia (2014), Database"
  • Vowel discrimination (non-native) - add "Published: Tsuji & Cristia (2014), Database"
  • Word segmentation - "Published: Bergmann & Cristia (2015), Database (https://sites.google.com/site/inworddb/)"

Effect direction in habituation studies: reverse entry or flip during processing?

I realized recently that in the vowels database we code habituation studies as positive when there is a stronger dishabituation response. This is different from most other types of paradigms (right?), so we either need to make this consistent and programmatically flip effect sizes from habituation studies in the code (by checking whether a row is labelled habituation; see the sketch below), or have very clear instructions.
@shotsuji and @lottiegasp - do you have any preference for either? I think each of you solved this in one of the two suggested ways.
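
As a rough illustration of the programmatic option (assuming the template's method column and the computed d_calc effect size; this is not actual build code):

# Flip effect sizes for rows coded with the habituation method (sketch)
flip_habituation <- function(dat) {
  idx <- !is.na(dat$method) & dat$method == "habituation"
  dat$d_calc[idx] <- -dat$d_calc[idx]
  dat
}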

metafor update: random effect structure does not work any more

Right now we have the following random effect structure in our scripts, in server.R:

random = ~ same_infant_calc | short_cite/unique_row

This leads to the following error:

Error in rma.mv(d_calc, d_var_calc, random = ~same_infant_calc | short_cite/unique_row,  : 
Cannot use '~ inner | outer1/outer2' type terms in the 'random' argument.
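
One possible restructuring (an untested assumption, not a confirmed fix) is to pass a list of simpler random-effects terms, avoiding the '~ inner | outer1/outer2' form that rma.mv() rejects; dataset here is a placeholder for the data frame used in server.R:

library(metafor)

# Untested sketch: nested term split into separate formulas
res <- rma.mv(d_calc, d_var_calc,
              random = list(~ same_infant_calc | short_cite,
                            ~ 1 | unique_row),
              data = dataset)

Whether this captures the intended dependency structure would need checking against the original model.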

Publications and presentations about MetaLab

Under Documentation, could we have a new subheading (under Welcome and before Getting started) called "About MetaLab", and then a page with the headings "Publications about MetaLab" and "Presentations about MetaLab"? Under these headings we would include everything that is listed in this Google Doc.

metapower tool

Create a tool for calculating the power of a meta-analysis.
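
A minimal sketch of the underlying calculation, assuming a fixed-effect model, equal group sizes, and a two-sided test (the function name and formulas are textbook illustrations, not a spec for the tool):

# Approximate power of a fixed-effect meta-analysis of k two-group studies
ma_power <- function(d, n_per_group, k, alpha = 0.05) {
  var_d <- 2 / n_per_group + d^2 / (4 * n_per_group)  # per-study variance of d
  se <- sqrt(var_d / k)                               # SE of the summary effect
  z_crit <- qnorm(1 - alpha / 2)
  pnorm(d / se - z_crit) + pnorm(-d / se - z_crit)
}

ma_power(d = 0.3, n_per_group = 20, k = 10)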

Downloading data

A beta-tester asked about the download-data button, which I think we used to have.

Publications - to add to the section on papers using MetaLab

The following papers have used MetaLab and cited us appropriately, so it would be great to add them:

https://onlinelibrary.wiley.com/doi/abs/10.1002/jrsm.1464
Mathur, M. B., & VanderWeele, T. J. (2020). Estimating Publication Bias in Meta‐Analyses of Peer‐Reviewed Studies: A Meta‐Meta‐Analysis Across Disciplines and Journal Tiers. Research Synthesis Methods.

https://rss.onlinelibrary.wiley.com/doi/pdf/10.1111/rssc.12440
Mathur, M. B., & VanderWeele, T. J. (2020). Sensitivity analysis for publication bias in meta‐analyses. Journal of the Royal Statistical Society. Series C, Applied Statistics, 69(5), 1091.

https://econtent.hogrefe.com/doi/abs/10.1027/2151-2604/a000393?journalCode=zfp
Tsuji, S., Cristia, A., Frank, M. C., & Bergmann, C. (2020). Addressing Publication Bias in Meta-Analysis. Zeitschrift für Psychologie.

Please add any I am missing.

Failure to build and deploy

I made changes to the Google Sheet tabular data of language-discrimination (to fix the direction of effect sizes as per #39), and it is failing to build and deploy (see the last three Actions).

@erikriverson Let me know if there is a process I should follow whenever editing the Google Sheet tabular data to avoid this.

I assume this is why the applications on the website are not working at the moment?

Add new fields to the metadata/datasets.yaml

Add new fields to the metadata/datasets.yaml, so that they can be accessed when viewing the datasets on the website.

  • Search protocol Google sheet
  • PRISMA chart
  • External websites

Version control of datasets

Suggestion from @mllewis:
It would be great if we had a way on MetaLab to do version control a little more systematically. I'm imagining adding an optional column to the data indicating which rows were associated with a publication version of the data; then, in the visualization app, you could optionally subset to only the published rows (see the sketch below). That would allow (a) researchers to continually update meta-analyses, and (b) MetaLab to function as an interactive SI for publications, even after the data have been updated.
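
A hypothetical illustration of the subsetting side; the publication_version column and the metalab_data object are invented names:

library(dplyr)

# Keep only rows tied to a given published version of the dataset
published_rows <- metalab_data %>%
  filter(publication_version == "v1")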

Publications - papers describing the meta-analyses?

Would it make sense to have a section under Publications that collects all reports on the meta-analyses currently on MetaLab?

(Ideally below the papers we ask people to cite (see issue #35) and below the papers using MetaLab, i.e. right now the two proceedings papers.)

Restructuring Tutorial page

To ease navigation, I would propose the following:

The tutorial landing page would have the following buttons:

  1. Download data from MetaLab (Optional subtitle: Do you want to conduct a power analysis or run your own regressions? Find more information here on how to obtain the data we display on MetaLab, either via the website or our package metalabr).
  2. Update a meta-analysis on MetaLab (Do you have a brand-new paper out on one of the research questions covered in MetaLab, or have you discovered some data in the file drawer? Or do you think some value might not be quite correct? Here is how you can propose updates to the datasets).
  3. Conduct a meta-analysis (What are the necessary steps to conduct a meta-analysis or systematic review from scratch? Which tools are available? What are best practices? We provide a short primer with video lectures here).
  4. Contribute a meta-analysis to MetaLab (Have you already completed a meta-analysis and would like to add it to MetaLab? Here are step-by-step instructions on how to make your data MetaLab-compatible and see everything displayed in our interactive visualizations).

As for content,

  1. Would be one short blurb on the data download buttons and the documentation for metalabr
  2. Would need to be added and should include the validator workflow.
  3. Can be covered by what we already have (which can be updated, of course)
  4. Should be a single page on how to validate a spreadsheet and submit a PR to update the relevant yaml files.

All comments are welcome!

update column specs in spec.yaml

There are currently discrepancies between spec.yaml and the MA template codebook. I assume the MA template has been updated more recently? If so, the spec sheet will need to be updated before new data can be added to the database.

Two issues @anjiecao came across in the course of working on an MA for the challenge:

  1. response_mode in the template lists both "looking" and "eye-tracking" as options. I don't think we want looking as an option here?
  2. source_of_data is missing from spec.yaml. We probably want options like text/table, plot, and author here.

I suspect there are other discrepancies, but these are two we came across.

Bug in power app

It looks like there's a bug in the power computation when moderators are selected. For instance, when we move the age cursor to select only a subset of studies and push it to the maximum age, the ES differs from when no moderator is selected, although it should be the same, since in that case all ages are included.

some lines not included in the cross-situational word learning dataset

The following lines have been coded in the data spreadsheet, with all the necessary information, but don't appear in the calculated spreadsheet (the one downloaded from the meta-analytic visualization app):

  • McGregor2013
  • Fitneva2017 120m
  • hu2017
  • Filippi2017
  • Smith2013

For within_two rows, the problem might be missing correlations; for within_one rows we don't know, as they have all the necessary information to compute ES.

To do: add note on empty pages

As per @christinabergmann's email of 27/11, since I think this still needs to be addressed:

Just a quick request, can we add [under revision, access the previous tutorial here (Erik had a link to the old site, I think, @saraelshawa ,do you know)] to the FAQ page and others that are empty right now on metalab?
