
tern's Introduction

tern

Project Status: Active – The project has reached a stable, usable state and is being actively developed.

The tern R package contains analysis functions to create tables and graphs used for clinical trial reporting.

The package provides a large range of functionality, such as:

Data visualizations:

Statistical model fit summaries:

Analysis tables:

  • See a list of all available analyze functions here
  • See a list of all available summarize functions here
  • See a list of all available column-wise analysis functions here

Many of these outputs can be added to teal shiny applications for interactive exploration of data. These teal modules are available in the teal.modules.clinical package.

See the TLG Catalog for an extensive catalog of example clinical trial tables, listings, and graphs created using tern functionality.

Installation

tern is available on CRAN and you can install the latest released version with:

install.packages("tern")

or you can install the latest development version directly from GitHub by running the following:

if (!require("remotes")) install.packages("remotes")
remotes::install_github("insightsengineering/tern")

Note that it is recommended to create and use a GITHUB_PAT (GitHub personal access token) when installing from GitHub.
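One way to make a token available to the session (a sketch; the token value below is a placeholder, not a real credential):

```r
# Sketch: expose a GitHub personal access token to the current R session
# before installing from GitHub; replace the placeholder with your token.
Sys.setenv(GITHUB_PAT = "ghp_yourTokenHere")
# then: remotes::install_github("insightsengineering/tern")
```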

See the package vignettes (browseVignettes(package = "tern")) for usage examples.

Related

Acknowledgment

This package is the result of a joint effort by many developers and stakeholders. We would like to thank everyone who has contributed so far!


tern's People

Contributors

6iris6, anajens, arkadiuszbeer, ayogasekaram, brandondbutcher, cicdguy, danielinteractive, edelarua, edgar-manukyan, gogonzo, imazubi, insights-engineering-bot, jennifer-j-li, khatril, kpagacz, legrasv, maximilianmordig, maximo1311, mbrothe71, melkiades, nautilussu, nikolas-burkoff, nolan-steed, pawelru, polkas, shajoezhu, walkowif, wangh107, wwojciech, yli110-stat697


tern's Issues

g_km has trouble with > <

Using 2021_05_05 release on R 3.6.3.

I noticed g_km has trouble when factor levels contain > or <. Here's some code:

library(tern)
library(dplyr)

adtte_arm_bep <- ex_adtte %>% 
  df_explicit_na() %>%
  filter(PARAMCD == "OS", ARM == "A: Drug X", BEP01FL == "Y") %>% 
  mutate(is_event = CNSR == 0,
         group = as.factor(ifelse(AGE > 34, ">Median", "<=Median")))

variables <- list(tte = "AVAL", is_event = "is_event", arm = "group")

g_km(
  df = adtte_arm_bep,
  variables = variables,
  annot_surv_med = FALSE
)

Risk table is off in the plot (screenshot not reproduced here).

Seems to work fine for columns in the demographics table. It would be nice to be able to have > and < in group levels, as stratifying using some cutoff is common. Thanks!
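Until this is fixed, one possible workaround is to recode the offending characters in the factor levels before calling g_km (a base-R sketch; the helper name is hypothetical):

```r
# Hypothetical workaround: recode "<"/">" characters in factor levels
# to plain text so g_km's risk table renders correctly.
sanitize_levels <- function(f) {
  lv <- levels(f)
  lv <- gsub("<=", "le ", lv, fixed = TRUE)
  lv <- gsub(">=", "ge ", lv, fixed = TRUE)
  lv <- gsub("<", "lt ", lv, fixed = TRUE)
  lv <- gsub(">", "gt ", lv, fixed = TRUE)
  levels(f) <- lv
  f
}

grp <- factor(c(">Median", "<=Median"), levels = c("<=Median", ">Median"))
levels(sanitize_levels(grp))
#> [1] "le Median" "gt Median"
```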

Provenance:

Creator: harric17

Data Processing

Basic/generic data pre-processing should not be imposed on the user but rather handled on the back end. This is true for both tern and teal, but the issue is opened in tern to cover both.

  1. Requiring users to change study variable types on the front end to conform to tern/teal variable type requirements is going to diminish adoption. If tern/teal needs a factor variable, then the conversion should happen in the tern package. Many examples in agile-R do not work when pointed at study data.
  2. Having to remove records with missing values in key variables is a requirement that should not fall on the front-end user. This data management should be done in the tern package. An example is the AE table, where the user has to remove records in which coded variables and others, such as grade, have missing values. If that is what is needed, it should be done in the tern package. Note that this also under-reports AEs, because there is no section in the table displaying how many AEs are not coded, which happens while a trial is ongoing.
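The kind of back-end pre-processing this issue asks for can be sketched in base R (illustrative only; all names are assumptions, not tern API):

```r
# Illustrative sketch: coerce character analysis variables to factor and
# drop records with missing key variables, instead of requiring the user
# to do this up front.
prep_analysis_data <- function(df, key_vars) {
  for (v in key_vars) {
    if (is.character(df[[v]])) df[[v]] <- factor(df[[v]])
  }
  # keep only records complete in the key variables
  df[stats::complete.cases(df[key_vars]), , drop = FALSE]
}

ae <- data.frame(
  AEDECOD = c("Headache", NA, "Nausea"),
  AETOXGR = c("1", "2", NA),
  stringsAsFactors = FALSE
)
prep_analysis_data(ae, c("AEDECOD", "AETOXGR"))  # 1 complete record remains
```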

Provenance:

Creator: npaszty

[Investigate if this issue has been addressed] rtables accessor function may not work for tables with nested columns based on names

> library(tern)
> library(dplyr)
> 
> tbl <- basic_table() %>%
+   split_cols_by("ARM") %>%
+   split_cols_by("STRATA2") %>%
+   count_occurrences("AEDECOD") %>%
+   build_table(ex_adae %>% slice(1:300), ex_adsl)
> tbl
                      A: Drug X              B: Placebo           C: Combination   
                    S1          S2         S1         S2          S1         S2    
-----------------------------------------------------------------------------------
dcd A.1.1.1.1    5 (6.8%)    2 (3.3%)   1 (1.5%)    4 (6%)     3 (5.4%)   5 (6.6%) 
dcd A.1.1.1.2    6 (8.2%)    5 (8.2%)   1 (1.5%)    4 (6%)     3 (5.4%)   3 (3.9%) 
dcd B.1.1.1.1    6 (8.2%)    3 (4.9%)   1 (1.5%)   7 (10.4%)   1 (1.8%)   6 (7.9%) 
dcd B.2.1.2.1    7 (9.6%)    4 (6.6%)    2 (3%)    5 (7.5%)    4 (7.1%)   6 (7.9%) 
dcd B.2.2.3.1    3 (4.1%)    2 (3.3%)      0       5 (7.5%)    1 (1.8%)   5 (6.6%) 
dcd C.1.1.1.3   11 (15.1%)   1 (1.6%)    2 (3%)    5 (7.5%)    1 (1.8%)   4 (5.3%) 
dcd C.2.1.2.1    1 (1.4%)    3 (4.9%)    2 (3%)     6 (9%)     3 (5.4%)   6 (7.9%) 
dcd D.1.1.1.1    4 (5.5%)    2 (3.3%)    2 (3%)     2 (3%)     2 (3.6%)   5 (6.6%) 
dcd D.1.1.4.2    6 (8.2%)    4 (6.6%)    2 (3%)    3 (4.5%)    2 (3.6%)   6 (7.9%) 
dcd D.2.1.5.3    5 (6.8%)    4 (6.6%)    2 (3%)     6 (9%)     4 (7.1%)   8 (10.5%)
> 
> # non-unique table names
> names(tbl)
[1] "A: Drug X"      "A: Drug X"      "B: Placebo"     "B: Placebo"     "C: Combination" "C: Combination"
> 
> # Using helper function, we return only the first element for each nested column
> h_col_indices(tbl, "A: Drug X")
[1] 1
> 
> # Impact: in accessor functions used for sorting / pruning the wrong info used if using table names
> first_row <- collect_leaves(tbl[1,])[[1]]
> 
> # Using column indices gives the correct result.
> h_row_counts(first_row, col_indices = 1:6)
     A: Drug X.S1      A: Drug X.S2     B: Placebo.S1     B: Placebo.S2 C: Combination.S1 C: Combination.S2 
                5                 2                 1                 4                 3                 5 
> 
> # Using column names gives the wrong result since within each nested column, only the first record is extracted
> h_row_counts(first_row, col_names = names(tbl))
     A: Drug X.S1      A: Drug X.S1     B: Placebo.S1     B: Placebo.S1 C: Combination.S1 C: Combination.S1 
                5                 5                 1                 1                 3                 3 
> 
> # return only the first element
> h_row_counts(first_row, col_indices = h_col_indices(tbl, "A: Drug X"))
A: Drug X.S1 
           5 

Provenance:

Creator: anajens

Huge number of dependencies

An R CMD check NOTE about this already exists (screenshot not reproduced here).

We have 114 dependencies for tern (the whole tree, including dependencies of dependencies), and together they take about 0.5 GB of disk space.
Of course, most of these packages are also used by other packages.

# using the pacs package: https://github.com/Polkas/pacs
cat(pacs::pac_true_size("tern") / 10^6, "Mb")
# 442.9339 Mb
nrow(pacs::pac_deps("tern"))
# 114

Provenance:

Creator: Polkas

Add geometric mean to summarize_vars

This is commonly used in neuroscience studies.

  • design doc
  • production code
  • unit tests
  • open downstream issue to add to tm_t_summary and tm_t_summary_by in teal.modules.clinical
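A minimal sketch of the statistic itself in base R (the function name is an assumption, not the eventual tern API):

```r
# Geometric mean sketch: exp of the mean of logs; only defined for
# strictly positive values, so return NA otherwise.
geom_mean <- function(x, na.rm = TRUE) {
  if (na.rm) x <- x[!is.na(x)]
  if (length(x) == 0 || any(x <= 0)) return(NA_real_)
  exp(mean(log(x)))
}

geom_mean(c(1, 10, 100))  # 10
```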

Provenance:

Creator: anajens

Wrapper for split_cols_by overall level

In future rtables releases the argument add_overall_col may be deprecated.

Propose a design for a layout function in tern that can be a wrapper for split_cols_by(split_fun = add_overall_level(xx))

Provenance:

Creator: anajens

Redesign LBT05 layout to work with new rtables:: trim_levels_to_map and .spl_context

Redesign LBT05 family of functions (eg count_abnormal_by_marked, s_*, a_*) to work with rtables::trim_levels_to_map. Background issue (insightsengineering/rtables#203)

The idea is that, based on the metadata map, the function can return either low, high, or both low and high summary rows.

Rationale: improved speed, allow users to precisely control which combinations of levels among several categorical variables used in the layout should be displayed.

Please add the PR link(s) below.

  • tern design doc. Please use .spl_context approach. similar to LBT07
  • tern production code for s_, a_, layout function. Also please update relevant unit tests.
  • TLG-C update
  • nsdl.tests update

Tests failure on rocker

See integration tests
blue/organizations/jenkins/NEST-Automation%2Frocker%2Fnestreleases/detail/master/80/pipeline

[2021-08-03T03:18:47.390Z] Running 'testthat.R' ... ERROR
Running the tests in 'tests/testthat.R' failed.
Last 13 lines of output:

  x[128]: "(43.78, 52.581)"
  y[128]: "(43.78, 52.58)"

  x[130]: "(-9.79, 3.183)"
  y[130]: "(-9.789, 3.183)"

  [ FAIL 3 | WARN 4 | SKIP 0 | PASS 1837 ]
  Error: Test failures
  In addition: Warning message:
  The `wrap` argument of `test_dir()` is deprecated as of testthat 3.0.0.
  This warning is displayed once every 8 hours.
  Call `lifecycle::last_warnings()` to see where this warning was generated.
  Execution halted
  Error while shutting down parallel: unable to terminate some child processes

* checking for unstated dependencies in vignettes ... OK
* checking package vignettes in 'inst/doc' ... WARNING
  dir.exists(dir) is not TRUE
  Package vignette without corresponding single PDF/HTML:
    'introduction.Rmd'
* checking running R code from vignettes ...
  'introduction.Rmd' using 'UTF-8'... OK
 NONE
* checking re-building of vignette outputs ... OK
* checking PDF version of manual ... OK
* DONE

Status: 1 ERROR, 2 WARNINGs, 1 NOTE
See '/automation_code/install_rpkgs_with_log/buildfiles/check/tern.Rcheck/00check.log' for details.

Use rocker image to reproduce

<REDACTED>nest/r/rocker/nest:devel-latest

Provenance:

Creator: gogonzo

test pipelines

To check Jenkins pipelines.

Provenance:

Creator: waddella

Tern/rtables issues with RMarkdown related to magrittr v2.0.1 on BEE

linked to https://github.roche.com/NEST/nest_on_bee/issues/91
PR: https://github.roche.com/NEST/nest_on_bee/pull/93

I'm using tern with RMarkdown in R 3.6.3 and frequently get this error when trying to knit the RMarkdown file (error screenshot not reproduced here).

This will occur even after restarting my session and trying to re-knit. Here's some example code:

rm(list=ls())
<REDACTED>NEST/nest_on_bee/master/bee_nest_utils.R")
bee_use_nest(release = "2021_05_05")

library(tern)
library(dplyr)
library(ggplot2)

I find that restarting my session and then running each line above individually helps. But it would be nice to just be able to knit the RMarkdown file straight away from a restarted R session.

Provenance:

Creator: harric17

Increment required versions of packages `emmeans`, `lmerTest`, `lme4` and `optimx`

After the Dec 2020 release, let's increment the required versions in tern of the packages emmeans, lmerTest, lme4, and optimx to the latest versions available for a given R version. Then inform the enableR team to do the same update, so that the next enableR release includes the most recent versions.

Provenance:

Creator: danielinteractive

GDS feedback: adding all population reference line to g_forest.

Consider adding an optional argument to g_forest called vline_overall to show an optional dashed line at the position of the overall treatment effect, as below (screenshot not reproduced here).

vline_overall should take the value TRUE or FALSE and default to FALSE. When vline_overall = TRUE, a second vline is added in forest_grob. The position of this second vline is read from tbl.


Affected modules

  • tm_g_forest_rsp
  • tm_g_forest_tte

Provenance:

Creator: anajens

Extension of stat_median_ci to normal distribution + generalisation to quantiles.

The existing implementation of stat_median_ci is based on the DescTools R package (version 0.99.35). This is a non-parametric (i.e. distribution-free) approach to constructing a median confidence interval; incidentally, it can also be obtained from the acceptance region of the well-known non-parametric sign test against a two-sided alternative. While it is reasonable in general, more accurate CIs can be built under additional assumptions, among which one of the most popular is that the sample comes from a normal distribution. For such a case there are several estimation methods available (see e.g. Chakraborti (2007)). The one referred to as the Lawless Interval (LA) is quite well established (see also Section 4.4 in Meeker et al. (2017)) and it is available in SAS through the CIPCTLNORMAL function (screenshot not reproduced here).

Within this issue, the following two extensions should be researched:

  • Modify stat_median_ci so that it can also compute a version of CIs dedicated for normally distributed samples, using Lawless Interval (LA) method.
  • Optionally, make stat_median_ci more generic, so that any percentile CI can be computed, with the default median. It would require function renaming to e.g. stat_quantile_ci.

Notice: the main reason for raising this issue is to keep consistency with SAS, as in many analyses SPAs use CIPCTLNORMAL for median CI computations.
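The normal-theory construction can be sketched with base R's noncentral t quantiles (an assumption that this matches the Lawless/CIPCTLNORMAL formulation; verify against SAS output before adoption). For p = 0.5 it reduces to the familiar t-interval for the mean:

```r
# Sketch: CI for the p-th percentile of a normal sample via the
# noncentral t distribution (function name is hypothetical).
normal_quantile_ci <- function(x, p = 0.5, conf_level = 0.95) {
  x <- x[!is.na(x)]
  n <- length(x)
  ncp <- -sqrt(n) * qnorm(p)  # noncentrality parameter
  a <- (1 - conf_level) / 2
  se <- sd(x) / sqrt(n)
  c(
    lower = mean(x) - se * qt(1 - a, df = n - 1, ncp = ncp),
    upper = mean(x) - se * qt(a, df = n - 1, ncp = ncp)
  )
}
```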

References:
Chakraborti, S. and Li, J. (2007) Confidence Interval Estimation of a Normal Percentile. The American Statistician, 61(4), 331–336. doi: 10.1198/000313007X244457. https://www.researchgate.net/publication/4986675_Confidence_Interval_Estimation_of_a_Normal_Percentile

Meeker, W.Q., Hahn, G.J. and Escobar, L.A. (2017) Statistical Intervals: A Guide for Practitioners and Researchers. Wiley.

issue 1347: create data transform function for smq tables

NEST/teal.modules.clinical/issues/1347.
This function is going to be used in the upcoming new teal module.
Basically, the function created in TLG-C has been taken and moved into tern.

releases/2021_07_07/tlg-catalog/


Once this PR is accepted, I will add another issue to update the TLG-C by using this new tern helper function.

@bahatsky I am assigning this review to you as you are very familiar with this SMQ analysis and the function that was already created in TLG-C.

Provenance:

Creator: imazubi

New Tern Function: Individual Patient Plot over Time with Treatment Group Mean (IPP02)

The new function should be able to create the following outputs (screenshots not reproduced):

IPP02
Individual Patient Plot over Time with Treatment Group Mean

Individual Patient Plot over Time with Treatment Group Mean with Attributes Unified with a Group

Individual Patient Plot over Time with Treatment Group Mean and 95% CI with Attributes Unified with a Group

MDIS/stream_doc/um/report_outputs_ipp02.html

Provenance:

Creator: npaszty

Design doc for MNG01

MDIS/stream_doc/2_11/um/report_outputs_mng01.html?highlight=mng01 catalogue and GDSR specifications.

releases/2021_07_07/embedded/goshawk/reference/g_lineplot.html

Provenance:

Creator: anajens

d_onco_rsp_label sets factors

Using tern 2020_05_05 with R 3.6.3. Does it make sense to have d_onco_rsp_label additionally convert to factor and set levels since overall response is inherently ordered? The table in the RSPT01 TLG catalog itself has response categories out of order.

Provenance:

Creator: harric17

New Tern Function: Mean Plot (MNG01)

The new function should be able to create the following outputs:

MNG01

Plot of Mean and Confidence Interval
Plot of Mean and Confidence Intervals of Change from Baseline of Vital Signs (Changing the Input Analysis Data Set and Analysis Variable)
Plot of Mean (+/-SD) (Changing the Statistics)
Plot of Mean and Confidence Interval (Modify Alpha Level)
Plot of Mean and Confidence Interval (with Number of Patients only in Table Section)
Plot of Mean and Confidence Interval (with Table Section)
Plot of Median and Confidence Interval (Condense Visits in Table Section)
Plot of Mean and Upper Confidence Limit

MDIS/stream_doc/um/report_outputs_mng01.html

Provenance:

Creator: npaszty

Add exact logistic regression option to `fit_logistic()`.

This is required as a new option for the LGRT02 output table (multivariate logistic regression).

Unfortunately there does not seem to be a conventional implementation for it in R. The elrm package does not seem optimal, as it uses MCMC (which always comes with problems) and it has been removed from CRAN. A couple of code snippets can be found, e.g. see
https://zhanxw.com/blog/2011/02/exact-logistic-regression/

So this would require a quite large research/stats effort.

Provenance:

Creator: danielinteractive

Think about / remedy NA handling in `check_mmrm_vars()`

Good question from @imazubi:
In check_mmrm_vars() we are removing records with NA in order to check whether we have the minimum number of complete records. However, in TLG-C df_explicit_na() is used so NAs are not NAs anymore but explicit missing levels. How should we deal with this?
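One possible remedy can be sketched as follows (assumptions: the "<Missing>" label matches the df_explicit_na() default mentioned elsewhere in this tracker; the function name is hypothetical): map explicit missing levels back to NA before counting complete records.

```r
# Treat the explicit "<Missing>" factor level as NA again when checking
# for the minimum number of complete records.
n_complete <- function(df, vars, na_level = "<Missing>") {
  for (v in vars) {
    x <- df[[v]]
    if (is.factor(x)) x[x == na_level] <- NA
    df[[v]] <- x
  }
  sum(stats::complete.cases(df[vars]))
}

d <- data.frame(
  AVAL = c(1, 2, 3),
  ARM = factor(c("A", "<Missing>", "B"), levels = c("A", "B", "<Missing>"))
)
n_complete(d, c("AVAL", "ARM"))  # 2
```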

Provenance:

Creator: danielinteractive

Add weighted log-rank tests

Besides the log-rank (cox score) p-value, tern currently supports cox wald and likelihood ratio test p-values. Missing are weighted log-rank tests, e.g. Peto-Peto, Gehan-Breslow, etc.

Currently, these tests serve as sensitivity analyses at best (not included in most SAPs). The log-rank test remains the preferred primary analysis for time-to-event endpoints by regulatory agencies. So the question comes down to how much we prioritize primary vs secondary analysis of clinical endpoints.

NEST/tlg-catalog/pull/611

Provenance:

Creator: anajens

`compare_vars` to add referential footnotes when not enough sample size

Currently it just suppresses the warnings or does not show any result. It would be better to either:

  • show NA and then have a footnote explaining that here the sample size was not sufficient to compute any p-value
  • show the p-value and have a footnote with the warning message.

Provenance:

Creator: danielinteractive

[Investigate if we can close this issue already] problematic survival package over >= "3.2.9" version (at least clogit)

NEST/tern/pull/1277
NEST/tern/issues/1275
@anajens
Most importantly, on BEE with R 4.0.3 we use survival version 3.2.7, so this is not a problem there.
@waddella asked us to test NEST packages on R 4.1.0 with a newer survival.

TL;DR
Specifically, survival::clogit can enter an infinite loop with survival versions >= 3.2.9, but only for the "exact" method and a pre-sorted dataset.
If we shuffle the dataset, it optimizes correctly. So it looks like pre-sorted data can now cause problems for the "exact" method.
I created a GitHub issue (therneau/survival#151); hopefully the survival authors will respond.

The new scaling functionality was introduced with commit therneau/survival@cd13496#diff-9c3fe81a7b4cf1c186184f63ea7d06bc27fafc3707a1dcaff9896bb96036349a, which I think is the source of the problem (the scaling also crashed RStudio: therneau/survival#146). Moreover, clogit (the "exact" method) depends on coxph, which appears to be where the problem lies.

In my opinion we should wait until the survival package maintainers solve this issue. Until then we cannot use survival versions >= 3.2.9.

Another option is to use the popular "breslow" method, which is used in other packages (screenshot not reproduced here).

Another suggested solution was to always shuffle the dataset.
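The shuffling workaround can be sketched on toy data (the clogit call is shown only as a comment, since reproducing the hang requires an affected survival version):

```r
# Workaround sketch: randomly permute rows before fitting, so the
# "exact" method does not see a pre-sorted dataset.
set.seed(123)  # make the shuffle reproducible
df <- data.frame(
  id = rep(1:5, each = 2),
  y = rbinom(10, 1, 0.5),
  x = rnorm(10)
)
df_shuffled <- df[sample(nrow(df)), , drop = FALSE]
# survival::clogit(y ~ x + strata(id), data = df_shuffled, method = "exact")
```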

Provenance:

Creator: Polkas

un-muffle car anova

Follow-up to #958 with idea from @collinf1 about how warnings can be handled instead of muffled.

NEST/teal.modules.clinical/issues/674

For the record, note that the message is caught and nothing indicates a message was generated. Alternatively, an adapted version of the construction below could be used to capture the message and eventually attach it as an attribute to the result.

try_car_anova <- function(mod,
                          test.statistic) { # nolint

  y <- tryCatch(
    withCallingHandlers(
      expr = {
        warn_text <- c()
        list(
          aov = car::Anova(
            mod,
            test.statistic = test.statistic,
            type = "III"
          ),
          warn_text = warn_text
        )
      },
      warning = function(w) {
        # If a warning is detected it is handled as "w".
        warn_text <<- trimws(paste0("Warning in `try_car_anova`: ", w))

        # A warning is sometimes expected, then, we want to restart
        # the execution while ignoring the warning.
        invokeRestart("muffleWarning")
      }
    ),
    finally = {
    }
  )

  return(y)
}

Provenance:

Creator: anajens

Informative error handling for missing values

I've seen new NEST users (myself included) get confused when setting up their first AE-by-grade teal visualization and hitting a not very informative "subscript out of bounds" error. It turns out you get this when you have missing values in required variables, e.g. AETOXGR, AEBODSYS, etc. Just wanted to suggest giving the end user a more informative error message for ease of debugging. Thanks!
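The kind of up-front check suggested here can be sketched in base R (the helper name is hypothetical):

```r
# Fail early with an actionable message instead of letting a cryptic
# "subscript out of bounds" error surface later.
assert_no_missing <- function(df, vars) {
  for (v in vars) {
    if (anyNA(df[[v]])) {
      stop(sprintf("Variable '%s' contains missing values; ", v),
           "filter or impute them before building the table.",
           call. = FALSE)
    }
  }
  invisible(df)
}

ae <- data.frame(AEBODSYS = "cl A", AETOXGR = factor(NA))
try(assert_no_missing(ae, c("AEBODSYS", "AETOXGR")))
```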

Provenance:

Creator: harric17

Design doc for AET01_AESI

MDIS/stream_doc/2_11/um/report_outputs_aet01_aesi.html. Likely no new layout functions are needed.

  • Use the STREAM test data, as ADAE from random.cdisc.data does not yet include all variables. Note STREAM v2.11 was released, so BEE paths are updated.
  • Please note which variables are required and should be added to radae in a separate issue.

Provenance:

Creator: anajens

Add (N=xx) to forest plot headers

Currently it's not possible to insert column counts (N=xx) at the highest level (e.g. ARM) when a table has nested columns (example screenshot not reproduced here).

There is an issue open in rtables about how to do this: insightsengineering/rtables#135

Once the above is resolved, please update layout functions and tests for FSTG01, FSTG02, and ONCT05.

Provenance:

Creator: anajens

1618 test aet06 variant3 aet02 variant12

NEST/tlg-catalog/pull/717/files
@anajens
I realized that these TLG unit tests are stored in tern, which is not the repo where I have been working (tlg-catalog is the issue's repo). Hence, I created another branch in this tern repo in order to go forward with the addition of the AET06 variant 3 unit test and the fix of the error in the AET02 variant 12 unit test (the expected structure was updated as well). I don't know if creating another branch was the best course of action, but I wanted to make these updates.
Updated NEWS too (by adding tern x.x.x) in order not to lose these updates before the next version is created.

Provenance:

Creator: imazubi

Support multiplicity adjustment for TTET01

pd/post/do-you-use-ttet01-for-time-to-event-analyses-please-read/
I guess we don't have this problem, in the sense that we currently do NOT support p-value adjustment with Bonferroni.

"Issue: For studies in which more than 2 study arms are displayed in one TTET01 table (e.g., Control, Treatment 1, Treatment 2), STREAM will by default make the Bonferroni p-value adjustment for multiplicity. STREAM v1 allowed the option to employ a separate model for each pairwise comparison (in which case no adjustment would be performed) while STREAM v2 does not allow that option. This could result in inconsistencies in output between STREAM v1 and v2 when more than 2 study arms are displayed in one table."

But I guess it would be a nice additional feature for our t_tte if we supported Bonferroni p-value adjustment.
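If supported, the adjustment itself is straightforward with base R's p.adjust (the p-values below are hypothetical):

```r
# Bonferroni adjustment of pairwise p-values, as t_tte could optionally
# apply when more than two arms are compared in one table.
p_raw <- c(0.012, 0.030, 0.200)  # hypothetical pairwise comparisons
p.adjust(p_raw, method = "bonferroni")
#> [1] 0.036 0.090 0.600
```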

Provenance:

Creator: danielinteractive

Introduce lifecycle

Introduce the lifecycle package for tern. The task needs one code developer to implement or advise, and one SPA to determine how mature each function is.
NEST/teal/pull/835

Provenance:

Creator: gogonzo

replace the use of gridExtra functions if tern has an alternative implementation

tern has a number of grid helper functions (screenshot not reproduced here).

Please assess whether the usage of the gridExtra functions is now obsolete, e.g. grid.arrange can be replaced with tern::arrange_grobs.

If possible, replace the gridExtra calls and then remove gridExtra from the Imports field of the DESCRIPTION file.

Provenance:

Creator: waddella

New Tern Function: Scatter Plot (SHG01) & Scatter Plot (SLG01)

The new function should be able to create the following outputs:

SHG01

Scatterplot of Maximum Level of Liver Function Tests
Scatterplot of Maximum Level of Liver Function Tests including Reference Lines

STREAM Catalog Link

SLG01

Scatterplot of Maximum Level of Liver Function Tests
Scatterplot of Maximum Level of Liver Function Tests (adding reference lines using the recode option)
Scatterplot of Maximum Level of Liver Function Tests (adding reference lines using a pre-processing)

STREAM Catalog Link

Provenance:

Creator: npaszty

d_onco_rsp_label with df_explicit_na

I am using tern 2020_05_05 on R 3.6.3. I noticed df_explicit_na encodes NA values as the factor level <Missing>. However, d_onco_rsp_label handles the value Missing without the angle brackets. I suggest a simple update to d_onco_rsp_label to work better with df_explicit_na, or vice versa.

Provenance:

Creator: harric17

Add helper function to process SMQs

This function should help transform the ADAE dataset from wide to long format for each selected SMQ / CQ.

An alternative idea would be to not transform the dataset, and instead define a new split function to be used in the rtables layout pipeline.
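The wide-to-long transform can be sketched in base R (column names like SMQ01NAM are assumptions following CDISC SMQ naming; the helper name is hypothetical):

```r
# Sketch: pivot to one row per AE record per selected SMQ/CQ flag.
smq_long <- function(df, smq_vars) {
  out <- do.call(rbind, lapply(smq_vars, function(v) {
    keep <- !is.na(df[[v]])
    cbind(df[keep, setdiff(names(df), smq_vars), drop = FALSE],
          SMQ = df[[v]][keep])
  }))
  rownames(out) <- NULL
  out
}

adae <- data.frame(
  USUBJID = c("S1", "S1", "S2"),
  AEDECOD = c("Headache", "Nausea", "Rash"),
  SMQ01NAM = c("SMQ A", NA, "SMQ A"),
  SMQ02NAM = c(NA, "SMQ B", "SMQ B"),
  stringsAsFactors = FALSE
)
smq_long(adae, c("SMQ01NAM", "SMQ02NAM"))  # one row per record per SMQ hit
```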

releases/2020_12_17/embedded/agile-R/tlg_catalog/tables/aet09_smq/ entry.

  • Design doc
  • Add helper function to tern
  • Add tests
  • Update entries in TLG-C that report SMQs

Provenance:

Creator: anajens

Add a prognostic forest plot that tests biomarker effects within specific populations and models


Idea / Background:

  • I needed to create a series of logistic/Cox tests within an entire population to test the prognostic effects of given biomarkers.
  • This requires the test to be within a biomarker itself, not between two arms, and I could not find an existing function in the TLG catalog that achieved this.
  • I spent some time making this (attached) and am happy to share code and thoughts.
  • I just think this would be useful going forward, and no doubt you could make a much better version than I did.

To do:

  • design doc: what is needed here on top of existing tern functions? Can we amend existing tern functions, or should we make a new one? I guess the forest plotting function itself should be flexible enough to reuse.
    • idea
    • layout
    • prototype survival
  • check with stakeholders (Patrick Kimes, Eugene Kim) if solution would be acceptable, and iterate if necessary
  • implement in tern:
    • survival version
    • binary version
      • extract_rsp_biomarkers
      • tabulate_rsp_biomarkers
      • h_logistic_mult_cont_df
      • h_rsp_to_logistic_variables
      • add strata variable option to fit_logistic
      • make h_logistic_simple_terms work with clogit model object
      • make h_glm_simple_term_extract work with clogit model object
      • h_tab_rsp_one_biomarker
  • add issue for biomarker-catalog so we can add one or two pages with this

Centralize formula and model handling in tern

Ideally, we would like to have central formula class(es) in NEST that handle low-level R operations for formula checks, extraction, and model frame construction. This could avoid duplicate helper functions such as s_ancova_items, t_tte_items, etc.

Please see the design doc and comment there. Thanks!
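The kind of shared logic in question can be sketched in base R (illustrative only; all names are assumptions, not the proposed design):

```r
# Illustrative sketch: one central helper building a model formula from
# a variables list, the kind of logic s_ancova_items / t_tte_items
# could share instead of duplicating.
build_formula <- function(response, arm, covariates = character()) {
  rhs <- paste(c(arm, covariates), collapse = " + ")
  stats::as.formula(paste(response, "~", rhs))
}

build_formula("AVAL", "ARM", c("AGE", "SEX"))  # AVAL ~ ARM + AGE + SEX
```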

Provenance:

Creator: danielinteractive
