Hi, I came across your post here by coincidence. Just one remark: to benchmark collapse, you at least need to use its own functions for grouping and data manipulation, e.g. fgroup_by(), fsummarise(), fselect() and fmutate(), instead of the dplyr versions. Most of the performance differences are realized at the grouping stage, not at the stage of final computation.
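To illustrate (a minimal sketch, assuming a data frame df with a grouping column g and a numeric column x; the column names are made up for the example):

```r
library(collapse)
library(dplyr)

# dplyr pipeline: grouping and aggregation both go through dplyr
df |> group_by(g) |> summarise(mx = mean(x))

# collapse equivalent using its own verbs: the optimized grouping
# happens inside fgroup_by(), not only in the final fmean() call
df |> fgroup_by(g) |> fsummarise(mx = fmean(x))
```

Swapping in fmean() alone while keeping dplyr::group_by() would miss most of the speedup, since the grouping step is where much of the work happens.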
Another thing to be aware of is different library defaults. For example, arrow performs unsorted grouping, i.e. the groups are not sorted upon aggregation. collapse, in line with dplyr, does sorted grouping, which has an additional cost. So for the comparison with arrow, using fgroup_by(..., sort = FALSE) would be important. A final difference in the defaults regards missing-value removal. In collapse, the default is na.rm = TRUE, which also has a small performance cost. So ideally you would use fmean(na.rm = FALSE) in benchmarks without missing values.
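Putting both adjustments together, a benchmark call that matches arrow's defaults might look like this (again a sketch with hypothetical column names g and x):

```r
library(collapse)

# Unsorted grouping (matches arrow's behavior) and no NA handling,
# appropriate when the benchmark data contains no missing values
df |> fgroup_by(g, sort = FALSE) |> fsummarise(mx = fmean(x, na.rm = FALSE))
```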
So in general: when benchmarking different libraries it is also important to look at their default settings, to make sure you are comparing apples to apples under the hood. Are the defaults set up for greater convenience or for maximum performance? What is the default behavior regarding sorting, missing values, number of threads, etc.?