
metacom's People

Contributors

jmespadero, piecol, taddallas


metacom's Issues

Negative z-score for coherence

Hello,
I'm getting a lot of negative Coherence z-scores in my data, so I decided to try the example from "Coherence and Turnover different from previous versions" issue and I'm getting different results. I'm using the 1.5.3 version.

Input

data("varespec")
Coe_r0_1.5.1 <- Coherence(varespec, method = "r0", order = T, binary = T, sims = 1000, seed = 1)
Coe_r0_1.5.1

His output

         name       stat
1      embAbs 251.000000
2           z  57.471647
3           p   0.000000
4     simMean 437.076000
5 simVariance   3.237701
6 method = r0         NA

My output

        name      stat
1      embAbs 251.00000
2           z -54.58247
3           p   0.00000
4     simMean 436.83200
5 simVariance   3.40461
6 method = r0        NA

Do you know what's happening? Which one is correct?
Thanks!
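
For what it's worth, a z-score of this kind is usually computed from the observed statistic and the mean and spread of the simulated null distribution, so its sign flips depending on the order of subtraction. Below is a minimal sketch of that calculation in R; it is not metacom's internal code, and the function name is illustrative.

## Minimal sketch (not metacom's internal code) of the standard z-score from a
## null distribution; whether z is positive or negative depends only on the
## order of subtraction.
z_from_nulls <- function(obs, null_stats) {
  (obs - mean(null_stats)) / sd(null_stats)
}

## Plugging in the numbers reported above (treating the reported "simVariance"
## as the spread that is divided by):
(437.076 - 251) / 3.237701   # ~  57.47, matches the first output
(251 - 436.832) / 3.404610   # ~ -54.58, matches the second output

The two runs agree closely in magnitude, so the discrepancy looks like a sign-convention difference between versions rather than a different null model.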

BoundaryClump function: package version

Dear Dallas,

I am working with your package and I found that BoundaryClump produces a different result from the example described in Dallas (2014). In other words, a community previously classified as Gleasonian (p = 0.15) under metacom v1.3 is now classified as Clumped (p = 0.001) under metacom v1.5.

Visually, the results from the earlier version seem more correct to me. Could you confirm which version is correct?

I used the varespec data from {vegan} with default settings to compare the results.
Thank you in advance,

Danilo Vieira

Dallas, T. 2014. metacom: an R package for the analysis of metacommunity structure. Ecography 37: 402–405.

{metacom v1.3}

install.packages("metacom", repos="https://cran.r-project.org/src/contrib/Archive/metacom/metacom_1.3.tar.gz")
data(varespec)
BoundaryClump(varespec) ## Results from (Dallas, 2014)
     index         P df
1 1.238462 0.1513498 41   ---------> non-significant p value: Gleasonian community

{metacom v1.5}

install.packages('metacom')
require(metacom)
data(varespec)
BoundaryClump(varespec)
   name         stat
1 index  1.607692308
2     p  0.001895953   ---------> significant p value: Clumped community
3    df 21.000000000

could not find function Modularity

Thank you for the nice package,

I have installed and loaded the metacom package, but when I run the Modularity function I get this error:

Modularity(comm, method = "tswap", sims = 1000, scores = 1,
           order = TRUE, c = length(comm), nstarts = 100, returnModules = FALSE)
Error in Modularity(comm, method = "tswap", sims = 1000, scores = 1, order = TRUE,  : 
  could not find function "Modularity"

Any idea?

I got the info about the function from here https://www.rdocumentation.org/packages/metacom/versions/1.5.1/topics/Modularity

with best wishes,
Loukas
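
If the installed version simply does not export Modularity, R raises exactly this error even though a documentation page exists for 1.5.1. A hedged sketch of one way to check what your installed metacom exports, and to install the 1.5.1 version linked above (using devtools::install_version(), which is also used elsewhere on this page):

## Hedged sketch: check whether the installed metacom exports Modularity(), and
## if it does not, install the 1.5.1 version that the linked documentation describes.
library(metacom)
packageVersion("metacom")                 # which version is actually installed
"Modularity" %in% ls("package:metacom")   # FALSE if the function is not exported

library(devtools)
install_version("metacom", version = "1.5.1")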

Utility Suggestion: Allow toggle for serialized creation and analysis of null matrices

I've been using the metacom package to investigate the effects of sampling scale and resolution on the metacommunity structure of marine fishes. I've been working with fairly large datasets, and I noticed that this produces a significant memory leak, which I seem to have tracked down to the creation of the list of null matrices. While lapply over the full list of null matrices may run slightly faster, from a RAM-usage standpoint a for loop that creates a null matrix, calculates the necessary metric, and outputs the statistic may be more practical for large or sparse matrices. This could be exposed as a logical toggle, or as a numerical or proportional argument that sets a batch size for null-matrix creation and calculation (e.g., to produce 1000 nulls but build and analyze them in batches of 10 to reduce the memory requirement, a hybrid lapply/for-loop structure could call NullMaker for a set of 10 matrices 100 separate times; see the sketch below).
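
A rough sketch of the batching idea, hedged: the exact NullMaker() signature and return type are assumed from the call shape described above (a list of null matrices), and metric_fun stands in for whichever statistic is being computed.

## Rough sketch of the batching described above. Assumptions: NullMaker() accepts
## a community matrix, a sims count, and a null-model method, and returns a list
## of null matrices; metric_fun is a user-supplied function computing the statistic.
batched_null_stat <- function(comm, metric_fun, sims = 1000, batch_size = 10,
                              method = "r1") {
  stats <- numeric(0)
  while (length(stats) < sims) {
    n <- min(batch_size, sims - length(stats))
    nulls <- metacom::NullMaker(comm, sims = n, method = method)  # assumed call
    stats <- c(stats, vapply(nulls, metric_fun, numeric(1)))
    rm(nulls)
    gc()  # release the current batch before building the next one
  }
  stats
}

Whether this is exposed as a logical toggle or as a batch-size argument, the point is the same: only batch_size null matrices are ever held in memory at once.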

No R 3.3.2 support

When I try to install the package, it fails, claiming there is no version available for R 3.3.2 (the most current version). Will this be released? If so, when? If not, is there a workaround?
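A couple of hedged workarounds that sometimes help when no binary is available for the running R version; the GitHub path is assumed from the contributor listing above and is not verified here.

## Possible workarounds (hedged): force a source install, or install the
## development version from GitHub (repository path assumed).
install.packages("metacom", type = "source")
# or
devtools::install_github("taddallas/metacom")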

NullMaker example fails to run

rbinom() can create columns (species) with no individuals in the initial dataset, and nullmodel() then errors during matrix simulation.
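
A small, hedged sketch of that failure mode and a simple workaround; it does not reproduce the package's example verbatim, it only shows that rbinom() can produce all-zero species columns and that dropping them avoids the downstream error.

## Hedged sketch: rbinom() can generate species (columns) with no occurrences,
## which the null-model simulation rejects. Dropping empty columns first is one
## workaround.
set.seed(2)
comm <- matrix(rbinom(200, 1, 0.1), nrow = 20)   # may contain all-zero columns
any(colSums(comm) == 0)                          # TRUE when empty species exist
comm <- comm[, colSums(comm) > 0, drop = FALSE]  # drop empty species before simulating nulls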

Bad optimization and code comments

Hello Tad,

my name is Lucas Camacho and I'm an MSc student in the Department of Ecology at the Biosciences Institute, USP, São Paulo, Brazil.

First of all, I would like to thank you for creating this package, which helps scientists work with bipartite networks, and for keeping the package code open. I believe this openness makes a positive contribution to the way we do science today. I've been trying to use a specific function in the package, Modularity, that calculates Barber's (2007) modularity index on adjacency matrices (in my case, pollinators and plants).

Unfortunately, my attempts to use Modularity have been quite frustrating. I believe there are serious optimization issues in the code, given how long even a few simulations take to finish. I am trying to compute Barber's modularity index for an adjacency matrix with only 27 species (13 plants and 14 pollinators), with the sims parameter set to 10 (the smallest allowed), and the function had not returned a result after the 40 minutes I waited. That is too much time for such a small network. My attempts were not restricted to this network or this number of sims; I even used different computers with different processing capabilities, all with the same delay.

Curious about the delay, I looked at the source code and could not clearly understand the steps performed in the function. I believe the code needs more comments, so that someone who did not write the package but wants to understand how the computations are done can follow it.

Finally, I would like to point out that using this function in my analysis would be essential, but it is not feasible given the large number of simulations we run today. This shows the importance of code optimization for large-scale analyses.

I hope these comments help show that the code needs to be optimized and commented so that it can be used and understood more easily.

Again, thanks for making the package and code available, and I hope it continues to improve for use by interaction-network scientists.

Kind regards

Lucas Camacho

Coherence and Turnover different from previous versions

Hi. I ran the Coherence and Turnover functions a couple of years ago. Now, using the same null models, I can't get the same results. It appears that coherence and turnover are being overestimated in the null models, causing larger z-score values. In the case of Turnover, the number of replacements in the original matrix is also different. Do you know what might be the problem? Maybe I am doing something wrong?

This is an example of what is happening, using the varespec dataset from the vegan package in two versions of the metacom package (1.4.3 and 1.5.1):

install.packages("metacom")
library(metacom)

data("varespec")

Coe_r0_1.5.1 <- Coherence(varespec, method = "r0", order = T, binary = T, sims = 1000, seed = 1)
Coe_r0_1.5.1
         name       stat
1      embAbs 251.000000
2           z  57.471647
3           p   0.000000
4     simMean 437.076000
5 simVariance   3.237701
6 method = r0         NA


Coe_r1_1.5.1 <- Coherence(varespec, method = "r1", order = T, binary = T, sims = 1000, seed = 1)
Coe_r1_1.5.1
         name         stat
1      embAbs 2.510000e+02
2           z 1.339347e+01
3           p 6.602409e-41
4     simMean 4.075790e+02
5 simVariance 1.169070e+01
6 method = r1           NA


Tur_r0_1.5.1 <- Turnover(varespec, method = "r0", fill = T, binary = T, sims = 1000, seed = 1)
Tur_r0_1.5.1
         name          stat
1    turnover  6.301000e+03
2           z  2.686969e+01
3           p 4.966051e-159
4     simMean  1.430274e+04
5 simVariance  2.977981e+02
6 method = r0            NA

Tur_r1_1.5.1 <- Turnover(varespec, method = "r1", fill = T, binary = T, sims = 1000, seed = 1)
Tur_r1_1.5.1
         name         stat
1    turnover 6.301000e+03
2           z 5.448243e+00
3           p 5.086978e-08
4     simMean 1.020671e+04
5 simVariance 7.168748e+02
6 method = r1           NA


library(devtools)
install_version("metacom", version = "1.4.3")
library(metacom)

data("varespec")

set.seed(1)
Coe_r0 <- Coherence(varespec, method = "r0", order = T, binary = T, sims = 1000)
Coe_r0
$`EmbAbs`
[1] 251

$z
[1] 13.51749

$pval
[1] 1.233052e-41

$SimulatedMean
[1] 393.525

$SimulatedVariance
[1] 10.54374

$method
[1] "r0"


set.seed(1)
Coe_r1 <- Coherence(varespec, method = "r1", order = T, binary = T, sims = 1000)
Coe_r1
$`EmbAbs`
[1] 251

$z
[1] 5.78413

$pval
[1] 7.288854e-09

$SimulatedMean
[1] 342.861

$SimulatedVariance
[1] 15.88156

$method
[1] "r1"


set.seed(1)
Tur_r0 <- Turnover(varespec, method = "r0", binary = T, sims = 1000)
Tur_r0
$`Turnover`
[1] 5296

$z
[1] -18.96256

$pval
[1] 3.477863e-80

$SimulatedMean
[1] 449.275

$SimulatedVariance
[1] 255.5944

$Method
[1] "r0"


set.seed(1)
Tur_r1 <- Turnover(varespec, method = "r1", binary = T, sims = 1000)
Tur_r1
$`Turnover`
[1] 5296

$z
[1] -6.450244

$pval
[1] 1.1167e-10

$SimulatedMean
[1] 1565.008

$SimulatedVariance
[1] 578.4265

$Method
[1] "r1"

Different results in Turnover

I get qualitatively different results when I use the separate Turnover() function than when I use the Metacommunity() function. The results for coherence and boundary clumping are the same; the only difference is in the turnover result, specifically the simulated mean. Which function is providing the correct answer? Thank you for your help! Example below:

Metacommunity(veg, scores = 1, method = "r1", sims = 1000,
              order = TRUE, allowEmpty = TRUE, binary = TRUE, verbose = FALSE)

The $Turnover element returned by Metacommunity() differs from the separate function; here it indicates significantly high turnover:

     turnover                     z                    pval
  "1,316,771"   "-9.12140545737929" "7.41563066350554e-20"
simulatedMean     simulatedVariance                  method
"298,240.212"     "111663.79926418"                    "r1"

Turnover(veg, method = "r1", sims = 1000, scores = 1, order = TRUE,
         allowEmpty = TRUE, binary = TRUE, verbose = FALSE)

   turnover       z         pval simulatedMean simulatedVariance method
1 1,254,166 11.5047 1.249268e-30     1,700,020          38754.11     r1

This is significantly low turnover.
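
One hedged way to narrow this down is to fix the random seed and run both code paths on the same data, so that any remaining difference in the simulated mean comes from the functions themselves rather than from different null draws. Argument names below follow the calls quoted above and may differ across package versions.

## Hedged sketch: seed-controlled comparison of the two code paths. If the
## simulated means still disagree, the difference is in the functions, not in
## the random null matrices.
set.seed(1)
full <- Metacommunity(veg, scores = 1, method = "r1", sims = 1000,
                      order = TRUE, allowEmpty = TRUE, binary = TRUE)
set.seed(1)
sep <- Turnover(veg, method = "r1", sims = 1000, scores = 1, order = TRUE,
                allowEmpty = TRUE, binary = TRUE)
full$Turnover
sep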
