
ncov2019's Introduction

nCov2019: An R package with real-time data, historical data and Shiny app

Please visit https://github.com/YuLab-SMU/nCov2019 for an up-to-date version.

This package is one of the earliest R packages designed to query COVID-19 data. It has been available since Jan. 2020, when few other data resources existed (see our blog post (in Chinese) (Feb. 03, 2020) and a third-party blog post (in English) (Feb. 11, 2020)).

🏠 Data Sources

Real-time data

Historical data (four public data sources):

  1. Wuhan-2019-nCoV GitHub repository.

    • This data source contains detailed city-level data for China and country-level data worldwide.
  2. National Health Commission of the People’s Republic of China

    • This data source contains province-level data for China.
  3. DXY.cn. Pneumonia. 2020.

    • We collect historical city-level data for China from this source.
  4. 今日头条

    • We collect historical province-level data for overseas countries from this source (starting from 2020-03-15).

      Users can now obtain historical provincial data for China, South Korea, the United States, Japan, Iran, Italy, Germany, and the United Kingdom.

      For example, the code below will return the historical data for Italy.

      library(nCov2019)
      nCov2019_set_country(country = 'Italy')
      x <- load_nCov2019()
      x['province'] # this will return Italy data only.

For more details see our vignette, Preprint, and Shiny app.

✍️ Authors

If you use nCov2019, please cite the following paper:

Wu T, Hu E, Ge X*, Yu G*. 2021. nCov2019: an R package for studying the COVID-19 coronavirus pandemic. PeerJ 9:e11421 https://doi.org/10.7717/peerj.11421

⏬ Installation

Get the development version from GitHub:

## install.packages("remotes")
remotes::install_github("GuangchuangYu/nCov2019")
Main functions:

  • get_nCov2019() to query the latest online information
  • load_nCov2019() to get historical data
  • nCov2019_set_country() to set country options
  • summary and [ to access data
  • plot to present data on map
  • dashboard() to open Shiny app dashboard
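A typical session combining the functions listed above might look like the following sketch (output shapes may vary across package versions, and the underlying data endpoints may no longer be live):

```r
library(nCov2019)

# Query the latest online snapshot
x <- get_nCov2019(lang = 'en')
summary(x)        # overall summary of the latest data
x['global', ]     # access worldwide numbers with `[`

# Load the historical dataset and set a country of interest
nCov2019_set_country(country = 'Italy')
y <- load_nCov2019(lang = 'en')
head(y['province'])  # province-level historical records

# Plot the latest data on a map and open the Shiny dashboard
plot(x)
dashboard()
```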

🎨 Example

Running the script example.R in R via source("example.R") will produce the following figure:

📖 Documents

📈 Shiny Apps that use nCov2019

💖 Collected in resource list

ncov2019's People

Contributors

aminblm, gexijin, guangchuangyu, huerqiang, kmader, timze216


ncov2019's Issues

Dependencies - Suggest

The number of dependencies has greatly increased. I'm happy to see you are putting together a dashboard, but most people use the package just to pull data and now must install packages like sp and sf, which have huge external dependencies (e.g., rgdal); setting this up for Travis alone is extremely painful.

Would you consider moving them to Suggests?

It does not pass R CMD check either due to the large shapefiles stored in the www directory.
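For context, the standard R pattern for the requested change is to list heavy packages under Suggests in DESCRIPTION and guard their use at runtime with requireNamespace(); a minimal sketch (the function name and body here are illustrative, not nCov2019's actual code):

```r
# Sketch of guarding an optional (Suggests) dependency at call time.
plot_on_map <- function(...) {
  if (!requireNamespace("sf", quietly = TRUE)) {
    stop("Package 'sf' is needed for map plotting; ",
         "install it with install.packages('sf').")
  }
  # ... map-drawing code using sf would go here ...
}
```

With this pattern, users who only pull data never need sf installed; the error is raised only when a mapping function is actually called.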

plot_city cannot draw a discrete legend

Following the approach in this article,

plot(x, region='湖北', chinamap=m)

runs fine, but

plot(x, region='湖北', chinamap=m, continuous_scale=FALSE)

raises an error:

Error in cut.default(map2$confirm2, discrete_breaks, include.lowest = T,  : 
  'x' must be numeric

I suspect this line should be changed to

map2$confirm2 <- cut(map2$confirm, discrete_breaks,

Exact scrape time for the overseas data

Hello! Our team is studying the overseas epidemic and would like to cite the nCov2019 package. Since countries sit in different time zones and report data at different times, we would like to know at what specific time of the previous day the data used by nCov2019 is updated (scraped).
Using data = load_nCov2019(), we can see a "time" variable in the data, but it only describes the date (no time zone or time of day), so it is not very precise. We would appreciate clarification.

Thank you!

Failed to install 'nCov2019' from GitHub

Hi,
I am new to GitHub... and I have no idea why I can't install this package. This is the error I ran into:
Error: Failed to install 'nCov2019' from GitHub: Incorrect number of arguments (3), expecting 2 for 'processx_kill'
Here is the complete log of my attempt:

remotes::install_github("GuangchuangYu/nCov2019",dependencies = TRUE)
Downloading GitHub repo GuangchuangYu/nCov2019@master
These packages have more recent versions available.
It is recommended to update all of them.
Which would you like to update?

1: All                              
2: CRAN packages only               
3: None                             
4: shiny   (1.4.0 -> 1.4.0.2) [CRAN]
5: tinytex (0.19  -> 0.20   ) [CRAN]
6: stringi (1.4.5 -> 1.4.6  ) [CRAN]

Enter one or more numbers, or an empty line to skip updates:
1
cowplot      (NA    -> 1.0.0  ) [CRAN]
ggplotify    (NA    -> 0.0.5  ) [CRAN]
maps         (NA    -> 3.3.0  ) [CRAN]
magick       (NA    -> 2.3    ) [CRAN]
shiny        (1.4.0 -> 1.4.0.2) [CRAN]
prettydoc    (NA    -> 0.3.1  ) [CRAN]
BiocManager  (NA    -> 1.30.10) [CRAN]
tinytex      (0.19  -> 0.20   ) [CRAN]
stringi      (1.4.5 -> 1.4.6  ) [CRAN]
gridGraphics (NA    -> 0.5-0  ) [CRAN]
rvcheck      (NA    -> 0.1.8  ) [CRAN]
Installing 11 packages: cowplot, ggplotify, maps, magick, shiny, prettydoc, BiocManager, tinytex, stringi, gridGraphics, rvcheck
trying URL 'https://mirrors.tuna.tsinghua.edu.cn/CRAN/bin/windows/contrib/3.6/cowplot_1.0.0.zip'
Content type 'application/zip' length 1367940 bytes (1.3 MB)
downloaded 1.3 MB

trying URL 'https://mirrors.tuna.tsinghua.edu.cn/CRAN/bin/windows/contrib/3.6/ggplotify_0.0.5.zip'
Content type 'application/zip' length 203238 bytes (198 KB)
downloaded 198 KB

trying URL 'https://mirrors.tuna.tsinghua.edu.cn/CRAN/bin/windows/contrib/3.6/maps_3.3.0.zip'
Content type 'application/zip' length 3696295 bytes (3.5 MB)
downloaded 3.5 MB

trying URL 'https://mirrors.tuna.tsinghua.edu.cn/CRAN/bin/windows/contrib/3.6/magick_2.3.zip'
Content type 'application/zip' length 20089495 bytes (19.2 MB)
downloaded 19.2 MB

trying URL 'https://mirrors.tuna.tsinghua.edu.cn/CRAN/bin/windows/contrib/3.6/shiny_1.4.0.2.zip'
Content type 'application/zip' length 4942207 bytes (4.7 MB)
downloaded 4.7 MB

trying URL 'https://mirrors.tuna.tsinghua.edu.cn/CRAN/bin/windows/contrib/3.6/prettydoc_0.3.1.zip'
Content type 'application/zip' length 1035059 bytes (1010 KB)
downloaded 1010 KB

trying URL 'https://mirrors.tuna.tsinghua.edu.cn/CRAN/bin/windows/contrib/3.6/BiocManager_1.30.10.zip'
Content type 'application/zip' length 100261 bytes (97 KB)
downloaded 97 KB

trying URL 'https://mirrors.tuna.tsinghua.edu.cn/CRAN/bin/windows/contrib/3.6/tinytex_0.20.zip'
Content type 'application/zip' length 102821 bytes (100 KB)
downloaded 100 KB

trying URL 'https://mirrors.tuna.tsinghua.edu.cn/CRAN/bin/windows/contrib/3.6/stringi_1.4.6.zip'
Content type 'application/zip' length 15310634 bytes (14.6 MB)
downloaded 14.6 MB

trying URL 'https://mirrors.tuna.tsinghua.edu.cn/CRAN/bin/windows/contrib/3.6/gridGraphics_0.5-0.zip'
Content type 'application/zip' length 270274 bytes (263 KB)
downloaded 263 KB

trying URL 'https://mirrors.tuna.tsinghua.edu.cn/CRAN/bin/windows/contrib/3.6/rvcheck_0.1.8.zip'
Content type 'application/zip' length 40803 bytes (39 KB)
downloaded 39 KB

package ‘cowplot’ successfully unpacked and MD5 sums checked
package ‘ggplotify’ successfully unpacked and MD5 sums checked
package ‘maps’ successfully unpacked and MD5 sums checked
package ‘magick’ successfully unpacked and MD5 sums checked
package ‘shiny’ successfully unpacked and MD5 sums checked
package ‘prettydoc’ successfully unpacked and MD5 sums checked
package ‘BiocManager’ successfully unpacked and MD5 sums checked
package ‘tinytex’ successfully unpacked and MD5 sums checked
package ‘stringi’ successfully unpacked and MD5 sums checked
package ‘gridGraphics’ successfully unpacked and MD5 sums checked
package ‘rvcheck’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
	C:\Users\thinkpad\AppData\Local\Temp\RtmpemWXK4\downloaded_packages
Error: Failed to install 'nCov2019' from GitHub:
  Incorrect number of arguments (3), expecting 2 for 'processx_kill'

Discrepancy

Thank you for the package!

I see a great discrepancy between the following two numbers:

library(dplyr)
library(nCov2019)

virus <- get_nCov2019()

# total
virus$chinaTotal$confirm
#> 59888

# daily
virus$chinaDayList %>%  
        mutate( 
          date = paste0("2020.", date), 
          date = as.Date(date, "%Y.%m.%d") 
        ) %>%  
        mutate_if(is.character, as.numeric) %>% 
        filter(date == max(date)) %>% 
        pull(confirm) %>% 
        sum()   
#> 44730

New to R: how do I solve this problem?

y<-load_nCov2019()

ggplot(summary(y), aes(as.Date(date, "%m.%d"), as.numeric(confirm))) +
  geom_col(fill = 'firebrick') + theme_minimal(base_size = 14) +
  xlab(NULL) + ylab(NULL) +
  labs(caption = paste("accessed date:", time(y)))
Error in as.Date.default(date, "%m.%d") :
  do not know how to convert 'date' to class "Date"

New to R: how do I solve this problem?

Error: parse error: premature EOF

An error is reported when using the get_nCov2019 function. The package was installed from GitHub.

> x <- get_nCov2019(lang='en')
Error: parse error: premature EOF
                                                       
                     (right here) ------^

Earlier dates missing in version 0.08

xhist <- load_nCov2019(lang='en')
xhist
head(xhist[])
DTCity <- xhist[]
DTWuhan <- DTCity[DTCity$city=="Wuhan"&!is.na(DTCity$city),]
DTWuhan
time province city confirmed cured dead suspected
86 2020-01-25 Hubei Wuhan 572 32 38 0
257 2020-01-26 Hubei Wuhan 618 40 45 0
667 2020-01-28 Hubei Wuhan 1590 47 85 0
767 2020-01-29 Hubei Wuhan 1905 54 104 0
1297 2020-01-30 Hubei Wuhan 2261 54 129 0
1672 2020-01-31 Hubei Wuhan 2639 103 159 0
2147 2020-02-01 Hubei Wuhan 3215 106 192 0
2267 2020-02-02 Hubei Wuhan 4109 175 224 0
2682 2020-02-03 Hubei Wuhan 5142 228 265 0
3004 2020-02-04 Hubei Wuhan 6384 306 313 0
3549 2020-02-05 Hubei Wuhan 8351 374 362 0
3815 2020-02-06 Hubei Wuhan 10117 455 414 0
4547 2020-02-07 Hubei Wuhan 11618 542 478 0
4971 2020-02-08 Hubei Wuhan 13603 747 545 0
5140 2020-02-09 Hubei Wuhan 14982 878 608 0
5799 2020-02-10 Hubei Wuhan 16902 1046 681 0
5899 2020-02-11 Hubei Wuhan 18454 1242 748 0
6555 2020-02-12 Hubei Wuhan 19558 1380 820 0
7077 2020-02-13 Hubei Wuhan 32994 1923 1036 0
7532 2020-02-14 Hubei Wuhan 35991 2023 1016 0
7648 2020-02-15 Hubei Wuhan 37914 2535 1123 0
8358 2020-02-16 Hubei Wuhan 39462 2925 1233 0
8670 2020-02-17 Hubei Wuhan 41152 3507 1309 0
8813 2020-02-18 Hubei Wuhan 42752 4253 1381 0
The output above shows the earliest date is only 01-25, but there should be several earlier days. Dr. Yu, could you please check what went wrong? The data for the 27th is also missing.

Foreign province/state name translation problem

Hello Prof. Yu,
On an English-locale Windows system, translated foreign province/state names cannot be displayed. Example below:
R.version
_
platform x86_64-w64-mingw32
arch x86_64
os mingw32
system x86_64, mingw32
status
major 3
minor 6.2
year 2019
month 12
day 12
svn rev 77560
language R
version.string R version 3.6.2 (2019-12-12)
nickname Dark and Stormy Night

library(nCov2019)
x <- load_nCov2019(lang = "auto")
Warning messages:
1: In readRDS(system.file("country_translate.rds", package = "nCov2019")) :
strings not representable in native encoding will be translated to UTF-8
nCov2019_set_country("China")
China<-x[]
nCov2019_set_country("United States")
US<-x[]

head(China)
time country province city cum_confirm cum_heal cum_dead suspected
2019-12-01 China Hubei Wuhan 1 0 0 0
2019-12-02 China Hubei Wuhan 1 0 0 0
2019-12-03 China Hubei Wuhan 1 0 0 0
2019-12-04 China Hubei Wuhan 1 0 0 0
head(US)
time country province cum_confirm cum_heal cum_dead suspected
2020-03-15 United States 769 1 42 NA
2020-03-15 United States 746 0 6 NA
2020-03-15 United States 431 2 6 NA
2020-03-15 United States 164 1 0 NA
The names of the US states cannot be displayed because of an encoding problem; in RStudio, after switching to a Chinese locale, they show up as raw UTF-8 escapes:
time country province
2020-03-15 <U+7F8E><U+56FD> <U+534E><U+76DB><U+987F><U+5DDE>
2020-03-15 <U+7F8E><U+56FD> <U+7EBD><U+7EA6><U+5DDE>

Updated to version 0.0.6 but still cannot extract historical data

Could anyone help? I have updated to version 0.0.6, but I still cannot extract the historical data. The following error is shown:

library("nCov2019", lib.loc="~/R/win-library/3.5")
x<-load_nCov2019()
Error in readRDS(rds) : error reading from connection
In addition: Warning message:
In readRDS(rds) : invalid or incomplete compressed data

issue

How can this be solved?

Not working?

Thanks for developing this great package.

The nCov2019 package does not seem to be working this morning (2/6/2020).

library(nCov2019)
x <- get_nCov2019()
x[ ]
Error in d[[by]] : subscript out of bounds.

Historical data works well.

Also, when it was working yesterday, the suspected cases were all zero?

Thanks.

Installation error "unable to re-encode 'utilities.R' line 63"

Hi there,

Thank you for the awesome package. I have encountered a problem during the installation. I followed your instructions here.

> remotes::install_github("GuangchuangYu/nCov2019", dependencies = TRUE)
Downloading GitHub repo GuangchuangYu/nCov2019@master
These packages have more recent versions available.
It is recommended to update all of them.
Which would you like to update?

1: All                          
2: CRAN packages only           
3: None                         
4: vctrs (0.2.3 -> 0.2.4) [CRAN]

Enter one or more numbers, or an empty line to skip updates:
1
dplyr (NA    -> 0.8.5) [CRAN]
vctrs (0.2.3 -> 0.2.4) [CRAN]
Skipping 1 packages not available: BiocStyle
Installing 2 packages: dplyr, vctrs
Installing packages into ‘C:/Users/fuhyo/Documents/R/win-library/3.6’
(as ‘lib’ is unspecified)

  There are binary versions available but the source versions are later:
      binary source needs_compilation
dplyr  0.8.4  0.8.5              TRUE
vctrs  0.2.3  0.2.4              TRUE

  Binaries will be installed
trying URL 'https://cran.rstudio.com/bin/windows/contrib/3.6/dplyr_0.8.4.zip'
Content type 'application/zip' length 3221142 bytes (3.1 MB)
downloaded 3.1 MB

trying URL 'https://cran.rstudio.com/bin/windows/contrib/3.6/vctrs_0.2.3.zip'
Content type 'application/zip' length 986125 bytes (963 KB)
downloaded 963 KB

package ‘dplyr’ successfully unpacked and MD5 sums checked
package ‘vctrs’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
	C:\Users\fuhyo\AppData\Local\Temp\Rtmpgbi55L\downloaded_packages
√  checking for file 'C:\Users\fuhyo\AppData\Local\Temp\Rtmpgbi55L\remotes520c7199c7\GuangchuangYu-nCov2019-07fc9a6/DESCRIPTION' ...
-  preparing 'nCov2019':
√  checking DESCRIPTION meta-information ... 
-  checking for LF line-endings in source and make files and shell scripts
-  checking for empty or unneeded directories
-  building 'nCov2019_0.3.0.tar.gz'
   
Installing package into ‘C:/Users/fuhyo/Documents/R/win-library/3.6’
(as ‘lib’ is unspecified)
* installing *source* package 'nCov2019' ...
** using staged installation
** R
Error : (converted from warning) unable to re-encode 'utilities.R' line 63
ERROR: unable to collate and parse R files for package 'nCov2019'
* removing 'C:/Users/fuhyo/Documents/R/win-library/3.6/nCov2019'
Error: Failed to install 'nCov2019' from GitHub:
  (converted from warning) installation of package ‘C:/Users/fuhyo/AppData/Local/Temp/Rtmpgbi55L/file520c6adf26b5/nCov2019_0.3.0.tar.gz’ had non-zero exit status

The error is caused by this line.

But I have managed to install this package manually using the R CMD INSTALL mechanism.

R CMD INSTALL nCov2019/
* installing to library 'C:/Users/fuhyo/Documents/R/win-library/3.6'
* installing *source* package 'nCov2019' ...
** using staged installation
** R
Warning: unable to re-encode 'utilities.R' line 63
** inst
** byte-compile and prepare package for lazy loading
** help
*** installing help indices
  converting help for package 'nCov2019'
    finding HTML links ... done
    dashboard                               html
    get_nCov2019                            html
    load_nCov2019                           html
    trans_city                              html
    trans_province                          html
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
*** arch - i386
*** arch - x64
** testing if installed package can be loaded from final location
*** arch - i386
*** arch - x64
** testing if installed package keeps a record of temporary installation path
* DONE (nCov2019)

And then, everything works perfectly.

I noted that, even during my manual installation, the warning pops up, but it was not treated as an error.

Thank you

Dataset now missing the $global statistics

Previously the dataset included an entry $global, which held the worldwide statistics for different countries. It now includes only one country, "China".

The plot() of the dataset (world map) no longer works - I assume because of the missing data?

Historical data for Beijing, Shanghai, Chongqing etc

No data is returned for Beijing, Shanghai etc.

x <- load_nCov2019() #load historical data
x['北京',]
[1] province city time cum_confirm cum_heal cum_dead
[7] 累计重症病例 累计危重病例 confirm dead heal 新增重症病例数量
[13] 新增危重病例数量 武汉接触史 来源链接 数据来源 公告中是否有病例 其他
[19] X
<0 rows> (or 0-length row.names)

Dead Endpoint?

Hi,

It seems the data returned by the endpoint (https://view.inews.qq.com/g2/getOnsInfo?name=disease_h5&callback=1580373566110) used in the package no longer returns some variables, only chinaTotal and areaTree are now available.

Unable to install the nCov2019 package

remotes::install_github("GuangchuangYu/nCov2019")
reports the error: Failed to install 'nCov2019-master.zip' from local:
schannel: failed to receive handshake, SSL/TLS connection failed
Downloading the offline package did not work either. What is going on?

Failed to install 'nCov2019' from GitHub

Hi,

I am a newbie... I don't know why I can't install this package. I have tried all 4 options, but none of them worked. They all showed the same error:

Error: Failed to install 'nCov2019' from GitHub:
(converted from warning) cannot remove prior installation of package ‘glue’

For example, if I choose option 1, it shows:

remotes::install_github("GuangchuangYu/nCov2019")

Downloading GitHub repo GuangchuangYu/nCov2019@master
These packages have more recent versions available.
It is recommended to update all of them.
Which would you like to update?

1: All
2: CRAN packages only
3: None
4: Rcpp (1.0.3 -> 1.0.4) [CRAN]

Enter one or more numbers, or an empty line to skip updates:
1
glue (NA -> 1.3.2) [CRAN]
Rcpp (1.0.3 -> 1.0.4) [CRAN]
Installing 2 packages: glue, Rcpp

There is a binary version available but the source version is later:
binary source needs_compilation
Rcpp 1.0.3 1.0.4 TRUE

Binaries will be installed
trying URL 'https://cran.rstudio.com/bin/windows/contrib/3.6/glue_1.3.2.zip'
Content type 'application/zip' length 153935 bytes (150 KB)
downloaded 150 KB

trying URL 'https://cran.rstudio.com/bin/windows/contrib/3.6/Rcpp_1.0.3.zip'
Content type 'application/zip' length 2990436 bytes (2.9 MB)
downloaded 2.9 MB

package ‘glue’ successfully unpacked and MD5 sums checked

Error: Failed to install 'nCov2019' from GitHub:
(converted from warning) cannot remove prior installation of package ‘glue’

Can someone help me please?

Thank you in advance!

A suggestion for code optimization

I saw your post on your WeChat public account and was especially curious what method you used to extract the data (I was stunned to find that the get_nCov2019 function is only three lines of code!!). Reading your code was very instructive; thank you, Dr. Yu, for the guidance.

While studying your code, I noticed that when the same argument is checked against multiple values, switch can be combined with match.arg. For example:

summary.nCov2019 <- function(object, by = "total", ...) {
  by <- match.arg(by, c("total", "today"))
  if (by == "total") {
    return(object$chinaDayList)
  }
  return(object$chinaDayAddList)
}

In cases like this, consider an adjustment along these lines:

summary.nCov2019 <- function(object, by = "total", ...) {
  switch(
    match.arg(by, c("total", "today")),
    "total" = object$chinaDayList,
    "today" = object$chinaDayAddList
  )
}

This might make the code more readable.

2020-02-04, 04:47...

Cannot access provincial data this morning

Thanks for your package!
But I cannot access provincial data in the new version with the global map this morning. I followed your WeChat article just now and cannot plot China's map.

In load_nCov2019() there is both 吉林 and 吉林市 as a city in 吉林 (problem in English)

  • There are 16 records of "吉林市" and 17 records of "吉林".
  • When I saw this before, I translated both 吉林 and 吉林市 to "Jilin", so the English version has 33 "Jilin"s. Does anyone know whether these should be the same or different? If they are different, what should "吉林" be and what should "吉林市" be? Both appear under "city".
  • Because I named both "Jilin", summary(english) does not match summary(chinese): there are currently 16 extra records in summary(english), and "吉林市" disappears from summary().

Hi, this iteration seems to have a problem

y <- load_nCov2019()
plot(y, region='china',chinamap=cn, date='2020-02-09')
Warning: Ignoring unknown aesthetics: x, y
Error in f(..., self = self) : Breaks and labels are different lengths

Issue with get_nCov2019

I am getting the following error: Error in rbind(deparse.level, ...) :
numbers of columns of arguments do not match

Based on the code below:

install.packages("janitor")

library(tidyverse)
library(janitor)
library(remotes)

# Install the package
remotes::install_github("GuangchuangYu/nCov2019", force = TRUE)

# Get the latest data - NOTE the default language is Chinese, so use the lang argument
library(nCov2019)
x <- get_nCov2019(lang = 'en')

Real time data at province and city level

Thanks for fixing that issue so fast. I am still having issues at the city and province level:

remotes::install_github("GuangchuangYu/nCov2019")
library(nCov2019)
x <- get_nCov2019()
x['广东', ]
Error: $ operator is invalid for atomic vectors

Unable to get other countries in get_nCov2019() - global

Looks like this feature recently broke:

current_data <- get_nCov2019()
current_data_global <- current_data$global 

current_data_global now only returns data for China, but not for other countries. I believe it used to return a list of countries and their respective cases. thanks!

How to get city-level daily new case counts

Dear Dr. Yu:

x = load_nCov2019()
x

x$data stores only cum_confirm, cum_heal, cum_dead, and suspected.
I want to obtain daily new case counts at the city level.
My current approach is to subtract cum_confirm values on adjacent days, but this produces negative values.
In principle cum_confirm is a cumulative confirmed count, so the later day minus the earlier day should never be negative.
Checking the data, I found that some regions have a cum_confirm value on one day and then 0 for many subsequent days, which is why subtracting adjacent days' values yields negatives.

So I would like to ask whether daily new counts can be obtained directly from nCov2019.
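The subtraction approach described in this issue can be sketched in base R as follows (a sketch assuming a data frame with `time`, `city`, and `cum_confirm` columns, as in the historical data; `daily_new` is an illustrative helper name, not a package function):

```r
# Compute daily new confirmed cases per city from cumulative counts.
# The first day of each city keeps its cumulative value as the "new" count.
daily_new <- function(d) {
  d <- d[order(d$city, d$time), ]
  d$new_confirm <- ave(d$cum_confirm, d$city,
                       FUN = function(x) c(x[1], diff(x)))
  d
}
```

Rows where `new_confirm` comes out negative correspond to the reporting gaps described above (a cumulative count followed by zeros), and would need to be filtered or imputed before use.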

Province- or city-level data for past days?

Dear Dr. Yu,

Is it possible to get province- or city-level data for previous days? For example, can I get the total confirmed cases in Hubei on Jan 30, 2020?

Thanks,
Miao

Hello Professor Yu,

Is there a way to get historical data for each city? For example, I would like the data for each city in Hubei on January 30, 2020.

Thank you!
Miao Cai

Object 'x' not found

Hello, I have just started learning R. After following the steps at the link, printing x gives "object 'x' not found". What should I do?
remotes::install_github("GuangchuangYu/nCov2019")
library(nCov2019)
x <- load_nCov2019()

No BiocStyle for R 3.6.3

When I try to install the package and its dependencies, one of them (BiocStyle) doesn't install. The message says there's no version for the latest R (3.6.3). Is it essential for your package to work? Could I proceed without it? How? Thanks, hg

Problem in getting global statistics

Hi

Somehow the following:
Data <- get_nCov2019(lang='en')
plot(Data) # a map of the world
covid.data <- Data['global']

Data$global
name nowConfirm confirm suspect dead deadRate showRate heal healRate
1 China 3514 82421 174 3306 4.01 FALSE 75601 91.73
showHeal
1 TRUE

There is only one row of data, for China; there is no global data.

Please help.

Thank you

Asaf

Plot of China does not work well with the English version

I'm not sure how this could be fixed, but at the moment the plot function for China and Chinese provinces does not work well with English names. This is probably because the names in the data have been changed to English while the names in the map information are still in Chinese.
