
datazoom.amazonia's People

Contributors

annasaraiva, antoniobergallo, arthurcbps, brenoavidos, brunoalcantarad, carolinamoura2000, dacbarbosa, fernandanevesw, franciscocavalcanti, frediedidier, giuliaimbu, gnjardim, igorrigolon, luizguilhermelopesm, mariaclaramano, matthieustigler, michellessouza, pablotadeu, titogbruni, victordamatta, victorhugoterziani


datazoom.amazonia's Issues

add municipality codes to IEMA when `raw_data` is FALSE

Currently the raw_data option makes no difference, and there are no municipality codes, only a column with municipality names.

library(datazoom.amazonia)
load_iema()
#> File downloaded:
#> • 'AMAZONIA_nao_atendidos.xlsx' <id: 10JMRtzu3k95vl8cQmHkVMQ9nJovvIeNl>
#> Saved locally as:
#> • 'C:\Users\igorr\AppData\Local\Temp\RtmpY1ELBW\filec4820212c3.xlsx'
#> # A tibble: 335 × 3
#>    municipio       populacao_nao_atendida uf   
#>    <chr>                            <dbl> <chr>
#>  1 sena madureira                   26894 AC   
#>  2 xapuri                           13420 AC   
#>  3 porto acre                        7398 AC   
#>  4 mancio lima                       6003 AC   
#>  5 brasileia                         5644 AC   
#>  6 cruzeiro do sul                   4948 AC   
#>  7 rio branco                        4933 AC   
#>  8 bujari                            4047 AC   
#>  9 feijo                             2314 AC   
#> 10 rodrigues alves                   2052 AC   
#> # … with 325 more rows

Created on 2022-07-07 by the reprex package (v2.0.1)

As an improvement, we should try to match the cities to their municipality codes when raw_data = FALSE (which might be tricky due to spelling inconsistencies). When raw_data = TRUE, we should also return the data before stripping away accents and capital letters from the city names.
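A minimal sketch of that matching step, assuming the tibble above is stored as iema and that a lookup table muni_codes with columns code_muni, name_muni, and uf (e.g. built from IBGE's municipality list) is available; both sides are normalized to lowercase ASCII before joining, since the IEMA names already come without accents or capitals:

library(dplyr)

# Drop accents and capital letters so that, e.g., "Mâncio Lima" on the
# IBGE side matches "mancio lima" on the IEMA side
normalize_muni <- function(x) {
  tolower(iconv(x, from = "UTF-8", to = "ASCII//TRANSLIT"))
}

iema_with_codes <- iema %>%
  left_join(
    muni_codes %>%
      mutate(name_norm = normalize_muni(name_muni)) %>%
      select(code_muni, name_norm, uf),
    by = c("municipio" = "name_norm", "uf")
  )

# Rows left with NA in code_muni are the spelling inconsistencies that
# still need manual attention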

Error in the dataframe sigmine_active

Hi
In the 'sigmine_active' database I used the command below to add a characteristic, but an error appeared that I've never seen before. Do you know what it means?
It seems to point to a problem in the dataframe.

library(tidyverse)
library(datazoom.amazonia)
minera <- load_sigmine(dataset = 'sigmine_active', raw_data = TRUE, language = "pt")


# In the "CONCESSÃO DE LAVRA" phase, which are the 20 companies that have the most records
filter(minera, fase=="CONCESSÃO DE LAVRA") %>% 
  group_by(nome) %>% 
  summarize(total=n()) %>%    
  arrange(desc(total)) %>% 
  head(20)

Error:
st_as_s2(): dropping Z and/or M coordinate
st_as_s2(): dropping Z and/or M coordinate
Error in s2_geography_from_wkb(x, oriented = oriented, check = check) :
Evaluation error: Found 1 feature with invalid spherical geometry.
[1] Loop 0 is not valid: Edge 39 has duplicate vertex with edge 51.
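This is not a fix for the invalid polygon itself, but a workaround sketch that often unblocks attribute-level work like the count above: the messages come from sf's spherical-geometry engine (s2), and sf can be told to fall back to planar (GEOS) geometry instead:

library(sf)
sf_use_s2(FALSE)
# ...then re-run the filter()/group_by()/summarize() pipeline above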

Inconvenience with DOWNLOAD - IMAZON, SEEG, IEMA

The four README examples below require installing the "googledrive" package, which asks for authorizations that look very invasive and may scare the user.

Context

The problems occurred when running the examples in the README. The package was downloaded from GitHub via devtools.

Code

IMAZON

data <- load_imazon(dataset = "imazon_shp", raw_data = TRUE,
                     geo_level = "municipality", language = "pt")

SEEG

data <- load_seeg(dataset = "seeg_industry", 
                  raw_data = FALSE,
                  geo_level = "state")
data <- load_seeg(dataset = "seeg", 
                  raw_data = TRUE,
                  geo_level = "municipality"

IEMA

data <- load_iema(dataset = "iema", raw_data = FALSE,
                     geo_level = "municipality", language = "pt")
data <- load_iema(dataset = "iema", raw_data = FALSE,
                     geo_level = "municipality", language = "eng")

Output

"Please follow the steps from googledrive package to download the data. This may take a while"

Goals and Expectations

The aim was to download the data, which was not possible due to this requirement of the googledrive package.

Suggestions!

One suggestion, brought by Fernanda, is to add instructions to the message displayed, to help the user meet the necessary requirements. It is also important to state that these authorizations are not harmful and won't alter the user's personal Drive, so people feel more comfortable granting the necessary permissions and are able to download and use the data. A sketch of such a message follows.
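A minimal sketch of what the expanded message could look like (the wording is illustrative, not the package's actual text):

message(
  "Downloading this dataset uses the 'googledrive' package.\n",
  "You will be asked to authorize access to your Google account;\n",
  "the package only reads files and will not alter your Drive.\n",
  "Tick every box on the consent screen, otherwise the download\n",
  "fails with a 403 'insufficient authentication scopes' error."
)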

Problem installing from Github

Hello! I was trying to install the package from GitHub with the path specified in the README file. But every time I try, a message appears saying that two packages have more recent versions (purrr and curl). Whether I choose to update them or not, the installation of the package starts but never completes (I've tried several times already), while also showing this message:

(screenshot of the error message not included)

Error in <LOAD_CIPO> - <All datasets>

Very brief description of the problem

When trying to use the load_cipo function with the argument search = "" (the default), I get an error.

Context

This happens with all datasets in the function, whether the package is installed from GitHub or CRAN.

Code

brazilian_actors <- load_cipo(dataset = "brazilian_actors")
# also happens with "international_cooperation" and "forest_governance"

Error message and Output

Error in `dplyr::filter()`:
In argument: `stringr::str_detect(aux, param$search)`.
Caused by error in `stringr::str_detect()`:
! `pattern` can't be the empty string (`""`).

Suggestions!

Remove the filter in the case of search = ""; a sketch follows.
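A minimal sketch of that fix (df, aux, and param$search mirror the identifiers visible in the error message; the actual code in load_cipo may be organized differently):

if (!identical(param$search, "")) {
  df <- dplyr::filter(df, stringr::str_detect(aux, param$search))
}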

IPS README example #2 error

When I tried to run the example code, I encountered the following error message:

Error in dplyr::bind_cols():
! Can't recycle ..1 (size 3865) to match ..2 (size 773).
Run rlang::last_error() to see where the error occurred.

rlang::last_error()
<error/vctrs_error_incompatible_size>
Error in dplyr::bind_cols():
! Can't recycle ..1 (size 3865) to match ..2 (size 773).

Backtrace:
 1. datazoom.amazonia::load_ips(...)
 2. dplyr::bind_cols(year, df)
 3. vctrs::vec_cbind(!!!dots, .name_repair = .name_repair)
Run rlang::last_trace() to see the full context.

rlang::last_trace()
<error/vctrs_error_incompatible_size>
Error in dplyr::bind_cols():
! Can't recycle ..1 (size 3865) to match ..2 (size 773).

Backtrace:
 1. ├─datazoom.amazonia::load_ips(...)
 2. │ └─dplyr::bind_cols(year, df)
 3. │ ├─dplyr:::fix_call(vec_cbind(!!!dots, .name_repair = .name_repair))
 4. │ │ └─base::withCallingHandlers(...)
 5. │ └─vctrs::vec_cbind(!!!dots, .name_repair = .name_repair)
 6. └─vctrs::stop_incompatible_size(...)
 7.   └─vctrs:::stop_incompatible(...)
 8.     └─vctrs:::stop_vctrs(...)
 9.       └─rlang::abort(message, class = c(class, "vctrs_error"), ..., call = vctrs_error_call(call))
I tried running the code with multiple time periods and the issue only happened when trying to cover three or more years.
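For context, a self-contained reproduction of the failure mode: dplyr::bind_cols() only recycles length-1 inputs, so a 3865-element column cannot be bound to a 773-row data frame (and 3865 = 5 × 773, which fits the report that the error appears once enough years are requested):

library(dplyr)
bind_cols(data.frame(year = 1:3865), data.frame(value = 1:773))
# Error: Can't recycle `..1` (size 3865) to match `..2` (size 773).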

Citation template inclusion

Data Zoom now has a citation template.

Please include it in the package as a message. It should show up when the user loads the package. If you think of any other situations where it would be interesting for the message to pop up, feel free to include the template there too.

Written format:
Data Zoom (2023). Data Zoom: Simplifying Access To Brazilian Microdata.
https://www.econ.puc-rio.br/datazoom/english/index.html

BibTeX format:
https://drive.google.com/file/d/1Nyuw9LANR78not9O3Ssh-OEP-b5y7RN1/view?usp=share_link
Please copy and paste this link into your browser's address bar. I couldn't upload the .tex file to GitHub because it doesn't support this format.
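A minimal sketch of one standard way to do this in an R package, via a .onAttach() hook (an inst/CITATION file would additionally expose the template through citation("datazoom.amazonia")):

# In the package's R/ directory, e.g. R/zzz.R:
.onAttach <- function(libname, pkgname) {
  packageStartupMessage(
    "Data Zoom (2023). Data Zoom: Simplifying Access To Brazilian Microdata.\n",
    "https://www.econ.puc-rio.br/datazoom/english/index.html"
  )
}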

Processing step: why does `raw_data=FALSE` return more rows than `raw_data=TRUE`?

Hi

Looking at the DETER data, I get more rows in the output with raw_data = FALSE than with raw_data = TRUE:

  • 268795 rows with raw_data = TRUE, in load_deter(dataset = 'deter_amz', raw_data = TRUE)
  • 274080 rows with raw_data = FALSE, in load_deter(dataset = 'deter_amz', raw_data = FALSE)

Why is this? I looked at the documentation but could find no explanation. I assume this happens for polygons that span multiple municipalities? But then, if an alert is split in two, isn't the information about which polygons belong to the same alert lost?

Thanks!

Problem in CEMPRE README example #2

Showing data only for the Legal Amazon is only available at the "municipality" level (geo_level = "municipality"). However, with that geo_level, the database becomes too big to run; R warned that this could happen due to the choice of municipality level.

Also, while running the code with geo_level = "municipality", the following warning was shown:

(screenshot of the warning not included)

Expansion of <LOAD_ANEEL> and <LOAD_EPE>

Of the 6 databases suggested by CPI for the energy visualizations, 4 are already in datazoom.amazonia.

load_aneel loads the datasets "energy_development_budget" and "energy_generation".
load_epe loads the datasets "energy_consumption_per_class" and "national_energy_balance".

The two databases not yet included are:

  1. "Anuário Estatístico de Energia Elétrica": https://www.epe.gov.br/pt/publicacoes-dados-abertos/publicacoes/anuario-estatistico-de-energia-eletrica
  2. "Relação de Empreendimentos de Geração Distribuida": https://dadosabertos.aneel.gov.br/dataset/relacao-de-empreendimentos-de-geracao-distribuida

Following the logic used in the functions, the first database would be added as a new dataset in load_epe and the second as a new dataset in load_aneel.
More information about these databases can be found in our drive, under the paths:

Data Zoom > Sites > Site - Data Zoom Amazônia > Posts > Energia - Data Zoom e CPI - AMZ exporta energia

Data Zoom > Sites > Site - Data Zoom Amazônia > Posts > Energia - Data Zoom e CPI - GeraçãoDistribuída

Error in Download - IEMA

Hello,
The same error occurred while downloading both of the following datasets:

Download treated data (raw_data = FALSE)

data <- load_iema(dataset = "iema", raw_data = FALSE,
                  geo_level = "municipality", language = "pt")

Download treated data in english

data <- load_iema(dataset = "iema", raw_data = FALSE,
                  geo_level = "municipality", language = "eng")

First, I was requested to install the "googledrive" package and then I had to authorise some permissions. When the authentication was complete, this error appeared:

Error in gargle::response_process():
! Client error: (403) Forbidden
Insufficient Permission: Request had insufficient authentication scopes.

  • domain: global
  • reason: insufficientPermissions
  • message: Insufficient Permission: Request had insufficient authentication scopes.
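A hedged workaround sketch: if the underlying Drive files are shared publicly (they appear to be, since any Google account can fetch them), googledrive can download them with no OAuth scopes at all; whether load_iema() picks this state up depends on its internals:

googledrive::drive_deauth()  # put googledrive in a no-login state
data <- load_iema(dataset = "iema", raw_data = FALSE,
                  geo_level = "municipality", language = "pt")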

Error assuming windows path?

Hi

I believe the package is assuming Windows paths (i.e. using \ separators)? This would be a problem for Linux/Mac users.

Indeed, running load_deter(dataset = 'deter_amz', raw_data = TRUE), I get an error message (note the mixed separators):

Error: Cannot open "/tmp/RtmpOQISI6\deter_public.shp"; The file doesn't seem to exist.

But the file definitely exists: file.exists("/tmp/RtmpOQISI6/deter_public.shp") returns TRUE.

Looking at the code, I think you might just want to replace paste(dir, "deter_public.shp", sep = "\\") in external_download with
file.path(), which makes sure to use the right separator on Windows as well as Linux/Mac, as spelled out below.
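The one-line change, spelled out (the variable name path is illustrative):

# Instead of: path <- paste(dir, "deter_public.shp", sep = "\\")
path <- file.path(dir, "deter_public.shp")  # separator valid on every OS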

Thanks!

Duplicated municipality in the data generated by load_ibama()

If we run the following code:

load_ibama(
 download_data = FALSE,
 load_from_where = "./Desktop/data.xls",
 time_aggregation = year,
 space_aggregation = municipality
)

In the resulting data (data.xls), the municipality with cod_municipio == 1100130 appears twice for the year 2019. This probably happens because in one observation the municipality name is municipio == Machadinho d'Oeste and in the other it is municipio == Machadinho D'Oeste.

The file R/ibama.R needs to be revised to fix this problem; a sketch follows.
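A hedged sketch of one possible fix (the column names follow this issue; the actual aggregation inside R/ibama.R may differ): normalize the municipality name before grouping, so both spellings collapse into a single row:

library(dplyr)
library(stringr)

# df stands for the intermediate data frame inside load_ibama()
df_fixed <- df %>%
  mutate(municipio = str_to_title(municipio)) %>%  # one spelling for both rows
  group_by(cod_municipio, municipio, ano) %>%
  summarise(across(where(is.numeric), sum), .groups = "drop")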

Define license

In the right panel, GitHub indicates that the license for this package is "Unknown, MIT licenses found". We should state explicitly which license we are using.

I tried to fix that by excluding one of the license files, but it didn't pass the package check (5217412).

Error in Download - IBAMA

Hello,
An error occurred while downloading the following data:

Download treated collected fines data from "BA"

data <- load_ibama(dataset = "collected_fines", raw_data = FALSE,
states = "BA", language = "pt")

Error in dplyr::mutate():
! Problem while computing municipio = dplyr::case_when(...).
Caused by error in value[[i]]:
! índice fora de limites

(The last line is Portuguese for "subscript out of bounds".)

Suggestion: picking datasets from keyboard input

Description of the problem

When I want to use some function (I'll use load_aneel as a running example), I find it extremely satisfying to just type

df <- load_aneel()

and instantly get a neat result. Because we have default options that we assume most users want (e.g. raw_data = FALSE and language = "eng"), this is in fact possible for many functions. But when a function has several equally important datasets, the user must choose one. So my code above is met with

Error in load_aneel() : argument "dataset" is missing, with no default

To use the function, one is forced to copy/paste or -- god forbid -- manually type some long string of characters, such as "energy_development_budget" or "energy_generation".

Suggestion

In the README section for ANEEL, we already have this ready:


Options:

  1. dataset: there are two choices:
    • "energy_development_budget": government spending towards
      energy sources
    • "energy_generation": energy generation by entity/corporation
      ...

So what if I typed

df <- load_aneel()

and got something like this in the console

No dataset selected. Type a number to pick one

1 - "energy_development_budget": government spending towards energy sources

2 - "energy_generation": energy generation by entity/corporation

Not sure how to implement it in a concise way that works for all functions, but it would be pretty cool; one possible sketch follows.
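A minimal sketch of one way to do it with base R's utils::menu(), assuming each load_* function can supply a named vector of dataset descriptions (prompt_dataset and its argument are illustrative names):

prompt_dataset <- function(datasets) {
  if (!interactive()) {
    stop("argument \"dataset\" is missing, with no default")
  }
  choice <- utils::menu(
    paste0('"', names(datasets), '": ', datasets),
    title = "No dataset selected. Type a number to pick one"
  )
  if (choice == 0) stop("no dataset selected")
  names(datasets)[choice]
}

# Inside load_aneel(), for example:
# if (missing(dataset)) {
#   dataset <- prompt_dataset(c(
#     energy_development_budget = "government spending towards energy sources",
#     energy_generation = "energy generation by entity/corporation"
#   ))
# }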

IEMA error

Hello, the IEMA loader is giving an error.

Dataframe without column names

After running this command for the dataset "areas_embargadas", the generated dataframe does not have column names:

ibama <- load_ibama(dataset = "areas_embargadas", raw_data = TRUE,
                    language = "pt", legal_amazon_only = FALSE)

Mapbiomas: Waiting for authentication in browser...

When running the example line:

data = load_mapbiomas(dataset = "mapbiomas_cover",
                      raw_data = FALSE,
                      language = "eng",
                      cover_level = 0)

the message below stayed on screen for hours:

Waiting for authentication in browser...
Press Esc/Ctrl + C to abort

Error in Download - SEEG

Hello,
An error occurred while downloading the first example of the SEEG data:

Download raw data (raw_data = TRUE) of greenhouse gases (dataset = "seeg") by municipality (geo_level = "municipality")

data <- load_seeg(dataset = "seeg",
raw_data = TRUE,
geo_level = "municipality")

First, I was requested to install the "googledrive" package and then I had to authorise some permissions. However, I didn't authorise it to edit, create, and delete my files in Google Drive, because I thought that was a bit invasive. When the authentication was complete, this error appeared:

Error in gargle::response_process():
! Client error: (403) Forbidden
Insufficient Permission: Request had insufficient authentication scopes.

  • domain: global
  • reason: insufficientPermissions
  • message: Insufficient Permission: Request had insufficient authentication scopes.

Create a warning for SIDRA's limit on the number of requested values

When a user wants to generate municipal GDP data across several years, the command can fail because SIDRA caps the number of values per request at 50,000.

That is, if the user runs the following command:

data <- load_gdp(c(2000:2021))

the result will be an error, because the number of municipality-years is greater than 50,000 (94,690 in this case, to be exact).

It is therefore worth warning the user about this limit in the examples and vignettes; a sketch of such a check follows.
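A minimal sketch of such a check (the 50,000 cap comes from SIDRA; n_municipalities stands in for however the function counts the requested municipalities):

n_values <- length(time_period) * n_municipalities
if (n_values > 50000) {
  warning("SIDRA serves at most 50,000 values per request (", n_values,
          " requested); try splitting time_period into smaller ranges.")
}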

Inconvenience with size - PEVS, COMEX, TerraClimate, BACI

The README examples for these datasets are too big: they are either impossible to download or take too much time. Therefore, they don't work well as examples.

Context

The datasets used are listed in the issue name.

Code

PEVS

data <- load_pevs(dataset = 'pevs_forest_crops', 
                  raw_data = TRUE, 
                  geo_level = "municipality", 
                  time_period = 2012:2013, 
                  language = "eng")

COMEX

load_br_trade(dataset = "comex_import_prod",
              raw_data = FALSE,
              time_period = 1997:2021)

TerraClimate

max_temp <- load_climate(dataset = "max_temperature", time_period = 2000:2020

BACI

clean_baci <- load_baci(dataset = "HS92", raw_data = FALSE, time_period = 2016,
                        language = "pt")

Goals and Expectations

When running these examples, the user expects to receive a dataframe containing treated data or a list containing raw data. With this kind of problem there is no output, as the download never finishes.

Suggestions!

All the suggestions consist of restricting the time range or choosing a coarser geographic level. They do not solve the underlying problem, but they are consistent and sensible ways of overcoming the inconvenience. For the BACI example, though, this is not achievable, as the data is already restricted to a single year and there is no geographic level option.

PEVS (Laura)

geo_level = "municipality" makes database too big to run. I found that setting geo_level to "region" made a good example

COMEX (Victor)

Year span reduced to 2 years in the example.

load_br_trade(dataset = "comex_import_prod", raw_data = FALSE, time_period = 2020:2021)

TerraClimate (Arthur)

max_temp <- load_climate(dataset = "max_temperature", time_period = 2000:2002)

Error in Download - TerraClimate - Example 1

The example is:

Downloading maximum temperature data from 2000 to 2020

max_temp <- load_climate(dataset = "max_temperature", time_period = 2000:2020)

The result would be a vector of 785.2 MB, or approximately 488 million observations; no machine I tested had enough RAM to handle a database this big, so it is not a good example.
The example is far too heavy to introduce the function, although it worked when the interval was reduced to fewer years, such as 2000:2002 or 2000:2001.

Problem in IMAZON

Please follow the steps from googledrive package to download the data. This may take a while.
Waiting for authentication in browser...
Press Esc/Ctrl + C to abort

Downloading data from IMAZON requires installing the "googledrive" package. By itself this is not really a problem; however, when following the instructions given by the package, several invasive-looking permissions must be granted, such as allowing your Google Drive files to be moved, altered, and deleted.

load_ibama: status was 'SSL peer certificate or SSH remote key was not OK'

Hi

Thanks for this great package! I am getting an error when downloading; I'm not sure whether it is R- or package-specific...

The workaround I found, from here, was to use:

options(download.file.method = "curl", download.file.extra = "-k -L")

Problem (using R 4.2)

library(datazoom.amazonia)

data <- load_ibama(dataset = "areas_embargadas", raw_data = FALSE, 
                   language = "eng", legal_amazon_only = FALSE)
#> Warning in utils::download.file(url = path, destfile = temp, mode =
#> "wb"): URL 'https://servicos.ibama.gov.br/ctf/publico/areasembargadas/
#> downloadListaAreasEmbargadas.php': status was 'SSL peer certificate or SSH
#> remote key was not OK'
#> Error in utils::download.file(url = path, destfile = temp, mode = "wb"): cannot open URL 'https://servicos.ibama.gov.br/ctf/publico/areasembargadas/downloadListaAreasEmbargadas.php'

Created on 2022-06-28 by the reprex package (v2.0.1)

Erro na função "load_seeg"

Estava olhando os dados da SEEG para fazer uma visualização e notei uma coisa esquisita:

Rodei o código abaixo e aparecem 24 estados + NA
seeg_en <- load_seeg("seeg_industry", raw_data = FALSE, geo_level = "state")
seeg_en$state %>% unique()

Rodando esse outro código, em português, só aparecem 7 estados + NA
seeg_pt <- load_seeg("seeg_industry", raw_data = FALSE, geo_level = "state", language = "pt")
seeg_pt$state %>% unique()

Parece que o erro está no case_when da linha 228 no arquivo "seeg.R"

Using the GitHub <WIKI> tab

Create a new wiki page for datazoom to tell the history of the repository. This history would have more depth than NEWS.md, for example, which already records some of what has been done in the datazoom.amazonia package.

Inconvenience with <LOAD_DATASUS> - <DATASUS>

Argument "state" is not filtering correctly

Context

The package was downloaded from GitHub via devtools.

Code

data <- load_datasus(dataset = "datasus_sim_do",
                     time_period = c(2020),
                     states = "RJ",
                     raw_data = FALSE)

Goals and Expectations

I was expecting to download mortality data exclusively from Rio de Janeiro; however, it loaded data from cities outside the state, such as Manaus and Belém.

Problem in CNES - DATASUS

The dataset containing information on Teaching Establishments returns an empty dataframe with eight columns, no matter the state selected.

Inconvenience with <LOAD_DATASUS> - <DATASUS_SIM_DO>

Could not download DATASUS_SIM_DO data for 1990 from RJ

Context

I tried downloading 1990 DATASUS mortality data, but it would not load.
The first year that loads is 1996; the years between 1990 and 1996 return the same message as 1990.

Code

base <- datazoom.amazonia::load_datasus(dataset = "datasus_sim_do", states = "RJ", time_period = 1990)

The following error message was returned:

Error in `dplyr::mutate()`:
! Problem while computing `idade_anos = dplyr::case_when(...)`.
Caused by error in `substr()`:
! object 'idade' not found
Run `rlang::last_error()` to see where the error occurred.

Goals and Expectations

I was expecting to download datasus mortality data from 1990.

Suggestions!

If the problem is that there is no data available for years before 1996, I think an error message such as "Year not available" should be returned; a sketch follows.
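A minimal sketch of such a guard (the 1996 cutoff comes from this issue and would need to be confirmed per dataset):

if (any(time_period < 1996)) {
  stop("Year not available: datasus_sim_do only covers 1996 onwards.")
}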

IPS website is offline, load_ips( ) not working

The IPS Amazônia website is currently offline, so the IPS function (datazoom.amazonia::load_ips) is not working right now.
Also because of this, the 2021 IPS data could not be added to the package.

Language in PRODES database

When using the example

data <- load_prodes(dataset = "prodes", 
                    raw_data = TRUE,
                    time_period = 2008:2010,
                    language = 'en')

The variable names appear to be in Portuguese. The same happens when the language parameter is not used.

Mapbiomas: Error in stringr::str_sub(path, - 4)

When running the code:

load_mapbiomas(dataset = "mapbiomas_mining",
               raw_data = FALSE,
               geo_level = "indigenous_land",
               language = "eng")

The following error message was returned:

Error in stringr::str_sub(path, -4) : object 'path' not found

Problem with size - BACI

The download can take some time (~10-30 min):

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 52 2159M   52 1126M    0     0   772k      0  0:47:43  0:24:52  0:22:51  619k

A 2159 MB (~2.2 GB) download is too much for a regular computer to handle.
The size is far too big for this to work as a good example for people using the package.

Consistent default options

I noticed that some functions in the package, such as load_iema and load_baci, have "pt" as the default for the language argument, while the large majority default to "eng"; it would be good to standardize this across all functions.

Also check that the two options are always "pt" and "eng" (and not "en", for example).

COMEX: Data examples are too big

Examples such as

load_br_trade(dataset = "comex_import_prod",
              raw_data = FALSE,
              time_period = 1997:2021)

are too big to run because of the volume of data accumulated over the 25-year span.

IBAMA data and interview

I tried to download Ibama's embargo data and got this error:

ibama <- load_ibama(dataset = "areas_embargadas", raw_data = TRUE,
legal_amazon_only = TRUE)
Error:
! Problem with filter() input ..1.
i Input ..1 is codigo_ibge_municipio_embargo %in% ....
x Input ..1 must be of size 77006 or 1, not size 0.
Run rlang::last_error() to see where the error occurred.

And please, I'd like to interview someone from the project. I sent an email to [email protected], but I haven't received an answer.

Error in Download - TerraClimate

Good afternoon.
While testing the examples given in the README for TerraClimate, the second example, stated below:

amz_precipitation <- datazoom.amazonia::load_climate(dataset = "precipitation",
                                                     time_period = 2010,
                                                     legal_amazon_only = TRUE)

returned the following message, which prevented the creation of the "amz_precipitation" object:

Error in dplyr::filter(., AMZ_LEGAL == 1) :
  object 'legal_amazon' not found

PEVS README example #2 problem

geo_level = "municipality" makes database too big to run. I found that setting geo_level to "region" made a good example.

Error in <LOAD_MAPBIOMAS> - <MAPBIOMAS_TRANSITION>

Very brief description of the problem

When trying to use load_mapbiomas' mapbiomas_transition dataset, the data was not downloaded; instead, my computer slowed down for about a minute and then an error message was returned.

Code

dat <- load_mapbiomas(dataset = "mapbiomas_transition", geo_level = "municipality")

Error message and Output

Please follow the steps from googledrive package to download the data. This may take a while.
In case of authentication errors, run vignette("GOOGLEDRIVE").
The googledrive package is requesting access to your Google account.
Select a pre-authorised account or enter '0' to obtain a new token.
Press Esc/Ctrl + C to cancel.

1: [email protected]

Selection: 1
Auto-refreshing stale OAuth token.
File downloaded:
• 1-ESTATISTICAS_MapBiomas_COL6.0_UF-MUNICIPIOS_v12_SITE.xlsx.zip <id: 1RT7J2jS6LKyISM49ctfRO31ynJZXX_TY>
Saved locally as:
• C:\Users\vhste\AppData\Local\Temp\RtmpmQ2G3g\file1fc045f95746.zip
Error: std::bad_alloc

Goals and Expectations

I was expecting to get what was previously obtained with the following line of code, before the load_mapbiomas function was updated:

dat <- load_mapbiomas_transition(space_aggregation = "municipality", transition_interval = 5)

Mapbiomas: Error in utils

When running this line of code:

load_mapbiomas(dataset = "mapbiomas_transition", 
                      raw_data = FALSE,
                      language = "pt")

the following error message was returned:

trying URL 'https://storage.googleapis.com/mapbiomas-public/COLECAO/5/DOWNLOADS/ESTATISTICAS/Dados_Transicao_MapBiomas_5.0_UF-MUN_SITE_v2.xlsx'
Content type 'application/octet-stream' length 353792661 bytes (337.4 MB)
downloaded 246.1 MB

Error in utils::download.file(url = path, destfile = temp, mode = "wb") : 
  download from 'https://storage.googleapis.com/mapbiomas-public/COLECAO/5/DOWNLOADS/ESTATISTICAS/Dados_Transicao_MapBiomas_5.0_UF-MUN_SITE_v2.xlsx' failed
In addition: Warning messages:
1: In utils::download.file(url = path, destfile = temp, mode = "wb") :
  downloaded length 258079567 != reported length 353792661
2: In utils::download.file(url = path, destfile = temp, mode = "wb") :
  URL 'https://storage.googleapis.com/mapbiomas-public/COLECAO/5/DOWNLOADS/ESTATISTICAS/Dados_Transicao_MapBiomas_5.0_UF-MUN_SITE_v2.xlsx': Timeout of 60 seconds was reached
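A workaround sketch: the last warning says R's default 60-second download timeout was reached, and that limit can be raised before the call (600 seconds here is an arbitrary choice):

options(timeout = 600)  # default is 60 s, too short for a 337 MB file
load_mapbiomas(dataset = "mapbiomas_transition",
               raw_data = FALSE,
               language = "pt")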

Inconvenience with load_pibmunic - PIB-Munic

Data from 2019 onward is not available.

Context

The dataset was pibmunic, and the package had recently been installed from GitHub.

Code

data <- load_pibmunic(raw_data = FALSE,
                      geo_level = "state",
                      time_period = 2019)

Goals and Expectations

The README file on GitHub says data is only available up to 2018. However, it would be useful to have more recent data.
