fantasyfootballanalytics / ffanalytics
ffanalytics R package
Home Page: http://ffanalytics.fantasyfootballanalytics.net/
FantasyFootballNerd is pulling old seasons' data. Getting RG3 and Kaepernick data.
FantasyScrape2 <- scrape_data(src = c("Yahoo"), pos = c("RB", "WR"), season = 2018, week = 0)
Error in `[.tbl_df`(data = .x, .y[["col"]], .y[["into"]], .y[["regex"]], :
  unused argument (convert = TRUE)
Calls: ... -> accumulate -> Reduce -> f -> .f -> extract
Execution halted
Trouble knitting to HTML in RStudio - any suggestions?
I'm trying to add a JSON source (profootballfocus)
There is an issue with DST scoring. Instead of projecting the points allowed, the player data has a key designating a point range. So if the Ravens are projected to give up between 14 and 20 points, there is a key-value pair "dst_pts_14_20": 1; the Chargers are projected to give up between 21 and 27 points, so the key-value pair is "dst_pts_21_27": 1; and so on.
Is there a way to handle this issue in defining the source? Thanks!
PFF = list(
  base = "https://www.profootballfocus.com/api/prankster/projections?scoring=preset_ppr",
  get_path = function(week){
    week_no <- ifelse(week == 0, "", as.character(week))
    sprintf("%s&week=%s", base, week_no)
  },
  min_week = 0,
  max_week = 17,
  id_col = "player_id",
  json_elem = list(weekly = "player_projections", season = "player_projections"),
  stat_elem = NULL,
  player_elem = NULL,
  stat_cols = c(pass_att = "pass_att", pass_comp = "pass_comp", pass_yds = "pass_yds",
                pass_tds = "pass_td", pass_int = "pass_int",
                rush_att = "rush_att", rush_yds = "rush_yds", rush_tds = "rush_td",
                fumbles_lost = "fumbles_lost", fumbles = "fumbles",
                rec = "recv_receptions", rec_yds = "recv_yds", rec_tds = "recv_td",
                site_pts = "fantasy_points",
                dst_sacks = "dst_sacks", dst_int = "dst_int",
                dst_fum_rec = "dst_fumbles_recovered", dst_fum_force = "dst_fumbles_forced",
                dst_td = "dst_td", dst_ret_tds = "dst_return_td",
                dst_safety = "dst_safeties"
  ),
  player_cols = c(src_id = "player_id", player = "player_name", pos = "position")
)
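Not part of the config above, but one way to preprocess those bracketed DST keys is to collapse the one-hot dst_pts_* columns into a single numeric projection (here, the bracket midpoint). This is only a base-R sketch with made-up rows; the column names just follow the key format described in the issue:

```r
# Toy rows mimicking the PFF DST data: each team has exactly one
# dst_pts_<lo>_<hi> indicator key set to 1 (column names are assumptions).
dst <- data.frame(
  player_name   = c("Ravens", "Chargers"),
  dst_pts_14_20 = c(1, NA),
  dst_pts_21_27 = c(NA, 1)
)

# Parse the lo/hi bounds out of each bracket column name and
# compute the midpoint for each bracket.
bracket_cols <- grep("^dst_pts_\\d+_\\d+$", names(dst), value = TRUE)
bounds <- do.call(rbind, lapply(bracket_cols, function(nm) {
  as.numeric(strsplit(sub("^dst_pts_", "", nm), "_")[[1]])
}))
midpoints <- rowMeans(bounds)

# For each row, pick the midpoint of whichever bracket is flagged.
dst$dst_pts_allowed <- apply(dst[bracket_cols], 1, function(flags) {
  midpoints[which(!is.na(flags) & flags == 1)]
})
```

The resulting dst_pts_allowed column could then be mapped in stat_cols like any other numeric stat.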
Any update on getting this fixed? Is it the credentials that are broken or is there something wrong with the login functionality itself? Is it not possible to put in your own League ID anymore?
Ran each source individually to see which ones were failing:
FantasySharks: Error in mutate_impl(.data, dots) : Evaluation error: object 'fg_miss_0019' not found.
FleaFlicker: Error in -x : invalid argument to unary operator. This seems to be related to the "2: In .f(.x[[i]], ...) : NAs introduced by coercion" warning received during the scrape.
Hi, I'm using the tool for the first time (and very excited!). I was following the initial instructions and was able to create a scrape using the tutorial's command:
my_scrape <- scrape_data(src = c("CBS", "ESPN", "Yahoo"),
pos = c("QB", "RB", "WR", "TE", "DST"),
season = 2018, week = 0)
however, when I try to see the projections using:
my_projections <- projections_table(my_scrape)
I am getting the below errors:
Warning messages:
1: Unknown columns: `id`, `data_src`
2: Unknown columns: `id`, `data_src`
3: Unknown columns: `id`, `data_src`
4: Unknown columns: `id`, `data_src`
5: Unknown columns: `id`, `data_src`
6: Unknown columns: `id`, `data_src`
7: Unknown columns: `id`, `data_src`
8: Unknown columns: `id`, `data_src`
any tips? Thanks!
issue calculating projections
my_projections <- projections_table(my_scrape)
Error in rowSums(., na.rm = TRUE) : 'x' must be numeric
my_scrape was generated with
my_scrape <- scrape_data(src = c("CBS"
,"ESPN"
,"FantasyPros"
,"FantasySharks"
,"FFToday"
,"FleaFlicker"
,"NumberFire"
,"Yahoo"
,"NFL"
,"FantasyData"
,"FantasyFootballNerd"
),
pos = c("QB", "RB", "WR", "TE","K","DST"),
season = 2018, week = 1)
looks like CBS works now
When scraping and generating projections for week 1 kicker data, it errors out with the following error:
Error in mutate_impl(.data, dots) :
Evaluation error: object 'fg_0019' not found.
Calls: projections_table ... <Anonymous> -> mutate -> mutate.tbl_df -> mutate_impl
Execution halted
This is my scrape/projections generation command:
my_scrape <- scrape_data(src = c("ESPN", "Yahoo"), pos = c("QB", "RB", "WR", "TE", "DST", "K"), season = 2018, week = 1)
my_projections <- projections_table(my_scrape)
If I remove the K position then it correctly generates all the necessary data.
I think it could be a good thing for this package to include the FootballDiehards projections, found here: https://www.footballdiehards.com/fantasy-football-player-projections.cfm
When scraping NumberFire projections for Week 1, the (MFL) ids for rookie players at all positions (Kyler Murray, Miles Sanders, etc.) are not added. I haven't been able to identify the script where the id is appended to the scraped projections, so I can't pinpoint or fix the problem. I am relatively new to R, so sorry if there is an obvious solution.
(Murray is nF's number 1 QB for week 1, so important to get this fixed.)
Hi there,
I'm new to this package, so my apologies if I'm missing something obvious here. I'm running this code but having issues:
scrape_2020 <- scrape_data(
src = c("CBS", "ESPN", "FantasyData", "FantasyPros", "FantasySharks", "FFToday",
"FleaFlicker", "NumberFire", "Yahoo", "FantasyFootballNerd", "NFL", "RTSports",
"Walterfootball"),
pos = c("QB", "RB", "WR", "TE", "K", "DST", "DL", "LB", "DB"),
season = 2020,
week = 0
)
My error looks like:
"Error in split.default(x = seq_len(nrow(x)), f = f, drop = drop, ...) :
group length is 0 but data length > 0"
I also get 14 warning messages:
1-13: In .Primitive("as.double")(x, ...) : NAs introduced by coercion
14: Unknown or uninitialised column: `pos`.
Can someone help me see what I'm doing incorrectly here?
Also, secondarily, I see that the package says I can put in years other than 2020 but that it will not work. Is there by chance a data repo where I can see these full scrapes from previous years? I would very much like to analyze this data also. Thank you!
All/most sources scrape defensive return touchdowns as "dst_ret_td" (singular), except Yahoo, which is scraped as "dst_ret_tds" (plural) and results in two separate columns. I manually solved this by summing the two columns (treating NAs as 0), but it's probably worth fixing this in the base code.
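A minimal sketch of that manual fix, with toy data standing in for the scraped frame:

```r
# Toy frame reproducing the split columns (the values are illustrative).
proj <- data.frame(
  data_src    = c("CBS", "Yahoo"),
  dst_ret_td  = c(1, NA),
  dst_ret_tds = c(NA, 2)
)

# Sum the two columns treating NA as 0, then drop the plural variant.
na0 <- function(x) ifelse(is.na(x), 0, x)
proj$dst_ret_td  <- na0(proj$dst_ret_td) + na0(proj$dst_ret_tds)
proj$dst_ret_tds <- NULL
```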
If you take a look at https://github.com/sansbacon/nfl/blob/master/nfl/espn_api.py it shows in python how to scrape the ESPN API for season-long and weekly projections. The easiest way is to scrape team-by-team because you don't have to deal with any offsets (no team has more than 50 players with fantasy projections). I'm not sure how to handle cookies and other parameters in R, but it shouldn't be too hard for an experienced programmer to adapt it for this library.
Getting the below error...
myScrapeData <- runScrape(week = 0, season = 2019, analysts = c(4), positions = c("QB"), fbgUser = NULL, fbgPwd = NULL)
Retrieving player data
Operation in progress: failed to load external entity "http://football.myfantasyleague.com/2019/export?TYPE=players&L=&W=0&JSON=0&DETAILS=1"
Error: 1: Operation in progress 2: failed to load external entity "http://football.myfantasyleague.com/2019/export?TYPE=players&L=&W=0&JSON=0&DETAILS=1"
R 3.5.1
All dependency packages updated this AM
During scrape using all sources:
my_scrape <- scrape_data(src = c("CBS", "ESPN", "FantasyData", "FantasyPros",
"FantasySharks", "FFToday", "FleaFlicker", "NumberFire", "Yahoo",
"FantasyFootballNerd", "NFL", "RTSports", "Walterfootball"), pos = c("QB",
"RB", "WR", "TE", "K", "DST"), season = 2018, week = 0)
I encountered the following error:
Error in bind_rows_(x, .id) :
  Column `rec` can't be converted from numeric to character
After trial and error I eventually removed FantasyData and FantasyFootballNerd from the source list in order to complete the scrape successfully.
Next, after running the projections_table function I encountered the following error:
Error in right_join_impl(x, y, by_x, by_y, aux_x, aux_y, na_matches) :
std::bad_alloc
After more trial and error, I eventually removed FantasyPros as well.
In the end, the final set of sources that worked from start to finish were:
my_scrape <- scrape_data(c("CBS", "ESPN",
"FantasySharks", "FFToday", "FleaFlicker", "NumberFire","Yahoo",
"NFL", "RTSports", "Walterfootball"),
pos = c("QB", "RB", "WR", "TE", "K","DST"),
season = 2018, week = 0)
For the add_ecr function, there should be an option for PPR, half PPR, or standard.
The fantasypros URLs are as follows:
STD: https://www.fantasypros.com/nfl/rankings/consensus-cheatsheets.php
HPPR: https://www.fantasypros.com/nfl/rankings/half-point-ppr-cheatsheets.php
PPR: https://www.fantasypros.com/nfl/rankings/ppr-cheatsheets.php
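A hedged sketch of how such an option could pick the cheatsheet URL (the helper name and argument values are made up; only the URLs come from the list above):

```r
# Select the FantasyPros ECR cheatsheet URL by scoring type.
# The scoring labels ("std", "half", "ppr") are an assumption, not the
# package's actual API.
ecr_url <- function(scoring = c("std", "half", "ppr")) {
  scoring <- match.arg(scoring)
  switch(scoring,
    std  = "https://www.fantasypros.com/nfl/rankings/consensus-cheatsheets.php",
    half = "https://www.fantasypros.com/nfl/rankings/half-point-ppr-cheatsheets.php",
    ppr  = "https://www.fantasypros.com/nfl/rankings/ppr-cheatsheets.php"
  )
}
```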
I apologize if this issue isn't reported correctly, this is my first time trying to contribute to an R package, but I found the following (minor) issue:
add_ecr() is scraping duplicates for 3 players, because they are dual listed on Fantasy Pros:
One potential quick fix would be to add an ifelse: if rank_period is "draft", pull "Overall" rather than each position separately. I don't know whether that would behave correctly during the week, though.
ecr_pos <- lg_type %>%
imap(~ scrape_ecr(rank_period = ifelse(week == 0, "draft", "week"),
position = ifelse(week == 0, "Overall", .y), rank_type = .x)) %>%
map(select, id, pos_ecr = avg, sd_ecr = std_dev) %>% bind_rows()
Another potential quick fix would be to append %>% distinct(id, .keep_all = TRUE) to the ecr_pos call, which would keep just the first ECR ranking after bind_rows:
ecr_pos <- lg_type %>%
imap(~ scrape_ecr(rank_period = ifelse(week == 0, "draft", "week"),
position = .y, rank_type = .x)) %>%
map(select, id, pos_ecr = avg, sd_ecr = std_dev) %>% bind_rows() %>% distinct(id,.keep_all = TRUE)
I've applied both fixes to my local override, and I could try to commit and push to Git, but I'm unsure of the process.
Thanks for the great work! I went from losing the league in the 2018 season (and running an ultramarathon) to coming within 3 points of winning the championship the next season. Much of that was due to a successful draft program built in R, made possible by ffanalytics.
When I try to get week 1 data, it prints this link, and the link works in a browser: https://www.fantasysharks.com/apps/bert/forecasts/projections.php?League=-1&scoring=1&uid=4&Segment=660&Position=1
Error I am getting:
Error in open.connection(x, "rb") :
Timeout was reached: Connection timed out after 10000 milliseconds
my_scrape <- scrape_data(src = c("CBS", "ESPN", "Yahoo"),
pos = c("QB", "RB", "WR", "TE", "DST"),
season = 2018, week = 0)
My understanding is that week = 0 refers to week 1. Is that true? If so, why not use week = 1 to make it clearer?
Issue exists for the QB position only. Other positions are collected.
Scraping QB projections from
https://www.cbssports.com/fantasy/football/stats/weeklyprojections/QB/1/avg/standard?print_rows=9999
Error: Table has inconsistent number of columns. Do you want fill = TRUE?
Code used
my_scrape <- scrape_data(src = c("CBS"
,"ESPN"
,"FantasyPros"
,"FantasySharks"
,"FFToday"
,"FleaFlicker"
,"NumberFire"
,"Yahoo"
,"NFL"
,"FantasyData"
,"FantasyFootballNerd"
),
pos = c("QB", "RB", "WR", "TE","K","DST"),
season = 2018, week = 1)
For the add_adp function, there should be an option for PPR, half PPR, or standard.
The fantasypros URLs are as follows:
STD: https://www.fantasypros.com/nfl/adp/overall.php
HPPR: https://www.fantasypros.com/nfl/adp/half-point-ppr-overall.php
PPR: https://www.fantasypros.com/nfl/adp/ppr-overall.php
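A small sketch of selecting the ADP URL by scoring type (the helper name is made up; the URLs come from the list above):

```r
# Lookup table of FantasyPros ADP pages keyed by scoring type.
# The "std"/"half"/"ppr" labels are an assumption for illustration.
adp_urls <- c(
  std  = "https://www.fantasypros.com/nfl/adp/overall.php",
  half = "https://www.fantasypros.com/nfl/adp/half-point-ppr-overall.php",
  ppr  = "https://www.fantasypros.com/nfl/adp/ppr-overall.php"
)

adp_url <- function(scoring = "std") adp_urls[[scoring]]
```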
The current setup causes FantasyPros to default to the normal position page, which is for the draft; when you enter anything other than week == 0 you still get the default page.
Below is my fix in source_configs.R. I'm not sure exactly how pull requests work, so I'll leave it here. I used the CBS list as a basis and changed toupper to tolower. Changing to tolower may actually fix it anyway, since the position in the site URL is lowercase.
FantasyPros = list(
  base = "https://www.fantasypros.com/nfl/projections/",
  get_path = function(season, week, position){
    period <- ifelse(week == 0, "draft", as.character(week))
    paste0(tolower(position), ".php?week=", period)
  },
  # old version:
  # get_path = function(season, week, position) paste0(tolower(position), ".php"),
  # get_query = function(season, week, pos_id, ...){
  #   if(week == 0)
  #     return(list(week = "draft"))
  # },
  get_query = NULL,
Disconnects from server when I try to save custom scoring settings in the app.
I just subscribed and read the article stating my API key would be in the "My Account" modal but it is not there. Please advise. thank you!
After successfully scraping for week 1 in R and successfully turning the scrape into projections, I receive the error:
"Error in select(player_table, id, first_name, last_name, team, position, :
object 'player_table' not found"
After reading the site, this table is supposed to be loaded from MFL on package load. This is not happening for me. I have tried reinstalling the package and nothing seems to work.
my_projections %>% add_ecr() %>% add_risk() %>%
Error: `nm` must be `NULL` or a character vector the same length as `x`
It would be helpful to have more predefined settings, for example:
settings.espn_ppr <- list(
pass = list(
pass_att = 0, pass_comp = 0, pass_inc = 0, pass_yds = 0.04, pass_tds = 4,
pass_int = -2, pass_40_yds = 0, pass_300_yds = 0, pass_350_yds = 0,
pass_400_yds = 0
),
rush = list(
all_pos = TRUE,
rush_yds = 0.1, rush_att = 0, rush_40_yds = 0, rush_tds = 6,
rush_100_yds = 0, rush_150_yds = 0, rush_200_yds = 0),
rec = list(
all_pos = TRUE,
rec = 1, rec_yds = 0.1, rec_tds = 6, rec_40_yds = 0, rec_100_yds = 0,
rec_150_yds = 0, rec_200_yds = 0
),
misc = list(
all_pos = TRUE,
fumbles_lost = -2, fumbles_total = 0,
sacks = 0, two_pts = 2
),
kick = list(
xp = 1.0, fg_0019 = 3.0, fg_2029 = 3.0, fg_3039 = 3.0, fg_4049 = 4.0,
fg_50 = 5.0, fg_miss = 0.0
),
ret = list(
all_pos = TRUE,
return_tds = 6, return_yds = 0
),
idp = list(
all_pos = TRUE,
idp_solo = 1, idp_asst = 0.5, idp_sack = 2, idp_int = 3, idp_fum_force = 3,
idp_fum_rec = 2, idp_pd = 1, idp_td = 6, idp_safety = 2
),
dst = list(
dst_fum_rec = 2, dst_int = 2, dst_safety = 2, dst_sacks = 1, dst_td = 6,
dst_blk = 2, dst_ret_yds = 0, dst_pts_allowed = 0
),
pts_bracket = list(
list(threshold = 0, points = 10),
list(threshold = 6, points = 7),
list(threshold = 20, points = 4),
list(threshold = 34, points = 0),
list(threshold = 99, points = -4)
)
)
I successfully followed the tutorials to get the 2018 data, and I figured that if I changed the season to a past season (2017, 2016, etc.) it would scrape that season. As I found out, it just rescrapes 2018 instead.
How do we go about getting prior seasons' data?
When I run the scripts, I am not seeing rookies' ids show up. For example, no rookie QB has an id: Joe Burrow is in the player table with id 14777, but in the scraped data his id is NA. All rookies are the same way.
#test data
my_scrape <- scrape_data(src = c("CBS"),
pos = c("QB"),
season = 2020, week = 0)
DataQB <- rbind(my_scrape[["QB"]])
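A hedged sketch of a client-side workaround for the missing rookie ids: fill NA ids by matching the player name against the package's player table. The column names (`id`, `player`, `first_name`, `last_name`) are assumptions inferred from error messages quoted elsewhere in this thread, not a documented API:

```r
# Fill missing ids in a scraped frame by looking up the player's full name
# in a player table with id / first_name / last_name columns.
fill_missing_ids <- function(scrape_df, player_table) {
  lookup <- setNames(
    as.character(player_table$id),
    paste(player_table$first_name, player_table$last_name)
  )
  missing <- is.na(scrape_df$id)
  scrape_df$id[missing] <- lookup[scrape_df$player[missing]]
  scrape_df
}
```

Name matching is fragile (suffixes, punctuation), so this is a stopgap rather than a fix for the underlying id-mapping script.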
Not sure if this is something only happening to me, but I can't figure out what's wrong. Below is what happens when I try to scrape.
> my_scrape <- scrape_data(src = c("CBS", "ESPN", "Yahoo"),
+ pos = c("QB", "RB", "WR", "TE", "DST"),
+ season = 2018, week = 0)
Scraping QB projections from
https://www.cbssports.com/fantasy/football/stats/weeklyprojections/QB/season/avg/standard? print_rows=9999
Error in make_df_colnames(data_table) :
could not find function "make_df_colnames"
I am creating a new JSON source. Assume the structure of each object in the list is as follows:
{
"mfl_player_id": 5848,
"source_player_id": "tombrady",
"source_player_name": "Tom Brady",
"source_team_id": "NE",
"source_player_position": "QB",
"stats": { .. stat items ..}
}
Are the following values correct?
id_col = "mfl_player_id",
json_elem = "players",
stat_elem = "stats",
player_elem = NULL,
player_cols = c(src_id = "source_player_id", player = "source_player_name",
team = "source_team_id", pos = "source_player_position")
Also, once I add the site description to source_configs.R, what else do I have to do so the source gets scraped when I run scrape_data()?
Thanks for your help!
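Not an answer to the config question, but a quick way to sanity-check that mapping locally with jsonlite before wiring up the source. The payload is a toy version of the structure quoted above, with made-up stat items in place of the elided ones:

```r
library(jsonlite)

# Toy payload matching the object structure described in the issue.
payload <- '{
  "players": [
    {
      "mfl_player_id": 5848,
      "source_player_id": "tombrady",
      "source_player_name": "Tom Brady",
      "source_team_id": "NE",
      "source_player_position": "QB",
      "stats": {"pass_yds": 4200, "pass_td": 30}
    }
  ]
}'

players <- fromJSON(payload)$players   # json_elem = "players"
stats   <- players$stats               # stat_elem = "stats"

# Apply the proposed player_cols renaming.
player_cols <- c(src_id = "source_player_id", player = "source_player_name",
                 team = "source_team_id", pos = "source_player_position")
out <- setNames(players[player_cols], names(player_cols))
out <- cbind(out, stats, id = players$mfl_player_id)   # id_col = "mfl_player_id"
```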
Half of the scripts won't run properly, if at all, and the contributors aren't responding to anything. Disappointing, because this is an excellent project.
I love your package and have used it a lot in the past. For some reason, though, the get_adp function doesn't seem to be working. I am using AAV and the sources CBS, Yahoo, NFL, and ESPN. This is the error I get: "Error in draft_list[[1]] : subscript out of bounds." Let me know if there is a fix!
my_scrape <- scrape_source(src = "ESPN", pos = "QB", season = 2018, week = 1). I'm getting the error message:
Error: $ operator is invalid for atomic vectors
Scraped fresh data today with scrape_data() and calculated new projections. The data was very volatile for QB and WR ceiling and floor, way out of the ordinary. It was seen especially in QBs, and WRs had spreads (floor/ceiling) about 4 times larger than those of RBs, which was far different from calculations I had done in the past week. I confirmed the issue by using an old scrape from last week: the data was much more normal when calculating projections from it. I also tried changing the rules for the fresh scrape to PPR, 0.5 PPR, and standard scoring, and the projections were still very out of the ordinary for QB and WR.
Hi, thanks for your great work on the package. When I use "Yahoo" as one of the sources in scraped_data, I get messages such as
Scraping QB projections from
https://football.fantasysports.yahoo.com/f1/48938/players?sort=PTS&sdir=1&status=A&pos=QB&stat1=S_PW_3&jsenabled=1&count=0
but the resulting tibble does not have any entries with data_src == "Yahoo". Any idea what might be going on? Thanks.
When I try to install the package I get the following error:
devtools::install_github(repo = "FantasyFootballAnalytics/ffanalytics")
Downloading GitHub repo FantasyFootballAnalytics/ffanalytics@master
from URL https://api.github.com/repos/FantasyFootballAnalytics/ffanalytics/zipball/master
Installing ffanalytics
"C:/PROGRA~1/R/R-34~1.3/bin/x64/R" --no-site-file --no-environ --no-save --no-restore --quiet
CMD INSTALL
"C:/Users/m7hj2/AppData/Local/Temp/RtmpiIwa2K/devtools3d7c1a3c7eba/FantasyFootballAnalytics-ffanalytics-f639f21"
--library="C:/Users/m7hj2/Documents/R/win-library/3.4" --install-tests
Warning: namespace 'ffanalytics' is not available and has been replaced
by .GlobalEnv when processing object 'projection_sources'
Error in .[[c("fullNflSchedule", "nflSchedule")]] :
no such index at level 1
Warning: namespace 'ffanalytics' is not available and has been replaced
by .GlobalEnv when processing object 'projection_sources'
** preparing package for lazy loading
Warning: package 'tidyverse' was built under R version 3.4.4
Warning: package 'ggplot2' was built under R version 3.4.4
Warning: package 'tidyr' was built under R version 3.4.4
Warning: package 'purrr' was built under R version 3.4.4
Warning: package 'dplyr' was built under R version 3.4.4
Warning: package 'rvest' was built under R version 3.4.4
Warning: package 'httr' was built under R version 3.4.4
Warning: package 'readxl' was built under R version 3.4.4
Warning: package 'janitor' was built under R version 3.4.4
Warning: package 'glue' was built under R version 3.4.4
Warning: package 'Hmisc' was built under R version 3.4.4
Warning: package 'Formula' was built under R version 3.4.4
Error in .[[c("fullNflSchedule", "nflSchedule")]] :
no such index at level 1
Error : unable to load R code in package 'ffanalytics'
ERROR: lazy loading failed for package 'ffanalytics'
library(ffanalytics)
my_scrape <- scrape_data(src = c("FantasyPros", "NumberFire"),
pos = c("QB", "RB", "WR", "TE", "DST"),
season = 2018, week = 0)
my_projections <- projections_table(my_scrape)
Error:
Error in grouped_df_impl(data, unname(vars), drop) :
Column `id` is unknown
In addition: Warning messages:
1: Unknown variables: `id`, `data_src`
2: Unknown variables: `id`, `data_src`
3: Unknown variables: `id`, `data_src`
Package installed fine, using R 3.5.1 within Rstudio.
The scrape_data function works great when setting week = 0, but fails when attempting a specific week.
> chk <- scrape_data(src = c("CBS","ESPN"), pos = c('QB','RB'), season = 2018, week = 0)
Scraping QB projections from
https://www.cbssports.com/fantasy/football/stats/weeklyprojections/QB/season/avg/standard?print_rows=9999
Scraping RB projections from
https://www.cbssports.com/fantasy/football/stats/weeklyprojections/RB/season/avg/standard?print_rows=9999
Scraping QB projections from
http://games.espn.com/ffl/tools/projections?slotCategoryId=0&seasonTotals=true&seasonId=2018&startIndex=0
Scraping RB projections from
http://games.espn.com/ffl/tools/projections?slotCategoryId=2&seasonTotals=true&seasonId=2018&startIndex=0
> chk <- scrape_data(src = c("CBS","ESPN"), pos = c('QB','RB'), season = 2018, week = 1)
Scraping QB projections from
https://www.cbssports.com/fantasy/football/stats/weeklyprojections/QB/1/avg/standard?print_rows=9999
Traceback:
Error: Table has inconsistent number of columns. Do you want fill = TRUE?
27. stop("Table has inconsistent number of columns. ", "Do you want fill = TRUE?", call. = FALSE)
26. html_table.xml_node(., header = TRUE)
25. html_table(., header = TRUE)
24. function_list[[k]](value)
23. withVisible(function_list[[k]](value))
22. freduce(value, `_function_list`)
21. `_fseq`(`_lhs`)
20. eval(quote(`_fseq`(`_lhs`)), env, env)
19. eval(quote(`_fseq`(`_lhs`)), env, env)
18. withVisible(eval(quote(`_fseq`(`_lhs`)), env, env))
17. data_page %>% html_node(table_css) %>% html_table(header = TRUE) at source_classes.R#325
16. self$get_table() at source_classes.R#383
15. src$open_session(season, week, position)$scrape()
14. scrape_source(.x, season, week, .y)
13. .f(.x[[i]], .y[[i]], ...)
12. map2(.x, vec_index(.x), .f, ...)
11. imap(.x, ~scrape_source(.x, season, week, .y))
10. f(.x[[i]], ...)
9. map(., ~imap(.x, ~scrape_source(.x, season, week, .y)))
8. function_list[[i]](value)
7. freduce(value, `_function_list`)
6. `_fseq`(`_lhs`)
5. eval(quote(`_fseq`(`_lhs`)), env, env)
4. eval(quote(`_fseq`(`_lhs`)), env, env)
3. withVisible(eval(quote(`_fseq`(`_lhs`)), env, env))
2. map(pos, ~map(projection_sources[src], ~.x)) %>% transpose() %>%
   map(~imap(.x, ~scrape_source(.x, season, week, .y))) %>%
   transpose() %>% map(discard, is.null) %>% map(bind_rows, .id = "data_src")
Please let me know if there's any other information I can provide.
There are some zombie projections that it would be helpful to be able to exclude with a parameter to projections_table. For example, Jamaal Charles is projected for 174 points (RB22) and Matt Forte, who is retired, is projected for 155 points (RB 37), which throws off the positional ranks below them. Thanks!
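A hedged sketch of what such an exclusion parameter could look like client-side, assuming a projections frame with a `player` name column (actual column names may differ, e.g. after add_player_info()):

```r
# Drop named "zombie" players (retired, inactive) from a projections frame
# before computing positional ranks. `proj` and the `player` column are
# assumptions for illustration, not the package's documented interface.
exclude_players <- function(proj, names_to_drop) {
  proj[!(proj$player %in% names_to_drop), , drop = FALSE]
}
```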
I run into the following error when trying to load week 1 data for the 2019 season.
Error: Column `fg` can't be converted from numeric to character
In addition: There were 50 or more warnings (use warnings() to see the first 50)
script:
my_scrape <- scrape_data(pos = c("QB", "RB", "WR", "TE", "DST", "K"), season = 2019, week = 1)
Sorry if this is trivial - new to this whole thing.
add_ecr() broke for me... I don't know why, but it never returned data to begin with anyway. Now I get an error in the add_column() function when it tries to import the fp_ids; fp_ids is not getting the ID from the scrape.
I fixed it in my repo, but that didn't populate the sd_ecr etc. fields. I changed it to match on the full name because the ID was not matching, and this is how I add ECR:
my_projections <- projections_table(my_scrape, scoring_rules)
my_projections <- my_projections %>% add_player_info()
# Combine first and last name into a single "name" column for matching
my_projections$first_name <- paste(my_projections$first_name, my_projections$last_name)
names(my_projections)[2] <- "name"
my_projections <- subset(my_projections, select = -c(last_name))
my_projections <- my_projections %>% add_ecr() %>% add_risk()
scrape_data does not provide id for DSTs when scraping Yahoo for week 1
my_scrape <- scrape_data(src = c("Yahoo"),
pos = c("QB", "RB", "WR", "TE", "DST"),
season = 2019, week = 1)
While investigating the source of issue 15, I did notice a few other issues. I wasn't able to determine if the projections were being included as part of the projections table.
FantasyPros: Error in -x : invalid argument to unary operator
FleaFlicker: Error: Column `id` not found
Yahoo: Error in mutate_impl(.data, dots) : Evaluation error: object 'fg' not found.
NFL: Error in .f(.x[[i]], ...) : object 'att' not found
FantasyData: Error: `by` can't contain join column `pos`, `data_col` which is missing from LHS
FantasyFootballNerd: Error: Column `id` not found
> my_projections <- projections_table(my_scrape)
Error in (function (x, strict = TRUE) :
  the argument has already been evaluated
In addition: Warning messages:
1: Unknown variables: `id`, `data_src`
2: Unknown variables: `id`, `data_src`
I have found the following issues with projection sources retrieved using scrape_data for season = 2019 and week = 0:
ESPN: No data is scraped. Projections are publicly available. Likely a scraping issue.
FantasyData: No data is scraped. The free projections are for a limited number of players; full projections are available only with a paid subscription. I suspect this is no longer a valid public data source.
FleaFlicker: No data is scraped. A free membership is required to access projections. This may simply require a revision to incorporate login with credentials.
FantasyFootballNerd: The scraped data is incorrect. For example, Calvin Johnson is projected as the #1 WR. It appears that free projections are available, so this may just require fixing the script.
WalterFootball: The projections for most positions don't include TD projections. I couldn't find projections on the website in HTML, so I couldn't determine if this is an error or if WF doesn't project TDs (which makes the projections much less useful).
Thanks!
It appears some sources are not working.
Running
df <- scrape_data(src = c("CBS", "ESPN", "NFL"), "QB", 2019, 1) %>% as.data.frame()
leads to
> df <- scrape_data(src = c("CBS", "ESPN", "NFL"), "QB", 2019, 1) %>% as.data.frame()
Scraping QB projections from
https://www.cbssports.com/fantasy/football/stats/QB/2019/1/projections/nonppr
Scraping QB projections from
Scraping QB projections from
http://api.fantasy.nfl.com/v1/players/stats?statType=weekProjectedStats&season=2019&week=1&position=QB&format=json
Error: `.x` is empty, and no `.init` supplied
I would like to scrape the NFL source so that's the critical part here. Any suggestions?
I'm noticing that yards_gained and other yards-related columns are not logged for the following quarterbacks and games:
I found this in a dplyr summarize call on QBs, so it's possible it's missing for receivers as well.
When I run the standard:
library(ffanalytics)
my_scrape <- scrape_data(src = c("CBS", "ESPN", "Yahoo"),
                         pos = c("QB", "RB", "WR", "TE", "DST"),
                         season = 2018, week = 0)
In RStudio on my newly reinstalled 3.5.1 & 3.4.4 installations I get the following error:
Scraping QB projections from
https://www.cbssports.com/fantasy/football/stats/sortable/points/QB/standard/projections/2018/?print_rows=9999
Error: Column 1 must be named.
Use .name_repair to specify repair.
Call `rlang::last_error()` to see a backtrace
Called from: abort(error_column_must_be_named(bad_name))
Browse[1]>
Any thoughts on what might be going on? This worked last week. I have tried uninstalling/reinstalling R & associated libraries with no success.