
ffanalytics's Issues

problem with knitting

FantasyScrape2 <- scrape_data(src = c("Yahoo"), pos = c("RB", "WR"), season = 2018, week = 0)


Error in [.tbl_df(data = .x, .y[["col"]], .y[["into"]], .y[["regex"]], :
unused argument (convert = TRUE)
Calls: ... -> accumulate -> Reduce -> f -> .f -> extract
Execution halted


Trouble knitting to HTML in RStudio - any suggestions?

Adding JSON source - DST scoring

I'm trying to add a JSON source (profootballfocus)

There is an issue for DST scoring. Instead of projecting the points allowed, the player data will have a key designating a point range. So if the Ravens are projected to give up between 14-20 points, there is a key value pair of "dst_pts_14_20": 1, the Chargers are projected to give up between 21-27 points, so the key value pair is "dst_pts_21_27": 1, etc.

Is there a way to handle this issue in defining the source? Thanks!

PFF = list(
  base = "https://www.profootballfocus.com/api/prankster/projections?scoring=preset_ppr",
  get_path = function(week){
    week_no <- ifelse(week == 0, "", as.character(week))
    sprintf("%s&week=%s", base, week_no)
  },
  min_week = 0,
  max_week = 17,
  id_col = "player_id",
  json_elem = list(weekly = "player_projections", season = "player_projections"),
  stat_elem = NULL,
  player_elem = NULL,
  stat_cols = c(pass_att = "pass_att", pass_comp = "pass_comp", pass_yds = "pass_yds",
                pass_tds = "pass_td", pass_int = "pass_int",
                rush_att = "rush_att", rush_yds = "rush_yds", rush_tds = "rush_td",
                fumbles_lost = "fumbles_lost", fumbles = "fumbles",
                rec = "recv_receptions", rec_yds = "recv_yds", rec_tds = "recv_td",
                site_pts = "fantasy_points",
                dst_sacks = "dst_sacks", dst_int = "dst_int",
                dst_fum_rec = "dst_fumbles_recovered", dst_fum_force = "dst_fumbles_forced",
                dst_td = "dst_td", dst_ret_tds = "dst_return_td",
                dst_safety = "dst_safeties"
  ),
  player_cols = c(src_id = "player_id", player = "player_name", pos = "position")
)
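One way to handle the point-range keys, once they are scraped, is to post-process them into a single points-allowed value. A minimal sketch in base R, assuming the one-hot dst_pts_* columns land in the scraped table (the helper and sample data are illustrative, not part of ffanalytics):

```r
# Collapse one-hot "dst_pts_lo_hi" columns into a single projected
# points-allowed value (the midpoint of the flagged range).
collapse_dst_pts <- function(df) {
  pts_cols <- grep("^dst_pts_\\d+_\\d+$", names(df), value = TRUE)
  # Parse the midpoint of each range from the column name, e.g. dst_pts_14_20 -> 17
  bounds <- regmatches(pts_cols, regexec("dst_pts_(\\d+)_(\\d+)", pts_cols))
  mids <- vapply(bounds, function(m) mean(as.numeric(m[2:3])), numeric(1))
  flags <- as.matrix(df[pts_cols])
  flags[is.na(flags)] <- 0
  df$dst_pts_allowed <- as.vector(flags %*% mids)
  df[pts_cols] <- NULL
  df
}

teams <- data.frame(player = c("Ravens", "Chargers"),
                    dst_pts_14_20 = c(1, NA),
                    dst_pts_21_27 = c(NA, 1))
collapse_dst_pts(teams)$dst_pts_allowed  # 17 24
```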

Yahoo Source - Public ID not working

Any update on getting this fixed? Is it the credentials that are broken or is there something wrong with the login functionality itself? Is it not possible to put in your own League ID anymore?

more projections_table issues

ran each source individually to see which ones were failing

FantasySharks: Error in mutate_impl(.data, dots) : Evaluation error: object 'fg_miss_0019' not found.
FleaFlicker: Error in -x : invalid argument to unary operator. This seems to be related to the "2: In .f(.x[[i]], ...) : NAs introduced by coercion" received during the scrape.

trouble loading projections after scraping

Hi, I'm using the tool for the first time (and am very excited!). I was following the initial instructions and was able to create a scrape using the tutorial's command:
my_scrape <- scrape_data(src = c("CBS", "ESPN", "Yahoo"),
pos = c("QB", "RB", "WR", "TE", "DST"),
season = 2018, week = 0)

however, when I try to see the projections using:
my_projections <- projections_table(my_scrape)

I am getting the below errors:
Warning messages:
1: Unknown columns: id, data_src
2: Unknown columns: id, data_src
3: Unknown columns: id, data_src
4: Unknown columns: id, data_src
5: Unknown columns: id, data_src
6: Unknown columns: id, data_src
7: Unknown columns: id, data_src
8: Unknown columns: id, data_src

any tips? Thanks!

projections_table - must be numeric

issue calculating projections

my_projections <- projections_table(my_scrape)
Error in rowSums(., na.rm = TRUE) : 'x' must be numeric

my_scrape was generated with
my_scrape <- scrape_data(src = c("CBS"
,"ESPN"
,"FantasyPros"
,"FantasySharks"
,"FFToday"
,"FleaFlicker"
,"NumberFire"
,"Yahoo"
,"NFL"
,"FantasyData"
,"FantasyFootballNerd"
),
pos = c("QB", "RB", "WR", "TE","K","DST"),
season = 2018, week = 1)

looks like CBS works now

Issue while scraping kicker data

When scraping and generating projections for week 1 kicker data, it errors out with the following error:

Error in mutate_impl(.data, dots) :
  Evaluation error: object 'fg_0019' not found.
Calls: projections_table ... <Anonymous> -> mutate -> mutate.tbl_df -> mutate_impl
Execution halted

This is my scrape/projections generation command:

my_scrape <- scrape_data(src = c("ESPN", "Yahoo"), pos = c("QB", "RB", "WR", "TE", "DST", "K"), season = 2018, week = 1)
my_projections <- projections_table(my_scrape)

If I remove the K position then it correctly generates all the necessary data.
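Until the underlying bug is fixed, one workaround is to make sure the kicker table carries every field-goal bucket before calling projections_table(). A sketch, where the helper is hypothetical and the expected column set is an assumption based on the error message:

```r
# Add any missing field-goal columns to a kicker scrape, filled with 0,
# so downstream mutate() calls don't fail on absent columns.
fill_fg_cols <- function(k_tbl,
                         fg_cols = c("fg_0019", "fg_2029", "fg_3039",
                                     "fg_4049", "fg_50", "fg_miss")) {
  missing_cols <- setdiff(fg_cols, names(k_tbl))
  k_tbl[missing_cols] <- 0
  k_tbl
}

k <- data.frame(player = "Justin Tucker", fg_2029 = 1.2)
names(fill_fg_cols(k))  # now includes fg_0019 through fg_miss
```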

NumberFire - Rookies

When scraping NumberFire projections for Week 1, the (MFL) ids for rookie players at all positions (Kyler Murray, Miles Sanders, etc.) are not added. I haven't been able to identify the script where the id is appended to the scraped projections, so I am unable to identify the problem or fix it. I am relatively new to R, so sorry if there is an obvious solution.

(Murray is NumberFire's number 1 QB for week 1, so it's important to get this fixed.)

Can't get scrape to work

Hi there,

I'm new to this package, so my apologies if I'm missing something obvious here. I'm running this code but having issues:

scrape_2020 <- scrape_data(
src = c("CBS", "ESPN", "FantasyData", "FantasyPros", "FantasySharks", "FFToday",
"FleaFlicker", "NumberFire", "Yahoo", "FantasyFootballNerd", "NFL", "RTSports",
"Walterfootball"),
pos = c("QB", "RB", "WR", "TE", "K", "DST", "DL", "LB", "DB"),
season = 2020,
week = 0
)

My error looks like:
"Error in split.default(x = seq_len(nrow(x)), f = f, drop = drop, ...) :
group length is 0 but data length > 0"
I also get 14 warning messages:
1-13: In .Primitive("as.double")(x, ...) : NAs introduced by coercion
14: Unknown or uninitialised column: pos.

Can someone help me see what I'm doing incorrectly here?

Also, secondarily, I see that the package says I can put in years other than 2020 but that it will not work. Is there by chance a data repo where I can see these full scrapes from previous years? I would very much like to analyze this data also. Thank you!

'dst_ret_td' vs 'dst_ret_tds'

All/most sources scrape defensive return touchdowns as "dst_ret_td" (singular), except Yahoo, which is scraped as "dst_ret_tds" (plural) and results in two separate columns. I manually solved this by summing the two columns (treating NAs as 0), but it's probably worth fixing this in the base code.
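The manual fix described above can be sketched in base R (the helper name and sample data are illustrative):

```r
# Merge the two return-TD columns into one, treating NA as 0.
merge_ret_td <- function(df) {
  a <- ifelse(is.na(df$dst_ret_td), 0, df$dst_ret_td)
  b <- ifelse(is.na(df$dst_ret_tds), 0, df$dst_ret_tds)
  df$dst_ret_td <- a + b
  df$dst_ret_tds <- NULL
  df
}

d <- data.frame(data_src = c("CBS", "Yahoo"),
                dst_ret_td = c(0.3, NA),
                dst_ret_tds = c(NA, 0.2))
merge_ret_td(d)$dst_ret_td  # 0.3 0.2
```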

ESPN projections

If you take a look at https://github.com/sansbacon/nfl/blob/master/nfl/espn_api.py it shows in python how to scrape the ESPN API for season-long and weekly projections. The easiest way is to scrape team-by-team because you don't have to deal with any offsets (no team has more than 50 players with fantasy projections). I'm not sure how to handle cookies and other parameters in R, but it shouldn't be too hard for an experienced programmer to adapt it for this library.
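For reference, cookies and query parameters can be passed in R with httr. This is only a sketch of the mechanics; the endpoint, view parameter, and cookie names below are assumptions for illustration, not the actual ESPN API contract:

```r
library(httr)

# Fetch JSON with query parameters and cookies, analogous to the Python
# example. URL, "view" parameter, and cookie names are placeholders.
resp <- GET(
  "https://fantasy.espn.com/apis/v3/games/ffl/seasons/2020/segments/0/leagues/123456",
  query = list(view = "kona_player_info"),
  set_cookies(espn_s2 = "YOUR_ESPN_S2", SWID = "{YOUR-SWID}")
)
proj <- content(resp, as = "parsed", type = "application/json")
```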

Source data throwing errors unless sources are removed

R 3.5.1
All dependency packages updated this AM

During scrape using all sources:
my_scrape <- scrape_data(src = c("CBS", "ESPN", "FantasyData", "FantasyPros",
"FantasySharks", "FFToday", "FleaFlicker", "NumberFire", "Yahoo",
"FantasyFootballNerd", "NFL", "RTSports", "Walterfootball"), pos = c("QB",
"RB", "WR", "TE", "K", "DST"), season = 2018, week = 0)

I encountered the following error:
Error in bind_rows_(x, .id) :
Column rec can't be converted from numeric to character

After trial and error I eventually removed FantasyData and FantasyFootballNerd from the source list in order to complete the scrape successfully.

Next, after running the projections_table function I encountered the following error:
Error in right_join_impl(x, y, by_x, by_y, aux_x, aux_y, na_matches) :
std::bad_alloc

After more trial and error, I eventually removed FantasyPros as well.

In the end, the final set of sources that worked from start to finish were:

my_scrape <- scrape_data(c("CBS", "ESPN",
"FantasySharks", "FFToday", "FleaFlicker", "NumberFire","Yahoo",
"NFL", "RTSports", "Walterfootball"),
pos = c("QB", "RB", "WR", "TE", "K","DST"),
season = 2018, week = 0)

add_ecr pulls duplicates from Fantasy Pros

I apologize if this issue isn't reported correctly; this is my first time trying to contribute to an R package. I found the following (minor) issue:

add_ecr() is scraping duplicates for 3 players, because they are dual listed on Fantasy Pros:

  • Taysom Hill (QB and TE)
  • Antonio Gibson (RB and WR)
  • Lynn Bowden Jr. (RB and WR)

One potential quick fix would be to add an ifelse: if rank_period is "draft", pull "Overall" rather than all positions separately. I don't know if that would behave correctly during the week, though.

ecr_pos <- lg_type %>%
imap(~ scrape_ecr(rank_period = ifelse(week == 0, "draft", "week"),
position = ifelse(week == 0, "Overall", .y), rank_type = .x)) %>%
map(select, id, pos_ecr = avg, sd_ecr = std_dev) %>% bind_rows()

Another potential quick fix would be to add to the ecr_pos call: %>% distinct(id,.keep_all = TRUE), which would return just the first ECR ranking in the bind_rows.

ecr_pos <- lg_type %>%
imap(~ scrape_ecr(rank_period = ifelse(week == 0, "draft", "week"),
position = .y, rank_type = .x)) %>%
map(select, id, pos_ecr = avg, sd_ecr = std_dev) %>% bind_rows() %>% distinct(id,.keep_all = TRUE)

I've applied both fixes to my local override, and I could try to commit and push to Git, but I'm unsure of the process.

Thanks for the great work! I went from losing the league in the 2018 season (and running an ultra marathon) to being within 3 points of winning the championship the next season. Much of that is due to a successful draft program using R, made possible by ffanalytics.

Why use "week = 0" to reference week 1?

 my_scrape <- scrape_data(src = c("CBS", "ESPN", "Yahoo"),
                           pos = c("QB", "RB", "WR", "TE", "DST"),
                           season = 2018, week = 0)

My understanding is week = 0 is referring to week 1. Is that true? If so, why not use week 1 to make it more clear?

Table has inconsistent number of columns

Issue exists for the QB position only. Other positions are collected.

Scraping QB projections from
https://www.cbssports.com/fantasy/football/stats/weeklyprojections/QB/1/avg/standard?print_rows=9999
Error: Table has inconsistent number of columns. Do you want fill = TRUE?

Code used
my_scrape <- scrape_data(src = c("CBS"
,"ESPN"
,"FantasyPros"
,"FantasySharks"
,"FFToday"
,"FleaFlicker"
,"NumberFire"
,"Yahoo"
,"NFL"
,"FantasyData"
,"FantasyFootballNerd"
),
pos = c("QB", "RB", "WR", "TE","K","DST"),
season = 2018, week = 1)

FantasyPros Weekly scrape returns no data

The current setup causes FantasyPros to default to the normal position page, which is for the draft; even when you enter anything other than week == 0, you still get the default page.

Below is my fix in source_configs.R. I'm not sure exactly how pull requests work, so I'll leave it here. I used the CBS list as a basis and changed toupper to tolower. Changing to tolower may actually fix it anyway, as the positions on the site are lowercase.

FantasyPros = list(
  base = "https://www.fantasypros.com/nfl/projections/",
  get_path = function(season, week, position){
    period <- ifelse(week == 0, "draft", as.character(week))
    paste(tolower(position), ".php?week=", period, sep = "")
  },
  #get_path = function(season, week, position) paste0(tolower(position), ".php"),
  #get_query = function(season, week, pos_id, ...){
  #  if(week == 0)
  #    return(list(week = "draft"))
  #},
  get_query = NULL,

Where is the API key located?

I just subscribed and read the article stating my API key would be in the "My Account" modal but it is not there. Please advise. thank you!

No player_table to add player data

After successfully scraping week 1 in R and turning the scrape into projections, I receive the error:

"Error in select(player_table, id, first_name, last_name, team, position, :
object 'player_table' not found"

After reading the site, this table is supposed to be loaded from MFL on package load. This is not happening for me. I have tried reinstalling the package and nothing seems to work.

add_adp isn't working

my_projections %>% add_ecr() %>% add_risk() %>% add_adp()

Error: `nm` must be NULL or a character vector the same length as `x`

League settings

It would be helpful to have more predefined settings, for example:

settings.espn_ppr <- list(
  pass = list(
    pass_att = 0, pass_comp = 0, pass_inc = 0, pass_yds = 0.04, pass_tds = 4,
    pass_int = -2, pass_40_yds = 0, pass_300_yds = 0, pass_350_yds = 0,
    pass_400_yds = 0
  ),
  rush = list(
    all_pos = TRUE,
    rush_yds = 0.1, rush_att = 0, rush_40_yds = 0, rush_tds = 6,
    rush_100_yds = 0, rush_150_yds = 0, rush_200_yds = 0
  ),
  rec = list(
    all_pos = TRUE,
    rec = 1, rec_yds = 0.1, rec_tds = 6, rec_40_yds = 0, rec_100_yds = 0,
    rec_150_yds = 0, rec_200_yds = 0
  ),
  misc = list(
    all_pos = TRUE,
    fumbles_lost = -2, fumbles_total = 0,
    sacks = 0, two_pts = 2
  ),
  kick = list(
    xp = 1.0, fg_0019 = 3.0, fg_2029 = 3.0, fg_3039 = 3.0, fg_4049 = 4.0,
    fg_50 = 5.0, fg_miss = 0.0
  ),
  ret = list(
    all_pos = TRUE,
    return_tds = 6, return_yds = 0
  ),
  idp = list(
    all_pos = TRUE,
    idp_solo = 1, idp_asst = 0.5, idp_sack = 2, idp_int = 3, idp_fum_force = 3,
    idp_fum_rec = 2, idp_pd = 1, idp_td = 6, idp_safety = 2
  ),
  dst = list(
    dst_fum_rec = 2, dst_int = 2, dst_safety = 2, dst_sacks = 1, dst_td = 6,
    dst_blk = 2, dst_ret_yds = 0, dst_pts_allowed = 0
  ),
  pts_bracket = list(
    list(threshold = 0, points = 10),
    list(threshold = 6, points = 7),
    list(threshold = 20, points = 4),
    list(threshold = 34, points = 0),
    list(threshold = 99, points = -4)
  )
)
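A list like this could then be supplied to projections_table(), which accepts custom scoring rules (a sketch, assuming my_scrape already exists from a prior scrape_data() call):

```r
my_projections <- projections_table(my_scrape, scoring_rules = settings.espn_ppr)
```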

Old Seasons?

I successfully followed the tutorials to get the 2018 data. I figured that if I changed the season to a past season (2017, 2016, etc.) it would scrape that season, but as I found out, it just rescrapes 2018 instead.

How do we go about getting prior seasons data?

Rookie ids are not showing up in data

When I run the scripts, I am not seeing rookie ids show up. For example, no rookie QBs have an id: Joe Burrow is in the player table with id 14777, but in the data his id is NA. All rookies are the same way. Example below.

#test data
my_scrape <- scrape_data(src = c("CBS"),
pos = c("QB"),
season = 2020, week = 0)
DataQB <- rbind(my_scrape[["QB"]])
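Until rookie ids are mapped properly, one stopgap is to backfill missing ids by matching player names against a lookup such as the package's player_table. A sketch with illustrative column names and made-up rows:

```r
# Fill NA ids by exact name match against a lookup table.
fill_missing_ids <- function(proj, lookup) {
  idx <- match(proj$player, lookup$player)
  proj$id <- ifelse(is.na(proj$id), lookup$id[idx], proj$id)
  proj
}

proj <- data.frame(player = c("Joe Burrow", "Tom Brady"),
                   id = c(NA, "5848"), stringsAsFactors = FALSE)
lookup <- data.frame(player = "Joe Burrow", id = "14777",
                     stringsAsFactors = FALSE)
fill_missing_ids(proj, lookup)$id  # "14777" "5848"
```

Exact name matching is fragile (suffixes, team abbreviations), so a real fix in the package would be preferable.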

Unable to Scrape

Not sure if this is something only happening to me, but I can't figure out what's wrong. Below is what happens when I try to scrape.

> my_scrape <- scrape_data(src = c("CBS", "ESPN", "Yahoo"), 
+                          pos = c("QB", "RB", "WR", "TE", "DST"),
+                          season = 2018, week = 0)
Scraping QB projections from 
 https://www.cbssports.com/fantasy/football/stats/weeklyprojections/QB/season/avg/standard?   print_rows=9999 
 Error in make_df_colnames(data_table) : 
  could not find function "make_df_colnames" 

new JSON source already has mfl_player_id

I am creating a new JSON source. Assume the structure of each object in the list is as follows:

{
  "mfl_player_id": 5848, 
  "source_player_id": "tombrady",
  "source_player_name": "Tom Brady",
  "source_team_id": "NE",
  "source_player_position": "QB",
  "stats": { .. stat items ..}
}
 

Are the following values correct?

id_col = "mfl_player_id",
json_elem = "players",
stat_elem = "stats",
player_elem = NULL,   
player_cols = c(src_id = "source_player_id", player = "source_player_name", 
                team = "source_team_id", pos = "source_player_position")

Also, once I add the site description to source_configs.R, what else do I have to do so the source gets scraped when I run scrape_data()?

Thanks for your help!

Project is dead

Half of the scripts won't run properly if at all. The contributors aren't responding to anything. Disappointing, because this is an excellent project.


get_adp function not working for me

I love your package and have used it a lot in the past. For some reason though, the get_adp function doesn't seem to be working. I am using AAV, and the sources are CBS, Yahoo, NFL, and ESPN. This is the error I get: "Error in draft_list[[1]] : subscript out of bounds." Let me know if there is a fix!

Ceiling and Floor estimates are way off for QB and WR

Scraped fresh data today with scrape_data() and calculated new projections; the data was very volatile for QB and WR ceiling and floor, way out of the ordinary. This was seen especially in QBs, and WRs had spreads (floor/ceiling) about 4 times larger than those of RBs, which was far different from calculations I have done in the past week. I confirmed the issue by using an old scrape from last week: the data was much more normal when calculating projections. I also tried changing the rules for the fresh scrape with PPR, 0.5 PPR, and standard scoring, and the projections were still very out of the ordinary for QB and WR.

Error installing package

When I try to install the package I get the following error:

devtools::install_github(repo = "FantasyFootballAnalytics/ffanalytics")

Downloading GitHub repo FantasyFootballAnalytics/ffanalytics@master
from URL https://api.github.com/repos/FantasyFootballAnalytics/ffanalytics/zipball/master
Installing ffanalytics
"C:/PROGRA1/R/R-341.3/bin/x64/R" --no-site-file --no-environ --no-save --no-restore --quiet
CMD INSTALL
"C:/Users/m7hj2/AppData/Local/Temp/RtmpiIwa2K/devtools3d7c1a3c7eba/FantasyFootballAnalytics-ffanalytics-f639f21"
--library="C:/Users/m7hj2/Documents/R/win-library/3.4" --install-tests

* installing *source* package 'ffanalytics' ...
** R
** data
*** moving datasets to lazyload DB
Error in .[[c("fullNflSchedule", "nflSchedule")]] :
  no such index at level 1

Warning: namespace 'ffanalytics' is not available and has been replaced
by .GlobalEnv when processing object 'projection_sources'
Error in .[[c("fullNflSchedule", "nflSchedule")]] :
no such index at level 1

Warning: namespace 'ffanalytics' is not available and has been replaced
by .GlobalEnv when processing object 'projection_sources'
** preparing package for lazy loading
Warning: package 'tidyverse' was built under R version 3.4.4
Warning: package 'ggplot2' was built under R version 3.4.4
Warning: package 'tidyr' was built under R version 3.4.4
Warning: package 'purrr' was built under R version 3.4.4
Warning: package 'dplyr' was built under R version 3.4.4
Warning: package 'rvest' was built under R version 3.4.4
Warning: package 'httr' was built under R version 3.4.4
Warning: package 'readxl' was built under R version 3.4.4
Warning: package 'janitor' was built under R version 3.4.4
Warning: package 'glue' was built under R version 3.4.4
Warning: package 'Hmisc' was built under R version 3.4.4
Warning: package 'Formula' was built under R version 3.4.4
Error in .[[c("fullNflSchedule", "nflSchedule")]] :
no such index at level 1

Error : unable to load R code in package 'ffanalytics'
ERROR: lazy loading failed for package 'ffanalytics'

* removing 'C:/Users/m7hj2/Documents/R/win-library/3.4/ffanalytics'
In R CMD INSTALL
Installation failed: Command failed (1)

Error trying to run projections_table

library(ffanalytics)

my_scrape <- scrape_data(src = c("FantasyPros", "NumberFire"), 
                         pos = c("QB", "RB", "WR", "TE", "DST"),
                         season = 2018, week = 0)

my_projections <- projections_table(my_scrape)

Error:

Error in grouped_df_impl(data, unname(vars), drop) : 
  Column `id` is unknown
In addition: Warning messages:
1: Unknown variables: `id`, `data_src` 
2: Unknown variables: `id`, `data_src` 
3: Unknown variables: `id`, `data_src` 

Weekly scrape failure

Package installed fine, using R 3.5.1 within Rstudio.

The scrape_data function works great when setting week=0, but fails when attempting a specific week.

> chk <- scrape_data(src = c("CBS","ESPN"), pos = c('QB','RB'), season = 2018, week = 0)
Scraping QB projections from
 https://www.cbssports.com/fantasy/football/stats/weeklyprojections/QB/season/avg/standard?print_rows=9999
Scraping RB projections from
 https://www.cbssports.com/fantasy/football/stats/weeklyprojections/RB/season/avg/standard?print_rows=9999
Scraping QB projections from
 http://games.espn.com/ffl/tools/projections?slotCategoryId=0&seasonTotals=true&seasonId=2018&startIndex=0
Scraping RB projections from
 http://games.espn.com/ffl/tools/projections?slotCategoryId=2&seasonTotals=true&seasonId=2018&startIndex=0

> chk <- scrape_data(src = c("CBS","ESPN"), pos = c('QB','RB'), season = 2018, week = 1)
Scraping QB projections from
https://www.cbssports.com/fantasy/football/stats/weeklyprojections/QB/1/avg/standard?print_rows=9999
Error: Table has inconsistent number of columns. Do you want fill = TRUE?
27. stop("Table has inconsistent number of columns. ", "Do you want fill = TRUE?",
call. = FALSE)
26. html_table.xml_node(., header = TRUE)
25. html_table(., header = TRUE)
24. function_list[k]
23. withVisible(function_list[k])
22. freduce(value, _function_list)
21. _fseq(_lhs)
20. eval(quote(_fseq(_lhs)), env, env)
19. eval(quote(_fseq(_lhs)), env, env)
18. withVisible(eval(quote(_fseq(_lhs)), env, env))
17. data_page %>% html_node(table_css) %>% html_table(header = TRUE) at source_classes.R#325
16. self$get_table() at source_classes.R#383
15. src$open_session(season, week, position)$scrape()
14. scrape_source(.x, season, week, .y)
13. .f(.x[[i]], .y[[i]], ...)
12. map2(.x, vec_index(.x), .f, ...)
11. imap(.x, ~scrape_source(.x, season, week, .y))
10. f(.x[[i]], ...)
9. map(., ~imap(.x, ~scrape_source(.x, season, week, .y)))
8. function_list[i]
7. freduce(value, _function_list)
6. _fseq(_lhs)
5. eval(quote(_fseq(_lhs)), env, env)
4. eval(quote(_fseq(_lhs)), env, env)
3. withVisible(eval(quote(_fseq(_lhs)), env, env))
2. map(pos, ~map(projection_sources[src], ~.x)) %>% transpose() %>%
map(~imap(.x, ~scrape_source(.x, season, week, .y))) %>%
transpose() %>% map(discard, is.null) %>% map(bind_rows,
.id = "data_src")

1. scrape_data(src = c("CBS", "ESPN"), pos = c("QB", "RB"), season = 2018,
   week = 1)

Please let me know if there's any other information I can provide.

Excluding players - parameter for projections_table

There are some zombie projections that it would be helpful to be able to exclude with a parameter to projections_table. For example, Jamaal Charles is projected for 174 points (RB22) and Matt Forte, who is retired, is projected for 155 points (RB 37), which throws off the positional ranks below them. Thanks!
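In the meantime, zombie players can be filtered out of the scrape before the table is built. A sketch (the helper is hypothetical; real exclusions would use actual MFL ids from player_table):

```r
# Drop excluded player ids from every position table in a scrape result.
exclude_players <- function(scrape, exclude_ids) {
  lapply(scrape, function(tbl) tbl[!(tbl$id %in% exclude_ids), , drop = FALSE])
}

scr <- list(RB = data.frame(id = c("1", "2"), player = c("A", "B"),
                            stringsAsFactors = FALSE))
exclude_players(scr, "2")$RB$player  # "A"
```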

Cannot Scrape 2019 Wk 1 Data

I run into the following error when trying to load week 1 data for the 2019 season.

Error: Column fg can't be converted from numeric to character
In addition: There were 50 or more warnings (use warnings() to see the first 50)

script:
my_scrape <- scrape_data(pos = c("QB", "RB", "WR", "TE", "DST", "K"), season = 2019, week = 1)

Sorry if this is trivial - new to this whole thing.

ECR not functioning

add_ecr() broke for me. I don't know why, but it never returned data to begin with anyway. Now I get an error in the add_column() function when it tries to import the fp_ids; fp_ids is not getting the ID from the scrape.

I fixed it in my repo, but that didn't populate the sd_ecr, etc. fields. I changed it to use the full name because the ID was not matching, and this is how I add ECR:

my_projections <- projections_table(my_scrape, scoring_rules)
my_projections <- my_projections %>% add_player_info()
my_projections$first_name <- paste(my_projections$first_name, my_projections$last_name)
names(my_projections)[2] <- c("name")
my_projections = subset(my_projections, select = -c(last_name))
my_projections <- my_projections %>% add_ecr() %>% add_risk()

No id for Yahoo DST when scraping week 1

scrape_data does not provide id for DSTs when scraping Yahoo for week 1

my_scrape <- scrape_data(src = c("Yahoo"), 
                         pos = c("QB", "RB", "WR", "TE", "DST"),
                         season = 2019, week = 1)

Additional projections_table issues

While investigating the source of issue 15, I did notice a few other issues. I wasn't able to determine if the projections were being included as part of the projections table.

FantasyPros: Error in -x : invalid argument to unary operator
FleaFlicker: Error: Column id not found
Yahoo: Error in mutate_impl(.data, dots) : Evaluation error: object 'fg' not found.
NFL: Error in .f(.x[[i]], ...) : object 'att' not found
FantasyData: Error: by can't contain join column pos, data_col which is missing from LHS
FantasyFootballNerd: Error: Column id not found

projections_table function triggers multiple errors

> my_projections <- projections_table(my_scrape)

Error in (function (x, strict = TRUE) :
the argument has already been evaluated
In addition: Warning messages:
1: Unknown variables: id, data_src
2: Unknown variables: id, data_src

Issues with projection sources

I have found the following issues with projection sources retrieved using scrape_data for season = 2019 and week = 0:

  1. ESPN: No data is scraped. Projections are publicly available. Likely a scraping issue.

  2. FantasyData: No data is scraped. The free projections are for a limited number of players; full projections are available only with a paid subscription. I suspect this is no longer a valid public data source.

  3. FleaFlicker: No data is scraped. A free membership is required to access projections. This may simply require a revision to incorporate login with credentials.

  4. FantasyFootballNerd: The scraped data is incorrect. For example, Calvin Johnson is projected as the #1 WR. It appears that free projections are available, so this may just require fixing the script.

  5. WalterFootball: The projections for most positions don't include TD projections. I couldn't find projections on the website in HTML, so I couldn't determine if this is an error or if WF doesn't project TDs (which makes the projections much less useful).

Thanks!

Some Sources not working

It appears some sources are not working.

Running

df <- scrape_data(src = c("CBS", "ESPN", "NFL"), "QB", 2019, 1) %>% as.data.frame()

leads to

> df <- scrape_data(src = c("CBS", "ESPN", "NFL"), "QB", 2019, 1) %>% as.data.frame()
Scraping QB projections from 
 https://www.cbssports.com/fantasy/football/stats/QB/2019/1/projections/nonppr 
Scraping QB projections from 
 
Scraping QB projections from 
 http://api.fantasy.nfl.com/v1/players/stats?statType=weekProjectedStats&season=2019&week=1&position=QB&format=json 
Error: `.x` is empty, and no `.init` supplied

I would like to scrape the NFL source so that's the critical part here. Any suggestions?

Missing Yards gained data for QB's

I'm noticing that yards_gained and other yards-associated columns are not logged for the following quarterbacks and games:

  • Ben Roethlisberger for game_id: 2013112401
  • Brandon Weeden for game_ID: 2013112401
  • Chad Henne, game_ID: 2013120101
  • Jason Campbell, game_id: 2013112401

Found this in a dplyr summarize call on QBs so possible it's missing for receivers as well.

Error with scrape_data

When I run the standard:

library(ffanalytics)

my_scrape <- scrape_data(src = c("CBS", "ESPN", "Yahoo"),
                         pos = c("QB", "RB", "WR", "TE", "DST"),
                         season = 2018, week = 0)

In RStudio on my newly reinstalled 3.5.1 & 3.4.4 installations I get the following error:

Scraping QB projections from
https://www.cbssports.com/fantasy/football/stats/sortable/points/QB/standard/projections/2018/?print_rows=9999
Error: Column 1 must be named.
Use .name_repair to specify repair.
Call rlang::last_error() to see a backtrace
Called from: abort(error_column_must_be_named(bad_name))
Browse[1]>

Any thoughts on what might be going on? This worked last week. I have tried uninstalling/reinstalling R & associated libraries with no success.
