
binary.com-interview-question's Introduction


binary.com Quantitative Analyst / Quantitative Trader Interview Questions

大秦赋 (Chinese Emperor)

From the Book of Rites,《礼记•经解》(Spring and Autumn / Warring States period):

孔子曰:「君子慎始,差若毫厘,谬以千里。」Confucius said: "The gentleman is careful at the very beginning, for an error of a hair's breadth may lead one a thousand li astray."1

References: 「快懂百科」《礼记•经解》; 第一范文网: the story of 差之毫厘,谬以千里; 「百度百科」《礼记•经解》(attributed to Confucius, Spring and Autumn period); 「當代中國」差之毫釐 謬以千里

Interview Questions

This was the entry test for the quantitative analyst / quantitative trader position at binary.com (a Malaysian-incorporated entity). Drawing on Simon Collins's https://matchodds.org (and on James Simons's high-frequency quantitative hedge fund, Renaissance Technologies), I attempted to build a one-stop intelligent web application that automatically gathers data, backtests, computes, forecasts, places orders, settles trades, reports profit and loss, produces risk-management reports, and then evaluates and refines the high-frequency quantitative hedging strategy. Here I experiment with a range of statistical/machine-learning models, assess their validity and feasibility, and draw on master's-level quantitative coursework (in English), in the hope of being hired by binary.com.

Question 1

Question 1, Part 1) Answer

Using daily USDJPY candlestick and volume data from 1 January 2014 to 20 January 2017, I forecast the daily high and low prices with the following statistical/machine-learning models (a minimal fitting sketch follows the list):

  • Autoregressive integrated moving average (ARIMA)
  • Exponential smoothing (ETS)
  • Univariate GARCH (generalised autoregressive conditional heteroskedasticity)
  • Weighted exponential moving average (EWMA)
  • Markov chain Monte Carlo (MCMC)
  • Bayesian time series
  • Mixed-data sampling regression (MIDAS)
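
As a rough illustration of how the first two models in this list could be fitted in R (a minimal sketch: the `usdjpy` object and its `High`/`Low` columns are assumptions, not the repository's actual code):

library(forecast)

# Assume `usdjpy` is a daily data frame (or xts) of USDJPY candles with High/Low columns.
hi <- ts(usdjpy$High)                          # daily high series
lo <- ts(usdjpy$Low)                           # the same steps apply to the low series

fit_hi_arima <- auto.arima(hi)                 # ARIMA with order chosen by AICc
fit_hi_ets   <- ets(hi)                        # exponential smoothing state-space model

fc_hi_arima <- forecast(fit_hi_arima, h = 1)   # one-step-ahead forecast of the next high
fc_hi_ets   <- forecast(fit_hi_ets,   h = 1)

accuracy(fit_hi_arima)                         # in-sample error measures (ME, RMSE, MAE, ...)
accuracy(fit_hi_ets)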

Please refer to binary.com Interview Question I (in English) (old link; mirror; mirror 2 (with mean squared error added to compare the accuracy of the models)).

The research papers listed from here onwards use candlestick-plus-volume data for seven currency pairs, from 1 January 2013 to 31 August 2017:

  • AUDUSD (Australian dollar / US dollar)
  • EURUSD (euro / US dollar)
  • GBPUSD (pound sterling / US dollar)
  • USDCAD (US dollar / Canadian dollar)
  • USDCHF (US dollar / Swiss franc)
  • USDCNY (US dollar / Chinese yuan)
  • USDJPY (US dollar / Japanese yen)

The papers are as follows:

To prepare the data for high-frequency quantitative hedging models, I first reviewed and cleaned it; see 「鄀客栈」binary.com Interview Question I - Univariate Missing-Data Management and 「鄀客栈」binary.com Interview Question I - Multivariate Missing-Data Management (II). The univariate modelling ran into some errors (some attributed to external interference), and the papers apply several imputation methods, such as interpolation, Kalman smoothing, last observation carried forward (LOCF) and moving average. binary.com Interview Question I - Intraday High-Frequency Trading Models Comparison (in English) compares the ts, msts, SARIMA, mcsGARCH, midasr, midas-garch and Levy-process models.
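
For illustration, a minimal sketch of those four imputation methods using the imputeTS package (the `price_with_gaps` series is an assumption, not the project's actual data):

library(imputeTS)

# Assume `price_with_gaps` is a univariate numeric/ts series containing NA values.
statsNA(price_with_gaps)                             # summarise the missing-value pattern

filled_interp <- na_interpolation(price_with_gaps)   # linear interpolation
filled_kalman <- na_kalman(price_with_gaps)          # Kalman smoothing on a state-space model
filled_locf   <- na_locf(price_with_gaps)            # last observation carried forward
filled_ma     <- na_ma(price_with_gaps, k = 4)       # weighted moving-average imputation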

Question 1, Part 2) Behind the scenes

Originally I wrote a Shiny application (animated screenshot below), but its loading speed and computational efficiency were poor. For more details please open the ShinyApp and read 「鄀客栈」binary.com Interview Question I - Lasso, Elastic-Net and Ridge Regression. That app covers all three interview questions and their answers. The betting strategy simply estimates and forecasts the daily high and low prices, and then:

  • uses the high and low prices in the data together with the Kelly criterion to size the stake, and forecasts the closing price; when the data contain missing values, or no high/low observations, the closing price is used to forecast the next period instead (a Kelly staking sketch follows this list);
  • sets the betting threshold from the variance of the forecasts, for example the variance of the forecast high and the variance of the forecast low; when there is an edge it stakes 100 units, and the profit or loss is settled against the closing price.
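
A minimal sketch of the Kelly staking rule mentioned above (the win probability, net odds and bankroll are illustrative assumptions, not the app's actual parameters):

# Kelly criterion: optimal fraction of bankroll f* = (b*p - q) / b,
# where p = win probability, q = 1 - p, b = net odds received on a win.
kelly_fraction <- function(p, b) {
  q <- 1 - p
  f <- (b * p - q) / b
  max(f, 0)                     # never stake when the edge is negative
}

bankroll <- 10000
p_hat    <- 0.55                # forecast win probability (illustrative)
b_odds   <- 1.0                 # even-money payout (illustrative)

stake <- bankroll * kelly_fraction(p_hat, b_odds)
stake                           # 0.10 * 10000 = 1000 when p = 0.55, b = 1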

Following 解密复兴科技 - 基于隐蔽马尔科夫模型的时序分析方法 (Decoding Renaissance Technologies: time-series analysis based on hidden Markov models), a later version will compute the Sharpe ratio to weigh risk against edge before choosing the best times to open and close positions; a minimal Sharpe-ratio sketch is shown below.
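
A minimal sketch of an annualised Sharpe ratio computed from daily returns (the simulated returns and the zero risk-free rate are assumptions):

# Annualised Sharpe ratio from daily returns.
sharpe_ratio <- function(daily_ret, rf_daily = 0, periods = 252) {
  excess <- daily_ret - rf_daily
  sqrt(periods) * mean(excess, na.rm = TRUE) / sd(excess, na.rm = TRUE)
}

set.seed(1)
daily_ret <- rnorm(252, mean = 0.0004, sd = 0.01)   # simulated daily P&L returns
sharpe_ratio(daily_ret)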

Secondly, I wrote another app, testRealTimeTransc (a trial version), to test real-time trading; the completed version is Q1App2.

Because the paper Binary.com Interview Q1 - Tick-Data-HiLo For Daily Trading (Blooper) had already simulated the data before I noticed that I had not yet updated the new function, I wrote 广义自回归条件异方差模型中的ARIMA(p,d,q)参数最优化 (optimising the ARIMA(p,d,q) parameters within a GARCH model) to compare the accuracy. However, the later paper's simulated dataset did not save the $fit$ object, so I could not retrieve $\sigma^2$ and the VaR values for stop-loss pips once I had the idea. I therefore file it as a blooper, start binary-Q1 Multivariate GARCH Models, and will later write another FOREX Day Trade Simulation that simulates all tick data rather than only the HiLo data.
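
For reference, a minimal sketch of how $\sigma^2$ and a one-sided VaR for stop-loss sizing could be recovered from a saved GARCH fit with the rugarch package (the `ret` series, the sGARCH(1,1)/ARMA(1,1) specification and the 1% level are assumptions):

library(rugarch)

# Assume `ret` is a numeric vector (or xts) of returns.
spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                   mean.model     = list(armaOrder = c(1, 1)))
fit  <- ugarchfit(spec, data = ret)

sigma2   <- sigma(fit)^2                            # in-sample conditional variance series
fc       <- ugarchforecast(fit, n.ahead = 1)
var_1pct <- fitted(fc) + sigma(fc) * qnorm(0.01)    # 1% Gaussian VaR for the next period
var_1pct                                            # candidate stop-loss level in return space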

Question 1, Part 3) Shiny applications

  • shinyApp : shiny::runGitHub('englianhu/binary.com-interview-question') - application that compares the accuracy of several LASSO, ridge and elastic-net models (blooper).
  • Q1App : shiny::runGitHub('englianhu/binary.com-interview-question', subdir = 'Q1') - application that gathers data, then calculates and forecasts the price. Once the user selects a currency pair and the forecast day, the system automatically calculates and plots the graph.
  • testRealTimeTransc : shiny::runGitHub('englianhu/binary.com-interview-question', subdir = 'testRealTimeTransc') - real-time trading system that automatically gathers data, calculates the forecast price, places orders, and settles and plots the P&L every day.
  • Q1App2 : shiny::runGitHub('englianhu/binary.com-interview-question', subdir = 'Q1App2') - application containing Banker and Punter sections that apply the statistical models above.

Question 2

Question 2, Part 1) Answer

For Question 2 I attempted to write a Shiny application, Q2App. A bivariate or trivariate Poisson model can be used to analyse the probabilities of investors depositing capital into, and withdrawing or redeeming it from, a fund, which helps manage the fund's overall capital and liquidity. Unfortunately no investor cash-flow data from any fund was available for research; a small simulation sketch is given below.
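
A minimal simulation sketch of deposits and withdrawals as two independent Poisson processes (all rates and ticket sizes are illustrative assumptions, since no real investor cash-flow data was available):

# Model daily deposit and withdrawal counts as independent Poisson processes.
set.seed(1)
days        <- 250
lambda_in   <- 12            # mean number of deposits per day (illustrative)
lambda_out  <- 10            # mean number of withdrawals per day (illustrative)
mean_ticket <- 5000          # average ticket size per transaction (illustrative)

deposits    <- rpois(days, lambda_in)  * mean_ticket
withdrawals <- rpois(days, lambda_out) * mean_ticket
net_flow    <- cumsum(deposits - withdrawals)   # cumulative net capital flow

summary(net_flow)
plot(net_flow, type = "l", xlab = "day", ylab = "cumulative net flow")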

Question 2, Part 2) Shiny application

Q2: run shiny::runGitHub('englianhu/binary.com-interview-question', subdir = 'Q2') to launch the Shiny app demonstrating queueing theory (operations research); a minimal M/M/1 example follows.
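
For reference, a minimal sketch of the basic M/M/1 queueing quantities (the arrival and service rates are illustrative assumptions; the Shiny app itself may use a different model):

# M/M/1 queue metrics.
lambda <- 8      # arrivals per hour (illustrative)
mu     <- 10     # services per hour; must exceed lambda for a stable queue

rho <- lambda / mu                 # server utilisation
Lq  <- rho^2 / (1 - rho)           # mean number waiting in the queue
Wq  <- Lq / lambda                 # mean waiting time in the queue (hours)
W   <- Wq + 1 / mu                 # mean time in the system

c(utilisation = rho, L_queue = Lq, W_queue = Wq, W_system = W)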

Question 3

For Question 3, since the question does not state that we only bet on matches where we have a certain edge, I simply list the scenarios. Kindly refer to Betting strategy for more information; a small edge-filter sketch is shown below.
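
A minimal sketch of such an edge filter (the model probabilities, decimal odds and threshold are illustrative assumptions, not values from the repository):

# Bet only when the estimated edge exceeds a threshold.
edge <- function(p_model, odds_decimal) p_model * odds_decimal - 1

matches <- data.frame(p_model = c(0.55, 0.40, 0.62),
                      odds    = c(1.95, 2.40, 1.70))
matches$edge <- edge(matches$p_model, matches$odds)

threshold   <- 0.02                     # minimum edge required to place a bet
matches$bet <- matches$edge > threshold
matches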

References

References, Part 1) Question 1

  1. Stock Market Forecasting Using LASSO Linear Regression Model by Sanjiban Sekhar Roy, Dishant Mittal, Avik Basu, Ajith Abraham (2015)❤‍🔥
  2. Using LASSO from lars (or glmnet) package in R for variable selection by Juancentro (2014)
  3. Difference between glmnet() and cv.glmnet() in R? by Amrita Sawant (2015)
  4. Testing Kelly Criterion and Optimal f in R by Roy Wei (2012) ❤‍🔥
  5. Portfolio Optimization and Monte Carlo Simulation by Magnus Erik Hvass Pedersen (2014) ❤‍🔥
  6. Glmnet Vignette by Trevor Hastie and Junyang Qian (2014)
  7. lasso怎么用算法实现? by shuaihuang (2010)
  8. The Sparse Matrix and {glmnet} by Manuel Amunategui (2014)
  9. Regularization and Variable Selection via the Elastic Net by Hui Zou and Trevor Hastie
  10. LASSO, Ridge, and Elastic Net ❤‍🔥
  11. 热门数据挖掘模型应用入门(一): LASSO回归 by 侯澄钧 (2016)
  12. The Lasso Page
  13. Call_Valuation.R by Mariano (2016)
  14. Lecture 6 – Stochastic Processes and Monte Carlo (http://zorro-trader.com/manual) ❤‍🔥 ❤‍🔥
  15. The caret Package by Max Kuhn (2017) ❤‍🔥
  16. Time Series Cross Validation ❤‍🔥
  17. Character-Code.com
  18. Size Matters – Kelly Optimization by Roy Wei (2012) ❤‍🔥
  19. Time Series Cross Validation by William Chiu (2015) ❤‍🔥
  20. Forecasting Volatility by Stephen Figlewski (2004)
  21. Successful Algorithmic Trading by Michael Halls Moore (2015) ❤‍🔥 ❤‍🔥
  22. Financial Risk Modelling and Portfolio Optimization with R (2nd Edn) by Bernhard Pfaff (2016) ❤‍🔥
  23. Analyzing Financial Data and Implementing Financial Models Using R by Clifford S. Ang (2015) ❤‍🔥

References, Part 2) Question 2

  1. Queueing model 534 in Excel ❤‍🔥
  2. Queueing model macro in Excel ❤‍🔥
  3. Queueing up in R, (continued)
  4. Waiting in line, waiting on R
  5. Simulating a Queue in R
  6. What is the queue data structure in R?
  7. Implementing a Queue as a Reference Class
  8. queue implementation?
  9. Queueing Theory Calculator ❤‍🔥
  10. The Pith of Performance by Neil Gunther (2010)
  11. Computationally Efficient Simulation of Queues - The R Package queuecomputer
  12. Waiting-Line Models
  13. Queues with Breakdowns and Customer Discouragement

References, Part 3) Question 3

  1. Data APIs/feeds available as packages in R
  2. Application of Kelly Criterion model in Sportsbook Investment

Quantitative Trading

Part 1) Introduction

All data related to the binary.com research project has been migrated to the data repository 「数据仓库」binary.com Quantitative Analyst / Quantitative Trader Interview Questions, and research on high-frequency quantitative hedging models continues.

The seasonal time-series and high-frequency quantitative hedging models are as follows:

Later I will study investment risk management and the Sharpe ratio; for more details, please refer to:

Part 2) Behind the scenes

Deriv.com - Interday High Frequency Trading Models Comparison (Blooper) begins the seasonal modelling; the models mentioned in the papers (mcsGARCH, midasr, midas-garch, Levy process) will be researched later. A minimal multi-seasonal starting point is sketched below.
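
As one possible starting point for the multi-seasonal intraday modelling (a minimal sketch using forecast::msts and TBATS; the `close_1min` series and the seasonal periods are assumptions, and TBATS is a stand-in rather than one of the models listed above):

library(forecast)

# Assume `close_1min` is a numeric vector of 1-minute closing prices.
# 1440 = minutes per day, 7200 = minutes per 5-day trading week (both assumptions).
y   <- msts(close_1min, seasonal.periods = c(1440, 7200))
fit <- tbats(y)                  # TBATS handles multiple seasonal periods
fc  <- forecast(fit, h = 60)     # forecast the next hour of minute bars
plot(fc)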




Sςιβrοκεrs Trαdιηg®
Scibrokes (世博量化)® - all intellectual property® and copyright® reserved; unauthorised copying will be prosecuted.

Footnotes

  1. HTML Color Codes

binary.com-interview-question's People

Contributors

englianhu


binary.com-interview-question's Issues

`forecast::Arima(xreg = XXXX)` errors out when given calendar-based exogenous regressors (I Ching, astronomical calendar, 24 solar terms, 12 double-hours, divination and meteorology; 孔明借东风与草船借箭)

Memorial to the throne


Report if there is an issue; otherwise the court is adjourned.

fabletools::accuracy error message

Summary

  • fabletools::accuracy() gives different results in 「R鄀文艺坊」 and in the terminal 「牡蛎」.
  • forecast::Arima(xreg = XXXX) throws an error when given the calendar-based exogenous regressors (I Ching, astronomical calendar, 24 solar terms, 12 double-hours, divination and meteorology).

Unexpected behaviour / faulty code

Issue 1: fabletools::accuracy() gives different results in 「R鄀文艺坊」 and in the terminal 「牡蛎」, as follows:

> 外因周期自回归系列 <- bind_rows(list(
+   外因周期自回归甲 = as_tibble(accuracy(外因周期自回归甲)), 
+   外因周期自回归乙 = as_tibble(accuracy(外因周期自回归乙)), 
+   外因周期自回归丙 = as_tibble(accuracy(外因周期自回归丙)),
+   外因周期自回归丁 = as_tibble(accuracy(外因周期自回归丁)), 
+   外因周期自回归戊 = as_tibble(accuracy(外因周期自回归戊)), 
+   外因周期自回归己 = as_tibble(accuracy(外因周期自回归己)),
+   外因周期自回归庾 = as_tibble(accuracy(外因周期自回归庾))), .id = '计数模型')
Error in accuracy(外因周期自回归甲) : 缺少参数"predicted",也没有缺省值
> 外因周期自回归系列
错误: 找不到对象'外因周期自回归系列'

Issue 2: the calendar-based exogenous regressors (I Ching, astronomical calendar, 24 solar terms, 12 double-hours, divination and meteorology) produce the following error:

> 自回归均移模型最优值(tk_xts(总汇[, .(年月日时分, 预测价)]), 外因 = as.matrix(总汇[, .(廿四节气乙, 时辰乙)]))
Using column `年月日时分` for date_var.
Error in auto.arima(样本, D = 季节差分的次数, seasonal = 季节性,  : 
  No suitable ARIMA model found
此外: Warning message:
Non-numeric columns being dropped: 年月日时分

自回归均移模型最优值 <- function(样本, 季节差分的次数 = NULL, 季节性 = '勾',
                                  规律极限值 = 10, 自回归均移模型值 = FALSE, 外因 = NULL) {
  # Iterate over the `p,d,q` orders of the ARIMA model and pick the one with the
  # lowest (most negative) AIC, i.e. the optimal statistical model.
  # (The valid values of 季节性 below are reconstructed from the stop() message;
  #  the original literals were lost in extraction.)
  #
  # "Forecasting: Principles and Practice (3rd edn)", chapter 9.9 - seasonal ARIMA
  # https://otexts.com/fpp3cn/seasonal-arima-cn.html
  if (!季节性 %in% c('勾', '有', '是', '叉', '冇', '否'))
    stop('请选择季节性:"勾"或"有"或"是",或者"叉"或"冇"或"否"。')
  if (季节性 %in% c('勾', '有', '是')) 季节性 <- TRUE
  if (季节性 %in% c('叉', '冇', '否')) 季节性 <- FALSE
  # D, the number of seasonal differences, is normally between 0 and 2.
  半成品 <- auto.arima(样本, D = 季节差分的次数, seasonal = 季节性,
                       max.order = 规律极限值, xreg = 外因)
  if (自回归均移模型值 == FALSE) {
    成果 <- arimaorder(半成品)
  } else {
    # https://stats.stackexchange.com/questions/178577/how-to-read-p-d-and-q-of-auto-arima
    成果 <- 半成品$arma
    # https://stackoverflow.com/questions/23617662/extract-arima-specificaiton
    names(成果) <- c('p', 'q', 'P', 'Q', 's', 'd', 'D')
    成果 %<>% .[c(1, 6, 2, 3, 7, 4, 5)]
    # (p,d,q), (P,D,Q) and the seasonal period; e.g. `s` = 12 means twelve months.
  }
  return(成果)
}

Operating-system information

$ neofetch
        #####           englianhu@Scibrokes 
       #######          ------------------- 
       ##O#O##          OS: RedFlag Desktop 11.0 x86_64 
       #######          Host: 23-p080d 
     ###########        Kernel: 5.10.0-1-amd64 
    #############       Uptime: 2 days, 6 hours, 1 min 
   ###############      Packages: 7971 (dpkg) 
   ################     Shell: bash 5.0.3 
  #################     Resolution: 1920x1080 
#####################   DE: KDE 
#####################   WM: KWin 
  #################     Theme: RedFlag Dark [KDE], Breeze [GTK3] 
                        Icons: RedFlag-Themes-Chinrse-Red [KDE], oxygen [GTK3] 
                        Terminal: konsole 
                        CPU: Intel i5-4590T (4) @ 3.000GHz 
                        GPU: NVIDIA GeForce 710M 
                        GPU: Intel HD Graphics 
                        Memory: 14498MiB / 15901MiB 

#R>  devtools::session_info()$platform
 setting  value
 version  R version 4.3.3 (2024-02-29)
 os       RedFlag Desktop 11.0
 system   x86_64, linux-gnu
 ui       X11
 language zh_CN:en
 collate  zh_CN.UTF-8
 ctype    zh_CN.UTF-8
 tz       Asia/Shanghai
 date     2024-03-29
 pandoc   NA (via rmarkdown)
#R> Sys.info()
 sysname 
 "Linux" 
 release 
 "5.10.0-1-amd64" 
 version 
"#1 SMP Debian 5.10.40-1~rf11u1.2 (2022-09-22)" 
 nodename 
 "Scibrokes" 
 machine 
 "x86_64" 
 login 
 "englianhu" 
 user 
 "englianhu" 
 effective_user 
 "englianhu" 

Expected result

> 外因周期自回归系列 <- bind_rows(list(
+     外因周期自回归甲 = as_tibble(accuracy(外因周期自回归甲)), 
+     外因周期自回归乙 = as_tibble(accuracy(外因周期自回归乙)), 
+     外因周期自回归丙 = as_tibble(accuracy(外因周期自回归丙)),
+     外因周期自回归丁 = as_tibble(accuracy(外因周期自回归丁)), 
+     外因周期自回归戊 = as_tibble(accuracy(外因周期自回归戊)), 
+     外因周期自回归己 = as_tibble(accuracy(外因周期自回归己)),
+     外因周期自回归庾 = as_tibble(accuracy(外因周期自回归庾))), .id = '计数模型')
> 外因周期自回归系列
# A tibble: 7 × 8
  计数模型                ME  RMSE   MAE      MPE  MAPE    MASE  ACF1
  <chr>                <dbl> <dbl> <dbl>    <dbl> <dbl>   <dbl> <dbl>
1 外因周期自回归甲  2.44e-14 1.96  1.66  -0.0323  1.53  0.0152   1.00
2 外因周期自回归乙 -4.94e-14 0.801 0.651 -0.00539 0.600 0.00599  1.00
3 外因周期自回归丙 -6.89e-14 1.96  1.66  -0.0323  1.53  0.0152   1.00
4 外因周期自回归丁 -5.33e-14 0.801 0.651 -0.00539 0.600 0.00599  1.00
5 外因周期自回归戊  3.18e-13 1.96  1.66  -0.0323  1.53  0.0152   1.00
6 外因周期自回归己 -2.45e-14 0.799 0.651 -0.00537 0.600 0.00598  1.00
7 外因周期自回归庾  3.66e-14 0.799 0.651 -0.00537 0.600 0.00598  1.00

Proposal

Literature and examples

Discussion

Conclusion

Options

  • Option 1
  • Option 2
  • Option 3

Preferred option

Related issues

Using `auto.arima` on `ts`, `xts`, `matrix` and `zoo` objects

Data source: 猫城@englianhu/binary.com-interview-question-data/世博量化研究院/文艺数据库/fx/USDJPY/样本2.rds

1 GiB [世博量化研究院*]❯ 样本2 <- readRDS("~/文档/猫城/binary.com-interview-question-data/文艺数据库/fx/USDJPY/样本2.rds")
✖ 1 GiB [世博量化研究院*]❯ 测试数据 <- 样本2[10000:11200, c('年月日时分', '闭市价')]
✔ 1 GiB [世博量化研究院*]❯ 测试数据 %>% data.frame %>% head
           年月日时分                  闭市价
1 2015-01-13 22:40:00 117.9290000000000020464
2 2015-01-13 22:41:00 117.8984999999999985221
3 2015-01-13 22:42:00 117.9070000000000106866
4 2015-01-13 22:43:00 117.8985000000000127329
5 2015-01-13 22:44:00 117.8849999999999909051
6 2015-01-13 22:45:00 117.8584999999999922693
1 GiB [世博量化研究院*]❯ 
✔ 1 GiB [世博量化研究院*]❯ 测试数据 %>% data.table::as.matrix(rownames = TRUE) %>% auto.arima
Series: . 
ARIMA(0,1,0) 

sigma^2 = 0.0009413966542781437592:  log likelihood = 2485.590000000000145519
AIC=-4969.189999999999599822   AICc=-4969.180000000000291038   BIC=-4964.100000000000363798
1 GiB [世博量化研究院*]❯ 测试数据 %>% as.xts %>% auto.arima
Series: . 
ARIMA(0,1,0) 

sigma^2 = 0.0009413966542781437592:  log likelihood = 2485.590000000000145519
AIC=-4969.189999999999599822   AICc=-4969.180000000000291038   BIC=-4964.100000000000363798
1 GiB [世博量化研究院*]❯ 测试数据 %>% as.xts %>% ts %>% auto.arima
Series: . 
ARIMA(0,1,0) 

sigma^2 = 0.0009413966542781437592:  log likelihood = 2485.590000000000145519
AIC=-4969.189999999999599822   AICc=-4969.180000000000291038   BIC=-4964.100000000000363798
1 GiB [世博量化研究院*]❯ 测试数据 %>% as.xts %>% ts(frequency = 1200) %>% auto.arima
Series: . 
ARIMA(0,1,0) 

sigma^2 = 0.0009413966542781437592:  log likelihood = 2485.590000000000145519
AIC=-4969.189999999999599822   AICc=-4969.180000000000291038   BIC=-4964.100000000000363798
1 GiB [世博量化研究院*]❯ 测试数据 %>% as.xts %>% ts(frequency = 120) %>% auto.arima
Series: . 
ARIMA(0,1,0) 

sigma^2 = 0.0009413966542781437592:  log likelihood = 2485.590000000000145519
AIC=-4969.189999999999599822   AICc=-4969.180000000000291038   BIC=-4964.100000000000363798
1 GiB [世博量化研究院*]❯ matrix(测试数据$闭市价, dimnames = list(测试数据$年月日时分, '闭市价'), ncol = 1) %>% auto.arima
Series: . 
ARIMA(0,1,0) 

sigma^2 = 0.0009413966542781437592:  log likelihood = 2485.590000000000145519
AIC=-4969.189999999999599822   AICc=-4969.180000000000291038   BIC=-4964.100000000000363798
1 GiB [世博量化研究院*]❯ matrix(测试数据$闭市价, dimnames = list(测试数据$年月日时分, '闭市价'), ncol = 1) %>% as.ts %>% auto.arima
Series: . 
ARIMA(0,1,0) 

sigma^2 = 0.0009413966542781437592:  log likelihood = 2485.590000000000145519
AIC=-4969.189999999999599822   AICc=-4969.180000000000291038   BIC=-4964.100000000000363798
1 GiB [世博量化研究院*]❯ matrix(测试数据$闭市价, dimnames = list(测试数据$年月日时分, '闭市价'), ncol = 1) %>% as.zoo %>% auto.arima
Series: . 
ARIMA(0,1,0) 

sigma^2 = 0.0009413966542781437592:  log likelihood = 2485.590000000000145519
AIC=-4969.189999999999599822   AICc=-4969.180000000000291038   BIC=-4964.100000000000363798

Testing whether the different conversion functions make any difference: even when a frequency is set, the results are identical...

1.5 GiB [世博量化研究院*]❯ microbenchmark(
     'as.matrix()' = as.matrix(测试数据, rownames = TRUE) %>% auto.arima, 
     'as.matrix %>% ' = 测试数据 %>% as.matrix(rownames = TRUE) %>% auto.arima, 
     'as.xts' = 测试数据 %>% as.xts %>% auto.arima, 
     'as.xts %>% as.ts' = 测试数据 %>% as.xts %>% ts %>% auto.arima, 
     'as.ts(freq = 1200)' = 测试数据 %>% as.xts %>% ts(frequency = 1200) %>% auto.arima, 
     'as.ts(freq = 120)' = 测试数据 %>% as.xts %>% ts(frequency = 120) %>% auto.arima, 
     'matrix %>% as.ts' = matrix(测试数据$闭市价, dimnames = list(测试数据$年月日时分, '闭市价'), ncol = 1) %>% as.ts %>% auto.arima, 
     'matrix %>% as.zoo' = matrix(测试数据$闭市价, dimnames = list(测试数据$年月日时分, '闭市价'), ncol = 1) %>% as.zoo %>% auto.arima, 
     'matrix' = matrix(测试数据$闭市价, dimnames = list(测试数据$年月日时分, '闭市价'), ncol = 1) %>% auto.arima)
Unit: milliseconds
               expr                       min                        lq                      mean
        as.matrix()   43.51706399999999774764   44.45820700000000158525   51.01456184999999976526
     as.matrix %>%    43.60379600000000266391   44.43160650000000089221   52.06498675000000275759
             as.xts   42.19390899999999788861   43.44118149999999900501   52.29102436000000153626
   as.xts %>% as.ts   42.21445899999999795682   43.00931149999999547617   51.63839049999999986085
 as.ts(freq = 1200)   42.24664899999999789770   43.10529650000000145837   51.77428743000000110897
  as.ts(freq = 120) 2392.39838700000018434366 2411.68259299999999711872 2462.60519316999989314354
   matrix %>% as.ts   41.50966199999999872716   42.22200399999999831380   50.11591734999999658839
  matrix %>% as.zoo   46.18885300000000171394   46.91678300000000234604   58.44752450000000010277
             matrix   41.58900100000000321643   42.62158850000000143154   49.24968701000000237400
                    median                        uq                       max neval
   45.56872500000000059117   48.29254650000000026466  113.00312399999999968259   100
   45.77284600000000125419   52.01385399999999492593   98.40117499999999495230   100
   44.63580299999999567717   49.53256000000000369710  126.15233999999999525699   100
   43.99932599999999638385   49.67516450000000105547   99.18939199999999800639   100
   44.28173999999999921329   49.64366749999999939291  121.05843600000000037653   100
 2427.76964899999984481838 2455.14120800000000599539 3041.27480900000000474392   100
   43.36704900000000151294   47.44317900000000065575  150.78695899999999596730   100
   48.47124600000000071987   61.96651900000000523505  222.77181400000000621731   100
   43.92246000000000094587   48.42570549999999940383  111.59604899999999361171   100

References

Sharing: `as.Date` vs `lubridate::as_date`

I noticed that the default as.Date function does not convert the date part of a datetime correctly, as we can see from the 2015-01-19 timestamps below.

> tk_tbl(mbase)[14390:14410,]$index
 [1] "2015-01-16 23:50:00 JST" "2015-01-16 23:51:00 JST"
 [3] "2015-01-16 23:52:00 JST" "2015-01-16 23:53:00 JST"
 [5] "2015-01-16 23:54:00 JST" "2015-01-16 23:55:00 JST"
 [7] "2015-01-16 23:56:00 JST" "2015-01-16 23:57:00 JST"
 [9] "2015-01-16 23:58:00 JST" "2015-01-16 23:59:00 JST"
[11] "2015-01-17 00:00:00 JST" "2015-01-19 00:01:00 JST"
[13] "2015-01-19 00:02:00 JST" "2015-01-19 00:03:00 JST"
[15] "2015-01-19 00:04:00 JST" "2015-01-19 00:05:00 JST"
[17] "2015-01-19 00:06:00 JST" "2015-01-19 00:07:00 JST"
[19] "2015-01-19 00:08:00 JST" "2015-01-19 00:09:00 JST"
[21] "2015-01-19 00:10:00 JST"
> tk_tbl(mbase)[14390:14410,]$index %>% as.Date
 [1] "2015-01-16" "2015-01-16" "2015-01-16" "2015-01-16" "2015-01-16"
 [6] "2015-01-16" "2015-01-16" "2015-01-16" "2015-01-16" "2015-01-16"
[11] "2015-01-16" "2015-01-18" "2015-01-18" "2015-01-18" "2015-01-18"
[16] "2015-01-18" "2015-01-18" "2015-01-18" "2015-01-18" "2015-01-18"
[21] "2015-01-18"
> tk_tbl(mbase)[14390:14410,]$index %>% as_date
 [1] "2015-01-16" "2015-01-16" "2015-01-16" "2015-01-16" "2015-01-16"
 [6] "2015-01-16" "2015-01-16" "2015-01-16" "2015-01-16" "2015-01-16"
[11] "2015-01-17" "2015-01-19" "2015-01-19" "2015-01-19" "2015-01-19"
[16] "2015-01-19" "2015-01-19" "2015-01-19" "2015-01-19" "2015-01-19"
[21] "2015-01-19"
> tk_tbl(mbase)[14390:14410,]$index %>% as.Date %>% class
[1] "Date"
> tk_tbl(mbase)[14390:14410,]$index %>% as_date %>% class
[1] "Date"


Relevant references: Lubridate as_date and as_datetime differences in behavior, and Dates and Times in R Without Losing Your Sanity; this will be useful for intraday high-frequency trading. The difference is the timezone used during the conversion, as the sketch below shows.
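
A minimal sketch of the likely cause, consistent with the output above: base `as.Date.POSIXct()` converts via its `tz` argument (which defaults to UTC), whereas `lubridate::as_date()` uses the object's own timezone (the timestamp below is illustrative):

library(lubridate)

x <- as.POSIXct("2015-01-17 00:00:00", tz = "Asia/Tokyo")

as.Date(x)                      # may roll back to "2015-01-16": conversion goes through the tz argument (default UTC)
as.Date(x, tz = "Asia/Tokyo")   # "2015-01-17": pass the timezone explicitly to keep the local calendar date
as_date(x)                      # "2015-01-17": lubridate::as_date() uses the object's own timezone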

Support high-frequency-trading



Best RAM You Can Buy Today
Patriot Viper Steel DDR4-4400 (2x 8GB) ...
Patriot Viper RGB DDR4-3600 (2x 8GB) ...
Patriot Viper 4 DDR4-3400 (2x 8GB) ...
Corsair Vengeance RGB Pro DDR4-3200 (4x 8GB) ...
Patriot Viper Steel DDR4-3200 (2x 16GB) ...
Patriot Viper Steel DDR4-3600 C18 (2x 32GB) ...
G. ...
Corsair Vengeance LPX DDR4-2666 (2x 8GB)
Oct 20, 2020
Best RAM 2020: Fast, Cheap and RGB | Tom's Hardware


Nvidia GeForce RTX 3080. The best graphics card for PC gaming right now. ...
Nvidia GeForce RTX 2070 Super. The best 4K graphics card for reasonable money. ...
AMD Radeon RX 5700. The best 1440p graphics card… with a little work. ...
AMD Radeon RX 5600 XT. ...
Nvidia GeForce GTX 1660 Super. ...
Nvidia GeForce GTX 1650 Super.
Oct 21, 2020
The best graphics cards in 2020 | PC Gamer

I need to upgrade my equipment to enhance efficiency...

Originally posted by @englianhu in englianhu/report#9 (comment)

> I have tried the code that you mentioned, but I got this error:

install_github("james-thorson/FishStatsUtils", ref="test")
Using GitHub PAT from envvar GITHUB_PAT
Downloading GitHub repo james-thorson/FishStatsUtils@test
from URL https://api.github.com/repos/james-thorson/FishStatsUtils/zipball/test
Installation failed: Bad credentials (401)

r-lib/devtools#1566 (comment) solved my issue for https://github.com/englianhu/binary.com-interview-question

Originally posted by @englianhu in James-Thorson-NOAA/VAST#147 (comment)

Error in start_shell(master = master, spark_home = spark_home, spark_version = version, : Failed to find 'spark-submit2.cmd' under 'C:\Users\Owner\AppData\Local\spark\spark-3.0.0-bin-hadoop2.7', please verify - SPARK_HOME.

I tried to set up sparklyr for big data in https://github.com/englianhu/binary.com-interview-question/blob/master/binary-Q1Inter-HFT.Rmd but it failed.

> library('BBmisc')
> library('sparklyr')
> sc <- spark_connect(master = 'local')
- Error in start_shell(master = master, spark_home = spark_home, spark_version = version,  : 
-   Failed to find 'spark-submit2.cmd' under 'C:\Users\Owner\AppData\Local\spark\spark-3.0.0-bin-hadoop2.7', please verify - SPARK_HOME.
> spark_home_dir()
[1] "C:\\Users\\Owner\\AppData\\Local/spark/spark-3.0.0-bin-hadoop2.7"
> spark_installed_versions()
  spark hadoop                                                              dir
1 3.0.0    2.7 C:\\Users\\Owner\\AppData\\Local/spark/spark-3.0.0-bin-hadoop2.7
> spark_home_set()
Setting SPARK_HOME environment variable to C:\Users\Owner\AppData\Local/spark/spark-3.0.0-bin-hadoop2.7
> sc <- spark_connect(master = 'local')
- Error in start_shell(master = master, spark_home = spark_home, spark_version = version,  : 
-   Failed to find 'spark-submit2.cmd' under 'C:\Users\Owner\AppData\Local\spark\spark-3.0.0-bin-hadoop2.7', please verify - SPARK_HOME.

**北京大学赢家ξng黄氏江夏堂「大秦赋」: currently running 🚩🇨🇳🏹** the domestic RedFlag Linux; the `REmap` R package has been localised into Chinese, the R functions below are also being localised, and future coding will be done in Chinese.

🚩🇨🇳🏹红旗中科 RedFlag Linux (🍥Debian 11)

As a Chinese national, I changed the Tokyo setting to Beijing. Bilibili: 笑傲江户 --- 英雄梦

大秦赋 - 北京大学

Originally posted by @englianhu in scibrokes/r-world#1


Welcome to the 【百度绘图】(Baidu mapping, REmap) R package

大秦赋 - 北京大学

🚩🇨🇳🏹红旗中科 RedFlag Linux (🍥Debian 11)

RedFlag Linux comeback

<iframe src="//player.bilibili.com/player.html?aid=938367824&bvid=BV1pT4y1a7Zt&cid=581199416&page=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>

Video source (Bilibili): installing the RedFlag Linux system

Currently running 🚩🇨🇳🏹 the domestic RedFlag Linux; the REmap R package has been localised into Chinese, the R functions below are also being localised, and future coding will be done in Chinese.

Originally posted by @englianhu in https://github.com/englianhu/REmap/wiki/%E6%AC%A2%E8%BF%8E%E5%88%B0%E3%80%90%E7%99%BE%E5%BA%A6%E7%BB%98%E5%9B%BE%E3%80%91R%E7%BC%96%E7%A8%8B%E8%AF%AD%E8%A8%80%E7%A8%8B%E5%BA%8F%E5%8C%85

Error: forecast::Arima() #910

> 循环周期 <- 600
> 季回归 <- 培训数据$闭市价 %>% 
        matrix(dimnames = list(培训数据$年月日时分, '闭市价')) %>% 
        tk_ts(frequency = 循环周期)
      rownames(季回归) <- 培训数据$年月日时分
> 季回归
Time Series:
Start = c(1, 1) 
End = c(2, 600) 
Frequency = 600 
            闭市价
1451592060 120.208
1451592120 120.208
1451592180 120.208
1451592240 120.208
1451592300 120.208
1451592360 120.208
1451592420 120.208
1451592480 120.208
1451592540 120.208
1451592600 120.208
1451592660 120.208
1451592720 120.208
1451592780 120.208
1451592840 120.208
1451592900 120.208
1451592960 120.208
1451593020 120.208
1451593080 120.208
1451593140 120.208
1451593200 120.208
1451593260 120.208
1451593320 120.208
1451593380 120.208
1451593440 120.208
1451593500 120.208
1451593560 120.208
1451593620 120.208
1451593680 120.208
1451593740 120.208
1451593800 120.208
1451593860 120.208
1451593920 120.208
1451593980 120.208
1451594040 120.208
1451594100 120.208
1451594160 120.208
1451594220 120.208
1451594280 120.208
1451594340 120.208
1451594400 120.208
1451594460 120.208
1451594520 120.208
1451594580 120.208
1451594640 120.208
1451594700 120.208
1451594760 120.208
1451594820 120.208
1451594880 120.208
1451594940 120.208
1451595000 120.208
1451595060 120.208
1451595120 120.208
1451595180 120.208
1451595240 120.208
1451595300 120.208
1451595360 120.208
1451595420 120.208
1451595480 120.208
1451595540 120.208
1451595600 120.208
1451595660 120.208
1451595720 120.208
1451595780 120.208
1451595840 120.208
1451595900 120.208
1451595960 120.208
1451596020 120.208
1451596080 120.208
1451596140 120.208
1451596200 120.208
1451596260 120.208
1451596320 120.208
1451596380 120.208
1451596440 120.208
1451596500 120.208
1451596560 120.208
1451596620 120.208
1451596680 120.208
1451596740 120.208
1451596800 120.208
1451596860 120.208
1451596920 120.208
1451596980 120.208
1451597040 120.208
1451597100 120.208
1451597160 120.208
1451597220 120.208
1451597280 120.208
1451597340 120.208
1451597400 120.208
1451597460 120.208
1451597520 120.208
1451597580 120.208
1451597640 120.208
1451597700 120.208
1451597760 120.208
1451597820 120.208
1451597880 120.208
1451597940 120.208
1451598000 120.208
 [ reached getOption("max.print") -- omitted 1100 rows ]

I tried to build a seasonal dataset in ts format and fit it with forecast::Arima(), but every attempt errors out; could somebody take a look? (Note that the printed head of the training window is constant at 120.208; if the whole window is nearly constant, these singularity errors would be expected.)

> Arima(季回归)
Error in solve.default(res$hessian * n.used, A) : 
  Lapack routine dgesv: system is exactly singular: U[1,1] = 0
> Arima(季回归, order = c(3, 1, 0))
Error in optim(init[mask], armaCSS, method = optim.method, hessian = FALSE,  : 
  initial value in 'vmmin' is not finite
> Arima(季回归, order = c(2, 1, 0))
Error in optim(init[mask], armaCSS, method = optim.method, hessian = FALSE,  : 
  initial value in 'vmmin' is not finite
> Arima(季回归, order = c(2, 0, 0))
Error in stats::arima(x = x, order = order, seasonal = seasonal, include.mean = include.mean,  : 
  non-stationary AR part from CSS
> Arima(季回归, order = c(1, 0, 0))
Error in stats::arima(x = x, order = order, seasonal = seasonal, include.mean = include.mean,  : 
  non-stationary AR part from CSS
> Arima(季回归, order = c(0, 0, 0))
Error in solve.default(res$hessian * n.used, A) : 
  Lapack routine dgesv: system is exactly singular: U[1,1] = 0
> Arima(季回归, order = c(1, 1, 1))
Error in optim(init[mask], armaCSS, method = optim.method, hessian = FALSE,  : 
  initial value in 'vmmin' is not finite
> Arima(季回归, order = c(0, 1, 1))
Error in optim(init[mask], armaCSS, method = optim.method, hessian = FALSE,  : 
  initial value in 'vmmin' is not finite
> Arima(季回归, order = c(0, 0, 1))
Error in solve.default(res$hessian * n.used, A) : 
  system is computationally singular: reciprocal condition number = 4.19528e-25

Citation : robjhyndman/forecast#910 (comment)
