
finrl-meta's Introduction

FinRL-Meta: A Metaverse of Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning


FinRL-Meta builds a universe of market environments for data-driven financial reinforcement learning. We aim to help users in our community easily build their own environments.

Check out our latest competition: ACM ICAIF 2023 FinRL Contest

Check out the FinRL Project

  1. FinRL-Meta provides hundreds of market environments.
  2. FinRL-Meta reproduces existing papers as benchmarks.
  3. FinRL-Meta provides dozens of demos/tutorials, organized in a curriculum.

Outline

News and Tutorials

Our Goals

  • To provide benchmarks and facilitate fair comparisons: we allow researchers to evaluate different strategies on the same dataset. This also helps researchers better understand the “black-box” nature of (deep neural network-based) DRL algorithms.
  • To reduce the simulation-reality gap: existing works use backtesting on historical data, while the actual performance may be quite different.
  • To reduce the data pre-processing burden, so that quants can focus on developing and optimizing strategies.

Design Principles

  • Plug-and-Play (PnP): Modularity; Handle different markets (say T0 vs. T+1)
  • Completeness and universality: Multiple markets; various data sources (APIs, Excel, etc.); user-friendly variables.
  • Layer structure and extensibility: Three layers: data layer, environment layer, and agent layer. Layers interact through end-to-end interfaces, achieving high extensibility.
  • “Training-Testing-Trading” pipeline: simulation for training and connecting real-time APIs for testing/trading, closing the sim-real gap.
  • Efficient data sampling: accelerating the data sampling process is key to DRL training! From the ElegantRL project, we know that multiprocessing is powerful in reducing training time (scheduling between CPU and GPU).
  • Transparency: a virtual env that is invisible to the upper layer
  • Flexibility and extensibility: Inheritance might be helpful here

Overview

We utilize a layered structure in FinRL-Meta, as shown in the overview figure above, that consists of three layers: data layer, environment layer, and agent layer. Each layer executes its functions and is independent of the others. Meanwhile, layers interact through end-to-end interfaces to implement the complete workflow of algorithmic trading. Moreover, the layer structure allows easy extension of user-defined functions.

DataOps

DataOps applies the ideas of lean development and DevOps to the data analytics field. DataOps practices have been developed in companies and organizations to improve the quality and efficiency of data analytics. These practices consolidate various data sources and unify and automate the data analytics pipeline, including data access, cleaning, analysis, and visualization.

However, the DataOps methodology has not been applied to financial reinforcement learning research. Most researchers access data, clean data, and extract technical indicators (features) in a case-by-case manner, which involves heavy manual work and may not guarantee data quality.

To deal with (unstructured) financial big data, we follow the DataOps paradigm and implement an automatic pipeline, shown in the following figure: task planning, data processing, training-testing-trading, and monitoring agents’ performance. Through this pipeline, we continuously produce DRL benchmarks on dynamic market datasets.
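For illustration, a minimal sketch of this pipeline using the DataProcessor from the data layer; the exact constructor and method signatures vary across FinRL-Meta versions, so treat the arguments below as indicative rather than authoritative:

from finrl_meta.data_processor import DataProcessor

# Hypothetical configuration; any supported data source can be used here.
ticker_list = ["AAPL", "MSFT"]
tech_indicator_list = ["macd", "rsi_30", "close_30_sma"]

dp = DataProcessor(
    data_source="yahoofinance",
    start_date="2020-01-01",
    end_date="2021-01-01",
    time_interval="1D",
)

dp.download_data(ticker_list)                    # data accessing
dp.clean_data()                                  # data cleaning
dp.add_technical_indicator(tech_indicator_list)  # feature engineering

# The processed data then feeds the environment layer for training-testing-trading.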

Supported Data Sources:

| Data Source | Type | Range and Frequency | Request Limits | Raw Data | Preprocessed Data |
| --- | --- | --- | --- | --- | --- |
| Akshare | CN Securities, A share | 2017-now, 1 day | NA | OHLCV | Prices & Indicators |
| Alpaca | US Stocks, ETFs | 2015-now, 1 min | Account-specific | OHLCV | Prices & Indicators |
| Baostock | CN Securities | 1990-12-19-now, 5 min | Account-specific | OHLCV | Prices & Indicators |
| Binance | Cryptocurrency | API-specific, 1 s, 1 min | API-specific | Tick-level daily aggregated trades, OHLCV | Prices & Indicators |
| CCXT | Cryptocurrency | API-specific, 1 min | API-specific | OHLCV | Prices & Indicators |
| IEXCloud | NMS US securities | 1970-now, 1 day | 100 per second per IP | OHLCV | Prices & Indicators |
| JoinQuant | CN Securities | 2005-now, 1 min | 3 requests each time | OHLCV | Prices & Indicators |
| QuantConnect | US Securities | 1998-now, 1 s | NA | OHLCV | Prices & Indicators |
| RiceQuant | CN Securities | 2005-now, 1 ms | Account-specific | OHLCV | Prices & Indicators |
| Tushare | CN Securities, A share | -now, 1 min | Account-specific | OHLCV | Prices & Indicators |
| WRDS | US Securities | 2003-now, 1 ms | 5 requests each time | Intraday Trades | Prices & Indicators |
| YahooFinance | US Securities | Frequency-specific, 1 min | 2,000/hour | OHLCV | Prices & Indicators |

OHLCV: open, high, low, and close prices; volume

adjusted_close: adjusted close price

Technical indicators users can add: 'macd', 'boll_ub', 'boll_lb', 'rsi_30', 'dx_30', 'close_30_sma', 'close_60_sma'. Users can also add their own features.
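For example, a minimal sketch of computing these indicators with the stockstats library (whose column naming the list above follows) and appending a user-defined feature, assuming an OHLCV dataframe df:

from stockstats import StockDataFrame

# df is assumed to have lowercase columns: open, high, low, close, volume (plus date/tic);
# for multiple tickers, apply this per `tic` group.
stock = StockDataFrame.retype(df.copy())
for name in ["macd", "boll_ub", "boll_lb", "rsi_30", "dx_30", "close_30_sma", "close_60_sma"]:
    df[name] = stock[name]

# a self-defined feature can be appended in the same way
df["overnight_gap"] = df["close"] / df["close"].shift(1) - 1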

Plug-and-Play (PnP)

In the development pipeline, we separate market environments from the data layer and the agent layer. A DRL agent can be directly plugged into our environments. Different agents/algorithms can be compared by running on the same benchmark environment for fair evaluations.

The following DRL libraries are supported:

  • ElegantRL: Lightweight, efficient and stable DRL implementation using PyTorch.
  • Stable-Baselines3: Improved DRL algorithms based on OpenAI Baselines.
  • RLlib: An open-source DRL library that offers high scalability and unified APIs.

A demonstration notebook for plug-and-play with ElegantRL, Stable Baselines3 and RLlib: Plug and Play with DRL Agents
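For reference, a minimal plug-and-play sketch with Stable-Baselines3 (PPO); StockTradingEnv and env_config stand in for whichever FinRL-Meta environment and configuration are built from the data layer:

from stable_baselines3 import PPO

env = StockTradingEnv(config=env_config)   # any gym-style FinRL-Meta environment
model = PPO("MlpPolicy", env, verbose=1)   # swap in A2C/DDPG/SAC/TD3 the same way
model.learn(total_timesteps=50_000)

# The same env instance can be handed to ElegantRL or RLlib agents for a fair comparison.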

"Training-Testing-Trading" Pipeline

We employ a training-testing-trading pipeline. First, a DRL agent is trained on a training dataset and fine-tuned (adjusting hyperparameters) on a testing dataset. Then, the agent is backtested (on a historical dataset) or deployed in a paper/live trading market.

This pipeline addresses the information leakage problem by separating the training/testing and trading periods.

Such a unified pipeline also allows fair comparisons among different algorithms.
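As a sketch (assuming a processed dataframe df with date and tic columns, not a fixed API), the pipeline boils down to three non-overlapping date ranges:

import pandas as pd

TRAIN_START, TRAIN_END = "2014-01-01", "2019-12-31"   # training
TEST_START,  TEST_END  = "2020-01-01", "2020-12-31"   # hyperparameter tuning
TRADE_START, TRADE_END = "2021-01-01", "2021-06-30"   # backtesting or paper/live trading

def split(df: pd.DataFrame, start: str, end: str) -> pd.DataFrame:
    # keep only the rows whose date falls inside [start, end]
    mask = (df["date"] >= start) & (df["date"] <= end)
    return df.loc[mask].sort_values(["date", "tic"]).reset_index(drop=True)

train_df = split(df, TRAIN_START, TRAIN_END)
test_df  = split(df, TEST_START, TEST_END)
trade_df = split(df, TRADE_START, TRADE_END)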

Our Vision

For future work, we plan to build a multi-agent-based market simulator that consists of over ten thousand agents, namely, a FinRL-Metaverse. First, FinRL-Metaverse aims to build a universe of market environments, like the XLand environment (source) and planet-scale climate forecast (source) by DeepMind. To improve the performance for large-scale markets, we will employ GPU-based massive parallel simulation, just as in Isaac Gym (source). Moreover, it will be interesting to explore the deep evolutionary RL framework (source) to simulate the markets. Our final goal is to provide insights into complex market phenomena and offer guidance for financial regulations through FinRL-Meta.

Citing FinRL-Meta

Dynamic Datasets and Market Environments for Financial Reinforcement Learning

@article{dynamic_datasets,
    author = {Liu, Xiao-Yang and Xia, Ziyi and Yang, Hongyang and Gao, Jiechao and Zha, Daochen and Zhu, Ming and Wang, Christina Dan and Wang, Zhaoran and Guo, Jian},
    title = {Dynamic Datasets and Market Environments for Financial Reinforcement Learning},
    journal = {Machine Learning - Nature},
    year = {2024}
}

FinRL-Meta: Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning

@article{finrl_meta_2022,
    author = {Liu, Xiao-Yang and Xia, Ziyi and Rui, Jingyang and Gao, Jiechao and Yang, Hongyang and Zhu, Ming and Wang, Christina Dan and Wang, Zhaoran and Guo, Jian},
    title = {{FinRL-Meta}: Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning},
    journal = {NeurIPS},
    year = {2022}
}

FinRL-Meta: Data-Driven Deep Reinforcement Learning in Quantitative Finance

@article{finrl_meta_2021,
    author = {Liu, Xiao-Yang and Rui, Jingyang and Gao, Jiechao and Yang, Liuqing and Yang, Hongyang and Wang, Zhaoran and Wang, Christina Dan and Guo, Jian},
    title   = {{FinRL-Meta}: Data-Driven Deep Reinforcement Learning in Quantitative Finance},
    journal = {Data-Centric AI Workshop, NeurIPS},
    year    = {2021}
}

Collaborators

Disclaimer: Nothing herein is financial advice, and NOT a recommendation to trade real money. Please use common sense and always first consult a professional before trading or investing.

finrl-meta's People

Contributors

alvarocperez, amexn-me, athe-kunal, bassemfg, bruceyanghy, burntt, c4i0kun, ckive, cryptocoinserver, csbobby, devnodereact, dogfood1, eltociear, everssun, eyast, geekpineapple, hexiao5886, kifile, krishdotn1, ma7555, mhdmyz, oliverwang15, pre-commit-ci[bot], rayrui312, tracycuiyating, yangletliu, yonv1943, yuchen-song, zhumingpassional, ziyixia


finrl-meta's Issues

[Suggestion] Synthetic series for first "games" and / or testing

A great repo and paper: https://github.com/golsun/deep-RL-trading

This could be useful for FinRL, maybe as a helper / environment function. Training first on rather simple, idealized synthetic prices before feeding real data might help the agent learn the "basics". It is also great for testing.

  • Sine wave
  • Trend curves
  • Random walk
  • Different types of autocorrelation
  • Adding different degrees of noise/trend
  • Reoccurring patterns
    (Source: https://youtu.be/c0gpgCyjTM8?t=1372)

There are existing libraries / examples available like
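For illustration, a small sketch (independent of any particular library) of generating a few such idealized series:

import numpy as np

rng = np.random.default_rng(0)
n = 1_000
t = np.arange(n)

sine_wave   = 100 + 10 * np.sin(2 * np.pi * t / 50)    # recurring pattern
trend_curve = 100 + 0.05 * t                            # deterministic trend
random_walk = 100 + np.cumsum(rng.normal(0, 1, n))      # pure noise process
noisy_sine  = sine_wave + rng.normal(0, 2, n)           # pattern plus noise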

Error in multiple crypto env: AttributeError: ‘CryptoEnv’ object has no attribute ‘observation_space’

Hello,

I am having an issue with the multiple crypto env. Anybody's help would be appreciated. Thanks!

model = agent.get_model(model_name, model_kwargs=agent_params)
  File "/Users/smail/Desktop/TraderAI/TraderAI/10-finrl/FinRL-Meta/agents/stablebaselines3_models.py", line 88, in get_model
    model = MODELS[model_name](
  File "/Users/smail/Desktop/TraderAI/TraderAI/10-finrl/FinRL-Meta/env/lib/python3.8/site-packages/stable_baselines3/ddpg/ddpg.py", line 81, in __init__
    super(DDPG, self).__init__(
  File "/Users/smail/Desktop/TraderAI/TraderAI/10-finrl/FinRL-Meta/env/lib/python3.8/site-packages/stable_baselines3/td3/td3.py", line 91, in __init__
    super(TD3, self).__init__(
  File "/Users/smail/Desktop/TraderAI/TraderAI/10-finrl/FinRL-Meta/env/lib/python3.8/site-packages/stable_baselines3/common/off_policy_algorithm.py", line 107, in __init__
    super(OffPolicyAlgorithm, self).__init__(
  File "/Users/smail/Desktop/TraderAI/TraderAI/10-finrl/FinRL-Meta/env/lib/python3.8/site-packages/stable_baselines3/common/base_class.py", line 163, in __init__
    env = self._wrap_env(env, self.verbose, monitor_wrapper)
  File "/Users/smail/Desktop/TraderAI/TraderAI/10-finrl/FinRL-Meta/env/lib/python3.8/site-packages/stable_baselines3/common/base_class.py", line 207, in _wrap_env
    env = DummyVecEnv([lambda: env])
  File "/Users/smail/Desktop/TraderAI/TraderAI/10-finrl/FinRL-Meta/env/lib/python3.8/site-packages/stable_baselines3/common/vec_env/dummy_vec_env.py", line 27, in __init__
    VecEnv.__init__(self, len(env_fns), env.observation_space, env.action_space)
  File "/Users/smail/Desktop/TraderAI/TraderAI/10-finrl/FinRL-Meta/env/lib/python3.8/site-packages/gym/core.py", line 238, in __getattr__
    return getattr(self.env, name)
AttributeError: 'CryptoEnv' object has no attribute 'observation_space'

Agents not working

DDPG, TD3, SAC, A2C, and PPO don't support the crypto environment.
DQN, DuelingDQN, DoubleDQN, and D3QN work in the crypto environment, but the rest don't. Is there a way the crypto environment can support DDPG, TD3, SAC, A2C, and PPO?

[Suggestion] Feature helper / ideas

Candle information (might help the model learning candlestick patterns):

  • hl2 Price
  • hlc3 Price
  • ohlc4 Price
import numpy as np

def upper_shadow(df):
    # wick above the candle body
    return df['High'] - np.maximum(df['Close'], df['Open'])

def lower_shadow(df):
    # wick below the candle body
    return np.minimum(df['Close'], df['Open']) - df['Low']

def real_body(df):
    # signed candle body (maybe np.abs() for the unsigned size)
    return df['Close'] - df['Open']

def full_length(df):
    # full high-to-low range (maybe np.abs())
    return df['High'] - df['Low']

Seasonality information (See this great video, where this was recommended - https://www.youtube.com/watch?v=c0gpgCyjTM8):
For example...

  • day of week
  • day of month
  • month
  • year
  • week of year
  • hour

Volatility Features:

  • Realized
  • Parkinson
  • Garman-Klass
  • Roger-Satchell
  • Garman-Klass-Yang-Zhang
  • Yang-Zhang

Source: https://www.kaggle.com/yamqwe/crypto-prediction-volatility-features#Feature-Engineering-%F0%9F%94%AC - this Kaggle competition is a good source for more ideas.
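As an illustration, a minimal sketch of one of the estimators above (Parkinson volatility), assuming an OHLC dataframe df with 'High' and 'Low' columns as in the candle snippets earlier:

import numpy as np
import pandas as pd

def parkinson_volatility(df: pd.DataFrame, window: int = 30) -> pd.Series:
    # Parkinson estimator: sigma^2 ~ mean(ln(High/Low)^2) / (4 ln 2)
    hl = np.log(df["High"] / df["Low"]) ** 2
    return np.sqrt(hl.rolling(window).mean() / (4.0 * np.log(2.0)))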

TSfresh
https://tsfresh.readthedocs.io/en/latest/text/list_of_features.html Great tool to easily add (and filter) a lot more valuable features.

[Discussion] How to design the structure of tutorials with two versions (notebook and python)

We will provide tutorials in two versions, notebook and Python. The Python version is more efficient for debugging locally step by step. The notebook (.ipynb) can be obtained by directly copying from the corresponding .py file, except for the install commands at the top.

There may be several structures:

  1. Like the current architecture in FinRL-Meta, tutorials_notebook and tutorials_python
  2. tutorials -> 1-Introduction_notebook
    1-Introduction_python
  3. tutorials -> 1-Introduction -> FinRL_StockTrading_NeurIPS_2018 -> FinRL_StockTrading_NeurIPS_2018.py, FinRL_StockTrading_NeurIPS_2018.ipynb

Would you like to provide any ideas or suggestions for the file structure? The reasons for your suggestions are also welcome. Thanks very much.

NameError: name 'MACD' is not defined

Demo_MultiCrypto_Trading.ipynb

Binance successfully connected
Adding self-defined technical indicators is NOT supported yet.
Use default: MACD, RSI, CCI, DX.

---------------------------------------------------------------------------

NameError                                 Traceback (most recent call last)

<ipython-input-15-29348b501caa> in <module>()
     11       erl_params=ERL_PARAMS,
     12       break_step=5e4,
---> 13       if_vix=False
     14       )

3 frames

/FinRL-Meta/finrl_meta/data_processors/processor_binance.py in add_technical_indicator(self, df, tech_indicator_list)
     48         for i in df.tic.unique():
     49             tic_df = df[df.tic==i]
---> 50             tic_df['macd'], tic_df['macd_signal'], tic_df['macd_hist'] = MACD(tic_df['close'], fastperiod=12, 
     51                                                                                 slowperiod=26, signalperiod=9)
     52             tic_df['rsi'] = RSI(tic_df['close'], timeperiod=14)

NameError: name 'MACD' is not defined

Caused by the commented line here:

# from talib.abstract import CCI, DX, MACD, RSI

AttributeError: module 'elegantrl.agents.AgentTD3' has no attribute 'if_off_policy' (used Tushare)

Traceback (most recent call last):
File "E://FinRL-Meta-master/main.py", line 130, in
main()
File "E://FinRL-Meta-master/main.py", line 75, in main
token='d39d700674f33370058aeeaa3b074edda4f683a950b878cc660d4c3c'
File "E:\FinRL-Meta-master\train.py", line 34, in train
model = agent.get_model(model_name, model_kwargs=erl_params)
File "E:\FinRL-Meta-master\agents\elegantrl_models.py", line 60, in get_model
model = Arguments(agent, env)
File "E:\Anconda\envs\FinRL\lib\site-packages\elegantrl\train\config.py", line 127, in init
self.if_off_policy = agent.if_off_policy # agent is on-policy or off-policy
AttributeError: module 'elegantrl.agents.AgentTD3' has no attribute 'if_off_policy'

Error in Demo_FinRL_Meta_Integrate_Trends_data_to_DOW_Jones.ipynb

Hello,

First issue

if I run the jupyter notebook Demo_FinRL_Meta_Integrate_Trends_data_to_DOW_Jones.ipynb on my local computer, I receive warnings in 2 cells. In the cell
# !gdown --id "1sp11dtAJGGqC-3UdSn774ZD1zWCsqbn4" !gdown --id "1m63ncE-BYlS77u5ejYTte9Nmh35DWhzp"

I receive the warning that the command "gdown" is either misspelled or could not be found. And for the cell

!unzip "/content/Pytrends.zip"

I get the warning

unzip: cannot find or open /content/Pytrends.zip, /content/Pytrends.zip.zip or /content/Pytrends.zip.ZIP.

I assume it's due to these warnings that later on, in the cell

user_df = get_user_df() len(user_df)

I get the error FileNotFoundError: [WinError 3] Das System kann den angegebenen Pfad nicht finden: 'Pytrends_Data'

which means that "Pytrends_data" could not be found by the system.

Second Issue

I tried running the code on Google Colab instead, however here I received a different error. In the cell

train_env_instance = get_train_env(TRAIN_START_DATE, TRAIN_END_DATE, ticker_list, data_source, time_interval,model_name, env,info_col) val_env_instance = get_test_env(VAL_START_DATE, VAL_END_DATE, ticker_list, data_source, time_interval, info_col, env, model_name)

I got the error

in download_data(self, ticker_list, start_date, end_date, time_interval)
     33         end_date = end_date,
     34         time_interval = time_interval)
---> 35         self.dataframe = self.processor.dataframe
     36
     37     def clean_data(self):

AttributeError: 'YahooFinanceProcessor' object has no attribute 'dataframe'

I would be very grateful for help for either the errors on my local computer or the error in the google colab. Thank you in advance for any help!

unexpected keyword argument 'if_on_policy'

I have installed ElegantRL on windows 10 and run the test.py getting below error.

args = Arguments(if_on_policy=True)
TypeError: __init__() got an unexpected keyword argument 'if_on_policy' 

Python Installation Error: Multiple top-level packages discovered in a flat-layout

I met a problem when I tried to install FinRL-Meta by running "python setup.py install".

error: Multiple top-level packages discovered in a flat-layout: ['figs', 'agents', 'results', 'datasets', 'finrl_meta', 'trained_models', 'tensorboard_log', 'tutorials_python', 'tutorials_notebook'].

To avoid accidental inclusion of unwanted files or directories,
setuptools will not proceed with this build.

If you are trying to create a single distribution with multiple packages
on purpose, you should not rely on automatic discovery.
Instead, consider the following options:

  1. set up custom discovery (find directive with include or exclude)
  2. use a src-layout
  3. explicitly set py_modules or packages with a list of names

To find more information, look for "package discovery" on setuptools docs.

I temporarily solved it by moving the .py files to the root directory of the project.

BTW, FinRL-Meta requires the "Ta-Lib" and "Tushare" packages, but those two are not mentioned in requirements.txt.

Problems when using joinquant data?

In Demo_Plug_and_Play_with_DRL_Libraries.ipynb

①No preprocessor deals with the inconsistent data: in config.SSE_50_ticker the tickers look like '000001.SZ', but we get '000001.XSHE' from the JoinQuant source. Thus, a '找不到标的' ("ticker not found") error occurs.

②When training with JoinQuant data, the error 'AttributeError: 'NoneType' object has no attribute 'copy'' arose.
I assume it happened because, after the function 'clean_data', the whole dataframe was cleaned to be empty.
So I looked into the code and found that there was nothing about clean_data... So what happened?

③I strongly suggest a more complete doc about how to use FinRL-Meta, thanks.


The model does not sell at all

Hello,
I tested the code Demo_MultipleCrypto_Trading.ipynb for trading cryptocurrency. It seems that the model only buys without selling at all - the action is always positive (corresponding to buying). The initial balance is 100000.
Is this something buggy and can you please help to fix it?

Error with TRAIN and TEST interval

In "Demo_MultiCrypto_Trading" notebook I try to change the interval of data eg:
TRAIN_START_DATE = '2020-08-01' -- original START DATE=2021-09-01
TRAIN_END_DATE = '2021-09-20'

TEST_START_DATE = '2021-09-21'
TEST_END_DATE = '2021-09-30'

with a different timeframe of "60m" instead of "5m". It gives me back the following error:
ValueError: If using all scalar values, you must pass an index

How can I solve it please?
Thx

Delete obsolete duplicates in FinRL vs FinRL-Meta

I cannot understand why there is FinRL and FinRL-Meta. Why you don't merge them? Duplicates:

https://github.com/AI4Finance-Foundation/FinRL/blob/master/tutorials/3-Practical/FinRL_MultiCrypto_Trading.ipynb

https://github.com/AI4Finance-Foundation/FinRL-Meta/blob/master/tutorials_notebook/3-Practical/FinRL_MultiCrypto_Trading.ipynb

https://github.com/AI4Finance-Foundation/FinRL-Meta/blob/master/tutorials_python/3-Practical/FinRL_MultiCrypto_Trading.py seems empty

Tutorial FinRL:
It seems it doesn't use FinRL. Then why is it in the FinRL repo, if it only uses FinRL-Meta?
DataProcessor in FinRL doesn't support Binance.

For what is it better to use FinRL, and for what FinRL-Meta?
Is FinRL obsolete (vs. FinRL-Meta), so that I can just ignore it and use FinRL-Meta only?
Why do you support 2 similar repos with a similar structure, where one is more updated than the other?

Based on the info in the README.md at https://github.com/AI4Finance-Foundation/FinRL I would think that I need FinRL, but it seems FinRL-Meta is the current and better version of FinRL, i.e., FinRL is obsolete (vs. FinRL-Meta).

CryptoEnv is defined in the tutorial, but it is also available via from finrl_meta.env_crypto_trading.env_multiple_crypto import CryptoEnv. Which one is more up to date and better to use?
https://github.com/AI4Finance-Foundation/FinRL/blob/master/finrl/finrl_meta/env_cryptocurrency_trading/env_multiple_crypto.py

https://github.com/AI4Finance-Foundation/FinRL-Meta/blob/master/finrl_meta/env_crypto_trading/env_multiple_crypto.py

Why not delete it from the tutorial and just import it from the actual repo?

How to add features/columns not by specifying parameter tech_indicator_list of StockTradingEnv in env_stocktrading_China_A_shares.py?

We pass the externally calculated state_space/action_space to the StockTradingEnv constructor, and we are also required to pass the parameter tech_indicator_list to compute the internal state. If we add self-calculated columns to the feature data (df), the instances initialized by StockTradingEnv will raise errors in subsequent model training because the dimensions of the data columns differ.
Is it possible to directly calculate the internal state variable from df in the StockTradingEnv constructor, or to obtain the dimension information from the *_space parameters and decouple the strong relationship with tech_indicator_list?

Thanks!

Training intraday models with stoploss/cashpenalty

Hi guys,

I love this awesome project but I'm struggling to get it to work with intraday stock training sets. I am hitting the following error with caching and can't figure it out, I assume it's because I'm using datetime and not date so it's screwing the caching up. Any pointers are much appreciated. Thanks!

My single stock processed dataframe is of the form:

| date | open | high | low | close | volume | average | barCount | tic | macd | rsi_30 | cci_30 | dx_30 | boll | boll_ub | boll_lb | turbulence | log_volume | change | daily_variance
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
2021-01-01 00:00:00 | 704.99 | 705.00 | 704.58 | 704.71 | 134 | 704.889 | 74 | TSLA | 7.394569 | 62.146219 | 70.637092 | 8.675102 | 701.3865 | 716.933746 | 685.839254 | 0.006246 | 11.455626 | -0.000397 | 0.000596
2021-01-01 01:00:00 | 704.65 | 704.70 | 704.05 | 704.50 | 465 | 704.365 | 139 | TSLA | 6.881793 | 61.999634 | 65.323743 | 7.306197 | 701.8760 | 717.152270 | 686.599730 | 0.005461 | 12.699526 | -0.000213 | 0.000923
2021-01-04 10:00:00 | 709.00 | 726.75 | 709.00 | 726.75 | 815 | 721.375 | 423 | TSLA | 8.176549 | 69.805741 | 159.774757 | 40.588349 | 703.5560 | 721.878295 | 685.233705 | 11.471878 | 13.291771 | 0.024424 | 0.024424
2021-01-04 11:00:00 | 726.85 | 729.77 | 725.00 | 725.12 | 249 | 726.649 | 166 | TSLA | 8.967750 | 68.735677 | 181.446638 | 43.555530 | 705.0770 | 725.259924 | 684.894076 | 0.082599 | 12.103790 | -0.002386 | 0.006578
2021-01-04 12:00:00 | 725.60 | 728.00 | 725.00 | 726.14 | 156 | 726.208 | 90 | TSLA | 9.566808 | 69.042873 | 167.348900 | 43.555530 | 706.6430 | 728.282687 | 685.003313 | 0.009877 | 11.637599 | 0.000744 | 0.004131

/home/finrl/neo_finrl/env_stock_trading/env_stocktrading_stoploss.py in __init__(self, df, buy_cost_pct, sell_cost_pct, date_col_name, hmax, discrete_actions, shares_increment, stoploss_penalty, profit_loss_ratio, turbulence_threshold, print_verbosity, initial_amount, daily_information_cols, cache_indicator_data, cash_penalty_proportion, random_start, patient, currency)
112 print("caching data")
113 self.cached_data = [
--> 114 self.get_date_vector(i) for i, _ in enumerate(self.dates)
115 ]
116 print("data cached!")

/home/finrl/neo_finrl/env_stock_trading/env_stocktrading_stoploss.py in <listcomp>(.0)
112 print("caching data")
113 self.cached_data = [
--> 114 self.get_date_vector(i) for i, _ in enumerate(self.dates)
115 ]
116 print("data cached!")

/home/finrl/neo_finrl/env_stock_trading/env_stocktrading_stoploss.py in get_date_vector(self, date, cols)
165 for a in self.assets:
166 subset = trunc_df[trunc_df[self.stock_col] == a]
--> 167 v += subset.loc[date, cols].tolist()
168 assert len(v) == len(self.assets) * len(cols)
169 return v

/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py in __getitem__(self, key)
871 # AttributeError for IntervalTree get_value
872 pass
--> 873 return self._getitem_tuple(key)
874 else:
875 # we by definition only have the 0th axis

/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py in _getitem_tuple(self, tup)
1042 def _getitem_tuple(self, tup: Tuple):
1043 try:
-> 1044 return self._getitem_lowerdim(tup)
1045 except IndexingError:
1046 pass

/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py in _getitem_lowerdim(self, tup)
808 return section
809 # This is an elided recursive call to iloc/loc
--> 810 return getattr(section, self.name)[new_key]
811
812 raise IndexingError("not applicable")

/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py in __getitem__(self, key)
877
878 maybe_callable = com.apply_if_callable(key, self.obj)
--> 879 return self._getitem_axis(maybe_callable, axis=axis)
880
881 def _is_scalar_access(self, key: Tuple):

/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py in _getitem_axis(self, key, axis)
1097 raise ValueError("Cannot index with multidimensional key")
1098
-> 1099 return self._getitem_iterable(key, axis=axis)
1100
1101 # nested tuple slicing

/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py in _getitem_iterable(self, key, axis)
1035
1036 # A collection of keys
-> 1037 keyarr, indexer = self._get_listlike_indexer(key, axis, raise_missing=False)
1038 return self.obj._reindex_with_indexers(
1039 {axis: [keyarr, indexer]}, copy=True, allow_dups=True

/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py in _get_listlike_indexer(self, key, axis, raise_missing)
1252 keyarr, indexer, new_indexer = ax._reindex_non_unique(keyarr)
1253
-> 1254 self._validate_read_indexer(keyarr, indexer, axis, raise_missing=raise_missing)
1255 return keyarr, indexer
1256

/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py in _validate_read_indexer(self, key, indexer, axis, raise_missing)
1314 with option_context("display.max_seq_items", 10, "display.width", 80):
1315 raise KeyError(
-> 1316 "Passing list-likes to .loc or [] with any missing labels "
1317 "is no longer supported. "
1318 f"The following labels were missing: {not_found}. "

KeyError: "Passing list-likes to .loc or [] with any missing labels is no longer supported. The following labels were missing: Index(['day'], dtype='object'). See https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#deprecate-loc-reindex-listlike"`

Not able to run Demo_Plug_and_Play_with_DRL_Libraries notebook at the second time

Not able to run the Demo_Plug_and_Play_with_DRL_Libraries notebook again on Colab.
When you run this notebook a second time, it gives you the error message,

RuntimeError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/finrl/drl_agents/elegantrl/models.py in DRL_prediction(model_name, cwd, net_dimension, environment)
101 agent.init(net_dim, state_dim, action_dim)
--> 102 agent.save_or_load_agent(cwd=cwd, if_save=False)
103 act = agent.act

5 frames
RuntimeError: Error(s) in loading state_dict for ActorPPO:
size mismatch for a_std_log: copying a param with shape torch.Size([1, 29]) from checkpoint, the shape in current model is torch.Size([1, 30]).
size mismatch for net.0.0.weight: copying a param with shape torch.Size([512, 293]) from checkpoint, the shape in current model is torch.Size([512, 303]).
size mismatch for net.3.weight: copying a param with shape torch.Size([29, 512]) from checkpoint, the shape in current model is torch.Size([30, 512]).
size mismatch for net.3.bias: copying a param with shape torch.Size([29]) from checkpoint, the shape in current model is torch.Size([30]).

During handling of the above exception, another exception occurred:

actions on plug and play DRL notebook

Hello, I'm a beginner in RL.
I wonder if we can see the actions with ElegantRL's DRL_prediction function
(returning actions in [-1,0 1], just like sb3's DRL_prediction function).
It would be helpful if someone shared their idea, thanks!

Besides, I wonder whether it is suitable to trade an index (like DJI) as a single stock with the env in the plug_and_play_DRL notebook?

I cannot understand your sharpe ratio in the ipynb.

Thanks for publishing a great framework.

As far as I know, the Sharpe Ratio is usually defined as the difference between the returns of the investment and the risk-free return, divided by the standard deviation of the investment returns.

In the notebook 'FinRL_Ensemble_StockTrading_ICAIF_2020.ipynb', Part 7: Backtest Our Strategy,
You defined it as,
sharpe=(252**0.5)*df_account_value.account_value.pct_change(1).mean()/df_account_value.account_value.pct_change(1).std()

Could you please explain why?

error: RPC failed; curl 56 GnuTLS recv error (-9): Error decoding the received TLS packet.

Hi, I have problem in installing FinRL-Meta
ubuntu 20.04
conda 4.8.2
python 3.7.6
pip 20.0.2

pip install git+https://github.com/AI4Finance-Foundation/FinRL-Meta.git
Collecting git+https://github.com/AI4Finance-Foundation/FinRL-Meta.git
Cloning https://github.com/AI4Finance-Foundation/FinRL-Meta.git to /tmp/pip-req-build-6bq0z6z7
Running command git clone -q https://github.com/AI4Finance-Foundation/FinRL-Meta.git /tmp/pip-req-build-6bq0z6z7
error: RPC failed; curl 56 GnuTLS recv error (-9): Error decoding the received TLS packet.
fatal: the remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
ERROR: Command errored out with exit status 128: git clone -q https://github.com/AI4Finance-Foundation/FinRL-Meta.git /tmp/pip-req-build-6bq0z6z7 Check the logs for full command output.

CCXTProcessor can't use BasicProcessor.add_technical_indicator or BasicProcessor.clean_data

CCXTProcessor uses a MultiIndex to organize each pair's data, but it seems that only CCXTProcessor does this while all other processors do not. BasicProcessor.add_technical_indicator and BasicProcessor.clean_data do not handle this data format.

Side note, it would be nice to be able to specify the exchange used by CCXTProcessor instead of forcing Binance

To reproduce:

from finrl.finrl_meta.data_processors.processor_ccxt import CCXTProcessor
pairs = ['BTC/USDT', 'ETH/USDT']
tech_indicator_list = [
    "macd",
    "boll_ub",
    "boll_lb",
    "rsi_30",
    "dx_30",
    "close_30_sma",
    "close_60_sma",
]
#Why do I have to specify data_source here? 
#Would be nice if we could specify exchange instead as I had to manually change mine in CCXTProcessor.
ccxtProc = CCXTProcessor('ccxt', start_date=start, end_date=end, time_interval=timeframe) 
ccxtProc.download_data(pairs)
ccxtProc.clean_data() #error here
ccxtProc.add_technical_indicator(tech_indicator_list, 0) #error here if above skipped, neither stockstats nor talib works

[Suggestion] Reward helper / env setting

It's really important to not overlook the reward part. Just using return is probably the reason most fail with RL. This might be far more essential than the agents: "The reward fed to the RL agent is completely governing its behavior, so a wise choice of the reward shaping function is critical for good performance. There are quite a number of rewards one can choose from or combine, from risk-based measures, to profitability or cumulative return, number of trades per interval, etc. The RL framework accepts any sort of rewards, the denser the better."

Great paper giving a nice overview over different reward functions:
https://arxiv.org/abs/2004.06985

Chapter 4 Reward Functions

  • PnL-based Rewards (Unrealized PnL, Unrealized PnL with Realized Fills, Asymmetrical Unrealized PnL with Realized Fills, Asymmetrical Unrealized PnL with Realized Fills and Ceiling, Realized PnL Change)
  • Goal-based Rewards (Trade Completion)
  • Risk-based Rewards (Differential Sharpe Ratio)

This is a great example (environment) using a parameter for reward selection:
https://github.com/sadighian/crypto-rl/blob/arctic-streaming-ticks-full/gym_trading/envs/base_environment.py
The code for the reward functions used:
https://github.com/sadighian/crypto-rl/blob/arctic-streaming-ticks-full/gym_trading/utils/reward.py
It would be a great improvement to the above example if one were able to easily combine multiple reward functions into one, too.
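A rough sketch (not taken from the linked repo) of how several reward terms could be blended through one weighted function, selectable via an environment parameter:

def combined_reward(unrealized_pnl, realized_pnl, sharpe_term, weights=(0.5, 0.3, 0.2)):
    # each term is assumed to be pre-computed by the environment at every step
    w_u, w_r, w_s = weights
    return w_u * unrealized_pnl + w_r * realized_pnl + w_s * sharpe_term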

More notes I made in my research about reward:

TypeError: 'module' object is not callable in Demo_MultiCrypto_Trading.ipynb

I guess this could be a result of underlying library updates. Any thoughts on this one?

Original code:

train(start_date=TRAIN_START_DATE, 
      end_date=TRAIN_END_DATE,
      ticker_list=TICKER_LIST, 
      data_source='binance',
      time_interval='5m', 
      technical_indicator_list=TECHNICAL_INDICATORS_LIST,
      drl_lib='elegantrl', 
      env=env, 
      model_name='ppo', 
      current_working_dir='./test_ppo',
      erl_params=ERL_PARAMS,
      break_step=5e4,
      if_vix=False
      )

Output before fail:

Not support for {self.data_source}
{self.data_source} successfully connected
Using cached file ./cache/BTCUSDT_ETHUSDT_ADAUSDT_BNBUSDT_XRPUSDT_SOLUSDT_DOTUSDT_DOGEUSDT_AVAXUSDT_UNIUSDT_binance_2021-09-01_2021-09-20_5m.pickle
tech_indicator_list:  ['macd', 'rsi', 'cci', 'dx']
indicator:  macd
indicator:  rsi
indicator:  cci
indicator:  dx
Succesfully add technical indicators
Successfully transformed into array
| Arguments Remove cwd: ./test_ppo

Errors:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
[<ipython-input-40-0e8fc459cfcb>](https://localhost:8080/#) in <module>()
     11       erl_params=ERL_PARAMS,
     12       break_step=5e4,
---> 13       if_vix=False
     14       )

3 frames
[<ipython-input-34-ea6d6205e665>](https://localhost:8080/#) in train(start_date, end_date, ticker_list, data_source, time_interval, technical_indicator_list, drl_lib, env, model_name, if_vix, **kwargs)
     34         trained_model = agent.train_model(model=model, 
     35                                           cwd=current_working_dir,
---> 36                                           total_timesteps=break_step)
     37 
     38     elif drl_lib == 'rllib':

[/FinRL-Meta/drl_agents/elegantrl_models.py](https://localhost:8080/#) in train_model(self, model, cwd, total_timesteps)
     80         model.cwd = cwd
     81         model.break_step = total_timesteps
---> 82         train_and_evaluate(args=model)
     83 
     84     @staticmethod

[/FinRL-Meta/ElegantRL/elegantrl/train/run.py](https://localhost:8080/#) in train_and_evaluate(args)
     20     env = build_env(args.env, args.env_func, args.env_args)
     21 
---> 22     agent = init_agent(args, gpu_id, env)
     23     buffer = init_buffer(args, gpu_id)
     24     evaluator = init_evaluator(args, gpu_id)

[/FinRL-Meta/ElegantRL/elegantrl/train/run.py](https://localhost:8080/#) in init_agent(args, gpu_id, env)
     62 
     63 def init_agent(args, gpu_id, env=None):
---> 64     agent = args.agent(args.net_dim, args.state_dim, args.action_dim, gpu_id=gpu_id, args=args)
     65     agent.save_or_load_agent(args.cwd, if_save=False)
     66 

TypeError: 'module' object is not callable

Thanks a lot!

AttributeError: 'NoneType' object has no attribute 'config'

Hi, train RLlib come up with following error

~/anaconda3/lib/python3.7/site-packages/ray/worker.py in get(object_refs, timeout)
1625 raise value.as_instanceof_cause()
1626 else:
-> 1627 raise value
1628
1629 if is_individual_id:

RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.init() (pid=43634, ip=192.168.1.81)
AttributeError: 'NoneType' object has no attribute 'config'

During handling of the above exception, another exception occurred:

ray::RolloutWorker.init() (pid=43634, ip=192.168.1.81)
File "/home/reza/anaconda3/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 565, in init
devices = get_tf_gpu_devices()
File "/home/reza/anaconda3/lib/python3.7/site-packages/ray/rllib/utils/tf_ops.py", line 54, in get_gpu_devices
devices = tf.config.experimental.list_physical_devices()
AttributeError: 'NoneType' object has no attribute 'config'

It seems that this is due to TensorFlow / TensorBoard...
Any idea how we can sort it out? Regards

[Suggestion] Include timestamp / DatetimeIndex for plotting

After some more exploring, one thing I realized is that plotting (with pyfolio / quantstats) to evaluate the results is harder than it needs to be. Currently, no timestamps are easily available, which are needed for direct plotting or for using the above libraries.

The Demo_FinRL_Meta_Integrate_Trends_data_to_DOW_Jones.ipynb is a good example.
It is done there, but with a (hacky) workaround using a Custom_DataProcessor / the yahoofinance dataframe timestamps.

A solution I could think of would be adding 3 functions to the env that return equal-weight portfolio returns, buy-and-hold returns, and the agent's returns, including the timestamps. A pd.Series with a DatetimeIndex could work.

This would also be a good preparation for more advanced reward functions like (differential / deflated) sharpe, comparing to buy-and-hold etc. that rely on a return series.
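A minimal sketch of what such a helper could look like, assuming the env records the per-step timestamps and account values (names here are illustrative):

import pandas as pd

def get_agent_returns(dates, account_values) -> pd.Series:
    # equity curve indexed by time, ready for pyfolio / quantstats
    equity = pd.Series(account_values, index=pd.DatetimeIndex(dates), name="agent")
    return equity.pct_change().fillna(0.0)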

What do you think?

Opened "Demo_MultiCrypto_Trading.ipynb" in colab ran into multiple problems

I was trying to run the multi crypto demo. I made a few changes to get through the first part; maybe some of those need to be added. Happy to make a pull request if so. But I'm still not able to get the demo to work.

pip install tushare is missing from the install list

In the "Functions for Training and Testing" block I had to add a directory change for it to find the data_processor
%cd /FinRL-Meta/
from drl_agents.elegantrl_models import DRLAgent as DRLAgent_erl
from drl_agents.rllib_models import DRLAgent as DRLAgent_rllib
from drl_agents.stablebaselines3_models import DRLAgent as DRLAgent_sb3
from finrl_meta.data_processor import DataProcessor

After these changes I was able to get to the training step but got stuck here with this error:
Binance successfully connected
Succesfully add technical indicators
| Arguments Remove cwd: ./test_ppo
################################################################################
ID Step maxR | avgR stdR avgS stdS | expR objC etc.

IndexError Traceback (most recent call last)
in ()
11 erl_params=ERL_PARAMS,
12 break_step=5e4,
---> 13 if_vix=False
14 )

in train(start_date, end_date, ticker_list, data_source, time_interval, technical_indicator_list, drl_lib, env, model_name, if_vix, **kwargs)
36 trained_model = agent.train_model(model=model,
37 cwd=current_working_dir,
---> 38 total_timesteps=break_step)
39
40 elif drl_lib == 'rllib':

/FinRL-Meta/drl_agents/elegantrl_models.py in train_model(self, model, cwd, total_timesteps)
79 model.cwd = cwd
80 model.break_step = total_timesteps
---> 81 train_and_evaluate(args=model)
82
83 @staticmethod

/usr/local/lib/python3.7/dist-packages/elegantrl/train/run.py in train_and_evaluate(failed resolving arguments)
39
40 torch.set_grad_enabled(True)
---> 41 logging_tuple = agent.update_net(buffer, batch_size, repeat_times, soft_update_tau)
42 torch.set_grad_enabled(False)
43

/usr/local/lib/python3.7/dist-packages/elegantrl/agents/AgentPPO.py in update_net(self, buffer, batch_size, repeat_times, soft_update_tau)
187 buf_logprob = self.act.get_old_logprob(buf_action, buf_noise)
188
--> 189 buf_r_sum, buf_adv_v = self.get_reward_sum(buf_len, buf_reward, buf_mask, buf_value) # detach()
190 buf_adv_v = (buf_adv_v - buf_adv_v.mean()) * (self.lambda_a_value / (buf_adv_v.std() + 1e-5))
191 # buf_adv_v: buffer data of adv_v value

/usr/local/lib/python3.7/dist-packages/elegantrl/agents/AgentPPO.py in get_reward_sum_raw(self, buf_len, buf_reward, buf_mask, buf_value)
237 pre_r_sum = 0
238 for i in range(buf_len - 1, -1, -1):
--> 239 buf_r_sum[i] = buf_reward[i] + buf_mask[i] * pre_r_sum
240 pre_r_sum = buf_r_sum[i]
241 buf_adv_v = buf_r_sum - buf_value[:, 0]

IndexError: index 8641 is out of bounds for dimension 0 with size 1

Need to restart runtime in colab

Since yesterday I need to restart the runtime after installing the FinRL library. Before, it worked perfectly.

WARNING: The following packages were previously imported in this runtime:
[pkg_resources]
You must restart the runtime in order to use newly installed versions.

[Suggestion] Normalization.

Adding normalization to the data preprocessor might be a great feature (a minimal sketch follows the references below):

  • Min-Max Normalization,

  • Decimal Scaling Normalization,

  • Z-Score Normalization,

  • Median Normalization,

  • Sigmoid Normalization,

  • Tanh estimators

  • Bhanja, Samit & Das, Abhishek. (2018). Impact of Data Normalization on Deep Neural Network for Time Series Forecasting. ResearchGate

These are more advanced / adaptive approaches:

  • Passalis, Nikolaos, et al. Deep Adaptive Input Normalization for Time Series Forecasting. 2019. GitHub repo, arXiv:1902.07892
  • Nalmpantis, Angelos, et al. "Deep Adaptive Group-Based Input Normalization for Financial Trading". Pattern Recognition Letters, vol. 152, December 2021, pp. 413–19. DOI.org (Crossref), https://doi.org/10.1016/j.patrec.2021.11.004
  • Tran, Dat Thanh, et al. Bilinear Input Normalization for Neural Networks in Financial Forecasting. 2021. arXiv:2109.00983
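A minimal sketch of two of the basic schemes above, applied per feature column (df and feature_cols are placeholders for the processed data):

import pandas as pd

def z_score_normalize(df: pd.DataFrame, cols) -> pd.DataFrame:
    out = df.copy()
    out[cols] = (df[cols] - df[cols].mean()) / df[cols].std()
    return out

def min_max_normalize(df: pd.DataFrame, cols) -> pd.DataFrame:
    out = df.copy()
    out[cols] = (df[cols] - df[cols].min()) / (df[cols].max() - df[cols].min())
    return out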

Data Sources

Hi, I have a different data source than what is available here. Are the data sources customizable, or can we define our own data source, and if so, in which module? Also, on the environment and technical indicators: there are several custom indicators I have built; can we add them as well? The central idea is to add them to the environment settings. Thanks!! Hope to use this in a live production environment soon!!

Error to run Demo Plug and Play notebook

KeyError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/finrl/drl_agents/elegantrl/models.py in get_model(self, model_name, model_kwargs)
64 model.net_dim = model_kwargs["net_dimension"]
---> 65 model.target_step = model_kwargs["target_step"]
66 model.eval_gap = model_kwargs["eval_gap"]

KeyError: 'target_step'

Question about scaling in and around get_state()

Hi, I see various scaling operations going on in some of the environments, e.g. in env_multiple_crypto.py there are lines like this:

state =  np.hstack((self.cash * 2 ** -18, self.stocks * 2 ** -3))
normalized_tech_i = tech_i * 2 ** -15
reward = (next_total_asset - self.total_asset) * 2 ** -16

I am wondering how these numbers were determined. Am I right that these should be dependent on the input data and need to be made dynamic instead of hardcoded?
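One possible way to make these scales data-dependent instead of hardcoded (an illustrative sketch, not the repo's approach) is to derive the nearest power of two from the data itself:

import numpy as np

def power_of_two_scale(x) -> float:
    # pick 2**-k so that the scaled quantity falls roughly into [0, 1]
    max_abs = np.max(np.abs(x))
    return 2.0 ** -np.ceil(np.log2(max_abs)) if max_abs > 0 else 1.0

# e.g. cash_scale = power_of_two_scale(initial_cash); state = cash * cash_scale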

Error in Demo_MultiCrypto_Trading.ipynb

I opened the file in Google Colab thinking that it should be straightforward to run once the libraries are installed. All libraries got installed, but when I run the train method below, I get the error TypeError: __init__() missing 3 required positional arguments: 'start_date', 'end_date', and 'time_interval'

train(start_date=TRAIN_START_DATE,
end_date=TRAIN_END_DATE,
ticker_list=TICKER_LIST,
data_source='binance',
time_interval='5m',
technical_indicator_list=TECHNICAL_INDICATORS_LIST,
drl_lib='elegantrl',
env=env,
model_name='ppo',
current_working_dir='./test_ppo',
erl_params=ERL_PARAMS,
break_step=5e4,
if_vix=False
)

ModuleNotFoundError in Demo_MultiCrypto_Trading.ipynb on colab

Hello,

Great project. I'm attempting to execute the colab notebook and get an error on line 3 of the cell "Functions for Training and Testing".

from drl_agents.stablebaselines3_models import DRLAgent as DRLAgent_sb3

Full stack trace is below. Looks like some neo_finrl references need to be updated in stablebaselines3_models.py?

Note: If I move the associated imports inside the drllib condition, I am able to start training with e.g. elegantrl.

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-3-a440fe771f96> in <module>()
      1 from drl_agents.elegantrl_models import DRLAgent as DRLAgent_erl
      2 from drl_agents.rllib_models import DRLAgent as DRLAgent_rllib
----> 3 from drl_agents.stablebaselines3_models import DRLAgent as DRLAgent_sb3
      4 from finrl_meta.data_processor import DataProcessor
      5 

/FinRL-Meta/drl_agents/stablebaselines3_models.py in <module>()
      5 import pandas as pd
      6 from finrl.apps import config
----> 7 from finrl.neo_finrl.env_stock_trading.env_stocktrading import StockTradingEnv
      8 from finrl.neo_finrl.preprocessor.preprocessors import data_split
      9 from stable_baselines3 import A2C, DDPG, PPO, SAC, TD3

ModuleNotFoundError: No module named 'finrl.neo_finrl'

---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
