tcae / followupward
Follow crypto trend - a Python and ML learning project
Need to change recipes to remember what was done for adaptation and evaluation.
config file for:
- target key
- time aggregations
- pairs of currency pair and set config file
- balancing strategy (which should be the same for all sets of one adaptation)
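A minimal sketch of what such a config could look like as a Python dict. All key names and values are assumptions for illustration, not the project's actual schema.

```python
# Hypothetical sketch of the set/adaptation config described above.
# All key names and values are assumptions, not the project's schema.
set_config = {
    "target_key": "gain_2pct_in_1h",           # which labelling recipe to use
    "time_aggregations": [1, 5, 15, 60],       # minute aggregation windows
    "pairs": {                                 # currency pair -> set config file
        "BTC/USDT": "btc_usdt_set.json",
        "ETH/USDT": "eth_usdt_set.json",
    },
    # balancing strategy must be identical for all sets of one adaptation
    "balancing": {"strategy": "equal_class_counts", "subsample_every": 10},
}

def validate(cfg):
    """Fail early if a required config section is missing."""
    required = {"target_key", "time_aggregations", "pairs", "balancing"}
    missing = required - cfg.keys()
    if missing:
        raise KeyError(f"config is missing sections: {sorted(missing)}")
    return True
```

Validating at load time makes it easier to remember later what was done for an adaptation, because incomplete recipes are rejected up front.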
target_log shall be maintained to log:
In general one doesn't want too much print output, which is time-consuming to inspect for important messages. In error situations, however, such output is highly appreciated for analyzing the situation. Hence a filter utility is required that provides the right level of information depending on the current need. Python provides a standard logging facility, or maybe there are even better packages out there.
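The filtering idea can be sketched with Python's standard logging module: run quietly in normal operation, then switch to verbose output when an error situation needs analysis. The logger name and helper are illustrative, not existing project code.

```python
import logging

# Sketch: quiet by default, verbose on demand. Names are illustrative.
logger = logging.getLogger("followupward")
handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.WARNING)   # default: only important messages

def enable_debug_output():
    """Raise verbosity when an error situation must be analyzed."""
    logger.setLevel(logging.DEBUG)

logger.debug("not shown at WARNING level")
enable_debug_output()
logger.debug("shown after switching to DEBUG")
```

The same logger hierarchy can later be split per module (e.g. one child logger per component) so that only the suspicious component is switched to DEBUG.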
Log trade data and compare in dashboard with saved data.
To understand what was going on, classifier predictions and decision results shall be logged during evaluation and production.
in the period of sell and buy signals, log the order book at second granularity to train a classifier on where to place the order limit.
features: 1x 5min top/height/bottom/delta, last and running 1min top/height/bottom/delta, order book with 1/10 quantiles (average delta, volume %) for both ask and bid.
target: identify the quantile with the highest profit that consumed 450 USDT within 20s.
position a further order at a better profit position in case a whale eats the first one.
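The 1/10-quantile order-book features above could be computed roughly as follows. This is a sketch under assumptions: the function name, the equal-size bucket split, and the toy numbers are all made up for illustration.

```python
import numpy as np

def orderbook_quantile_features(prices, volumes, n_buckets=10):
    """Split one side of the order book (best price first) into n_buckets
    slices and return (average price delta to best price, volume share)
    per bucket -- a sketch of the 1/10-quantile features described above."""
    prices = np.asarray(prices, dtype=float)
    volumes = np.asarray(volumes, dtype=float)
    deltas = prices - prices[0]              # distance from best price
    total = volumes.sum()
    buckets = np.array_split(np.arange(len(prices)), n_buckets)
    avg_delta = np.array([deltas[ix].mean() for ix in buckets])
    vol_share = np.array([volumes[ix].sum() / total for ix in buckets])
    return avg_delta, vol_share

# toy ask side: 20 levels, 0.5 USDT apart, equal volume per level
ask_prices = 100.0 + 0.5 * np.arange(20)
ask_volumes = np.full(20, 5.0)
avg_delta, vol_share = orderbook_quantile_features(ask_prices, ask_volumes)
```

The same function would be applied separately to the bid side (with deltas negated or taken as absolute distance from the best bid).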
load set config and initialize TrainingControl
if price drops below the buy price then sell (emergency sell)
documents all actions and provides a means to reuse ccxt action results when the action id is stored together with the results in pandas files
Performance assessment happens today in a single process outside TensorFlow. However, it is based on a number of subsets, which can be calculated in parallel without conflicts. Today the performance assessment takes much longer than the actual adaptation, which is a nice opportunity for a speedup.
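Since the subsets are independent, the assessment can be expressed as a parallel map. This is a sketch: `assess_subset` is a toy stand-in for the real per-subset assessment, and `ThreadPoolExecutor` is used for portability (for CPU-bound numpy work, `ProcessPoolExecutor` is the drop-in replacement).

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def assess_subset(close):
    """Toy stand-in for per-subset performance assessment:
    cumulative return of buy-and-hold over the subset."""
    close = np.asarray(close, dtype=float)
    return close[-1] / close[0] - 1.0

# The subsets are independent, so they can be mapped in parallel.
subsets = [np.array([100.0, 101.0, 102.0]),
           np.array([50.0, 49.0]),
           np.array([10.0, 12.0])]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(assess_subset, subsets))
```

`pool.map` preserves the subset order, so the results can be joined back to the original set assignment without bookkeeping.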
The training dashboard is used to understand what was implemented, using the cached history of saved cryptos. Similarly, the live dashboard shall provide an overview of the activities of the last 10 days, so that action can be taken if the automation doesn't work as expected.
benefit from efficient code and parallelism by embedding the performance assessment into an extended Keras model. This probably requires switching from the sequential to the functional Keras API, because 2 inputs are required (features and close prices) and 2 outputs are required (class predictions and performance per buy/sell threshold).
for training use equal numbers of sell, buy, and hold samples, but use only every Xth (e.g. 10th) of those, so that different situations are trained equally and learning is not biased towards a few exotic situations.
To achieve that, a DataFrame with all samples and a training usage counter is used.
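A sketch of that balancing idea with pandas. Column names (`target`, `usage`), the function name, and the toy data are assumptions; the real recipe may differ.

```python
import pandas as pd

def balanced_batch(df, per_class, every=10):
    """Sketch: take every `every`-th least-used sample of each class so
    buy/sell/hold are trained equally, and count how often each sample
    was used. Column names are assumptions."""
    chosen = []
    for cls, grp in df.groupby("target"):
        grp = grp.sort_values("usage")          # least-used samples first
        chosen.append(grp.iloc[::every].head(per_class))
    batch = pd.concat(chosen)
    df.loc[batch.index, "usage"] += 1           # remember training usage
    return batch

# toy set: heavily unbalanced towards hold, as in real trade signals
df = pd.DataFrame({
    "target": ["hold"] * 30 + ["buy"] * 3 + ["sell"] * 3,
    "usage": 0,
})
# every=1 here only because the toy set is tiny; the note suggests e.g. 10
batch = balanced_batch(df, per_class=3, every=1)
```

Because the usage counter is incremented in place, repeated calls rotate through different hold samples instead of reusing the same few.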
trading class required. Catalyst is too limited; try directly with ccxt.
what is needed?
flow
depending on the normalization, features of one or the other base are not scaled in a comparable fashion, resulting in missing / under-represented trade signals for such bases. a scaling per base may be a solution.
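Per-base scaling could look like the following pandas sketch (z-score within each base). Column names and the toy data are illustrative only.

```python
import pandas as pd

# Sketch of per-base scaling: normalize each feature within its base
# (e.g. BTC vs ETH) so bases on very different price levels produce
# comparable feature ranges. Column names are illustrative.
df = pd.DataFrame({
    "base": ["BTC", "BTC", "BTC", "ETH", "ETH", "ETH"],
    "feature": [100.0, 110.0, 120.0, 1.0, 1.1, 1.2],
})

def scale_per_base(df, col):
    grouped = df.groupby("base")[col]
    return (df[col] - grouped.transform("mean")) / grouped.transform("std")

df["feature_scaled"] = scale_per_base(df, "feature")
```

After scaling, both bases span the same range, so a signal threshold tuned on one base is not systematically missed on the other.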
place a limit sell at +1% * classifier probability after a buy and increase it with every buy signal. The sell signal is independently used as risk mitigation.
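The placement rule reduces to simple arithmetic; a sketch with assumed function and parameter names:

```python
def limit_sell_price(buy_price, classifier_probability, step_pct=0.01):
    """Sketch of the rule above: place the limit sell at
    +1% * classifier probability above the buy price.
    `step_pct` is an assumed parameter name."""
    return buy_price * (1.0 + step_pct * classifier_probability)

def raise_on_buy_signal(current_limit, buy_price, classifier_probability,
                        step_pct=0.01):
    """Every additional buy signal may raise the limit, never lower it."""
    candidate = limit_sell_price(buy_price, classifier_probability, step_pct)
    return max(current_limit, candidate)
```

Keeping the limit monotonically increasing matches the note: each buy signal increases the limit, while the independent sell signal handles the downside.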
counterexample showing that 24h volume is not a good criterion: IOTA on 29 Apr
rationale: in the crypto world currencies rise unexpectedly
measure: look at the last hour's volume to decide about liquidity
deinvest when liquidity decreases too much again
use hysteresis: trigger add at a higher volume than deinvest
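The hysteresis rule above can be sketched as a tiny state update; the threshold values and names are made up for illustration.

```python
def update_invested(invested, volume_per_hour,
                    add_threshold=5000.0, deinvest_threshold=3000.0):
    """Hysteresis sketch: only add a base above the higher volume
    threshold and only deinvest below the lower one; in between,
    the current state is kept. Threshold values are made up."""
    if not invested and volume_per_hour > add_threshold:
        return True
    if invested and volume_per_hour < deinvest_threshold:
        return False
    return invested
```

The gap between the two thresholds prevents flapping in and out of a base when the hourly volume oscillates around a single cut-off.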
Trade signal classes are very unbalanced. The vast majority is hold, and the loss function values a buy-versus-hold mistake the same as a buy-versus-sell mistake. This is not OK and underpins the need for a performance assessment next to precision, loss, and other KPIs. It is expected to be advantageous to experiment with regressors, where the loss directly represents the performance impact.
Today changes easily and unexpectedly break something else, which may be identified long after the fact. That makes debugging difficult because the relation to the causing change is lost.
index, close and target usage (with set assignment and slice ix assignment) outside core data, classifier title = target usage
start with balanced training following the weakest class
a base shall be discarded when it shows volume gaps, because volume gaps equal low liquidity. If such a base nevertheless passes the average minute volume criterion, then likely some whales are dominating that market.
to avoid overnervous reactions, avoid labelling contradicting signals too quickly. One measure is the already introduced smoothing, i.e. change a sell or buy signal to hold if the gain or loss does not justify buying or selling. The second measure is to consider an X minute (e.g. 5 minute) reaction time after a signal, i.e. the signal should still hold after 5 minutes, which reduces the chances but mitigates risk.
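The reaction-time measure could be sketched as a confirmation pass over the raw labels. Function name and label strings are assumptions; the project's actual labelling may differ.

```python
def confirmed_signal(raw_signals, hold_minutes=5):
    """Sketch of the reaction-time measure: a buy/sell label is only
    kept if the same raw signal still holds `hold_minutes` later;
    otherwise it becomes 'hold'. This reduces chances but mitigates
    the risk of overnervous, contradicting signals."""
    confirmed = []
    n = len(raw_signals)
    for i, sig in enumerate(raw_signals):
        if sig == "hold":
            confirmed.append("hold")
        elif i + hold_minutes < n and raw_signals[i + hold_minutes] == sig:
            confirmed.append(sig)
        else:
            confirmed.append("hold")
    return confirmed
```

Note that this pass looks `hold_minutes` into the future, so it is only valid for training labels, not for live signal generation.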
if a gain of >1% happens only within the next minute, then ignore it as a fake peak because the strategy cannot benefit from it
CpcSet and Cpc should not need the concept of a currency pair; it should be removed.
TargetFeatures does not need it either, but as pair data is loaded there, it should stay for information purposes.
Reduce set bunches during adaptation because the laptop cannot train in the background with the required computational load; especially memory is a problem.