
universal-portfolios's People

Contributors

alexander-myltsev, angonyfox, booxter, changlan, clumma, dexhunter, drpaprikaa, e23099, mackimaow, mannmann2, marigold, mdengler, paulorodriguesxv, shayandavoodii, stergnator, xanderdunn


universal-portfolios's Issues

MaybeEncodingError while using the run_combination() method

While running

train_olmar = algos.OLMAR.run_combination(train, window=5, eps=[3,5,10,15])
train_olmar.plot()

I get the following error

---------------------------------------------------------------------------
MaybeEncodingError                        Traceback (most recent call last)
<ipython-input-53-a58f3354f3e1> in <module>
----> 1 train_olmar = algos.OLMAR.run_combination(train, window=5, eps=[3,5,10,15])
      2 train_olmar.plot()

~/anaconda3/lib/python3.8/site-packages/universal/algo.py in run_combination(cls, S, **kwargs)
    299         # try all combinations in parallel
    300         with tools.mp_pool(n_jobs) as pool:
--> 301             results = pool.map(
    302                 _run_algo_params, [(S, cls, all_params) for all_params in params_to_try]
    303             )

~/anaconda3/lib/python3.8/multiprocessing/pool.py in map(self, func, iterable, chunksize)
    362         in a list that is returned.
    363         '''
--> 364         return self._map_async(func, iterable, mapstar, chunksize).get()
    365 
    366     def starmap(self, func, iterable, chunksize=None):

~/anaconda3/lib/python3.8/multiprocessing/pool.py in get(self, timeout)
    769             return self._value
    770         else:
--> 771             raise self._value
    772 
    773     def _set(self, i, obj):

MaybeEncodingError: Error sending result: '[<universal.result.AlgoResult object at 0x7f62d07f00a0>]'. Reason: 'AttributeError("'NoneType' object has no attribute 'picklable'")'

Interpretation Issue

Hi.
I've been trying to test the algorithms with the data you provided, but I don't understand the meaning of the measures returned by result.summary(), like profit_factor and turnover.
I also couldn't figure out how large the initial investment is when running the algorithms, and what annualized return and turnover mean in that regard.

[screenshot of the result.summary() output]
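In case it helps to pin down what I'm asking, here is my current rough guess at two of the metrics (textbook definitions and assumed attribute types, not necessarily what result.summary() actually computes; I assume the algorithms start from a notional capital of 1.0, since they only work with weights and price relatives):

    # equity: pandas Series of daily portfolio value, weights: DataFrame of daily weights
    def annualized_return(equity, periods_per_year=252):
        # geometric growth rate per year, independent of the initial investment
        total_growth = equity.iloc[-1] / equity.iloc[0]
        years = len(equity) / periods_per_year
        return total_growth ** (1 / years) - 1

    def turnover(weights):
        # average fraction of the portfolio traded per rebalance
        return weights.diff().abs().sum(axis=1).mean()

Is that roughly what the summary reports, and is profit_factor then gross profits divided by gross losses?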

Installation Issues

Hi,
I am using Python 3.9.7 and facing the following error while installing with pip. I'd appreciate your help.

" error: could not create 'build\bdist.win-amd64\wheel\.\sklearn\datasets\tests\data\openml\292\api-v1-json-data-list-data_name-australian-limit-2-data_version-1-status-deactivated.json.gz': No such file or directory
  ----------------------------------------
  ERROR: Failed building wheel for scikit-learn
Failed to build cvxopt scikit-learn
ERROR: Could not build wheels for scikit-learn which use PEP 517 and cannot be installed directly
WARNING: Ignoring invalid distribution -uantstats (c:\programdata\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -uantstats (c:\programdata\anaconda3\lib\site-packages)
WARNING: Ignoring invalid distribution -uantstats (c:\programdata\anaconda3\lib\site-packages)"

Issue with Mac M1

Dear all,
It is impossible to install on a Mac M1.
There are a lot of issues with the versions of scikit-learn, statsmodels, and cvxopt.
Could you help me?

Thanks

Riccardo

a self.eta error in TCO

I don't know where this int 10 comes from; I guess it should be self.eta, whose default value is 10?

    def update_tco(self, x: npt.NDArray, b: npt.NDArray, x_pred: npt.NDArray):
        """
        :param x: ratio of change in price
        """
        lambd = 10 * self.trx_fee_pct  # SHOULD BE self.eta * self.trx_fee_pct ?

        # last price adjusted weights
        updated_b = np.multiply(b, x) / np.dot(b, x)

Strategy of "Best "

Hi, thanks for the well-organized code.

I have a question about the Best strategy:
BestSoFar seems to be different from the classic best method.

I refer to this survey

If BestSoFar is a different strategy, could you tell me its definition or give some details?

Thanks!

Issue with result.plot()

When I was trying to run Paul Perry's 'comparison of all algorithms' yesterday, I found that the lines representing the results of the portfolios were all blue. Then I found it was because you have added a highlight in result.plot() that plots the portfolio again in blue. I think it would be better to make highlight a True/False argument of plot().

2 questions

Hi Marigold!
I have 2 questions:

  1. How do you get a time series, with the result of the OLPS for every day, out of the "result"?
    I can only find summaries like "result.summary()" in the result.

  2. How do you plot the results of many different OLPS in the same graph?
    Right now you only get one OLPS as PORTFOLIO plus UCRP when doing "result.plot()".
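To make both questions concrete, here is roughly what I am after (only a sketch; result.equity and plotting several equity curves on one axis are assumptions on my part, and data stands for my price DataFrame):

    import matplotlib.pyplot as plt
    from universal import algos

    # 1. a daily time series out of the result
    result = algos.OLMAR(window=5, eps=10).run(data)
    daily_equity = result.equity          # assumed: daily portfolio value as a Series

    # 2. several algorithms in one graph
    fig, ax = plt.subplots()
    for algo in [algos.OLMAR(window=5, eps=10), algos.CRP(), algos.BAH()]:
        algo.run(data).equity.plot(ax=ax, label=type(algo).__name__)
    ax.legend()
    plt.show()

Is this roughly the intended way, or is there a built-in helper for it?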

Thank you for your time in advance.

The isinstance checks if cov_estimator and mu_estimator are strings in mpt.py are Python 2 only

The isinstance checks for whether cov_estimator and mu_estimator are strings in mpt.py are Python 2 only.

I have just changed basestring to str on lines 49 and 63, but it looks as if you need something more complex to be compatible with both versions of Python:
http://stackoverflow.com/questions/11301138/how-to-check-if-variable-is-string-with-python-2-and-3-compatibility.
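For reference, here is the kind of minimal compatibility shim I had in mind, along the lines of the linked answer (just a sketch, not a patch against the actual mpt.py):

    def is_string(obj):
        """True for str on Python 3 and for str/unicode on Python 2."""
        try:
            return isinstance(obj, basestring)   # Python 2
        except NameError:
            return isinstance(obj, str)          # Python 3

mpt.py could then call is_string(cov_estimator) instead of isinstance(cov_estimator, basestring).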

Thanks for the wonderful work, and for open sourcing this,
Ric

Package Adoption

Hi @Marigold,

The Python package for financial machine learning, mlfinlab, would like to extend our library to include online portfolio selection algorithms.

We would love to adopt and productionalise the code you have here into our library for others to use. We will make sure to give attribution to you. It would be great to open a dialog and discuss this further.

We would like to invite you to our Slack channel.

Looking forward
Jacques Joubert

simplex projection

Hello @akaniklaus. Sorry, I don't exactly have an issue, but rather a question regarding simplex projection.
I tried to contact the authors of the PAMR paper, but it seems their email address is no longer valid, so I thought I would ask you.
I was wondering whether relaxing the non-negativity constraint and then projecting the solution onto the simplex preserves the optimality of the solution, and whether solving the problem with the non-negativity constraint using a numerical solver yields better results?
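For context, this is the kind of projection I mean, i.e. the standard sort-based Euclidean projection onto the probability simplex (a sketch following Duchi et al., 2008; I am assuming tools.simplex_proj does something equivalent):

    import numpy as np

    def project_to_simplex(v):
        # Euclidean projection of v onto {w : w >= 0, sum(w) = 1}
        v = np.asarray(v, dtype=float)
        n = v.size
        u = np.sort(v)[::-1]                                   # sort descending
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, n + 1) > (css - 1))[0][-1]
        theta = (css[rho] - 1) / (rho + 1)
        return np.maximum(v - theta, 0.0)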
Thanks

min_history for CORN

Hi @Marigold,

I am experiencing reproducibility issues when using different values of min_history with the CORN algorithm. I have narrowed the issue down to how the X_t and X_i objects (lines 104 & 105) are created in algos/corn.py.

I wonder if I don't understand what min_history is supposed to accomplish? Here's how I understand min_history to work. Let's say I am using 10 years of daily data starting from 1/1/2008 up to and including all of 2017, and I only want to solve for the weights for the last 4 weeks of 2017. To do this I set algo.min_history = returns.shape[0] - 20 and then call algo.run(returns).

The problem arises when I now want to find the weights for the last 8 weeks of 2017. This time I set algo.min_history = returns.shape[0] - 40. I expect the weights for the 20 days in December that both problem formulations share to be the same, but they are not.
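A minimal sketch of the comparison I am doing (returns is my 10-year DataFrame of price relatives; I am assuming the per-day weights are exposed as result.weights, the attribute name may differ):

    from universal import algos

    algo = algos.CORN(window=5, rho=0.1)

    # last 4 weeks of 2017 only
    algo.min_history = returns.shape[0] - 20
    weights_4w = algo.run(returns).weights

    # last 8 weeks of 2017
    algo.min_history = returns.shape[0] - 40
    weights_8w = algo.run(returns).weights

    # the 20 overlapping December days should agree, but they don't
    print((weights_4w.iloc[-20:] - weights_8w.iloc[-20:]).abs().max())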

Am I understanding the use of min_history properly?

Thanks in advance,

Stergios

Initial weights in OLMAR

Hi,

I played with olmar.py with various window and eps parameters.

It appears that when setting all of the initial weights to zero (we haven't invested anything yet), the weights last_b at the first time step are always set to 1/N, with N the number of stocks in the portfolio. Then, at the next step, all the weights are set to 0 except one which is set to 1. From there, things look normal and realistic weights are assigned to particular stocks.

So it appears that the first two steps are unnecessary and result in unwanted trades and fees at the very beginning of the trading period.

I also noticed that when setting them to 1/N from the start, the first step is skipped (logically). One could say that this assumption is valid (we already own said stocks before the start), but the second step remains anyway, where everything is sold except one stock. Is there a way to change this behavior?

Thanks,
DrPaprikaa

Transaction Costs Optimization

Hi,

Correct me if I'm wrong, but the various algos implemented in this wonderful repo have been tested in the literature in a framework of non-existent transaction costs.

This paper improves existing OLPS strategies in the case of non-zero transaction costs; maybe you've already read it. In their words:

We add a second term that minimizes the L1 norm of the difference between two consecutive allocations, which is equivalent to minimizing the incurred transaction costs.

Their implementation seems effective; what do you think about it? Could it be implemented here?

I tried an implementation in OLMAR's update(), for example. It goes like this:

import numpy as np
from universal import tools

def update(b, x_pred):
    window = 12
    n = 10
    lam = 10 * trx_fee_pct  # trx_fee_pct: transaction fee fraction, assumed defined on the algo

    # Calculate variables
    vt   = x_pred / np.dot(b, x_pred)
    v_t_ = np.dot(1, vt) / window

    # Update portfolio (soft-threshold the change by the fee penalty lam)
    b_1 = n * (vt - np.dot(v_t_, 1))
    b_  = b_1 + np.sign(b_1) * np.maximum(np.zeros(len(b_1)), np.abs(b_1) - lam)

    # project it onto simplex
    return tools.simplex_proj(b_)

Thanks !
Let me know if can help more :)

Issue on CORN (slow version)

Hello,
I tried the slow version of the CORN algo on Binder and there is an error (a dimension mismatch between arrays).
[screenshot of the dimension-mismatch traceback]

The error should come from here:

X_t = history.iloc[-window:].values.flatten()
for i in range(window, len(history)):
    X_i = history.iloc[i - window : i - 1].values.flatten()

In the paper, the indexing is different:
[screenshot of the indexing used in the paper]

I guess we should have X_i = history.iloc[i - window : i].values.flatten() instead. I will cross-check with the fast version to see if the results match.

Trying to include transaction costs into OLMAR, but it does not work

I tried to include transaction costs into OLMAR in def update(), but doing this does not change the result.

self.trx_fee_pct = trx_fee_pct  

this line has already been added to the OLMAR() class.

    def update(self, b, x_pred, eps):
        """Update portfolio weights to satisfy constraint b * x >= eps
        and minimize distance to previous weights."""
        x_pred_mean = np.mean(x_pred)
        lam = max(
            0.0, (eps - np.dot(b, x_pred)) / np.linalg.norm(x_pred - x_pred_mean) ** 2
        )

        # limit lambda to avoid numerical problems
        lam = min(100000, lam)

        # update portfolio
        b = b + lam * (x_pred - x_pred_mean)

        lambd = self.trx_fee_pct
        # Calculate variables
        vt = x_pred / np.dot(b, x_pred)
        v_t_ = np.mean(vt)

        # Update portfolio
        b_1 = (vt - np.dot(v_t_, 1))
        b_ = b + np.sign(b_1) * (np.abs(b_1) - lambd)

        # project it onto simplex
        return tools.simplex_proj(b_)

LINK : fatal error LNK1181: cannot open input file 'm.lib'

On windows 10, 64bit, report

LINK : fatal error LNK1181: cannot open input file 'm.lib'
    error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.26.28801\\bin\\HostX86\\x64\\link.exe' failed with exit status 1181

I thought it was a C compilation problem that my Windows 10 might not support.
I then installed MS Visual Studio and C++ 14, but the error still appears.

Code de-deprecation

There is a function in tools.py that was deprecated a long time ago.

def rolling_cov_pairwise(df, *args, **kwargs):
    d = {}
    for c in df.columns:
        d[c] = pd.rolling_cov(df[c], df, *args, **kwargs)
    p = pd.Panel(d)
    return p.transpose(1, 0, 2)

I believe if one replaces it with
return df.rolling(kwargs['window']).cov(other=df, pairwise=True)
then access to the S matrix would look like df[name].unstack(). Or am I missing something?
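In other words, something like this (a sketch assuming the callers only ever pass a window keyword, as they did via pd.rolling_cov; modern pandas returns a MultiIndex DataFrame instead of the old Panel):

    import pandas as pd

    def rolling_cov_pairwise(df, window, **kwargs):
        # pairwise rolling covariance between all columns of df;
        # the result is indexed by (date, asset) with the assets as columns
        return df.rolling(window, **kwargs).cov(pairwise=True)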

from pandas.stats.moments import rolling_mean as rolling_m ModuleNotFoundError: No module named 'pandas.stats'

    from universal import tools
  File "C:\spiry.ai\envs\dlwin36\lib\site-packages\universal_portfolios-0.3.2-py3.6.egg\universal\tools.py", line 5, in <module>
    from pandas.stats.moments import rolling_mean as rolling_m
ModuleNotFoundError: No module named 'pandas.stats'

I am running code that uses universal-portfolios as a library, but I get this error, which seems to come from universal-portfolios itself.

Optimization in CORN algo

Hi,

While reading through the code, I found the optimization method in the CORN algo a bit confusing:

def optimal_weights(self, X):
    X = np.mat(X)

    n, m = X.shape
    P = 2 * matrix(X.T * X)
    q = -3 * matrix(np.ones((1, n)) * X).T

    G = matrix(-np.eye(m))
    h = matrix(np.zeros(m))
    A = matrix(np.ones(m)).T
    b = matrix(1.)

    sol = solvers.qp(P, q, G, h, A, b)
    return np.squeeze(sol['x'])

In the original paper by Bin Li and Steven Hoi, this should be something like a Prod(BX) or a Sum(log(BX)). Are these two versions equivalent?
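For comparison, this is how I would naively solve the log-wealth formulation from the paper with scipy (just a sketch of my understanding, not the repo's code; X is the window of historical price relatives):

    import numpy as np
    from scipy.optimize import minimize

    def log_optimal_weights(X):
        # maximize sum_t log(b . x_t) over the probability simplex
        X = np.asarray(X)
        m = X.shape[1]
        neg_log_wealth = lambda b: -np.sum(np.log(X @ b))
        constraints = ({'type': 'eq', 'fun': lambda b: np.sum(b) - 1.0},)
        bounds = [(0.0, 1.0)] * m
        b0 = np.ones(m) / m
        res = minimize(neg_log_wealth, b0, bounds=bounds, constraints=constraints)
        return res.x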

Thanks in advance!

Predictive measure for portfolio rebalancing

Hi,

I am very new to OLPS/universal-portfolios and very impressed! I was hoping that UP could implement a ranking strategy that uses a predictive measure to rebalance the portfolio. The idea is described here and basically involves rebalancing a portfolio daily based on a prediction value for each equity (a probability in my case). Is this something that could be implemented? I have looked through the algos and thought it could be done with something like best_so_far?

Thanks for any guidance!

Unable to install

Trying to install, I got the error message below. I am no expert in Python installation, but after digging I can confirm that the file requirements.txt is not present in the temporary directory used by pip, although it is present in a (manually downloaded) ZIP file.

C:\Users\mdelvaux\AppData\Local\Continuum\Anaconda3\UniversalPortfolioMarigold>pip install --no-clean --no-deps --no-cache universal-portfolios
Collecting universal-portfolios
Downloading universal-portfolios-0.1.tar.gz (3.7MB)
100% |################################| 3.7MB 13.7MB/s
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
  File "C:\Users\mdelvaux\AppData\Local\Continuum\Anaconda3\lib\site-packages\pip\download.py", line 412, in get_file_content
    with open(url) as f:
FileNotFoundError: [Errno 2] No such file or directory: 'requirements.txt'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<string>", line 20, in <module>
  File "C:\Users\mdelvaux\AppData\Local\Temp\pip-build-sjzx1yfs\universal-portfolios\setup.py", line 6, in <module>
    reqs = [str(ir.req) for ir in install_reqs]
  File "C:\Users\mdelvaux\AppData\Local\Temp\pip-build-sjzx1yfs\universal-portfolios\setup.py", line 6, in <listcomp>
    reqs = [str(ir.req) for ir in install_reqs]
  File "C:\Users\mdelvaux\AppData\Local\Continuum\Anaconda3\lib\site-packages\pip\req\req_file.py", line 77, in parse_requirements
    filename, comes_from=comes_from, session=session
  File "C:\Users\mdelvaux\AppData\Local\Continuum\Anaconda3\lib\site-packages\pip\download.py", line 416, in get_file_content
    'Could not open requirements file: %s' % str(exc)
pip.exceptions.InstallationError: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'

----------------------------------------

Command "python setup.py egg_info" failed with error code 1 in C:\Users\mdelvaux\AppData\Local\Temp\pip-build-sjzx1yfs\universal-portfolios

min_history for MPT

Thank you for your code. This is a problem I have with MPT. In algo.py, at:

def weights(self, X, min_history=None, log_progress=True):
    """ Return weights. Call step method to update portfolio sequentially. Subclass
    this method only at your own risk. """
    min_history = self.min_history if min_history is None else min_history

self.min_history has not been initialized (it is 0), so it tries to calculate the covariance when t=0.
I changed the code to:

def weights(self, X, min_history=None, log_progress=True):
    """ Return weights. Call step method to update portfolio sequentially. Subclass
    this method only at your own risk. """
    if not self.min_history:
        self.min_history = tools.freq(X.index)
    min_history = self.min_history if min_history is None else min_history

I don't know if this is correct, but it doesn't crash now.
On another topic, it would be nice if you added the algorithms from
"Boosting Moving Average Reversion Strategy for Online Portfolio Selection: A Meta-Learning Approach"
by Lin Xiao, Zhang Min, Zhang Yongfeng, Gu Zhaoquan, Liu Yiqun, and Ma Shaoping.

They combine OLMAR experts. They have a GitHub repository with MATLAB code that works with the OLPS toolbox. A slight modification of their code yields very nice results.
Thank You

Kelly

Hi, I know you are not actively working on this project, but I was wondering from your own experience if you have ever gotten the kelly algo to work?

Best regards,
Derek

new pip 10.0 breaks installation

Collecting universal-portfolios (from ct==0.2.5.dev1)
  Downloading https://files.pythonhosted.org/packages/14/3d/067eda364ed7c5cb89a3caab2973d62c0614d8284b6125907f7e5bdf52d2/universal-portfolios-0.3.2.tar.gz (3.7MB)
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-install-6V_jgj/universal-portfolios/setup.py", line 3, in <module>
        from pip.req import parse_requirements
    ImportError: No module named req
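For what it's worth, the usual fix for this pip 10 breakage is to stop importing pip internals in setup.py and read requirements.txt directly; a rough sketch of what that could look like (an assumption about the fix, not the repo's actual setup.py):

    from setuptools import setup, find_packages

    # read the requirements without going through pip's private API
    with open("requirements.txt") as f:
        reqs = [line.strip() for line in f if line.strip() and not line.startswith("#")]

    setup(
        name="universal-portfolios",
        version="0.3.2",
        packages=find_packages(),
        install_requires=reqs,
    )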

Impact of Ledoit-Wolf covariance estimation

Hi @Marigold,

You suggested implementing Ledoit-Wolf covariance estimation at the end of modern-portfolio-theory.ipynb.

I read the paper. The shrinkage ratio of the F estimator should go to zero when there are only around 11 assets (as in modern-portfolio-theory.ipynb), so the resulting covariance matrix equals the sample covariance matrix. To show the benefit, they made an estimation for ~900 assets using monthly historical data. I wonder what impact you expect to achieve by implementing the method in universal-portfolios?
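As a quick sanity check, the shrinkage intensity is easy to inspect with scikit-learn (a sketch with placeholder data, just to see how much shrinkage ~11 assets actually get):

    import numpy as np
    from sklearn.covariance import LedoitWolf

    returns = np.random.randn(250, 11)   # placeholder: one year of daily returns, 11 assets
    lw = LedoitWolf().fit(returns)
    print("shrinkage intensity:", lw.shrinkage_)
    print("estimated covariance shape:", lw.covariance_.shape)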

specifying the args of CORN class

Hi, I'm trying to use your code to implement the CORN strategy, but I'm confused about using it because of the sparse documentation. By combining the CORN paper and your code I got a rough idea, but not an adequate one.

So I decided to open an issue to get help from you. The CORN class has three optional init arguments, namely window, rho, and fast_version, which are explicit. But to get the job done there are further arguments in the init_step method, such as X, and in the step_slow method, such as x, last_b, and history, whose identity and meaning are not explicitly explained. Here are my guesses, which I need to confirm:

  • X: historical relative prices of all stocks, as a pandas.DataFrame.
  • x: maybe a container like a vector that contains the names of the equities (stocks).
  • last_b: the latest weights. (But I don't know what I should pass for the initial round!?)
  • history: again, probably the historical relative prices of all stocks, as a pandas.DataFrame.

Can you please shed light on these variables? Then I can properly pass the correct values to these methods.
Thank you so much.
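For reference, this is how I am currently calling it, based on the guesses above (the shape of X and the file name are placeholders/assumptions on my part):

    import pandas as pd
    from universal import algos

    # X: DataFrame of historical prices, rows = trading days, columns = tickers
    X = pd.read_csv("prices.csv", index_col=0, parse_dates=True)

    algo = algos.CORN(window=5, rho=0.1, fast_version=True)
    result = algo.run(X)        # I believe run() calls init_step/step internally
    print(result.summary())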

What is the difference between `weight` and `step`?

I see the comment for step is

Calculate new portfolio weights.

and weight is

Return weights. Call step method to update portfolio sequentially. Subclass this method only at your own risk.

So if I want to calculate the portfolio weight vector, do I only need to call step? But I see that in bah.py there is no step method, so I got really confused by these two functions. Can you explain a bit? Thanks in advance!
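To illustrate my confusion, this is my mental model of how the two relate (a heavily simplified sketch, not the real algo.py, which surely also handles min_history, logging, etc.):

    def weights(self, X):
        # loop over the rows of X and call step() to produce each new weight vector
        B = []
        last_b = self.init_weights(X.shape[1])
        for t in range(len(X)):
            B.append(last_b)
            last_b = self.step(X.iloc[t], last_b, history=X.iloc[:t + 1])
        return B

If that picture is roughly right, I guess BAH simply overrides weights() directly instead of defining step(), but please correct me if I'm wrong.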

error while running On-line portfolios.ipynb

Hi all,
I am getting the following error (RuntimeError: dictionary changed size during iteration) after running the following code:
list_result = algos.OLMAR.run_combination(yahoo_data, window=[3,5,10,15], eps=10)

Is there any reason for that, and can you help with this issue?

I am using Python 3.7 (installed via the Anaconda packaging system) on a Linux machine running the Ubuntu 18.04 distribution.

I also noticed that you are using .ix instead of the .iloc and .loc functions of pandas.

Secondly, one of these algos (OLMAR) uses the ratio type of prices, scaling them by their starting values. However, in real situations orders are issued at the currently available prices, not at the scaled ones. How can I resolve that issue; shall I undo the scaling?
Your input is highly appreciated.

ModuleNotFoundError: No module named 'pandas.stats' / no tools.py

a) Update: looks like you need to update your instructions to tell us to install anaconda3 ?
b) Looks like you need to explain to people that "universal" (your module) is NOT the same thing as "universal" (Shubham Chaudhary's compile tool from 2014)...

Suggestion - fix your doc to say this for "install":

1. install anaconda3
2. pip uninstall universal
3. pip install universal-portfolios

I followed these instructions: https://github.com/Marigold/universal-portfolios - in a fresh install (everything new today on CentOS 8) of jupyter lab, and the example notebook fails to run as follows:-

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-1-5760efab9188> in <module>
     11 
     12 import universal as up
---> 13 from universal import tools, algos
     14 from universal.algos import *
     15 

~/.local/lib/python3.6/site-packages/universal/tools.py in <module>
      3 import scipy.optimize as optimize
      4 from scipy.special import betaln
----> 5 from pandas.stats.moments import rolling_mean as rolling_m
      6 from pandas.stats.moments import rolling_corr
      7 import matplotlib.pyplot as plt

ModuleNotFoundError: No module named 'pandas.stats'

This guy had the same problem: #20 - but it was closed with no explanation?

Practical use

Hi,
I just discovered your very cool repo. I was able to play with the parameters on my own historical data using:

algo = algos.OLMAR(window=5, eps=10)
result = algo.run(self.df)

The Jupyter Notebook has a "How to write your own algorithm" section, but in my view it is not very comprehensive.

How can I translate this algo into something that works in real time? How do I feed the data in? How do I extract the best model parameters at time t and translate this into actual buy & sell signals?
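My rough idea of a real-time loop is below (a sketch; result.weights is an assumption about the attribute name, and price_history is whatever up-to-date price DataFrame I maintain):

    from universal import algos

    algo = algos.OLMAR(window=5, eps=10)

    def target_allocation(price_history):
        result = algo.run(price_history)      # re-run on the full history each day
        return result.weights.iloc[-1]        # most recent target weights

    # buy & sell signals would then be the difference between these targets
    # and the allocation currently held

Is that the intended usage, or is there a better way to feed new data in incrementally?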

I am willing to help write a comprehensive guide on how to do this and make a PR; can you point me in the right direction?
Thanks,
DrPaprikaa

Data information

Hi,
It would be great to know about the granularity and time frame of the data, and the transformations applied to these stock prices.

Thank you.

Which data format to use with OLMAR?

Hello, I have been trying to use the algorithms, but since there aren't any docs (just the IPython notebooks) I want to ask what kind of format actually goes into the OLMAR method. Is it the ratio of prices, i.e. at time t the price relative p_t / p_(t-1) (the ratio of the current price to the previous price), or is it the actual price of the asset at time t that should be fed to the algorithm? Thank you.

PS: I have gotten exceptional results on my dataset with the price-ratio format and extremely poor results with the actual-price format with OLMAR. Given that the IPython notebook uses the nyse_o dataset, which is 'ratio', I would choose 'ratio' as the correct format; but below that example, the yahoo dataset does not seem to use the 'ratio' price format with OLMAR. Which format is the correct one?
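For clarity, this is the 'ratio' transformation I mean (a sketch; whether OLMAR expects this, raw prices, or converts internally is exactly my question):

    import pandas as pd

    def to_price_relatives(prices: pd.DataFrame) -> pd.DataFrame:
        # x_t = p_t / p_{t-1}; the first row has no previous price, so use 1.0
        return (prices / prices.shift(1)).fillna(1.0)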

On-Line porfolios IPython notebook

I was going over the example notebook to get some introduction, and Binder complained that the notebook is in an old format and requires an update, or else it might be impossible to use in the future.

Perhaps you could update it to the current version?

Include another algo from the same family

There is a set of algos related to Universal Portfolios coming from the work of Gyorfi; check http://www.cs.bme.hu/~oti/portfolio/. If you are still active in this area, you should consider adding some of them.

On a side topic, thank you for making this code available. I did develop some similar routines for R (the logopt package), but I am in the process of standardizing on Python for all my work, so I was planning to port my code to numpy and related packages; now I don't need to do that (except possibly for the Gyorfi algos). If you want to see applications of my code, you can have a look at a (stale) blog I wrote about it: http://optimallog.blogspot.com/

ModuleNotFoundError: No module named 'algos'

Hello!
I am trying to test onlineportfolio.ipynb and got this error:
ModuleNotFoundError: No module named 'algos'
It's related to the ucrp flag in result.plot(weights=False, assets=False, ucrp=True, logy=True); in effect, if I set ucrp=False I don't get the error, but of course I don't see the UCRP graph.
Thank you for your good work.

undefined self.window in update_tco() in algo.py

I don't understand where self.window is defined. I guess self.window = 1, since the update is done in one step?

    def update_tco(self, b, x_pred):
        """
        Transaction Costs Optimization
        Paper : https://ink.library.smu.edu.sg/cgi/viewcontent.cgi?referer=&httpsredir=1&article=4761&context=sis_research
        """

        lambd = 10 * self.trx_fee_pct

        # last price adjusted weights
        updated_b = np.multiply(b, x_pred) / np.dot(b, x_pred)

        # Calculate variables
        vt = x_pred / np.dot(updated_b, x_pred)
        v_t_ = np.dot(1, vt) / self.window  # self.window is not defined anywhere?

        # Update portfolio
        b_1 = self.trx_fee_n * (vt - np.dot(v_t_, 1))
        b_ = b_1 + np.sign(b_1) * np.maximum(np.zeros(len(b_1)), np.abs(b_1) - lambd)

        # project it onto simplex
        return tools.simplex_proj(y=b_)

No module named 'pandas.stats'

pandas 0.24.2
universal-portfolios 0.3.3

When I attempt to from universal import algos, I get this error:

--> 117     from universal import algos

/usr/local/lib/python3.5/dist-packages/universal/algos/__init__.py in <module>
----> 1 from .crp import CRP
      2 from .bah import BAH
      3 from .anticor import Anticor
      4 from .corn import CORN
      5 from .bcrp import BCRP

/usr/local/lib/python3.5/dist-packages/universal/algos/crp.py in <module>
----> 1 from ..algo import Algo
      2 from .. import tools
      3 import numpy as np
      4 import pandas as pd
      5 import matplotlib.pyplot as plt

/usr/local/lib/python3.5/dist-packages/universal/algo.py in <module>
      6 import inspect
      7 import copy
----> 8 from .result import AlgoResult, ListResult
      9 from scipy.misc import comb
     10 from . import tools

/usr/local/lib/python3.5/dist-packages/universal/result.py in <module>
      3 import matplotlib.pyplot as plt
      4 import pickle
----> 5 from universal import tools
      6 import seaborn as sns
      7 from statsmodels.api import OLS

/usr/local/lib/python3.5/dist-packages/universal/tools.py in <module>
      3 import scipy.optimize as optimize
      4 from scipy.special import betaln
----> 5 from pandas.stats.moments import rolling_mean as rolling_m
      6 from pandas.stats.moments import rolling_corr
      7 import matplotlib.pyplot as plt

ImportError: No module named 'pandas.stats'
  In call to configurable 'train' (<function train at 0x7f03e5e6f730>)

It looks like #15 already ran into this issue. The library should be updated to the latest pandas, since newer projects will be unable to revert their pandas version to 0.22.

It looks like the only two instances that need to be updated are in tools.py:

from pandas.stats.moments import rolling_mean as rolling_m
from pandas.stats.moments import rolling_corr

I believe this is the replacement: would it be df.rolling().mean() and df.rolling().corr()?
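Something along these lines, I think (a sketch of the drop-in replacements; the window arguments at the actual call sites in tools.py may differ):

    import pandas as pd

    def rolling_mean(x, window):
        return x.rolling(window).mean()

    def rolling_corr(x, y, window):
        return x.rolling(window).corr(y)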

Adding or subtracting the latest dataset entry makes the model perform differently

Describe the bug
If one has a dataset of the daily closing prices of, let's say, 30 stocks and adds the latest closing prices for the new day, then the model trained on the dataset minus the newest entry will no longer yield the same returns. It won't even have the same weights. This is very concerning, as one new data entry shouldn't be able to affect the previous days' weights or returns; instead, the OLPS algo should just trade accordingly.

To Reproduce
Steps to reproduce the behavior:

  1. Download a dataset and remove the latest date entry.
  2. Train any kind of OLPS model on this dataset.
  3. Go back into your dataset and add the latest date entry to it again.
  4. Run the OLPS model with the same hyperparameters and see how differently it behaves on the new data.

Expected behavior
Adding a new date entry shouldn't affect the previous days' weights or returns.
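A minimal reproduction sketch of the steps above (prices is the full DataFrame of daily closes; result.weights is an assumption about the attribute name):

    from universal import algos

    algo = algos.OLMAR(window=5, eps=10)

    res_full = algo.run(prices)               # trained on all days
    res_trunc = algo.run(prices.iloc[:-1])    # same data minus the newest day

    # I would expect the overlapping days to have identical weights
    overlap = res_full.weights.iloc[:-1] - res_trunc.weights
    print(overlap.abs().max())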

Invalid equity result using result weight

Hi, thanks for your excellent library.
I tried to verify the result using 15 days of data with SPY, GLD, TLT.
For the given weights resulting from the OLMAR calculation, I did the manual calculation as in sheet 1.

https://docs.google.com/spreadsheets/d/19svAxgDhEdOvGfWfQhcfH-V-7Kp94tTU/edit?usp=sharing&ouid=113915791728325835951&rtpof=true&sd=true

The final equity is in the total column. I notice that starting from K8, the values differ from the equity values of the OLMAR result, as shown in sheet 2.
In sheet 1, if I instead use the next row's weight (i.e., row 7 uses row 8's weight), then the results match. But that leaves the last row without any future weight, because there is no more row to shift up. Can you help me understand how to use the weight output to adjust the portfolio? Is there something I am doing wrong, or is the algorithm looking at future data?

Thank you so much!
