fabian-sp / gglasso
A Python package for General Graphical Lasso computation
License: MIT License
I am getting the following warnings when running the tests of the repo (all the tests pass):
gglasso/problem.py:19
/home/mp2242/GGLasso/gglasso/problem.py:19: DeprecationWarning: invalid escape sequence \T
"""
gglasso/problem.py:472
/home/mp2242/GGLasso/gglasso/problem.py:472: DeprecationWarning: invalid escape sequence \l
"""
gglasso/problem.py:502
/home/mp2242/GGLasso/gglasso/problem.py:502: DeprecationWarning: invalid escape sequence \l
"""
gglasso/problem.py:690
/home/mp2242/GGLasso/gglasso/problem.py:690: DeprecationWarning: invalid escape sequence \g
"""
gglasso/solver/admm_solver.py:17
/home/mp2242/GGLasso/gglasso/solver/admm_solver.py:17: DeprecationWarning: invalid escape sequence \m
"""
gglasso/solver/ext_admm_solver.py:21
/home/mp2242/GGLasso/gglasso/solver/ext_admm_solver.py:21: DeprecationWarning: invalid escape sequence \m
"""
gglasso/solver/ppdna_solver.py:154
/home/mp2242/GGLasso/gglasso/solver/ppdna_solver.py:154: DeprecationWarning: invalid escape sequence \m
"""
gglasso/tests/test_solvers.py::test_ADMM_GGL
/home/mp2242/GGLasso/gglasso/helper/basic_linalg.py:26: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float64, 2d, A), array(float64, 2d, A))
res += Sdot(X[k,:,:], Y[k,:,:])
gglasso/tests/test_solvers.py::test_admm_ppdna_ggl
/home/mp2242/GGLasso/gglasso/solver/ggl_helper.py:262: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float64, 2d, A), array(float64, 2d, F))
ij_entry = jacobian_prox_phi(X[:,i,j] , l1 , l2, reg)
gglasso/tests/test_solvers.py::test_admm_ppdna_ggl
/home/mp2242/GGLasso/gglasso/solver/ggl_helper.py:262: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float64, 2d, C), array(float64, 2d, A))
ij_entry = jacobian_prox_phi(X[:,i,j] , l1 , l2, reg)
gglasso/tests/test_solvers.py::test_admm_ppdna_ggl
/home/mp2242/GGLasso/gglasso/solver/ggl_helper.py:417: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float64, 2d, A), array(float64, 1d, A))
r = b - hessian_Y(x, Gamma, eigQ, W, sigma_t)
gglasso/tests/test_solvers.py::test_admm_ppdna_ggl
/home/mp2242/GGLasso/gglasso/solver/ggl_helper.py:417: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (array(float64, 2d, F), array(float64, 2d, A))
r = b - hessian_Y(x, Gamma, eigQ, W, sigma_t)
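The NumbaPerformanceWarnings above are a separate issue: slices such as `X[k,:,:]` or transposed intermediates passed to `@` are not C-contiguous, so Numba falls back to a slower matmul path. One possible way to silence them (a sketch, not necessarily how the package should fix it) is to copy the operands into contiguous layout first with `np.ascontiguousarray`:

```python
import numpy as np

# Slicing/transposing produces views that are not C-contiguous, which is
# what triggers NumbaPerformanceWarning for '@' (matmul).
X = np.zeros((4, 3, 3))
A = X[0].T                      # transposed view: F-contiguous, not C-contiguous
B = np.ascontiguousarray(A)     # C-contiguous copy; '@' runs at full speed on it
print(A.flags['C_CONTIGUOUS'], B.flags['C_CONTIGUOUS'])
```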
I think that the warnings can be easily fixed.
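For the invalid-escape DeprecationWarnings: they come from LaTeX commands such as `\lambda` or `\min` inside ordinary (non-raw) docstrings. Prefixing the docstring with `r` makes the backslashes literal. A minimal sketch of the difference:

```python
import warnings

# A docstring containing "\l" is an invalid escape sequence in a normal
# string and triggers a DeprecationWarning (SyntaxWarning on newer Pythons);
# the raw-string form r"""...""" keeps the backslash literally and is silent.
plain = 'def f():\n    """minimize \\lambda * x"""\n'
raw   = 'def f():\n    r"""minimize \\lambda * x"""\n'

def compiles_without_warning(src):
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        compile(src, "<example>", "exec")
    return len(caught) == 0
```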
Thank you in advance
Hello,
I am getting the following warnings when trying to compute the inverse covariance matrix (SGL). I'm not sure what this warning message means (for example, what is the EV?) and don't know how to resolve it. I have already searched the documentation but didn't find anything related.
SINGLE GRAPHICAL LASSO PROBLEM
Regularization parameters:
{'lambda1': 0.1, 'mu1': None}
ADMM terminated after 1000 iterations with status: max iterations reached.
WARNING: Theta (Theta - L resp.) is not positive definite. Solve to higher accuracy! (min EV is -0.0031095006544593776)
ADMM terminated after 1000 iterations with status: max iterations reached.
WARNING: Theta (Theta - L resp.) is not positive definite. Solve to higher accuracy! (min EV is -0.0025232997414576415)
ADMM terminated after 1000 iterations with status: max iterations reached.
WARNING: Theta (Theta - L resp.) is not positive definite. Solve to higher accuracy! (min EV is -0.002806690357338764)
......
My source code is similar to the example given in the docs; tmp is a data matrix.
Note: the feature dimension may be much larger than the number of instances (31), so it may be a very sparse matrix.
import numpy as np
from gglasso.problem import glasso_problem

P = glasso_problem(tmp.cov().to_numpy(), 31, reg_params = {'lambda1': 0.1}, latent = False, do_scaling = False)
print(P)

lambda1_range = np.logspace(1, -3, 30)
modelselect_params = {'lambda1_range': lambda1_range}

P.model_selection(modelselect_params = modelselect_params, method = 'eBIC', gamma = 0.1)
# regularization parameters are set to the best ones found during model selection
print(P.reg_params)
Could you please help me check what the problem is?
Thanks.
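For context: "EV" in that warning stands for eigenvalue. It fires because the smallest eigenvalue of the estimated Theta is slightly negative, i.e. the ADMM iterate is not quite positive definite when the iteration cap is reached; solving to a tighter tolerance (e.g. passing smaller tol/rtol to P.solve, if your gglasso version exposes them, as the solve signature suggests) typically helps. You can check the minimum eigenvalue of any symmetric matrix yourself with plain NumPy:

```python
import numpy as np

def min_eigenvalue(theta):
    # eigvalsh is for symmetric matrices and returns real eigenvalues.
    return np.linalg.eigvalsh(theta).min()

# A symmetric matrix that is NOT positive definite:
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])       # eigenvalues are 3 and -1
print(min_eigenvalue(A))         # negative => not positive definite
```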
This package has a dependency on decorator v4.4.2. However, newer versions are available, so I would like to request support for these newer versions.
I have noticed that similar sklearn classes, such as GraphicalLasso, expose methods (e.g. fit) whose counterparts exist in this project under different names and with different specifications. For example, the estimator class could be created separately and the data then provided via a fit method (as sklearn does), instead of defining a glasso_problem with the data and then calling the solve method without any arguments.
Thus, I think the software could be improved by adopting method naming and functionality similar to sklearn's, since sklearn is widely used by the data science / ML community.
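As an illustration, a minimal sklearn-style wrapper might look like the sketch below. Everything here is hypothetical: GGLassoEstimator is an invented name, and the inner _solve is a stand-in for constructing a glasso_problem and calling solve(). Only the sklearn conventions matter here: hyperparameters in __init__, fit() returning self, and fitted attributes carrying a trailing underscore.

```python
import numpy as np

class GGLassoEstimator:
    """Hypothetical sklearn-style wrapper sketch (not part of gglasso)."""

    def __init__(self, lambda1=0.1):
        self.lambda1 = lambda1              # hyperparameters set in __init__

    def _solve(self, S):
        # Stand-in solver: in practice this would build a
        # glasso_problem(S, N, reg_params={'lambda1': ...}) and call solve().
        return np.linalg.inv(S + self.lambda1 * np.eye(S.shape[0]))

    def fit(self, S, N=None):
        self.precision_ = self._solve(S)    # fitted attributes end with "_"
        return self                         # fit returns self, sklearn-style

est = GGLassoEstimator(lambda1=0.1).fit(np.eye(3))
```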
Trying to run the following code:
P = glasso_problem(fc_cl, len(fc_cl), reg_params={'lambda1': 0.005, 'lambda2': 0.001}, latent=False, do_scaling=False)
I checked that all of the input data is symmetric, but this error still appears:
AssertionError Traceback (most recent call last)
Cell In[31], line 1
----> 1 P.solve(verbose=True)
File ~/projects/OpenCloseProject/nilearn_env/lib/python3.10/site-packages/gglasso/problem.py:440, in glasso_problem.solve(self, Omega_0, solver_params, tol, rtol, solver, verbose)
436 info = {}
439 elif self.conforming:
--> 440 sol, info = ADMM_MGL(S = self.S, lambda1 = self.reg_params['lambda1'], lambda2 = self.reg_params['lambda2'], reg = self.reg,\
441 Omega_0 = self.Omega_0, latent = self.latent, mu1 = self.reg_params['mu1'],\
442 tol = self.tol, rtol = self.rtol, **self.solver_params)
445 else:
446 sol, info = ext_ADMM_MGL(S = self.S, lambda1 = self.reg_params['lambda1'], lambda2 = self.reg_params['lambda2'], reg = self.reg,\
447 Omega_0 = self.Omega_0, G = self.G, tol = self.tol, rtol = self.rtol,\
448 latent = self.latent, mu1 = self.reg_params['mu1'], **self.solver_params)
File ~/projects/OpenCloseProject/nilearn_env/lib/python3.10/site-packages/gglasso/solver/admm_solver.py:172, in ADMM_MGL(S, lambda1, lambda2, reg, Omega_0, Theta_0, X_0, n_samples, tol, rtol, stopping_criterion, update_rho, rho, max_iter, verbose, measure, latent, mu1)
169 Omega_t[k,:,:] = phiplus(beta = nk[k,0,0]/rho, D = eigD[k,:], Q = eigQ[k,:,:])
171 # Theta Update
--> 172 Theta_t = prox_p(Omega_t + L_t + X_t, (1/rho)*lambda1, (1/rho)*lambda2, reg)
174 #L Update
175 if latent:
File ~/projects/OpenCloseProject/nilearn_env/lib/python3.10/site-packages/gglasso/solver/ggl_helper.py:195, in prox_p()
192 @njit()
193 def prox_p(X, l1, l2, reg):
194 #X is always symmetric and hence we only calculate upper diagonals
--> 195 assert np.abs(X - trp(X)).max() <= 1e-5, "input X is not symmetric"
196 assert np.minimum(l1,l2) > 0, "lambda 1 and lambda2 have to be positive"
198 (K,p,p) = X.shape
AssertionError: input X is not symmetric
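Note that the assertion only tolerates asymmetry up to 1e-5, and it runs on an intermediate ADMM quantity, not on the raw input. Even when the input looks symmetric, floating-point round-off (e.g. from computing the covariance) can leave tiny asymmetries that grow through the iterates. A common workaround (a sketch, not an official gglasso fix) is to symmetrize the input matrix explicitly before building the problem:

```python
import numpy as np

def symmetrize(S):
    # Force exact symmetry by averaging a matrix with its transpose.
    return 0.5 * (S + S.T)

# Example: a tiny asymmetry just above the 1e-5 assertion tolerance
S = np.array([[1.0, 0.5 + 2e-5],
              [0.5, 1.0]])
S_sym = symmetrize(S)
print(np.abs(S_sym - S_sym.T).max())   # exactly symmetric after averaging
```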
It looks like np.float is/has been deprecated in NumPy. Currently, GGLasso requires any version of NumPy >= 1.17.3. However, this breaks for versions of NumPy > 1.22.0, where np.float has been fully removed. Maybe this should be considered in the requirements, or the np.float usage should be updated?
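A sketch of what the fix on the package side could look like: np.float was only an alias for the builtin float, so occurrences can be replaced with float (or an explicit dtype such as np.float64) without changing behavior.

```python
import numpy as np

# Replacements for the removed np.float alias:
x = np.array([1, 2, 3], dtype=float)       # instead of dtype=np.float
y = np.array([1, 2, 3], dtype=np.float64)  # or an explicit 64-bit dtype
print(x.dtype, y.dtype)
```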