
lenstronomy's Introduction


lenstronomy is a multi-purpose software package for modeling strong gravitational lenses. lenstronomy finds application in time-delay cosmography and measuring the expansion rate of the Universe, in quantifying lensing substructure to infer dark matter properties, in the morphological quantification of galaxies, in quasar-host galaxy decomposition, and much more. An (incomplete) list of publications making use of lenstronomy can be found at this link.

The development is coordinated on GitHub and contributions are welcome. The documentation of lenstronomy is available at readthedocs.org and the package is distributed through PyPI and conda-forge. lenstronomy is an affiliated package of astropy.

Installation


lenstronomy releases are distributed through PyPI and conda-forge. Instructions for installing lenstronomy and its dependencies can be found in the Installation section of the documentation. For settings and installation requirements in special cases that can provide speed-ups, we also refer to the Installation page.

Getting started

The starting-guide Jupyter notebook walks through the main modules and design features of lenstronomy. The modular design gives the user direct access to many tools, and each module can also be used as a stand-alone package.
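For readers new to the formalism, the core quantity these modules work with is the thin-lens equation, beta = theta - alpha(theta). The following is a minimal plain-numpy illustration (not lenstronomy's own API) for a singular isothermal sphere (SIS), whose deflection has constant magnitude theta_E directed radially:

```python
import numpy as np

def sis_deflection(x, y, theta_E=1.0):
    """Deflection angle of an SIS lens: radial direction, constant magnitude theta_E."""
    r = np.sqrt(x**2 + y**2)
    r = np.where(r == 0, 1e-12, r)  # avoid division by zero at the lens center
    return theta_E * x / r, theta_E * y / r

# ray-shoot an image-plane position back to the source plane: beta = theta - alpha
theta_x, theta_y = 1.5, 0.0
ax, ay = sis_deflection(theta_x, theta_y, theta_E=1.0)
beta_x, beta_y = theta_x - ax, theta_y - ay
print(beta_x, beta_y)  # source at (0.5, 0.0)
```

lenstronomy's LensModel module provides this kind of mapping (and much more) for a large library of profiles.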

If you are new to gravitational lensing, check out the mini lecture series giving an introduction to gravitational lensing with interactive Jupyter notebooks in the cloud.

Example notebooks

We have made an extension module available at https://github.com/lenstronomy/lenstronomy-tutorials, where you can find simple example notebooks for various use cases. The latest versions of the notebooks should be compatible with the most recent pip release of lenstronomy.

Affiliated packages

Multiple affiliated packages that make use of lenstronomy can be found here (list not complete), and further packages are under development by the community.

Mailing list and Slack channel

You can join the lenstronomy mailing list by signing up on the google groups page.

The email list is meant to provide a communication platform between users and developers: you can ask questions and suggest new features. New releases will be announced via this mailing list.

We also have a Slack channel for the community. Please send us an email so that we can add you to the channel.

If you encounter errors or problems with lenstronomy, please let us know!

Contribution

We welcome EVERY contribution from EVERYONE! See our code of conduct.

Check out the contributing page and become an author of lenstronomy! A big shout-out to the current list of contributors and developers!

Attribution

The design concept of lenstronomy is reported by Birrer & Amara 2018 and is based on Birrer et al 2015. The current JOSS software publication is presented by Birrer et al. 2021. Please cite Birrer & Amara 2018 and Birrer et al. 2021 when you use lenstronomy in a publication, and link to https://github.com/lenstronomy/lenstronomy. Please also cite Birrer et al 2015 when you make use of the lenstronomy work-flow or the Shapelet source reconstruction, and make sure to also cite the relevant work that was implemented in lenstronomy, as described in the release paper and the documentation. Don't hesitate to reach out to the developers if you have questions!

lenstronomy's People

Contributors

ajshajib, amn3142, aymgal, basicallymaria, benmetha, dangilman, danjohnson98, dartoon, ewoudwempe, gipagano, jelleaalbers, jhod0, jiwoncpark, johannesulf, lucateo, lynevdv, martin-millon, mattgomer, maverick-oh, michael7198, mueland, nataliehogg, nkhadka21, pierrefleury, pre-commit-ci[bot], rmorgan10, sibirrer, swagnercarena, vikramb1, ylilan



lenstronomy's Issues

link relative differences between parameters

Sampling the relative difference between two parameters (also of other models) rather than sampling each parameter independently. This might better follow the covariances inherent in the data and allow smoother sampling in certain cases. @sibirrer
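A minimal sketch of what such a reparameterization could look like (names are hypothetical, not lenstronomy API): sample one parameter plus an offset, and map back to physical parameters before evaluating the likelihood.

```python
# Hypothetical sketch: instead of sampling two correlated parameters (p1, p2)
# independently, sample p1 and the offset d = p2 - p1, which can be better aligned
# with the covariance structure of the posterior.

def to_physical(p1, d):
    """Map sampling-space (p1, offset) to physical (p1, p2)."""
    return p1, p1 + d

def to_sampling(p1, p2):
    """Inverse map: physical (p1, p2) to sampling-space (p1, offset)."""
    return p1, p2 - p1

p1, p2 = 1.2, 1.25
s = to_sampling(p1, p2)
assert to_physical(*s) == (p1, p2)  # round trip is exact
```

The sampler then explores (p1, d); bounds and priors would need to be expressed in the new coordinates.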

improve tests for added samplers

@aymgal: the added source code has rather low testing:
https://coveralls.io/builds/24083754
I am aware that the samplers themselves will not be testable, but all the other routines should be doable. Overall test coverage of lenstronomy is reduced by 0.5%. Sorry for being picky here, but I want to keep (and even beat) the current state of testing. Thank you!

import error in jupyter notebook; first code cell

[IPKernelApp] Exception in execute request:

ImportError Traceback (most recent call last)
in ()
2
3 # import main simulation class of lenstronomy
----> 4 from lenstronomy.Extensions.SimulationAPI.simulations import Simulation
5
6 SimAPI = Simulation()

/Users/nord/Projects/lenstronomy/lenstronomy/Extensions/SimulationAPI/simulations.py in ()
----> 1 from lenstronomy.ImSim.make_image import MakeImage
2 from lenstronomy.ImSim.lens_model import LensModel
3 from lenstronomy.Solver.image_positions import ImagePosition
4 import astrofunc.util as util
5 from astrofunc.util import Util_class

/Users/nord/Projects/lenstronomy/lenstronomy/ImSim/make_image.py in ()
3 from lenstronomy.ImSim.lens_model import LensModel
4 from lenstronomy.ImSim.light_model import LensLightModel, SourceModel
----> 5 from lenstronomy.ImSim.point_source import PointSource
6
7 from lenstronomy.ImSim.data import Data

/Users/nord/Projects/lenstronomy/lenstronomy/ImSim/point_source.py in ()
1 import numpy as np
2
----> 3 import astrofunc.util as util
4
5

ImportError: No module named astrofunc.util

Issue with the ellipticity parameters during the fitting routine

Below is the output of a fitting sequence. Notice that the ellipticity parameters of the lens light result are the exact same values of the ellipticity parameters of the lens result from the last fitting routine. This looks like there might be a bug that initiates the lens light result ellipticities with the last lens result ellipticities for each fitting routine.

Computing the PSO ...
10 20 30 40 50 60 70 80 90 100
max iteration reached! stoping
(-3.991083950193232, 'reduced X^2 of best position')
(-19171.171754753188, 'logL')
(9607, 'effective number of data points')
([{'theta_E': 2.2658532712145694, 'center_x': -0.262688967918308, 'center_y': -1.1404516783650798, 'e1': -0.6835855040117955, 'gamma': 2.0, 'e2': 0.06766219462405391}, {'e1': -0.17277591776609602, 'e2': -0.004212386201934322}], 'lens result')
([{'n_sersic': 1.2101341550405091, 'center_x': -0.16551583425884125, 'center_y': 0.3980786989886641, 'R_sersic': 0.01174977715669391, 'I0_sersic': 1}], 'source result')
([{'R_sersic': 0.5198435543188105, 'I0_sersic': 1, 'n_sersic': 8.54619360833984, 'center_x': -0.10394830644136022, 'center_y': 0.032031977997840855, 'e1': 0.0, 'e2': 0.0}, {'mean': 1}], 'lens light result')
([{'point_amp': 1, 'ra_image': array([-0.26293853, -1.16111134, 0.07670262, 1.25560955]), 'dec_image': array([-1.14435482, 2.00781588, 2.10684856, 1.69381513])}], 'point source result')
(251.46317219734192, 'time used for PSO', 'PSO')
===================
Computing the PSO ...
10 20 30 40 50 60 70 80 90 100
max iteration reached! stoping
(-82.09378982381793, 'reduced X^2 of best position')
(-394337.51941870945, 'logL')
(9607, 'effective number of data points')
([{'theta_E': 2.443229678717895, 'center_x': -0.31925546624338047, 'center_y': -1.3509513427731108, 'e1': -0.7557880427714526, 'gamma': 2.0, 'e2': 0.16897409092818896}, {'e1': -0.27709102267361096, 'e2': 0.02144742885269826}], 'lens result')
([{'n_sersic': 3.658520539312845, 'center_x': -0.11697501252269911, 'center_y': 0.2589494782305022, 'R_sersic': 0.0367272943905438, 'I0_sersic': 1}], 'source result')
([{'R_sersic': 0.7118295387881899, 'I0_sersic': 1, 'n_sersic': 10.374378125133127, 'center_x': -0.262688967918308, 'center_y': -1.1404516783650798, 'e1': -0.6835855040117955, 'e2': 0.06766219462405391}, {'mean': 1}], 'lens light result')
([{'point_amp': 1, 'ra_image': array([-0.32019651, -1.14376681, 0.06836124, 1.22290893]), 'dec_image': array([-1.35820856, 1.98377831, 2.08533192, 1.61804553])}], 'point source result')
(256.9203588962555, 'time used for PSO', 'PSO')
===================
iteration of step 11 gave best reconstruction.
log likelihood before: -394337.51941870945 and log likelihood after: -440520.3413147764
Computing the PSO ...
10 20 30 40 50 60 70 80 90 100
max iteration reached! stoping
(-34.42961225023929, 'reduced X^2 of best position')
(-165365.4276378993, 'logL')
(9606, 'effective number of data points')
([{'theta_E': 4.971275164006167, 'center_x': 4.674338692519211, 'center_y': 2.183729849036633, 'e1': 0.5234875406962552, 'gamma': 1.9005320425598882, 'e2': -1.9302898068344583}, {'e1': 0.026393279834134063, 'e2': 0.18026803969180105}], 'lens result')
([{'n_sersic': 3.918801335435724, 'center_x': 4.491783584968015, 'center_y': 2.1241156308801283, 'R_sersic': 0.03462582928108242, 'I0_sersic': 1}], 'source result')
([{'R_sersic': 0.6809958956397767, 'I0_sersic': 1, 'n_sersic': 10.62172644723493, 'center_x': -0.31925546624338047, 'center_y': -1.3509513427731108, 'e1': -0.7557880427714526, 'e2': 0.16897409092818896}, {'mean': 1}], 'lens light result')
([{'point_amp': 1, 'ra_image': array([-0.25592257, -1.14670748, 0.10838787, 1.19883924]), 'dec_image': array([-1.30630152, 1.97325499, 2.11690723, 1.59755353])}], 'point source result')
(297.37227487564087, 'time used for PSO', 'PSO')
===================
iteration of step 13 gave best reconstruction.
log likelihood before: -165365.4276378993 and log likelihood after: -181984.6246381529
Computing the PSO ...
10 20 30 40 50 60 70 80 90 100
max iteration reached! stoping
(-25.2987790496482, 'reduced X^2 of best position')
(-121510.0357754603, 'logL')
(9606, 'effective number of data points')
([{'theta_E': 4.971275164006167, 'center_x': 4.674338692519211, 'center_y': 2.183729849036633, 'e1': 0.5234875406962552, 'gamma': 1.8819269229474558, 'e2': -1.9302898068344583}, {'e1': 0.015734324898259912, 'e2': 0.18957469310475653}], 'lens result')
([{'n_sersic': 3.3324215288842756, 'center_x': 4.483217340121929, 'center_y': 2.1094410214834336, 'R_sersic': 0.09126143903235771, 'I0_sersic': 1}], 'source result')
([{'R_sersic': 0.7020202550333958, 'I0_sersic': 1, 'n_sersic': 10.263096549517757, 'center_x': 4.674338692519211, 'center_y': 2.183729849036633, 'e1': 0.5234875406962552, 'e2': -1.9302898068344583}, {'mean': 1}], 'lens light result')
([{'point_amp': 1, 'ra_image': array([-0.28166275, -1.14653881, 0.11554444, 1.22407825]), 'dec_image': array([-1.28597166, 1.95879435, 2.11527493, 1.66086215])}], 'point source result')
(358.81216382980347, 'time used for PSO', 'PSO')
===================
iteration of step 19 gave best reconstruction.
log likelihood before: -121510.0357754603 and log likelihood after: -91156.84354035514
Computing the PSO ...
10 20 30 40 50 60 70 80 90 100
max iteration reached! stoping
(-18.86720634500439, 'reduced X^2 of best position')
(-90619.1920750561, 'logL')
(9606, 'effective number of data points')
([{'theta_E': 4.971275164006167, 'center_x': 4.674338692519211, 'center_y': 2.183729849036633, 'e1': 0.5234875406962552, 'gamma': 1.881920864557125, 'e2': -1.9302898068344583}, {'e1': 0.015731430614447414, 'e2': 0.18961217456913146}], 'lens result')
([{'n_sersic': 3.3323869710942033, 'center_x': 4.48322191155607, 'center_y': 2.109423134590232, 'R_sersic': 0.0912840088365735, 'I0_sersic': 1}], 'source result')
([{'R_sersic': 0.7022962736071097, 'I0_sersic': 1, 'n_sersic': 10.263431148119702, 'center_x': 4.674338692519211, 'center_y': 2.183729849036633, 'e1': 0.5234875406962552, 'e2': -1.9302898068344583}, {'mean': 1}], 'lens light result')
([{'point_amp': 1, 'ra_image': array([-0.28179184, -1.14649611, 0.11546242, 1.22444828]), 'dec_image': array([-1.28623442, 1.95798722, 2.11576269, 1.66020438])}], 'point source result')
(376.12515211105347, 'time used for PSO', 'PSO')
===================
iteration of step 0 gave best reconstruction.
log likelihood before: -90619.1920750561 and log likelihood after: -95245.53921389392
1564.73388696 total time needed for computation
Result saved in: j0147_21_s_out.txt
============ CONGRATULATION, YOUR JOB WAS SUCCESSFUL ================

Issue with PS rendering (with 0.9.2)

I'm having trouble with version 0.9.2 : when I try the following kwargs

kwargs_model = {..., 'point_source_model_list': ['LENSED_POSITION'], 'fixed_magnification_list': [False]} 

#kwargs_params = {..., 'point_source_model': [[{'ra_image': array([-0.9,  0.6,  0.9, -0.7]), 'dec_image': array([-0.9, -0.9,  0.7,  0.9])}], [{'ra_image': array([0.1, 0.1, 0.1, 0.1]), 'dec_image': array([0.1, 0.1, 0.1, 0.1])}], [{}], [{'ra_image': array([-10., -10., -10., -10.]), 'dec_image': array([-10., -10., -10., -10.])}], [{'ra_image': array([10., 10., 10., 10.]), 'dec_image': array([10., 10., 10., 10.])}]], ...}

kwargs_results = {..., 'kwargs_ps': [{'point_amp': array([266.79773197, 280.07506424, 273.46808759, 265.219639  ]), 'ra_image': array([-0.9204886 ,  0.64522794,  0.85173559, -0.70591383]), 'dec_image': array([-0.83137116, -0.78471491,  0.76415318,  0.82666677])}], 'kwargs_special': {'delta_x_image': array([-0.0010609 ,  0.00510851, -0.00247576,  0.00350916]), 'delta_y_image': array([ 0.00097327, -0.00090853,  0.00016772, -0.00074446]), 'D_dt': 5361.082157276348}, ...}

I obtain the following error in ImSim/Numerics/point_source_rendering :

lenstronomy/ImSim/Numerics/point_source_rendering.py", line 35, in _displace_astrometry
delta_x_new[0:len(delta_x)] = delta_x
ValueError: could not broadcast input array from shape (4) into shape (1)

I went back to previously called methods, and I found that the method PointSource.linear_response_set() returns the ra_pos variable as

ra_pos = [[-0.9204885979919323], [0.6452279400704862], [0.8517355905217909], [-0.7059138349159952]]

i.e. separating each individual point source of the same model into lists containing a single coordinate. Is this the expected behaviour? Looking at the point_source_rendering() and _displace_astrometry() methods, it seems not...

What is strange, though, is that with version 0.9.0 I didn't have this issue (with the exact same input kwargs), despite not having spotted any obvious changes between 0.9.0 and 0.9.2 (except this new _displace_astrometry() method, but I suspect the error occurs prior to this method).

Optimizer functionality is using private functions of other modules

@dangilman I re-designed the multi-plane lensing module a bit for better control of when to compute the observed-to-physical position convention (this could improve the speed of the lens equation solver, currently a bottleneck for Anowar).
During the re-design, I was forced to change/update your Optimizer modules as they access private functions of LensModel. Everything should work for you and all the tests pass.

In the longer term: either get rid of the private function calls (by demanding specific features from the lens model module), or perhaps you can make more direct use of the updated lens model module as it improves. Up to you. I just wanted to let you know; feel free to close this issue.

Return plotted matrix from the outplot methods

It would be a convenience feature for users to make their own customized plots from the plotted matrix without having to regenerate it. It would be useful if, e.g., ModelPlot.source_plot() returned ax, source, etc. It may not be necessary to return ax at all, as the user already provides the axes instance to the method in the first place.

1D Power Spectrum Calculation

  1. I believe the formula for the power spectrum in line 22 of lenstronomy.Util.correlation should be
    ps2D = (numpy.abs(F2))**2

  2. To get a normalization from fftpack such that the sum of the FFT amplitudes equals the amplitude of the input signal, you have to divide by N, where N is the total number of points going into your Fourier transform.

  3. I think there should be a 1/[pi*(r_outer_edge_of_bin**2 - r_inner_edge_of_bin**2)] factor in the return from the analysis_util.azimuthalAverage() function, since that is the area of an annulus. I might be misreading the code there, but I am pretty sure you need the angular factor.
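The three points can be sketched together in plain numpy (an illustrative re-implementation, not lenstronomy's actual lenstronomy.Util.correlation code; function and variable names are hypothetical):

```python
import numpy as np

def power_spectrum_1d(image, n_bins=20):
    """Azimuthally averaged power spectrum of a 2D image (illustrative sketch)."""
    F = np.fft.fftshift(np.fft.fft2(image))
    ps2d = np.abs(F)**2 / image.size          # points 1 & 2: |F|^2, normalized by N
    ny, nx = image.shape
    y, x = np.indices(image.shape)
    r = np.hypot(x - nx / 2, y - ny / 2)      # radial frequency coordinate
    edges = np.linspace(0.0, r.max(), n_bins + 1)
    ps1d = np.empty(n_bins)
    for i in range(n_bins):
        mask = (r >= edges[i]) & (r < edges[i + 1])
        area = np.pi * (edges[i + 1]**2 - edges[i]**2)   # point 3: annulus area
        ps1d[i] = ps2d[mask].sum() / area if mask.any() else 0.0
    return edges, ps1d
```

Dividing by the annulus area pi*(r_out**2 - r_in**2) turns the per-bin sum into a mean power per unit area in Fourier space.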

Change 'theta_E' keyword for Chameleon profile to reflect its actual definition

Effective Einstein radius is calculated fine for SPEMD, but not for Chameleon profile.

import numpy as np
import matplotlib.pyplot as plt

from lenstronomy.LensModel.lens_model import LensModel
from lenstronomy.LensModel.lens_model_extensions import LensModelExtensions


theta_es = np.linspace(0.4, 2, 30)
e1 = 0.0
e2 = 0.0

chams_theta_e = []
spemds_theta_e = []


for theta_e in theta_es:
    kwargs_chameleon = {'theta_E': theta_e, 'w_c': 1.13, 'w_t': 0.03, 'e1': e1, 'e2': e2,
                             'center_x': 0, 'center_y': 0}

    kwargs_spemd = {'theta_E': theta_e, 'gamma': 1.8, 'e1': e1, 'e2': e2, 'center_x': 0, 'center_y': 0}

    lens_model_ext_cham = LensModelExtensions(LensModel(lens_model_list=['CHAMELEON']))
    lens_model_ext_spemd = LensModelExtensions(LensModel(lens_model_list=['SPEMD']))

    spacing = 1000

    chams_theta_e.append(lens_model_ext_cham.effective_einstein_radius([kwargs_chameleon], spacing=spacing))
    spemds_theta_e.append(lens_model_ext_spemd.effective_einstein_radius([kwargs_spemd], spacing=spacing))

plt.plot(theta_es, spemds_theta_e, label='SPEMD')
plt.plot(theta_es, chams_theta_e, label='CHAMELEON')
plt.xlabel('True')
plt.ylabel('Calculated')
plt.legend()
plt.show()

test

MCMC walkers initiate outside given bounds.

The MCMC walkers can be initiated outside the bounds: it seems that sigma_scale is prioritized, or the bounds are not checked. The following issue then happens, as shown in the figure below. The horizontal axis is MCMC steps, and the 1-sigma spread of the walkers at each step is shown along the vertical axis; the two panels are for two different parameters. In the top panel, the walkers initiated outside cannot come back into the allowed bounds: if the likelihood function checks for bounds and assigns -infinite log-likelihood to out-of-bounds walkers, they can't move to any other point. While initiating the walker distribution, the specified bounds need to be prioritized.

[screenshot, 2018-04-30: 1-sigma walker spread vs. MCMC step for two parameters]
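A minimal sketch of the suggested behaviour (plain numpy, names hypothetical, not lenstronomy code): clip the Gaussian draw to the bounds before handing positions to the sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_walkers(mean, sigma, lower, upper, n_walkers):
    """Draw walker starting points around the best fit, then enforce the prior bounds."""
    mean, sigma = np.asarray(mean), np.asarray(sigma)
    pos = rng.normal(mean, sigma, size=(n_walkers, mean.size))
    return np.clip(pos, lower, upper)  # bounds take priority over sigma_scale

pos = init_walkers(mean=[0.5, 2.0], sigma=[1.0, 5.0],
                   lower=[0.0, 1.0], upper=[1.0, 3.0], n_walkers=100)
assert (pos >= [0.0, 1.0]).all() and (pos <= [1.0, 3.0]).all()
```

Clipping piles walkers up on the boundary; redrawing out-of-bounds samples would be an alternative that preserves the Gaussian shape inside the allowed region.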

clear distinction of public and private definitions

@dangilman
Make sure that only routines a user should access, and that we can assure are sustainable and "long-term supported and backwards compatible", are public functions. All other definitions (including parameters) should be _private.
Sorry to be a bit picky here, but as this sub-package is now part of lenstronomy, new releases have to be backwards compatible, and people will contact me if their routines and wrappers no longer work.

Is 'UNIFORM' light profile not compatible with flux_components()?

lenstronomy.Analysis.lens_analysis.flux_components() throws the following error if the 'UNIFORM' light profile is provided.


TypeError Traceback (most recent call last)
in ()
17
18 lens_analysis = LensAnalysis(lens_model_output['kwargs_options'])
---> 19 deflector_flux, _ = lens_analysis.flux_components(lens_model_output['lens_light_result'])
20 print deflector_flux
21

/home/ajshajib/mybin/lenstronomy/lenstronomy/Analysis/lens_analysis.pyc in flux_components(self, kwargs_light, n_grid, delta_grid, type)
134 kwargs_copy[k]['center_y'] = 0
135 if type == 'lens':
--> 136 light = self.LensLightModel.surface_brightness(x_grid, y_grid, kwargs_copy, k=k)
137 elif type == 'source':
138 light = self.SourceModel.surface_brightness(x_grid, y_grid, kwargs_copy, k=k)

/home/ajshajib/mybin/lenstronomy/lenstronomy/LightModel/light_model.pyc in surface_brightness(self, x, y, kwargs_lens_light_list, k)
16 :type x: set or single 1d numpy array
17 """
---> 18 return self.lightModel.surface_brightness(x, y, kwargs_lens_light_list, k=k)
19
20

/home/ajshajib/mybin/lenstronomy/lenstronomy/LightModel/light_model.pyc in surface_brightness(self, x, y, kwargs_list, k)
102 if self.valid_list[i]:
103 if k == None or k == i:
--> 104 flux += func.function(x, y, **kwargs_list[i])
105 return flux
106

TypeError: function() got an unexpected keyword argument 'center_x'

test failing

@aymgal the tests are failing due to multinest:
https://travis-ci.org/sibirrer/lenstronomy/jobs/547600389

You can reproduce this locally by running the tests in a virtual environment (without pre-installed MultiNest & co). One suggestion is to test only the interface specifically, and to move the import statements to a very narrow, untested region of the source code.

thanks, Simon

errors in the convergence integrals due to x_grid-x_mean close to 0 (during calculation of the deflection angles from kappa maps)

In lenstronomy 0.8.2 (using python 3.6, scipy 1.10, numpy 1.15.4), I encountered an issue with the convergence integrals (lenstronomy/LensModel/numerical_profile_integrals.py): sometimes (x_grid - x_mean) (line 130, in the calculation of r2 in _deflection_kernel) is close to 0 (around 10**-16), which makes r2 around 10**-32. So kernel_x explodes and biases the fftconvolve (line 101 in deflection_from_kappa). There is an attempt to avoid this kind of error with the line l0 = np.where(r2 == 0) (line 131 in _deflection_kernel), but 10**-32 does not satisfy this condition. Replacing "l0 = np.where(r2 == 0)" with "l0 = np.where(util.array2image(r2) < 10**-10)" solves the problem. It would be great to make this small change (or something like it) in the next version if possible. There might be the same kind of problem with _potential_kernel, but I have not encountered it.
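The proposed guard can be sketched in isolation (illustrative numpy, not the actual lenstronomy kernel; function name and threshold are the reporter's suggestion): floor r2 at a small threshold instead of testing for exact zero, so values around 10**-32 cannot blow up the kernel.

```python
import numpy as np

def deflection_kernel_x(x_grid, y_grid, x_mean=0.0, y_mean=0.0, r2_min=1e-10):
    """x-component of a 1/r^2-type deflection kernel with a near-zero guard."""
    r2 = (x_grid - x_mean)**2 + (y_grid - y_mean)**2
    # zero out the kernel where r2 is numerically tiny, not just exactly zero
    return np.where(r2 < r2_min, 0.0, (x_grid - x_mean) / np.maximum(r2, r2_min))

x = np.array([0.0, 1e-16, 1.0])   # 1e-16 offset gives r2 ~ 1e-32
y = np.zeros(3)
k = deflection_kernel_x(x, y)
assert np.isfinite(k).all() and k[1] == 0.0  # tiny-r2 pixel is suppressed, not huge
```

With an exact `r2 == 0` test, the middle pixel would instead contribute a value of order 10**16 to the convolution.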

Latest commit broke multi-source-plane scaling

My notebook broke after the latest commit. Below is the error I get:

IndexError                                Traceback (most recent call last)
<ipython-input-12-996d66422ef4> in <module>()
     11     subgrid_res = 2
     12     configure_model(job_name, deflector_model, perturber_model, mask, n_max, foreground_shear, 
---> 13                     subgrid_res, num_perturber, cluster_compute=False, verbose=True)

<ipython-input-10-4b72c7058a4a> in configure_model(job_name, deflector_model, perturber_model, mask, n_max_list, foreground_shear, subgrid_res, num_perturber, group_halo, cluster_compute, verbose)
    997         fitting_seq = FittingSequence(kwargs_data_joint, kwargs_model, kwargs_constraints, kwargs_likelihood, kwargs_params, mpi=False, verbose=True)
    998 
--> 999         fit_output = fitting_seq.fit_sequence(fitting_kwargs_list)
   1000         lens_result, source_result, lens_light_result, ps_result, cosmo_result = fitting_seq.best_fit(bijective=False)
   1001         multi_band_list_out = fitting_seq.multi_band_list

/Users/ajshajib/Dropbox/PyCharm Projects/lenstronomy/lenstronomy/Workflow/fitting_sequence.pyc in fit_sequence(self, fitting_list)
     75 
     76             elif fitting_type == 'PSO':
---> 77                 lens_result, source_result, lens_light_result, ps_result, cosmo_result, chain, param = self.pso(**kwargs)
     78                 self._updateManager.update_param_state(lens_result, source_result, lens_light_result, ps_result, cosmo_result)
     79                 chain_list.append([fitting_type, chain, param])

/Users/ajshajib/Dropbox/PyCharm Projects/lenstronomy/lenstronomy/Workflow/fitting_sequence.pyc in pso(self, n_particles, n_iterations, sigma_scale, print_key, threadCount)
    236         sampler = Sampler(likelihoodModule=self.likelihoodModule)
    237         result, chain = sampler.pso(n_particles, n_iterations, lowerLimit, upperLimit, init_pos=init_pos,
--> 238                                        threadCount=threadCount, mpi=self._mpi, print_key=print_key)
    239         lens_result, source_result, lens_light_result, ps_result, cosmo_result = param_class.args2kwargs(result,
    240                                                                                                          bijective=True)

/Users/ajshajib/Dropbox/PyCharm Projects/lenstronomy/lenstronomy/Sampling/sampler.pyc in pso(self, n_particles, n_iterations, lower_start, upper_start, threadCount, init_pos, mpi, print_key)
     57             pso.gbest.position = init_pos
     58             pso.gbest.velocity = [0]*len(init_pos)
---> 59             pso.gbest.fitness, _ = self.chain.likelihood(init_pos)
     60         X2_list = []
     61         vel_list = []

/Users/ajshajib/Dropbox/PyCharm Projects/lenstronomy/lenstronomy/Sampling/likelihood.pyc in likelihood(self, a)
    178 
    179     def likelihood(self, a):
--> 180         return self.logL(a)
    181 
    182     def computeLikelihood(self, ctx):

/Users/ajshajib/Dropbox/PyCharm Projects/lenstronomy/lenstronomy/Sampling/likelihood.pyc in logL(self, args)
    106         """
    107         #extract parameters
--> 108         kwargs_lens, kwargs_source, kwargs_lens_light, kwargs_ps, kwargs_cosmo = self.param.args2kwargs(args)
    109         #generate image and computes likelihood
    110         self._reset_point_source_cache(bool=True)

/Users/ajshajib/Dropbox/PyCharm Projects/lenstronomy/lenstronomy/Sampling/parameters.py in args2kwargs(self, args, bijective)
    190         # update source joint with point source
    191         kwargs_source = self._update_source_joint_with_point_source(kwargs_lens, kwargs_source, kwargs_ps, kwargs_cosmo,
--> 192                                                                         image_plane=bijective)
    193         # update source joint with source
    194         kwargs_source = self._update_joint_param(kwargs_source, kwargs_source, self._joint_source_with_source)

/Users/ajshajib/Dropbox/PyCharm Projects/lenstronomy/lenstronomy/Sampling/parameters.py in _update_source_joint_with_point_source(self, kwargs_lens_list, kwargs_source_list, kwargs_ps, kwargs_cosmo, image_plane)
    282 
    283     def _update_source_joint_with_point_source(self, kwargs_lens_list, kwargs_source_list, kwargs_ps, kwargs_cosmo, image_plane=False):
--> 284         kwargs_source_list = self.image2source_plane(kwargs_source_list, kwargs_lens_list, image_plane=image_plane)
    285 
    286         for setting in self._joint_source_with_point_source:

/Users/ajshajib/Dropbox/PyCharm Projects/lenstronomy/lenstronomy/Sampling/parameters.py in image2source_plane(self, kwargs_source, kwargs_lens, image_plane)
    276                 if 'center_x' in kwargs:
    277                     x_mapped, y_mapped = self._image2SourceMapping.image2source(kwargs['center_x'], kwargs['center_y'],
--> 278                                                                                 kwargs_lens, idex_source=i)
    279                     kwargs['center_x'] = x_mapped
    280                     kwargs['center_y'] = y_mapped

/Users/ajshajib/Dropbox/PyCharm Projects/lenstronomy/lenstronomy/ImSim/image2source_mapping.py in image2source(self, x, y, kwargs_lens, idex_source)
     70             if self._multi_lens_plane is False:
     71                 x_alpha, y_alpha = self._lensModel.alpha(x, y, kwargs_lens)
---> 72                 scale_factor = self._deflection_scaling_list[idex_source]
     73                 x_source = x - x_alpha * scale_factor
     74                 y_source = y - y_alpha * scale_factor

IndexError: list index out of range

I put a print statement print(idex_source, self._deflection_scaling_list) at line 72 of image2source_mapping.py. Here's the output now:

(6, [1, 1, 1.0218063886598245, 1.0218063886598245, 1])

Here's the output right after the commit c34dc99:

(6, [1, 1, 1, 1, 1, 1, 1.0218063886598245, 1.0218063886598245, 1, 1.0218063886598245, 1.0218063886598245, 1, 1.0218063886598245, 1.0218063886598245, 1])

Before the commit, deflection_scaling_list contained the whole list, so the source index for a particular band could be correctly connected. Now deflection_scaling_list sees the list for only one band, so the source index for that band doesn't get connected properly.

change light amplitude parameters to surface brightness

@sibirrer
Currently, the 'amp' parameters in the light module are defined per pixel. Changing this to surface brightness (per square arcsecond) would be better for computing absolute brightnesses.

This may have implications for current uses of lenstronomy for photometry.
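The conversion between the two conventions is just the pixel solid angle. A minimal sketch (names hypothetical, not lenstronomy API):

```python
# Illustrative conversion between an amplitude defined per pixel and a surface
# brightness per square arcsecond: a pixel of scale s arcsec/pixel covers s**2 arcsec^2.

def amp_per_pixel(sb_per_arcsec2, pixel_scale):
    """Convert surface brightness [counts/arcsec^2] to amplitude per pixel."""
    return sb_per_arcsec2 * pixel_scale**2

print(amp_per_pixel(100.0, 0.05))  # a 0.05"/pix pixel covers 0.0025 arcsec^2 -> 0.25
```

Defining 'amp' in surface-brightness units would make light profiles independent of the pixel grid they are rendered on.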

arrival_time_sort option useless in "findBrightImage" in lens_equation_solver.py

In "findBrightImage" (in LensModel/Solver/lens_equation_solver.py with lenstronomy 0.9.1), there is the option to set arrival_time_sort = True or False, and it is used in the line "x_mins, y_mins = self.image_position_from_source(...)". But after "x_mins_sorted = util.selecBest(...)", the result is sorted according to magnitude. It's not a big deal; I just call a sort_arrival_times(...) function afterwards in my code. I just wanted to mention it for the next lenstronomy version :)
Have a nice day !
Lyne Van de Vyvere

Issue with PointSource.linear_response_set(...)

The function PointSource.linear_response_set(...) provides different outputs for the same input: sometimes it outputs 3 values for x_pos and amp, and sometimes 4. The code from the Jupyter notebook is given below.

import numpy as np
import pickle
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib

%matplotlib inline

import lenstronomy.Util.data_util as data_util
import lenstronomy.Util.util as util
import lenstronomy.Plots.plot_util as plot_util
import lenstronomy.Util.param_util as param_util
from lenstronomy.SimulationAPI.sim_api import SimAPI
from lenstronomy.LensModel.lens_model import LensModel
from lenstronomy.LensModel.Solver.lens_equation_solver import LensEquationSolver

file = open("lens_outputs_new.pickle", "rb")
lens_outputs = pickle.load(file, encoding='latin1')

HST_camera = {'pixel_scale': 0.05, # scale (in arcseconds) of pixels
}

numpix = 128 # number of pixels per axis of the image to be modelled

read_noise = [14, 3, 3]
ccd_gain = [2.5, 1.5, 1.5]
exposure_time = [2196.9, 1429, 1119.2]
sky_brightness = [22.9617, 23.1502, 24.1893]
magnitude_zero_point = [24.9514, 24.6997, 25.7388]
num_exposures = [1, 1, 1]
seeing = [0, 0, 0]
psf_lens_index = 0

HST_F160W_band_obs = {'read_noise': read_noise[0],  # std of noise generated by read-out (in units of electrons)
                      'ccd_gain': ccd_gain[0],  # electrons/ADU (analog-to-digital unit); a gain of 8 means the camera digitizes the CCD signal so that each ADU corresponds to 8 photoelectrons
                      'exposure_time': exposure_time[0],  # exposure time per image (in seconds)
                      'sky_brightness': sky_brightness[0],  # sky brightness (in magnitude per square arcsecond)
                      'magnitude_zero_point': magnitude_zero_point[0],  # magnitude at which 1 count per second per square arcsecond is registered (in ADUs)
                      'num_exposures': num_exposures[0],  # number of exposures that are combined
                      'seeing': seeing[0],  # full width at half maximum of the PSF (if no specific psf_model is specified)
                      'psf_type': 'PIXEL',  # string, type of PSF ('GAUSSIAN' and 'PIXEL' supported)
                      'psf_model': lens_outputs[psf_lens_index]['multi_band_list'][0][1]['kernel_point_source']  # 2d numpy array, model of the PSF centered with an odd number of pixels per axis (optional when psf_type='PIXEL' is chosen)
                      }

HST_F814W_band_obs = {'read_noise': read_noise[1],
                      'ccd_gain': ccd_gain[1],
                      'exposure_time': exposure_time[1],
                      'sky_brightness': sky_brightness[1],
                      'magnitude_zero_point': magnitude_zero_point[1],
                      'num_exposures': num_exposures[1],
                      'seeing': seeing[1],
                      'psf_type': 'PIXEL',
                      'psf_model': lens_outputs[psf_lens_index]['multi_band_list'][1][1]['kernel_point_source']
                      }

HST_F475X_band_obs = {'read_noise': read_noise[2],
                      'ccd_gain': ccd_gain[2],
                      'exposure_time': exposure_time[2],
                      'sky_brightness': sky_brightness[2],
                      'magnitude_zero_point': magnitude_zero_point[2],
                      'num_exposures': num_exposures[2],
                      'seeing': seeing[2],
                      'psf_type': 'PIXEL',
                      'psf_model': lens_outputs[psf_lens_index]['multi_band_list'][1][1]['kernel_point_source']
                      }

lens_model_list = ['SPEMD', 'SHEAR', 'NIE']
kwargs_lens = [{'theta_E': 1.075402182869375, 'e1': 0.14130452695618256, 'e2': -0.054344922343890024, 'center_x': -0.4897334262220723, 'center_y': -0.4874173918839593, 'gamma': 1.9004657358991497},
{'e1': 0.05621848339059133, 'e2': -0.001012628846729346},
{'theta_E': 0.40903435500364377, 'e1': -0.0939768697315022, 'e2': -0.03325148758042043, 's_scale': 1e-10}]

lens_light_model_list = ['SERSIC_ELLIPSE', 'SERSIC_ELLIPSE', 'SERSIC_ELLIPSE']
kwargs_lens_light_mag_g = [{'R_sersic': 1.9271001282512756, 'n_sersic': 1.3086971828367564, 'e1': -0.2343885121498464, 'e2': -0.01532164101638798, 'center_x': -0.4897334262220723, 'center_y': -0.4874173918839593, 'amp': 3.2643604022473096},
{'R_sersic': 1.159608361920042, 'n_sersic': 5.828515480768427, 'e1': -0.12991728107230696, 'e2': -0.07214709761226769, 'center_x': -0.4897334262220723, 'center_y': -0.4874173918839593, 'amp': 6.494225311915972},
{'R_sersic': 0.7100508991041691, 'n_sersic': 6.245883375803998, 'e1': -0.0939768697315022, 'e2': -0.03325148758042043, 'center_x': -0.4115184450909908, 'center_y': -1.4373464403550351, 'amp': 0.24218269573929976}]

kwargs_model = {'lens_model_list': lens_model_list, # list of lens models to be used
'lens_light_model_list': lens_light_model_list, # list of unlensed light models to be used
'source_light_model_list': ['SERSIC_ELLIPSE'], # list of extended source models to be used
'point_source_model_list': ['SOURCE_POSITION'] # list of point source models to be used
}

kwargs_source_mag_g = [{'R_sersic': 0.24063211539441176, 'n_sersic': 1.1368668531035915, 'e1': -0.0645191523449048, 'e2': 0.24308983418569952,
'center_x': -0.4906457423641043, 'center_y': -0.6479828671771817, 'amp': 12.568443559405836}]

kwargs_ps_mag_g = [{'ra_source': -0.4906457423641043, 'dec_source': -0.6479828671771817, 'source_amp': 7.575898373439072}]

numpix = 128 # number of pixels per axis of the image to be modelled

kwargs_F160W_band = util.merge_dicts(HST_camera, HST_F160W_band_obs)
kwargs_F814W_band = util.merge_dicts(HST_camera, HST_F814W_band_obs)
kwargs_F475X_band = util.merge_dicts(HST_camera, HST_F475X_band_obs)

kwargs_numerics = {'point_source_supersampling_factor': 1}

sim_g = SimAPI(numpix=numpix, kwargs_single_band=kwargs_F160W_band, kwargs_model=kwargs_model, kwargs_numerics=kwargs_numerics)
sim_r = SimAPI(numpix=numpix, kwargs_single_band=kwargs_F814W_band, kwargs_model=kwargs_model, kwargs_numerics=kwargs_numerics)
sim_i = SimAPI(numpix=numpix, kwargs_single_band=kwargs_F475X_band, kwargs_model=kwargs_model, kwargs_numerics=kwargs_numerics)

imSim_g = sim_g.image_model_class
imSim_r = sim_r.image_model_class
imSim_i = sim_i.image_model_class

ra_pos, dec_pos, amp, n_points = imSim_g.PointSource.linear_response_set(kwargs_ps_mag_g, kwargs_lens, with_amp=True)
print("x_pos - " + str(np.array(ra_pos[0])*20+64))
print("y_pos - " + str(np.array(dec_pos[0])*20+64))
print("amp - " + str(amp[0]))

[Screenshot: notebook output showing 3 vs. 4 returned image positions and amplitudes]

transitioning to emcee v3.0.0

emcee v2.2.1 is deprecated and no longer usable under Python 3. The default pip version of emcee is now v3.0.0. This may also affect the CosmoHammer wrapper, which might be deprecated as well. emcee itself now appears to offer stable MPI support.

Testing GalKin with MGE fails with >10% mismatch

In test_lens_properties.py, does the function test_velocity_dispersion_new() compare the analytic power-law + Hernquist profiles (for mass + light) against the specified models (which look like the profiles used for 1206)? If so, this cannot be a good test, since it compares two different things. In any case, the values from the MGE and the analytic functions mismatch by 13%, and the test only barely passes after rounding, since it checks for a 10% match.

I modified the function a little (as below) to test Hernquist + power-law for both the MGE and the analytic route; this also fails, with a 12% mismatch.

    def test_velocity_dispersion_hernquist(self):
        z_lens = 0.5
        z_source = 1.5
        kwargs_options = {'lens_model_list': ['SPEMD'],
                          'lens_model_deflector_bool': [True],
                          'lens_light_deflector_bool': [False],
                          'lens_light_model_list': ['HERNQUIST']}
        lensProp = LensProp(z_lens, z_source, kwargs_options)
        kwargs_lens = [{'theta_E': 1.4272358196260446, 'e1': 0, 'center_x':
            0., 'center_y': 0., 'e2': 0, 'gamma': 1.8}]

        kwargs_lens_light = [{'amp': 50, 'Rs': .5, 'center_x': 0.,
                              'center_y': 0.}]
        r_ani = 0.62
        kwargs_anisotropy = {'r_ani': r_ani}
        R_slit = 3.8
        dR_slit = 1.
        kwargs_aperture = {'center_ra': 0, 'width': dR_slit, 'length': R_slit,
                           'angle': 0, 'center_dec': 0}
        aperture_type = 'slit'
        psf_fwhm = 0.7
        anisotropy_model = 'OsipkovMerritt'
        r_eff = 2.4142135 * 0.5

        kwargs_numerics = {'sampling_number': 500,
                           'interpol_grid_num': 500,
                           'log_integration': True,
                           'max_integrate': 100,
                           'min_integrate': 0.0001
                           }

        v_sigma = lensProp.velocity_dispersion_numerical(kwargs_lens,
                 kwargs_lens_light, kwargs_anisotropy,
                 kwargs_aperture, psf_fwhm,
                 aperture_type, anisotropy_model,
                 kwargs_numerics=kwargs_numerics,
                 MGE_light=True, r_eff=r_eff,
                 lens_model_kinematics_bool=kwargs_options['lens_model_deflector_bool'])
        v_sigma_mge_lens = lensProp.velocity_dispersion_numerical(kwargs_lens,
                  kwargs_lens_light, kwargs_anisotropy, kwargs_aperture,
                  psf_fwhm, aperture_type, anisotropy_model,
                  kwargs_numerics=kwargs_numerics, MGE_light=True,
                  MGE_mass=True, r_eff=r_eff,
                  lens_model_kinematics_bool=kwargs_options['lens_model_deflector_bool'])
        vel_disp_temp = lensProp.velocity_dispersion(kwargs_lens,
                            kwargs_lens_light, aniso_param=r_ani,
                            r_eff=r_eff, R_slit=R_slit, dR_slit=dR_slit,
                            psf_fwhm=psf_fwhm, num_evaluate=5000)
    
        print(v_sigma, v_sigma_mge_lens, vel_disp_temp)
        npt.assert_almost_equal(v_sigma / v_sigma_mge_lens, 1, decimal=2)
        npt.assert_almost_equal(v_sigma_mge_lens / vel_disp_temp, 1, decimal=2)

redesign input arguments in ray_shooting_partial

@dangilman I will re-design the input arguments for 'keep_range' in the ray_shooting_partial function.
Reason: when performing multi-source-plane ray-shooting, the first and last T_xy values correspond to different but recurring redshifts (and thus distances).
Proposed design change: make T_start and T_stop optional input arguments and leave it to a higher-level class to cache whatever values it needs.
Workaround for you in the meantime: compute and store the T_start/T_stop values in the class that calls ray_shooting_partial with keep_range=True.

If this is fine with you, I will proceed with the re-design.
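The caller-side caching in the proposed design can be sketched as follows (class and function names are illustrative, not the actual lenstronomy API):

```python
# Illustrative sketch only: a higher-level class caches the transverse
# distances T_xy between recurring redshift pairs, so ray_shooting_partial
# can simply receive T_start/T_stop as optional inputs.
class TxyCache:
    def __init__(self, compute_T_xy):
        self._compute = compute_T_xy  # e.g. a cosmology distance call
        self._cache = {}

    def get(self, z1, z2):
        """Return the cached distance for (z1, z2), computing it once."""
        key = (z1, z2)
        if key not in self._cache:
            self._cache[key] = self._compute(z1, z2)
        return self._cache[key]

# usage with a dummy distance function; the second call hits the cache
calls = []
def dummy_T(z1, z2):
    calls.append((z1, z2))
    return (z2 - z1) * 1000.0

cache = TxyCache(dummy_T)
T_start = cache.get(0.0, 0.5)
T_start_again = cache.get(0.0, 0.5)  # served from cache, no recompute
```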

NFW lookup table renaming

@dangilman the new_lookup file is currently a Python file and it generates an empty documentation entry. Can you save the lookup table with numpy and also load the arrays with numpy? This might even read faster and require less storage. Thanks
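The suggestion could look like this (file name and array contents are placeholders, not the actual NFW table):

```python
import numpy as np

# placeholder arrays standing in for the actual NFW lookup table
x_grid = np.logspace(-3, 2, 100)
table = np.log(1.0 + x_grid) / x_grid

# save as a compressed binary file instead of generating a .py module
np.savez_compressed("nfw_lookup.npz", x=x_grid, table=table)

# loading is a single call and produces no documentation stub
data = np.load("nfw_lookup.npz")
```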

Improve tests and documentation of Optimizer

@dangilman The Optimizer test coverage is now below the lenstronomy average, and I am in the process of completing test coverage (100%, every single line) and documentation module by module. So if you have some spare time, I'd appreciate it if you could complete the testing and documentation. I know I am being a bit picky here, but in the long run this improves the overall quality of the code. Take your time! Thanks,
Simon

4 points solver and joint centroids

When joining the center of the first mass profile in the lens model list to some other profile, for a quad lens, center_x and center_y do not take the expected value, because they are inferred by the non-linear solver ('PROFILE' or 'PROFILE_SHEAR') instead.

In this case, using no solver ('NONE') would work, at the cost of increased sampling time. Is there a way to keep using the non-linear solver while forcing the center coordinates to remain consistent with such constraints?
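One possible route (hedged: the 'joint_lens_with_lens' key exists in more recent lenstronomy versions; check your release) is to declare the link in kwargs_constraints, so the joined profile follows whatever center the solver infers for the first profile:

```python
# Hedged sketch (key name assumed from recent lenstronomy versions):
# tie center_x/center_y of lens profile 2 to those of lens profile 0,
# on top of the 4-point non-linear solver.
kwargs_constraints = {
    'solver_type': 'PROFILE_SHEAR',
    'joint_lens_with_lens': [[0, 2, ['center_x', 'center_y']]],
}
```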

'PROFILE_SHEAR' not supported in Solver2Point

In the catalogue modelling.ipynb notebook, the solver for constrained lens models is set as:
kwargs_constraints = {'solver_type': 'PROFILE_SHEAR'}

However, when I run lenstronomy, this solver_type is not supported. The error message is:

File "/lenstronomy/LensModel/Solver/solver2point.py", line 31, in __init__
    raise ValueError("solver_type %s is not a valid option!")
ValueError: solver_type %s is not a valid option!

The only supported solvers seem to be: 'CENTER', 'ELLIPSE', 'SHAPELETS', 'THETA_E_PHI', 'THETA_E_ELLIPSE'.

Is there an easy fix for this?
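A likely explanation: 'PROFILE_SHEAR' is offered by the four-point solver (a quad provides enough constraints for the six profile + shear parameters), whereas the traceback shows the two-point solver being instantiated. With a doubly imaged source, one of the two-parameter types from the error message has to be used instead, e.g. (a sketch; key names follow the usual kwargs_constraints conventions):

```python
# Workaround sketch: with two point-source images, choose one of the
# solver types that Solver2Point actually supports.
kwargs_constraints = {'solver_type': 'THETA_E_PHI',  # from the supported list
                      'num_point_source_list': [2]}
```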

critical curves and caustics

Provide a separate routine for users to output the critical curves and caustics of a given lens model (possibly as part of the LensModel class).
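A sketch of what such a routine could compute (pure numpy, with an analytic point-mass lens standing in for a lenstronomy LensModel): the critical curves are the zero contours of the inverse magnification det(A), and mapping them through the lens equation yields the caustics.

```python
import numpy as np

# toy lens: for a point mass with Einstein radius theta_E, the inverse
# magnification is det(A) = 1 - theta_E^4 / r^4 (critical curve at r = theta_E)
def inverse_magnification(x, y, theta_E=1.0):
    r2 = x ** 2 + y ** 2
    return 1.0 - theta_E ** 4 / r2 ** 2

# evaluate det(A) on a grid; sign changes between neighboring pixels trace
# the critical curve, which a real routine would refine and map to the caustic
grid = np.linspace(-2.0, 2.0, 400)  # this grid avoids the singular point r = 0
X, Y = np.meshgrid(grid, grid)
detA = inverse_magnification(X, Y)
critical_mask = np.diff(np.sign(detA), axis=0) != 0
```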
