pahfit / pahfit
Model Decomposition for Near- to Mid-Infrared Spectroscopy of Astronomical Sources
Home Page: https://pahfit.readthedocs.io/
Should add units to the model pack and output files. This would require the model-pack ingest to convert the units to the internally used ones, and the output should probably convert back to the input units for consistency.
Should be fairly straightforward to implement, but including good checks for robustness would add complexity.
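A minimal sketch of what conversion on ingest could look like, assuming (purely for illustration) internal units of micron and Jy and an astropy Table pack; the column names and helper are assumptions, not the actual PAHFIT interface:

import astropy.units as u

# Assumed internal units; the real choices belong in the model pack spec.
INTERNAL_UNITS = {"wavelength": u.micron, "flux": u.Jy}

def to_internal_units(table):
    # Convert each recognized column of an ingested model-pack table.
    for name, internal_unit in INTERNAL_UNITS.items():
        if name in table.colnames and table[name].unit is not None:
            q = u.Quantity(table[name].data, table[name].unit)
            table[name] = q.to(internal_unit)
    return table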
Keeps the code base clean. We can get it back via the magic of git version control.
Especially as old, unmaintained code muddies the LGTM results.
Incorporate alternative dust continuum fitting.
We should define fixed-ratio PAH model spectra based on noise-free templates (Smith+07) to be used in the PAHFIT model. Especially useful for absorption-dominated spectra.
Save the model and fit parameters of all feature components, and read/load them back into the main code.
It's very common now to PAHFIT hundreds or more spectra over and over, from large samples to spectral cubes. This can get slow. Optimizers usually run on one processor. I wonder if there is a package or convenient way to farm fits out on multiple threads and then collect the results back together? See the sketch below.
So the user could say "here are 1000 different spectra I need to (re)fit", and it would thread them out appropriately and fit them simultaneously. I suspect JWST will have this issue in spades.
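One minimal option is the standard library's process pool, since optimizers are single-threaded and processes sidestep the GIL. This is a sketch only; fit_single is a hypothetical wrapper that fits one spectrum and returns its parameters:

from concurrent.futures import ProcessPoolExecutor

def fit_many(spectra, nworkers=4):
    # Farm one fit per spectrum out to worker processes, gather results.
    with ProcessPoolExecutor(max_workers=nworkers) as pool:
        return list(pool.map(fit_single, spectra))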
What features of the model framework should live in separate instrument packs? I can imagine:
But what if you want to blend spectra together from multiple instruments?
We could obviously go much further to correct response function issues, etc., but PAHFIT is not a calibration or reduction tool.
Need a test to check that the astropy.modeling API we use does not change.
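A minimal smoke-test sketch (not the actual PAHFIT test suite) that fails loudly if the parts of the astropy.modeling API we lean on disappear:

from astropy.modeling import Fittable1DModel, Parameter, fitting

def test_astropy_modeling_api():
    # Names PAHFIT relies on; a failure flags an upstream API change.
    assert hasattr(fitting, "LevMarLSQFitter")
    assert callable(Fittable1DModel)
    assert callable(Parameter)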
6.0 um water ice absorption
6.85 um aliphatic hydrocarbon absorption feature
7.25 um aliphatic hydrocarbon absorption feature
16, 19, 23, 28, and 33 um crystalline silicate emission/absorption bands
While working on the table output function, I realized that two identical model components are used when building the final model.
In both cases it's the first element in the H2 features and ionic features lists (H2 S(7) and [ArII], respectively).
Currently the final model has 55 components. However, there are 9 blackbodies, 25 dust features, 8 H2 features, 10 ionic features, and 1 component for the tau of the attenuation curve, adding up to only 53 components in total.
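A quick sketch (assuming an astropy compound model with named submodels, not the actual fix) for spotting the repeats:

from collections import Counter

def find_duplicate_components(model):
    # Count submodel names and report any used more than once.
    counts = Counter(model.submodel_names)
    return [name for name, n in counts.items() if n > 1]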
One thing we haven't thought of yet is the prescription for selecting the starting values for parameters. We need to figure out where that belongs. Obviously starting values will depend on the instrument pack(s) selected (think line widths, etc.). But they may also depend on the science model.
Right now in the original PAHFIT, parameter initialization is just hard-coded in, doing some simple averages to set taus and line center intensities, etc., as appropriate for Spitzer/IRS. And there is no direct mapping between the "PARINFO" structure, which initializes, ties, and constrains parameters for the LM optimization, and the "PAHFIT results" structure, which is the end/science-user consumable. So there is no way to chain the output of one fit to the input of another (as suggested in #13).
PAHFIT was developed to work on SINGS galaxies. Which observations were not well fitted by PAHFIT-IDL? What was the issue?
Continuum issues:
Would be good to document how to add a new component model: e.g., add it to component_models.py and add the needed options to the creation of the combined model and to the read/write/plot functions. A sketch of the component pattern is below.
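A minimal sketch of the astropy.modeling pattern such documentation would walk through; the component itself (a power-law continuum) is made up for illustration:

from astropy.modeling import Fittable1DModel, Parameter

class ExampleContinuum1D(Fittable1DModel):
    # Illustrative component only; real ones go in component_models.py.
    amplitude = Parameter(default=1.0)
    alpha = Parameter(default=2.0)

    @staticmethod
    def evaluate(x, amplitude, alpha):
        return amplitude * x ** (-alpha)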
Determine extinction based on HI lines, H2 lines to help with the degeneracy of the silicate absorption band and the surrounding PAH bands.
Update the base class to allow bounds to be optional.
If they are not in the input feature dictionary, set them to (None, None).
Probably should add helper functions in this code area to make this easier (less copying); see the sketch below.
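For example, a tiny helper along these lines (the names are assumptions, not existing code):

def get_bounds(feature_dict, param_name):
    # Default to unbounded when the feature dictionary gives no limits.
    return feature_dict.get(param_name + "_limits", (None, None))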
Later versions of IDL PAHFIT (never widely released) toyed with allowing redshift z to be a free parameter. It wasn't widely tested and worked poorly. Free redshift obviously introduces some real challenges for an optimizer. With a fitting pack using mostly fixed PAH band ratios, this is conceivable, however.
Use ISO data to test the code with "JWST-like" data
We could use astropy.units to specify the units of the spectra. Then we could ensure they are in the units we expect. It may even be transparent through the use of astropy.models as long as we set things up correctly. Worth thinking about.
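A sketch of what the check could look like, assuming (for illustration only) that we standardize internally on micron and Jy and require inputs to carry units:

import astropy.units as u

def check_spectrum_units(wavelength, flux):
    # Inputs must be Quantity objects; conversion fails loudly otherwise.
    wavelength = wavelength.to(u.micron, equivalencies=u.spectral())
    flux = flux.to(u.Jy)
    return wavelength, flux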
Currently, access to the models of the individual components requires a bit of work, so I think it would be very useful to create a class that has the components as attributes. From what I've seen of plot_pahfit.py, it would only be a matter of adapting some things, but #19 makes me question whether it is that simple.
This would make it easier to work directly with the fitted data in operations such as plotting specific components or saving the individual components for another use -- e.g., the dust features can be used as input for the NASA Ames PAH IR Spectroscopic Database. In addition to the components as attributes, there could be a method that saves everything in a neat table.
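A rough sketch of such a class (the names and layout here are assumptions, not a final API):

from astropy.table import Table

class PAHFITResult:
    def __init__(self, wavelengths, model):
        # Evaluate each named submodel and expose it as an attribute.
        self.wavelengths = wavelengths
        self.component_names = list(model.submodel_names)
        for name in self.component_names:
            setattr(self, name, model[name](wavelengths))

    def to_table(self):
        # Collect everything into one neat table for saving.
        data = {"wavelength": self.wavelengths}
        data.update({n: getattr(self, n) for n in self.component_names})
        return Table(data)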
Should be added to the index.rst file in the docs directory
Possibly could be done by artificially shifting the spectra from different instruments to non-overlapping wavelengths (with appropriate scalings) and constraining the common features for the fitting. Then the same fitting machinery should work.
The proposed YAML format needs a reader and appropriate integration into the class init function. It would probably be good to have the reader picked based on the file extension (e.g., switching between the YAML and plain table formats), as sketched below.
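A sketch of extension-based dispatch (the reader details are placeholders):

import os
import yaml
from astropy.table import Table

def read_pack(filename):
    # Pick the reader from the file extension.
    ext = os.path.splitext(filename)[1].lower()
    if ext in (".yaml", ".yml"):
        with open(filename) as f:
            return yaml.safe_load(f)
    return Table.read(filename)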
Hello from Astropy Project!
The following updates to your package might be necessary to ensure compatibility with the latest stable version of Astropy:
If you are using ci-helpers, packages expecting interactive plotting should override it in .travis.yml.
Add sphinx-astropy as a package dependency if you are using astropy-helpers 3.1 or later. Otherwise, your documentation build (e.g., on ReadTheDocs) might fail.
If you are using the six that is bundled with Astropy, please consider using the standalone six package instead.
If these are no longer applicable to you, please close the issue.
This is an automated issue for packages that opted in for automated astropy-helpers update. If this is opened in error, please let @pllim know.
Add in the code philosophy. What is the core of the python version (astropy.modeling)? What capabilities does it enable (e.g., new components, more of the same component, constraining parameters, etc.)?
Incorporate alternative PAH decompositions.
Not in astropy.modeling
Need to document and show examples of the model pack and input spectrum formats. Would be good to provide code to generate the astropy.table-compatible format; a sketch follows.
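A sketch of generating such a table (the column names, units, and IPAC output choice are assumptions for illustration):

from astropy.table import Table

def make_spectrum_table(wave, flux, unc, filename):
    t = Table({"wavelength": wave, "flux": flux, "flux_unc": unc})
    t["wavelength"].unit = "micron"
    t["flux"].unit = "Jy"
    t["flux_unc"].unit = "Jy"
    t.write(filename, format="ascii.ipac", overwrite=True)
    return t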
astropy.modeling got a major update in v4.0, so updates are needed here. As part of that update, the Drude1D and BlackBody1D functions were added to astropy (by me, actually), so they can be removed from pahfit.
The read function that can read in a simple table format with standard constraints needs to be integrated into the class init function.
One of the most useful (and under-used) features of the original PAHFIT is the pahfit_components function, which looks like this:
function pahfit_components,lambda,decoded_params, $
DUST_CONTINUUM=dust_continuum, $
TOTAL_DUST_CONTINUUM=dc_tot,STARLIGHT=stars, $
TOTAL_CONTINUUM=tot_cont, $
DUST_FEATURES=dust_features, $
TOTAL_DUST_FEATURES=df_tot, $
LINES=lines,TOTAL_LINES=lines_tot, $
EXTINCTION_FAC=ext, $
EXTINCTION_CURVE=ext_curve, $
DISABLE_EXTINCTION=de,LAMBDA_REST=lam_rest
This code allows you to take a decoded_params structure, which is output from a PAHFIT run, and selectively pull pieces from it: the total continuum, all dust features, each feature separately, etc. This is used to produce the canonical multi-colored plots we all know and love, but it's also essential for using model results more deeply than just examining the parameter outputs. We need something similar.
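In Python, the core of such a function could be as simple as evaluating each named submodel of the fitted astropy compound model separately. This is a sketch, not a final API; grouping into continuum/features/lines would follow the same pattern:

def pahfit_components(lam, fit_result):
    # One spectrum per named component, keyed by submodel name.
    return {name: fit_result[name](lam)
            for name in fit_result.submodel_names}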
Need to have a robust training set, maybe one for each model pack. Useful to be able to quickly check whether changes to a pack or the main code "screw up" the fitting. Need data and expected fit parameter values.
Should start with getting the code running on Spitzer IRS data.
Check the results against the IDL code results.
We need a simple and flexible means of expressing constraints on and between parameters, for the input specification discussed in #9.
Examples that can be attached to one parameter:
fixed (or omit). Example: temperature of starlight continuum.
bounded: [p1,] or bounded: [,p2] or bounded: [p1,p2]. Examples: central intensities, wavelengths, etc.
tied: 1.15 pah_77_feature1_central_lam. Example: fixing ratios between (probably new) sub-features of PAH complexes.
Examples that might need to be expressed separately, since they govern multiple parameters:
tied: integrated_strength(pah_11*) = 0.4 integrated_strength(pah_77*) or some such.
Instrument packs can use this notation as well; a YAML sketch follows below.
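As a sketch of how this notation might look in the proposed YAML pack (the feature names and values are placeholders, not a final schema):

import yaml

spec = yaml.safe_load("""
starlight_temperature: {fixed: true}
pah_77_feature1_central_lam: {bounded: [7.4, 7.8]}
pah_77_feature2_central_lam: {tied: "1.15 pah_77_feature1_central_lam"}
""")
print(spec["pah_77_feature1_central_lam"]["bounded"])  # [7.4, 7.8]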
We should use a class (object) for the pypahfit model and fitting. This should be an astropy.modeling class to take advantage of various built-in functions. See http://docs.astropy.org/en/stable/modeling/.
We can use the code in pyISMFit as starter code (it should be removed once it has been used).
In the current version of the code the input file format is decided from the input file extension.
If I run pahfit on the example spectra in the data directory, it raises a "No reader defined for format ..." error, followed by a table of the available formats.
This is strange because .ipac is included in the available formats, yet "M101_Nucleus_irs.ipac" gives the above error.
The same happens with "orion_bar_SWS_with7023shape.txt", although if I rename the file to .ascii it will be read.
But it then raises an error on the following line, where the unit conversion happens, because no units were assigned.
So currently we can't run pahfit as run_pahfit spectrumfile scipack_ExGal_SpitzerIRSSLLL.ipac without modifying the code, and it's harder to test.
Updating to more modern packaging would be good.
Move from travis to github actions at the same time (faster and better integration).
APE17: https://docs.astropy.org/projects/package-template/en/latest/ape17.html
Update to get everything for testing, docs, etc. working.
Defined in the astropy affiliated package specutils.
A less common but very powerful use of PAHFIT is to take a fitted spectrum or spectral average, then up-res the lines to target a different or hypothetical instrument. Right now this is a hard-coding affair, but if we had the ability to take a given PAHFIT fit result and recreate the spectrum according to arbitrary instrument packs, or at an arbitrary resolution, it would be valuable for mission planning, preparing science cases, etc.
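A rough sketch of the idea (not an existing PAHFIT function), assuming Gaussian line components and a target resolving power R:

def upres_lines(model, lam, R):
    # Narrow each Gaussian line to the target resolution, then re-evaluate.
    for name in model.submodel_names:
        sub = model[name]
        if hasattr(sub, "stddev"):  # treat Gaussians as unresolved lines
            sub.stddev = sub.mean.value / (2.355 * R)  # FWHM = lam / R
    return model(lam)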
Would be great if we could use the same setup with an MCMC code. One obvious choice is the emcee package.
There is some work in astropy to enable this by adding the ability to attach a prior to the Parameter class used by all models. It is currently suspended until the refactoring of the astropy.modeling Model class is done (soon).
When astropy supports this, add it to PAHFIT.
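For reference, a minimal sketch of wrapping a fitted astropy model in emcee (the flat prior and all names here are assumptions for illustration):

import emcee
import numpy as np

def log_prob(theta, model, lam, flux, unc):
    # Chi-squared likelihood with an implicit flat prior.
    model.parameters = theta
    resid = (flux - model(lam)) / unc
    return -0.5 * np.sum(resid ** 2)

# p0 = best_fit + 1e-4 * np.random.randn(nwalkers, ndim)
# sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
#                                 args=(model, lam, flux, unc))
# sampler.run_mcmc(p0, 1000)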
Hi, sorry, I got a problem when I tried to run pahfit; my astropy version is 4.0 and pahfit is 2.0.dev96.
Here is my command:
run_pahfit M101_Nucleus_irs.ipac scipack_ExGal_SpitzerIRSSLLL.ipac
And I got these messages:
WARNING: AstropyDeprecationWarning: Class BlackBody1D defines class attributes inputs. This has been deprecated in v4.0 and support will be removed in v4.1. Starting with v4.0 classes must define a class attribute n_inputs. Please consult the documentation for details. [astropy.modeling.core]
WARNING: AstropyDeprecationWarning: Class BlackBody1D defines class attributes inputs. This has been deprecated in v4.0 and support will be removed in v4.1. Starting with v4.0 classes must define a class attribute n_inputs. Please consult the documentation for details. [astropy.modeling.core]
Traceback (most recent call last):
File "/Users/k/anaconda3/bin/run_pahfit", line 90, in
pmodel = PAHFITBase(filename=packfile)
│ └ '/Users/k/anaconda3/lib/python3.7/site-packages/pahfit/packs//scipack_ExGal_SpitzerIRSSLLL.ipac'
└ <class 'pahfit.base.PAHFITBase'>
File "/Users/k/anaconda3/lib/python3.7/site-packages/pahfit/base.py", line 166, in init
'stddev': dust_features['fwhms_fixed'][k]})
File "/Users/k/anaconda3/lib/python3.7/site-packages/astropy/modeling/core.py", line 441, in init
init, args, kwargs, varkwargs='kwargs')
File "/Users/k/anaconda3/lib/python3.7/site-packages/astropy/modeling/core.py", line 438, in init
return super(cls, self).init(*params, **kwargs)
│ │ │ └ {'amplitude': 100.0, 'x_0': 5.27, 'fwhm': 0.17918, 'name': 'DF0', 'bounds': {'amplitude': (0.0, None), 'x_0': (5.17, 5.369999999...
│ │ └ ()
│ └ <Drude1D(amplitude=1., x_0=10., fwhm=1., name='DF0')>
└ <class 'pahfit.component_models.Drude1D'>
Name: Drude1D
N_inputs: 1
N_outputs: 1
Fittable parameters: ('amplitude', 'x_0', 'fwhm...
File "/Users/k/anaconda3/lib/python3.7/site-packages/astropy/modeling/core.py", line 715, in init
self._initialize_constraints(kwargs)
│ └ {'amplitude': 100.0, 'x_0': 5.27, 'fwhm': 0.17918, 'bounds': {'amplitude': (0.0, None), 'x_0': (5.17, 5.369999999999999), 'fwhm'...
└ <Drude1D(amplitude=1., x_0=10., fwhm=1., name='DF0')>
File "/Users/k/anaconda3/lib/python3.7/site-packages/astropy/modeling/core.py", line 1801, in _initialize_constraints
param = getattr(self, ckey)
│ │ └ 'stddev'
│ └ <Drude1D(amplitude=1., x_0=10., fwhm=1., name='DF0')>
└ Parameter('x_0', value=10.0)
AttributeError: 'Drude1D' object has no attribute 'stddev'
Is there a solution to this problem?
I'm a strong proponent of GPL licenses. IDL PAHFIT is licensed under GPLv2, and I think that or GPLv3 is ideal for what we're trying to achieve. Are there license requirements for Astropy affiliation?
I'm trying to re-install pahfit from within a new branch containing updated files for issue #65, but when running python setup.py install from my astroconda environment I get the following error:
running install
running bdist_egg
running egg_info
error: [Errno 13] Permission denied
Need a switch to handle both (not just Jy).
The latest Python version of PAHFIT runs slower than the original PAHFIT in IDL.
Need to investigate where the bottleneck in the code is; it is likely the compound model in astropy.modeling.
It is also worth investigating whether the astropy v4.0+ refactor makes the code slow. A profiling sketch is below.
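A quick way to start with the standard profiler (a sketch; run_fit stands in for whatever drives one PAHFIT fit):

import cProfile
import pstats

cProfile.run("run_fit(spectrum)", "fit.prof")
pstats.Stats("fit.prof").sort_stats("cumulative").print_stats(20)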
How to set the parameters in the models
How to interface this to real humans.
Would be good to couple the data to an instrument pack. Not quite sure how to do this, but it would make the coupling of the instrument pack to the right data clear.
With the scripts installed system-wide (once PR #51 is merged), it is now possible to run pahfit from any location. But one needs to know the full path to the model pack files - not ideal. One solution would be a script that downloads the model pack files into the current directory.
Will add the features between 2.5-5 um to PAHFIT.
Will the parameters and constraints be the same as in the classic IDL PAHFIT 1.6?
@jdtsmith Do we call our version of the IDL code that fits the AKARI spectra 1.6?
The assumed attenuation curve is currently fixed. A more flexible model for handling attenuation should be added; for example, a 3.1 um ice feature on top of the silicate extinction curve. Any ideas?