
legacypipe's Introduction

legacysurvey

Overview

Source for the Nikola-built website for the Legacy Survey.

With nikola (v7.0+) installed:

  • nikola build will build the site in the output directory.
  • nikola serve will start a local webserver to preview the site.

The site uses a custom theme based on nikola's built-in bootstrap3 theme. The custom theme allows us to:

  • Use a drop-in bootstrap.css replacement from Bootswatch.
  • Tweak various aspects of the generated HTML, such as titles, links, and listings.

Deployment

To simplify the preparation of the site for deployment, a Makefile handles several steps, so that this single command prepares a tar file that can be copied to NERSC:

make TAG=9.0.2

In this example, the output would be legacysurvey-9.0.2.tar.gz.

Logo

There are a few logo files in the files directory:

  • logo.svg: An "Inkscape SVG" file, which is the main source.
  • logo_54.png: A PNG generated by exporting the logo in Inkscape.
  • logo_square.svg: Inkscape SVG file with just the logo part (no text).
  • logo_square_32.ico: A favicon used in the site. Generated by exporting the square logo to PNG in Inkscape, then converting with ImageMagick: convert logo_square_32.png logo_square_32.ico.


legacypipe's Issues

Objects that are strictly negative are in the catalog

Only retain objects if they have positive flux contributing to the reduction of chi2. When assessing detection confidence, multiply the chi2 in each filter by the sign of the flux in that filter. Do that for both the chi2 cut and for the fractional chi2 cut. One problem from this is that we're generically getting some bright stars deblended as close pairs. Example: Brick 2410p070 OBJID 4486 & 4487 (dr1j reductions).
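A minimal sketch of the signed-chi2 idea described above (not the pipeline code); the per-filter flux and chi2-improvement arrays, and the thresholds, are placeholders:

    import numpy as np

    def signed_detection_chi2(flux, dchisq):
        """flux, dchisq: per-filter fitted fluxes and chi2 improvements."""
        signed = np.sign(flux) * np.abs(dchisq)   # negative flux counts against
        total = signed.sum()
        frac = signed / np.abs(dchisq).sum()       # signed fractional contribution
        return total, frac

    # A source whose z-band flux is negative now loses that band's contribution:
    total, frac = signed_detection_chi2(np.array([12.0, 20.0, -5.0]),
                                        np.array([30.0, 80.0, 10.0]))
    keep = (total > 25.0) and np.all(frac > -0.05)  # thresholds are illustrative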

Allow galaxy color gradients

The galaxy models use the same shape parameters in all filters, which does not allow for color gradients. We could capture color gradients by allowing the bulge and disk fluxes (for composite galaxies) to float freely relative to one another. This would probably be dangerous to do with something like the DR1 reductions, where these effects are likely small compared to the remaining systematics from PSF mis-estimation.

DECam raw data should be organized by observing night

The DECam raw images are stored on NERSC under cosmo/staging/decam-public with subdirectories named after the DATE-OBS in the image headers. This splits the data into different directories for the same night. We probably would rather put them in directories named by the DTCALDAT header card, which is the calendar date at the start of the night (same as how they're written at the telescope).
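A sketch of the reorganization (paths are placeholders; DTCALDAT is assumed to live in the primary header, as it does for CP-style DECam files):

    import os
    import shutil
    from glob import glob
    from astropy.io import fits

    src_root = "/path/to/decam-public"      # placeholder source tree
    dst_root = "/path/to/decam-by-night"    # placeholder destination tree

    for fn in glob(os.path.join(src_root, "*", "*.fits.fz")):
        night = fits.getheader(fn, 0).get("DTCALDAT")  # e.g. '2015-01-08'
        if night is None:
            continue  # skip exposures missing the card
        outdir = os.path.join(dst_root, night.replace("-", ""))
        os.makedirs(outdir, exist_ok=True)
        shutil.copy2(fn, os.path.join(outdir, os.path.basename(fn)))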

propose we rename "analysis" to "legacyanalysis"

We're starting to write some useful utilities in the "analysis" section of legacypipe (e.g., a new module that returns the set of PS1 stars on a given CCD), which necessitated adding an __init__.py file to the subdirectory. I'd like to suggest we rename "analysis" to something less generic, like "legacyanalysis", so there's clarity when these scripts are imported.

Recover the 14-pixel border on DECam images

For DR1, we mask the 14 pix on all edges of the CP-processed frames. This is due to the pixels appearing larger near the edges of the CCD, presumably from the same field-line effects as the brighter/fatter effect. This ought to flat-field out for photometry (does it?), and the astrometric shifts ought to be fixable in the astrometric solution without too much complexity.

Source detection for extended objects

The source detection algorithm for DR1 used two matched-filters in color space but only a PSF filter spatially. Therefore, the 5-sigma detection depth is for point sources, and DR1 will be less deep (in terms of sigma) for extended sources. Consider including a detection step, perhaps in a later iteration in the pipeline, for spatially-extended sources.
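A sketch of a second, broader spatial matched filter alongside the PSF one; the kernel widths are illustrative, the galaxy profile is approximated by a wider Gaussian, and roughly uniform per-pixel noise sig1 is assumed:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sn_map(image, sig1, sigma_pix):
        """S/N map after smoothing with a unit-sum Gaussian of width sigma_pix.
        For white noise, the smoothed-map noise is sig1 / sqrt(4*pi*sigma^2)."""
        smooth = gaussian_filter(image, sigma_pix)
        return smooth * np.sqrt(4.0 * np.pi * sigma_pix**2) / sig1

    image = np.random.normal(size=(512, 512))   # stand-in sky-subtracted image
    sig1 = 1.0                                   # per-pixel noise estimate
    sn_psf = sn_map(image, sig1, sigma_pix=1.8)  # roughly PSF-sized kernel
    sn_gal = sn_map(image, sig1, sigma_pix=3.5)  # broader, galaxy-sized kernel
    detected = (np.maximum(sn_psf, sn_gal) > 5.0)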

fracDev can stray outside [0,1]

For composite models, the fraction of light in the deVauc component (FRACDEV) can stray outside of the range [0,1]. When that happens, the composite model can contain unphysical negative values, which is an undesirable model property. Consider using softened model parameters to avoid such values.
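One possible softened parameterization, sketched here: the optimizer steps in an unbounded internal parameter that is mapped through a logistic function, so the physical FRACDEV can never leave (0, 1):

    import numpy as np

    def fracdev_from_soft(u):
        return 1.0 / (1.0 + np.exp(-u))        # logistic: (-inf, inf) -> (0, 1)

    def soft_from_fracdev(f, eps=1e-6):
        f = np.clip(f, eps, 1.0 - eps)
        return np.log(f / (1.0 - f))           # inverse (logit)

    # The fit steps freely in u; fracdev_from_soft(u) is what enters the
    # composite (deVauc + exp) model, so fracDev outside [0, 1] cannot occur.
    u = soft_from_fracdev(0.7)
    assert abs(fracdev_from_soft(u) - 0.7) < 1e-9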

Astrometry conversion bug resulting in crop circle images

The astrometric mapping of some of the coadd images in DR1 is completely messed up: the images show lots of little crop circles of missing data. An example is coadd/000/0003m055/decals-0003m055-image-r.fits in DR1, which has only two contributing CCD images for that r-band brick, each of which looks normal.

(Attachment: crop-circles image.)

Depth images are optimistic by 25%

Depth images are optimistic by 25%, since the n_eff for the PSF is taken from the image headers as reported by SISPI at the observatory, rather than the actual n_eff computed from the detailed PSF on each image. For one example image (arbitrarily, cosmo/staging/decam/CP20150108c4d_150109_053533_ooi_z_v1.fits.fz), sqrt(n_eff / (4 pi)) * 2.35 is larger than the header FWHM value by 15%. This difference gets squared, and explains the factor of 1.25. See the post on 12 Mar 2015: https://desi.lbl.gov/mailman/private/decam-data/2015-March/000861.html .
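A sketch of the check described above: compute n_eff directly from a PSF postage stamp and convert it to an effective Gaussian FWHM for comparison with the header value. The stamp here is a placeholder; in practice it would come from the PSF model evaluated on the CCD.

    import numpy as np

    def n_eff(psf):
        """Effective number of pixels: [sum(psf)]^2 / sum(psf^2)."""
        return psf.sum()**2 / np.square(psf).sum()

    def fwhm_from_neff(neff):
        """For a Gaussian, n_eff = 4*pi*sigma^2, so FWHM (in pixels) is
        2.35 * sqrt(n_eff / (4*pi)); multiply by the pixel scale
        (about 0.262 arcsec/pix for DECam) for arcseconds."""
        return 2.35 * np.sqrt(neff / (4.0 * np.pi))

    psf_stamp = np.ones((25, 25)) / 625.0   # placeholder PSF stamp
    print(fwhm_from_neff(n_eff(psf_stamp)))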

Include large galaxies using HST images

Extended sources that have high-resolution data available from elsewhere, such as NGC galaxies that have been imaged by HST, could be included in the models as either much more complex GALMOD models or as the high-resolution images themselves. These would then be rendered through the PSF convolution on each image.

DCHISQ values may be inconsistent with the final catalogs

In DR1, the DCHISQ values may be inconsistent with the final catalogs, since blended objects can be re-fit after these are computed; a solution would be to re-run the fits solely for the purpose of recomputing these values consistently, and not change source parameters in that final call.

Wings not included in PSF model

The PSF model used in DR1 was a sum of two Gaussian components, with no additional wing components to capture what must be closer to a Moffat profile. In the residual images, the lack of wings in the models around bright stars is the most obvious modeling problem. Either the PSF must include a wing component, or (less ideally) the images should have the PSF wings subtracted before the Tractor source fitting.
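For illustration only (the parameters below are made up, not fitted values), a Moffat profile has power-law wings that a double Gaussian cannot reproduce:

    import numpy as np

    def double_gaussian(r, a1=0.8, s1=1.2, a2=0.2, s2=2.5):
        return (a1 / (2 * np.pi * s1**2) * np.exp(-0.5 * (r / s1)**2) +
                a2 / (2 * np.pi * s2**2) * np.exp(-0.5 * (r / s2)**2))

    def moffat(r, alpha=2.5, beta=3.0):
        """Moffat profile; its r^(-2*beta) wings fall off far more slowly than
        a Gaussian, which is where the two-Gaussian model under-predicts flux."""
        return (beta - 1) / (np.pi * alpha**2) * (1 + (r / alpha)**2) ** (-beta)

    r = np.linspace(0, 15, 200)                 # radius in pixels
    wing_ratio = moffat(r) / double_gaussian(r)  # grows rapidly at large r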

Recover the 100 rows on edges of DECam DES+COSMOS frames

For DR1, we trim 100 rows from the bottom (top) of the N (S) CCDs in the DES and COSMOS r-band images, to exclude some bad flat-fielding. This is an interim solution until this flat-fielding is fixed. Implementation is hard-wired into Tractor.

DECam CP images need more than a constant sky model

The sky model needs more freedom than a single constant. An example is brick 3280p200 in DR1, where some contributing frames have sky gradients that result in a negative region, which then creates spurious (negative) objects.
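One simple option, sketched here with plain numpy (not the pipeline code), is a low-order 2-D polynomial fit to the source-masked pixels:

    import numpy as np

    def fit_poly_sky(image, good, order=2):
        """Least-squares 2-D polynomial sky model of the given order,
        fit only to pixels where 'good' is True (i.e. not on sources)."""
        ny, nx = image.shape
        y, x = np.mgrid[0:ny, 0:nx]
        x = (x / nx) - 0.5
        y = (y / ny) - 0.5
        terms = [x**i * y**j for i in range(order + 1)
                 for j in range(order + 1 - i)]
        A = np.vstack([t[good] for t in terms]).T
        coeffs, *_ = np.linalg.lstsq(A, image[good], rcond=None)
        return sum(c * t for c, t in zip(coeffs, terms))

    image = np.random.normal(1000.0, 5.0, size=(400, 400))  # stand-in frame
    good = np.ones_like(image, dtype=bool)                   # source mask goes here
    sky = fit_poly_sky(image, good, order=2)
    subtracted = image - sky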

Function for selection cuts for DR2 field list

Create a function in legacypipe that returns the selection cuts for the CCD image list to use in a data release (a sketch follows the DR2 list below).

For DR1, the set of cuts was:

  • CCDNMATCH >= 20 (at least 20 stars to determine the zero-point)
  • abs(ZPT - CCDZPT) < 0.05 (agreement with the full-frame zero-point)
  • CCDPHRMS < 0.2 (uniform photometry across the CCD)
  • CCDNUM != 31 (not the varying-gain S7 chip)
  • ZPT within 0.20 mag of 25.08 for g-band
  • ZPT within 0.15 mag of 25.29 for r-band
  • ZPT within 0.15 mag of 24.92 for z-band
  • DEC > -12 (in our footprint)

For DR2, the cuts are made inclusive of unphotometric data, up to about 0.5 mag of extinction:

  • CCDNMATCH >= 20 (at least 20 stars to determine the zero-point)
  • abs(ZPT - CCDZPT) < 0.10 (loose agreement with the full-frame zero-point)
  • ZPT within [25.08-0.50, 25.08+0.25] for g-band
  • ZPT within [25.29-0.50, 25.29+0.25] for r-band
  • ZPT within [24.92-0.50, 24.92+0.25] for z-band
  • DEC > -20 (in the DESI footprint)
  • EXPTIME >= 30
  • CCDNUM = 31 (S7) should mask outside the region [1:1023,1:4094]
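A sketch (not the pipeline function) of what such a function could look like for the DR2 cuts, assuming the CCDs table has been read (e.g. with astropy.table.Table.read on decals-ccds.fits) and that the column names used below match it:

    import numpy as np

    NOMINAL_ZPT = {'g': 25.08, 'r': 25.29, 'z': 24.92}

    def dr2_ccd_cuts(ccds):
        """Boolean mask of CCDs passing the DR2-style cuts listed above."""
        band = np.array([str(f).strip()[0] for f in ccds['filter']])
        nominal = np.array([NOMINAL_ZPT[b] for b in band])
        keep = (
            (ccds['ccdnmatch'] >= 20) &
            (np.abs(ccds['zpt'] - ccds['ccdzpt']) < 0.10) &
            (ccds['zpt'] >= nominal - 0.50) &
            (ccds['zpt'] <= nominal + 0.25) &
            (ccds['dec'] > -20.0) &
            (ccds['exptime'] >= 30.0)
        )
        # CCDNUM 31 (S7) is kept, but the region outside [1:1023, 1:4094]
        # must be masked downstream; that is not expressible as a per-CCD cut.
        return keep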

Compute depth maps for extended sources

In DR1, we computed the depth map images for point sources. The DESI target selection group is requesting depth maps for extended sources, perhaps for the r_exp=0.45 arcsec ELG sources. The DECaLS survey is meant to have uniform depth for those sources.

Track versions of CP frames and calibration files

Track which version of the calibrated (CP) frames each calibration file, such as the sky values, was computed from. This was the problem in dr1d, where the calib/sky values were computed from the DES-reduced frames but used for the CP-reduced frames.

Increase aperture size for photometric zero-points

Increase aperture size for photometric zero-points, perhaps from 3.5 arcsec radius to 7 arcsec. More importantly, the PSF should be normalized self-consistently such that it is unity within whatever aperture is chosen for this normalization. For the DR1 reductions, there was no such consistency imposed.
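A minimal sketch of the kind of consistency being asked for: renormalize the PSF model so that its flux inside the chosen zero-point aperture is exactly 1. The aperture radius, pixel scale, and stamp are placeholders.

    import numpy as np

    def normalize_psf_in_aperture(psf, aper_radius_arcsec=7.0, pixscale=0.262):
        """Scale a PSF stamp so its flux within the aperture is unity."""
        ny, nx = psf.shape
        y, x = np.mgrid[0:ny, 0:nx]
        r = np.hypot(x - (nx - 1) / 2.0, y - (ny - 1) / 2.0) * pixscale
        aper_flux = psf[r <= aper_radius_arcsec].sum()
        return psf / aper_flux

    psf_stamp = np.ones((63, 63)) / 63**2   # placeholder PSF model stamp
    psf_norm = normalize_psf_in_aperture(psf_stamp)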

Include catalog-level measure of PSF size

In DR1, there's no measure of the PSF size of the contributing images at the catalog level. This is complicated, because several images contributed to any given object. For the forced photometry catalogs of the single-epoch images, we could include the effective number of pixels of the PSF, n_eff = [ SUM(image) ]^2 / SUM(image^2) . For the stacked object measures in the Tractor catalogs, it's less obvious what to report -- perhaps a weighted sum of n_eff?

problems creating custom mosaics

I'm raising this as an issue rather than fixing this myself because I want to be sure I don't break any desired behavior/functionality you want that I'm not aware of.

This pertains to running 'runbrick' from the command line, not within another python script.

If "brick" and "radec" are both set, the code uses the default brick (2440p070) rather than creating a "custom" brickname based on the input radec. This can be solved by not setting a default brick.

In addition, when "radec" is passed, "outdir" is ignored.

Finally (and maybe this should be a new issue): should we move away from the deprecated optparse module in favor of argparse?
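A sketch of the argparse behavior being suggested: no default brick, so passing --radec produces a custom brick name instead of silently falling back to 2440p070, and --outdir is always honored. The custom brick naming convention shown here is only illustrative.

    import argparse

    def get_parser():
        p = argparse.ArgumentParser(description='run a brick (sketch)')
        p.add_argument('--brick', default=None,
                       help='brick name; omit when using --radec')
        p.add_argument('--radec', nargs=2, type=float, metavar=('RA', 'DEC'),
                       help='center a custom brick on these coordinates')
        p.add_argument('--outdir', default='.', help='output directory')
        return p

    args = get_parser().parse_args(['--radec', '242.0', '7.0', '--outdir', 'out'])
    if args.brick is None and args.radec is not None:
        ra, dec = args.radec
        # Build a "custom" brick name from the coordinates (illustrative format).
        brickname = 'custom-%06i%s%05i' % (int(1000 * ra),
                                           'm' if dec < 0 else 'p',
                                           int(1000 * abs(dec)))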

Include unphotometric data

In DR1, we excluded all exposures that were not definitively photometric. For DR2, we could include these since we're tying down each exposure to PS1 (at least for now).

Require sources to be significantly positive not negative fluxes

Only retain objects if they have positive flux contributing to the reduction of chi2. When assessing detection confidence, multiply the chi2 in each filter by the sign of the flux in that filter. Do that for both the chi2 cut and for the fractional chi2 cut. One problem from this is that we're generically getting some bright stars deblended as close pairs. Example: Brick 2410p070 OBJID 4486 & 4487 (dr1j reductions).

Note that Lupton applied a trick of doing a dot-product of the models for very close pairs of objects, and would remove one of the sources if that dot-product was near enough to unity.

Color terms in PSFs

The PSF models are not a function of the colors of sources within an image. In principle, the PSF should be chromatic, with the dominant effects being a shift of the centroid toward the horizon and a change in the PSF width, both of which are functions of the color of the source.

Note that this information was used for the "astrometric redshifts" of quasars by Kaczmarczik et al 2009 (http://arxiv.org/abs/0904.3909) .

Build PCA images for fringing in z-band

For DR1, the z-band fringing is removed by scaling a single median fringe image. We should do better by solving for a set of PCA eigen-images for the z-band sky, and then subtracting the best-fit linear combination of those from each CCD image.

The input for producing these PCA images would be the equivalent of CP-processed frames, but without the fringing removed. The EMPCA implementation by Stephen Bailey should work well for this: https://github.com/sbailey/empca
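A rough sketch of the idea using plain numpy SVD in place of the weighted EMPCA package mentioned above (the stack of sky-only frames with the fringe pattern still present is a placeholder):

    import numpy as np

    def build_eigenfringes(sky_stack, nvec=5):
        """sky_stack: (nframes, ny, nx) sky-only frames with fringing present."""
        nframes, ny, nx = sky_stack.shape
        flat = sky_stack.reshape(nframes, -1)
        flat = flat - flat.mean(axis=0)              # remove the mean sky
        _, _, vt = np.linalg.svd(flat, full_matrices=False)
        return vt[:nvec].reshape(nvec, ny, nx)

    def defringe(image, eigenfringes):
        """Subtract the best-fit linear combination of eigen-fringes."""
        A = eigenfringes.reshape(len(eigenfringes), -1).T
        coeffs, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
        return image - (A @ coeffs).reshape(image.shape)

    stack = np.random.normal(size=(20, 128, 128))    # placeholder sky frames
    eig = build_eigenfringes(stack, nvec=3)
    clean = defringe(np.random.normal(size=(128, 128)), eig)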

decals-sims improvement: do not make copies of the data

Implement this: "...we could have set it up a little differently, by overriding the "Decals" class so that its (new) decals.get_image_object() method would read the "tims" as usual, but then just-in-time insert the fake sources before returning the tims to the runbrick code. This isn't as traceable and modular as your setup, but might be a little easier to manage -- it wouldn't need to write files to disk; all the fake-source modifications happen in memory only."

Remove moon contamination from unWISE image stack

The unWISE image stack shows residual contamination from the moon. Dustin did attempt to make a cut on the moon proximity, but it appears to have not been quite aggressive enough. This has re-surfaced as lots of fake QSO targets from DECaLS DR1 around RA=343,DEC=0 and RA=347,DEC=0.

(Attachment: moon_unwise.pdf)

CR rejection needed

DR1 only included the CR rejection applied to individual CP-processed images. CR rejection should be an iterative step after a first round of Tractor fits, looking for CRs in the single-epoch residual images, and possibly comparing those residuals across multiple images when they exist.

Some metadata (AIRMASS) in decals-ccds.fits wrong

About 3% of the entries in the DR2 CCD list (decals-ccds.fits) have AIRMASS=0.
These entries don't appear to be from our program, but from:
2013A-0360
2013A-0529
2013A-0611
2013A-0716
2013A-0719
2013A-0723
2013B-0616

This means that the AIRMASS values aren't consistent with the coordinates + timestamps.
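A sketch of the consistency check implied above: recompute the airmass from the recorded coordinates and time with astropy and compare it with the tabulated value. Column names are assumed, and the CTIO site coordinates below are approximate.

    from astropy.coordinates import SkyCoord, EarthLocation, AltAz
    from astropy.time import Time
    import astropy.units as u

    # Approximate CTIO / Blanco location.
    ctio = EarthLocation(lat=-30.169 * u.deg, lon=-70.806 * u.deg,
                         height=2207 * u.m)

    def recomputed_airmass(ra_deg, dec_deg, mjd):
        coords = SkyCoord(ra=ra_deg * u.deg, dec=dec_deg * u.deg)
        altaz = coords.transform_to(AltAz(obstime=Time(mjd, format='mjd'),
                                          location=ctio))
        return altaz.secz

    # Entries with AIRMASS == 0, or far from the recomputed value, can then
    # be flagged, e.g.:
    #   bad = (ccds['airmass'] <= 0) | (abs(ccds['airmass'] - recomputed) > 0.1)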

Add ability to fix input sources, such as a known star catalog

For sources where parameters are known, we should be fixing those parameters in the Tractor fits. This is potentially most important for known stars, where DR1 has many of these mis-classified as extended objects. (Note this is made particularly difficult for the Tractor, since so many pixels in bright objects are masked.)

When we have GAIA (summer 2016?), there should be a very good catalog of known point-like objects over the full sky down to 20th mag. Prior to that, we could make a pretty good guess at a "known star" catalog by selecting sources from PS1, SDSS, or DECam that appear point-like and are very close in color to the stellar locus. This catalog need not be complete; it simply needs to be some set of stars where we have very high confidence that they should be point sources. It's less clear how to do this in regions of high Galactic extinction where the color locus is shifted, but maybe we don't much care, since our survey area is primarily at high Galactic latitude.

DECAM_FRACFLUX is low for objects blended with bright, masked stars

Objects that are obviously blended with bright stars have FRACFLUX near zero, which incorrectly gives the impression that they are not blended. This is because bright stars have lots of masked pixels, and those masked pixels are excluded from the calculation.

The problem is exacerbated by the CP reduced frames, which mask far too many pixels near bright-ish objects, making this problem very common. A partial solution would be to fix that masking in the CP frames. But we may also wish to compute this quantity either using the model properties, or using the models to fill in the masked pixels.
