spyffi's People

Contributors

doctormo, noqsi, xcthulhu

spyffi's Issues

Continuous Integration

It would be nice to have continuous integration so that accidental breakage of the design gets caught automatically.

Introduce CAMNUM Header

Right now it appears that SPyFFI is writing an empty string to the CAMERA header.

It should stop doing that and write a non-negative integer to CAMNUM to be consistent with tess_obsim.
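
A minimal sketch of the intended header write, assuming astropy.io.fits is used to build the output header (the helper name and the camera_number argument are illustrative, not existing SPyFFI code):

from astropy.io import fits

def set_camera_keyword(header, camera_number):
    # Record the camera as a non-negative integer under CAMNUM,
    # consistent with tess_obsim, and drop the empty CAMERA record.
    if camera_number < 0:
        raise ValueError("camera_number must be non-negative")
    header['CAMNUM'] = (int(camera_number), 'camera number')
    header.pop('CAMERA', None)
    return header

header = set_camera_keyword(fits.Header(), 1)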

RNG Seed

In order to have deterministic runs and tests, it would be nice to thread an RNG seed through SPyFFI

Scripts have module call and relative import

The scripts in ./scripts/ all use a relative import of the form from ..Operation import thing, which only works if you run the script as a module. This negates the shebang line that appears in several scripts.

I changed my local copies of the scripts to use from SPyFFI.Operation import thing, and they then ran against the pip install without issue. I have these changes locally, but didn't want to open a merge request without knowing why the project might have chosen this style over the typical executable-script-with-shebang approach.
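
A minimal illustration of the change (thing stands in for whatever each script actually imports):

#!/usr/bin/env python
# before: only works when the script is imported as part of the package
# from ..Operation import thing

# after: works against the pip-installed SPyFFI when the script is run directly
from SPyFFI.Operation import thing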

Reproducibility Strangeness

The reproducibility of the noiseless output in successive runs depends on whether you write the simulated data out.

The following script yields the same results in the "noiseless" output file if you run it twice. However, if you remove the line inputs['expose']['writesimulated']=False, successive runs will yield slightly different images.

from SPyFFI.Observation import Observation, default

# start from the default settings
inputs = default

inputs['camera']['label'] = 'teensynoiseless'
inputs['camera']['subarray'] = 100
inputs['catalog']['name'] = 'sky'
inputs['expose']['skipcosmics'] = True
inputs['expose']['jitter'] = False
inputs['expose']['writenoiseless']=True
inputs['expose']['writesimulated']=False
inputs['catalog']['lckw']['fractionofstarswithlc'] = 0
inputs['catalog']['lckw']['fractionwithextremelc'] = 0
inputs['catalog']['lckw']['fractionwithtrapezoid'] = 0
inputs['catalog']['lckw']['fractionwithrotation'] = 0
inputs['catalog']['lckw']['fractionwithcustom'] = 0
inputs['jitter']['amplifyinterexposurejitter'] = 0.0
inputs['camera']['variablefocus'] = False
inputs['camera']['aberrate'] = False
inputs['observation']['cadencestodo'] = {1800:14*48}
Observation(inputs).create()

Unit Test framework

We have ad hoc integration tests, but we really need an organized way to do unit tests on modules.
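
As a starting point, a minimal sketch of such a layout, assuming pytest as the runner (the file name and tests are placeholders; each real module would get its own tests/test_<module>.py with assertions against its public functions):

# tests/test_smoke.py

def test_package_imports():
    # cheapest possible unit test: the package can be imported at all
    import SPyFFI  # noqa: F401

def test_placeholder():
    # placeholder showing the shape of a module-level unit test
    assert 2 + 2 == 4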

Star positions off by 1/2 pix

The fiducial location for a star appears to be offset by 1/2 a pixel from the centroid of the model PSF, e.g. the pixels on which the simulated star falls are up and to the right of the input star location.
(example image attached to the original issue)

HTTM: Digitization & Clipping

Note: This is only to be done when converting from Calibrated to RAW

This calls the calibrated-to-RAW responsivity transformation from #34, applies a ceiling (this is the Clipping!), and converts the output to uint32.
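
A rough numpy sketch of that step (the function name is illustrative; psi and ceiling are supplied elsewhere, not defined here):

import numpy as np

def digitize_and_clip(calibrated_slice, psi, ceiling):
    raw = calibrated_slice * psi       # responsivity transformation from #34
    raw = np.clip(raw, 0, ceiling)     # the Clipping: apply a ceiling
    return raw.astype(np.uint32)       # digitize to unsigned integers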

Equinox

SPyFFI explicitly works in ICRS coordinates, implicitly at equinox 2000.0. We should add an explicit EQUINOX=2000.0 record to the output FITS headers. It shouldn’t hurt, and in the tangled FITS WCS web it may prevent some reader from precessing to EPOCH.

HTTM: Smear

  • Calibrated to RAW: Sum all image pixels in a column within a slice and multiply by a constant ξ calculated from the sequencer program Hemiola.fpe

This yields a vector of smear pixels whose length equals the number of columns in the slice

Set each of the ten smear rows at the top of the slice to this value. The smear rows are beneath the dark pixel rows at the top of the slice

  • RAW to Calibrated: Make a single vector which is the average of the smear rows at the top of the slice and subtract this vector from each image row in the slice (a sketch of both directions follows)
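
A rough numpy sketch of both directions, assuming the image pixels and the smear rows of a slice have already been separated out (how they sit relative to the dark rows is bookkeeping not shown here):

import numpy as np

def add_smear(image_pixels, xi):
    # Calibrated to RAW: one smear value per column, repeated over ten smear rows
    smear = xi * image_pixels.sum(axis=0)
    return np.tile(smear, (10, 1))

def remove_smear(image_pixels, smear_rows):
    # RAW to Calibrated: subtract the average smear row from every image row
    return image_pixels - smear_rows.mean(axis=0)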

HTTM: Video Bias

Write a conversion function and its inverse to model the following video bias effects per exposure, per slice (a sketch of the calibrated → RAW direction follows the list):

Note that, taken together, Baseline and Drift can be modeled as a single constant Gaussian noise term when converting from calibrated to RAW.

  • Baseline (μ): This is a scalar constant that needs to be added to (or subtracted from) each pixel. It defaults to 3000.
    • Calibrated → RAW: When adding to each pixel, pass μ in as a parameter; also add it to the dark pixels
    • RAW → Calibrated: When subtracting, estimate the baseline from the dark pixels (using the median); note that the estimated baseline also includes the Drift below
  • Drift (σ): This is a constant value present in every pixel of a slice in an exposure, but it is qualitatively distinct from the Baseline. It is optional and defaults to 0.
    • Calibrated → RAW: For each transformation of a slice, add a small normally distributed random value, constant over that slice
    • RAW → Calibrated: Handled by the baseline transformation above, so not implemented
  • Start of Line Ringing: This is a vector added to (or subtracted from) each row in a slice.
    • Calibrated → RAW: When adding, this is a constant vector obtained from the calibration team (consider speaking with Jerry Roberts)
    • RAW → Calibrated: When subtracting, this is computed by taking the average of the dark pixels in each column of the slice
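
A rough numpy sketch of the calibrated → RAW direction (the defaults of 3000 and 0 come from the list above; everything else, including the function name, is illustrative):

import numpy as np

def add_video_bias(image_slice, baseline=3000.0, drift_sigma=0.0,
                   start_of_line_ringing=None, rng=np.random):
    biased = image_slice + baseline               # Baseline (mu), every pixel
    if drift_sigma > 0:
        biased += rng.normal(0.0, drift_sigma)    # Drift: one draw per slice
    if start_of_line_ringing is not None:
        biased += start_of_line_ringing           # vector added to every row
    return biased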

Seed runs with random number generator seeds

In order to have reliable tests, we need to be able to seed the random number generator.

As of last week LightCurve.py can now in principle be seeded, but the seed needs to be injected somehow.

Other modules also need this treatment.
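
An illustrative sketch of the kind of plumbing this asks for (a 'seed' entry in the inputs dictionary is not an existing SPyFFI option, just an assumption about how a seed could be injected):

import numpy as np

seed = 42
rng = np.random.RandomState(seed)

# Modules like LightCurve.py would then draw from this shared generator
# (or be handed the integer seed) instead of the global numpy state:
amplitude = rng.uniform(0.001, 0.01)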

HTTM: Responsivity & Compression

Implement a formula and its inverse to convert electron counts to digital units, and make transformations for modifying slices back and forth given these conversions

  • When converting a calibrated FITS image to RAW, this is done by multiplying by a constant ψ, which is approximately 0.2 digital units per electron and comes from calibration data; consider talking to Joel Villasenor to get an estimate

    To handle compression, apply a near identity, slightly non-linear transformation function (derived empirically). Note that a theoretical model for this exists already due to John Doty.


Make sure to record whether data is in ADU or electrons, for bookkeeping purposes
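
A rough sketch of the two directions (psi ≈ 0.2 ADU per electron per the above; compress() is a placeholder near-identity curve standing in for the empirically derived function or John Doty's model):

import numpy as np

PSI = 0.2   # digital units per electron, from calibration data
A = 1e-7    # strength of the placeholder nonlinearity

def compress(adu):
    return adu - A * adu ** 2            # near-identity, slightly nonlinear

def electrons_to_adu(electrons):
    return compress(electrons * PSI)     # calibrated -> RAW

def adu_to_electrons(adu):
    decompressed = (1 - np.sqrt(1 - 4 * A * adu)) / (2 * A)   # invert compress()
    return decompressed / PSI            # RAW -> calibrated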

Cosmic Ray Injection Nonfunctional

Turning on cosmic ray generation with

inputs['expose']['skipcosmics'] = False  

breaks SPyFFI with the following traceback:

File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/Users/jvaughn/Desktop/spiffyFiles/SPyFFI/scripts/demonstration.py", line 147, in <module>
    o.create()
  File "/Users/jvaughn/Desktop/spiffyFiles/SPyFFI/Observation.py", line 77, in create
    c.expose(**self.inputs['expose'])
  File "/Users/jvaughn/Desktop/spiffyFiles/SPyFFI/CCD.py", line 920, in expose
    cosmics = self.addCosmics(write=writecosmics, version=cosmicsversion, diffusion=cosmicsdiffusion, correctcosmics=correctcosmics)
  File "/Users/jvaughn/Desktop/spiffyFiles/SPyFFI/CCD.py", line 637, in addCosmics
    gradient=gradient, version=version, diffusion=diffusion, rate=rate)
TypeError: cosmicImage() got an unexpected keyword argument 'version'

Use logging instead of Talker

There are many, many places where SPyFFI uses print and the Talker class from zachopy rather than the conventional logging module built into Python.
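
The conventional pattern this asks for is roughly the following (illustrative, not existing SPyFFI code):

import logging

logger = logging.getLogger(__name__)   # one module-level logger per file

def expose():
    logger.info("starting exposure")   # instead of print or a Talker call
    logger.debug("per-pixel detail that only matters when debugging")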

Modify saturation values in SPyFFI

Modify the saturation levels to be 200,000 electrons per individual read, to be closer to lab tests.

While saturation should now fall under the purview of HTTM, it would be nice to preserve the capability for SPyFFI to produce perfectly calibrated imaging data. To do so, we should leave in the option to saturate stars.

Install the repository manually

Hello all,

I would like to ask if there is any developer documentation. I'm looking for a way to install SPyFFI manually, directly from the repository. I tried cloning the repo and running python setup.py install, but it didn't work.

Get Rid of imports.py

imports.py always triggers the following message when attempting to install:

zip_safe flag not set; analyzing archive contents...
SPyFFI.imports: module references __file__

Moreover, for many files it unnecessarily imports a large number of symbols, increasing the chance that one of them gets shadowed.

Need to simulate a rectangular CCD

SPyFFI can only simulate a square CCD. While the TESS CCDs are officially 2048 × 2048, they really have 2068 rows. Except for a few bottom rows covered in Al, these extra rows are optically sensitive, so they will be a source of blooming and streaking just like the rest of the array, and they will appear in the raw images. There is, in fact, no reason to avoid using them for science, too. We need to include them in simulations.

Cosmic Ray Code not Compiling

In the course of installing SPyFFI on another user's computer, I could not get Al Levine's cosmic ray code to compile. I don't have the exact error message, but the gist of it was a missing file named setup.py in the cosmical_realistic directory.

Copying the setup.py from the cosmical_realistic directory of Zach's legacy branch into the new one fixed the problem. That file's entire contents are:

from setuptools import setup, Extension
import numpy.distutils.misc_util

setup(install_requires=['numpy'],
      ext_modules=[Extension("_cosmical", ["_cosmical.c", "cosmical.c", "twister.c", "seed_tw_ran.c"],
                             include_dirs=numpy.distutils.misc_util.get_numpy_include_dirs())], )

No open source license visible

There doesn't appear to be a license file in the root of the repository. The setup.py file doesn't mention an open source license either, so it's hard to tell from the repository what license the code is under.

Jitter Intermediate Data Products are Identical

It appears that the different jitter data products in SPYFFIDATA are identical: the pixelized PSF libraries for the 2 s, 120 s, and 1800 s cadences all hash to the same value.

To reproduce:

# mkdir -p /tmp/spyffidata
# cd /tmp/spyffidata
# wget -qO- https://www.dropbox.com/s/0e4c2uk34phv4qx/SPyFFI_coreinputs.tar.gz | tar xz
# find . -type f -print0 | xargs -0 shasum

c4c751dee27b7047f32c980b8cf210ac0bfd36dc  ./inputs/AttErrTimeArcsec_80k.dat
364da0b684757ea59e27214f99f32fc54ca4ee23  ./inputs/cartoon.jitter
88264db54846de3ad885a3fa04765f98ce448bb2  ./inputs/covey_pickles.txt
04da11b315b2b1091d8edc8aa1c4b342b8f06f72  ./inputs/davenport_table1.txt
034c4911b6a2bc4a92d3421dee2f779071142f93  ./inputs/pickles_table2.txt
e8299c812c7353c72fc82fef0fed185956d520a4  ./intermediates/psfs/RRUasbuilt/focus0and10_stellartemp4350/originaldeblibrary.npy
1c0557c34b57abd0efe00b4100ca208d19b09efb  ./intermediates/psfs/RRUasbuilt/focus0and10_stellartemp4350/pixelizedlibrary_cartoon.jitter.cadence120s.unscaled_perfectpixels_11positions_11offsets.npy
1c0557c34b57abd0efe00b4100ca208d19b09efb  ./intermediates/psfs/RRUasbuilt/focus0and10_stellartemp4350/pixelizedlibrary_cartoon.jitter.cadence1800s.unscaled_perfectpixels_11positions_11offsets.npy
1c0557c34b57abd0efe00b4100ca208d19b09efb  ./intermediates/psfs/RRUasbuilt/focus0and10_stellartemp4350/pixelizedlibrary_cartoon.jitter.cadence2s.unscaled_perfectpixels_11positions_11offsets.npy

HTTM: Undershoot

  • Calibrated to RAW: Apply simple convolution

         Simulated_RAW_Row(column_index) = 
                      Calibrated_Row(column_index) - u * Calibrated_Row(column_index-1)
    

to each row in a slice

u is a scalar constant obtained from calibration data; consider talking with Jerry Roberts

  • RAW to Calibrated: Apply simple convolution

         Calibrated_Row(column_index) = 
                      RAW_Row(column_index) - u * RAW_Row(column_index-1)
    

to each row in a slice

This transformation is not perfect, but it is a reasonable approximation because u is small (~ 0.001), so the residual errors are correspondingly small.
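
A rough numpy sketch of the per-row operation described above (u ≈ 0.001; the treatment of column 0, which has no left neighbour, is an assumption here, and per the text the same subtraction form is applied in both directions):

import numpy as np

def undershoot_transform(row, u=0.001):
    row = np.asarray(row, dtype=float)
    out = row.copy()
    out[1:] = row[1:] - u * row[:-1]   # Row(column_index) - u * Row(column_index - 1)
    return out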

Conflict in demonstrations

Via Jacobi Kosairek:

The demonstration.py file in SPyFFI/scripts on GitHub sets the compression for each cadence length twice, on lines 106 and 133.

How does "logging" work?

The code used to print text updates of every step of the process to the terminal. I gather this has now been absorbed into a "logger" object that, I would imagine, sends this text to a log file, presumably ~/.tess/spyffi/SPyFFI.log, but that file stays at 0 bytes for me, even after lots of stuff has happened.

Is there a simple way to ask the loggers to report text to the terminal?
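
If the project is using the standard logging module, the usual way to get terminal output is to configure a console handler before running anything (this is generic logging usage, not something SPyFFI-specific):

import logging

logging.basicConfig(level=logging.INFO)   # or logging.DEBUG for more detail

Whether anything ends up in ~/.tess/spyffi/SPyFFI.log depends on how SPyFFI attaches its handlers; if no file handler is configured, that file staying at 0 bytes would be expected.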

Remove HTTM Features From SPyFFI

SPyFFI needs to remove the following features, which are migrating to HTTM

(1) blooming
(2) readout noise
(3) shot noise
(4) smear(?) (not sure if SPyFFI supports this)

Fix setup.py

Right now setup.py does not fully install SPyFFI.

What needs to be done:

  • Figure out what packages are actually needed for SPyFFI to operate
  • Get the unit tests not to crash when using SPyFFI in a virtual environment
  • Break cosmic ray removal code into a separate project

Remove pyds9 as an Upstream Dependency

It does not appear that pyds9 is necessary for generating images using SPyFFI.

It could be removed; however, it is also a dependency of zachopy, which we may wish to vendor and refactor.

Python 3.0 Support

As mentioned in #27, SPyFFI does not currently support Python 3.

This is unfortunate, as Python 3 is the default Python in modern operating systems; however, it is hard to assess how many of SPyFFI's upstream requirements also depend on Python 2.7.

Do Not Write READNOIS FITS Keyword In a Noiseless File

Currently, SPyFFI writes the READNOIS FITS keyword even if it does not simulate readout noise.

SPyFFI should not simulate readout noise, nor should it write this FITS keyword to the header.

Downstream in HTTM we are going to throw a warning if the user tries to process a file with this forbidden keyword. See TESScience/httm#25 for more details.
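
For illustration only (this is not the actual HTTM code; see TESScience/httm#25), the downstream check amounts to something like:

import warnings
from astropy.io import fits

def check_noiseless_header(fits_path):
    header = fits.getheader(fits_path)
    if 'READNOIS' in header:
        warnings.warn("READNOIS present in a supposedly noiseless file: %s" % fits_path)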

HTTM: Non-separable pattern noise

Note: This is only to be done when converting from Calibrated to RAW

Add a pattern noise array, obtained from the calibration team, to the slice

Generate Documentation

It would be nice if the documentation in documentation/input.py could be rendered as a nicely formatted Markdown file.
