
tifffile's Introduction

Read and write TIFF files

Tifffile is a Python library to

  1. store NumPy arrays in TIFF (Tagged Image File Format) files, and
  2. read image and metadata from TIFF-like files used in bioimaging.

Image and metadata can be read from TIFF, BigTIFF, OME-TIFF, DNG, STK, LSM, SGI, NIHImage, ImageJ, MMStack, NDTiff, FluoView, ScanImage, SEQ, GEL, SVS, SCN, SIS, BIF, ZIF (Zoomable Image File Format), QPTIFF (QPI, PKI), NDPI, Philips DP, and GeoTIFF formatted files.

Image data can be read as NumPy arrays or Zarr arrays/groups from strips, tiles, pages (IFDs), SubIFDs, higher order series, and pyramidal levels.

Image data can be written to TIFF, BigTIFF, OME-TIFF, and ImageJ hyperstack compatible files in multi-page, volumetric, pyramidal, memory-mappable, tiled, predicted, or compressed form.

Many compression and predictor schemes are supported via the imagecodecs library, including LZW, PackBits, Deflate, PIXTIFF, LZMA, LERC, Zstd, JPEG (8 and 12-bit, lossless), JPEG 2000, JPEG XR, JPEG XL, WebP, PNG, EER, Jetraw, 24-bit floating-point, and horizontal differencing.

Tifffile can also be used to inspect TIFF structures, read image data from multi-dimensional file sequences, write fsspec ReferenceFileSystem for TIFF files and image file sequences, patch TIFF tag values, and parse many proprietary metadata formats.

Author

Christoph Gohlke

License

BSD 3-Clause

Version

2024.5.10

DOI

10.5281/zenodo.6795860

Quickstart

Install the tifffile package and all dependencies from the Python Package Index:

python -m pip install -U tifffile[all]

Tifffile is also available in other package repositories such as Anaconda, Debian, and MSYS2.
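For instance, an install from the conda-forge channel typically looks like this (channel and package names assumed current):

```shell
conda install -c conda-forge tifffile
```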

The tifffile library is type annotated and documented via docstrings:

python -c "import tifffile; help(tifffile)"

Tifffile can be used as a console script to inspect and preview TIFF files:

python -m tifffile --help

See Examples for using the programming interface.

Source code and support are available on GitHub.

Support is also provided on the image.sc forum.

Requirements

This revision was tested with the following requirements and dependencies (other versions may work):

  • CPython 3.9.13, 3.10.11, 3.11.9, 3.12.3, 64-bit
  • NumPy 1.26.4
  • Imagecodecs 2024.1.1 (required for encoding or decoding LZW, JPEG, etc. compressed segments)
  • Matplotlib 3.8.4 (required for plotting)
  • Lxml 5.2.1 (required only for validating and printing XML)
  • Zarr 2.18.0 (required only for opening Zarr stores)
  • Fsspec 2024.3.1 (required only for opening ReferenceFileSystem files)

Revisions

2024.5.10

  • Pass 5082 tests.
  • Support reading JPEGXL compression in DNG 1.7.
  • Read invalid TIFF created by IDEAS software.

2024.5.3

  • Fix reading incompletely written LSM.
  • Fix reading Philips DP with extra rows of tiles (#253, breaking).

2024.4.24

  • Fix compatibility issue with numpy 2 (#252).

2024.4.18

  • Fix write_fsspec when last row of tiles is missing in Philips slide (#249).
  • Add option not to quote file names in write_fsspec.
  • Allow compressing bilevel images with deflate, LZMA, and Zstd.

2024.2.12

  • Deprecate dtype, add chunkdtype parameter in FileSequence.asarray.
  • Add imreadargs parameters passed to FileSequence.imread.

2024.1.30

  • Fix compatibility issue with numpy 2 (#238).
  • Enable DeprecationWarning for tuple compression argument.
  • Parse sequence of numbers in xml2dict.

2023.12.9

  • Read 32-bit Indica Labs TIFF as float32.
  • Fix UnboundLocalError reading big LSM files without time axis.
  • Use os.sched_getaffinity, if available, to get the number of CPUs (#231).
  • Limit the number of default worker threads to 32.

2023.9.26

  • Lazily convert dask array to ndarray when writing.
  • Allow to specify buffersize for reading and writing.
  • Fix IndexError reading some corrupted files with ZarrTiffStore (#227).

2023.9.18

  • Raise exception when writing non-volume data with volumetric tiles (#225).
  • Improve multi-threaded writing of compressed multi-page files.
  • Fix fsspec reference for big-endian files with predictors.

2023.8.30

  • Support exclusive file creation mode (#221, #223).

2023.8.25

  • Verify shaped metadata is compatible with page shape.
  • Support out parameter when returning selection from imread (#222).

2023.8.12

  • Support decompressing EER frames.
  • Facilitate filtering logged warnings (#216).
  • Read more tags from UIC1Tag (#217).
  • Fix premature closing of files in main (#218).
  • Don't force matplotlib backend to tkagg in main (#219).
  • Add py.typed marker.
  • Drop support for imagecodecs < 2023.3.16.

2023.7.18

  • Limit threading via TIFFFILE_NUM_THREADS environment variable (#215).
  • Remove maxworkers parameter from tiff2fsspec (breaking).

2023.7.10

  • Increase default strip size to 256 KB when writing with compression.
  • Fix ZarrTiffStore with non-default chunkmode.

2023.7.4

  • Add option to return selection from imread (#200).
  • Fix reading OME series with missing trailing frames (#199).
  • Fix fsspec reference for WebP compressed segments missing alpha channel.
  • Fix linting issues.
  • Detect files written by Agilent Technologies.
  • Drop support for Python 3.8 and numpy < 1.21 (NEP29).

2023.4.12

  • Do not write duplicate ImageDescription tags from extratags (breaking).
  • Support multifocal SVS files (#193).
  • Log warning when filtering out extratags.
  • Fix writing OME-TIFF with image description in extratags.
  • Ignore invalid predictor tag value if prediction is not used.
  • Raise KeyError if ZarrStore is missing requested chunk.

2023.3.21

  • Fix reading MMstack with missing data (#187).

2023.3.15

  • Fix corruption using tile generators with prediction/compression (#185).
  • Add parser for Micro-Manager MMStack series (breaking).
  • Return micromanager_metadata IndexMap as numpy array (breaking).
  • Revert optimizations for Micro-Manager OME series.
  • Do not use numcodecs zstd in write_fsspec (kerchunk issue 317).
  • More type annotations.

2023.2.28

  • Fix reading some Micro-Manager metadata from corrupted files.
  • Speed up reading Micro-Manager indexmap for creation of OME series.

2023.2.27

  • Use Micro-Manager indexmap offsets to create virtual TiffFrames.
  • Fixes for future imagecodecs.

2023.2.3

  • Fix overflow in calculation of databytecounts for large NDPI files.

2023.2.2

  • Fix regression reading layered NDPI files.
  • Add option to specify offset in FileHandle.read_array.

2023.1.23

  • Support reading NDTiffStorage.
  • Support reading PIXTIFF compression.
  • Support LERC with Zstd or Deflate compression.
  • Do not write duplicate and select extratags.
  • Allow to write uncompressed image data beyond 4 GB in classic TIFF.
  • Add option to specify chunkshape and dtype in FileSequence.asarray.
  • Add option for imread to write to output in FileSequence.asarray (#172).
  • Add function to read GDAL structural metadata.
  • Add function to read NDTiff.index files.
  • Fix IndexError accessing TiffFile.mdgel_metadata in non-MDGEL files.
  • Fix unclosed file ResourceWarning in TiffWriter.
  • Fix non-bool predictor arguments (#167).
  • Relax detection of OME-XML (#173).
  • Rename some TiffFrame parameters (breaking).
  • Deprecate squeeze_axes (will change signature).
  • Use defusexml in xml2dict.

2022.10.10

Refer to the CHANGES file for older revisions.

Notes

TIFF, the Tagged Image File Format, was created by the Aldus Corporation and Adobe Systems Incorporated. STK, LSM, FluoView, SGI, SEQ, GEL, QPTIFF, NDPI, SCN, SVS, ZIF, BIF, and OME-TIFF, are custom extensions defined by Molecular Devices (Universal Imaging Corporation), Carl Zeiss MicroImaging, Olympus, Silicon Graphics International, Media Cybernetics, Molecular Dynamics, PerkinElmer, Hamamatsu, Leica, ObjectivePathology, Roche Digital Pathology, and the Open Microscopy Environment consortium, respectively.

Tifffile supports a subset of the TIFF6 specification, mainly 8, 16, 32, and 64-bit integer, 16, 32 and 64-bit float, grayscale and multi-sample images. Specifically, CCITT and OJPEG compression, chroma subsampling without JPEG compression, color space transformations, samples with differing types, or IPTC, ICC, and XMP metadata are not implemented.
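When it is unclear whether a particular file can be decoded, one way to check is to inspect the compression scheme of its pages before reading. A minimal sketch (the file name is arbitrary, and the file is created here only for illustration):

```python
import numpy
import tifffile

# Write a small uncompressed file, then inspect the compression
# scheme of its first page before attempting to decode it.
tifffile.imwrite('temp_check.tif', numpy.zeros((8, 8), 'uint8'))
with tifffile.TiffFile('temp_check.tif') as tif:
    page = tif.pages[0]
    # page.compression is an IntEnum; value 1 means no compression
    print(int(page.compression))  # 1
```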

Besides classic TIFF, tifffile supports several TIFF-like formats that do not strictly adhere to the TIFF6 specification. Some formats allow file and data sizes to exceed the 4 GB limit of the classic TIFF:

  • BigTIFF is identified by version number 43 and uses different file header, IFD, and tag structures with 64-bit offsets. The format also adds 64-bit data types. Tifffile can read and write BigTIFF files.
  • ImageJ hyperstacks store all image data, which may exceed 4 GB, contiguously after the first IFD. Files > 4 GB contain one IFD only. The size and shape of the up to 6-dimensional image data can be determined from the ImageDescription tag of the first IFD, which is Latin-1 encoded. Tifffile can read and write ImageJ hyperstacks.
  • OME-TIFF files store up to 8-dimensional image data in one or multiple TIFF or BigTIFF files. The UTF-8 encoded OME-XML metadata found in the ImageDescription tag of the first IFD defines the position of TIFF IFDs in the high dimensional image data. Tifffile can read OME-TIFF files (except multi-file pyramidal) and write NumPy arrays to single-file OME-TIFF.
  • Micro-Manager NDTiff stores multi-dimensional image data in one or more classic TIFF files. Metadata contained in a separate NDTiff.index binary file defines the position of the TIFF IFDs in the image array. Each TIFF file also contains metadata in a non-TIFF binary structure at offset 8. Downsampled image data of pyramidal datasets are stored in separate folders. Tifffile can read NDTiff files. Version 0 and 1 series, tiling, stitching, and multi-resolution pyramids are not supported.
  • Micro-Manager MMStack stores 6-dimensional image data in one or more classic TIFF files. Metadata contained in non-TIFF binary structures and JSON strings define the image stack dimensions and the position of the image frame data in the file and the image stack. The TIFF structures and metadata are often corrupted or wrong. Tifffile can read MMStack files.
  • Carl Zeiss LSM files store all IFDs below 4 GB and wrap around 32-bit StripOffsets pointing to image data above 4 GB. The StripOffsets of each series and position require separate unwrapping. The StripByteCounts tag contains the number of bytes for the uncompressed data. Tifffile can read LSM files of any size.
  • MetaMorph Stack, STK files contain additional image planes stored contiguously after the image data of the first page. The total number of planes is equal to the count of the UIC2tag. Tifffile can read STK files.
  • ZIF, the Zoomable Image File format, is a subspecification of BigTIFF with SGI's ImageDepth extension and additional compression schemes. Only little-endian, tiled, interleaved, 8-bit per sample images with JPEG, PNG, JPEG XR, and JPEG 2000 compression are allowed. Tifffile can read and write ZIF files.
  • Hamamatsu NDPI files use some 64-bit offsets in the file header, IFD, and tag structures. Single, LONG typed tag values can exceed 32-bit. The high bytes of 64-bit tag values and offsets are stored after IFD structures. Tifffile can read NDPI files > 4 GB. JPEG compressed segments with dimensions >65530 or missing restart markers cannot be decoded with common JPEG libraries. Tifffile works around this limitation by separately decoding the MCUs between restart markers, which performs poorly. BitsPerSample, SamplesPerPixel, and PhotometricInterpretation tags may contain wrong values, which can be corrected using the value of tag 65441.
  • Philips TIFF slides store padded ImageWidth and ImageLength tag values for tiled pages. The values can be corrected using the DICOM_PIXEL_SPACING attributes of the XML formatted description of the first page. Tile offsets and byte counts may be 0. Tifffile can read Philips slides.
  • Ventana/Roche BIF slides store tiles and metadata in a BigTIFF container. Tiles may overlap and require stitching based on the TileJointInfo elements in the XMP tag. Volumetric scans are stored using the ImageDepth extension. Tifffile can read BIF and decode individual tiles but does not perform stitching.
  • ScanImage optionally allows corrupted non-BigTIFF files > 2 GB. The values of StripOffsets and StripByteCounts can be recovered using the constant differences of the offsets of IFD and tag values throughout the file. Tifffile can read such files if the image data are stored contiguously in each page.
  • GeoTIFF sparse files allow strip or tile offsets and byte counts to be 0. Such segments are implicitly set to 0 or the NODATA value on reading. Tifffile can read GeoTIFF sparse files.
  • Tifffile shaped files store the array shape and user-provided metadata of multi-dimensional image series in JSON format in the ImageDescription tag of the first page of the series. The format allows for multiple series, SubIFDs, sparse segments with zero offset and byte count, and truncated series, where only the first page of a series is present, and the image data are stored contiguously. No other software besides Tifffile supports the truncated format.
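The shaped format can be seen in action with a short round trip. A sketch (the file name and the 'comment' metadata key are arbitrary):

```python
import numpy
import tifffile

# Write an array with user metadata; tifffile stores the shape and
# metadata as JSON in the ImageDescription tag of the first page.
data = numpy.zeros((4, 16, 16), 'uint8')
tifffile.imwrite(
    'temp_shaped.tif', data, metadata={'axes': 'ZYX', 'comment': 'demo'}
)
with tifffile.TiffFile('temp_shaped.tif') as tif:
    meta = tif.shaped_metadata[0]  # parsed JSON of the first series
    print(meta['shape'], meta['comment'])
```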

Other libraries for reading, writing, inspecting, or manipulating scientific TIFF files from Python are aicsimageio, apeer-ometiff-library, bigtiff, fabio.TiffIO, GDAL, imread, large_image, openslide-python, opentile, pylibtiff, pylsm, pymimage, python-bioformats, pytiff, scanimagetiffreader-python, SimpleITK, slideio, tiffslide, tifftools, tyf, xtiff, and ndtiff.

Examples

Write a NumPy array to a single-page RGB TIFF file:

>>> data = numpy.random.randint(0, 255, (256, 256, 3), 'uint8')
>>> imwrite('temp.tif', data, photometric='rgb')

Read the image from the TIFF file as NumPy array:

>>> image = imread('temp.tif')
>>> image.shape
(256, 256, 3)

Use the photometric and planarconfig arguments to write a 3x3x3 NumPy array to an interleaved RGB, a planar RGB, or a 3-page grayscale TIFF:

>>> data = numpy.random.randint(0, 255, (3, 3, 3), 'uint8')
>>> imwrite('temp.tif', data, photometric='rgb')
>>> imwrite('temp.tif', data, photometric='rgb', planarconfig='separate')
>>> imwrite('temp.tif', data, photometric='minisblack')

Use the extrasamples argument to specify how extra components are interpreted, for example, for an RGBA image with unassociated alpha channel:

>>> data = numpy.random.randint(0, 255, (256, 256, 4), 'uint8')
>>> imwrite('temp.tif', data, photometric='rgb', extrasamples=['unassalpha'])

Write a 3-dimensional NumPy array to a multi-page, 16-bit grayscale TIFF file:

>>> data = numpy.random.randint(0, 2**12, (64, 301, 219), 'uint16')
>>> imwrite('temp.tif', data, photometric='minisblack')

Read the whole image stack from the multi-page TIFF file as NumPy array:

>>> image_stack = imread('temp.tif')
>>> image_stack.shape
(64, 301, 219)
>>> image_stack.dtype
dtype('uint16')

Read the image from the first page in the TIFF file as NumPy array:

>>> image = imread('temp.tif', key=0)
>>> image.shape
(301, 219)

Read images from a selected range of pages:

>>> images = imread('temp.tif', key=range(4, 40, 2))
>>> images.shape
(18, 301, 219)

Iterate over all pages in the TIFF file and successively read images:

>>> with TiffFile('temp.tif') as tif:
...     for page in tif.pages:
...         image = page.asarray()
...

Get information about the image stack in the TIFF file without reading any image data:

>>> tif = TiffFile('temp.tif')
>>> len(tif.pages)  # number of pages in the file
64
>>> page = tif.pages[0]  # get shape and dtype of image in first page
>>> page.shape
(301, 219)
>>> page.dtype
dtype('uint16')
>>> page.axes
'YX'
>>> series = tif.series[0]  # get shape and dtype of first image series
>>> series.shape
(64, 301, 219)
>>> series.dtype
dtype('uint16')
>>> series.axes
'QYX'
>>> tif.close()

Inspect the "XResolution" tag from the first page in the TIFF file:

>>> with TiffFile('temp.tif') as tif:
...     tag = tif.pages[0].tags['XResolution']
...
>>> tag.value
(1, 1)
>>> tag.name
'XResolution'
>>> tag.code
282
>>> tag.count
1
>>> tag.dtype
<DATATYPE.RATIONAL: 5>

Iterate over all tags in the TIFF file:

>>> with TiffFile('temp.tif') as tif:
...     for page in tif.pages:
...         for tag in page.tags:
...             tag_name, tag_value = tag.name, tag.value
...

Overwrite the value of an existing tag, for example, XResolution:

>>> with TiffFile('temp.tif', mode='r+') as tif:
...     _ = tif.pages[0].tags['XResolution'].overwrite((96000, 1000))
...

Write a 5-dimensional floating-point array using BigTIFF format, separate color components, tiling, Zlib compression level 8, horizontal differencing predictor, and additional metadata:

>>> data = numpy.random.rand(2, 5, 3, 301, 219).astype('float32')
>>> imwrite(
...     'temp.tif',
...     data,
...     bigtiff=True,
...     photometric='rgb',
...     planarconfig='separate',
...     tile=(32, 32),
...     compression='zlib',
...     compressionargs={'level': 8},
...     predictor=True,
...     metadata={'axes': 'TZCYX'},
... )

Write a 10 fps time series of volumes with xyz voxel size 2.6755x2.6755x3.9474 micron^3 to an ImageJ hyperstack formatted TIFF file:

>>> volume = numpy.random.randn(6, 57, 256, 256).astype('float32')
>>> image_labels = [f'{i}' for i in range(volume.shape[0] * volume.shape[1])]
>>> imwrite(
...     'temp.tif',
...     volume,
...     imagej=True,
...     resolution=(1.0 / 2.6755, 1.0 / 2.6755),
...     metadata={
...         'spacing': 3.947368,
...         'unit': 'um',
...         'finterval': 1 / 10,
...         'fps': 10.0,
...         'axes': 'TZYX',
...         'Labels': image_labels,
...     },
... )

Read the volume and metadata from the ImageJ hyperstack file:

>>> with TiffFile('temp.tif') as tif:
...     volume = tif.asarray()
...     axes = tif.series[0].axes
...     imagej_metadata = tif.imagej_metadata
...
>>> volume.shape
(6, 57, 256, 256)
>>> axes
'TZYX'
>>> imagej_metadata['slices']
57
>>> imagej_metadata['frames']
6

Memory-map the contiguous image data in the ImageJ hyperstack file:

>>> memmap_volume = memmap('temp.tif')
>>> memmap_volume.shape
(6, 57, 256, 256)
>>> del memmap_volume

Create a TIFF file containing an empty image and write to the memory-mapped NumPy array (note: this does not work with compression or tiling):

>>> memmap_image = memmap(
...     'temp.tif', shape=(256, 256, 3), dtype='float32', photometric='rgb'
... )
>>> type(memmap_image)
<class 'numpy.memmap'>
>>> memmap_image[255, 255, 1] = 1.0
>>> memmap_image.flush()
>>> del memmap_image

Write two NumPy arrays to a multi-series TIFF file (note: other TIFF readers will not recognize the two series; use the OME-TIFF format for better interoperability):

>>> series0 = numpy.random.randint(0, 255, (32, 32, 3), 'uint8')
>>> series1 = numpy.random.randint(0, 255, (4, 256, 256), 'uint16')
>>> with TiffWriter('temp.tif') as tif:
...     tif.write(series0, photometric='rgb')
...     tif.write(series1, photometric='minisblack')
...

Read the second image series from the TIFF file:

>>> series1 = imread('temp.tif', series=1)
>>> series1.shape
(4, 256, 256)

Successively write the frames of one contiguous series to a TIFF file:

>>> data = numpy.random.randint(0, 255, (30, 301, 219), 'uint8')
>>> with TiffWriter('temp.tif') as tif:
...     for frame in data:
...         tif.write(frame, contiguous=True)
...

Append an image series to the existing TIFF file (note: this does not work with ImageJ hyperstack or OME-TIFF files):

>>> data = numpy.random.randint(0, 255, (301, 219, 3), 'uint8')
>>> imwrite('temp.tif', data, photometric='rgb', append=True)

Create a TIFF file from a generator of tiles:

>>> data = numpy.random.randint(0, 2**12, (31, 33, 3), 'uint16')
>>> def tiles(data, tileshape):
...     for y in range(0, data.shape[0], tileshape[0]):
...         for x in range(0, data.shape[1], tileshape[1]):
...             yield data[y : y + tileshape[0], x : x + tileshape[1]]
...
>>> imwrite(
...     'temp.tif',
...     tiles(data, (16, 16)),
...     tile=(16, 16),
...     shape=data.shape,
...     dtype=data.dtype,
...     photometric='rgb',
... )

Write a multi-dimensional, multi-resolution (pyramidal), multi-series OME-TIFF file with metadata. Sub-resolution images are written to SubIFDs. Limit parallel encoding to 2 threads. Write a thumbnail image as a separate image series:

>>> data = numpy.random.randint(0, 255, (8, 2, 512, 512, 3), 'uint8')
>>> subresolutions = 2
>>> pixelsize = 0.29  # micrometer
>>> with TiffWriter('temp.ome.tif', bigtiff=True) as tif:
...     metadata = {
...         'axes': 'TCYXS',
...         'SignificantBits': 8,
...         'TimeIncrement': 0.1,
...         'TimeIncrementUnit': 's',
...         'PhysicalSizeX': pixelsize,
...         'PhysicalSizeXUnit': 'µm',
...         'PhysicalSizeY': pixelsize,
...         'PhysicalSizeYUnit': 'µm',
...         'Channel': {'Name': ['Channel 1', 'Channel 2']},
...         'Plane': {'PositionX': [0.0] * 16, 'PositionXUnit': ['µm'] * 16},
...     }
...     options = dict(
...         photometric='rgb',
...         tile=(128, 128),
...         compression='jpeg',
...         resolutionunit='CENTIMETER',
...         maxworkers=2,
...     )
...     tif.write(
...         data,
...         subifds=subresolutions,
...         resolution=(1e4 / pixelsize, 1e4 / pixelsize),
...         metadata=metadata,
...         **options,
...     )
...     # write pyramid levels to the two subifds
...     # in production use resampling to generate sub-resolution images
...     for level in range(subresolutions):
...         mag = 2 ** (level + 1)
...         tif.write(
...             data[..., ::mag, ::mag, :],
...             subfiletype=1,
...             resolution=(1e4 / mag / pixelsize, 1e4 / mag / pixelsize),
...             **options,
...         )
...     # add a thumbnail image as a separate series
...     # it is recognized by QuPath as an associated image
...     thumbnail = (data[0, 0, ::8, ::8] >> 2).astype('uint8')
...     tif.write(thumbnail, metadata={'Name': 'thumbnail'})
...

Access the image levels in the pyramidal OME-TIFF file:

>>> baseimage = imread('temp.ome.tif')
>>> second_level = imread('temp.ome.tif', series=0, level=1)
>>> with TiffFile('temp.ome.tif') as tif:
...     baseimage = tif.series[0].asarray()
...     second_level = tif.series[0].levels[1].asarray()
...

Iterate over and decode single JPEG compressed tiles in the TIFF file:

>>> with TiffFile('temp.ome.tif') as tif:
...     fh = tif.filehandle
...     for page in tif.pages:
...         for index, (offset, bytecount) in enumerate(
...             zip(page.dataoffsets, page.databytecounts)
...         ):
...             _ = fh.seek(offset)
...             data = fh.read(bytecount)
...             tile, indices, shape = page.decode(
...                 data, index, jpegtables=page.jpegtables
...             )
...

Use Zarr to read parts of the tiled, pyramidal images in the TIFF file:

>>> import zarr
>>> store = imread('temp.ome.tif', aszarr=True)
>>> z = zarr.open(store, mode='r')
>>> z
<zarr.hierarchy.Group '/' read-only>
>>> z[0]  # base layer
<zarr.core.Array '/0' (8, 2, 512, 512, 3) uint8 read-only>
>>> z[0][2, 0, 128:384, 256:].shape  # read a tile from the base layer
(256, 256, 3)
>>> store.close()

Load the base layer from the Zarr store as a dask array:

>>> import dask.array
>>> store = imread('temp.ome.tif', aszarr=True)
>>> dask.array.from_zarr(store, 0)
dask.array<...shape=(8, 2, 512, 512, 3)...chunksize=(1, 1, 128, 128, 3)...
>>> store.close()

Write the Zarr store to a fsspec ReferenceFileSystem in JSON format:

>>> store = imread('temp.ome.tif', aszarr=True)
>>> store.write_fsspec('temp.ome.tif.json', url='file://')
>>> store.close()

Open the fsspec ReferenceFileSystem as a Zarr group:

>>> import fsspec
>>> import imagecodecs.numcodecs
>>> imagecodecs.numcodecs.register_codecs()
>>> mapper = fsspec.get_mapper(
...     'reference://', fo='temp.ome.tif.json', target_protocol='file'
... )
>>> z = zarr.open(mapper, mode='r')
>>> z
<zarr.hierarchy.Group '/' read-only>

Create an OME-TIFF file containing an empty, tiled image series and write to it via the Zarr interface (note: this does not work with compression):

>>> imwrite(
...     'temp.ome.tif',
...     shape=(8, 800, 600),
...     dtype='uint16',
...     photometric='minisblack',
...     tile=(128, 128),
...     metadata={'axes': 'CYX'},
... )
>>> store = imread('temp.ome.tif', mode='r+', aszarr=True)
>>> z = zarr.open(store, mode='r+')
>>> z
<zarr.core.Array (8, 800, 600) uint16>
>>> z[3, 100:200, 200:300:2] = 1024
>>> store.close()

Read images from a sequence of TIFF files as NumPy array using two I/O worker threads:

>>> imwrite('temp_C001T001.tif', numpy.random.rand(64, 64))
>>> imwrite('temp_C001T002.tif', numpy.random.rand(64, 64))
>>> image_sequence = imread(
...     ['temp_C001T001.tif', 'temp_C001T002.tif'], ioworkers=2, maxworkers=1
... )
>>> image_sequence.shape
(2, 64, 64)
>>> image_sequence.dtype
dtype('float64')

Read an image stack from a series of TIFF files with a file name pattern as NumPy or Zarr arrays:

>>> image_sequence = TiffSequence('temp_C0*.tif', pattern=r'_(C)(\d+)(T)(\d+)')
>>> image_sequence.shape
(1, 2)
>>> image_sequence.axes
'CT'
>>> data = image_sequence.asarray()
>>> data.shape
(1, 2, 64, 64)
>>> store = image_sequence.aszarr()
>>> zarr.open(store, mode='r')
<zarr.core.Array (1, 2, 64, 64) float64 read-only>
>>> image_sequence.close()

Write the Zarr store to a fsspec ReferenceFileSystem in JSON format:

>>> store = image_sequence.aszarr()
>>> store.write_fsspec('temp.json', url='file://')

Open the fsspec ReferenceFileSystem as a Zarr array:

>>> import fsspec
>>> import tifffile.numcodecs
>>> tifffile.numcodecs.register_codec()
>>> mapper = fsspec.get_mapper(
...     'reference://', fo='temp.json', target_protocol='file'
... )
>>> zarr.open(mapper, mode='r')
<zarr.core.Array (1, 2, 64, 64) float64 read-only>

Inspect the TIFF file from the command line:

$ python -m tifffile temp.ome.tif

tifffile's People

Contributors

cgohlke


tifffile's Issues

Bug in 2020.8.13

In version 2020.8.13 when reading OME-TIFF files:

with tifffile.TiffFile(filepath) as tif:
    data = tif.asarray(out="memmap")

data shape, as an example, looks like (1, 1, 47, 801, 800, 1)

where in version 2020.7.24 it was (47, 801, 800)

Writing direction for creating ome.tif?

Hello,
at the moment I am trying to read a tiff-file tile-wise, modify it and subsequently upload it.
For the tile-generator I am using the following chunk of code (with openslide):

#%% iterate over all tiles
x, y = 0, 0
count =  1
while x < x_tiles:

    while y < y_tiles:

        new_tile = np.array(tiles.get_tile(level, (x, y)), dtype=np.uint8)

        new_tile = func_handle(new_tile)
        yield new_tile 
        y += 1
        count += 1
        print('iteration #' + str(count) + ' from n=' + str(y_tiles * x_tiles))

    y = 0
    x += 1

However, the created image is somehow shifted. The tiles are not fitting to each other.
Therefore, I wonder in which direction the file-writing is done? A nested while-loop seems not to work?
Or would it be better to use your tile-reading function? If so, how could I there choose the tile-size?

With kind regards.

How to read single tiles from a file?

Hello @cgohlke and thank you for developing/maintaining this awesome package ^^.

I am trying to read a large pyramidal, multi-stack WS image in svs format (15×91775×87648×3 pixels at full resolution):

>>> with tifffile.TiffFile(fname) as f:
...   for s in f.series:
...     print(s.shape)
... 
(15, 91775, 87648, 3)
(768, 733, 3)
(15, 22943, 21912, 3)
(15, 5735, 5478, 3)
(15, 2867, 2739, 3)
(680, 653, 3)
(613, 1600, 3)

I would like to read the image by chunks, and I saw in the changelog that it is possible to read individual tiles from the image but I can't found the function or doc for this.
Can you please point me to the correct method for doing this?
Thank you.

Can't save 4 channels tiff image properly

Describe the bug
Hi,
I have a 400x800 4-channel tiff image. When I try to save it, no error is raised, but reading the saved image again results in a completely different image (I'm using Photoshop 2020 as the image editor), although the numpy arrays are equal. What am I doing wrong?
Thank you

from tifffile import tifffile
import numpy as np

tif = tifffile.imread("blue.tif") # ndarray (400, 800, 4)
tifffile.TiffWriter("blue_saved.tif").write(data=tif, dtype=np.uint8)

tif_saved = tifffile.imread("blue_saved.tif")

print(np.array_equal(tif, tif_saved))  # True

File
blue.zip

Screenshots from Photoshop

image

immagine_2020-11-17_125333

Environment

  • platform : win-64
  • conda version [4.8.2]
  • Python [3.7.9]
  • tifffile [2020.10.1]
  • numpy [1.19.1]

Exception when writing contiguous OME-TIFF

Please consider this minimal example, taken from the documentation ("Successively write the frames of one contiguous series to a TIFF file"), which works as expected:

data = numpy.random.randint(0, 255, (30, 301, 219), 'uint8')
with TiffWriter('temp.tif', bigtiff=True) as tif:
    for frame in data:
        tif.write(frame, contiguous=True)

This is the same identical example, with the only difference that I'm saving to a .ome.tif:

data = numpy.random.randint(0, 255, (30, 301, 219), 'uint8')
with TiffWriter('temp.ome.tif', bigtiff=True) as tif:
    for frame in data:
        tif.write(frame, contiguous=True)

Unfortunately this does not work and throws a ValueError: shape does not match stored shape.

NDTiffStorage example file

I am following up on the discussion in micro-manager/NDTiffStorage#10, and am providing an example file here. As mentioned there, tifffile has no issues reading image data itself, but does not preserve information on axes. The NDTiffStorage format is also not compatible with the TiffFile.imagej_metadata attribute, which limits the flexibility of this format.

Some comments on the file format are provided in micro-manager/NDTiffStorage#11. The file itself was generated using the script democam.py from the PR micro-manager/pycro-manager#76. I am not sufficiently familiar with the underpinnings of the TIFF standards, so this is merely an attempt to summarize.

PS: I am also not sure how relevant this issue is, so it's ok to close.

democam_MagellanStack.zip

imread and imwrite different

Forgive my bad English. When I call img = imread(path), I get an img.shape like (11, 1000, 2000, 2) but expect (11, 1000, 2000). And when I read with img = imread(path) and then write with imwrite(img), the result is different from the original.

How to read TIFF files from MicroManager with memory-mapping

Hi,
first of all thanks for this great Python package.

A question:
I would like to seek and read within a set of large (4.3GB) OME-TIFF files, which were stored with MicroManager 2.
Is it possible to memory-map and read these files with tifffile? I was only able to find the ability to write to memory-mapped files.

Thanks and best regards,
Michael

imwrite with partially-filled single tiles in contiguous mode creates corrupted files

If tile is set, the image shape is smaller than tile, and contiguous is True, then the number of bytes allocated and written is less than the value in TileByteCounts. Furthermore there is no way to disable contiguous mode as the contiguous argument to imwrite is unconditionally overwritten with True. Here is a minimal reproduction:

import numpy as np
import tifffile
img = np.zeros((15, 15), np.uint8)
# This vertical stripe makes the problem stand out in the image data.
img[:, 0] = 255
tifffile.imsave('tiled-15x15.tif', img, tile=(16, 16))

The output file in my case contains TileOffsets <320> and TileByteCounts <256> but the file is only 545 bytes long when it should be 320+256=576. From looking at a hex dump of the file it can be seen that only 15*15 bytes from img were written out without padding to 16*16. tifffile will actually read this back in correctly, but every other tool I tried either reports it as corrupt or reads the image data as if it were padded and ignores the buffer overrun (yielding an image with a diagonal stripe instead of a vertical one and possible garbage at the bottom).

I traced this to the "contiguous" support introduced all the way back in v0.13.1 (2017-something?) and tried to disable that by passing contiguous=False, but I think there is also a small logic bug that prevents this from working. tifffile.py:1658 (in current master) says contiguous = not compress where I think it should say contiguous = contiguous and not compress.

I'll be submitting a PR to address the contiguous problem, but I'm not sure what the use case for contiguous mode is so I'm hesitant to propose a fix for the bigger problem of corrupted output.

page decode function parameter number error

Version: 2020.09.03
Environment: Anaconda, Python 3.6, Win10

For this decode example:

>>> with TiffFile('my.tif') as tif:
...     fh = tif.filehandle
...     for page in tif.pages:
...         for index, (offset, bytecount) in enumerate(
...             zip(page.dataoffsets, page.databytecounts)
...         ):
...             fh.seek(offset)
...             data = fh.read(bytecount)
...             tile, indices, shape = page.decode(data, index,
...                                                page.jpegtables)

The last line, page.decode, gives the following error:

TypeError: decode() takes 2 positional arguments but 3 were given

I also used the Jupyter notebook help function to check which parameters are required, and it shows a signature with only two parameters:

[in] ?page.decode
[out] Signature: page.decode(data, segmentindex)

For now, I can simply omit page.jpegtables to make this function work, because it is None in my case, but this conflicts with the example, and in some cases jpegtables may be required. Can you fix it or update the documentation?

How to read photometric

How can I read the photometric interpretation of a TIFF?
At the moment I am doing:

with TiffFile(tif_file) as tif:
    tif. ... 

I just can't seem to find a way to extract the photometric string, for example minisblack.
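One way to get at this (a sketch; the file name temp_pm.tif is made up): each TiffPage exposes its PhotometricInterpretation as an enum member, whose name gives the string.

```python
import numpy as np
import tifffile

# Write a small grayscale file to have something to inspect.
tifffile.imwrite('temp_pm.tif', np.zeros((4, 4), np.uint8),
                 photometric='minisblack')

with tifffile.TiffFile('temp_pm.tif') as tif:
    photometric = tif.pages[0].photometric  # PHOTOMETRIC enum member
    name = photometric.name.lower()
    print(name)
```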

[BUG] Using "compression" option crashes "imwrite"

Great package, have been using it for a long time! I recently upgraded to 2020.7.24 (in Python 3.8.2) and now, when running something like

import tifffile
tifffile.imwrite("test.tiff", np.zeros((10, 10, 10)), shape=(10, 10, 10), dtype=np.int8, compress=True)

I get the following error:

  File "<stdin>", line 1, in <module>
  File "<string>", line 1, in <module>
  File "/home/gire/.local/lib/python3.8/site-packages/tifffile/tifffile.py", line 778, in imwrite
    return tif.save(data, shape, dtype, **kwargs)
  File "/home/gire/.local/lib/python3.8/site-packages/tifffile/tifffile.py", line 2096, in save
    page = next(dataiter).reshape(storedshape[1:])
TypeError: 'numpy.ndarray' object is not an iterator

Everything works perfectly when the shape argument is not provided, e.g., tifffile.imwrite("test.tiff", np.zeros((10, 10, 10)), dtype=np.int8, compress=True).

I understand (I might be wrong) that shape and dtype should be provided only when data=None, but I think it would be better to get a more readable error saying not to use them when data is not None, or something similar 😄

Writing correct ome.tiff files

Hi,

I am basically setting up a deconvolution pipeline (each nd2 file is >1.5TB) of my live cell imaging data.

I managed to make .ome.tiffs (metadata and tiff files) that open successfully in ImageJ; however, I still get an error while populating and reading the metadata (which increases parse time, and given the size of the files I would like to fix this).
In the last example in the code, test4.ome.tiff loads as an ImageJ hyperstack without errors, but then I can't save the ome.xml metadata into it.

  • If I drag a file into ImageJ, I only get one XY frame.
    I see this in the console:
    [> WARNING] Image ID 'Image:0': missing plane #0. Using TiffReader to determine the number of planes.
  • If I use the Bioformats plugin, the whole stack is correctly loaded and I have T, C, Z in the proper order.
    But I still get this message:

The first line is output when I call the Bioformats plugin (all other messages appear when I click OK to read it):

> [WARNING] Image ID 'Image:0': missing plane #0.  Using TiffReader to determine the number of planes.
> [WARN] Image ID 'Image:0': missing plane #0
> [WARN] Image ID 'Image:0': missing plane #1
> [WARN] Image ID 'Image:0': missing plane #2
> [WARN] Image ID 'Image:0': missing plane #3
> [WARN] Image ID 'Image:0': missing plane #4
> [WARN] Image ID 'Image:0': missing plane #5
> [WARN] Image ID 'Image:0': missing plane #6
> [WARN] Image ID 'Image:0': missing plane #7
> [WARN] Image ID 'Image:0': missing plane #8
> [WARN] Image ID 'Image:0': missing plane #9
> [WARN] Image ID 'Image:0': missing plane #10
> [WARN] Image ID 'Image:0': missing plane #11
> [WARN] Image ID 'Image:0': missing plane #12
> [WARN] Image ID 'Image:0': missing plane #13
> [WARN] Image ID 'Image:0': missing plane #14
> [WARN] Image ID 'Image:0': missing plane #15
> [WARN] Image ID 'Image:0': missing plane #16
> [WARN] Image ID 'Image:0': missing plane #17
> [WARN] Image ID 'Image:0': missing plane #18
> [WARN] Image ID 'Image:0': missing plane #19
> [WARN] Image ID 'Image:0': missing plane #20
> [WARN] Image ID 'Image:0': missing plane #21
> [WARN] Using TiffReader to determine the number of planes.
> Reading IFDs
> Populating metadata
> Checking comment style
> Populating OME metadata
> Reading IFDs
> Populating metadata
> Checking comment style
> Populating OME metadata

Here is a code snippet exemplifying different writing approaches (I need to write each YX, ZYX, or CZYX chunk one by one, as it is impossible to load the whole dataset into my RAM).

import numpy as np
import bioformats.omexml as ome
import tifffile as tf
import sys


def writeplanes(pixel, SizeT=1, SizeZ=1, SizeC=1, order='TZCYX'
            , verbose=False):

    if order == 'TZCYX':

        p.DimensionOrder = ome.DO_XYCZT
        counter = 0
        for t in range(SizeT):
            for z in range(SizeZ):
                for c in range(SizeC):

                    if verbose:
                        print('Write PlaneTable: ', t, z, c),
                        sys.stdout.flush()

                    pixel.Plane(counter).TheT = t
                    pixel.Plane(counter).TheZ = z
                    pixel.Plane(counter).TheC = c
                    counter = counter + 1

    return pixel


# Dimension TZCXY
SizeT = 1
SizeZ = 11
SizeC = 2
SizeX = 2044
SizeY = 2044
Series = 0


scalex = 0.10833
scaley = scalex
scalez = 0.3
pixeltype = 'uint16'
dimorder = 'TZCYX'
output_file = r'/tmp/stack.ome.tiff' #this does nothing in this example

# create numpy array with correct order

# Getting metadata info
omexml = ome.OMEXML()
omexml.image(Series).Name = output_file
p = omexml.image(Series).Pixels
#p.ID = 0
p.SizeX = SizeX
p.SizeY = SizeY
p.SizeC = SizeC
p.SizeT = SizeT
p.SizeZ = SizeZ
p.PhysicalSizeX = float(scalex)
p.PhysicalSizeY = float(scaley)
p.PhysicalSizeZ = float(scalez)
p.PixelType = pixeltype
p.channel_count = SizeC
p.plane_count = SizeZ * SizeT * SizeC
p = writeplanes(p, SizeT=SizeT, SizeZ=SizeZ, SizeC=SizeC, order=dimorder)

for c in range(SizeC):
    if pixeltype == 'uint8':
        p.Channel(c).SamplesPerPixel = 1
    if pixeltype == 'uint16':
        p.Channel(c).SamplesPerPixel = 2


omexml.structured_annotations.add_original_metadata(
            ome.OM_SAMPLES_PER_PIXEL, str(SizeC))

# Converting to omexml
xml = omexml.to_xml()


img5d = np.random.randn(
        SizeT, SizeZ, SizeC, SizeY, SizeX).astype(np.uint16)

# ~ write file and save OME-XML as description
tf.imwrite(r'/tmp/test1.ome.tiff', img5d, description=xml)


with tf.TiffWriter('/tmp/test2.ome.tiff'
               #, bigtiff=True
               #, imagej=True
                      ) as tif:
    for t in range(SizeT):
        for z in range(SizeZ):
            for c in range(SizeC):      
                # ~ print(img5d[t,z,c,:,:].shape)   # -> (2044, 2044)
                tif.save(img5d[t,z,c,:,:]
            #                     ,shape=res.shape

                        #,resolution= (.1083,0.1083,3)
                         , description = xml
                        , photometric='minisblack'
                        #, datetime= True
                        , metadata={'axes': 'TZCYX'
                            , 'DimensionOrder' : 'TZCYX'
                            , 'Resolution': 0.10833}
                            )

with tf.TiffWriter('/tmp/test3.ome.tiff'
               #, bigtiff=True
               #, imagej=True
                      ) as tif:
    for t in range(SizeT):
        # ~ print(img5d[t,z,c,:,:].shape)   # -> (2044, 2044)
        tif.save(img5d[t,:,:,:,:]
    #                     ,shape=res.shape

                #,resolution= (.1083,0.1083,3)
                 , description = xml
                , photometric='minisblack'
                #, datetime= True
                , metadata={'axes': 'TZCYX'
                    , 'DimensionOrder' : 'TZCYX'
                    , 'Resolution': 0.10833}
                    )

with tf.TiffWriter('/tmp/test4.ome.tiff'
               #, bigtiff=True
               , imagej=True
                      ) as tif:
    for t in range(SizeT):
        # ~ print(img5d[t,z,c,:,:].shape)   # -> (2044, 2044)
        tif.save(img5d[t,:,:,:,:]
    #                     ,shape=res.shape

                #,resolution= (.1083,0.1083,3)
                 , description = xml
                , photometric='minisblack'
                #, datetime= True
                , metadata={'axes': 'TZCYX'
                    , 'DimensionOrder' : 'TZCYX'
                    , 'Resolution': 0.10833}
                    )

imread messes up (scanimage) tif file

I have some (~85 MB) TIFF files acquired using the ScanImage microscopy software. Loading with tifffile version 2020.9.3 is just fine:

3.7.8 | packaged by conda-forge | (default, Jul 31 2020, 01:53:57) [MSC v.1916 64 bit (AMD64)]
Windows-10-10.0.19041-SP0
scikit-image version: 0.17.2
numpy version: 1.19.1
tifffile version: 2020.9.3
Image min/max: -23/2048

Loading using the latest tifffile version 2020.9.28 messes up intensities:
3.7.8 | packaged by conda-forge | (default, Jul 31 2020, 01:53:57) [MSC v.1916 64 bit (AMD64)]
Windows-10-10.0.19041-SP0
scikit-image version: 0.17.2
numpy version: 1.19.1
tifffile version: 2020.9.28
Image min/max: -32498/32662

Image dimensions are kept. The image stacks look completely fine if opened in Fiji.

Missing xml declaration from apeer-ometiff-library

I have made the maintainers of apeer-ometiff-library aware, but OME-TIFFs generated from apeer-ometiff-library.io.write_ometiff fail to have the property tifffile.TiffFile.is_ome (and subsequently tifffile.TiffFile.ome_metadata) when read by tifffile. I believe this is due to the line in the return that checks the first characters of the xml. This is because apeer-ometiff-library does not generate the xml declaration header <?xml version="1.0" encoding="UTF-8"?>. It is not clear to me if the xml declaration is required to have a valid schema, but does this check need to be so stringent here?

tifffile/tifffile/tifffile.py

Lines 5216 to 5221 in 5f9f138

def is_ome(self):
    """Page contains OME-XML in ImageDescription tag."""
    if self.index > 1 or not self.description:
        return False
    d = self.description
    return d[:13] == '<?xml version' and d[-4:] == 'OME>'

Files produced in ScanImage 2020.1 cannot be parsed.

We at Vidrio have received a ticket pertaining to parsing ScanImage TIFFs with tifffile. Just passing this on:

In the newest scanimage, metadata from the images has changed and is no longer parseable by https://pypi.org/project/tifffile/

My pipeline relies on this package. It seems the error is when the package tries to check if the metadata is a scanimage image in a not very robust way. I've fixed this for myself but this may affect other users.

def is_scanimage(self):
    """Page contains ScanImage metadata."""
    return (
        self.description[:12] == 'state.config'
        or self.software[:22] == 'SI.LINE_FORMAT_VERSION'
        or 'scanimage.SI' in self.description[-256:]
    )

Write single image series in append mode

Hi @cgohlke

Thanks for the awesome package :)

I'm trying to write chunks of a video to tiff using the append mode (append=True). When I load the tiff file I get the right number of pages, but if I try to memmap it I only get one chunk at a time. In fact, each imsave call appends a new series to the tiff file.
Is it possible to write in append mode and extend the existing series, so that in the end I can memmap the entire video?

Minimal example:

import numpy as np
import tifffile

save_path = 'test_append.tif'

# create 1000 frames, 50x50 image size
video = np.random.randn(1000, 50, 50)

# write 100 frames at a time:
len_chunk = 100
for i in range(10):
     tifffile.imsave(save_path, video[i * len_chunk:(i+1) * len_chunk], append=True)

tif = tifffile.TiffFile(save_path)
print(len(tif.pages)) # correctly prints 1000
tif.close()

video0 = tifffile.memmap(save_path, series=0)
print(video0.shape) # prints (100, 50, 50)

video1 = tifffile.memmap(save_path, series=1)
print(video1.shape) # prints (100, 50, 50)

Issue when using tifffile 2020.10.1 inside KNIME scripting node with memmap

Hi, I just tried to use tifffile 2020.10.1 inside a KNIME scripting node (Python 3.7.6) and got stuck (see below). When I try the same outside of KNIME it works fine, so it might not be a tifffile issue at all. But any hint or help is appreciated.

cannot import name 'memmap' from 'tifffile' (/home/sebi06/programs/knime/configuration/org.eclipse.osgi/578/0/.cp/py/tifffile.py)
Traceback (most recent call last):
File "<string>", line 10, in <module>
  File "/home/sebi06/miniconda3/envs/imageanalysis/lib/python3.7/site-packages/czitools-0.0.1-py3.7.egg/czitools/__init__.py", line 1, in <module>
    from .imgfileutils import get_imgtype, get_metadata, get_metadata_ometiff
  File "/home/sebi06/miniconda3/envs/imageanalysis/lib/python3.7/site-packages/czitools-0.0.1-py3.7.egg/czitools/imgfileutils.py", line 14, in <module>
    import czifile as zis
  File "/home/sebi06/miniconda3/envs/imageanalysis/lib/python3.7/site-packages/czifile/__init__.py", line 4, in <module>
    from .czifile import __doc__, __all__, __version__
  File "/home/sebi06/miniconda3/envs/imageanalysis/lib/python3.7/site-packages/czifile/czifile.py", line 208, in <module>
    from tifffile import (
ImportError: cannot import name 'memmap' from 'tifffile' (/home/sebi06/programs/knime/configuration/org.eclipse.osgi/578/0/.cp/py/tifffile.py)

Create new (big) tiff tile by tile?

Hello,
Is there an option to create a new TIFF or OME-TIFF and fill it tile-wise? The section covering creating a new image from tiles looks like it requires the entire image to be in memory.
The background is that I want to process a whole slide image and create a heatmap of the same size.
Maybe, I just have not found the option yet.
With kind regards.
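A pattern that seems to fit here (a sketch; the tile contents and file name are made up): pass a tile generator instead of an array to imwrite, together with shape, dtype, and tile, so only one tile needs to exist in memory at a time.

```python
import numpy as np
import tifffile

def gen_tiles(shape, tileshape):
    # Compute each heatmap tile on the fly instead of allocating the image.
    for y in range(0, shape[0], tileshape[0]):
        for x in range(0, shape[1], tileshape[1]):
            yield np.full(tileshape, 1, np.uint8)

tifffile.imwrite('temp_tiled.tif', gen_tiles((256, 256), (64, 64)),
                 dtype=np.uint8, shape=(256, 256), tile=(64, 64))
```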

Example saving ImageJ TIFF has 57 channels instead of 57 z slices

In your examples you give the following for saving an ImageJ TIFF of a 3D volume. This actually saves a 57 channel ImageJ 2D image instead of the intended 3D (57, 256, 256) map. You would need to use shape (57, 1, 256, 256) since ImageJ uses axis order ZCYX. Maybe there is some other way to specify 1 channel when saving ImageJ TIFF but I didn't see it in the code.

Write a volume with xyz voxel size 2.6755x2.6755x3.9474 micron^3 to an ImageJ hyperstack formatted TIFF file:

volume = numpy.random.randn(57, 256, 256).astype('float32')
imwrite('temp.tif', volume, imagej=True, resolution=(1./2.6755, 1./2.6755),
        metadata={'spacing': 3.947368, 'unit': 'um', 'axes': 'ZYX'})

Thanks for the fantastic tifffile.py! You keep the whole light microscopy field going.

Merge several z-stacks from several files into one file

Hi there, I am new here.
I have several tiff files which are named as PECCU_A23_T0001F001L01A03Z*C01.tif.
That is, there are several files like PECCU_A23_T0001F001L01A03Z01C01.tif, PECCU_A23_T0001F001L01A03Z02C01.tif, PECCU_A23_T0001F001L01A03Z03C01.tif ... till PECCU_A23_T0001F001L01A03Z81C01.tif.

What would be the best way to merge them into a single TIFF file?
Originally, I thought I could do something like this:

import numpy as np
import tifffile

stack_1 = '/nfs/turbo/umms-jzsexton/VirtualStaining/BRO0117014VirtStain-20X_20201111_163715/PECCU/PECCU_A23_T0001F001L01A03Z80C01.tif'
stack_2 = '/nfs/turbo/umms-jzsexton/VirtualStaining/BRO0117014VirtStain-20X_20201111_163715/PECCU/PECCU_A23_T0001F001L01A03Z81C01.tif'

img = tifffile.imread(stack_1)
img = np.expand_dims(img, axis=0)
img2 = tifffile.imread(stack_2)
img2 = np.expand_dims(img2, axis=0)
img_merged = np.concatenate([img, img2], axis=0)
tifffile.imwrite('./temp.tif', img_merged, photometric='minisblack')

img3 = tifffile.imread('./temp.tif')
print(img3.shape)

However, it seems to me that this will consume lots of memory. I wonder whether it is possible to write to the file incrementally instead. Thank you.
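One incremental approach (a sketch with small stand-in files; the real inputs would be the Z01..Z81 series): append each plane to the output as it is read, so only a single plane is held in memory at a time.

```python
import glob
import numpy as np
import tifffile

# Create a few small stand-in planes.
for z in range(3):
    tifffile.imwrite(f'plane_Z{z:02d}.tif', np.full((8, 8), z, np.uint8))

# Append one plane at a time to the output file.
for path in sorted(glob.glob('plane_Z*.tif')):
    tifffile.imwrite('merged.tif', tifffile.imread(path),
                     append=True, photometric='minisblack')
```

Note that each append starts a new series, so while the pages stack up, imread may not return one (81, Y, X) array; keeping a single TiffWriter open and writing each plane with contiguous=True builds one series instead.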

Clarity on geotiff_metadata

Hey,

I was just wondering if I could get some clarity on ModelPixelScale and ModelTiepoint.

    tif = TiffFile(input_file)
    print(tif.geotiff_metadata['ModelPixelScale'])
    # [5.01483146067815e-05, 5.0153046594992315e-05, 0.0]
    print(tif.geotiff_metadata['ModelTiepoint'])
    # [0.0, 0.0, 0.0, 137.9049898, -30.9165658, 0.0]

Now I assume ModelPixelScale returns [x, y, z], but using GDAL's GetGeoTransform() the y should be negative, i.e. -5.0153046594992315e-05. That's no big deal though; I just thought I'd mention it for consistency.

I'm not sure about the order/contents of ModelTiepoint. Any ideas? Something like: [?, ?, ?, lat, lon, ?]

Save uint16 array as 12-bit tif image

I would like to save a np.uint16 array as a 12-bit precision TIFF image, because the camera has only 12 bits and I don't want to waste disk space. Is this possible? If yes, how?

Thank you

aszarr feature not supported?

From one of the examples in the README:

>>> import zarr
>>> store = imread('temp.ome.tif', aszarr=True)
>>> z = zarr.open(store, mode='r')
>>> z
<zarr.hierarchy.Group '/' read-only>
>>> z[0]  # base layer
<zarr.core.Array '/0' (1024, 1024, 3) uint8 read-only>
>>> store.close()

got:

Traceback (most recent call last):
  File "/home/kipling/Programs/fsm-python/big_tiff.py", line 66, in <module>
    store = imread(mytif_file, aszarr=True)
  File "/home/kipling/.local/lib/python3.8/site-packages/tifffile/tifffile.py", line 716, in imread
    return tif.asarray(**kwargs)
TypeError: asarray() got an unexpected keyword argument 'aszarr'

Is this feature not supported anymore or is there something that I'm missing?

Write a geotiff

Hi,

First, thank you for this fantastic Python Library which is very useful to me.

I am trying to write a GeoTIFF with the same geo-metadata as another file.
Here is my code, which duplicates the tags of a GeoTIFF file and creates another one:

from tifffile import TiffFile, imwrite

path='geofile.tiff'
new_path='new_geofile.tiff'

tiff = TiffFile(path)

extratags = []

# for each tag, append to extratags
for _, tag in tiff.pages[0].tags.items():
    extratags.append((tag.code, tag.dtype, tag.count, tag.value, False))

# create the new geotiff
imwrite(new_path, data=tiff.asarray(), extratags=extratags)

Unfortunately, the result is unexpected: the returned map is partly shifted, as illustrated below.

shift

Is my method of copying the geo-metadata wrong? Is there a better way to do it?

Thanks !

Save file in CCITT tif format issues

Hello,

how can I save file in CCITT format?
I tried to use following constants: CCITTRLE, CCITTFAX3, CCITT_T4, CCITTFAX4, CCITT_T6

This is in the statement:

    tifffile.imwrite('tifffilename.tif', tif_data, compression="<one of the CCITTRLE, CCITTFAX3, CCITT_T4, CCITTFAX4, CCITT_T6>")

I was not able to get anything to work.
The imagecodecs library is installed.

Thank you

micromanager_metadata['Time']

Hello,

For an image file created by micromanager, should the time that the image was created be in the metadata somewhere?

I recently updated tifffile from an old version and I thought previously it was something like (although maybe I'm mistaken):

with tifffile.TiffFile(tifpath) as tif:
    time = tif.micromanager_metadata['Time']

Many thanks in advance for your help,
Tom

Edit tags

Hi!

Your library is awesome. I have a question. How can I edit tiff tags in a multi-page tiff file?

Thanks!

Support large NDPI files ( > 4GB)

When I try to read a large NDPI file, I get only the first page and the rest are truncated. I understand the problem and the fact that NDPI is based on the classic TIFF format, which does not support files larger than 4 GB. However, large NDPI files are out there. Given that some libraries (i.e. OpenSlide) are able to read them, and that there are "To-Do" comments in your code pointing to it, I wanted to ask what your plan is for supporting this.

Please have a look here on how openslide addressed the issue last year.

Get error when tile generator in different tile shape

Hi,
Below is the tile generator sample code from the README. It works when the image shape is a multiple of the tile shape, but when the shape is not divisible by the tile shape, it raises an error. I need your help dealing with this case. Thanks.

def tiles():
    data = numpy.arange(3*4*16*16, dtype='uint16').reshape((3*4, 16, 16))
    for i in range(data.shape[0]):
        yield data[i]

imwrite('temp.tif', tiles(), dtype='uint16', shape=(48, 64), tile=(16, 16))

-------Not divisible.------------

def tiles(data, tileshape):
    for y in range(0, 50, tileshape[0]):
        for x in range(0, 64, tileshape[1]):
            y2 = min(y + tileshape[0], 50)
            x2 = min(x + tileshape[1], 64)
            yield (data[:y2-y, :x2-x]).astype(np.uint8)
imwrite('temp.tif', tiles(data, (16,16)), dtype='uint16', shape=(50, 64), tile=(16, 16))

ValueError: invalid tile shape or dtype
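A workaround that seems to satisfy the writer (a sketch; names are made up): pad the edge tiles to the full tile shape. TIFF stores complete tiles anyway, and readers crop them to the declared image shape.

```python
import numpy as np
import tifffile

data = np.arange(50 * 64, dtype=np.uint16).reshape(50, 64)

def padded_tiles(data, tileshape):
    # Zero-pad partial edge tiles up to the full tile shape.
    for y in range(0, data.shape[0], tileshape[0]):
        for x in range(0, data.shape[1], tileshape[1]):
            tile = np.zeros(tileshape, data.dtype)
            chunk = data[y:y + tileshape[0], x:x + tileshape[1]]
            tile[:chunk.shape[0], :chunk.shape[1]] = chunk
            yield tile

tifffile.imwrite('temp_pad.tif', padded_tiles(data, (16, 16)),
                 dtype=np.uint16, shape=(50, 64), tile=(16, 16))
```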

How to write GPSTags via tifffile.TiffWriter.write()?

Dear Mr Gohlke,

first of all, thanks for providing and maintaining this very helpful tool.
I attempt to write a bunch of individual TIFF files into a multipage TIFF using tifffile.TiffWriter.write() as shown below. this works well. However, I can not figure out how to "transplant" the meta tags from the individual TIFF files into the according page in the output TIFF stack. You can see below that I read the meta tags and then attempt to manually reassemble it in the "extratags" argument. This does not yield the desired output.

Do you know, how I could copy the meta tags of the individual TIFF (ideally all, not only the GPSTags) into the output TIFF stack most easily and flexibly?

Your help would be gladly appreciated!

image_files = sorted(glob.glob("radiometric/*"))

with tifffile.TiffWriter("out.tif", bigtiff=True) as tif:
    for i, image_file in enumerate(image_files):
        if i == 10:
            break

        # get GPS meta tag
        with tifffile.TiffFile(image_file) as file:
            gps_info = file.pages[0].tags["GPSTag"].value

        meta = {k: v for k, v in gps_info.items() if k != 'GPSVersionID'}
        print(meta)

        img = tifffile.imread(image_file)
        tif.write(img, compression="ZSTD", extratags=[
            (34853, 's', 1, 'N', False),
            (34853 + 1, 's', 2, gps_info['GPSLatitudeRef'], False),
            (34853 + 2, '2i', 3, gps_info['GPSLatitude'], False),
        ])

EDIT:

These are the tags that end up in the output TIFF stack.

read_tags: corrupted tag list at offset 78
read_tags: corrupted tag list at offset 78
TiffTag 256 ImageWidth @848268 1I @848280 640
TiffTag 257 ImageLength @848288 1I @848300 512
TiffTag 258 BitsPerSample @848308 1H @848320 16
TiffTag 259 Compression @848328 1H @848340 ZSTD
TiffTag 262 PhotometricInterpretation @848348 1H @848360 MINISBLACK
TiffTag 270 ImageDescription @848368 22s @848616 {"shape": [512, 640]}
TiffTag 273 StripOffsets @848388 11Q @848702 (848880, 888675, 931301, 975444,
TiffTag 277 SamplesPerPixel @848408 1H @848420 1
TiffTag 278 RowsPerStrip @848428 1I @848440 51
TiffTag 279 StripByteCounts @848448 11I @848790 (39795, 42626, 44143, 42165, 4
TiffTag 282 XResolution @848468 2I @848480 (1, 1)
TiffTag 283 YResolution @848488 2I @848500 (1, 1)
TiffTag 296 ResolutionUnit @848508 1H @848520 NONE
TiffTag 305 Software @848528 12s @848834 tifffile.py
TiffTag 34853 GPSTag|OlympusSIS2 @848548 2s @78 ()
TiffTag 34854 @848568 2s @848580 N
TiffTag 34855 ISOSpeedRatings @848588 6i @848846 (49, 1, 36, 1, 20727, 500)

UnicodeEncodeError when using tifffile

Hi,

I cannot figure out why I run into this problem. Any hints on what goes wrong?

tifffile: 2020.12.8
Python 3.7.8 | packaged by conda-forge | (default, Nov 27 2020, 18:48:03) [MSC v.1916 64 bit (AMD64)] on win32

Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\envs\cellpose\lib\site-packages\tifffile\tifffile.py", line 7332, in overwrite
    value = value.encode('ascii')
UnicodeEncodeError: 'ascii' codec can't encode character '\xb5' in position 555: ordinal not in range(128)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "c:/Users/m1srh/Documents/GitHub/czi_demos/test_nuclei_segmentation.py", line 426, in <module>
  File "c:/Users/m1srh/Documents/GitHub/czi_demos/test_nuclei_segmentation.py", line 426, in <module>
    img.close()
  File "C:\ProgramData\Anaconda3\envs\cellpose\lib\site-packages\tifffile\tifffile.py", line 2771, in __exit__
    self.close()
  File "C:\ProgramData\Anaconda3\envs\cellpose\lib\site-packages\tifffile\tifffile.py", line 2764, in close
    self._write_image_description()
  File "C:\ProgramData\Anaconda3\envs\cellpose\lib\site-packages\tifffile\tifffile.py", line 2753, in _write_image_description
    self._descriptiontag.overwrite(self, description, erase=False)
  File "C:\ProgramData\Anaconda3\envs\cellpose\lib\site-packages\tifffile\tifffile.py", line 7336, in overwrite
    ) from exc
ValueError: TIFF strings must be 7-bit ASCII

TiffFile.save() cannot write ExifTag or GPSTag metadata

As part of my work, I am attempting to open a single-page .TIF, perform some calculations based on the contents of the image array, modify the array accordingly, and then save the array back to a .TIF. However, I need to do this in a way that preserves all valid metadata from the original image (valid meaning that tags such as DateTime would be ignored).

This is easy enough with standard tags -- simply extract them from the original TiffTags object, store them in a list of (code, dtype, count, value) tuples, and pass them to the extratags argument of save(). However, this approach fails when attempting to write both ExifTag and GPSTag IFD groups.

Is there a relatively simple method, with the current library, to bypass this limitation? I thought I might be able to recursively traverse the TiffTags dictionary, pull the individual tags out of their groups, and append them on to the full extratags list, but then they no longer belong to an IFD group.

I am not sure if this is the best place to pose this issue, so please feel free to close if so.

NotImplementedError: lzw_encode

Hi there,
I'm trying to write a multi page tiff file, like this

with tifffile.TiffWriter("nyname.tif", append=True) as P: P.write(**this_page_dict)

where the dict is:

{
'extratags': [(274, 3, 1, 1), (339, 3, 1, 1), (285, 2, 25, b'myTitle')], 
'bitspersample': 16,
'compression': 5, 
'photometric': 1, 
'rowsperstrip': 8, 
'planarconfig': 1, 
'resolution': (1677721600, 16777216), 
'data': array(..., dtype=uint16), 
'dtype': dtype('uint16')
}

It's giving me a NotImplementedError, since I'm using compression=5 (LZW).

Traceback (most recent call last):
File "...filename.py", line 122, in
P.write(**this_page_dict)
File "...\venv\lib\site-packages\tifffile\tifffile.py", line 2342, in write
strip = compress(strip)
File "imagecodecs_imcd.pyx", line 856, in imagecodecs._imcd.lzw_encode
NotImplementedError: lzw_encode

Any pointers as to why, and maybe how to solve or work around this?

Bug with user submitted descriptions

Hi there,

I have a problem with the TiffWriter.write() function.
I'm providing the write function a custom description (tag 270).
This results in the tags list having two descriptions (two items with code 270): one with the description I want and one with the shape+metadata (not provided by me).

Since two tags with code 270 exist, both are removed by this:
if pageindex == 0: tags = [tag for tag in tags if not tag[-1]]

, and we end up with no description among tags.

I don't think this is the intended behaviour of the function, right?

I temporarily solved this problem by commenting out all the code from "write shape and metadata to ImageDescription" (line 1838) to "del description" (line 1895).

[question] why does tag 282 (XResolution) yield a 2-tuple?

When I read tiff tag 282 with PIL, I get a scalar:

    resX = pil_img.tag.get(282) 
    if resX not in [None, 0]:
        # from px / unit to unit / px
        self.x_resolution = 1. / resX

With tifffile, when reading page 0 of a NDPI file, I get a 2-tuple:

    tags = tif.pages[0].tags
    resX_tag = tags.get(282)
    if resX_tag is not None:
        resX = resX_tag.value
        if hasattr(resX, "__len__"):
            resX = resX[0]
        # from px / unit to unit / px
        self.x_resolution = 1. / resX if resX else 0.

Can you explain the meaning of the two values? In the case of my NDPI file, only the first value seems meaningful; the second one is 1.
But in the examples in the README, there seem to be two meaningful values:

>>> with TiffFile('temp.tif') as tif:
...     tag = tif.pages[0].tags['XResolution']
>>> tag.value
(2000, 5351)

This page seems to indicate that we should expect a single value from this tag: https://www.awaresystems.be/imaging/tiff/tifftags/xresolution.html

Why tifffile package only load 5 series rather than 9 series as openslide does?

Hello there,
When I load an NDPI image with the OpenSlide package, it shows there are 9 different image dimensions, but when I try to load the same image with the tifffile package, it only loads series 0, 2, 4, 6, 8 (not 1, 3, 5, 7). Why is that? How can I solve this problem?

As you can see in the picture above, Image_dimension has 9 levels from 40x to 0.125x, but when I open the same image with the code below, it only loads the even series, not the odd ones.
Code:
with tifffile.imread(input_1, aszarr=True) as store:
    group = zarr.open(store, mode='a')
    assert isinstance(group, zarr.Group)
    for r in group.keys():
        stack = group[r]
        assert stack.ndim == 4  # ZYXC
        for z in range(stack.shape[0]):
            zstack = z
And one more question: how can I process the input in tiles, so that I don't run into memory problems?
Thank you

Character encoding

Hi!

I am working with images from microscopes, and I noticed that biology software does not always follow the TIFF specification (ASCII encoding) strictly. With ASCII encoding forced in the output image, I now lose some valuable data. It would be great if the output encoding could be changed by the developer.

Please, consider another parameter to the save function.

Faster method to find number of images in file?

Hello,

I have lots of large tiff files and some have had problems during copying so that their metadata (e.g. fluoview_metadata['Dimensions']) doesn't match how many images are actually stored in the file. What is the fastest way to check how many images are in the file? I'm currently doing:

with tifffile.TiffFile('test.tif') as tif:
    t = len(tif.pages)

but it seems like it has to do a lot of work that way. Is there a better way?

Many thanks in advance,
Tom
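One option, assuming classic (non-Big) TIFF files, is to walk the IFD offset chain directly: each page header stores its tag count followed by the offset of the next IFD, so the page count can be found by reading only a few bytes per page instead of parsing every tag. This is a rough sketch of the file format, not a tifffile API, and it ignores BigTIFF and NDPI 64-bit offset quirks:

```python
import struct

def count_ifds(filename):
    """Count pages in a classic TIFF by walking the IFD offset chain."""
    with open(filename, 'rb') as fh:
        header = fh.read(8)
        byteorder = {b'II': '<', b'MM': '>'}[header[:2]]
        offset = struct.unpack(byteorder + 'I', header[4:8])[0]
        count = 0
        while offset:
            count += 1
            fh.seek(offset)
            # Each IFD starts with a 2-byte tag count ...
            ntags = struct.unpack(byteorder + 'H', fh.read(2))[0]
            # ... followed by ntags 12-byte entries and a 4-byte next-IFD offset.
            fh.seek(offset + 2 + ntags * 12)
            offset = struct.unpack(byteorder + 'I', fh.read(4))[0]
    return count
```

Note that if the copy was truncated, a next-IFD offset may point past the end of the file, so a robust version should also check offsets against the file size.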

Reading from .tiff file modifies original content

Running into an issue where I read from a saved TIFF file, and at some point (say after 2 or 3 reads) I notice that the contents of the file are no longer correct. E.g. I calculate the mean temperature of the image and it is about 90 degrees throughout; after running the same analysis code again, the temperature changes to something like 60 degrees.

Here is the pseudocode I am using to read the TIFF file:

tiffFileHandle = tifffile.TiffFile(filename)
numTiffPages = len(tiffFileHandle.pages)

for i in range(numTiffPages):
    img = tiffFileHandle.pages[i].asarray()
    # process img

Any idea why this happens? I confirmed this issue by making a copy of my original data and comparing results on the backup. This indeed shows an unexpected temperature difference.

reading tiff stacks

I am using tifffile as a plugin in the scikit-image io module, and for some reason when I read TIFF stacks, it only reads the first layer. Not sure what the issue is, as it works with other plugins like PIL. Ex.

from skimage import io
im = io.imread('an_image.tif')

After writing an image with TiffWriter.write, the output is black.

Hi there,
I saw you commented there (python-pillow/Pillow#5012); I was trying to post this question there, but unfortunately that issue was closed as a duplicate.
I went through the tifffile package once again and found that I can use the TiffWriter.write function to write a TIFF file with JPEG or JPEG 2000 compression.

  1. I tried to compress an image of shape (9152, 46624) with size 426.7 MB.
    1.1 First I ran this code with JPEG compression and got a nice output of size 105.6 MB:

    img = tifffile.imread(input_C_10)
    with TiffWriter(str(output_C_10) + 'compressed_100.tif') as tif:
        tif.write(img, compression='jpeg')

    1.2 Then I ran this code with JPEG 2000 compression and got an output of 220 MB, but the image is completely black:

    img = tifffile.imread(input_C_10)
    with TiffWriter(str(output_C_10) + 'compressed_100.tif') as tif:
        tif.write(img, compression='jpeg2000')

    As you can see here, it is totally black.
    Screenshot from 2020-10-27 17-28-53

Secondly,
2. I tried to compress an image of shape (36608, 186496) with size 6.83 GB.
2.1 First with JPEG compression, I got the following error:
Traceback (most recent call last):
File "/home/yuvi/PycharmProjects/task-1/VIdeo_frame_extraction/NDPI-ext/TEMP-1-COMPRESSION.py", line 20, in
tif.write(img, compression='jpeg')
File "/home/yuvi/anaconda3/envs/task-1/lib/python3.7/site-packages/tifffile/tifffile.py", line 2340, in write
strip = compress(strip)
File "/home/yuvi/anaconda3/envs/task-1/lib/python3.7/site-packages/imagecodecs/imagecodecs.py", line 829, in jpeg_encode
optimize=optimize, smoothing=smoothing, out=out)
File "imagecodecs/_jpeg8.pyx", line 211, in imagecodecs._jpeg8.jpeg8_encode
imagecodecs._jpeg8.Jpeg8Error: Maximum supported image dimension is 65500 pixels

Process finished with exit code 1

2.2 Now I am trying it with JPEG 2000, using the same code as above; it is writing, but I don't know the output yet (I guess the size will probably be more than 3 GB).
    Code:

    img = tifffile.imread(input_C_10)
    with TiffWriter(str(output_C_10) + 'compressed_10000.tif') as tif:
        tif.write(img, compression='jpeg2000')

I want to ask: is this the correct way to compress and write such an image?
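Regarding the 65500-pixel error above: one approach, using the documented tile parameter of TiffWriter.write, is to write the image tiled, so that JPEG compresses each tile independently and the dimension limit applies per tile rather than to the whole image. A minimal sketch with a small stand-in array (requires imagecodecs for JPEG):

```python
import numpy
import tifffile

img = numpy.zeros((4096, 4096), dtype='uint8')  # stand-in for the large image
with tifffile.TiffWriter('compressed_tiled.tif') as tif:
    # Each 512x512 tile is JPEG-compressed on its own, so the
    # 65500-pixel JPEG limit applies per tile, not to the full image.
    tif.write(img, tile=(512, 512), compression='jpeg')
```

Tiled output is also what makes region-of-interest reading (e.g. via the Zarr interface) efficient later.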

Writing compressed 16-bit images fails in version 2020.8.25

The following code reads a 1024×1024 NumPy array of unsigned 16-bit integers from a file (example attached) and saves it as a TIFF image using zlib compression.

from numpy import load
from tifffile import imwrite

image = load('image.npy')
imwrite('image.tif', image, compress=6)

This works as expected with TiffFile up until version 2020.8.13. As of version 2020.8.25, released a week ago, it fails with the following error trace-back:

$ python test_imwrite.py
Traceback (most recent call last):
  File "test_imwrite.py", line 5, in <module>
    imwrite('image.tif', image, compress=6)
  File "C:\programs\Python\lib\site-packages\tifffile\tifffile.py", line 785, in imwrite
    return tif.save(data, shape, dtype, **kwargs)
  File "C:\programs\Python\lib\site-packages\tifffile\tifffile.py", line 2151, in save
    strip = compress(strip)
  File "C:\programs\Python\lib\site-packages\tifffile\tifffile.py", line 2015, in compress
    return compressor(data, level)
  File "C:\programs\Python\lib\site-packages\tifffile\tifffile.py", line 12176, in zlib_encode
    return zlib.compress(data, level)
ValueError: ndarray is not C-contiguous

The error occurs regardless of compression level (6 in the above code). It does not occur if compress is not specified, i.e. saving uncompressed images still works.

image.npy.zip
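Until the regression is fixed, one workaround that should sidestep the error is to pass a C-contiguous copy of the array, since the trace-back shows zlib.compress requires a contiguous buffer. A sketch (the non-contiguous view stands in for the attached array; `compress=6` is the zlib-level argument of the versions discussed above and is omitted here so the snippet runs on later versions too):

```python
import numpy
from tifffile import imwrite

# A transposed view is not C-contiguous and triggers the 2020.8.25 error
# when compressed; numpy.ascontiguousarray makes a C-ordered copy.
image = numpy.zeros((1024, 1024), dtype='uint16').T
imwrite('image.tif', numpy.ascontiguousarray(image))
```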

Saved ImageJ test image produces a tif that crashes tifffile.imread()

Steps to reproduce:
Open ImageJ 1.52v
Run this ImageJ macro:

run("HeLa Cells (1.3M, 48-bit RGB)");
//run("Channels Tool...");
Stack.setDisplayMode("grayscale");
saveAs("Tiff", "C:/Users/Admin/Desktop/tifffile_debugging/hela-cells.tif");

(If you omit the line Stack.setDisplayMode("grayscale");, the tif opens as expected)

Now run the following python code:

import tifffile
print(tifffile.__version__)
data = tifffile.imread('hela-cells.tif')

Which yields the following output:

2020.5.7
Traceback (most recent call last):
  File "C:\Users\Admin\Desktop\tifffile_debugging\test.py", line 12, in <module>
    data = tifffile.imread('hela-cells.tif')
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python38\lib\site-packages\tifffile\tifffile.py", line 589, in imread
    return tif.asarray(**kwargs)
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python38\lib\site-packages\tifffile\tifffile.py", line 2306, in asarray
    result = self.filehandle.read_array(
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python38\lib\site-packages\tifffile\tifffile.py", line 6402, in read_array
    raise ValueError(f'failed to read {size} bytes')
ValueError: failed to read 6193152 bytes
>>> 

Before this line, shape is [3]:

shape.extend(page.shape)

After this line, shape is [3, 512, 672, 3], which doesn't match the file size.

Thanks for making tifffile, it's a fantastic tool that I've always been grateful for!

[feature] let imread accept pathlib.Path filenames

Hi,

First, thanks so much for all the great work put into tifffile!

Currently, imread does not accept pathlib.Path objects, i.e. the following fails:

import numpy as np 
import tempfile
from pathlib import Path
import tifffile

with tempfile.NamedTemporaryFile(suffix='.tif') as tmp:
    # save random data 
    tifffile.imsave(tmp, np.random.randint(0,255,(100,100)))

    fpath = Path(tmp.name)

    # works 
    tifffile.imread(str(fpath))

    # does not work
    tifffile.imread(fpath)

I suspect this should be easy to fix, as removing this line already fixes the issue (FileSequence can already handle Path objects, as indicated here).
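One way such support is commonly added (a sketch, not tifffile's actual implementation) is to normalize path-like inputs with os.fspath, which converts pathlib.Path and anything else implementing `__fspath__` to str while leaving plain strings untouched:

```python
import os

def normalize_filename(name):
    """Convert os.PathLike objects to str; pass other inputs through."""
    try:
        # os.fspath accepts str, bytes, and os.PathLike.
        return os.fspath(name)
    except TypeError:
        # Not path-like (e.g. an open file handle): leave unchanged.
        return name
```

Applying this once at the top of imread/imsave would make every downstream string operation safe.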

CI test errors on Big Endian systems

Hi,

I am maintaining the Debian package of tifffile, which is also used in Ubuntu. The Ubuntu distribution runs the CI tests on all their supported platforms and found a regression on the s390x platform, which is a 64-bit big-endian one:

=================================== FAILURES ===================================
____________________________ test_issue_valueoffset ____________________________

    def test_issue_valueoffset():
        """Test read TiffTag.valueoffsets."""
        unpack = struct.unpack
        data = random_data('uint16', (2, 19, 31))
        software = 'test_tifffile'
        with TempFileName('valueoffset') as fname:
            imwrite(fname, data, software=software, photometric='minisblack')
            with TiffFile(fname, _useframes=True) as tif:
                with open(fname, 'rb') as fh:
                    page = tif.pages[0]
                    # inline value
                    fh.seek(page.tags['ImageLength'].valueoffset)
>                   assert page.imagelength == unpack('H', fh.read(2))[0]
E                   assert 19 == 0
E                     -19
E                     +0

test_tifffile.py:638: AssertionError
____________________________ test_func_memmap_fail _____________________________

    def test_func_memmap_fail():
        """Test non-native byteorder can not be memory mapped."""
        with TempFileName('memmap_fail') as fname:
            with pytest.raises(ValueError):
>               memmap(fname, shape=(16, 16), dtype='float32', byteorder='>')
E               Failed: DID NOT RAISE <class 'ValueError'>

test_tifffile.py:1000: Failed
_________________________ test_func_byteorder_isnative _________________________

    def test_func_byteorder_isnative():
        """Test byteorder_isnative function."""
>       assert not byteorder_isnative('>')
E       AssertionError: assert not True
E        +  where True = byteorder_isnative('>')

test_tifffile.py:1018: AssertionError
_____________________________ test_func_unpack_rgb _____________________________

    def test_func_unpack_rgb():
        """Test unpack_rgb function."""
        data = struct.pack('BBBB', 0x21, 0x08, 0xFF, 0xFF)
>       assert_array_equal(unpack_rgb(data, '<B', (5, 6, 5), False),
                           [1, 1, 1, 31, 63, 31])
E       AssertionError: 
E       Arrays are not equal
E       
E       Mismatch: 50%
E       Max absolute difference: 7
E       Max relative difference: 7.
E        x: array([ 4,  8,  8, 31, 63, 31], dtype=uint8)
E        y: array([ 1,  1,  1, 31, 63, 31])

test_tifffile.py:1185: AssertionError
======= 4 failed, 1609 passed, 268 skipped, 30 xfailed in 35.44 seconds ========

The test was done with version 2020.06.03, 2020.05.11 was (probably) working.
Full test log here.
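For context, these failures look consistent with the tests assuming a little-endian host: on s390x, '>' is the native byte order, so byteorder_isnative('>') is correctly True there and the hard-coded assertions fail. A sketch of what such a check amounts to (assumed, not the library's exact code):

```python
import sys

def byteorder_isnative(byteorder):
    """Return whether a struct-style byte-order character is native."""
    if byteorder in ('=', '|'):
        return True
    # sys.byteorder is 'little' on x86/ARM, 'big' on s390x.
    return {'<': 'little', '>': 'big'}[byteorder] == sys.byteorder
```

With this in mind, the inline-value, memmap, and unpack_rgb failures above all follow from the same root cause: test fixtures written with little-endian byte layouts baked in.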

Preserving singlet dimensions when loading ImageJ hyperstack tifs

Noticed a change in behavior between 2019.5.30 -> 2019.6.18 and wanted to make sure it was intended. Hopefully I'm just missing a new way of preserving dimensions of size 1 when loading ImageJ hyperstack tifs.

# works on tifffile==2019.5.30
# assert fails on tifffile==2019.6.18
import numpy as np
import tifffile
a = np.zeros((10, 5, 1, 256, 256), dtype='uint16')
md = {'frames': 10, 'slices': 5, 'channels': 1}
tifffile.imsave('test.tif', a, imagej=True, metadata=md)
b = tifffile.imread('test.tif')
assert b.shape == a.shape, f'Loaded array has shape {b.shape}, not {a.shape}'

In the new version, it appears that dimensions of size 1 are not included in the final stack shape, even if they are specified in the imagej metadata:

tifffile/tifffile/tifffile.py

Lines 2630 to 2648 in 6991c4b

images = ij.get('images', len(pages))
frames = ij.get('frames', 1)
slices = ij.get('slices', 1)
channels = ij.get('channels', 1)
mode = ij.get('mode', None)
hyperstack = ij.get('hyperstack', False)
shape = []
axes = []
if frames > 1:
    shape.append(frames)
    axes.append('T')
if slices > 1:
    shape.append(slices)
    axes.append('Z')
if channels > 1 and (page.photometric != 2 or mode != 'composite'):
    shape.append(channels)
    axes.append('C')

In the older version, the channel dim would be included if it was specified in the metadata:

tifffile/tifffile/tifffile.py

Lines 2526 to 2529 in ffbdf61

if 'channels' in ij and not (
        page.photometric == 2 and not ij.get('hyperstack', False)):
    shape.append(ij['channels'])
    axes.append('C')
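As a workaround until the behavior is settled, the singleton axes can be restored after loading, since the caller knows the full shape from the metadata it wrote. A sketch based on the example above (using imwrite, the current name of imsave):

```python
import numpy
import tifffile

a = numpy.zeros((10, 5, 1, 256, 256), dtype='uint16')  # TZCYX
tifffile.imwrite('test.tif', a, imagej=True)
b = tifffile.imread('test.tif')
# Newer versions drop size-1 axes on read; reshaping restores them
# and is a no-op on versions that kept them.
b = b.reshape(a.shape)
assert b.shape == a.shape
```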

Depends on imagecodecs-lite on non x86 architectures

There are people who would like to use tifffile on non-x86 architecture processors.
Czaki/imagecodecs#6 (comment)
scikit-image/scikit-image#4705 (comment)

I tried a quick build of an aarch64 wheel, but it fails without a proper message, and I do not have access to arm64 hardware to check it locally:

https://travis-ci.com/github/Czaki/imagecodecs/builds/

So maybe the best solution is to use imagecodecs-lite on other architectures?

(I'm not sure why these issues land in my repository, not in yours.)

opening .SCN file returns compressed image

I am attempting to open .scn files using tifffile in python 3.7.7

I have installed:
tifffile 2020.6.3
imagecodecs 2020.5.30
numpy 1.16.6

When I read the .scn file, it appears to have been automatically downscaled to some degree. When this file is opened in ImageScope, it correctly shows the native image shape as (31090, 15884, 3), but tifffile reads it as (12776, 5689, 3). The .scn file is in a pyramidal format, but none of the pyramid levels correspond to the shape that tifffile read.

Code:
import tifffile
file = '/home/~~~~~~.scn'
image = tifffile.imread(file)
image.shape

Is there a way to force tifffile to read the image at native resolution?
