
npy2bdv's People

Contributors

abred, miketaormina, nvladimus, pr4deepr


npy2bdv's Issues

Example of writing tiles with overlap request

I wondered if you could provide an example of overlapping tiles for h5 conversion?

I can see you have channels, angles and so forth. What if you had some overlapping tiles that needed correlation in BigStitcher? I can see you can add the views, and the code has tiles, but there really aren't any examples of this. I am not sure if the tile should be a tuple (though it doesn't look like it) or an integer index of where it goes. Being able to convert to the BigStitcher format with some of the overlap information already in place, at the writing speed you get here, could be a big help. Something along the lines of the sketch below is what I have in mind.
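Here is roughly what I'm picturing, as a hedged sketch rather than working code: two tiles with 10% overlap along x, distinguished by the integer tile index and placed by a per-view translation. The ntiles, m_affine and name_affine arguments are my assumptions about the interface, not something I found documented; write_xml_file() is the call mentioned elsewhere in these issues.

import numpy as np
import npy2bdv

nz, ny, nx = 64, 512, 512
overlap = int(0.1 * nx)                      # 10% overlap along x
tile0 = np.zeros((nz, ny, nx), dtype='uint16')
tile1 = np.zeros((nz, ny, nx), dtype='uint16')

# translation that places tile 1 next to tile 0, minus the overlap (pixel units)
shift_x = nx - overlap
affine_tile1 = np.array([[1., 0., 0., shift_x],
                         [0., 1., 0., 0.],
                         [0., 0., 1., 0.]])

bdv_writer = npy2bdv.BdvWriter("tiles.h5", nchannels=1, ntiles=2)  # ntiles: assumed parameter
bdv_writer.append_view(tile0, channel=0, tile=0)
bdv_writer.append_view(tile1, channel=0, tile=1,
                       m_affine=affine_tile1,             # assumed parameter for a per-view affine
                       name_affine='tile 1 translation')  # assumed parameter
bdv_writer.write_xml_file()
bdv_writer.close()

BigStitcher could then refine the overlap by cross-correlation, starting from these approximate translations.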

shape broadcast error with odd shaped image arrays

Hi
Thanks for making this library.
I'm writing single z-planes using the append_plane function, and I've been running into issues with image shapes that are odd.

If my array has an odd shape and I try subsampling, I get a shape broadcast error.

For example, here is a minimal example:

import numpy as np
import npy2bdv

save_path = "test.h5"

# initial image: the odd XY dimensions (1001) are what trigger the error
img = np.zeros((147, 1001, 1001))
nz, ny, nx = img.shape

bdv_writer = npy2bdv.BdvWriter(save_path, nchannels=1,
                               subsamp=((1, 2, 2), (1, 8, 8), (1, 16, 16)),
                               blockdim=((8, 32, 32), (8, 32, 32), (8, 32, 32)),
                               compression='gzip', overwrite=True)

bdv_writer.append_view(stack=None, virtual_stack_dim=(nz, ny, nx), channel=0)
bdv_writer.append_plane(plane=img[0], channel=0, z=0)

I get the error:

\npy2bdv\npy2bdv.py:409, in BdvWriter.append_plane(self, plane, z, time, illumination, channel, tile, angle)
    407 print(plane.shape)
    408 print(self.subsamp[ilevel])
--> 409 dataset[z, :, :] = self._subsample_plane(plane, self.subsamp[ilevel]).astype('int16')

File h5py\_objects.pyx:54, in h5py._objects.with_phil.wrapper()

File h5py\_objects.pyx:55, in h5py._objects.with_phil.wrapper()

File ~\AppData\Roaming\Python\Python39\site-packages\h5py\_hl\dataset.py:997, in Dataset.__setitem__(self, args, val)
    994     mshape = val.shape
    996 # Perform the write, with broadcasting
--> 997 mspace = h5s.create_simple(selection.expand_shape(mshape))
    998 for fspace in selection.broadcast(mshape):
    999     self.id.write(mspace, fspace, val, mtype, dxpl=self._dxpl)
...
    267     # All dimensions from target_shape should either have been popped
    268     # to match the selection shape, or be 1.
    269     raise TypeError("Can't broadcast %s -> %s" % (source_shape, self.array_shape))  # array shape

TypeError: Can't broadcast (501, 501) -> (500, 500)

I believe it's to do with how npy2bdv and skimage.transform.downscale_local_mean calculate the downsampled image shapes: the latter returns (501, 501), whereas npy2bdv calculates (500, 500).
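A standalone check (not npy2bdv code) shows the discrepancy directly:

import numpy as np
from skimage.transform import downscale_local_mean

plane = np.zeros((1001, 1001))
print(downscale_local_mean(plane, (2, 2)).shape)   # (501, 501): skimage pads to 1002, then averages
print(tuple(np.array(plane.shape) // 2))           # (500, 500): floor division, as used for the dataset shape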

npy2bdv calculates it here:

shape=virtual_stack_dim // self.subsamp[ilevel],

The downscale_local_mean function uses this block function:
https://github.com/scikit-image/scikit-image/blob/441fe68b95a86d4ae2a351311a0c39a4232b6521/skimage/measure/block.py#L78

I've essentially modified your code to round the shape up, which seems to solve this:

grp.create_dataset('cells', chunks=self.chunks[ilevel],
                   shape=np.ceil(virtual_stack_dim / self.subsamp[ilevel]),
                   compression=self.compression, dtype='int16')

Not sure if this is the best way to do this.
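One possible refinement (an assumption on my part, not tested against the library): np.ceil returns floats, so casting the rounded shape to integers before handing it to create_dataset may be safer:

shape = tuple(int(np.ceil(d / s)) for d, s in zip(virtual_stack_dim, self.subsamp[ilevel]))
grp.create_dataset('cells', chunks=self.chunks[ilevel], shape=shape,
                   compression=self.compression, dtype='int16')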

Cheers
Pradeep

affine options to save a view

Hi,
thanks for making this public. I think this will be quite useful to me. I'm after creating BigStitcher dataset files directly from Python. I know the tiling pattern and the approximate overlap of multiple tiles from a SPIM; would you consider adding the option to optionally pass in the affine transform parameters for each view (currently you pass in dx, dy, dz, as far as I could see)?
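For reference, this is the kind of transform I mean, just the matrix and not a claim about your API: BigDataViewer stores one 3x4 affine (12 numbers, row-major) per view, and for tiles at known positions only the translation column changes.

import numpy as np

def translation_affine(tx, ty, tz):
    # 3x4 affine in BigDataViewer's row-major layout; the last column is the translation
    return np.array([[1.0, 0.0, 0.0, tx],
                     [0.0, 1.0, 0.0, ty],
                     [0.0, 0.0, 1.0, tz]])

# e.g. a tile shifted by 90% of a 2048-pixel field along x (10% overlap), in pixel units
affine = translation_affine(0.9 * 2048, 0.0, 0.0)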

Bug with append_substack()?

I was working with npy2bdv this past week, and I think there may be a bug in append_substack().

I think these lines (npy2bdv/npy2bdv/npy2bdv.py, lines 444 to 446 at commit e7bdce7):

dataset[z_start : z_start + substack.shape[0],
        y_start : y_start + substack.shape[1],
        x_start : x_start + substack.shape[2]] = subdata

should include the downsampling in the indices, something like this:

sub_z_start = int(z_start/2**ilevel)
sub_y_start = int(y_start/2**ilevel)
sub_x_start = int(x_start/2**ilevel)
dataset[sub_z_start : sub_z_start + subdata.shape[0],
        sub_y_start : sub_y_start + subdata.shape[1],
        sub_x_start : sub_x_start + subdata.shape[2]] = subdata

per-dataset attributes not handled correctly

While looking through the code to see how one could implement one affine transform per dataset (see my latest comment regarding #1), I noticed that there are other per-dataset attributes that are not treated correctly.

  • Apart from affine, each dataset added via append_view could also have its own calibration and therefore its own dx, dy, dz.
  • The stack shape is stored in the instance variable self.stack_shape. This instance variable is changed whenever append_view is called here:
    self.stack_shape = stack.shape

During writing of the xml file this variable is accessed here:

nz, ny, nx = tuple(self.stack_shape)

Therefore the nz, ny, nx used to write the ViewSetup size subelement (ET.SubElement(vs, 'size').text = '{} {} {}'.format(nx, ny, nz)) always reflect the last stack passed to append_view rather than the shape of each individual view.

I guess what would be needed is that these parameters are passed in (or determined) for each invocation of append_view. Then some instance variables (lists), or dictionaries with keys derived from (time, ill, ch, tile, angle), could be used to keep track of these parameters for subsequent use when write_xml_file(...) is called, roughly as in the sketch below.
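A minimal sketch of that bookkeeping, with hypothetical attribute and variable names (view_props, calibration, m_affine are my placeholders, not existing code):

# inside append_view:
key = (time, illumination, channel, tile, angle)
if not hasattr(self, 'view_props'):
    self.view_props = {}
self.view_props[key] = {
    'shape': stack.shape if stack is not None else virtual_stack_dim,
    'calibration': calibration,   # per-view dx, dy, dz
    'affine': m_affine,           # optional per-view transform
}

# write_xml_file() would then look up self.view_props[key] for each ViewSetup
# instead of reading the shared self.stack_shape.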

revert change of license ?

Hi,
I just noticed that you changed the license from BSD-3/MIT to GPL some time back.
I bundled/vendored an earlier version of this code here in a project for easier distribution (while it was still under the BSD license).

Read affine transform from XML

Hi Nikita!

For our OPM post-processing, it will be useful to read the affine transform that contains the stage coordinates back out for the tiled acquisition. Right now we write using the virtual stack with no downsampling and write the stage coordinates to the affine translation column.

For post-processing, we read each big strip scan (~100,000x1600x256) back in to deconvolve, deskew, and split it into smaller blocks. We then write a new H5 with downsampling for stitching. We need to transform the stage coordinates after the deskew and splitting for each new tile.

I took a quick look at the new BdvEditor class and I think this should be simple to implement. We will take a shot at implementing it and open a pull request, but I wanted to let you know in case you have any input before we get started on it.
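For reference, here is a minimal sketch of reading the translation back with plain ElementTree, independent of BdvEditor and assuming the standard BDV XML layout (ViewRegistrations > ViewRegistration > ViewTransform > affine, 12 row-major numbers):

import numpy as np
import xml.etree.ElementTree as ET

root = ET.parse('dataset.xml').getroot()
node = root.find('./ViewRegistrations/ViewRegistration/ViewTransform/affine')
affine = np.array(node.text.split(), dtype=float).reshape(3, 4)
print(affine[:, 3])   # translation column, where we stored the stage coordinates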

Thanks,
Doug

Data always saved as int16

It would be nice if npy2bdv also offered the option to store data in a datatype other than int16. Maybe the writer could emit a warning when the data might have been altered? This is the case for any dtype other than np.uint8, np.int8, or np.int16.
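A sketch of the suggested warning, as one possible shape for it (the function name and exact wording are placeholders):

import numpy as np

SAFE_DTYPES = (np.uint8, np.int8, np.int16)   # representable in int16 without change

def warn_if_lossy(stack):
    # warn when casting to int16 could alter the values
    if stack.dtype not in SAFE_DTYPES:
        print(f"Warning: dtype {stack.dtype} will be cast to int16 and values may be altered.")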
