Comments (13)
I just tried the command you posted and it works for me. However, you need `elf` for the tif wrapper; I suspect it's not in the python env you are using. If you have trouble installing it from conda-forge (there seems to be some issue with the version pinning), just do the following:
```
pip install https://github.com/constantinpape/elf/archive/0.2.3.tar.gz
```
from pybdv.
OK, thanks. Seems to work now. I remember doing this task with `--target slurm`; however, this parameter is no longer supported. Will it return at some point?
> Seems to work now. I remember doing this task with `--target slurm`; however, this parameter is no longer supported. Will it return at some point?
No, that has never been supported in pybdv. If you want to use slurm, you can use the `cluster_tools.DownscalingWorkflow` with format `bdv.n5`. Here is a short function that wraps it: https://github.com/constantinpape/paintera_tools/blob/master/paintera_tools/convert/converter.py#L96-L141
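For orientation, here is a hedged sketch of the keyword arguments such a `DownscalingWorkflow` call takes. The parameter names are copied from the workflow invocation shown further down in this thread; the helper itself (`downscaling_workflow_kwargs`) and its defaults are illustrative assumptions, not the library's API:

```python
def downscaling_workflow_kwargs(input_path, input_key, output_path,
                                tmp_folder, resolution,
                                target="slurm", max_jobs=50):
    """Assemble keyword arguments for cluster_tools.DownscalingWorkflow with
    bdv.n5 output (hypothetical helper; parameter names taken from the
    invocation shown later in this thread)."""
    scale_factors = [[1, 2, 2], [2, 2, 2], [2, 2, 2]]
    return dict(
        tmp_folder=tmp_folder,                 # logs and job configs go here
        config_dir=tmp_folder + "/configs",
        target=target,                         # "slurm" to run on the cluster
        max_jobs=max_jobs,
        input_path=input_path,                 # path to the n5 container
        input_key=input_key,                   # full dataset name, e.g. "setup0/timepoint0/s0"
        output_path=output_path,               # may be the same as input_path
        output_key_prefix="",
        scale_factors=scale_factors,           # per-level downscaling factors (z, y, x)
        halos=scale_factors,                   # enlarges blocks to avoid artifacts
        metadata_format="bdv.n5",
        metadata_dict={"resolution": resolution},  # voxel size as [rz, ry, rx]
    )
```

The resulting dict would then be passed to the workflow task (the linked converter.py shows the actual wiring).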
Cool, thanks!
`convert_to_bdv` seems to not downsample without explicit factors. Can I run it again on an existing n5 to just downsample, or what would you use for that?
> `convert_to_bdv` seems to not downsample without explicit factors
Yes, you need to pass the downsampling factors for this.
> Can I run it again on an existing n5 to just downsample, or what would you use for that?
It will fail by default, or overwrite the data if you pass the flag `overwrite_data`. The downsample functions I linked to above should work if you run them with the `s0` dataset already existing; they will then just downsample.
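To make the role of the downscaling factors concrete, here is a minimal, self-contained sketch (independent of pybdv) of block-mean downsampling of a 2D image by per-axis factors:

```python
def downsample_mean(image, factors):
    """Block-mean downsample a 2D nested list by per-axis factors (fy, fx)."""
    fy, fx = factors
    out = []
    for y in range(0, len(image) - fy + 1, fy):
        row = []
        for x in range(0, len(image[0]) - fx + 1, fx):
            # average the fy-by-fx block anchored at (y, x)
            block = [image[y + dy][x + dx] for dy in range(fy) for dx in range(fx)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

image = [[1, 1, 3, 3],
         [1, 1, 3, 3],
         [5, 5, 7, 7],
         [5, 5, 7, 7]]
print(downsample_mean(image, (2, 2)))  # [[1.0, 3.0], [5.0, 7.0]]
```

A factor list like `[[1, 2, 2], [2, 2, 2]]` applies the same idea per scale level in 3D (z, y, x); each level downsamples relative to the previous one.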
OK, what is `halo` for?
Something still fails here...
```
ERROR: [pid 39011] Worker Worker(salt=066972180, workers=1, host=login.cluster.embl.de, username=schorb, pid=39011) failed DownscalingSlurm(tmp_folder=/scratch/schorb, max_jobs=50, config_dir=/scratch/schorb/configs, input_path=/g/emcf/ronchi/Benvenuto-Giovanna-seaurchin_SBEMtest_5833/giovanna_G3-1_20-07-07_3view/S010_acquisition/aligned_dec2020/stomach.n5, input_key=setup0/s0, output_path=/g/emcf/ronchi/Benvenuto-Giovanna-seaurchin_SBEMtest_5833/giovanna_G3-1_20-07-07_3view/S010_acquisition/aligned_dec2020/stomach.n5, output_key=setup0/s1, scale_factor=(1, 2, 2), scale_prefix=s1, halo=[1, 2, 2], effective_scale_factor=[1, 2, 2], dependency=DummyTask)
Traceback (most recent call last):
  File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/luigi/worker.py", line 191, in run
    new_deps = self._run_get_new_deps()
  File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/luigi/worker.py", line 133, in _run_get_new_deps
    task_gen = self.task.run()
  File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/cluster_tools/cluster_tasks.py", line 95, in run
    raise e
  File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/cluster_tools/cluster_tasks.py", line 81, in run
    self.run_impl()
  File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/cluster_tools/downscaling/downscaling.py", line 86, in run_impl
    prev_shape = f[self.input_key].shape
AttributeError: 'Group' object has no attribute 'shape'
```
call:
```
DownscalingWorkflow(tmp_folder=/scratch/schorb, max_jobs=50, config_dir=/scratch/schorb/configs, target=slurm, dependency=DummyTask, input_path=/g/emcf/ronchi/Benvenuto-Giovanna-seaurchin_SBEMtest_5833/giovanna_G3-1_20-07-07_3view/S010_acquisition/aligned_dec2020/stomach.n5, input_key=setup0, scale_factors=[[1, 2, 2], [2, 2, 2], [2, 2, 2]], halos=[[1, 2, 2], [2, 2, 2], [2, 2, 2]], metadata_format=paintera, metadata_dict={}, output_path=, output_key_prefix=setup0, force_copy=False, skip_existing_levels=False, scale_offset=0)
```
I annotated some of the arguments that you should change.
```
DownscalingWorkflow(
    tmp_folder=/scratch/schorb,  # this will write all the logs etc.; I would recommend using a dedicated subfolder like /scratch/schorb/tmp_downscale
    input_key=setup0,  # this needs to be the name of the actual dataset: "setup0/timepoint0/s0". That is causing the error you see
    halos=[[1, 2, 2], [2, 2, 2], [2, 2, 2]],  # the halo is used to enlarge the blocks in downscaling to avoid artifacts. Setting it to the same as the factors is fine
    metadata_format=paintera,  # this needs to be "bdv.n5"
    metadata_dict={},  # you should pass the resolution / voxel size here: {"resolution": [rz, ry, rx]}
    output_path=,  # you need to specify the output path! probably the same as the input path will do in your case
    output_key_prefix=setup0,  # this should be blank, i.e. ""
)
```
Edit: `output_key_prefix` needs to be blank.
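The halo comment above can be made concrete: each block's bounding box is enlarged by the halo on every axis and clipped to the volume shape, so neighbouring blocks overlap and the downscaling sees enough context at block borders to avoid artifacts. A minimal sketch of the idea (not the actual cluster_tools implementation):

```python
def enlarge_block(start, stop, halo, shape):
    """Enlarge a block [start, stop) by a per-axis halo, clipped to the volume shape."""
    new_start = [max(s - h, 0) for s, h in zip(start, halo)]
    new_stop = [min(e + h, sh) for e, h, sh in zip(stop, halo, shape)]
    return new_start, new_stop

# a (64, 64, 64) block inside a (100, 256, 256) volume, halo [1, 2, 2]
start, stop = enlarge_block([0, 64, 64], [64, 128, 128], [1, 2, 2], [100, 256, 256])
print(start, stop)  # [0, 62, 62] [65, 130, 130]
```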
I cannot run it due to issues with anisotropic voxel sizes...
```
ERROR: [pid 57435] Worker Worker(salt=327618298, workers=1, host=login.cluster.embl.de, username=schorb, pid=57435) failed WriteDownscalingMetadata(tmp_folder=/scratch/schorb/blbla, output_path=/g/emcf/ronchi/Benvenuto-Giovanna-seaurchin_SBEMtest_5833/giovanna_G3-1_20-07-07_3view/S010_acquisition/aligned_dec2020/stomach.n5, scale_factors=[[1, 2, 2], [2, 2, 2], [2, 2, 2]], dependency=DownscalingSlurm, metadata_format=bdv.n5, metadata_dict={"resolution": [0.05, 0.015, 0.015]}, output_key_prefix=, scale_offset=0, prefix=downscaling)
Traceback (most recent call last):
  File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/luigi/worker.py", line 191, in run
    new_deps = self._run_get_new_deps()
  File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/luigi/worker.py", line 133, in _run_get_new_deps
    task_gen = self.task.run()
  File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/cluster_tools/downscaling/downscaling_workflow.py", line 95, in run
    self._bdv_metadata()
  File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/cluster_tools/downscaling/downscaling_workflow.py", line 84, in _bdv_metadata
    overwrite_data=False, enforce_consistency=False)
  File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/pybdv/metadata.py", line 266, in write_xml_metadata
    overwrite, overwrite_data, enforce_consistency)
  File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/pybdv/metadata.py", line 97, in _require_view_setup
    _check_setup(vs)
  File "/g/emcf/software/python/miniconda/envs/mobie/lib/python3.7/site-packages/pybdv/metadata.py", line 75, in _check_setup
    raise ValueError("Incompatible voxel size")
ValueError: Incompatible voxel size
```
Tried with `[0.015, 0.015, 0.05]` as well...
The problem is that you already have an xml with metadata. Now it tries to overwrite the data, sees that the voxel sizes are incompatible, and complains. Just remove /g/emcf/ronchi/Benvenuto-Giovanna-seaurchin_SBEMtest_5833/giovanna_G3-1_20-07-07_3view/S010_acquisition/aligned_dec2020/stomach.xml and it should work.
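For reference, here is a small stdlib sketch of the kind of consistency check that raises here: parse the voxel size from an existing BDV xml and compare it to the resolution you are passing. The xml layout below is an assumption based on the usual BigDataViewer format (and the reading is trimmed to one ViewSetup); check your actual stomach.xml before relying on it.

```python
import xml.etree.ElementTree as ET

# Minimal BDV-style xml snippet (assumed layout, trimmed to the relevant part).
XML = """<SpimData version="0.2">
  <SequenceDescription>
    <ViewSetups>
      <ViewSetup>
        <id>0</id>
        <voxelSize>
          <unit>micrometer</unit>
          <size>0.015 0.015 0.05</size>
        </voxelSize>
      </ViewSetup>
    </ViewSetups>
  </SequenceDescription>
</SpimData>"""

def read_voxel_size(xml_text):
    """Return the voxel size stored in the first ViewSetup as a list of floats."""
    root = ET.fromstring(xml_text)
    size = root.find("./SequenceDescription/ViewSetups/ViewSetup/voxelSize/size")
    return [float(v) for v in size.text.split()]

stored = read_voxel_size(XML)
print(stored)                           # [0.015, 0.015, 0.05]
print(stored == [0.05, 0.015, 0.015])   # False -> triggers "Incompatible voxel size"
```

If the stored size disagrees with the resolution in `metadata_dict`, removing the stale xml (as above) lets the metadata be written fresh.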
That's it, thanks!
Will you join the call at 3?
> Will you join the call at 3?
Oh, I was not aware there was anything scheduled at 3. What is it about? (Unfortunately I have something else scheduled already; Thursdays are really full for me.)