
isonet's People

Contributors

heng-z, lianghaozhao, logicvay2010, procyontao, rdrighetto, samopolacek


isonet's Issues

Next release?

Dear IsoNet developers,

My name is Dimitrios Bellos and I work for the Rosalind Franklin Institute in the UK as part of the ARC team (https://www.rfi.ac.uk/research/artificial-intelligence/advanced-research-computing-at-the-franklin/)

Many of our users here at the Franklin, mainly biologists, use your software, and we wish to help them with issues they have with it. However, we mainly work only with stable releases of software. Since the most recent release of yours (0.2.1) was 2 years ago, I am wondering if you are working towards a new stable release soon.

In the past 2 years, there have been a number of commits to the master branch that seem to address many IsoNet issues. Do you know, approximately, when we can expect a new release (i.e. the software reaching a stable state)?

Kind regards,
Dimitrios

Support for tf 2.1 dropped?

Hi,

when using the latest version of the software with TensorFlow 2.1 and following the tutorial, the analysis crashes at the refine step with the error message:

File "/software/f2020/software/tensorflow/2.1.0-fosscuda-2019a-python-3.7.2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 874, in collect_per_output_metric_info

    'Expected a list or dictionary, found: ' + str(metrics))

TypeError: Type of metrics argument not understood. Expected a list or dictionary, found: ('mse', 'mae')

Is this possibly due to changes in commit Heng-Z@4c0e1b4, which dropped support for tf 2.1?
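For context, the traceback points at Keras's metrics validation: tf 2.1 only accepts a list or dict for the `metrics` argument, while the newer code apparently passes a tuple. A minimal sketch of the incompatibility and workaround (pure Python, no TensorFlow required):

```python
# tf 2.1's collect_per_output_metric_info rejects a tuple for `metrics`;
# a plain list is accepted. Casting the tuple before model.compile(...)
# restores compatibility with the older Keras API.
metrics = ('mse', 'mae')      # tuple form that triggers the TypeError
if not isinstance(metrics, (list, dict)):
    metrics = list(metrics)   # ['mse', 'mae'] passes validation
```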

cheers

IsoNet predict fail

When I run isonet.py predict tomoset.star demo_results/model_iter30.h5 --gpuID 0,1 --cube_size 48 --crop_size 64, I got an error.
[error screenshot]

And then I tried adding the following to the start of isonet.py:

    physical_devices = tf.config.experimental.list_physical_devices('GPU')
    assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
    for gpu in physical_devices:
        tf.config.experimental.set_memory_growth(gpu, True)
The previous error was solved and the prediction could run, but new problems appeared and the predicted result is not accurate; the error and result are as follows:
[error screenshot]

predict result:
[predicted result screenshot]

I have checked the intermediate results of the training and they are normal. So why does this happen? Can anyone help solve this problem?

IsoNet stops after a few iterations during refinement

Hi,

I have been having recent issues with IsoNet where it runs fine for a few iterations during refinement, then at some point during noise level estimation it just stops and quits with no errors. Is there something causing this to happen? Thanks!

Artefacts in corrected tomograms (plastic sections)

Hi there,

I am trying to run (for collaborators) isonet on tomograms acquired from heavy metal stained plastic sections (so this isn't cryo data, no ctf correction was performed). The "corrected" tomograms display strong "pixelation" artefacts, see screenshots attached. Here are the commands I used to get there. I would be happy about suggestions what is going wrong here. More generally, has anyone tried IsoNet on non-cryo data?

isonet.py prepare_star tomoset --output_star tomoset.star --pixel_size 4.51

isonet.py extract tomoset.star

isonet.py extract prealigned.star

isonet.py refine subtomo.star --gpuID 0,1,2,3 --iterations 30

isonet.py predict tomoset.star results/model_iter30.h5 --gpuID 0

Thanks a lot,
Best,
Matthias

[screenshot: IsoNet_artifacts]

ERROR multiprocessing.pool.RemoteTraceback:

Using the command in the tutorial, plus batch size 16, preprocessing cpus 16, when it gets to iteration 9, I get the following error:

"""
Traceback (most recent call last):
  File "/opt/miniconda3/envs/isonet/lib/python3.9/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/opt/miniconda3/envs/isonet/lib/python3.9/multiprocessing/pool.py", line 48, in mapstar
    return list(map(*args))
  File "/opt/IsoNet/preprocessing/prepare.py", line 157, in get_cubes
    get_cubes_one(data_X, data_Y, settings, start = start)
  File "/opt/IsoNet/preprocessing/prepare.py", line 95, in get_cubes_one
    noise_volume = read_vol(path_noise[path_index])
  File "/opt/IsoNet/preprocessing/prepare.py", line 92, in read_vol
    with mrcfile.open(f) as mf:
  File "/home/t93j956/.local/lib/python3.9/site-packages/mrcfile/load_functions.py", line 138, in open
    return NewMrc(name, mode=mode, permissive=permissive,
  File "/home/t93j956/.local/lib/python3.9/site-packages/mrcfile/mrcfile.py", line 108, in __init__
    self._open_file(name)
  File "/home/t93j956/.local/lib/python3.9/site-packages/mrcfile/mrcfile.py", line 125, in _open_file
    self._iostream = open(name, self._mode + 'b')
OSError: [Errno 9] Bad file descriptor: 'results/training_noise/n_00363.mrc'
"""

This has happened twice now, not sure why. GPU usage is at 33GB/45 but does it go up when the noise model stuff starts?

gui bugs

  1. Errors are not printed inside the window
  2. Messages from the keras fit process are not printed (may not be needed)
  3. Consider adding a kill or stop button next to refine
  4. Add noise mode to the Denoise settings
  5. The predict panel needs tomo_idx
  6. "already running another job, please wait until it finished" could be shown as a pop-up reminder

Refine, FileNotFound error if there is '.' in the name of the tomogram

Dear IsoNet maintainers,

When there is a '.' in the name of the tomogram (other than the one right before the file extension), refine fails with a file-not-found error. Making a symbolic link to the tomogram (or, as I did, renaming just the subtomograms) makes it work as expected.

Best,
Rasmus
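A plausible cause (an assumption, not confirmed from the IsoNet source) is basename handling that splits on the first '.' instead of stripping only the extension, which truncates any name containing an extra dot:

```python
import os

tomo = "TS_01.rec.mrc"  # hypothetical tomogram name containing an extra '.'

# Splitting on the first '.' truncates the base name:
broken_base = tomo.split('.')[0]       # 'TS_01' -- loses '.rec'

# os.path.splitext strips only the final extension:
safe_base = os.path.splitext(tomo)[0]  # 'TS_01.rec'
```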

'Generate Mask' by GUI not working

After running HIV data tutorial, I am trying neuronal synapse data tutorial.

I could run commandline tutorial up to refine well.

However, when I use the GUI, I can't see any result/output after 'Generate Mask', although I see that I made 'tomograms.star' correctly (simply by specifying the input tomo rec file path, Apix and number of subtomos) and can '3dmod view' it.
[screenshots attached]

An error message after the last iteration

I believe this message can simply be ignored. But I will try to fix this.

Exception ignored in: <function Pool.__del__ at 0x7f6bc4fbae50>
Traceback (most recent call last):
File "/usr/local/lib/python3.8/multiprocessing/pool.py", line 268, in __del__
self._change_notifier.put(None)
File "/usr/local/lib/python3.8/multiprocessing/queues.py", line 368, in put
self._writer.send_bytes(obj)
File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 411, in _send_bytes
self._send(header + buf)
File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
OSError: [Errno 9] Bad file descriptor

The question about deconvolution.py

Could you please explain why the code in line 19 of deconvolution.py has an extra division by 2? It does not seem to make sense. As I understand it, you are trying to set up the units of spatial frequency. If you add an extra division by 2, this function would have an input and an output that do not match the subsequent functions (Wiener filter and SNR). If I am wrong, please walk me through it. Thank you!
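Without the IsoNet source at hand, one place a division by 2 legitimately appears in this kind of code is the Nyquist limit: the highest spatial frequency representable at pixel size d is 1/(2d). A small numpy sketch of the frequency-axis setup (illustrative values, not IsoNet's actual code):

```python
import numpy as np

pixel_size = 4.51   # Angstrom per pixel (illustrative)
n = 64              # box size

# Spatial frequencies in 1/Angstrom along one axis; their magnitude is
# bounded by the Nyquist frequency 1/(2 * pixel_size), which is where a
# factor of 2 naturally enters a Wiener-filter setup.
freqs = np.fft.fftfreq(n, d=pixel_size)
nyquist = 1.0 / (2.0 * pixel_size)
```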

System seemed stopped during refine

Hi there,

I am a student new to cryo-EM. I am now trying to apply IsoNet on the analysis of my data and I have encountered a problem.

I found that it took extremely long time in the refine step without any response or error messages.

The slurm after 4 hours of running still in Epoch 1/10 stage [as (1) below]. I repeated running with the official tutoral HIV dataset and exactly same commands and parameters according to the tutorial, and got the same problem. The code seemed still running in Epoch 1/10 even after 15 hours.
Then I have checked the GPU [nviaid-smi checked in (2) below]. The GPUs seemed are not working(?), while memory is being used. No new files were written in the waiting hours.

Would anyone give me some advice? Thank you very much!

Chris

(1) Slurm log-----------------------------------------------------------------------------------
11-25 10:58:34, INFO
######Isonet starts refining######

11-25 10:58:38, INFO Start Iteration1!
11-25 10:58:38, WARNING The results folder already exists
The old results folder will be renamed (to results~)
11-25 11:00:31, INFO Noise Level:0.0
11-25 11:01:08, INFO Done preparing subtomograms!
11-25 11:01:08, INFO Start training!
11-25 11:01:10, INFO Loaded model from disk
11-25 11:01:10, INFO begin fitting
Epoch 1/10
slurm-37178.out (END)

(2) nvidia-smi --------------------------------------------------------

Fri Nov 25 14:38:39 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 3090 Off | 00000000:04:00.0 Off | N/A |
| 30% 33C P8 19W / 350W | 17755MiB / 24268MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 GeForce RTX 3090 Off | 00000000:43:00.0 Off | N/A |
| 30% 32C P8 20W / 350W | 17755MiB / 24268MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 GeForce RTX 3090 Off | 00000000:89:00.0 Off | N/A |
| 30% 30C P8 32W / 350W | 17755MiB / 24268MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 GeForce RTX 3090 Off | 00000000:C4:00.0 Off | N/A |
| 30% 30C P8 25W / 350W | 17755MiB / 24268MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1212666 C python3 17747MiB |
| 0 N/A N/A 3278054 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 1212666 C python3 17747MiB |
| 1 N/A N/A 3278054 G /usr/lib/xorg/Xorg 4MiB |
| 2 N/A N/A 1212666 C python3 17747MiB |
| 2 N/A N/A 3278054 G /usr/lib/xorg/Xorg 4MiB |
| 3 N/A N/A 1212666 C python3 17747MiB |
| 3 N/A N/A 3278054 G /usr/lib/xorg/Xorg 4MiB |

No such file or directory: results

Hi

I followed your tutorial steps, except using my own data. I gave it several tomograms, preprocessed as you describe, then ran refine on the HPC. It runs for about 3 minutes, then throws an error that it cannot find results/...

I am running isonet/0.2.

File "/mnt/nfs/clustersw/Debian/bullseye/cuda/11.2/isonet/0.2/IsoNet/bin/refine.py", line 114, in run
get_cubes_list(args)
File "/mnt/nfs/clustersw/Debian/bullseye/cuda/11.2/isonet/0.2/IsoNet/preprocessing/prepare.py", line 183, in get_cubes_list
p.map(func,inp)
File "/usr/lib/python3.9/multiprocessing/pool.py", line 364, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/usr/lib/python3.9/multiprocessing/pool.py", line 771, in get
raise self._value
FileNotFoundError: [Errno 2] No such file or directory: 'results/L1g4top2_ts_004_iter00.mrc'

I looked in the results folder and found results/L1g4top2_ts_004_iter00.mrc_17 (and other files with _17 ending for my other tomograms).

any suggestions are welcome!

thanks

Jesse

No compatibility with NumPy > 1.23

I just wanted to let everyone know that we ran into an error regarding a deprecation from NumPy:

 File IsoNet/util/noise_generator.py", line 110, in simulate_noise
    res = p.map(part_iradon_nofilter,sinograms)
 File "lib/python3.9/multiprocessing/pool.py", line 364, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
 File "lib/python3.9/multiprocessing/pool.py", line 771, in get
    raise self._value
AttributeError: module 'numpy' has no attribute 'int'.
`np.int` was a deprecated alias for the builtin `int`. To avoid this error in existing code, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
    https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

The deprecation warning finally became an error in v1.24, which is why we had to downgrade numpy to 1.23.*. This solved the issue 👍
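If downgrading NumPy is not an option, the alternative is patching the offending call in IsoNet/util/noise_generator.py. `np.int` was a pure alias, so the builtin `int` (or an explicit width) is a drop-in replacement; a sketch with an illustrative array:

```python
import numpy as np

# `np.int` was removed in NumPy 1.24; the builtin `int` (or an explicit
# width such as np.int64) replaces it with no behavior change.
angles = np.linspace(0.0, 180.0, 5)

# old, now-broken spelling: angles.astype(np.int)
rounded = angles.astype(int)         # platform default integer
rounded64 = angles.astype(np.int64)  # explicit 64-bit width
```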

GPU ID can not be int

mwr_cli.py predict tomogram/tomogram2.mrc tomogram_predict.mrc results/model_iter30.h5 --gpuID 1
Traceback (most recent call last):
File "/home/lytao/software/mwr/bin/mwr_cli.py", line 164, in
fire.Fire(MWR)
File "/home/lytao/software/Python-3.6.5/build/lib/python3.6/site-packages/fire/core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/lytao/software/Python-3.6.5/build/lib/python3.6/site-packages/fire/core.py", line 468, in _Fire
target=component.name)
File "/home/lytao/software/Python-3.6.5/build/lib/python3.6/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/home/lytao/software/mwr/bin/mwr_cli.py", line 123, in predict
predict(d_args)
File "/home/lytao/software/mwr/bin/mwr3D_predict.py", line 60, in predict
os.environ["CUDA_VISIBLE_DEVICES"]=args.gpuID
File "/home/lytao/software/Python-3.6.5/build/lib/python3.6/os.py", line 674, in __setitem__
value = self.encodevalue(value)
File "/home/lytao/software/Python-3.6.5/build/lib/python3.6/os.py", line 744, in encode
raise TypeError("str expected, not %s" % type(value).__name__)
TypeError: str expected, not int
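The root cause is visible in the last frame: os.environ values must be strings, but fire parses a bare `--gpuID 1` as an int. A minimal sketch of the fix (variable names are illustrative):

```python
import os

gpuID = 1  # fire parses `--gpuID 1` as an int, not a str

# os.environ only accepts str values; assigning an int raises
# "TypeError: str expected, not <class 'int'>". Cast before assigning.
os.environ["CUDA_VISIBLE_DEVICES"] = str(gpuID)
```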

some bugs

  1. Running refine in the background has problems; it reports a broken pipe error
  2. The continue parameter could change the debug level
  3. Add "use deconvolved map" in prediction
  4. Rename data_folder to data_dir (for consistency)
  5. Possible suggestion: logging to logger

Errors during refine

Hello everyone, I've been running into difficulties running Isonet's refinement and was hoping to find some assistance.
To give some background details, I'm trying to correct for the missing wedge on 5 tomograms. After following the tutorial online, I was able to generate a star file, correct the CTF, generate masks, and extract the subtomograms. However, when running the refine program, the job fails.

I ran the following script: isonet.py refine subtomo.star --gpuID 0,1,2,3,4,5 --iterations 30 --noise_start_iter 10,15,20,25 --noise_level 0.05,0.1,0.15,0.2
I submitted the job on a node on our university cluster. I asked for 6 GPU A100 Devices and 600GB memory.
Later in the evening the job failed after stalling out at Epoch 1/10.

I've been in contact with our CHPC department, who tried to look further into this and found that we have three issues intertwined, which makes it hard to find exactly what the problem is. So, let's break it down into its constituent parts:
a. IsoNet was originally installed on Rocky 8. Our CHPC department tested TensorFlow and checked whether cuDNN worked correctly. It did.
The corresponding module was written for Rocky 8; however, a few days ago we realized we need the software to run on a CentOS 7 node (this is the server I have access to).
But attempting this on CentOS 7 (or Rocky 8), I didn't see any tangible progress (still stuck at Epoch 1/10). In the end, both jobs were stopped and threw the error shown in the screenshot below:

Do you all have any suggestions for getting the refinement to work? Please let me know if you need any additional details
Best,
Ben

[error screenshot]

General models

Dear Authors,

I was wondering if you could provide the models used in the paper? For me, I am particular interested in the axoneme model.

Best,
Pengxin

Error during Refine

Hi everyone! I'm trying to run IsoNet on over 12 tomograms, but due to memory and time limitations on my university's cluster I've been troubleshooting on just one tomogram. I'm able to generate star files, CTF correct, generate masks, and extract subtomos according to the tutorials and documentation successfully, but when I try to run the command:

isonet.py refine subtomo.star --gpuID 0,1,2,3 --preprocessing_ncpus 16

I get this error within the log file:

03-13 19:58:47, INFO
######Isonet starts refining######

03-13 19:58:57, INFO Start Iteration1!
03-13 19:59:02, INFO Noise Level:0.0
03-13 19:59:03, ERROR multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/programs/x86_64-linux/isonet/0.2/miniconda/lib/python3.8/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/programs/x86_64-linux/isonet/0.2/miniconda/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
return list(map(*args))
File "/programs/x86_64-linux/isonet/0.2/IsoNet/preprocessing/prepare.py", line 126, in get_cubes
with mrcfile.open(current_mrc) as mrcData:
File "/programs/x86_64-linux/isonet/0.2/miniconda/lib/python3.8/site-packages/mrcfile/load_functions.py", line 139, in open
return NewMrc(name, mode=mode, permissive=permissive,
File "/programs/x86_64-linux/isonet/0.2/miniconda/lib/python3.8/site-packages/mrcfile/mrcfile.py", line 109, in __init__
self._open_file(name)
File "/programs/x86_64-linux/isonet/0.2/miniconda/lib/python3.8/site-packages/mrcfile/mrcfile.py", line 126, in _open_file
self._iostream = open(name, self._mode + 'b')
FileNotFoundError: [Errno 2] No such file or directory: 'results/grid2U32_iter00.mrc'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/programs/x86_64-linux/isonet/0.2/IsoNet/bin/refine.py", line 114, in run
get_cubes_list(args)
File "/programs/x86_64-linux/isonet/0.2/IsoNet/preprocessing/prepare.py", line 183, in get_cubes_list
p.map(func,inp)
File "/programs/x86_64-linux/isonet/0.2/miniconda/lib/python3.8/multiprocessing/pool.py", line 364, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/programs/x86_64-linux/isonet/0.2/miniconda/lib/python3.8/multiprocessing/pool.py", line 771, in get
raise self._value
FileNotFoundError: [Errno 2] No such file or directory: 'results/grid2U32_iter00.mrc'

I'm not very experienced with this type of software or python, any help would be greatly appreciated!

"Data array contains NaN values" error

Hi,

I'm getting the following error in prediction:

diogori@worker08:isonet$ time isonet.py predict MTEC_tomo055_72_d3.star ./MTEC_tomo055_72_d3_results/model_iter09.h5 --output_dir MTEC_tomo055_72_d3_corrected_iter09 --cube_size 80 --gpuID 0,1,2,3
09-09 21:06:46, INFO     

######Isonet starts predicting######

09-09 21:07:35, INFO     gpuID:0,1,2,3
09-09 21:08:37, INFO     Loaded model from disk
09-09 21:08:37, INFO     predicting:MTEC_tomo055_72_d3
09-09 21:08:51, INFO     total batches: 54
100%|                                                              | 54/54 [01:37<00:00,  1.80s/it]
/scicore/home/engel0006/GROUP/pool-engel/soft/isonet/venv_isonet/lib/python3.9/site-packages/mrcfile/mrcobject.py:545: RuntimeWarning: Data array contains NaN values
  warnings.warn("Data array contains NaN values", RuntimeWarning)
09-09 21:10:47, INFO     Done predicting
Exception ignored in: <function Pool.__del__ at 0x7fe7ff44f700>
Traceback (most recent call last):
  File "/scicore/soft/apps/Python/3.9.5-GCCcore-10.3.0/lib/python3.9/multiprocessing/pool.py", line 268, in __del__
    self._change_notifier.put(None)
  File "/scicore/soft/apps/Python/3.9.5-GCCcore-10.3.0/lib/python3.9/multiprocessing/queues.py", line 378, in put
    self._writer.send_bytes(obj)
  File "/scicore/soft/apps/Python/3.9.5-GCCcore-10.3.0/lib/python3.9/multiprocessing/connection.py", line 205, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/scicore/soft/apps/Python/3.9.5-GCCcore-10.3.0/lib/python3.9/multiprocessing/connection.py", line 416, in _send_bytes
    self._send(header + buf)
  File "/scicore/soft/apps/Python/3.9.5-GCCcore-10.3.0/lib/python3.9/multiprocessing/connection.py", line 373, in _send
    n = write(self._handle, buf)
OSError: [Errno 9] Bad file descriptor

I know the "Bad file descriptor" error is harmless, but mrcfile is complaining that the resulting tomogram has NaN values. This is the part that concerns me:
/scicore/home/engel0006/GROUP/pool-engel/soft/isonet/venv_isonet/lib/python3.9/site-packages/mrcfile/mrcobject.py:545: RuntimeWarning: Data array contains NaN values
warnings.warn("Data array contains NaN values", RuntimeWarning)

It does write out an MRC volume, but it cannot be displayed because of the NaN-valued voxels. Has anyone encountered this error before and knows how to correct this?

Thank you!
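As a stop-gap (this masks the symptom rather than fixing whatever produced the NaNs during prediction), the predicted volume can be sanitized before writing, e.g.:

```python
import numpy as np

# Hypothetical predicted volume with a few NaN voxels.
vol = np.random.rand(4, 4, 4).astype(np.float32)
vol[0, 0, 0] = np.nan

# Replace NaNs with the mean of the valid voxels so viewers can
# display the map.
vol_clean = np.where(np.isnan(vol), np.nanmean(vol), vol)
```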

Function call stack error

Hi,

I am having an issue installing IsoNet on a certain workstation. The workstation has 2x 3080 Ti's. The driver version is 470.57.02; I've also tried a few different ones. It is also running CUDA version 11.4, TensorFlow 2.6.0 with keras 2.6.0. I've installed IsoNet on many different computers but, strangely, am only having this issue with the 3080 GPUs. I attached a copy of the error if this helps!

Thanks,
Jake

[attachment: isonet.txt]

Some improvements

It seems that the software works.

There are still several things to improve:

  1. patch edge artifact in deconvolution
  2. Tilt angle offset and xtilt: The solution should be rotating subtomograms during 'isonet.py extract' using these two angles, along y and x respectively. During 'isonet.py predict', rotate entire tomograms, then predict and rotate back.
  3. Reduce overlap of subtomograms as much as possible: The solution might be generating a lot more random coordinates and selecting a subset of them with the largest minimum inter-particle distance.
  4. check grammar "The program crop the tomogram in multiple tiles (z,y,x) for multiprocessing and assembly them into one. e.g. (1,2,2)"
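For point 2, a rough sketch of rotating a volume by the two angles with scipy (axis conventions here are illustrative assumptions, not IsoNet's actual convention):

```python
import numpy as np
from scipy.ndimage import rotate

vol = np.random.rand(32, 32, 32).astype(np.float32)  # zyx-ordered volume
tilt_offset, xtilt = 3.0, 1.5  # degrees, illustrative values

# Rotate about y (the z-x plane), then about x (the z-y plane);
# prediction would apply the inverse rotations afterwards.
rotated = rotate(vol, tilt_offset, axes=(0, 2), reshape=False, order=1)
rotated = rotate(rotated, xtilt, axes=(0, 1), reshape=False, order=1)
```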

Use coordinates for subtomo extraction

Hi guys,

Is there a possibility to pass multiple coordinates for the subtomogram extraction rather than using none or a mask (e.g. an IMOD coordinate list)? If so, how did you manage it? Thanks and all the best,

Felix

error

Refine on the l1-l4 computer with no GPU: Illegal instruction (core dumped).

Check micrograph name when "open star"

Windows10 failed launch isonet.py

Hi,

I have a problem when following step 3, "Add environment variables", on my Windows 10 machine with the Git Bash tool. After I carefully rename the 'IsoNet' folder and export both the correct PATH and PYTHONPATH, an annoying report comes:
bash: isonet.py: command not found

When I run the same commands on my M1 Pro MacBook Pro, everything is fine. But for further training and developing keras-based models, I have to make IsoNet work on my Windows 10 machine.

So could you please tell me how to deal with this? Or perhaps include the win10 installation in your tutorial.

Thanks!

Pixel size of 0 in corrected tomograms

Hi there,

I have an issue with the predict step of IsoNet. I am running it on the same tomograms that I used for training; the pixel size is 10.00 Å/px and this information is also present in the star file that I submit during the predict step. However, the corrected tomograms have a pixel size of 0, and when I open them they are accordingly at the incorrect scale compared to the unprocessed tomograms.

Is anyone else having this problem and suggestions what causes this?

Illegal Instruction (core dumped)

Hello,

I installed IsoNet no problem and can run all the preparation steps fine either with GUI or command line.

When I try to start the refining step through the GUI, nothing happens. When I try through the command line I get an "Illegal Instruction (core dumped)" error (screenshot attached). By googling the error it seems to be a CPU issue.

NVIDIA GeForce GTX 1080 running with NVIDIA drivers 470.63.01
Intel Xeon CPU E5-2687W 3.10GHz x 16
Ubuntu 20.04
Python 3.8.10
cuDNN v8.2.4 for cuda 11.4
GCC 9.3.0
Cuda 11.4
tensorflow 2.4.0

Thank you for your help,

Best,

William J Nicolas

ERROR multiprocessing.pool.RemoteTraceback

Hi all,

I am running into an issue during the refinement stage. I have all the dependencies installed with the right version according to the tensorflow table:

Python 3.7.11
Tensor flow 2.6.2
cuDNN 8.1.0.77-h90431f1_0
CUDA 11.2

Would you know what is causing the following error?
many thanks!

isonet.py refine ./subtomo.star  --gpuID 0,1,2,3

######Isonet done extracting subtomograms######

11-10 20:12:55, INFO     
######Isonet starts refining######

11-10 20:13:11, INFO     Done preperation for the first iteration!
11-10 20:13:11, INFO     Start Iteration1!
11-10 20:13:27, INFO     Noise Level:0.0
11-10 20:13:28, ERROR    multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/lmb/home/cvhoorn/miniconda3/envs/isonet/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/lmb/home/cvhoorn/miniconda3/envs/isonet/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "/lmb/home/cvhoorn/IsoNet/preprocessing/prepare.py", line 147, in get_cubes
    with mrcfile.open(current_mrc) as mrcData:
  File "/lmb/home/cvhoorn/miniconda3/envs/isonet/lib/python3.7/site-packages/mrcfile/load_functions.py", line 139, in open
    header_only=header_only)
  File "/lmb/home/cvhoorn/miniconda3/envs/isonet/lib/python3.7/site-packages/mrcfile/mrcfile.py", line 108, in __init__
    self._open_file(name)
  File "/lmb/home/cvhoorn/miniconda3/envs/isonet/lib/python3.7/site-packages/mrcfile/mrcfile.py", line 125, in _open_file
    self._iostream = open(name, self._mode + 'b')
FileNotFoundError: [Errno 2] No such file or directory: 'results/TS_010_iter00.mrc'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/lmb/home/cvhoorn/IsoNet/bin/refine.py", line 25, in run
    run_whole(args)
  File "/lmb/home/cvhoorn/IsoNet/bin/refine.py", line 143, in run_whole
    get_cubes_list(args)
  File "/lmb/home/cvhoorn/IsoNet/preprocessing/prepare.py", line 189, in get_cubes_list
    res = p.map(func,inp)
  File "/lmb/home/cvhoorn/miniconda3/envs/isonet/lib/python3.7/multiprocessing/pool.py", line 268, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/lmb/home/cvhoorn/miniconda3/envs/isonet/lib/python3.7/multiprocessing/pool.py", line 657, in get
    raise self._value
FileNotFoundError: [Errno 2] No such file or directory: 'results/TS_010_iter00.mrc'

RuntimeError: No GPU detected, Please check your CUDA version and installation

Hi,

I met this problem during the refine step even though I downloaded the right tensorflow-gpu version.
10-20 11:26:01, INFO
######Isonet starts refining######

10-20 11:26:05, ERROR No GPU detected, Please check your CUDA version and installation
10-20 11:26:05, ERROR Traceback (most recent call last):
File "/home/spuser/IsoNet/bin/refine.py", line 25, in run
run_whole(args)
File "/home/spuser/IsoNet/bin/refine.py", line 105, in run_whole
check_gpu(args)
File "/home/spuser/IsoNet/bin/refine.py", line 275, in check_gpu
raise RuntimeError('No GPU detected, Please check your CUDA version and installation')
RuntimeError: No GPU detected, Please check your CUDA version and installation

Here is my system info:
System: Centos 7,
CUDA version: 10.2
Driver version: 440.31
GPU: 4X RTX2080Ti

Any thoughts?

Best,
Pengxin
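One likely culprit: official tensorflow-gpu wheels are built against specific CUDA versions, and CUDA 10.2 has no matching official build, so TF finds no usable GPU even with a correct driver. A sketch of the idea, with an illustrative subset of the official compatibility table (check the real table for your versions):

```python
# Illustrative subset of the TF <-> CUDA build matrix, not exhaustive.
TF_CUDA_MATRIX = {
    "2.4": "11.0",
    "2.3": "10.1",
    "2.1": "10.1",
    "1.15": "10.0",
}

def tf_versions_for_cuda(cuda):
    """Return TF versions officially built against the given CUDA toolkit."""
    return sorted(tf for tf, cu in TF_CUDA_MATRIX.items() if cu == cuda)

compatible = tf_versions_for_cuda("10.2")  # no official build for CUDA 10.2
```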

Random half 1 and 2, Resolution Estimation

Thank you so much for this wonderful work!

One small question. I find that in your paper, the subtomos are randomly divided into two random halves. From each half, a prediction is performed. Therefore, you get two corrected tomograms, which makes resolution estimation possible.

[screenshot from the paper]

However, I cannot find the corresponding part in your code.

So, how can I generate the two corrected tomograms for comparison?

Looking forward to your reply.

Best wishes.
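In the absence of a built-in option, the half-set scheme from the paper can be emulated manually: shuffle the subtomogram list, split it in two, then run refine/predict on each half independently so the two corrected maps can be compared. A sketch with hypothetical file names:

```python
import random

# Shuffle extracted subtomograms and split into two disjoint halves.
subtomos = [f"subtomo_{i:03d}.mrc" for i in range(10)]  # illustrative names
rng = random.Random(0)  # fixed seed for reproducibility
rng.shuffle(subtomos)
half1, half2 = subtomos[::2], subtomos[1::2]
```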

gui TODO list

Preparation:
Add snrfalloff and deconvstrength percentages into the initial star table
3dmod view multiple files
Allow the window to be shrunk smaller

Refine:
Use subprocess.Popen instead of os.system so the interface does not freeze
Add a nohup checkbox: if checked, refine can run in the background; otherwise, use Popen_instance.kill() to stop the process when the app window is closed.
Add continue_from (select a .json file); change the Denoise settings to: noise_level (tuple), noise_start_iter (tuple), noise_mode (1 or 2)
Predict:
Add a "use deconvolved" checkbox
Change the default tomogram star file to tomograms.star (currently tomogram.star)

Issue with Keras / function mae

Hi,
I'm trying to help some users use the isonet.py program. We are running on a Rocky Linux 9 platform (a RHEL 9 clone). The user is trying to run a refine job and we're seeing errors like the ones below. From a naive Google search into the issue, it seems that some of the loss functions need to be serialized, unless I should be using a different package version (below is the full list of packages in the conda environment).
Please advise,
Jeff

$ isonet.py refine ./subtomo.star --gpuID 0 --preprocessing_ncpus 16 --iterations 4 --noise_start_iter 10,15,20,25 --noise_level 0.05,0.1,0.15,0.2
07-10 15:44:31, INFO
######Isonet starts refining######

07-10 15:44:34, INFO Start Iteration1!
07-10 15:44:34, WARNING The results folder already exists
The old results folder will be renamed (to results~)
/panfs/hisoftware/rocky9/miniconda3/py39_23.1.0/envs/tomo/lib/python3.12/site-packages/keras/src/layers/activations/leaky_relu.py:41: UserWarning: Argument alpha is deprecated. Use negative_slope instead.
warnings.warn(
07-10 15:44:36, WARNING You are saving your model as an HDF5 file via model.save() or keras.saving.save_model(model). This file format is considered legacy. We recommend using instead the native Keras format, e.g. model.save('my_model.keras') or keras.saving.save_model(model, 'my_model.keras').
07-10 15:44:39, INFO Noise Level:0.0
07-10 15:45:22, INFO Done preparing subtomograms!
07-10 15:45:22, INFO Start training!
07-10 15:45:23, ERROR Traceback (most recent call last):
File "/panfs/hisoftware/rocky9/miniconda3/py39_23.1.0/envs/tomo/IsoNet/bin/refine.py", line 128, in run
history = train_data(args) #train based on init model and save new one as model_iter{num_iter}.h5
^^^^^^^^^^^^^^^^
File "/panfs/hisoftware/rocky9/miniconda3/py39_23.1.0/envs/tomo/IsoNet/models/unet/train.py", line 93, in train_data
history = train3D_continue('{}/model_iter{:0>2d}.h5'.format(settings.result_dir,settings.iter_count),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/panfs/hisoftware/rocky9/miniconda3/py39_23.1.0/envs/tomo/IsoNet/models/unet/train.py", line 38, in train3D_continue
model = load_model( model_file)
^^^^^^^^^^^^^^^^^^^^^^^
File "/panfs/hisoftware/rocky9/miniconda3/py39_23.1.0/envs/tomo/lib/python3.12/site-packages/keras/src/saving/saving_api.py", line 189, in load_model
return legacy_h5_format.load_model_from_hdf5(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/panfs/hisoftware/rocky9/miniconda3/py39_23.1.0/envs/tomo/lib/python3.12/site-packages/keras/src/legacy/saving/legacy_h5_format.py", line 155, in load_model_from_hdf5
**saving_utils.compile_args_from_training_config(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/panfs/hisoftware/rocky9/miniconda3/py39_23.1.0/envs/tomo/lib/python3.12/site-packages/keras/src/legacy/saving/saving_utils.py", line 143, in compile_args_from_training_config
loss = _deserialize_nested_config(losses.deserialize, loss_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/panfs/hisoftware/rocky9/miniconda3/py39_23.1.0/envs/tomo/lib/python3.12/site-packages/keras/src/legacy/saving/saving_utils.py", line 202, in _deserialize_nested_config
return deserialize_fn(config)
^^^^^^^^^^^^^^^^^^^^^^
File "/panfs/hisoftware/rocky9/miniconda3/py39_23.1.0/envs/tomo/lib/python3.12/site-packages/keras/src/losses/__init__.py", line 149, in deserialize
return serialization_lib.deserialize_keras_object(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/panfs/hisoftware/rocky9/miniconda3/py39_23.1.0/envs/tomo/lib/python3.12/site-packages/keras/src/saving/serialization_lib.py", line 575, in deserialize_keras_object
return deserialize_keras_object(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/panfs/hisoftware/rocky9/miniconda3/py39_23.1.0/envs/tomo/lib/python3.12/site-packages/keras/src/saving/serialization_lib.py", line 678, in deserialize_keras_object
return _retrieve_class_or_fn(
^^^^^^^^^^^^^^^^^^^^^^
File "/panfs/hisoftware/rocky9/miniconda3/py39_23.1.0/envs/tomo/lib/python3.12/site-packages/keras/src/saving/serialization_lib.py", line 812, in _retrieve_class_or_fn
raise TypeError(
TypeError: Could not locate function 'mae'. Make sure custom classes are decorated with @keras.saving.register_keras_serializable(). Full object config: {'module': 'keras.metrics', 'class_name': 'function', 'config': 'mae', 'registered_name': 'mae'}


$ conda list

# packages in environment at /panfs/hisoftware/rocky9/miniconda3/py39_23.1.0/envs/tomo:
#
# Name                    Version                   Build  Channel

_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
absl-py 2.1.0 pypi_0 pypi
astunparse 1.6.3 pypi_0 pypi
bzip2 1.0.8 hd590300_5 conda-forge
ca-certificates 2024.6.2 hbcca054_0 conda-forge
certifi 2024.6.2 pypi_0 pypi
charset-normalizer 3.3.2 pypi_0 pypi
fire 0.6.0 pypi_0 pypi
flatbuffers 24.3.25 pypi_0 pypi
gast 0.6.0 pypi_0 pypi
google-pasta 0.2.0 pypi_0 pypi
grpcio 1.64.1 pypi_0 pypi
h5py 3.11.0 pypi_0 pypi
idna 3.7 pypi_0 pypi
imageio 2.34.2 pypi_0 pypi
keras 3.4.1 pypi_0 pypi
lazy-loader 0.4 pypi_0 pypi
ld_impl_linux-64 2.40 hf3520f5_7 conda-forge
libclang 18.1.1 pypi_0 pypi
libexpat 2.6.2 h59595ed_0 conda-forge
libffi 3.4.2 h7f98852_5 conda-forge
libgcc-ng 14.1.0 h77fa898_0 conda-forge
libgomp 14.1.0 h77fa898_0 conda-forge
libnsl 2.0.1 hd590300_0 conda-forge
libsqlite 3.46.0 hde9e2c9_0 conda-forge
libuuid 2.38.1 h0b41bf4_0 conda-forge
libxcrypt 4.4.36 hd590300_1 conda-forge
libzlib 1.3.1 h4ab18f5_1 conda-forge
markdown 3.6 pypi_0 pypi
markdown-it-py 3.0.0 pypi_0 pypi
markupsafe 2.1.5 pypi_0 pypi
mdurl 0.1.2 pypi_0 pypi
ml-dtypes 0.3.2 pypi_0 pypi
mrcfile 1.5.0 pypi_0 pypi
namex 0.0.8 pypi_0 pypi
ncurses 6.5 h59595ed_0 conda-forge
networkx 3.3 pypi_0 pypi
numpy 1.26.4 pypi_0 pypi
openssl 3.3.1 h4ab18f5_1 conda-forge
opt-einsum 3.3.0 pypi_0 pypi
optree 0.11.0 pypi_0 pypi
packaging 24.1 pypi_0 pypi
pillow 10.4.0 pypi_0 pypi
pip 24.0 pyhd8ed1ab_0 conda-forge
protobuf 4.25.3 pypi_0 pypi
pygments 2.18.0 pypi_0 pypi
pyqt5 5.15.10 pypi_0 pypi
pyqt5-qt5 5.15.14 pypi_0 pypi
pyqt5-sip 12.13.0 pypi_0 pypi
python 3.12.4 h194c7f8_0_cpython conda-forge
readline 8.2 h8228510_1 conda-forge
requests 2.32.3 pypi_0 pypi
rich 13.7.1 pypi_0 pypi
scikit-image 0.24.0 pypi_0 pypi
scipy 1.14.0 pypi_0 pypi
setuptools 70.1.1 pyhd8ed1ab_0 conda-forge
six 1.16.0 pypi_0 pypi
tensorboard 2.16.2 pypi_0 pypi
tensorboard-data-server 0.7.2 pypi_0 pypi
tensorflow 2.16.1 pypi_0 pypi
termcolor 2.4.0 pypi_0 pypi
tifffile 2024.6.18 pypi_0 pypi
tk 8.6.13 noxft_h4845f30_101 conda-forge
tqdm 4.66.4 pypi_0 pypi
typing-extensions 4.12.2 pypi_0 pypi
tzdata 2024a h0c530f3_0 conda-forge
urllib3 2.2.2 pypi_0 pypi
werkzeug 3.0.3 pypi_0 pypi
wheel 0.43.0 pyhd8ed1ab_1 conda-forge
wrapt 1.16.0 pypi_0 pypi
xz 5.2.6 h166bdaf_0 conda-forge
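A common workaround for this class of Keras 3 legacy-HDF5 loading errors is to load the model with compile=False (standard Keras API), which skips deserializing the stored loss/metric names, and then recompile with an explicit function. This is a sketch, not an IsoNet patch; the model path below is a placeholder and the stand-in metric is hypothetical:

```python
import numpy as np

def mae(y_true, y_pred):
    """Plain NumPy stand-in for the 'mae' metric name that Keras 3
    fails to deserialize from the legacy HDF5 compile config."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))

# Hypothetical usage (compile=False is standard Keras API; the path is
# a placeholder for IsoNet's model_iterXX.h5 files):
#   from keras.models import load_model
#   model = load_model("results/model_iter01.h5", compile=False)
#   model.compile(optimizer="adam", loss=mae, metrics=[mae])
```

Downgrading to a TensorFlow 2.x release whose bundled Keras still understands the legacy metric names may also work, but I have not verified which versions IsoNet's master branch targets.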

Does IsoNet support the NVIDIA A100?

Dear Author,
First thanks a lot for this powerful software !
I have a question: does IsoNet support CUDA 11.2 and the NVIDIA A100 with tensorflow-gpu 2.7?
I installed IsoNet with conda (Python 3.9), tensorflow-gpu 2.7, and CUDA 11.2 on an NVIDIA A100, and I get this error:

(py39) [root@Isonet]$ isonet.py refine subtomo.star --gpuID 0,1,2,3 --iterations 30 --noise_start_iter 10,15,20,25 --noise_level 0.05,0.1,0.15,0.2
04-16 22:14:09, INFO
######Isonet starts refining######

04-16 22:14:27, INFO Note: detected 128 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
04-16 22:14:27, INFO Note: NumExpr detected 128 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
04-16 22:14:27, INFO NumExpr defaulting to 8 threads.
04-16 22:14:30, WARNING The results folder already exists before the 1st iteration
The old results folder will be renamed (to results~)
04-16 22:14:50, INFO Done preperation for the first iteration!
04-16 22:14:50, INFO Start Iteration1!
/data1/apps/miniconda3/envs/py39/lib/python3.9/site-packages/keras/optimizer_v2/adam.py:105: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(Adam, self).__init__(name, **kwargs)
/data1/apps/miniconda3/envs/py39/lib/python3.9/site-packages/keras/engine/functional.py:1410: CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.
layer_config = serialize_layer_fn(layer)
04-16 22:14:54, INFO Noise Level:0.0
2022-04-16 22:15:06.826516: F tensorflow/stream_executor/cuda/cuda_driver.cc:153] Failed setting context: CUDA_ERROR_NOT_INITIALIZED: initialization error

Thanks a lot!

Mrc header reading error

Hi, I have a segmentation that I would like to use as a mask for IsoNet. 3dmod can read the file, but when I go to extract subtomos I get the following error:

######Isonet starts extracting subtomograms######

04-04 16:43:59, WARNING [isonet.py:295] subtomo directory exists, the current directory will be overwritten
04-04 16:43:59, INFO [prepare.py:37] Extract from deconvolved tomogram ./deconv/21aug12a_series0031_8.00Apx.mrc
04-04 16:44:02, ERROR [isonet.py:314] Traceback (most recent call last):
File "/home/cryoetsoft/IsoNet/bin/isonet.py", line 307, in extract
extract_subtomos(d_args)
File "/home/cryoetsoft/IsoNet/preprocessing/prepare.py", line 47, in extract_subtomos
with mrcfile.open(it.rlnMaskName) as m:
File "/home/betsy/.local/lib/python3.10/site-packages/mrcfile/load_functions.py", line 139, in open
return NewMrc(name, mode=mode, permissive=permissive,
File "/home/betsy/.local/lib/python3.10/site-packages/mrcfile/mrcfile.py", line 115, in __init__
self._read(header_only)
File "/home/betsy/.local/lib/python3.10/site-packages/mrcfile/mrcfile.py", line 131, in _read
super(MrcFile, self)._read(header_only)
File "/home/betsy/.local/lib/python3.10/site-packages/mrcfile/mrcinterpreter.py", line 173, in _read
self._read_header()
File "/home/betsy/.local/lib/python3.10/site-packages/mrcfile/mrcinterpreter.py", line 211, in _read_header
raise ValueError(msg)
ValueError: Map ID string not found - not an MRC file, or file is corrupt.

I'm guessing there is something in the mrc file header that is causing this! Thanks!

Error Reading some .mrc files

Reading some .mrc files I get the following error: RuntimeWarning: Unrecognised machine stamp: 0x00 0x00 0x00 0x00.
I have solved it by adding permissive=True to the mrcfile.open(.........) calls in the IsoNet code,
e.g.
with mrcfile.open(subtomo_name, mode='r', permissive=True) as s:
....
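The per-call edits above can also be wrapped in a small helper. This is a sketch around the mrcfile API, assuming the strict open raises ValueError for corrupt headers (as in the "Map ID string not found" issue above); files that only trigger the machine-stamp warning can simply be opened with permissive=True directly:

```python
def open_mrc(path):
    """Open an MRC file, retrying in permissive mode when strict header
    validation fails. Lazy import so this sketch loads even without
    mrcfile installed."""
    import mrcfile
    try:
        return mrcfile.open(path, mode='r')
    except ValueError:
        # e.g. "Map ID string not found"; permissive mode skips the check
        return mrcfile.open(path, mode='r', permissive=True)
```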

ValueError during extraction

Hi there! Thank you for developing IsoNet - it's really exciting! I've been reprocessing some data with it, but keep getting the following ValueError with each of my data sets when I reach the first extraction step (using the GUI):

08-30 14:08:44, INFO [prepare.py:63] Extract from deconvolved tomogram ./deconv/4-PB-NLGP-S_ts10.mrc
Traceback (most recent call last):
File "/data/kate/IsoNetFolder/IsoNet/bin/isonet.py", line 287, in extract
extract_subtomos(d_args)
File "/data/kate/IsoNetFolder/IsoNet/preprocessing/prepare.py", line 79, in extract_subtomos
seeds=create_cube_seeds(orig_data, it.rlnNumberSubtomo, settings.crop_size,mask=mask_data)
File "/data/kate/IsoNetFolder/IsoNet/preprocessing/cubes.py", line 31, in create_cube_seeds
sample_inds = np.random.choice(len(valid_inds[0]), nCubesPerImg, replace=len(valid_inds[0]) < nCubesPerImg)
File "mtrand.pyx", line 902, in numpy.random.mtrand.RandomState.choice
ValueError: a must be greater than 0 unless no samples are taken

I've had a look through the cited scripts from the error message, but none of them trace back to the origin of the value error, and I'm not sure how to open a .mrc file to see what in my data is set as 0.

Do you have any advice about what the issue could be, and how I can resolve it?

Thank you!
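The ValueError means np.random.choice was asked to sample from zero valid seed positions, i.e. after the borders are excluded no voxel of the mask is positive. This typically happens when the mask is empty (all zeros) or when the tomogram is smaller than the crop size. A rough sketch of the check (the exact border handling in IsoNet/preprocessing/cubes.py may differ):

```python
import numpy as np

def count_valid_seed_voxels(mask, crop_size):
    """Count voxels that could seed a subtomogram: at least
    crop_size // 2 away from every face of the volume, and positive
    in the mask."""
    border = crop_size // 2
    inner = mask[border:-border, border:-border, border:-border]
    return int(np.count_nonzero(inner > 0))

# An all-zero mask gives 0 valid seeds, which triggers the
# np.random.choice ValueError in the traceback above:
empty = np.zeros((64, 64, 64), dtype=np.float32)
print(count_valid_seed_voxels(empty, 32))  # 0
```

Opening the mask with mrcfile and checking np.count_nonzero on its data is usually enough to confirm whether this is the cause.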

Fail on first iteration

Hi, I have followed the tutorial and have gotten to the refinement stage. In the refinement tab, I fill in the required information, leave the default values, and hit the "Refine" button, which changes to a red "Stop" button. The program starts, and I see a spike in my disk and CPU usage. After a while, the log updates to say that it has begun fitting. However, after about five minutes, the "Stop" button reverts to "Refine", and I see my CPU usage go back to baseline. I don't see any error messages. I left it alone for about an hour to see if it updated, but it did not.

01-11 19:11:53, INFO Start Iteration1!
/home/melb/clones/IsoNet/bin/isonet.py refine ./subtomo.star --gpuID 0 --result_dir ./HIVtomo/testresults --preprocessing_ncpus 10 --iterations 30 --epochs 10 --learning_rate 0.0004 --noise_level 0.05,0.1,0.15,0.2 --noise_start_iter 10,15,20,25 --noise_mode ramp --drop_out 0.3 --unet_depth 3 --convs_per_depth 3 --kernel 3,3,3 --filter_base 64
01-11 19:18:54, INFO
######Isonet starts refining######

01-11 19:18:56, INFO Done preperation for the first iteration!
01-11 19:18:56, INFO Start Iteration1!
01-11 19:21:44, INFO Noise Level:0.0
01-11 19:21:55, INFO Done preparing subtomograms!
01-11 19:21:55, INFO Start training!
01-11 19:21:55, INFO Loaded model from disk
01-11 19:21:55, INFO begin fitting

Refine | AttributeError: 'Item' object has no attribute 'rlnCropSize'

Running Isonet on Ubuntu 18.04. Went through the tutorial without any problem. Now applying to my own dataset. I received this error when trying to run Refine.

isonet.py refine 00_IsonetTrial.star  --gpuID 0  --iterations 30 --noise_start_iter 10,15,20,25 --noise_level 0.05,0.1,0.15,0.2
04-29 15:39:18, INFO     
######Isonet starts refining######

04-29 15:39:18, ERROR    Traceback (most recent call last):
  File "/programs/x86_64-linux/isonet/0.1/IsoNet/bin/refine.py", line 25, in run
    run_whole(args)
  File "/programs/x86_64-linux/isonet/0.1/IsoNet/bin/refine.py", line 47, in run_whole
    args.crop_size = md._data[0].rlnCropSize
AttributeError: 'Item' object has no attribute 'rlnCropSize'

Curious where this rlnCropSize attribute is missing from.
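rlnCropSize is one of the columns that isonet.py extract writes into subtomo.star, so a star file assembled by hand or by a different tool may simply lack it; re-running the extract step should regenerate it. A quick way to check is to look for the column in the loop header. This is generic STAR-text parsing (not IsoNet's own reader), and the example content is hypothetical:

```python
def star_has_column(star_text, column):
    """Return True if the STAR text declares the given loop column,
    e.g. '_rlnCropSize'."""
    return any(line.strip().split()[0] == column
               for line in star_text.splitlines()
               if line.strip().startswith("_"))

example = """data_
loop_
_rlnMicrographName #1
_rlnCropSize #2
tomo1.mrc 96
"""
print(star_has_column(example, "_rlnCropSize"))  # True
```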

deconv file size limit

Dear IsoNet team,

Is there a file size limit that IsoNet could perform deconv function?

When I performed IsoNet using a tomogram with a pixel size of 2.48 and dimensions 4096x4096x1200, the run gets "killed" during the deconv step.
When I tested the same tomogram cropped to 401x401x201, it worked well.
Also, the same tomogram at a higher binning (pixel size of 4.96), with dimensions 2048x2048x601, worked well.

I also tried various numbers of CPUs (i.e. ncpu=3, 10, 15, 20, 30, 50, 60), but none helped for the full tomogram with the 2.48 pixel size. Do you have a suggestion on this, please?

Thank you so much,
Joy
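A process that dies with "killed" and no Python traceback is usually the Linux out-of-memory killer rather than a file-size limit in IsoNet: deconvolution filters the whole volume in Fourier space, so it holds the full tomogram (plus working copies) in RAM. A back-of-the-envelope estimate, where the number of simultaneous float copies is a guess that depends on the FFT implementation:

```python
def deconv_mem_gb(nx, ny, nz, dtype_bytes=4, copies=4):
    """Rough peak RAM for a full-volume deconvolution: the float32
    volume times an assumed number of simultaneous copies (input,
    filter, FFT buffers, output)."""
    return nx * ny * nz * dtype_bytes * copies / 1024**3

print(deconv_mem_gb(4096, 4096, 1200, copies=1))  # 75.0 GB for the bare volume
print(deconv_mem_gb(4096, 4096, 1200))            # 300.0 GB with 4 copies
```

That is consistent with the bin-2 volume (2048x2048x601, about 9.4 GB bare) and the small crop succeeding while the full bin-1 volume fails; adding CPUs won't help, but binning or a machine with much more RAM will.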

CUDA_ERROR_NOT_INITIALIZED

My environment is here:
Ubuntu 20.04
Kernel version: 5.8.0-43-generic
Nvidia Driver version: 460.27.04
CUDA Version: 11.2.0
cuDnn version: 8.1.1
Tensorflow version: 2.6.0
[Screenshot from 2021-10-07 17-42-01 attached]

Cannot continue refinement

Hi,

I'm trying to continue a previous refinement run of IsoNet v0.2 using the following command line:

isonet.py refine Position_5_72_d3_subtomo.star -result_dir Position_5_72_d3_results --gpuID $CUDA_VISIBLE_DEVICES --continue_from Position_5_72_d3_results/refine_iter10.json

However I get the following error:

09-23 15:14:46, INFO     
######Isonet starts refining######

09-23 15:14:46, INFO     
######Isonet Continues Refining######

09-23 15:15:08, INFO     Start Iteration11!
09-23 15:15:08, INFO     Continue from previous model: Position_5_72_d3_results/model_iter10.h5 of iteration 10 and predict subtomograms                 
09-23 15:15:08, INFO     Start predicting subtomograms!
09-23 15:16:54, INFO     Done predicting subtomograms!
09-23 15:16:54, ERROR    Traceback (most recent call last):
  File "/scicore/home/engel0006/GROUP/pool-engel/soft/isonet/IsoNet/bin/refine.py", line 100, in run
    if num_iter>=args.noise_start_iter[0] and (not os.path.isdir(args.noise_dir) or len(os.listdir(args.noise_dir))< num_noise_volume ):
UnboundLocalError: local variable 'num_noise_volume' referenced before assignment

Does anyone know how to fix this, or is this a bug?

Thank you!
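This looks like a bug on the --continue_from path: num_noise_volume is only assigned on the fresh-start branch of refine.py, so a resumed run reaches the check with the variable unbound. The generic fix is to give it a default before any branch uses it; a sketch with hypothetical values (1000 is not necessarily IsoNet's actual noise-volume count):

```python
def needs_noise(num_iter, noise_start_iter, noise_dir_count,
                num_noise_volume=1000):
    """Mirror of the failing condition in refine.py, with the variable
    turned into a defaulted parameter so it can never be unbound."""
    return num_iter >= noise_start_iter[0] and noise_dir_count < num_noise_volume

print(needs_noise(11, [10, 15, 20, 25], 0))  # True: iteration 11 needs noise volumes
```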

"Create data folder error!" when the folder already exists, although the program works fine

2020-07-31:13:36:37,642 INFO [mwr3D.py:72] Done training!
2020-07-31:13:36:37,642 INFO [mwr3D.py:75] Start predicting!
2020-07-31:13:36:58,105 INFO [mwr3D.py:80] Done predicting!
2020-07-31:13:36:58,106 INFO [mwr3D.py:86] Done Iteration1!
2020-07-31:13:36:59,843 INFO [mwr3D.py:53] Start Iteration2!
2020-07-31:13:36:59,844 INFO [mwr3D.py:56] noise_factor:0
2020-07-31:13:36:59,844 ERROR [mwr3D.py:61] Create data folder error!
2020-07-31:13:37:14,778 INFO [mwr3D.py:63] Done getting cubes!
2020-07-31:13:37:24,591 INFO [train.py:125] Loaded model from disk
2020-07-31:13:37:24,632 INFO [train.py:141] begin fitting
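The "Create data folder error!" fires when the data folder left over from the previous iteration is recreated; since training continues fine, the error is cosmetic. The usual idiom makes the creation idempotent — a sketch, not the exact code in mwr3D.py:

```python
import os
import tempfile

def ensure_data_dir(path):
    """Create the per-iteration data folder; no error if it already exists."""
    os.makedirs(path, exist_ok=True)

# Demonstration: calling it twice no longer raises.
with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "data")
    ensure_data_dir(target)
    ensure_data_dir(target)
    print(os.path.isdir(target))  # True
```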

CUDA_ERROR_NOT_INITIALIZED: initialization error

Hi,
I am trying to run IsoNet on a dataset. It is installed as described on the GitHub page. I am using TensorFlow version 2.5.0 with an Nvidia RTX 2080 8GB GPU running CUDA 11.4. I made the deconvolved tomograms, after which I made the masks and extracted 50 subtomos from every tomogram. After that, I tried to train the NN and ran into the following message:

07-28 13:27:54, INFO
######Isonet starts refining######

07-28 13:27:55, WARNING The results folder already exists before the 1st iteration
The old results folder will be renamed (to results~)
07-28 13:27:58, INFO Done preperation for the first iteration!
07-28 13:27:58, INFO Start Iteration1!
/home/user/software/anaconda3/envs/isonet/lib/python3.8/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:374: UserWarning: The lr argument is deprecated, use learning_rate instead.
warnings.warn(
/home/user/software/anaconda3/envs/isonet/lib/python3.8/site-packages/tensorflow/python/keras/utils/generic_utils.py:494: CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.
warnings.warn('Custom mask layers require a config and must override '
07-28 13:27:59, INFO Noise Level:0.0
2021-07-28 13:28:12.359664: F tensorflow/stream_executor/cuda/cuda_driver.cc:210] Failed setting context: CUDA_ERROR_NOT_INITIALIZED: initialization error
2021-07-28 13:28:12.552163: F tensorflow/stream_executor/cuda/cuda_driver.cc:210] Failed setting context: CUDA_ERROR_NOT_INITIALIZED: initialization error
2021-07-28 13:28:13.096353: F tensorflow/stream_executor/cuda/cuda_driver.cc:210] Failed setting context: CUDA_ERROR_NOT_INITIALIZED: initialization error
2021-07-28 13:28:13.298862: F tensorflow/stream_executor/cuda/cuda_driver.cc:210] Failed setting context: CUDA_ERROR_NOT_INITIALIZED: initialization error
2021-07-28 13:28:13.750324: F tensorflow/stream_executor/cuda/cuda_driver.cc:210] Failed setting context: CUDA_ERROR_NOT_INITIALIZED: initialization error
2021-07-28 13:28:13.800403: F tensorflow/stream_executor/cuda/cuda_driver.cc:210] Failed setting context: CUDA_ERROR_NOT_INITIALIZED: initialization error
2021-07-28 13:28:15.548473: F tensorflow/stream_executor/cuda/cuda_driver.cc:210] Failed setting context: CUDA_ERROR_NOT_INITIALIZED: initialization error
2021-07-28 13:28:15.595047: F tensorflow/stream_executor/cuda/cuda_driver.cc:210] Failed setting context: CUDA_ERROR_NOT_INITIALIZED: initialization error
^CProcess ForkPoolWorker-19:
Process ForkPoolWorker-23:
Process ForkPoolWorker-22:
Process ForkPoolWorker-17:
Process ForkPoolWorker-18:
Can you suggest what the cause of the problem is?
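CUDA_ERROR_NOT_INITIALIZED inside ForkPoolWorker processes is the classic symptom of fork()ing after the parent process has initialized CUDA: a CUDA context does not survive fork, so every forked worker that touches the GPU dies. A generic workaround sketch (not an IsoNet patch — it would have to be applied wherever IsoNet creates its pools) is the 'spawn' start method, so each worker is a fresh interpreter:

```python
import multiprocessing as mp

def make_pool(nproc):
    """Worker pool whose processes are spawned fresh instead of forked
    from a parent that may already hold a CUDA context."""
    ctx = mp.get_context("spawn")
    return ctx.Pool(nproc)
```

Alternatively, arranging for TensorFlow/CUDA to be initialized only after the worker pool exists avoids the problem without changing the start method.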

Error, probably caused by negative sqrt, should add eps

[lytao@gabar t3]$ python3 ~/software/mwr/bin/mwr_cli.py make_mask ../t2/tomogram_gaussian_wedge-1.mrc tomogram_mask.mrc --percentile=20 --side=8
Gaussian_filter
maximum_filter
/home/lytao/software/mwr/util/filter.py:86: RuntimeWarning: invalid value encountered in sqrt
out = np.sqrt((s2 - s**2 / ns) / ns)
/home/lytao/software/mwr/util/filter.py:87: RuntimeWarning: invalid value encountered in greater
out = out>np.std(tomo)*threshold
mask generated
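The warning comes from floating-point cancellation in the local-variance expression quoted above: (s2 - s**2 / ns) can dip slightly below zero where the true variance is near zero, and sqrt of a negative yields NaN. As the title suggests, clamping at zero (optionally plus a small eps, whose value here is an assumption) fixes it:

```python
import numpy as np

def local_std(s, s2, ns, eps=1e-8):
    """Standard deviation from running sums s = sum(x), s2 = sum(x**2)
    over ns samples; the variance is clamped at zero before the sqrt so
    cancellation can never produce NaN."""
    var = (s2 - s**2 / ns) / ns
    return np.sqrt(np.maximum(var, 0.0) + eps)
```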

hipassnyquist

Dear team,

Could you let me know how to disable the "hipassnyquist" option entirely?
I tried to set it to "0" or "false", but then it gave division-by-zero errors.
I don't want any blurring effect on the tomogram when the deconvolve job is done.
I tried using a really small value such as 0.00000000000000001, but that didn't really show a big difference from the default value of 0.02.

Thank you,
Joy
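The division-by-zero comes from the deconvolution filter dividing spatial frequency by the hipassnyquist parameter, so 0 can never work as an input value. With a very small value the term is already ~1 everywhere except the lowest frequencies, so if the result still looks blurred, that blurring likely comes from the SNR/Wiener part of the filter rather than the high-pass. A heavily hedged sketch of the general shape of such a term (modelled on a tom_deconv-style high-pass, not IsoNet's verbatim code); truly disabling it means a code change that returns 1.0 everywhere:

```python
import numpy as np

def highpass_term(freq, hipassnyquist):
    """Low-frequency attenuation factor. 'freq' is spatial frequency as
    a fraction of Nyquist; frequencies above 'hipassnyquist' pass at
    1.0. Sketch only: IsoNet's exact deconvolution formula may differ."""
    if hipassnyquist <= 0:
        return np.ones_like(freq)  # high-pass disabled cleanly, no division
    ramp = np.minimum(freq / hipassnyquist, 1.0) * np.pi
    return (1.0 - np.cos(ramp)) / 2.0
```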
