
dynamic's People

Contributors

bryanhe, chahalinder0007, douyang


dynamic's Issues

ValueError with num_samples=0

I am getting a ValueError for num_samples on every execution. Although I have downloaded all the AVI files and both CSV files into a4c-video-dir, I still get this error.

(my_syft_env) root@XXXXXX:/workspace/xxx/yyy/dynamic# cmd='import echonet; echonet.utils.segmentation.run(modelname="deeplabv3_resnet50",
save_segmentation=True,
pretrained=False)'
(my_syft_env) root@XXXXXX:/workspace/xxx/yyy/dynamic# python3 -c "${cmd}"
namespace(DATA_DIR='/workspace/xxx/yyy/dynamic/a4c-video-dir/', FILENAME='echonet.cfg')
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/workspace/xxx/yyy/dynamic/echonet/utils/segmentation.py", line 101, in run
mean, std = echonet.utils.get_mean_and_std(echonet.datasets.Echo(split="train"))
File "/workspace/xxx/yyy/dynamic/echonet/utils/__init__.py", line 103, in get_mean_and_std
dataloader = torch.utils.data.DataLoader(
File "/root/miniconda3/envs/my_syft_env/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 262, in __init__
sampler = RandomSampler(dataset, generator=generator)  # type: ignore
File "/root/miniconda3/envs/my_syft_env/lib/python3.8/site-packages/torch/utils/data/sampler.py", line 103, in __init__
raise ValueError("num_samples should be a positive integer "
ValueError: num_samples should be a positive integer value, but got num_samples=0

Any leads would be helpful.
Thanks in advance.
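A minimal diagnostic sketch, assuming the standard EchoNet layout (FileList.csv, VolumeTracings.csv, and a Videos/ subfolder under DATA_DIR) and that the config namespace is exposed as echonet.config, as the printout above suggests: this error means the train split resolved to zero files, usually because DATA_DIR in echonet.cfg points at the wrong place or the AVIs are not inside a Videos/ subfolder.

import os
import echonet

# Confirm the configured data directory and the dataset size before training.
print(echonet.config.DATA_DIR)              # parsed from echonet.cfg at import
print(os.listdir(echonet.config.DATA_DIR))  # expect FileList.csv, VolumeTracings.csv, Videos/
ds = echonet.datasets.Echo(split="train")
print(len(ds))                              # 0 here reproduces num_samples=0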

dataset dicom image and mask

Thank you for sharing the project and video files. I want to retrain the segmentation model, but I can't find any video image labels.
Can you share them?

Thank you!

chenjunqiang

Possible confidence interval issue

return func(a, b), bootstraps[round(0.05 * len(bootstraps))], bootstraps[round(0.95 * len(bootstraps))]

Here you are calculating a 90% confidence interval with bootstrapping. At least, that is how I understand it, since 0.95 - 0.05 = 0.9. Tell me if I am wrong.

What makes me confused is that in your article, you state having calculated 95% confidence intervals for the dice score: "the Dice similarity coefficient for the end-systolic tracing was 0.903 (95% confidence interval of 0.901–0.906)".

Could you clarify this for me and the whole research community?
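For reference, a generic percentile-bootstrap sketch (not the repository's code): with alpha = 0.05 the interval uses the 2.5th and 97.5th percentiles, i.e. a 95% CI, whereas the 0.05/0.95 cut points quoted above correspond to alpha = 0.10.

import numpy as np

def bootstrap_ci(a, b, func, n_boot=10000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample paired observations with replacement.
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(a), len(a))
        stats.append(func(a[idx], b[idx]))
    stats = np.sort(stats)
    return (func(a, b),
            stats[int(n_boot * alpha / 2)],        # 2.5th percentile, not 5th
            stats[int(n_boot * (1 - alpha / 2))])  # 97.5th percentile, not 95th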

test_predictions.csv is not being generated!

After running "echonet video", log.csv and best.pt are generated as stated, but the test_predictions.csv file is not generated no matter how many times I run it. (By the way, "echonet segmentation --save_video" worked without any problem.)

Am I missing something when running "echonet video"?

echonet for Windows

IMHO it would be very useful to have some documentation on how to install and use echonet on a Windows platform. I would be glad to help the echonet programmer(s) with this :-)

Object 'highVar' not found

Issue with: scripts/beat_analysis.R

There were a couple of issues I faced while trying to run this file. I was able to resolve all of them except this one:

The issue is at line 72, with this command: beatByBeatAnalysis <- beatByBeatAnalysis[beatByBeatAnalysis$Filename %in% highVar,]

Could you help? What am I missing here? The word 'highVar' appears only once in the entire repo.

the Echo dataset cannot be used in a dataloader

Hello, I am referring to your code, but I keep getting this error: AttributeError: Can't pickle local object 'run.<locals>.collate_fn'. My system is Windows 10. I noticed you have a function involved in this kind of problem; could you please tell me how to use and modify it? I don't quite understand. Thank you!
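Two common workarounds on Windows, offered as a sketch under the assumption that spawn-based multiprocessing is failing to pickle a collate function defined inside run() (this is not a patch to the repository):

import torch
from torch.utils.data.dataloader import default_collate
import echonet

dataset = echonet.datasets.Echo(split="train")  # assumed stand-in dataset

# Option 1: skip worker processes entirely (slower, but nothing is pickled).
loader = torch.utils.data.DataLoader(dataset, batch_size=8, num_workers=0)

# Option 2: define the collate function at module level so it is picklable,
# mirroring whatever run.<locals>.collate_fn did (hypothetical body here).
def collate_fn(batch):
    return default_collate(batch)

loader = torch.utils.data.DataLoader(dataset, batch_size=8, num_workers=4,
                                     collate_fn=collate_fn)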

When I run video.py, I receive this error

Sorry to trouble you. I wonder, did you run video.py? I ran 45 epochs successfully but then received this error:
RuntimeError: CUDA out of memory. Tried to allocate 5.38 GiB (GPU 0; 23.70 GiB total capacity; 11.14 GiB already allocated; 2.90 GiB free; 14.29 GiB reserved in total by PyTorch)
My command: modelname="r2plus1d_18", frames=32, period=2, pretrained=True, batch_size=8, run_test=True
I am running on an NVIDIA RTX 3090 (24 GB) with PyTorch 1.7 and changed the batch_size from 8 to 2, but it did not help. Can you give me some advice, and what is your environment? Thank you very much.

__init__() got an unexpected keyword argument 'data_dir'

Please help with the following error. I have cloned the repository as-is into my Google Colab notebook: __init__() got an unexpected keyword argument 'data_dir'.
I am currently looking at segmentation only.

It says this for all the arguments.

Thank you so much for the help in advance!
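One thing worth checking, offered as an assumption rather than a confirmed fix: in this version of the package the data directory is not a constructor keyword at all; it is read from an echonet.cfg file at import time (the config printout in the num_samples issue above shows DATA_DIR and FILENAME='echonet.cfg'). A sketch of the file, using the simple key = value format that printout suggests, with a hypothetical Colab path:

DATA_DIR = /content/dynamic/a4c-video-dir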

The generation of segmentation maps

Thank you very much for releasing the code and the dataset.
When I run your code to generate the segmentation maps, it only produces 10024 videos with segmentation maps, while the dataset has 10030 videos in total. Why is there a difference?

fullsize dataset

Hi,
Thanks for the dataset you have already shared with the public to reproduce the results. Do you have any plans to share the full-size dataset without any down-sampling or post-processing?

Error in Initialization Notebook

Thank you for the wonderful repo. I am trying to do an external test with the EchoNet-Dynamic dataset.

RuntimeError: Caught RuntimeError in DataLoader worker process 1.
 0%|                                                                                            | 0/2 [00:00<?, ?it/s]
loading weights from  D:\stanford_AIMI\weights\r2plus1d_18_32_2_pretrained
cuda is not available, cpu weights
EXTERNAL_TEST ['0X1AADD51FAA94E4E.avi', '0X1AB987597AF39E3B.avi', '0X1ABAD3C70E6D0F27.avi', '0X1ABE578AF99E8F3E.avi', '0X1ACB73BE8C1F2C0C.avi', '0X1ACC87A912A57EDA.avi', '0X1AD23DC4055A4B6A.avi', '0X1AD3CDEC841DA50.avi', '0X1ADDEA184822F38E.avi', '0X1ADEAFA4D59610C8.avi', 'avi']
 50%|██████████████████████████████████████████                                          | 1/2 [00:02<00:02,  2.53s/it]
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-5-e618c2550cd5> in <module>
     37 print(ds.split, ds.fnames)
     38 
---> 39 mean, std = echonet.utils.get_mean_and_std(ds)
     40 
     41 kwargs = {"target_type": "EF",
Original Traceback (most recent call last):
  File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data\_utils\worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 83, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 83, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 63, in default_collate
    return default_collate([torch.as_tensor(b) for b in batch])
  File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 55, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [3, 16, 112, 112] at entry 0 and [3, 16, 0, 0] at entry 2

Visualising the Loss curves?

Hey,
I hope you are all doing well and having a great time. I see that log.csv stores the test/train loss data. Would you be comfortable sharing the code snippet you use for reading these files and visualizing the loss curves?
Regards,
Zaigham
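Not the authors' snippet, but a generic sketch: the exact column layout of log.csv is an assumption here (epoch, phase, loss in the first three columns), so inspect the first rows and adjust the names before plotting.

import pandas as pd
import matplotlib.pyplot as plt

log = pd.read_csv("output/segmentation/log.csv", header=None)
print(log.head())  # confirm which columns actually hold epoch/phase/loss

# Assumed layout below; rename to whatever the file really contains.
log.columns = ["epoch", "phase", "loss"] + list(log.columns[3:])
for phase, grp in log.groupby("phase"):
    plt.plot(grp["epoch"], grp["loss"], label=str(phase))
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()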

Misleading comment

In echo.py, function __getitem__, line 247, there is a comment:

# Select random clips
        video = tuple(video[:, s + self.period * np.arange(length), :, :] for s in start)

This is not correct: whether the clips are random has already been determined in an earlier part of the code (lines 203 to 208).

get the opposite experimental data

I re-ran the code and found that the accuracy for the diastolic tracing on the test set was 90.1 and for the systolic tracing 92.7. Is it written the other way around in the paper? Or did I make a mistake?

ValueError: num_samples should be a positive integer value, but got num_samples=0

Hey,
I downloaded the git repo and downloaded the dataset into a subfolder inside it. The installation of echonet went fine, but I get this error no matter which of the given functions I try to run.

I have tried renaming the downloaded folder with the .csv files and the videos, and have also edited the echonet.cfg file. I don't really understand what is going on.

Calculate EF from Masks or Keypoints

Hello, I am trying to predict EF from segmentation masks or keypoints (ED & ES). I am wondering if there is available code for that.

Thank you!

Trace fails sanity checks during export to lite for Android: torch.jit.trace()

Code:

import cv2
import math
import torch
import torchvision
import numpy as np
import matplotlib.pyplot as plt
from torchsummary import summary
from torch.utils.mobile_optimizer import optimize_for_mobile

device=None
seed=0
lr=1e-4
weight_decay=1e-4
lr_step_period=15
model_name="r2plus1d_18"
weights = "file_path.pt"

np.random.seed(seed)
torch.manual_seed(seed)

if device is None:
    print("in if cond")
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torchvision.models.video.__dict__[model_name](pretrained=True)

model.fc = torch.nn.Linear(model.fc.in_features, 1)
model.fc.bias.data[0] = 55.6
if device.type == "cuda":
    model = torch.nn.DataParallel(model)
model.to(device)

if weights is not None:
    checkpoint = torch.load(weights)
    model.load_state_dict(checkpoint['state_dict'])

model.eval()

summary(model, (3, 1, 112, 112))

example = torch.rand((1, 3, 1, 112, 112))
traced_script_module = torch.jit.trace(model, example)
traced_script_module_optimized = optimize_for_mobile(traced_script_module)
traced_script_module_optimized._save_for_lite_interpreter("path_to_save.ptl")

The sanity-check problem can be avoided by specifying check_trace=False,
but in that case _save_for_lite_interpreter fails:

RuntimeError:
Could not export Python function call 'Scatter'. Remove calls to Python functions before export.

(I am following the PyTorch documentation to export the EF prediction model for Android.)

Is this an issue with the input shape? What should the exact input shape for the first layer be?
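The 'Scatter' named in the export error is DataParallel's scatter step, so one plausible workaround, offered as a sketch rather than a verified fix, is to trace the unwrapped module on CPU:

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Unwrap DataParallel before tracing; its Scatter call cannot be exported.
plain = model.module if isinstance(model, torch.nn.DataParallel) else model
plain = plain.to("cpu").eval()

# r2plus1d_18 expects (N, C, T, H, W); the repo trains on 32-frame clips,
# so T=32 is a more representative example input than T=1.
example = torch.rand(1, 3, 32, 112, 112)
traced = torch.jit.trace(plain, example)
traced_opt = optimize_for_mobile(traced)
traced_opt._save_for_lite_interpreter("path_to_save.ptl")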

How to use sub-sampled clips around the lowest segmentation Areas for EF calculation?

There are some issues with the InitializationNotebook.ipynb:

  1. Some EchoNet functions aren't being called properly. I had to modify the code to make them work.

  2. The segmentation model isn't being initialized and used for segmentation. I was able to mitigate this issue as well by looking at this pull request.

  3. Most importantly, I don't think the code uses the segmentation areas to generate subsampled clips and calculate the EF on those. Am I right in this judgment? Could you provide guidelines or Python code for generating the subsampled clips (see the sketch after this list)?
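For what it's worth, a sketch of the idea (not the repository's exact pipeline), reusing the peak-finding from the notebook: locate end-systole from the per-frame LV area, then score fixed-length clips around each detected beat and average. Here size (a 1-D numpy array of per-frame segmentation areas), video (a (C, F, H, W) tensor), and model (the EF regressor) are all assumed inputs.

import numpy as np
import scipy.signal

frames, period = 32, 2  # clip length and frame sampling used by the repo

# Trim range and prominence mirror the notebook's peak-finding snippet.
trim_min = sorted(size)[round(len(size) ** 0.05)]
trim_max = sorted(size)[round(len(size) ** 0.95)]
peaks = scipy.signal.find_peaks(-size, distance=20,
                                prominence=0.50 * (trim_max - trim_min))[0]

preds = []
for p in peaks:
    # Center a 64-frame window (32 frames, period 2) on each systolic peak,
    # clamped to the video bounds; very short videos yield shorter clips.
    start = max(0, min(p - frames // 2 * period, video.shape[1] - frames * period))
    clip = video[:, start:start + frames * period:period, :, :]
    preds.append(model(clip.unsqueeze(0)).item())
ef = np.mean(preds)  # beat-wise predictions averaged into one EF estimate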

scripts/beat_by_beat_analysis.R: Permission denied

Hi, I'm trying to run the code of your project and check the results for better understanding. I've run the first two parts, segmentation of the left ventricle and ejection fraction computation, but I'm unable to run the third part, the beat-to-beat cardiac function assessment. Your project says to run scripts/beat_analysis.R, but no file exists with that name. There is a file named scripts/beat_by_beat_analysis.R, but when I try to run it, it says permission denied. Kindly guide me on how to solve this.


Segmentation for one given video

Hello,

When training the segmentation model, it only outputs and saves segmentation videos for the test set. I am wondering if it is possible, given the trained model, to produce and save segmentation videos for the train and validation sets as well.

Thank you,

Output directory

Hello. I'm trying to use EchoNet, but I have computational problems (each epoch needs more than 5 hours), so is it possible to have the output directory? That way, I could use the trained network for transfer learning.
Thank you!

Ventricle Segmentation Exits With Error and Doesn't Generate Videos

First of all, thanks for this amazing software. I'm having some trouble running your code.

I ran echonet segmentation --save_video and waited for the epochs to run. In the middle of the third epoch, the program exits with the following error:

Traceback (most recent call last):
  File "C:\Users\akash\AppData\Local\Programs\Python\Python39\Scripts\echonet-script.py", line 33, in <module>
    sys.exit(load_entry_point('echonet==1.0.0', 'console_scripts', 'echonet'))
  File "C:\Users\akash\AppData\Local\Programs\Python\Python39\lib\site-packages\click\core.py", line 1137, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\akash\AppData\Local\Programs\Python\Python39\lib\site-packages\click\core.py", line 1862, in main
    rv = self.invoke(ctx)
  File "C:\Users\akash\AppData\Local\Programs\Python\Python39\lib\site-packages\click\core.py", line 1668, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "C:\Users\akash\AppData\Local\Programs\Python\Python39\lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\akash\AppData\Local\Programs\Python\Python39\lib\site-packages\click\core.py", line 763, in invoke
    return _callback(*args, **kwargs)
  File "C:\Users\akash\AppData\Local\Programs\Python\Python39\lib\site-packages\echonet-1.0.0-py3.9.egg\echonet\utils\segmentation.py", line 175, in run
  File "C:\Users\akash\AppData\Local\Programs\Python\Python39\lib\site-packages\echonet-1.0.0-py3.9.egg\echonet\utils\segmentation.py", line 432, in run_epoch
  File "C:\Users\akash\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\_tensor.py", line 255, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "C:\Users\akash\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\autograd\__init__.py", line 147, in backward
    Variable._execution_engine.run_backward
RuntimeError: could not create a primitive

The error is similar to the one presented here; however, I'm not able to extract any solution from that thread. Has anyone run into this error with echonet who can give some advice?

I'm running 64-bit Python on Windows 10 with 16 GB RAM.

ValueError: num_samples should be a positive integer value, but got num_samples=0

Hello. I ran the code on Google Colab for a dry run and it worked just fine. I am now running it on a GPU-enabled server but getting this error: "ValueError: num_samples should be a positive integer value, but got num_samples=0".

I have ensured that the Videos/ folder and the 2 CSVs are in the same folder from which I run the Python script.
I suspect that BATCH_SIZE and NUM_WORKERS could be the reason for this. If so, what should I change them to, increase or decrease?

I have attached screenshots and copied the content of the error (output.stdout) and the code in the Python script I run.

Please help, and thanks in advance!

THE ERROR:

Traceback (most recent call last):
File "echoDynamic.py", line 18, in <module>
run_test=True)
File "/home/ashivdeo/Echonet/Echonet/echonet/utils/segmentation.py", line 100, in run
mean, std = echonet.utils.get_mean_and_std(echonet.datasets.Echo(split="train"))
File "/home/ashivdeo/Echonet/Echonet/echonet/utils/__init__.py", line 103, in get_mean_and_std
dataset, batch_size=batch_size, num_workers=num_workers, shuffle=True)
File "/home/ashivdeo/Echonet/Echonet/test/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 270, in __init__
sampler = RandomSampler(dataset, generator=generator)  # type: ignore[arg-type]
File "/home/ashivdeo/Echonet/Echonet/test/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 103, in __init__
"value, but got num_samples={}".format(self.num_samples))
ValueError: num_samples should be a positive integer value, but got num_samples=0

THE CODE:

import sys
sys.path.append("/home/ashivdeo/Echonet/Echonet/")

import echonet

echonet.utils.segmentation.run(
    num_epochs=50,
    modelname="deeplabv3_resnet50",
    pretrained=False,
    output=None,
    device=None,
    n_train_patients=None,
    num_workers=0,
    batch_size=1,
    seed=0,
    lr_step_period=None,
    save_segmentation=True,
    block_size=2,
    run_test=True)


Echo.py: erroneous calculation of "missing" files?

echo.py line 112: missing = set(self.fnames) - set(os.listdir(self.root / "Videos"))

The set(self.fnames) gives the set of filenames without their extensions.
Subtracting set(os.listdir(self.root / "Videos")) (which does include extensions) leads to all filenames being considered "missing".

This then raises a FileNotFoundError at line 117.

Wondering if anyone else has encountered this...
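A sketch of a normalization that would sidestep the mismatch described above (root and fnames stand in for the dataset's attributes; this is not the repository's code):

import os
from pathlib import Path

# If self.fnames holds bare stems while os.listdir() returns names with
# ".avi", the raw set difference flags every file. Comparing stems on
# both sides removes the extension mismatch.
on_disk = {Path(f).stem for f in os.listdir(Path(root) / "Videos")}
missing = {Path(f).stem for f in fnames} - on_disk
if missing:
    raise FileNotFoundError(f"{len(missing)} videos not found, e.g. {sorted(missing)[:3]}")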

Test in initialization (TypeError: enabled must be a bool (got str))

0%| | 0/4 [00:00<?, ?it/s]
loading weights from D:\stanford_AIMI\weights\r2plus1d_18_32_2_pretrained
cuda is not available, cpu weights
EXTERNAL_TEST ['0X1A030EFDD45062FA.avi', '0X1A05DFFFCAFB253B.avi', '0X1A09BE7969DA1508.avi', '0X1A0A263B22CCD966.avi', '0X1A193A8138F4DD0F.avi', '0X1A296F5FCD5A0ED8.avi', '0X1A2A76BDB5B98BED.avi', '0X1A2C60147AF9FDAE.avi', '0X1A2E9496910EFF5B.avi', '0X1A349D84388BD74B.avi', '0X1A36AE0874972BBE.avi', '0X1A3D565B371DC573.avi', '0X1A3E7BF1DFB132FB.avi', '0X1A481BE9AE4F2DCD.avi', '0X1A494FC3B214947B.avi', '0X1A58B506ED05C1D4.avi', '0X1A58C9DFE12C7953.avi', '0X1A5FAE3F9D37794E.avi', '0X1A62D321A1A1821B.avi', '0X1A6ACFE7B286DAFC.avi', '0X1A75CDAE16981DBC.avi', '0X1A76A1A8448B456.avi', '0X1A8D85542DBE8204.avi', '0X1A8F20B8BF0B4B45.avi', '0X1A970F020826E7FA.avi', '0X1A9D7251E9464D49.avi']
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:03<00:00, 1.12it/s]

TypeError                                 Traceback (most recent call last)
in <module>
     39 ds.crops="all"
     40 test_dataloader = torch.utils.data.DataLoader(ds, batch_size = 1, num_workers = 5, shuffle = True, pin_memory=(device.type == "cuda"))
---> 41 loss, yhat, y = echonet.utils.video.run_epoch(model, test_dataloader, "test", None, device)#, save_all=True)#, blocks=25)
     42
     43 with open(output, "w") as g:

~\dynamic-master\echonet\utils\video.py in run_epoch(model, dataloader, train, optim, device, save_all, block_size)
    312     y = []
    313
--> 314     with torch.set_grad_enabled(train):
    315         with tqdm.tqdm(total=len(dataloader)) as pbar:
    316             for (X, outcome) in dataloader:

~\anaconda3\lib\site-packages\torch\autograd\grad_mode.py in __init__(self, mode)
    200     def __init__(self, mode: bool) -> None:
    201         self.prev = torch.is_grad_enabled()
--> 202         torch._C._set_grad_enabled(mode)
    203
    204     def __enter__(self) -> None:

TypeError: enabled must be a bool (got str)
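The traceback shows the string "test" being passed as run_epoch's third positional argument, which (per the signature printed above) is the boolean train flag that eventually reaches torch.set_grad_enabled. A sketch of the corrected call, keeping the notebook's other arguments:

# Pass a bool for the train flag; the string "test" is what reaches
# torch.set_grad_enabled and raises the TypeError.
loss, yhat, y = echonet.utils.video.run_epoch(model, test_dataloader, False, None, device)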

InitializationNotebook.ipynb

Inside the script, the command echonet.utils.get_mean_and_std(ds, num_workers=2) gives me an error; this is what it tells me.
Note: I don't have CUDA, only CPU.

runfile('C:/Users/Alberto/Sin título5.py', wdir='C:/Users/Alberto')
The weights are at D:\Echonet\Weights
Segmentation Weights already present
EF Weights already present
loading weights from D:\Echonet\Weights\r2plus1d_18_32_2_pretrained
cuda is not available, cpu weights
EXTERNAL_TEST ['0X101026B90DAE7E95.avi']
0%| | 0/1 [00:05<?, ?it/s]
Traceback (most recent call last):

File "C:\Users\Alberto\AppData\Roaming\Python\Python39\site-packages\torch\utils\data\dataloader.py", line 986, in _try_get_data
data = self._data_queue.get(timeout=timeout)

File "C:\Program Files\Python39\lib\multiprocessing\queues.py", line 114, in get
raise Empty

Empty

The above exception was the direct cause of the following exception:

Traceback (most recent call last):

File "C:\Users\Alberto\AppData\Local\Temp\ipykernel_8536\825686444.py", line 1, in <cell line: 1>
runfile('C:/Users/Alberto/Sin título5.py', wdir='C:/Users/Alberto')

File "C:\Users\Alberto\AppData\Roaming\Python\Python39\site-packages\debugpy_vendored\pydevd_pydev_bundle\pydev_umd.py", line 175, in runfile
execfile(filename, namespace)

File "C:\Users\Alberto\AppData\Roaming\Python\Python39\site-packages\debugpy_vendored\pydevd_pydev_bundle_pydev_execfile.py", line 25, in execfile
exec(compile(contents + "\n", file, 'exec'), glob, loc)

File "C:/Users/Alberto/Sin título5.py", line 97, in
mean, std = echonet.utils.get_mean_and_std(ds, num_workers=2)

File "c:\users\alberto\src\echonet\echonet\utils_init_.py", line 110, in get_mean_and_std
for (x, *_) in tqdm.tqdm(dataloader):

File "C:\Users\Alberto\AppData\Roaming\Python\Python39\site-packages\tqdm\std.py", line 1178, in iter
for obj in iterable:

File "C:\Users\Alberto\AppData\Roaming\Python\Python39\site-packages\torch\utils\data\dataloader.py", line 517, in next
data = self._next_data()

File "C:\Users\Alberto\AppData\Roaming\Python\Python39\site-packages\torch\utils\data\dataloader.py", line 1182, in _next_data
idx, data = self._get_data()

File "C:\Users\Alberto\AppData\Roaming\Python\Python39\site-packages\torch\utils\data\dataloader.py", line 1148, in _get_data
success, data = self._try_get_data()

File "C:\Users\Alberto\AppData\Roaming\Python\Python39\site-packages\torch\utils\data\dataloader.py", line 999, in _try_get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e

RuntimeError: DataLoader worker (pid(s) 6784, 1192) exited unexpectedly
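A common workaround, offered as a sketch rather than a confirmed fix: DataLoader worker processes can die under spawn-based multiprocessing in some Windows IDE setups, and get_mean_and_std already accepts num_workers (it is passed as 2 in the call above), so disabling workers trades speed for stability.

# Run the statistics pass in the main process; no worker processes to crash.
mean, std = echonet.utils.get_mean_and_std(ds, num_workers=0)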

OOM error when running EF prediction model on "all clips"

Similar to this closed issue, when I run echonet video --run_test --batch_size 12, I am able to complete training just fine on four 1080 Ti GPUs. However, I get a memory error whenever I reach the part of the script that runs inference on the validation set using "all clips" (line 234).

I don't believe it's a GPU memory (VRAM) issue, because no combination of adjusting batch size, block size, num_workers, etc. alleviates the problem. Plus, my error message is either "Bus Error" or "RuntimeError: DataLoader worker (pid xxx) is killed by signal: Killed" (not CUDA OOM). I've also observed that with 32 GB of shared CPU memory I get through 12% of iterations before the error, and with 50 GB I get to 20%, which leads me to believe I don't have enough CPU memory.

This begs the question: if 50 GB of memory gets me 20% of the way, do I really need 250 GB of CPU memory to accommodate this inference step? Any advice on how to remedy this?

The segmentation code is not working; there is a problem with tensor dimensions

The InitializationNotebook is not working properly. As segmentation is the core of this repository, it would be great if the developers could fix the issue and update the code.

The error below has been reported many times:

        RuntimeError                              Traceback (most recent call last)

~\AppData\Local\Temp\2\ipykernel_6640\517464012.py in <cell line: 19>()
32 x = x.to(device)
33 #print('x shape', x.shape)
---> 34 y = np.concatenate([model(x[i:(i + block), :, :, :])["out"].detach().cpu().numpy() for i in range(0, x.shape[0], block)]).astype(np.float16)
35 print(y.shape)
36 start = 0

RuntimeError: Given groups=1, weight of size [45, 3, 1, 7, 7], expected input[1, 104, 3, 112, 112] to have 3 channels, but got 104 channels instead
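One observation, offered as a guess rather than a confirmed fix: the weight in the error message is 5-D ([45, 3, 1, 7, 7], the stem of a 3-D video model expecting (N, 3, T, H, W)), while the input arrives as (1, 104, 3, 112, 112), i.e. with frames ahead of channels. If the right model is loaded, a permute may be all that is needed; if the 2-D segmentation model was intended (the ["out"] indexing suggests so), then the wrong weights are loaded and no reshape will help.

# Diagnostic sketch only: check the layout, then move channels ahead of frames.
print(x.shape)                 # e.g. torch.Size([1, 104, 3, 112, 112])
x = x.permute(0, 2, 1, 3, 4)   # (N, T, C, H, W) -> (N, C, T, H, W)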

Bad Segmentation Masks on Some Data

Hi @douyang, thank you so much for maintaining this repository.

I trained a model and then ran prediction on the test dataset. However, I noticed that some samples have bad segmentation masks. May I ask you to confirm this issue? Do you also get the same bad segmentation masks on the following videos? Thank you again!

508efe58925d.avi (0X1CF4B07994B62DBB)

c4a056b2af8a.avi (0X43DE853BD6E0C849)

edd3b73c00dd.avi (0X53BD50EB0C43D30D)

Error trying to convert DICOM to AVI

I'm trying to convert a DICOM clip to an AVI file, but it doesn't work.
When debugging line 14 of the cell beginning with
def makeVideo(fileToProcess, destinationFolder):
I get the following error message:

Exception has occurred: IndexError
index 0 is out of bounds for axis 0 with size 0
File "D:\pythonprojects\echonet\dynamic\scripts\convertDICOM.py"
    yCrop = np.where(mean<1)[0][0]

The testarray value begins with [0, 127, 127], and
mean is array([84.908.......

Are there special requirements on the input DICOM clips? This is a clip of 1024 columns, 768 rows, and about 130 frames.
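A defensive sketch around the crop step (an assumption about the cause, not the script's code): the conversion assumes the first frame has at least one fully dark row (per-row mean < 1) above the ultrasound sector, and clips without such a band hit the IndexError.

import numpy as np

frame0 = testarray[0]                  # testarray: (frames, H, W, C) pixel array
row_means = frame0.mean(axis=(1, 2))   # per-row mean over width and channels
dark_rows = np.where(row_means < 1)[0]
yCrop = dark_rows[0] if dark_rows.size else 0  # fall back to no vertical crop
testarray = testarray[:, yCrop:, :, :]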

External test

I am trying to run the Initialization notebook with external test videos that are already in AVI format.

RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data\_utils\worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 47, in fetch
return self.collate_fn(data)
File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 83, in default_collate
return [default_collate(samples) for samples in transposed]
File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 83, in <listcomp>
return [default_collate(samples) for samples in transposed]
File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 63, in default_collate
return default_collate([torch.as_tensor(b) for b in batch])
File "C:\Users\MadanB\anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 55, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [3, 16, 472, 636] at entry 0 and [3, 16, 600, 800] at entry
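A sketch of a likely remedy (an assumption about the cause, not the repo's preprocessing): default_collate can only stack equally sized tensors, and the log above shows clips of 472x636 and 600x800, so resizing every frame to the 112x112 used by EchoNet before batching removes the mismatch.

import cv2
import numpy as np

def resize_clip(frames, size=112):
    """frames: (F, H, W, C) uint8 array -> (F, size, size, C)."""
    return np.stack([cv2.resize(f, (size, size)) for f in frames])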

RuntimeError: DataLoader worker (pid 37531) is killed by signal: Killed.

Hi,

When I run the trained model on the test data, I always get this error:

41%|███████▍ | 525/1276 [37:39<43:27, 3.47s/it, 29.33 (4.19) / 27.63]tensor([[62.2167]], device='cuda:0')
41%|███████▍ | 526/1276 [37:41<40:23, 3.23s/it, 29.30 (2.20) / 27.65]tensor([[64.5487]], device='cuda:0')
41%|███████ | 527/1276 [37:45<40:46, 3.27s/it, 29.27 (13.39) / 27.67]tensor([[58.4618]], device='cuda:0')
41%|██████▌ | 528/1276 [37:47<38:45, 3.11s/it, 29.43 (157.65) / 27.72]tensor([[61.3680]], device='cuda:0')
41%|███████▍ | 529/1276 [37:51<39:09, 3.15s/it, 29.40 (8.24) / 27.73]tensor([[61.9610]], device='cuda:0')
42%|███████▍ | 530/1276 [37:57<51:21, 4.13s/it, 29.33 (4.85) / 27.71]tensor([[64.5215]], device='cuda:0')
42%|███████▍ | 531/1276 [38:03<57:19, 4.62s/it, 29.26 (2.91) / 27.70]tensor([[49.3714]], device='cuda:0')
42%|███████ | 532/1276 [38:07<56:39, 4.57s/it, 29.26 (28.22) / 27.68]tensor([[48.1342]], device='cuda:0')
42%|███████ | 533/1276 [38:10<50:35, 4.09s/it, 29.28 (47.93) / 27.67]tensor([[29.6015]], device='cuda:0')
42%|██████▎ | 534/1276 [38:20<1:12:10, 5.84s/it, 29.37 (72.82) / 27.62]tensor([[66.5562]], device='cuda:0')
42%|██████▎ | 535/1276 [38:26<1:11:19, 5.78s/it, 29.34 (10.90) / 27.66]tensor([[20.8735]], device='cuda:0')
42%|███████▌ | 536/1276 [38:27<53:51, 4.37s/it, 29.33 (9.51) / 27.66]tensor([[40.1236]], device='cuda:0')
42%|███████▏ | 537/1276 [38:30<50:27, 4.10s/it, 29.33 (27.54) / 27.65]
Traceback (most recent call last):
File "run_ef.py", line 3, in <module>
echonet.utils.video.run(modelname="r2plus1d_18", frames=32, period=2, pretrained=True, batch_size=8)
File "/home/yzhang8/dynamic/echonet/utils/video.py", line 184, in run
blocks=2)
File "/home/yzhang8/dynamic/echonet/utils/video.py", line 266, in run_epoch
tmp = model(X[j:(j + blocks), ...])
File "/home/yzhang8/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/yzhang8/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/yzhang8/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/yzhang8/anaconda3/lib/python3.7/site-packages/torchvision/models/video/resnet.py", line 233, in forward
x = self.layer4(x)
File "/home/yzhang8/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/yzhang8/anaconda3/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/yzhang8/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/yzhang8/anaconda3/lib/python3.7/site-packages/torchvision/models/video/resnet.py", line 107, in forward
out = self.conv2(out)
File "/home/yzhang8/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/yzhang8/anaconda3/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/yzhang8/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/yzhang8/anaconda3/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 81, in forward
exponential_average_factor, self.eps)
File "/home/yzhang8/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 1670, in batch_norm
training, momentum, eps, torch.backends.cudnn.enabled
File "/home/yzhang8/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 37531) is killed by signal: Killed.

Prominence argument: misspelled or intended?

In scripts/InitializationNotebook.ipynb:

try:
    trim_min = sorted(size)[round(len(size) ** 0.05)]
except:
    import code; code.interact(local=dict(globals(), **locals()))
trim_max = sorted(size)[round(len(size) ** 0.95)]
trim_range = trim_max - trim_min
peaks = set(scipy.signal.find_peaks(-size, distance=20, prominence=(0.50 * trim_range))[0])

I've noticed that the indices into the sorted size/area list are computed from the length of the video, but I can't grasp why an exponent (len ** 0.05 / len ** 0.95) is used instead of a multiple (len * 0.05 / len * 0.95) to obtain a trimmed range?
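A quick numerical illustration of the question (an illustration only, not a claim about the authors' intent):

n = 200                   # frames in a hypothetical video
print(round(n ** 0.05))   # 1   -> essentially the minimum
print(round(n ** 0.95))   # 153 -> roughly the 77th percentile
print(round(n * 0.05))    # 10  -> the 5th percentile, the usual trim point
print(round(n * 0.95))    # 190 -> the 95th percentile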

ConvertDICOMtoAVI for own Dataset

Hi, thank you for sharing your code. I have a question regarding the conversion of a DICOM dataset to AVI with ConvertDICOMToAVI.ipynb. When I run the code on a personal dataset, I get the following exception:

Cell In[35], line 14, in makeVideo(fileToProcess, destinationFolder)
---> 14 yCrop = np.where(mean<1)[0][0]
IndexError: index 0 is out of bounds for axis 0 with size 0

When I change "mean<1" to a higher number the code runs, but how do I choose this number? I put in something arbitrary for now, but the resulting AVI video does not look very good.
Are there other things I would need to change in the code when using a personal dataset?
Thank you in advance :)

Some cases with strange masks

Hi, thanks for your work!
I have been working with the EchoNet-Dynamic dataset and found some cases with strange masks.
I will put the list of filenames with strange masks in
list_files_with_strange_masks.zip
Could you check these files?
Thank you so much!

empty dataframe as a result of the merge

The following line creates an empty dataframe. Is it because we are missing an outer join (, all = TRUE) instead of an inner join?

beatByBeat <- merge(sizeRelevantFrames, data, by.x = c("Filename", "Frame"), by.y = c("V1", "V2"))

When I add , all = TRUE, I get results; otherwise the dataframe is empty.

ConvertDICOMToAVI

Thank you for the code, but I have two questions about the preprocessing code.
1. For yCrop = np.where(mean<1)[0][0] in code #2: why mean < 1? What is the reason for choosing 1 instead of another number?
2. What is the functional difference between #1 and #2?

code #1:

smallOutput = outputA[int(height/10):(height - int(height/10)), int(height/10):(height - int(height/10))]

code #2:

frame0 = testarray[0]
mean = np.mean(frame0, axis=1)
mean = np.mean(mean, axis=1)
yCrop = np.where(mean<1)[0][0]
testarray = testarray[:, yCrop:, :, :]

bias = int(np.abs(testarray.shape[2] - testarray.shape[1])/2)
if bias > 0:
    if testarray.shape[1] < testarray.shape[2]:
        testarray = testarray[:, :, bias:-bias, :]
    else:
        testarray = testarray[:, bias:-bias, :, :]

Beat Detection at Train Time?

Hello all,

First, I want to congratulate you on this paper; it is well written and clearly demonstrates clinical value. One aspect of the paper that I must have missed has to do with how the 3D CNN is trained. From what I can tell, it produces a single classification (the ejection fraction) for the entire clip, which itself comprises multiple ventricular contractions. I believe that at test time a single ventricular contraction is identified using the segmentation model and then fed into the classifier. This process is repeated for each ventricular contraction contained in the clip, and then the classifier's predictions are averaged. If I understand correctly, you are inputting the entire clip during training but inputting the clip in fragments during evaluation. Is this something that is accounted for, or is it handled naturally by the model?

Quick Question on VolumeTracings.csv

Are there any formal papers on the definitions of X1, X2, Y1, Y2? I am not a cardiologist so I was not exactly sure what they represented. I've only seen them used in https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4694312/, so the lack of use of those values raised a doubt.

Also, in VolumeTracings.csv, assuming that X1, X2, Y1, Y2 are properties of the heart that would vary over the course of the video, why are there different values of those for the same frame in the dataset provided by Stanford? For example:
FileName,X1,Y1,X2,Y2,Frame
0X100009310A3BD7FC.avi,51.26041667,15.34895833,64.93229167,69.125,46
0X100009310A3BD7FC.avi,50.03761083,17.16784126,53.36722189,16.32132997,46

Possible error in calculation of DICE scores

In segmentation.py:
https://github.com/echonet/dynamic/blob/master/echonet/utils/segmentation.py

, lines #175-#178, there are the following 4 lines of code:

loss, large_inter, large_union, small_inter, small_union = echonet.utils.segmentation.run_epoch(model, dataloader, phase == "train", optim, device)

overall_dice = 2 * (large_inter.sum() + small_inter.sum()) / (large_union.sum() + large_inter.sum() + small_union.sum() + small_inter.sum())

large_dice = 2 * large_inter.sum() / (large_union.sum() + large_inter.sum())

small_dice = 2 * small_inter.sum() / (small_union.sum() + small_inter.sum())

If I understand correctly, the large_inter, large_union, small_inter, small_union returned by run_epoch() are lists with a length equal to the number of dataset items. To output one (large, small, or overall) Dice value for the whole epoch, it would be reasonable to take the average (large, small, or overall) Dice across dataset items. However, the above code does something different: it sums the numerators separately, sums the denominators separately, and then divides them.
This is not equal to averaging the individual Dice scores.

Was it done this way intentionally?
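The two aggregation schemes in question, on toy numbers (inter and union are hypothetical per-video pixel counts, with union excluding the intersection so that Dice = 2I / (U + I) matches the repo's formula):

import numpy as np

inter = np.array([90.0, 10.0])
union = np.array([100.0, 40.0])

pooled = 2 * inter.sum() / (union.sum() + inter.sum())  # 200 / 240 = 0.833
per_item = 2 * inter / (union + inter)                  # [0.947, 0.400]
print(pooled, per_item.mean())                          # 0.833 vs 0.674
# Pooling weights each video by its pixel count; the per-item mean weights
# every video equally, so the two differ whenever sizes or scores vary.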

Segmentation setup problems

Hi,

Thanks for sharing your dataset and code.

I'm having some trouble getting set up to use echonet and I'm wondering if I have understood everything correctly, or if there is some more documentation that I am missing.

  1. I'd like to segment the LV in my own data; do I need expert tracings for ED and ES?
  2. Am I correct in understanding that FileList.csv and VolumeTracings.csv are inputs to the model regardless of whether you are training, testing, or validating?
  3. Is there a method to get only the segmentation masks between ED and ES?

@goutnet

Segmentation Mask Identification

Is there a way I can identify the pixels belonging to the segmentation masks?

I performed some comparisons of pixel intensities for a given frame in a given video clip across a variety of video clips. Based on the code, I thought that all pixels belonging to the segmentation mask would have a value of 255. Nevertheless, this is not the case. For example, there are frames from the target that contain not only pixels with intensity 255 but also others, like 240, 247, 230, and 192.

Based on the difference in the frequencies of pixel intensities, it is clear that the target frame has many more pixels with very high intensities compared with the input frame. In that sense, the mask is associated with high pixel intensities. But I would like a straightforward way of identifying such pixels, for example: "pixels with a value greater than X in the predicted frame belong to the segmentation mask".

Thank you :)
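One hedged guess plus a sketch: if the saved videos were re-encoded with lossy AVI compression, an originally binary 0/255 mask would pick up intermediate values (240, 247, 192, ...). Re-binarizing with a midpoint threshold would then recover a clean mask; the 127 cut-off below is an assumption, not a constant from the repository.

import numpy as np

mask = frame > 127          # frame: 2-D uint8 array from the target video
pixels = np.argwhere(mask)  # (row, col) coordinates of mask pixels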

Unexpected noise in the echonet dataset (the mask should have taken care of it, but it doesn't)

This is what I did:

  1. Randomly selected 100 videos from EchoNet.

  2. Chose their very first frame (I later repeated this process with frames other than the first frame and saw the same results).

  3. Applied a binary threshold to them such that any pixel with a value greater than 0 lights up. What we see is step-like noise on the left and right edges of the ultrasound sector. Should it be there?

  4. Applied the mask given in the DICOM conversion notebook to them. The image on the left is the result of step 3; the image on the right is the same image after step 4. Please note the difference between the two.

All of the sampled images gave similar results. If the provided mask had been used on the original EchoNet dataset, we should not see any difference between the results of steps 3 and 4 (the mask is applied after the frames have been resized to 112x112, and no other modification is applied to the images after the masks are applied). This means that either noise was introduced to the videos later on, or the dataset had a different mask applied to it than the one made available.

Could you provide any idea of why I am seeing these results?

The results can be reproduced with this code.

import glob
from random import sample

import cv2
import numpy as np

all_files = glob.glob('Videos/*.avi')
sampled_vids = sample(all_files, 100)

def mask(output):
    # Build the triangular sector mask from the DICOM conversion notebook
    # and show the input and masked image side by side.
    dimension = output.shape[0]
    m1, m2 = np.meshgrid(np.arange(dimension), np.arange(dimension))
    mask = ((m1 + m2) > int(dimension / 2) + int(dimension / 10))
    mask *= ((m1 - m2) < int(dimension / 2) + int(dimension / 10))
    mask = np.reshape(mask, (dimension, dimension)).astype(np.int8)
    maskedImage = cv2.bitwise_and(output, output, mask=mask)
    numpy_horizontal = np.hstack((output, maskedImage))
    return numpy_horizontal

for vid in sampled_vids[:10]:  # renamed from `sample`, which shadowed random.sample
    cap = cv2.VideoCapture(vid)
    ret, frame = cap.read()
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    thresh, blackAndWhiteImage = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY)
    cv2.imshow("result", mask(blackAndWhiteImage))
    cv2.waitKey(0)

cv2.destroyAllWindows()

