
ct-gan's People

Contributors

ymirsky, ymirsky1


ct-gan's Issues

about attack_pipeline.py

Hello Dr. Mirsky,
I tried to implement your code to create the injector and remover models.
I used world coordinates for 169 samples, e.g.:
filename | z | x | y
LIDC-IDRI-0003/1-001.dcm | -169.6239465 | -47.40284557 | -30.171209
LIDC-IDRI-0011/1-001.dcm | -67.08815205 | -72.97101044 | 49.02652606
LIDC-IDRI-0012/1-001.dcm | -196.1703197 | 92.33199711 | 0.471155965
LIDC-IDRI-0013/1-001.dcm | -124.4263249 | -36.51308198 | -65.21501284
LIDC-IDRI-0014/1-001.dcm | -135.9209832 | 59.49693039 | -6.696593702
These coordinates were used to create the injector model.

After completing the generator training process, the D loss, accuracy, and G loss reach the following values:

[Epoch 199/200] [Batch 342/347] [D loss: 0.000427, acc: 100%] [G loss: 4.560094] time: 1 day, 4:55:22.522646
[Epoch 199/200] [Batch 343/347] [D loss: 0.000721, acc: 100%] [G loss: 4.883993] time: 1 day, 4:55:24.019246
[Epoch 199/200] [Batch 344/347] [D loss: 0.000610, acc: 100%] [G loss: 5.273649] time: 1 day, 4:55:25.515293
[Epoch 199/200] [Batch 345/347] [D loss: 0.001337, acc: 100%] [G loss: 4.006495] time: 1 day, 4:55:27.011006

When I use the generator model to inject into a new scan by running 3A_inject_evidence.py, the noise touch-up step in attack_pipeline.py (starting at the line below)

print("Adding noise touch-ups...")

produces these runtime warnings in the output:

Adding noise touch-ups...
/home/Desktop/final2/env/lib/python3.8/site-packages/numpy/core/_methods.py:233: RuntimeWarning: Degrees of freedom <= 0 for slice
  ret = _var(a, axis=axis, dtype=dtype, out=out, ddof=ddof,
/home/Desktop/final2/env/lib/python3.8/site-packages/numpy/core/_methods.py:194: RuntimeWarning: invalid value encountered in true_divide
  arrmean = um.true_divide(
/home/Desktop/final2/env/lib/python3.8/site-packages/numpy/core/_methods.py:226: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
/home/Desktop/final2/env/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3372: RuntimeWarning: Mean of empty slice.
  return _methods._mean(a, axis=axis, dtype=dtype,
/home/Desktop/final2/env/lib/python3.8/site-packages/numpy/core/_methods.py:170: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
touch-ups complete
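
For reference, a minimal reproduction of these warnings (illustration only, not the attack code itself). They are what NumPy emits when a mean or standard deviation is taken over an empty slice, which makes me think the touch-up step sampled an empty region of the scan:

import numpy as np

# Illustration only: mean/std over an empty array triggers the same messages.
empty = np.array([])
print(np.mean(empty))  # RuntimeWarning: Mean of empty slice -> nan
print(np.std(empty))   # RuntimeWarning: Degrees of freedom <= 0 for slice -> nan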

As a result of the injection process, some of the saved scans look like this:

[attached screenshot of the saved scan]

For some scans, I don't receive the runtime warnings and the injection is applied to the saved scan.
Can you help me with this?

About the acc and loss of training

Hi,
I am recreating your network and I have a problem with training.
To save time, I reduced the number of rotation-augmented images to 4 and reduced the number of filters to 32.
When I trained for 200 epochs, my results are as follows:
D loss | acc (%) | G loss | time elapsed
0.004051 | 100 | 6.248324 | 2 days, 12:04:16.416641
0.003934 | 100 | 5.108233 | 2 days, 12:04:35.168347
0.010535 | 100 | 4.973368 | 2 days, 12:04:53.246595
0.004152 | 100 | 5.723831 | 2 days, 12:05:10.234157
0.006562 | 100 | 5.827019 | 2 days, 12:05:28.685273
0.003435 | 100 | 5.879334 | 2 days, 12:05:46.087877
I want to know if your results are similar to mine.
Since a GAN is expected to converge with the discriminator accuracy around 50% and the generator loss as small as possible, this result does not meet expectations.
Do you have a way to optimize the network?
Your work has helped me a lot.
Thanks,
Best wishes.

about world2vox method

Hello Dr. Mirsky,
I have a problem with the world2vox method, specifically this line:

world_coord = np.dot(np.linalg.inv(np.dot(orientation, np.diag(spacing))), world_coord - origin)

The orientation value is [1. 0. 0. 0. 1. 0.] and the spacing value is in [z, y, x] order, so after np.diag it becomes

[[z 0 0]
 [0 y 0]
 [0 0 x]]

Therefore np.dot(orientation, np.diag(spacing)) fails, because the (6,) orientation vector and the (3, 3) spacing matrix cannot be multiplied, and this exception is thrown:
ValueError: shapes (6,) and (3,3) not aligned: 6 (dim 0) != 3 (dim 0)
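
For reference, a rough sketch of how I imagine the 6-element vector could be reshaped into a 3x3 direction-cosine matrix before the dot product (my own assumption that the six values are the DICOM ImageOrientationPatient row and column cosines; this is not the repository's code):

import numpy as np

# Rough sketch (my assumption, not the repository's code): turn the 6-element
# ImageOrientationPatient value into a 3x3 direction-cosine matrix.
def world2vox_sketch(world_coord, origin, spacing, iop):
    row_cos = np.asarray(iop[:3], dtype=float)    # direction of image rows
    col_cos = np.asarray(iop[3:], dtype=float)    # direction of image columns
    slice_cos = np.cross(row_cos, col_cos)        # normal of the image plane
    orientation = np.stack([row_cos, col_cos, slice_cos])  # shape (3, 3)
    affine = np.dot(orientation, np.diag(spacing))
    return np.dot(np.linalg.inv(affine), np.asarray(world_coord) - np.asarray(origin))

I am not sure whether this row/column/normal ordering matches the [z, y, x] spacing order used elsewhere in the code, so please correct me if it does not.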
Can you help me with this?

Discriminator - activation function

Hello, I would like to ask why you didn't use any activation function after the last convolutional layer in the discriminator.

# Imports assumed for this excerpt (Keras):
from keras.layers import Input, Conv3D, LeakyReLU, BatchNormalization, Concatenate
from keras.models import Model

def build_discriminator(self):
    def d_layer(layer_input, filters, f_size=4, bn=True):
        """Discriminator layer"""
        d = Conv3D(filters, kernel_size=f_size, strides=2, padding='same')(layer_input)
        d = LeakyReLU(alpha=0.2)(d)
        if bn:
            d = BatchNormalization(momentum=0.8)(d)
        return d

    img_A = Input(shape=self.img_shape)
    img_B = Input(shape=self.img_shape)

    # Concatenate image and conditioning image by channels to produce input
    model_input = Concatenate(axis=-1)([img_A, img_B])

    d1 = d_layer(model_input, self.df, bn=False)
    d2 = d_layer(d1, self.df * 2)
    d3 = d_layer(d2, self.df * 4)
    d4 = d_layer(d3, self.df * 8)

    validity = Conv3D(1, kernel_size=4, strides=1, padding='same')(d4)

    return Model([img_A, img_B], validity)

Also, the shape of validity is (2, 2, 2, 1), which is not clear to me. Shouldn't it be just one neuron (real or fake), as depicted in Figure 6 of your publication?
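
For context, my own rough reasoning about where the (2, 2, 2, 1) shape comes from (a sketch assuming a 32x32x32 input cube; the exact cube size is my assumption):

import math

# Sketch of the downsampling arithmetic (my reasoning, not the repository's
# code): each d_layer uses a stride-2 Conv3D with padding='same', which
# halves every spatial dimension, rounding up.
side = 32                        # assumed input cube side
for _ in range(4):               # d1, d2, d3, d4
    side = math.ceil(side / 2)   # 32 -> 16 -> 8 -> 4 -> 2
print(side)                      # 2, so the final Conv3D gives a (2, 2, 2, 1) map

If I understand correctly, each of those 2x2x2 outputs scores one patch of the input rather than the whole cube, similar to a PatchGAN discriminator. Is that the intention?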

About DICOM output

I have a problem with the output of "3_A_inject_evidence.py".
After running "3_A_inject_evidence.py", the DICOM output in an online DICOM viewer looks like this:
[attached screenshot of the DICOM viewer output]

Can you help me with this?
