dtfd-mil's Issues

Inquiry Regarding FROC Score Calculation

We have encountered challenges in replicating the FROC score calculations. To ensure accuracy and consistency, we would greatly appreciate your assistance with the following inquiries:

  • Could you kindly provide us with the complete pipeline code you used to calculate the FROC score?

If not, could you please guide us on the following (a sketch of the evaluation convention we assumed follows this list):

  • How did you determine the coordinates of the points used in the calculations?

  • If clustering was involved in selecting the coordinates, could you elaborate on the process? For instance, did you extract the number of clusters from the XML file containing the tumor region coordinates before clustering the model probability scores? Additionally, was the center of each cluster used for the calculations?

  • Could you specify the number of coordinates you chose for the FROC score calculations (or, if it is not constant, how you choose this number)?
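For reference, here is a minimal sketch of the standard CAMELYON16 FROC convention as we understand it. This is an assumption, not the authors' confirmed pipeline; fp_probs_per_slide and tumor_hit_probs are hypothetical inputs one would derive from the detection coordinates and the XML annotations:

    import numpy as np

    def froc_score(fp_probs_per_slide, tumor_hit_probs, n_tumors, n_slides,
                   fps_points=(0.25, 0.5, 1, 2, 4, 8)):
        # fp_probs_per_slide: one array per slide holding the probabilities of
        #   detections that fall outside every annotated tumor region
        # tumor_hit_probs: one value per annotated tumor = the max probability
        #   of any detection inside that tumor (0 if the tumor was never hit)
        all_fps = np.concatenate(fp_probs_per_slide)
        hits = np.asarray(tumor_hit_probs)
        thresholds = np.sort(np.unique(np.concatenate([all_fps, hits])))[::-1]
        sens = np.array([(hits >= t).sum() / n_tumors for t in thresholds])
        avg_fp = np.array([(all_fps >= t).sum() / n_slides for t in thresholds])
        # FROC score = mean sensitivity at 1/4, 1/2, 1, 2, 4 and 8 average
        # false positives per slide
        return float(np.mean([np.max(sens[avg_fp <= f], initial=0.0)
                              for f in fps_points]))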

Question about TCGA datasets

Hello,
Thanks for your amazing work!

Could you please provide more details on how the TCGA datasets were generated? I found on the official website that TCGA-LUAD alone contains 585 cases and more than 1,000 slides.

Questions about CAMELYON16

Hello,
Thanks for your amazing work!

I have read your paper carefully. You state in the paper, "There are in total 3.7 millions patches from the CAMELYON-16 dataset", which works out to about 9,200 patches per slide.
However, after processing this dataset with CLAM, I got around 15,867 patches per slide, which is much larger, nearly double the counts reported by the earlier works TransMIL and DSMIL.
Can you tell me how you computed the patch number and how you preprocessed the slides? (A back-of-the-envelope check is sketched below.)
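For context, the per-slide figure above follows from the paper's total under an assumed slide count (the standard CAMELYON16 setup of 399 usable slides is an assumption here):

    # Back-of-the-envelope check of the figures quoted above
    total_patches = 3_700_000  # total quoted from the paper
    n_slides = 399             # assumption: 270 train + 129 test slides,
                               # with test_114 commonly excluded
    print(round(total_patches / n_slides))  # -> 9273, i.e. roughly 9,200/slide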

Looking forward to your reply!

Query pertaining to Attention map

Hello,
I found your repository amazing, and it has helped me a lot. However, I am facing an issue related to the attention map: I am stuck on how you generated the attention map for external test data. Can you please help me out? Thanks in anticipation. My email: [email protected].
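In case it helps frame the question, here is a minimal sketch (not the authors' released code) of one common way to build such a map, assuming a trained attention module `attn` that scores each patch, plus per-patch grid coordinates:

    import numpy as np
    import torch

    @torch.no_grad()
    def attention_map(features, coords, attn, grid_shape):
        # features: [N, D] patch embeddings from the external slide
        # coords:   [N, 2] (row, col) grid position of each patch
        weights = torch.softmax(attn(features).squeeze(-1), dim=0)  # [N]
        heat = np.zeros(grid_shape, dtype=np.float32)
        for (r, c), w in zip(coords, weights.cpu().numpy()):
            heat[r, c] = w
        return heat / (heat.max() + 1e-8)  # normalize to [0, 1] for display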

How are there 8.3 million patches in TCGA Lung dataset?

Hi @hrzhang1123, and thank you for your great work. I recently downloaded the TCGA-LUAD and TCGA-LUSC datasets and used the patch-tiling code in your repo (level 1, 256x256 patches, 0.8 threshold), but found only about 3 million patches. I noticed that this TCGA dataset has very few pyramid levels, and each higher level is 4 times smaller in each dimension than the level below it.

Can you provide some more details about the preprocessing of the TCGA dataset? (A tiling sketch illustrating how the level choice affects patch counts follows.)
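As a point of reference, here is a rough tiling sketch under the stated settings (OpenSlide, 256x256 tiles, a crude grayscale tissue mask standing in for the repo's 0.8 threshold; both are assumptions). Since each level is 4x smaller per dimension, moving one level up cuts the patch count by roughly 16x, which alone can explain million-scale discrepancies:

    import numpy as np
    import openslide

    def count_tissue_patches(slide_path, level=1, tile=256, tissue_thresh=0.8):
        slide = openslide.OpenSlide(slide_path)
        w, h = slide.level_dimensions[level]
        scale = int(slide.level_downsamples[level])
        n = 0
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                # read_region expects level-0 coordinates
                patch = slide.read_region((x * scale, y * scale), level,
                                          (tile, tile))
                gray = np.array(patch.convert("L"))
                if (gray < 220).mean() >= tissue_thresh:  # crude tissue mask
                    n += 1
        return n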

Question about mDATA_train

What does mDATA_train look like? What format is it in? Is there source code for generating it? Thanks.
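For what it's worth, the training script appears to consume a pickled dict mapping slide names to lists of per-patch feature dicts; the layout below is inferred from the code and should be treated as an unconfirmed assumption:

    import pickle
    import numpy as np

    # Hypothetical layout inferred from the training script (unconfirmed):
    # {slide_name: [{"feature": 1024-d float32 vector}, ... one per patch]}
    mDATA_train = {
        "tumor_001": [{"feature": np.random.randn(1024).astype(np.float32)}
                      for _ in range(100)],
    }
    with open("mDATA_train.pkl", "wb") as f:
        pickle.dump(mDATA_train, f)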

wrong gradient calculation code?

Hi @hrzhang1123,

With torch version 1.12, the code raises

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [128, 1]], which is output 0 of AsStridedBackward0, is at version 6; expected version 5 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

at

     64 loss1 = ce_cri(gSlidePred, tslideLabel).mean()
     65 optimizer1.zero_grad()
---> 66 loss1.backward()
     67 torch.nn.utils.clip_grad_norm_(attCls.parameters(), 5)
     68 optimizer1.step()

This can be resolved by deferring optimizer0.step() until right before optimizer1.step(). This makes sense: stepping optimizer0 before the backward pass of loss1 modifies the first-tier weights in place, so loss1 would be backpropagated through weights that no longer match its saved graph. Could you consider reviewing this?
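A self-contained toy reproduction of the proposed fix (the two-linear-layer setup is our own stand-in for the repo's two tiers, not the original model):

    import torch

    # Two tiers sharing one computation graph, mimicking the repo's structure
    tier1 = torch.nn.Linear(8, 8)
    tier2 = torch.nn.Linear(8, 1)
    optimizer0 = torch.optim.SGD(tier1.parameters(), lr=0.1)
    optimizer1 = torch.optim.SGD(tier2.parameters(), lr=0.1)

    x = torch.randn(4, 8)
    h = tier1(x)                    # used by BOTH losses
    loss0 = h.pow(2).mean()         # tier-1 loss
    loss1 = tier2(h).pow(2).mean()  # tier-2 loss, depends on tier-1's output

    optimizer0.zero_grad()
    optimizer1.zero_grad()
    loss0.backward(retain_graph=True)
    loss1.backward()   # must run BEFORE optimizer0.step(): the step would
                       # modify tier1's weights in place and invalidate the
                       # tensors saved in loss1's graph (the reported error)
    optimizer0.step()  # deferred, as proposed above
    optimizer1.step()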

Code for generating mDATA_train.pkl and mDATA_test.pkl

Thank you for your great work. I have studied your code for some days and found that I cannot generate the same pickle dataset with the existing code in this repo. I used ./Patch_Generation/gen_patch_noLabel_stride_MultiProcessing_multiScales.py with default settings, and then used ResNet50 with 'https://download.pytorch.org/models/resnet50-0676ba61.pth' and 'https://download.pytorch.org/models/resnet50-19c8e357.pth' as pretrained weights (used the same way as in CLAM), but neither produces the same embeddings as the provided pickle dataset. For many cases the number of patches is different, and for all cases the specific embedding values are different (I printed them out and compared them). On my self-generated dataset, DTFD-MIL only achieves ~80% AUC, although it is still better than AB-MIL.

I have recently been researching different patch-embedding methods for WSIs, and I really want to figure out how you generated these pickle files. I would appreciate it if you could release that part of the code. Thank you very much.
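Our working assumption of the intended extractor, following CLAM's convention (ImageNet ResNet-50 truncated after the third residual stage, which yields the 1024-d features seen in the pickles); this is a sketch, not the authors' confirmed setup:

    import torch
    import torchvision

    backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
    # keep everything up to layer3 (1024 channels), then global-average-pool
    extractor = torch.nn.Sequential(*list(backbone.children())[:-3],
                                    torch.nn.AdaptiveAvgPool2d(1),
                                    torch.nn.Flatten())
    extractor.eval()

    with torch.no_grad():
        feats = extractor(torch.randn(8, 3, 224, 224))  # a batch of 8 patches
    print(feats.shape)  # torch.Size([8, 1024])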

Some of the comparison results are presented below:

test_071 summary:
theirs: torch.Size([12999, 1024]) tensor([[0.0919, 0.0167, 0.0310,  ..., 0.0175, 0.0319, 0.0038],
        [0.1232, 0.0111, 0.0156,  ..., 0.0162, 0.0568, 0.0118],
        [0.1152, 0.0135, 0.0156,  ..., 0.0050, 0.0260, 0.0015],
        ...,
        [0.1021, 0.0017, 0.0194,  ..., 0.0127, 0.0466, 0.0149],
        [0.0827, 0.0208, 0.0043,  ..., 0.0054, 0.0076, 0.0967],
        [0.0640, 0.0026, 0.0143,  ..., 0.0223, 0.0424, 0.1296]])
mine: (13006, 1024) [[0.09298387 0.00787506 0.02702368 ... 0.08614514 0.01721004 0.00283041]
 [0.09529838 0.01143376 0.0269012  ... 0.03425035 0.01737321 0.00083463]
 [0.09971049 0.00616635 0.01358768 ... 0.0593804  0.00567114 0.00102372]
 ...
 [0.08578846 0.00922636 0.02506541 ... 0.09502593 0.01051461 0.00113597]
 [0.07405391 0.01740379 0.0229316  ... 0.06174733 0.03320831 0.00102358]
 [0.09061661 0.05501088 0.05803587 ... 0.05294361 0.0326161  0.00388881]]
test_031 summary:
theirs: torch.Size([13752, 1024]) tensor([[0.0931, 0.0106, 0.0188,  ..., 0.0085, 0.0415, 0.0575],
        [0.1111, 0.0124, 0.0286,  ..., 0.0113, 0.0258, 0.0075],
        [0.1193, 0.0152, 0.0323,  ..., 0.0139, 0.0465, 0.0139],
        ...,
        [0.0584, 0.0585, 0.0218,  ..., 0.0122, 0.0002, 0.1125],
        [0.0586, 0.0357, 0.0281,  ..., 0.0133, 0.0005, 0.1037],
        [0.0636, 0.0592, 0.0136,  ..., 0.0075, 0.0143, 0.1043]])
mine: (13764, 1024) [[0.07189967 0.01406772 0.02444521 ... 0.07287923 0.00849304 0.00516289]
 [0.10283884 0.01936848 0.01984986 ... 0.04216565 0.01332673 0.00346617]
 [0.11241646 0.01175074 0.03860182 ... 0.05830961 0.01462294 0.00079632]
 ...
 [0.07396013 0.04159826 0.00564874 ... 0.02188577 0.00073006 0.00016783]
 [0.06995964 0.04401815 0.00859164 ... 0.04087209 0.00284375 0.00054352]
 [0.0927572  0.02555182 0.02619865 ... 0.0330878  0.00973753 0.00094654]]
test_125 summary:
theirs: torch.Size([10996, 1024]) tensor([[0.1653, 0.0149, 0.0187,  ..., 0.0484, 0.0415, 0.0699],
        [0.2085, 0.0278, 0.0174,  ..., 0.0252, 0.0290, 0.0999],
        [0.1266, 0.0156, 0.0103,  ..., 0.0177, 0.0398, 0.1068],
        ...,
        [0.1122, 0.0305, 0.0122,  ..., 0.0101, 0.0179, 0.0349],
        [0.0987, 0.0081, 0.0252,  ..., 0.0163, 0.0142, 0.0225],
        [0.0867, 0.0079, 0.0204,  ..., 0.0235, 0.0080, 0.0156]])
mine: (11041, 1024) [[0.12961207 0.01852305 0.02139583 ... 0.10492406 0.02471678 0.00241166]
 [0.15194337 0.0361916  0.01106476 ... 0.08194499 0.02958321 0.0019473 ]
 [0.10148122 0.02862263 0.02404334 ... 0.07089917 0.02177665 0.00432276]
 ...
 [0.11810204 0.01990168 0.01058996 ... 0.01700757 0.02043242 0.00322259]
 [0.10601783 0.03609397 0.03217479 ... 0.01905232 0.011692   0.00126212]
 [0.1016505  0.00980685 0.01058203 ... 0.0203333  0.0086121  0.00389707]]
test_110 summary:
theirs: torch.Size([12864, 1024]) tensor([[0.1172, 0.0124, 0.0400,  ..., 0.0092, 0.0180, 0.0010],
        [0.0952, 0.0139, 0.0384,  ..., 0.0189, 0.0591, 0.0248],
        [0.0723, 0.0263, 0.0037,  ..., 0.0100, 0.0387, 0.1008],
        ...,
        [0.1052, 0.0058, 0.0032,  ..., 0.0149, 0.0425, 0.0081],
        [0.1191, 0.0088, 0.0146,  ..., 0.0087, 0.0196, 0.0025],
        [0.0859, 0.0058, 0.0196,  ..., 0.0079, 0.0363, 0.0063]])
mine: (12946, 1024) [[0.10266176 0.02140197 0.05080983 ... 0.03921993 0.01773798 0.00050414]
 [0.09473811 0.02023825 0.04183763 ... 0.06355729 0.03108444 0.00175354]
 [0.0589302  0.03030478 0.0036041  ... 0.05283598 0.01199774 0.00228471]
 ...
 [0.09021327 0.01495336 0.01816933 ... 0.04273062 0.01474771 0.00072367]
 [0.08725388 0.01715141 0.02492931 ... 0.0606238  0.01151766 0.00086627]
 [0.0977964  0.00998643 0.01750416 ... 0.10974295 0.01128516 0.00104049]]
test_105 summary:
theirs: torch.Size([26527, 1024]) tensor([[0.0924, 0.0070, 0.0340,  ..., 0.0469, 0.0254, 0.0076],
        [0.0950, 0.0236, 0.0607,  ..., 0.0334, 0.0249, 0.0224],
        [0.1184, 0.0173, 0.0223,  ..., 0.0055, 0.0269, 0.0390],
        ...,
        [0.0663, 0.0139, 0.0143,  ..., 0.0299, 0.0280, 0.0116],
        [0.0599, 0.0054, 0.0071,  ..., 0.0161, 0.0281, 0.0061],
        [0.0746, 0.0072, 0.0411,  ..., 0.0279, 0.0177, 0.0152]])
mine: (26601, 1024) [[0.10228045 0.00747321 0.02582811 ... 0.06796867 0.01846039 0.0022058 ]
 [0.09738027 0.01689206 0.05214978 ... 0.08914381 0.00974165 0.00197865]
 [0.11008362 0.02129332 0.02399252 ... 0.04785663 0.01990145 0.00316518]
 ...
 [0.07212009 0.01247469 0.02837365 ... 0.08804499 0.01969467 0.00208189]
 [0.05772971 0.01329219 0.0195193  ... 0.05670583 0.01508122 0.00170784]
 [0.09474991 0.01130168 0.03967498 ... 0.07255559 0.01188296 0.0012493 ]]
test_126 summary:
theirs: torch.Size([8940, 1024]) tensor([[0.0893, 0.0206, 0.0147,  ..., 0.0147, 0.0345, 0.0663],
        [0.0594, 0.0252, 0.0020,  ..., 0.0027, 0.0149, 0.0840],
        [0.0433, 0.0323, 0.0219,  ..., 0.0245, 0.0006, 0.0621],
        ...,
        [0.1481, 0.0079, 0.0100,  ..., 0.0186, 0.0968, 0.0134],
        [0.0976, 0.0206, 0.0382,  ..., 0.0179, 0.0258, 0.0033],
        [0.0864, 0.0101, 0.0443,  ..., 0.0121, 0.0193, 0.0022]])
mine: (8940, 1024) [[0.06437023 0.04059621 0.00818595 ... 0.07174402 0.01044872 0.01311521]
 [0.04844892 0.02401701 0.00824833 ... 0.10263587 0.00762321 0.00679111]
 [0.05775674 0.04278142 0.010421   ... 0.03412992 0.00189229 0.00298131]
 ...
 [0.11047183 0.00725013 0.00907472 ... 0.06723502 0.01963555 0.00097641]
 [0.10491706 0.0279361  0.05064427 ... 0.08815575 0.01043572 0.00123892]
 [0.10548959 0.03577846 0.04627747 ... 0.08880924 0.0110344  0.00038748]]
test_067 summary:
theirs: torch.Size([11970, 1024]) tensor([[0.0846, 0.0306, 0.0418,  ..., 0.0071, 0.0482, 0.0375],
        [0.0726, 0.0231, 0.0153,  ..., 0.0139, 0.0249, 0.0201],
        [0.0756, 0.0614, 0.0295,  ..., 0.0128, 0.0061, 0.0320],
        ...,
        [0.0615, 0.0195, 0.0096,  ..., 0.0017, 0.0031, 0.0577],
        [0.0557, 0.0225, 0.0082,  ..., 0.0125, 0.0019, 0.0649],
        [0.0573, 0.0339, 0.0058,  ..., 0.0053, 0.0007, 0.0605]])
mine: (11981, 1024) [[0.07709298 0.02003443 0.03685188 ... 0.06376243 0.02709399 0.00962392]
 [0.07427253 0.00816574 0.04358419 ... 0.05247562 0.03792135 0.00928744]
 [0.06991082 0.0126879  0.04042016 ... 0.09083964 0.01071144 0.00086453]
 ...
 [0.07603463 0.02236492 0.01474374 ... 0.07660963 0.00512043 0.00195582]
 [0.07476898 0.01848562 0.00822219 ... 0.0666826  0.00248903 0.00190519]
 [0.05809681 0.02737433 0.00720578 ... 0.04657438 0.00149838 0.00253386]]

How are the heatmaps in the paper produced?

Thank you for your fantastic work!

I have some minor questions.
It seems that the heatmaps shown in the paper are post-processed.
First, the heatmaps shown cover only part of the slide rather than the whole WSI.
Second, do you obtain the heatmaps by applying a threshold to suppress background and normal-tissue patches and highlight the tumor patches?

Could you please elaborate on this in detail and release the related code?
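A hedged sketch of the two post-processing steps the question describes (thresholding plus cropping to a region of interest); this is a guess at the procedure, not the released code:

    import numpy as np

    def postprocess_heatmap(prob_grid, thresh=0.5, roi=None):
        # prob_grid: [H, W] per-patch tumor probabilities
        # roi: optional (r0, r1, c0, c1) crop, to show only part of the slide
        heat = np.where(prob_grid >= thresh, prob_grid, 0.0)  # suppress normal tissue
        if roi is not None:
            r0, r1, c0, c1 = roi
            heat = heat[r0:r1, c0:c1]
        return heat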


Val Set question

Thank you for open-sourcing this work. I have a question: where is the link to your validation set? I could not find it.
