Abstract—A comprehensive knowledge of the immunological landscape in the tumour stroma provides critical information for choosing the type of immunotherapy and can thus improve cancer patient survival. For melanoma, this information is mostly gathered from stroma scans. Frequently, however, stroma scans consist of low-resolution image series with only a few manually acquired high-resolution images. A substantial gain in information content could be obtained if all scans were at high resolution. The goal of this project was to enhance low-resolution melanoma scans to high resolution with a machine learning super-resolution model. To test the performance of such a model, we first produced artificial low-resolution images by 2- and 4-fold downsampling and compression of a set of high-resolution melanoma snapshots obtained from the CHUV hospital. The resulting low-resolution versions were enhanced by a trained model based on a U-Net architecture with a ResNet encoder and a feature loss. This model upscales low-resolution images, taking into account their underlying texture and filling in the unknown details. The predicted high-resolution images were visually close to the ground-truth medical images for the 2-fold, but not the 4-fold, downsampled inputs. Image comparison metrics were largely comparable between predicted and ground-truth high-resolution images, but less accurate for the 4-fold downsampled inputs. Our results show that low-to-high-resolution training is very promising. We suspect that results could be further improved by pretraining the network's encoder on data whose features more closely resemble those observed in medical images.
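The abstract describes generating artificial low-resolution training inputs by 2- and 4-fold downsampling plus compression of the high-resolution snapshots. A minimal sketch of that degradation step is shown below; the function name `make_low_res`, the use of bicubic resampling, and the JPEG quality setting are illustrative assumptions, not the project's confirmed pipeline.

```python
from io import BytesIO

from PIL import Image


def make_low_res(img: Image.Image, factor: int, jpeg_quality: int = 75) -> Image.Image:
    """Downsample `img` by `factor` and apply JPEG compression,
    mimicking the artificial low-resolution inputs described above.
    (Hypothetical helper; bicubic resampling and quality=75 are assumptions.)"""
    w, h = img.size
    small = img.resize((w // factor, h // factor), Image.BICUBIC)
    # Round-trip through an in-memory JPEG to add compression artefacts.
    buf = BytesIO()
    small.save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


# Example on a synthetic 512x512 image (stand-in for a melanoma snapshot).
hr = Image.new("RGB", (512, 512), (180, 40, 60))
lr2 = make_low_res(hr, 2)  # 2-fold degraded version, 256x256
lr4 = make_low_res(hr, 4)  # 4-fold degraded version, 128x128
print(lr2.size, lr4.size)  # (256, 256) (128, 128)
```

The super-resolution model is then trained on pairs of degraded and original images, with the original serving as ground truth.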
marvande / deep_learning_image_super_resolution
Project on super-resolution of medical images with the LTS4 and the CHUV at the Swiss Federal Institute of Technology Lausanne (EPFL).