DFNet
Keras code for our paper "DFNet: Discriminative feature extraction and integration network for salient object detection"
Our paper can be found at this link.
You can download the pre-computed saliency maps from Google Drive for the following datasets: DUTS-TE, ECSSD, DUT-OMRON, PASCAL-S, HKU-IS, SOD, and THUR15K.
Framework
Modules:
Comparison with the state-of-the-art
1- Quantitative comparison
2- Qualitative comparison
Our Sharpening Loss vs. Cross-entropy Loss visual comparison
Our Sharpening Loss guides the network to output saliency maps with higher certainty and less blurry salient objects, which are much closer to the ground truth than those produced with the Cross-entropy Loss.
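To see why a cross-entropy objective can tolerate blurry, low-certainty saliency maps, consider the toy comparison below. It is not the paper's exact Sharpening Loss, just an illustrative stand-in: per-pixel binary cross-entropy only mildly prefers a confident map over a blurry one, while an added certainty term (here, the Bernoulli entropy of each predicted pixel, a hypothetical choice for illustration) penalizes values near 0.5 much more strongly.

```python
import math

def bce(preds, labels, eps=1e-7):
    """Mean binary cross-entropy over a flat list of pixel probabilities."""
    total = 0.0
    for p, y in zip(preds, labels):
        p = min(max(p, eps), 1 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(preds)

def uncertainty(preds, eps=1e-7):
    """Illustrative certainty term: mean per-pixel Bernoulli entropy.
    High for blurry values near 0.5, low for confident values near 0 or 1.
    (Not the paper's Sharpening Loss formulation -- a toy stand-in only.)"""
    total = 0.0
    for p in preds:
        p = min(max(p, eps), 1 - eps)
        total += -(p * math.log(p) + (1 - p) * math.log(1 - p))
    return total / len(preds)

labels = [1.0, 1.0, 0.0, 0.0]
blurry = [0.6, 0.6, 0.4, 0.4]   # uncertain, "blurry" saliency map
sharp  = [0.9, 0.9, 0.1, 0.1]   # confident, "sharp" saliency map

print(bce(blurry, labels), bce(sharp, labels))   # ~0.511 vs ~0.105
print(uncertainty(blurry), uncertainty(sharp))   # ~0.673 vs ~0.325
```

Both maps classify every pixel correctly under a 0.5 threshold, so cross-entropy alone gives the network little pressure to sharpen; a loss term that explicitly rewards certainty pushes predictions toward 0 and 1, which is the intuition behind the visual difference shown above.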
Usage
If you want to train the model with VGG16 Backbone, you can run
python main.py --batch_size=8 --Backbone_model="VGG16"
You can also use one of the following three options as the Backbone_model: "ResNet50", "NASNetMobile", or "NASNetLarge".
In addition to batch_size and Backbone_model, you can set these training configurations: learning_rate, epochs, train_set_directory, save_directory, use_multiprocessing, and show_ModelSummary.
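The configurations above could be parsed as in the sketch below. This is a hypothetical reconstruction of how main.py might declare its flags, not the repo's actual code; the defaults and types shown (e.g. the learning rate and the directory paths) are assumptions for illustration only.

```python
import argparse

# Hypothetical flag declarations mirroring the options listed above.
# Types and defaults are illustrative guesses, not the repo's actual values.
parser = argparse.ArgumentParser(description="DFNet training (sketch)")
parser.add_argument("--batch_size", type=int, default=8)
parser.add_argument("--Backbone_model", type=str, default="VGG16",
                    choices=["VGG16", "ResNet50", "NASNetMobile", "NASNetLarge"])
parser.add_argument("--learning_rate", type=float, default=1e-4)   # assumed default
parser.add_argument("--epochs", type=int, default=100)             # assumed default
parser.add_argument("--train_set_directory", type=str, default="")  # path assumed
parser.add_argument("--save_directory", type=str, default="")       # path assumed
parser.add_argument("--use_multiprocessing", action="store_true")
parser.add_argument("--show_ModelSummary", action="store_true")

# argparse accepts both "--flag=value" and "--flag value" styles,
# so the command shown earlier parses as expected:
args = parser.parse_args(["--batch_size=8", "--Backbone_model", "VGG16"])
print(args.batch_size, args.Backbone_model)  # 8 VGG16
```

Boolean options like use_multiprocessing are modeled here as store_true switches, so they default to off unless passed on the command line.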
Citation
@article{noori2020dfnet,
  title={DFNet: Discriminative feature extraction and integration network for salient object detection},
  author={Noori, Mehrdad and Mohammadi, Sina and Majelan, Sina Ghofrani and Bahri, Ali and Havaei, Mohammad},
  journal={Engineering Applications of Artificial Intelligence},
  volume={89},
  pages={103419},
  year={2020},
  publisher={Elsevier}
}