
dscmr's Introduction

DSCMR

Liangli Zhen*, Peng Hu*, Xu Wang, Dezhong Peng, "Deep Supervised Cross-modal Retrieval", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019: 10394-10403. (* denotes equal contribution; PyTorch code)

Abstract

Cross-modal retrieval aims to enable flexible retrieval across different modalities. The core of cross-modal retrieval is how to measure the content similarity between different types of data. In this paper, we present a novel cross-modal retrieval method, called Deep Supervised Cross-modal Retrieval (DSCMR). It aims to find a common representation space, in which the samples from different modalities can be compared directly. Specifically, DSCMR minimises the discrimination loss in both the label space and the common representation space to supervise the model learning discriminative features. Furthermore, it simultaneously minimises the modality invariance loss and uses a weight sharing strategy to eliminate the cross-modal discrepancy of multimedia data in the common representation space to learn modality-invariant features. Comprehensive experimental results on four widely-used benchmark datasets demonstrate that the proposed method is effective in cross-modal learning and significantly outperforms the state-of-the-art cross-modal retrieval methods.
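The objective described above combines three terms: discrimination in the label space, discrimination in the common representation space, and modality invariance. The following is a minimal PyTorch sketch of that structure, written from the abstract alone; the exact similarity term and weighting in the paper may differ, and the names `u`, `v`, `alpha`, `beta` are illustrative.

```python
import torch

def dscmr_loss(u, v, u_pred, v_pred, labels, alpha, beta):
    """Sketch of an objective of the form J = J1 + alpha*J2 + beta*J3.

    u, v           -- common-space features of the two modalities, (N, d)
    u_pred, v_pred -- label predictions from the shared classifier, (N, c)
    labels         -- shared multi-hot label matrix, (N, c); image/text
                      pairs share ground truth, hence one labels tensor
    """
    # J1: discrimination loss in the label space, one term per modality.
    j1 = ((u_pred - labels) ** 2).sum(1).sqrt().mean() \
       + ((v_pred - labels) ** 2).sum(1).sqrt().mean()

    # J2: discrimination loss in the common representation space -- a
    # logistic loss over pairwise similarities vs. label agreement.
    sim = (labels @ labels.t() > 0).float()  # 1 if two samples share a label
    theta = 0.5 * (u @ v.t())
    j2 = (torch.log1p(torch.exp(theta)) - sim * theta).mean()

    # J3: modality invariance loss between paired representations.
    j3 = ((u - v) ** 2).sum(1).sqrt().mean()

    return j1 + alpha * j2 + beta * j3
```

Note that both label-space terms take the same `labels` tensor: paired image/text samples share their ground-truth annotation, so each modality's prediction is supervised against the same targets.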

Framework

[Figure: overview of the DSCMR framework]

Results

Performance comparison in terms of mAP scores on the Pascal Sentence dataset. The highest score is shown in boldface.

Citing DSCMR

If you find DSCMR useful in your research, please consider citing:

@inproceedings{zhen2019deep,
  title={Deep Supervised Cross-Modal Retrieval},
  author={Zhen, Liangli and Hu, Peng and Wang, Xu and Peng, Dezhong},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={10394--10403},
  year={2019}
}

dscmr's People

Contributors

penghu-cs


dscmr's Issues

NUS-WIDE-10K dataset

Could you share the processed NUS-WIDE-10K dataset used in the paper? The NUS-WIDE-10K data is hard to find now.

Original dataset

Hello, in this work the datasets are first passed through VGG and a text encoder to obtain features, and those features are then fed into DSCMR. Is it possible to train directly on the raw datasets instead?

How to perform retrieval

After the model is trained, how do I use it for retrieval? Does the repository provide code for retrieval without labels?

A question about the code

Hello! In train_model.py, the loss computation is given two identical labels arguments. Doesn't that make the label loss meaningless?
loss = calc_loss(view1_feature, view2_feature, view1_predict, view2_predict, labels, labels, alpha, beta)

About feature extraction

The code uses VGG19 to extract image features and TextCNN to extract text features, and all comparison experiments run on these pre-extracted features. Should VGG19 and TextCNN be pre-trained on the training set and then used to extract the features?

Feature extraction for Pascal Sentence

The original Pascal Sentence dataset provides 5 sentences per image. What method did you follow to extract the text features in this case? Is a feature computed per sentence and then averaged? Please clarify.
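One common way to handle multiple captions per image is the averaging the question suggests: encode each sentence separately (e.g. with TextCNN), then mean-pool the per-sentence vectors. This is an assumption for illustration, not a confirmation of the authors' procedure; `pool_sentence_features` is a hypothetical helper.

```python
import torch

def pool_sentence_features(sent_feats: torch.Tensor) -> torch.Tensor:
    """Mean-pool per-sentence features: (num_sentences, dim) -> (dim,).

    Hypothetical helper: assumes each of an image's sentences has
    already been encoded into a fixed-size vector.
    """
    return sent_feats.mean(dim=0)
```

Mean pooling keeps the feature dimensionality fixed regardless of how many captions an image has, which is why it is a frequent default for this situation.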

k-fold validation

Are the performance metrics reported after 5-fold or 10-fold cross-validation?

Validation set

It seems your data split follows "Cross-modal Retrieval with Correspondence Autoencoder", am I right?

Can I ask how the validation set was used? In your code the test set is used for validation. Am I right that the validation set is ignored?
