
mfas's Introduction


MFAS: Multimodal Fusion Architecture Search

This is an implementation of the paper:

@inproceedings{perez2019mfas,
  title={Mfas: Multimodal fusion architecture search},
  author={P{\'e}rez-R{\'u}a, Juan-Manuel and Vielzeuf, Valentin and Pateux, St{\'e}phane and Baccouche, Moez and Jurie, Fr{\'e}d{\'e}ric},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={6966--6975},
  year={2019}
}

Usage

We focus on the NTU experiments in this repo. The file main_found_ntu.py is used to train and test architectures that were already found. Pick one of them by using the --conf N argument. This script should be easy to modify if you want to try other architectures.

Our best architecture found on NTU is slightly different from the one reported in the paper; it can be tested like so:

python main_found_ntu.py --datadir ../../Data/NTU --checkpointdir ../../Data/NTU/checkpoints/ --use_dataparallel --test_cp best_3_1_1_1_3_0_1_1_1_3_3_0_0.9134.checkpoint --conf 4 --inner_representation_size 128 --batchnorm

To test the architecture from the paper, you can run:

python main_found_ntu.py --datadir ../../Data/NTU --checkpointdir ../../Data/NTU/checkpoints/ --use_dataparallel --test_cp conf_[[3_0_0]_[1_3_0]_[1_1_1]_[3_3_0]]_both_0.896888457572633.checkpoint

Of course, set your own data and checkpoint directories.

Download the pretrained checkpoints

We provide pretrained backbones for the RGB and skeleton modalities, as well as some pretrained found architectures, here: Google Drive link

mfas's People

Contributors

jperezrua, slyviacassell


mfas's Issues

Extending the work to approaches not utilising pre-trained feature extractors

Hi,
Congratulations on the work. It seems really intriguing.
I came across a line in the paper:

However, the reader should consider that our fusion approach is in fact not limited to neural networks as primary feature extractors.

I was wondering if you could elaborate on this a little bit.

I was hoping to use an approach similar to the one described in the paper, but I don't want to restrict the search to pre-trained detectors. If I want to search for pre-fusion and post-fusion layers as well, do you think the current framework can handle that? And what would be a good starting point?

Skeleton Net Low Accuracy

I am trying to reproduce the unimodal and multimodal results reported in the paper. I got following accuracies by running the scripts provided in this repo:

best_3_1_1_1_3_0_1_1_1_3_3_0_0.9134.checkpoint: 90.03%
conf_[[3_0_0]_[1_3_0]_[1_1_1]_[3_3_0]]_both_0.896888457572633.checkpoint: 88.64%

As you can see, the results are reasonable (still about 1% below the numbers you report), which implies that I have set up the dataset correctly.

On the other hand, I get very different results from the skeleton unimodal net. I used the provided pretrained checkpoints for each modality and loaded them into the models.central.Visual and models.central.Skeleton modules. I wrote a simple script to run the forward pass and compute the accuracy of these modules. The results (especially for the skeleton net) are very different from the paper:

skeleton_32frames_85.24.checkpoint: 48.02%
rgb_8frames_83.91.checkpoint: 85.23%

Do you have any idea what I am doing wrong here? I would appreciate your comment.
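A common cause of large unimodal accuracy drops is loading a checkpoint saved from a DataParallel wrapper (whose keys carry a module. prefix) or evaluating without model.eval(). A minimal loading helper, sketched under those assumptions (the model constructor and data loading are repo-specific and omitted here):

import torch

def load_for_eval(model, path):
    # Load a checkpoint, tolerating a wrapping 'state_dict' key and
    # DataParallel 'module.' prefixes, then switch to eval mode.
    ckpt = torch.load(path, map_location='cpu')
    state = ckpt['state_dict'] if isinstance(ckpt, dict) and 'state_dict' in ckpt else ckpt
    state = {k[7:] if k.startswith('module.') else k: v for k, v in state.items()}
    model.load_state_dict(state)
    model.eval()  # BatchNorm/Dropout left in train mode can badly hurt accuracy
    return model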

AVMNIST: my test acc is 65%!

Hi
Thanks for sharing your nice work.
I tried the AV-MNIST code for unimodal image classification with different hyperparameters, but I could not get results better than 65-66%, while 75% accuracy is reported in the paper. Would you kindly guide me on how to fix this?
Thanks

AV-MNIST Dataset

Hi, I am unable to find the AV-MNIST dataset online. Could you kindly share a link? I am just starting out, so I am hoping to begin with a less complex dataset.

Thanks

Preparing MM_IMDB Dataset

Dear Authors,

Would you mind sharing the scripts used to prepare the raw mm_imdb dataset? Or could you tell me whether my interpretation below is right?

This part in your datasets/mm_imdb.py:

image = np.load(imagepath)
label = np.load(labelpath)
text = np.load(textpath)

The "image" is the poster image, the "label" is "genres", and the text is "plot", here a sample from mm_imdb raw dataset:

   "plot": [
       "A stationary camera looks at a large anvil with a blacksmith behind it and one on either side. The smith in the middle draws a        heated metal rod from the fire, places it on the anvil, and all three begin a rhythmic hammering. After several blows, the metal goes          back in the fire. One smith pulls out a bottle of beer, and they each take a swig. Then, out comes the glowing metal and the hammering         resumes.",
       "Three men hammer on an anvil and pass a bottle of beer around."
   ],
   "votes": 1335,
   "title": "Blacksmith Scene",
   "smart canonical title": "Blacksmith Scene",
   "long imdb canonical title": "Blacksmith Scene (1893)",
   "certificates": [
       "USA:Unrated"
   ],
   "long imdb title": "Blacksmith Scene (1893)",
   "country codes": [
       "us"
   ],
   "smart long imdb canonical title": "Blacksmith Scene (1893)",
   "cover url": "http://ia.media-imdb.com/images/M/MV5BNDg0ZDg0YWYtYzMwYi00ZjVlLWI5YzUtNzBkNjlhZWM5ODk5XkEyXkFqcGdeQXVyNDk0MDg4NDk@._V1.      _SX100_SY75_.jpg",
   "sound mix": [
       "lent"
   ],
   "genres": [
       "Short"
   ],

RuntimeError: Error(s) in loading state_dict for Searchable_Skeleton_Image_Net:

Unexpected key(s) in state_dict: "fusion_layers.0.2.weight", "fusion_layers.0.2.bias", "fusion_layers.0.2.running_mean", "fusion_layers.0.2.running_var", "fusion_layers.0.2.num_batches_tracked", "fusion_layers.1.2.weight", "fusion_layers.1.2.bias", "fusion_layers.1.2.running_mean", "fusion_layers.1.2.running_var", "fusion_layers.1.2.num_batches_tracked", "fusion_layers.2.2.weight", "fusion_layers.2.2.bias", "fusion_layers.2.2.running_mean", "fusion_layers.2.2.running_var", "fusion_layers.2.2.num_batches_tracked", "fusion_layers.3.2.weight", "fusion_layers.3.2.bias", "fusion_layers.3.2.running_mean", "fusion_layers.3.2.running_var", "fusion_layers.3.2.num_batches_tracked".

I am testing the network you provided, and I am getting the above error regarding the fusion-layer weights.

Could you please provide a checkpoint file that includes the fusion-layer weights as well?
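For what it's worth, the unexpected keys are BatchNorm parameters (weight, bias, running_mean, running_var, num_batches_tracked) at index 2 of each fusion layer, which suggests the checkpoint was saved from a model built with the --batchnorm flag shown in the Usage section. Passing the same flag when rebuilding the model should make the keys match. Alternatively, a minimal sketch that drops those entries and loads non-strictly (a workaround, not the repo's method; discarding the BN statistics can change results):

import torch

def load_dropping_fusion_bn(model, path):
    # Drop the fusion-layer BatchNorm entries reported as unexpected and
    # load the remaining weights non-strictly. Prefer rebuilding the model
    # with the training-time --batchnorm flag instead, where possible.
    state = torch.load(path, map_location='cpu')
    keep = {k: v for k, v in state.items()
            if not (k.startswith('fusion_layers') and '.2.' in k)}
    model.load_state_dict(keep, strict=False)
    return model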

prepare NTU RGB+D dataset

I downloaded the dataset according to your instructions, but I am stuck at "change all video clips resolution to 256x256 30fps and copy them to the /nturgbd_rgb/avi_256x256_30/ directory". How can I convert all the video clips to 256x256 resolution at 30 fps?
Thank you in advance for your answer.
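One plausible way to do this (an assumption; the instructions do not name a tool) is to batch-convert the clips with ffmpeg. The paths below are illustrative:

import pathlib
import subprocess

src = pathlib.Path('/path/to/nturgbd_rgb/raw')             # original clips
dst = pathlib.Path('/path/to/nturgbd_rgb/avi_256x256_30')  # converted clips
dst.mkdir(parents=True, exist_ok=True)
for clip in src.glob('*.avi'):
    # Rescale to 256x256 and resample to 30 fps.
    subprocess.run(['ffmpeg', '-i', str(clip),
                    '-vf', 'scale=256:256', '-r', '30',
                    str(dst / clip.name)], check=True)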

MM_IMDB Searchable and AV-MNIST Dataset

Dear Author,

Thanks for this work! I'm trying to reproduce the results. First, I want to know whether AV-MNIST is a public dataset, because I can't find it. In the meantime I'm trying to use mmimdb, and I have some questions in addition to #8:

  1. I still have some questions about preparing mmimdb: since the image sizes differ, did you crop or pad the images before converting .jpeg to .npy?
  2. I didn't see a searchable class specialized for mmimdb; does that mean I should just use ModelSearcher() for it?
  3. Also, there seem to be 27 classes in mmimdb, not the "23" in the paper:
    Counter({'Drama': 13967, 'Comedy': 8592, 'Romance': 5364, 'Thriller': 5192, 'Crime': 3838, 'Action': 3550, 'Adventure': 2710, 'Horror': 2703, 'Documentary': 2082, 'Mystery': 2057, 'Sci-Fi': 1991, 'Fantasy': 1933, 'Family': 1668, 'Biography': 1343, 'War': 1335, 'History': 1143, 'Music': 1045, 'Animation': 997, 'Musical': 841, 'Western': 705, 'Sport': 634, 'Short': 471, 'Film-Noir': 338, 'News': 64, 'Adult': 4, 'Talk-Show': 2, 'Reality-TV': 1})
    Maybe the last four classes are excluded? (See the sketch after this list.)
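One plausible reconciliation, sketched as an assumption (the paper does not state a cutoff): dropping genres with fewer than 100 samples removes exactly the last four classes and leaves 23.

from collections import Counter

# Genre counts copied from the issue above.
genre_counts = Counter({
    'Drama': 13967, 'Comedy': 8592, 'Romance': 5364, 'Thriller': 5192,
    'Crime': 3838, 'Action': 3550, 'Adventure': 2710, 'Horror': 2703,
    'Documentary': 2082, 'Mystery': 2057, 'Sci-Fi': 1991, 'Fantasy': 1933,
    'Family': 1668, 'Biography': 1343, 'War': 1335, 'History': 1143,
    'Music': 1045, 'Animation': 997, 'Musical': 841, 'Western': 705,
    'Sport': 634, 'Short': 471, 'Film-Noir': 338, 'News': 64,
    'Adult': 4, 'Talk-Show': 2, 'Reality-TV': 1})

MIN_SAMPLES = 100  # hypothetical cutoff; not stated in the paper
kept = sorted(g for g, c in genre_counts.items() if c >= MIN_SAMPLES)
print(len(kept))  # -> 23; News, Adult, Talk-Show and Reality-TV are dropped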

Sincerely,
Somedaywilldo

accuracy of the image unimodal network on AVMNIST

Hi, when I run the code training the unimodal image network (the LeNet-5 structure depicted in the paper) on the disturbed MNIST (25% energy removed), I obtain an accuracy of ~53% instead of the ~74% described in the paper. I also tested an extreme case where only 1% of the energy is removed, which gives an accuracy of 95%, as expected. This implies the problem lies in my dataset rather than in the training settings, I believe.

I was wondering what the issue might be? Or have you ever come across this problem before? Thanks for your time.
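For comparison, a minimal sketch of one plausible "energy removal" (an assumption: PCA reconstruction keeping a fraction of the total variance; the exact AV-MNIST recipe is not specified in this repo):

import numpy as np

def remove_energy(images, keep=0.25):
    # Project flattened images onto the top principal components that
    # retain `keep` of the total variance, then reconstruct.
    x = images.reshape(len(images), -1).astype(np.float64)
    mean = x.mean(axis=0)
    u, s, vt = np.linalg.svd(x - mean, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(energy, keep)) + 1  # smallest k reaching `keep`
    return ((u[:, :k] * s[:k]) @ vt[:k] + mean).reshape(images.shape)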

What does _create_alphas() function mean?

Hi, thank you for your great work!

I am a little confused about the meaning of self.alphas = self._create_alphas() in the Searchable_xxx_Net classes. What is the _create_alphas() function used for?
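For context, in architecture-search code "alphas" conventionally names the architecture parameters: per-layer weights or indices over the candidate configurations that the search explores, as opposed to the network weights themselves. A generic sketch of that convention (an assumption about intent, not this repo's actual implementation):

import torch

def create_alphas(num_fusion_layers, num_choices):
    # One architecture parameter per (fusion layer, candidate configuration)
    # pair; the search manipulates these rather than the network weights.
    return torch.nn.Parameter(torch.zeros(num_fusion_layers, num_choices))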

Git clone fails on Windows because a reserved name is used as a folder name

Hi juanmanpr, thank you for open-sourcing MFAS. I encountered a bug when cloning your repository on Windows. The details of the bug are shown below:

Cloning into 'mfas'...
remote: Enumerating objects: 63, done.
remote: Counting objects: 100% (63/63), done.
remote: Compressing objects: 100% (55/55), done.
remote: Total 63 (delta 20), reused 38 (delta 6), pack-reused 0
Unpacking objects: 100% (63/63), done.
fatal: cannot create directory at 'models/aux': Invalid argument
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'

This bug is caused by a forbidden file name used in the repository. Here is what Microsoft says:

Do not use the following reserved names for the name of a file: CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9. Also avoid these names followed immediately by an extension; for example, NUL.txt is not recommended. For more information, see Namespaces.

So I hope you can rename the folder models/aux. Thanks!
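Until the folder is renamed, one possible workaround (an assumption, untested here; requires Git 2.35+ for --no-cone) is to skip that directory at checkout with sparse checkout:

git clone --no-checkout https://github.com/juanmanpr/mfas.git
cd mfas
git sparse-checkout set --no-cone "/*" "!models/aux"
git checkout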
