a2summ's Issues

How to calculate rho and tau in SumMe and TVSum

I saw in the paper that you report F1 as well as rho and tau. However, when I tried to reproduce the results, I could not find any function that calculates rho and tau. How can I produce these values?
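The usual protocol for these metrics (Otani et al., 2019) computes Spearman's rho and Kendall's tau between the predicted frame-level importance scores and each annotator's scores, typically via `scipy.stats.spearmanr` and `scipy.stats.kendalltau` — whether A2Summ does exactly this is an assumption on my part. A dependency-free sketch of the two statistics (Kendall tau-a, i.e. no tie correction, so prefer scipy for real data):

```python
from itertools import combinations

def rank(values):
    # 1-based ranks, with ties assigned the average of their positions
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    # Pearson correlation computed on the ranks of x and y
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def kendall_tau(x, y):
    # tau-a: (concordant - discordant) pairs over all pairs
    n = len(x)
    conc = disc = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / (n * (n - 1) / 2)
```

In the standard evaluation these are averaged over all human annotators per video, then over videos.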

Inference Code

Thanks for this great work, and congratulations on the paper being accepted at CVPR 2023. Could you please provide inference code for a single video? It would be extremely helpful.

Doubts about data sources for text modal

Hi author, I would like to ask how you obtained the transcribed text corresponding to the videos in the SumMe and TVSum datasets. Was it created manually, or did you use an existing model? I am very much looking forward to your answer.

package

When I ran the code, the terminal raised "ModuleNotFoundError: No module named 'ortools.algorithms.pywrapknapsack_solver'", even though I have installed "ortools". Could you please help me?
Also, how can I get the F-score of each individual video in SumMe/TVSum?
Thanks.
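For what it's worth, recent or-tools releases moved the knapsack solver out of `ortools.algorithms.pywrapknapsack_solver` (newer versions expose `ortools.algorithms.python.knapsack_solver` instead), so pinning an ortools version matching the repo's requirements, or updating the import, usually resolves this error. If you only need the shot-selection step, the 0/1 knapsack it performs (maximizing shot scores under the summary-length budget) can also be sketched as a small dynamic program; the function name and interface here are illustrative, not the repo's API:

```python
def knapsack_select(values, weights, capacity):
    """0/1 knapsack via dynamic programming.

    Returns the indices of the selected items (e.g. video shots)
    maximizing total value, with total (integer) weight <= capacity.
    """
    # best[w] = (best total value, chosen indices) at weight budget w
    best = [(0, [])] * (capacity + 1)
    for i in range(len(values)):
        new = best[:]  # snapshot so each item is used at most once
        for w in range(weights[i], capacity + 1):
            v, picked = best[w - weights[i]]
            if v + values[i] > new[w][0]:
                new[w] = (v + values[i], picked + [i])
        best = new
    return best[capacity][1]
```

In the common SumMe/TVSum protocol the capacity is roughly 15% of the video length, with shot-level importance scores as values and shot lengths as weights.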

When will the complete code be released?

Hi!

Thanks for your contribution to open source. I just read A2Summ and am very interested in your model. Sadly, I found that the repo only contains the README; when will the complete code of A2Summ be committed?

Best wishes!

Code

When will the code be made public?

F-score results

Thanks for sharing your open-source code!
I have run the project and obtained F-scores on the SumMe and TVSum datasets, but the results differ from those in the paper. The following are the results I got.

For SumMe:
F1_results: {'split0': 0.4879799819599918, 'split1': 0.5798566120133917, 'split2': 0.5529651338830718, 'split3': 0.5264276927288803, 'split4': 0.4501540520029801}
F1-score: 0.5195

For TVSum:
F1_results: {'split0': 0.6302811067187952, 'split1': 0.5925235378291761, 'split2': 0.6476801856701864, 'split3': 0.6284219094681587, 'split4': 0.6346191839437079}
F1-score: 0.6267

The results on the SumMe dataset are quite different from those in the paper.
The parameters in the code were not modified, except that num_workers was set to 0 because of a runtime error. I ran the experiment with the following settings, which differ from those in the README file:

  • PyTorch 1.13.1, CUDA 11.6, and torchvision 0.14.1

I don't think this difference in environment is the main reason for such a large gap. Could you tell me what causes it?
Looking forward to your answer.
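As a sanity check, the aggregate F-scores quoted above are just the unweighted mean of the five per-split scores:

```python
summe = {'split0': 0.4879799819599918, 'split1': 0.5798566120133917,
         'split2': 0.5529651338830718, 'split3': 0.5264276927288803,
         'split4': 0.4501540520029801}
tvsum = {'split0': 0.6302811067187952, 'split1': 0.5925235378291761,
         'split2': 0.6476801856701864, 'split3': 0.6284219094681587,
         'split4': 0.6346191839437079}

mean = lambda d: sum(d.values()) / len(d)
print(round(mean(summe), 4), round(mean(tvsum), 4))  # 0.5195 0.6267
```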

what model exactly for the features?

Hi, I wanted to know which version of CLIP you used, and how you obtained the RoBERTa features. Was it through average pooling, or did you take the features of the [CLS] token?
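The two pooling strategies this question contrasts can be sketched independently of any particular model. The token matrix and mask below are dummies; treating position 0 as RoBERTa's `<s>` (CLS-equivalent) token follows the usual Hugging Face convention, and whichever strategy A2Summ actually uses is exactly what the question asks:

```python
def cls_pool(hidden_states):
    # Take the vector of the first token (<s> / [CLS]) as the
    # sentence embedding.
    return hidden_states[0]

def mean_pool(hidden_states, mask):
    # Average token vectors, counting only non-padding positions.
    dim = len(hidden_states[0])
    total = [0.0] * dim
    n = 0
    for vec, m in zip(hidden_states, mask):
        if m:
            n += 1
            for d in range(dim):
                total[d] += vec[d]
    return [t / n for t in total]
```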

How to get the labels for the multimodel summary?

Thanks for your great work! I am confused by the labels in the multimodal datasets (CNN and Daily Mail). How did you get the labels for the summary? I have read the article many times but cannot find anything about it. Thanks for your reply!

new dataset

When running a new dataset through your model, I get the error "h5py objects cannot be pickled". All requirements are satisfied and have been taken special care of.
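This error typically appears when a `DataLoader` with `num_workers > 0` tries to pickle a `Dataset` that holds an open `h5py.File` handle. A common workaround (an assumption about this repo's code, not a confirmed fix) is to store only the file path in `__init__` and open the file lazily on first access in each worker. The pattern, demonstrated here with a plain file handle since open file objects are likewise unpicklable:

```python
import pickle
import tempfile

class EagerReader:
    def __init__(self, path):
        self.handle = open(path)      # opened up front -> unpicklable

class LazyReader:
    def __init__(self, path):
        self.path = path              # only the path is stored
        self.handle = None
    def read(self):
        if self.handle is None:       # open on first use, per process
            self.handle = open(self.path)
        return self.handle.read()

path = tempfile.NamedTemporaryFile(delete=False).name
try:
    pickle.dumps(EagerReader(path))
    eager_ok = True
except TypeError:
    eager_ok = False                  # file handles cannot be pickled
lazy_ok = isinstance(pickle.dumps(LazyReader(path)), bytes)
print(eager_ok, lazy_ok)  # False True
```

The same idea applies verbatim with `h5py.File(self.path, 'r')` inside `__getitem__`; setting `num_workers=0` also sidesteps the problem, at the cost of loading speed.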

python train.py --dataset ${dataset}

Hi Bo,
when I train with the CNN dataset, model_best_video.pt is never generated, only model_best_text.pt. However, when training with the Daily Mail dataset, both model_best_text.pt and model_best_video.pt are generated. I checked the code but could not find the reason. Do you know what the cause is?
Looking forward to your reply. Thank you very much, and I wish you all the best!

Question about logs

Thanks for your contribution to open source!
I just reviewed the logs using TensorBoard. It seems that before calling SummaryWriter.add_scalar, you may need to add a prefix like split_{index} for datasets such as SumMe; otherwise the curves of the different splits overwrite each other.

[TensorBoard screenshot: the image above was generated when training A2Summ on the SumMe dataset.]
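If that diagnosis is right, the fix is just to namespace each scalar tag by split before logging; TensorBoard groups scalars into sections by the path segment before the first slash. A minimal sketch (the tag names are illustrative, not the repo's):

```python
def scalar_tag(split_index, metric):
    # e.g. "split_0/val/F1" -- keeps the cross-validation splits
    # from writing to the same curve
    return f"split_{split_index}/{metric}"

# usage with torch.utils.tensorboard (writer not created here):
# writer.add_scalar(scalar_tag(split, "val/F1"), f1, epoch)
```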
