MAST: A Memory-Augmented Self-supervised Tracker (CVPR 2020)
Home Page: https://zlai0.github.io/MAST/
Hello:
Thanks for sharing your excellent work. You said that you pretrain the model on pairs of input frames, and then train the network with multiple reference frames.
I have two questions as follows:
1. What batch size did you use when training with only short-term or only long-term memory to get the Table 5 "only short" result (J-mean 57.3 and F-mean 61.8)?
2. What batch size did you use when training with multiple reference frames?
Thank you very much!
Thanks for your amazing work. In the Implementation Details of your paper, you describe an approach for the Image Feature Alignment, which I would like to try. Could you please indicate the location of it in your code or give more details about it?
Hello, thank you for your excellent work.
I have a question about SpatialCorrelationSampler. Does it compute the similarity or sample the neighbourhood?
Thank you very much!
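As far as I understand, SpatialCorrelationSampler computes similarity: for each target pixel it takes dot products with every pixel in a local neighbourhood of the reference feature map (FlowNet-style correlation). A minimal numpy sketch of the idea, not the actual CUDA kernel (local_correlation and all names here are mine):

```python
import numpy as np

def local_correlation(feat_t, feat_r, radius):
    """Dot-product similarity between each pixel of feat_t and a
    (2*radius+1)^2 neighbourhood around the same location in feat_r.
    feat_t, feat_r: (C, H, W) feature maps. Returns (d, d, H, W)."""
    C, H, W = feat_t.shape
    d = 2 * radius + 1
    # zero-pad the reference so out-of-bounds neighbours score 0
    pad = np.pad(feat_r, ((0, 0), (radius, radius), (radius, radius)))
    out = np.zeros((d, d, H, W), dtype=feat_t.dtype)
    for dy in range(d):
        for dx in range(d):
            shifted = pad[:, dy:dy + H, dx:dx + W]
            # similarity score, not a sampling of pixel values
            out[dy, dx] = (feat_t * shifted).sum(axis=0)
    return out

f = np.random.rand(8, 6, 6).astype(np.float32)
corr = local_correlation(f, f, radius=2)
# the centre displacement compares each pixel with itself
assert np.allclose(corr[2, 2], (f ** 2).sum(axis=0))
```

So the output is one similarity score per displacement, which the tracker can then softmax over to aggregate labels from the neighbourhood.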
Hi, were you able to figure out why the model was not converging (in PyTorch > 1.1)?
I am facing the same issue in PyTorch 1.6 as well.
Hi,
May I ask which version you used for training? (2018 or 2019)
And did you use train_all_frames.zip or train.zip?
Thanks.
Thanks for your great work! I have trained my model successfully, but when I try to test it with benchmark.py, I notice that a 22G GPU is needed, while I only have 2 GPUs with 12G of memory each. I tried to set the CUDA device, but it didn't work. Can you give me some advice on my problem? Looking forward to your reply!
When I change the dataset, the test always reports the error "cannot reshape array of size 1 into shape (3)". How can I solve this?
Excellent work!
I confirm that the test results with the provided pre-trained model match the released ones. However, with the given code I cannot train a model with similar performance on the YouTube-VOS dataset on my own; my model only reached Js=0.405 and Fs=0.481 after 30 epochs of training. What could be the problem?
When I check the training code in main.py (line 184), I find that the model is only trained on pairwise data. Could you please release the code for the long- and short-term memory described in the paper? Is that the reason I cannot achieve a higher score?
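While the multi-frame code is unreleased, here is a sketch of how the long- and short-term memory could select reference frames; the exact indices are my guess from the paper, not from this repo, and memory_indices is a made-up name:

```python
def memory_indices(t, long_term=(0, 5), short_term_offsets=(5, 3, 1)):
    """Pick reference-frame indices for target frame t: a few fixed
    long-term frames plus a few recent short-term frames.
    The specific defaults here are illustrative assumptions."""
    long = [i for i in long_term if i < t]
    short = [t - off for off in short_term_offsets if t - off >= 0]
    # deduplicate while keeping order, and drop anything >= t
    seen, refs = set(), []
    for i in long + short:
        if 0 <= i < t and i not in seen:
            seen.add(i)
            refs.append(i)
    return refs

assert memory_indices(20) == [0, 5, 15, 17, 19]
assert memory_indices(1) == [0]
```

Training on multiple reference frames would then mean feeding all of these frames' features and labels to the attention step instead of a single pair.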
Hi,
May I know how you test on YouTube-VOS, given that new objects appear in middle frames in that dataset?
In that case, I am confused about how to handle objects added in middle frames, e.g. how to decide memory usage, how to merge segmentation results, etc.
Thanks.
Hi @zlai0, thanks for the nice work and for sharing the code. I have a couple of questions and would appreciate your help.
Thank you in advance!
Hello:
Thank you very much for sharing.
Could you please release the code for multi-frame training as described in the paper?
Thanks.
Thank you for sharing such great work!
I have some questions about the implementation details of this paper.
I would be grateful if you could help me with them. Looking forward to the coming code.
Hi Zhiang,
I'm trying to reproduce your results; however, I'm confused about what to change in the training script for the two phases of training described in the paper.
Step 1:
"During training, we first pretrain the network with a pair of input frames, i.e. one reference frame and one target frame are fed as inputs. One of the color channels is randomly dropped with probability p = 0.5. We train our model end-to-end using a batch size of 24 for 1M iterations with the Adam optimizer. The initial learning rate is set to 1e-3, and halved after 0.4M, 0.6M and 0.8M iterations."
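The quoted schedule amounts to a step decay that halves the learning rate at each milestone; a minimal sketch, assuming the milestones map directly to iteration counts (learning_rate is my own helper, not from the repo):

```python
def learning_rate(iteration, base_lr=1e-3,
                  milestones=(400_000, 600_000, 800_000)):
    """Halve the learning rate at each milestone, per the quoted schedule."""
    lr = base_lr
    for m in milestones:
        if iteration >= m:
            lr *= 0.5
    return lr

assert learning_rate(0) == 1e-3          # before the first milestone
assert learning_rate(450_000) == 5e-4    # halved once
assert learning_rate(850_000) == 1.25e-4 # halved three times
```

In PyTorch this is what torch.optim.lr_scheduler.MultiStepLR with gamma=0.5 and those milestones would produce, assuming the scheduler is stepped per iteration rather than per epoch.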
Step 2:
"We then finetune the model using multiple reference frames (our full memory-augmented model) with a small learning rate of 2e-5 for another 1M iterations."
Could you provide corresponding commands in scripts/train for this purpose?
For example, do you use 2 GPUs with bsize=12, or 1 GPU with bsize=24?
And what needs to change in main.py to enable the full memory-augmented model for step 2?
If anyone else has figured this out, please let me know.
Thanks,
Hi, when I ran benchmark.py to test, it raised an error. I wonder what the palette.txt file is? I couldn't find it anywhere. Looking forward to your answer!
Thanks for sharing. I would also like to know whether I can test on my own videos.
Hi,
I'm really inspired by your work, and would love to check out the code! It's been 4 months since your initial commit. When do you plan to open-source?
Thank you so much!
Hi,
I could see that the variable offset0 is used in this line:
Line 101 in a57b043
However, it is calculated in the last iteration of the for loop over nsearch, i.e. it won't contain the correct offset for each searched image.
Here:
Lines 70 to 80 in a57b043
Ideally, im_col0 could have been collected inside the same for loop for every image in nsearch, to reflect the correct offset for each image.
Could you please clarify why this is not done?
Thanks!
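For illustration, the fix being asked about is the standard pattern of collecting a value inside the loop that produces it, instead of reusing whatever is left over from the final iteration; a toy sketch with made-up names (collect_offsets and compute_offset are mine, not the repo's functions):

```python
def collect_offsets(images, compute_offset):
    """Keep one offset per searched image by appending inside the loop,
    rather than reusing the value from the last loop iteration."""
    offsets = []
    for img in images:
        offsets.append(compute_offset(img))  # per-image, inside the loop
    return offsets

# toy check: each image gets its own offset
offs = collect_offsets([1, 2, 3], lambda x: x * 10)
assert offs == [10, 20, 30]
```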
Thank you for sharing such great work!
The code currently seems to consider only one pair of images (one reference frame and one target frame), but the memory bank containing multiple past frames is a crucial component of MAST.
I would like to know how to implement this part of the code. Is it possible to do so by modifying ref_num in YouTubeVOSTrain.py?
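Until the multi-frame code is released, here is a toy sketch of what a memory bank might look like; the eviction policy (keeping frame 0 as a long-term anchor) is my own guess, and MemoryBank is not a class from this repo:

```python
class MemoryBank:
    """Toy memory bank: keeps features/labels of past frames so the
    target frame can attend to several references, not just one."""
    def __init__(self, max_size=5):
        self.max_size = max_size
        self.frames = []  # list of (feature, label) tuples

    def add(self, feature, label):
        self.frames.append((feature, label))
        if len(self.frames) > self.max_size:
            # evict the oldest short-term frame; keep frame 0 as a
            # long-term anchor (an assumption, not the paper's policy)
            self.frames.pop(1)

    def references(self):
        return list(self.frames)

bank = MemoryBank(max_size=3)
for t in range(5):
    bank.add(f"feat{t}", f"lab{t}")
# frame 0 survives; the middle-aged frames were evicted
assert [f for f, _ in bank.references()] == ["feat0", "feat3", "feat4"]
```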
Traceback (most recent call last):
File "benchmark.py", line 184, in
main()
File "benchmark.py", line 54, in main
test(TrainImgLoader, model, log)
File "benchmark.py", line 95, in test
anno_1 = annotations[i+1]
IndexError: list index out of range
Hi, and thanks for sharing the code. I faced an issue with the offset values for frames in the memory bank: it seems the same offset (offset0) is being used for all frames (here).
Shouldn't a specific offset be used for each frame in the memory instead? Thank you in advance.
Thanks for sharing the code. When I run the training code, I hit the following issue:
YTVOSTrainLoader.py", line 28, in lab_preprocess
image = cv2.cvtColor(image, cv2.COLOR_BGR2Lab)
cv2.error: OpenCV(4.3.0) /io/opencv/modules/imgproc/src/color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function 'cv::impl::{anonymous}::CvtHelper<VScn, VDcn, VDepth, sizePolicy>::CvtHelper(cv::InputArray, cv::OutputArray, int) [with VScn = cv::impl::{anonymous}::Set<3, 4>; VDcn = cv::impl::{anonymous}::Set<3>; VDepth = cv::impl::{anonymous}::Set<0, 5>; cv::impl::{anonymous}::SizePolicy sizePolicy = cv::impl::::NONE; cv::InputArray = const cv::_InputArray&; cv::OutputArray = const cv::_OutputArray&]'
Invalid number of channels in input image:
'VScn::contains(scn)'
where
'scn' is 1
I think it may be caused by converting grayscale images to the Lab color space. Can you suggest a solution?
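The error above means cv2.COLOR_BGR2Lab received a 1-channel image. One workaround is to replicate the grey channel to three channels before conversion; a numpy-only sketch (ensure_three_channels is my own helper; the cv2.cvtColor call would follow it):

```python
import numpy as np

def ensure_three_channels(image):
    """Replicate a single-channel (grayscale) image to 3 channels so
    cv2.cvtColor(image, cv2.COLOR_BGR2Lab) no longer rejects it."""
    if image.ndim == 2:                            # (H, W) grayscale
        image = np.repeat(image[..., None], 3, axis=2)
    elif image.ndim == 3 and image.shape[2] == 1:  # (H, W, 1)
        image = np.repeat(image, 3, axis=2)
    return image

gray = np.zeros((4, 4), dtype=np.uint8)
assert ensure_three_channels(gray).shape == (4, 4, 3)
```

Calling this at the top of lab_preprocess, before the cvtColor call, should make the dataset's occasional grayscale frames go through.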
Hello:
Thanks for your excellent work. In Table 5, memory is divided into only-short, only-long, and mixed. I want to know how many reference frames are used in the only-short setting to get the result J-mean 0.573 and F-mean 0.618?
Thank you very much! (^_^)
Sorry to bother you. I tried to run the code you provided, but got a raise batch.exc_type(batch.exc_msg) error after hundreds of batches. Note that the photos have been resized to [256, 256] and drop_last=True; I don't know what other factors could be related to fixing this.
Lines 43 to 44 in a57b043
This is essentially:
drop_ch_num = 1
drop_ch_ind = [np.random.choice([1, 2])]
In the paper, all three channels of Lab are decorrelated. Is it on purpose that the L channel is preserved?
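For comparison, dropping any one of the three Lab channels with probability 0.5, as the paper describes, might look like this sketch (drop_random_channel is my own name, not the repo's code):

```python
import numpy as np

def drop_random_channel(lab_image, p=0.5, rng=np.random.default_rng()):
    """With probability p, zero out one of the three Lab channels,
    chosen uniformly (including L, unlike the repo's [1, 2] choice)."""
    out = lab_image.copy()
    if rng.random() < p:
        ch = rng.integers(0, 3)   # 0 = L, 1 = a, 2 = b
        out[..., ch] = 0
    return out

img = np.ones((2, 2, 3), dtype=np.float32)
dropped = drop_random_channel(img, p=1.0)
# exactly one channel has been zeroed
assert sorted(dropped.sum(axis=(0, 1)).tolist()).count(0.0) == 1
```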
Great work! Thanks in advance.
Hi, thank you for sharing such an awesome project. I have some questions about the details for your paper.
In the step 4 of Algorithm 1 (Section 3.3.1), you said "the output pixel's labels is determined by aggregating the labels of the ROI pixels". My question is how you treat several frames' ROI.
Do you use an algorithm similar to STM? For example, if the query's size is H * W * C (C is the number of channels) and the size of your restricted attention in all previous T keys is P, do you calculate an affinity matrix of size (HW * TPP)?
Or do you do the same thing as Cycle-Consistency, i.e. use a K-NN strategy and average all previous predictions?
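For reference, the STM-style reading of step 4 would build an (HW, T·P·P) affinity and aggregate labels through its softmax. A toy dense sketch, where the ROI restriction is assumed to have already reduced each key frame to N = P*P positions (aggregate_labels and all shapes are my own):

```python
import numpy as np

def aggregate_labels(query, keys, labels):
    """query: (HW, C); keys: (T, N, C) where N is the number of key
    positions kept per frame (P*P after the ROI restriction);
    labels: (T, N, K) one-hot labels. Returns (HW, K) soft labels."""
    T, N, C = keys.shape
    k = keys.reshape(T * N, C)             # flatten the memory: (T*N, C)
    a = query @ k.T / np.sqrt(C)           # affinity: (HW, T*N)
    a = np.exp(a - a.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)      # softmax over all memory pixels
    return a @ labels.reshape(T * N, -1)   # label aggregation: (HW, K)

q = np.random.rand(6, 4)
keys = np.random.rand(3, 5, 4)
labs = np.eye(2)[np.random.randint(0, 2, size=(3, 5))]
out = aggregate_labels(q, keys, labs)
assert out.shape == (6, 2)
assert np.allclose(out.sum(axis=1), 1.0)  # soft labels sum to one
```

Under this reading, the softmax spans all T frames jointly, so memory frames compete with each other rather than being averaged per frame as in the Cycle-Consistency K-NN approach.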
I really appreciate it if you could solve my concerns. Looking forward to your official code. Thank you!
Sorry to bother you, but I can't find this file in the YouTube-VOS dataset. Thank you.
Sorry to bother you with another question. So far I don't have much coding experience. I want to reduce the memory usage by changing the code to process frame by frame, but I don't know how. Could you please give me some hints? Thank you so much!