```python
def forward(self, retar_img, global_img=None):
    if global_img is None:
        print('Lack of global_image!')
    x_r = F.relu(self.bn1(self.conv5(self.vggmodel(retar_img))))
    x_r = F.relu(self.bn2(self.conv6(x_r)))
    x_r = x_r.view(x_r.size(0), -1)
    x = x_r
    action = self.classifier(x)
    print('Q is:', action)
    return action
```
Hello, author. Thank you for the code. I'm a little confused. As mentioned in the paper: "The global feature is used as a reference and enables the retargeted image not to deviate too far from the original image in a fine-grained way. The size of each feature representation vector is 1600. We concatenate the two representation vectors together, and then throw them into the fully connected layer to get a three-dimensional vector." So each time, the global feature should be concatenated with the local feature before being fed into the fully connected layer. But the code above only passes the retargeted image through the convolutional layers to extract the local feature, and then feeds only that local feature into the fully connected layer. Have I misunderstood something?
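For reference, a minimal sketch of the forward pass as the paper describes it: both images go through a shared feature extractor producing 1600-dim vectors, which are concatenated and mapped to a 3-dim action vector. The class name, the stand-in feature extractor, and all layer shapes here are illustrative assumptions, not the author's actual network.

```python
import torch
import torch.nn as nn

class RetargetQNetSketch(nn.Module):
    """Hypothetical sketch: concatenate local (retargeted) and global
    (original) 1600-dim features before the fully connected layer."""

    def __init__(self, feat_dim=1600, n_actions=3):
        super().__init__()
        # Stand-in for the VGG + conv5/conv6 extractor in the repo;
        # chosen only so the flattened feature is exactly 1600-dim.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((10, 10)),   # 16 * 10 * 10 = 1600
            nn.Flatten(),
        )
        # The classifier sees 2 * 1600 = 3200 inputs after concatenation.
        self.classifier = nn.Linear(2 * feat_dim, n_actions)

    def forward(self, retar_img, global_img):
        x_r = self.features(retar_img)        # local feature, (B, 1600)
        x_g = self.features(global_img)       # global feature, (B, 1600)
        x = torch.cat([x_r, x_g], dim=1)      # concatenated, (B, 3200)
        return self.classifier(x)             # action vector, (B, 3)

net = RetargetQNetSketch()
q = net(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
print(q.shape)  # torch.Size([2, 3])
```

If this matches the paper, the released `forward` would need a second branch for `global_img` plus the `torch.cat` before `self.classifier`.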
Hello, author. Could the training code be shared? My coding ability is weak, and I don't know how to implement the loss-function part of the training process. I would appreciate it if you could share the training code with me!
I'm sorry to bother you again. I tried to write train.py myself, but I kept running into all kinds of errors. We would really appreciate it if you could provide your train.py. I also wonder how checkpoint2d.pth.tar was trained. Thank you very much!
I came here from the Cdvdtsp issue. I have the same confusion as you: my results match yours, and there is a gap between them and the author's results. May I ask whether you have solved that problem, and if so, how? Sorry to interrupt.
Hi, you used some other methods, such as WSSDCNN and Cycle-IR, for comparison, but I can't find any usable code for them. Could you explain how you implemented these methods and used them for comparison? @ZYzhouya