rodeo's People

Contributors

manoja328

rodeo's Issues

[Questions] About the Edge Box Proposals Extraction

Hi Manoj,

Thank you for your great work! I notice that the README only provides the EdgeBox proposals file for VOC 2007, and in extract_coco_features.py I also do not see any EdgeBox proposals for MSCOCO 2014.

This confuses me because, in Section 5.5 (Implementation Details) of your paper, you mention using edge box proposals following ILWFOD [CVPR 2017]. Am I misunderstanding this detail? Is there a reason the edge box proposals for the MSCOCO 2014 dataset are missing?

My other question is: how can I obtain the edge box proposals myself, e.g. reproduce the proposals file you provide for VOC 2007?

Best,
Mingfu
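The README does not say how the VOC 2007 proposals file was generated, but one common route is OpenCV's EdgeBoxes implementation in the ximgproc contrib module. The sketch below is hypothetical: the function name, model path, and box count are assumptions, and it requires opencv-contrib-python plus the pretrained structured-edge model file from the OpenCV extra data.

```python
import numpy as np

def edgebox_proposals(bgr_image, edge_model_path="model.yml.gz", max_boxes=2000):
    """Return up to max_boxes EdgeBox proposals as (x, y, w, h) rows.

    Sketch only: assumes opencv-contrib-python and the pretrained
    structured-edge model file (edge_model_path) are available.
    """
    import cv2  # cv2.ximgproc requires opencv-contrib-python

    # Structured-edge detector expects a float32 RGB image in [0, 1]
    detector = cv2.ximgproc.createStructuredEdgeDetection(edge_model_path)
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    edges = detector.detectEdges(rgb)
    orientation = detector.computeOrientation(edges)
    edges = detector.edgesNms(edges, orientation)

    # Score and rank candidate boxes from the edge map
    eb = cv2.ximgproc.createEdgeBoxes()
    eb.setMaxBoxes(max_boxes)
    boxes = eb.getBoundingBoxes(edges, orientation)
    # note: recent OpenCV versions return a (boxes, scores) tuple instead
    return boxes
```

Whether the repo's provided VOC 2007 file was produced with this OpenCV pipeline or the original MATLAB EdgeBoxes code is not stated, so treat the parameters above as a starting point only.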

Missing the parameter pq_features when I run the train_better.py file

Hi, an error occurs when I run the train_better.py file:

Traceback (most recent call last):
  File "train_better.py", line 296, in <module>
    loss_dict = model(images, targets)
  File "/home/cy/.conda/envs/RODEO/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/cy/rodeo-master/frcnn_mod.py", line 57, in forward
    raise ValueError("In training mode, targets should be passed")
ValueError: In training mode, targets should be passed

I find I need a parameter pq_features here:

rodeo/frcnn_mod.py

Lines 43 to 77 in c7f340a

def forward(self, images, pq_features, targets=None):
    """
    Arguments:
        images (list[Tensor]): images to be processed
        targets (list[Dict[Tensor]]): ground-truth boxes present in the image (optional)
    Returns:
        result (list[BoxList] or dict[Tensor]): the output from the model.
            During training, it returns a dict[Tensor] which contains the losses.
            During testing, it returns list[BoxList] contains additional fields
            like `scores`, `labels` and `mask` (for Mask R-CNN models).
    """
    if self.training and targets is None:
        raise ValueError("In training mode, targets should be passed")
    original_image_sizes = [img.shape[-2:] for img in images]
    images, targets = self.transform(images, targets)
    features = self.backbone(pq_features)
    if isinstance(features, torch.Tensor):
        features = OrderedDict([(0, features)])
    proposals, proposal_losses = self.rpn(images, features, targets)
    proposals = [p.to(features[0].device) for p in proposals]
    detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)
    detections = self.transform.postprocess(detections, images.image_sizes, original_image_sizes)
    losses = {}
    losses.update(detector_losses)
    losses.update(proposal_losses)
    if self.training:
        return losses
    return detections

but in train_better.py, only images and targets are passed:
loss_dict = model(images, targets)

Is this the problem? How can I solve it?
Looking forward to your reply. Thanks.
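The traceback is consistent with an argument-binding mismatch: forward() was changed to take pq_features as its second positional argument, so the old call model(images, targets) binds targets into the pq_features slot and leaves targets=None. The stand-in below (not the real torchvision model) illustrates this; the name pq_features and the idea that it holds the batch's PQ-reconstructed features loaded elsewhere are assumptions.

```python
# Minimal stand-in illustrating the argument mismatch behind the traceback.
class ModifiedFRCNN:
    training = True

    def __call__(self, images, pq_features, targets=None):
        if self.training and targets is None:
            # Same check that fires in frcnn_mod.py
            raise ValueError("In training mode, targets should be passed")
        return {"loss_classifier": 0.0, "loss_rpn": 0.0}  # placeholder losses

model = ModifiedFRCNN()
images = [object()]
targets = [{"boxes": [], "labels": []}]

# Old call from train_better.py: targets lands in the pq_features slot,
# so targets stays None and the ValueError is raised.
try:
    model(images, targets)
except ValueError as e:
    print("fails as in the traceback:", e)

# Likely fix: pass the precomputed features explicitly (where they come
# from in this repo is an assumption -- some loader must supply them).
pq_features = [object()]
loss_dict = model(images, pq_features, targets=targets)
print("loss keys:", sorted(loss_dict))
```

If train_better.py in the repo never builds pq_features, the training script and the modified model are simply out of sync, and the script would need to be updated to load and pass those features.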

The incremental process

Hello, I have some doubts about the incremental process. Could you elaborate on how the program performs the incremental steps? For example, what should I do after initializing G with the VOC dataset?

Where is get_data128 function?

It is a pleasure to see this project.
I have a question about get_distillinfo in train_better.py:

def get_distillinfo(model, dl):
    save = {}
    print("dumping info ......")
    model.eval()
    with torch.no_grad():
        for ii, (images, targets) in tqdm(enumerate(dl), total=len(dl)):
            images = list(image.to(device) for image in images)
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            for image, target in zip(images, targets):
                image_id = '{0:06d}'.format(target['image_id'].item())
                info = model.get_data128([image], [target])
                save[image_id] = info
        return save

Where can I find the get_data128() function? I don't see it in frcnn_mod.py.

Not able to create environment

Hello, I am getting a "Solving environment: failed" error when trying to create the environment from the given yml file. This is most likely because I am on a Windows machine and the file pins platform-dependent packages. Is there a way I can run this on Windows as well?
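A common workaround, assuming the solve failure comes from Linux-specific build strings pinned in the yml: strip the trailing build field from each dependency so conda can re-solve package versions for Windows. The yml contents below are a stand-in for demonstration; run the sed on the repo's actual environment.yml.

```shell
# Demo on a stand-in yml (replace with the repo's environment.yml):
cat > environment.yml <<'EOF'
dependencies:
  - python=3.7=h33d41f4_1
  - pytorch=1.4.0=py3.7_cuda10.1.243_cudnn7.6.3_0
EOF

# Drop the trailing "=buildstring" from each "- name=version=buildstring"
# line, keeping only "- name=version".
sed -E 's/^([[:space:]]*-[[:space:]][^=]+=[^=]+)=[^=]*$/\1/' environment.yml > environment_nobuilds.yml
cat environment_nobuilds.yml

# then: conda env create -f environment_nobuilds.yml
```

Note that CUDA-pinned builds of PyTorch may still fail to resolve on Windows; in that case the pytorch/torchvision lines usually need to be reinstalled separately following the official PyTorch install selector.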
