
bbaug's People

Contributors

harpalsahota, sadransh


bbaug's Issues

Is it possible to apply this to an image with a single bounding box?

Hello,

thanks for the cool project.

My images/ground-truth annotations have a single bounding box instead of two. Is it possible to modify the code so that the augmentation works for images with a single bounding box? When I applied your code, I got the following error:


TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
     17 # e.g. [[x_min, y_min, x_max, y_max], [x_min, y_min, x_max, y_max]]
     18 # Labels are the class labels for the bounding boxes as an iterable of ints e.g. [1,0]
---> 19 img_aug, bbs_aug = policy_container.apply_augmentation(random_policy, image, bounding_boxes, labels)
     20 # image_aug: numpy array of the augmented image
     21 # bbs_aug: numpy array of augmented bounding boxes in format: [[label, x_min, y_min, x_max, y_max],...]

/opt/conda/lib/python3.6/site-packages/bbaug/policies/policies.py in apply_augmentation(self, policy, image, bounding_boxes, labels)
    480         [
    481             BoundingBox(*bb, label=label)
--> 482             for bb, label in zip(bounding_boxes, labels)
    483         ],
    484         image.shape

/opt/conda/lib/python3.6/site-packages/bbaug/policies/policies.py in <listcomp>(.0)
    480         [
    481             BoundingBox(*bb, label=label)
--> 482             for bb, label in zip(bounding_boxes, labels)
    483         ],
    484         image.shape

TypeError: __init__() missing 1 required positional argument: 'y2'
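For what it's worth, this error usually means a flat coordinate list was passed instead of a list of boxes: `apply_augmentation` zips `bounding_boxes` with `labels`, so a flat list gets iterated coordinate by coordinate. A minimal sketch (hypothetical coordinates) of the difference:

```python
# A single box must still be wrapped in an outer list.
flat = [10, 20, 50, 60]        # looks like one box, but zip() will
                               # iterate over the four ints separately
nested = [[10, 20, 50, 60]]    # what bbaug expects: a list of boxes
labels = [1]                   # one label per box

pairs_flat = list(zip(flat, labels))      # [(10, 1)] -> BoundingBox(10, label=1)
pairs_nested = list(zip(nested, labels))  # [([10, 20, 50, 60], 1)]
```

With the nested form, `BoundingBox(*bb, label=label)` receives all four coordinates, so the `y2` argument is no longer missing.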

Apply augmentations on entire batch

Looking at the documentation, it seems that I would apply the augmentation one image at a time. Is there a way to apply this to an entire batch?
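As far as I can tell the documented API is per-image, so there is no built-in batch call; a thin wrapper loop is the usual workaround. A sketch (the function name and batch layout here are my own, not part of the library):

```python
# Minimal sketch: bbaug augments one image at a time, so batching is a loop.
# policy_container is a bbaug PolicyContainer; images, batch_boxes, and
# batch_labels are parallel lists, one entry per image.
def augment_batch(policy_container, images, batch_boxes, batch_labels):
    """Apply an independently sampled random sub-policy to each image."""
    out = []
    for img, boxes, labels in zip(images, batch_boxes, batch_labels):
        policy = policy_container.select_random_policy()
        img_aug, bbs_aug = policy_container.apply_augmentation(
            policy, img, boxes, labels
        )
        out.append((img_aug, bbs_aug))
    return out
```

Each image still gets its own randomly sampled sub-policy, which matches how the policies are meant to be applied.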

TypeError: __init__() got multiple values for argument 'label'

Code to reproduce:

class ExampleDataset:
    def __init__(self, root, policy_container=None, is_train=True):
        self.root = root
        self.data_type = "train" if is_train else "val"
        self.policy_container = policy_container
        # main
        # self.imgs = list(sorted(glob.glob(f'{root}/images/{self.data_type}/*.jpg')))
        # self.boxes = list(sorted(glob.glob(f'{root}/labels/{self.data_type}/*.txt')))
        # test
        self.imgs = list(sorted(glob.glob(f'{root}/test_data/*.jpg')))
        self.boxes = list(sorted(glob.glob(f'{root}/test_data/*.txt')))
        self.out_dir = f'{root}/test_data_out/'
        if not os.path.exists(self.out_dir):
            os.mkdir(self.out_dir)

    def __len__(self):
        return len(self.imgs)
        
    # def __getitem__(self, idx):
    #     img = np.array(Image.open(self.imgs[idx]))
    #     boxes_path = self.boxes[idx]
    #     height, width, _ = img.shape
    #     # For convenience I’ve hard coded the label and co-ordinates as label, x_min, y_min, x_max, y_max
    #     # for each bounding box in the image. For your own model you will need to load
    #     # in the coordinates and do the appropriate transformations.
    #     boxes = []
    #     labels = []
    #     with open(boxes_path, 'r') as in_box:
    #         for line in in_box:
    #             if line:
    #                 line = line.split()
    #                 xywh = list(map(int, map(float, line[1:])))
    #                 xyxy = self.convert_xyxy(xywh, width=width, height=height)
    #                 boxes.append(xyxy)
    #                 labels.append(int(line[0]))
        
    #     if self.policy_container:

    #         # Select a random sub-policy from the policy list
    #         random_policy = self.policy_container.select_random_policy()
    #         print(random_policy)

    #         # Apply this augmentation to the image, returns the augmented image and bounding boxes
    #         # The boxes must be at a pixel level. e.g. x_min, y_min, x_max, y_max with pixel values
    #         img_aug, bbs_aug = self.policy_container.apply_augmentation(
    #             random_policy,
    #             img,
    #             boxes,
    #             labels,
    #         )
    #         labels = np.array(labels)
    #         boxes = np.hstack((np.vstack(labels), np.array(boxes))) # Add the labels to the boxes
    #         bbs_aug= np.array(bbs_aug)
            
    #         # Only return the augmented image and bounded boxes if there are
    #         # boxes present after the image augmentation
    #         if bbs_aug.size > 0:
    #             return img, boxes, img_aug, bbs_aug
    #         else:
    #             return img, boxes, [], np.array([])
    #     return img, boxes

    def run(self, num_random=10):
        for idx in range(len(self.imgs)):
            img = np.array(Image.open(self.imgs[idx]))
            boxes_path = self.boxes[idx]
            height, width, _ = img.shape
            # For convenience I’ve hard coded the label and co-ordinates as label, x_min, y_min, x_max, y_max
            # for each bounding box in the image. For your own model you will need to load
            # in the coordinates and do the appropriate transformations.
            boxes = []
            labels = []
            with open(boxes_path, 'r') as in_box:
                for line in in_box:
                    if line:
                        line = line.split()
                        xywh = list(map(int, map(float, line[1:])))
                        xyxy = self.convert_xyxy(xywh, width=width, height=height)
                        boxes.append(xyxy)
                        labels.append(int(line[0]))
            
            if self.policy_container:
                # run $num_random times
                print("Processing: " + self.imgs[idx])
                i = 0
                for i in range(num_random):

                    # Select a random sub-policy from the policy list
                    random_policy = self.policy_container.select_random_policy()

                    # Apply this augmentation to the image, returns the augmented image and bounding boxes
                    # The boxes must be at a pixel level. e.g. x_min, y_min, x_max, y_max with pixel values
                    img_aug, bbs_aug = self.policy_container.apply_augmentation(
                        random_policy,
                        img,
                        boxes,
                        labels
                    )
                    labels = np.array(labels)
                    boxes = np.hstack((np.vstack(labels), np.array(boxes))) # Add the labels to the boxes
                    bbs_aug = np.array(bbs_aug)
                    
                    # Only return the augmented image and bounded boxes if there are
                    # boxes present after the image augmentation
                    if bbs_aug.size > 0:
                        print("Step: " + str(i))
                        print(random_policy)
                        # img, boxes, img_aug, bbs_aug
                        # to write
                        cv2.imwrite(str(i)+"_bbaug_"+self.imgs[idx], img_aug)
                        with open(str(i)+"_bbaug_"+self.boxes[idx], "w") as fw:
                            fw.writelines(bbs_aug)
                        i += 1
    
    def convert_xyxy(self, xywh, width, height):
        x, w = xywh[0] * width, xywh[2] * width
        y, h = xywh[1] * height, xywh[3] * height
        x1 = x - w / 2
        x2 = x + w / 2
        y1 = y - h / 2
        y2 = y + h / 2

        return list(map(int, [x1, y1, x2, y2]))
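A likely cause, from reading the loop above (an educated guess, not a confirmed fix): `labels` and `boxes` are reassigned inside the loop, so from the second iteration onward each box row already starts with its label. `BoundingBox(*bb, label=label)` then receives five positional values plus `label=`, which raises `TypeError: __init__() got multiple values for argument 'label'`. Building the stacked array into a new variable avoids mutating the loop inputs:

```python
import numpy as np

def stack_labels(boxes, labels):
    """Return a new [label, x_min, y_min, x_max, y_max] array without
    mutating the inputs, so they stay valid for the next iteration."""
    boxes_arr = np.asarray(boxes)
    labels_col = np.asarray(labels).reshape(-1, 1)
    return np.hstack((labels_col, boxes_arr))

# Hypothetical data: two boxes with labels 0 and 1
boxes = [[0, 0, 10, 10], [5, 5, 20, 20]]
labels = [0, 1]
stacked = stack_labels(boxes, labels)
# boxes and labels are unchanged, so they can safely be passed to
# apply_augmentation again on the next loop iteration
```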

Help: Generate annotation boxes for Augmented images

@harpalsahota thanks for this great repo.

I am working on an object detection project using the Detectron2 framework and would like to try out the augmentations suggested by Google Brain. I was wondering if there is a way to generate annotation boxes for the augmented images in the format below.

Detectron2 expects labels in the following format:

{'file_name': 'train_images/85fbb8dffe30d2b1.jpg',    # image path
 'height': 768,                                       # image height
 'width': 1024,                                       # image width
 'annotations': [{'bbox': [2.0, 0.0, 1022.0, 766.0],  # annotation box
                  'bbox_mode': 0,
                  'category_id': 24}]}                # class label

Could you help me with this?
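Not an official bbaug or Detectron2 helper, but since `apply_augmentation` returns rows of `[label, x_min, y_min, x_max, y_max]`, converting them into the Detectron2 record shape above is mostly reshuffling. A sketch (the function name is my own; `bbox_mode` 0 corresponds to Detectron2's `BoxMode.XYXY_ABS`):

```python
def to_detectron2_record(file_name, height, width, bbs_aug):
    """Convert bbaug's [label, x_min, y_min, x_max, y_max] rows into a
    Detectron2-style dataset record."""
    annotations = []
    for label, x_min, y_min, x_max, y_max in bbs_aug:
        annotations.append({
            'bbox': [float(x_min), float(y_min), float(x_max), float(y_max)],
            'bbox_mode': 0,  # BoxMode.XYXY_ABS
            'category_id': int(label),
        })
    return {
        'file_name': file_name,
        'height': height,
        'width': width,
        'annotations': annotations,
    }
```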

Ensure Repeatability

Is there a way to set the seed so that I can recreate how the policies were selected? I would like to have some type of record of augmentations applied so I can verify my training steps in the future when I have more data.
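I don't know which RNGs bbaug uses internally, so as a belt-and-braces sketch: seed both Python's `random` and NumPy before selecting policies, and keep a record of each selected policy so the run can be audited later. The wrapper name is my own:

```python
import random

import numpy as np

def seeded_policy_sequence(policy_container, n, seed=42):
    """Seed the RNGs bbaug might rely on (assumption: `random` and/or
    NumPy), then select and record n policies for later verification."""
    random.seed(seed)
    np.random.seed(seed)
    # Persisting this list gives a replayable record of the augmentations
    return [policy_container.select_random_policy() for _ in range(n)]
```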

GPU augmentations

Cool project.

Do any of the policies run the augmentation process on the GPU? Maybe something like NVIDIA DALI? (Obviously not exactly...)

integration with yolov5

I am new to computer vision and am looking to integrate these policies with yolov5. Any help would be appreciated.
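Not a full integration, but the glue that is usually needed: YOLOv5 labels are normalized `[class, x_center, y_center, w, h]`, while bbaug works on absolute `[x_min, y_min, x_max, y_max]` pixels. A sketch of the two conversions (function names are my own):

```python
def yolo_to_xyxy(box, width, height):
    """Normalized [xc, yc, w, h] -> absolute [x_min, y_min, x_max, y_max]."""
    xc, yc, w, h = box
    return [int((xc - w / 2) * width), int((yc - h / 2) * height),
            int((xc + w / 2) * width), int((yc + h / 2) * height)]

def xyxy_to_yolo(box, width, height):
    """Absolute [x_min, y_min, x_max, y_max] -> normalized [xc, yc, w, h]."""
    x1, y1, x2, y2 = box
    return [(x1 + x2) / 2 / width, (y1 + y2) / 2 / height,
            (x2 - x1) / width, (y2 - y1) / height]
```

Converting labels to pixel xyxy before `apply_augmentation` and back to normalized xywh afterwards should let the augmented data flow into the usual yolov5 training pipeline.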
