
craft-pytorch's Introduction

CRAFT: Character-Region Awareness For Text detection

Official PyTorch implementation of the CRAFT text detector | Paper | Pretrained Model | Supplementary

Youngmin Baek, Bado Lee, Dongyoon Han, Sangdoo Yun, Hwalsuk Lee.

Clova AI Research, NAVER Corp.

Sample Results

Overview

PyTorch implementation of the CRAFT text detector, which effectively detects text areas by exploring each character region and the affinity between characters. The bounding boxes of text are obtained by simply finding minimum bounding rectangles on the binary map after thresholding the character-region and affinity scores.
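
For intuition, here is a minimal sketch of that extraction step (not the repo's craft_utils.py; the threshold values are illustrative and the function name is made up):

import cv2
import numpy as np

def boxes_from_score_maps(region_score, affinity_score, text_threshold=0.7, link_threshold=0.4):
    # binarize: a pixel counts as text if either score clears its threshold
    binary = np.logical_or(region_score > text_threshold,
                           affinity_score > link_threshold).astype(np.uint8)
    num_labels, labels = cv2.connectedComponents(binary)
    boxes = []
    for k in range(1, num_labels):  # label 0 is background
        pts = np.column_stack(np.where(labels == k))[:, ::-1].astype(np.float32)  # (x, y) order
        rect = cv2.minAreaRect(pts)          # minimum bounding rectangle
        boxes.append(cv2.boxPoints(rect))    # 4 corner points
    return boxes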


Updates

13 Jun, 2019: Initial update
20 Jul, 2019: Added post-processing for polygon result
28 Sep, 2019: Added the trained model on IC15 and the link refiner

Getting started

Install dependencies

Requirements

  • PyTorch>=0.4.1
  • torchvision>=0.2.1
  • opencv-python>=3.4.2
  • see requirements.txt
pip install -r requirements.txt
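
For reference, a requirements.txt consistent with the versions above would look roughly like this (the file shipped in the repo may pin exact versions and include extra packages):

torch>=0.4.1
torchvision>=0.2.1
opencv-python>=3.4.2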

Training

The code for training is not included in this repository, and we cannot release the full training code for IP reasons.

Test instruction using pretrained model

  • Download the trained models
Model name  | Used datasets         | Languages | Purpose                      | Model Link
General     | SynthText, IC13, IC17 | Eng + MLT | For general purpose          | Click
IC15        | SynthText, IC15       | Eng       | For IC15 only                | Click
LinkRefiner | CTW1500               | -         | Used with the General Model  | Click
  • Run with pretrained model
python test.py --trained_model=[weightfile] --test_folder=[folder path to test images]

The result images and score maps will be saved to ./result by default.

Arguments

  • --trained_model: pretrained model
  • --text_threshold: text confidence threshold
  • --low_text: text low-bound score
  • --link_threshold: link confidence threshold
  • --cuda: use cuda for inference (default:True)
  • --canvas_size: max image size for inference
  • --mag_ratio: image magnification ratio
  • --poly: enable polygon type result
  • --show_time: show processing time
  • --test_folder: folder path to input images
  • --refine: use link refiner for sentence-level datasets
  • --refiner_model: pretrained refiner model
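
For example, a polygon-mode run that also uses the link refiner might look like this (craft_mlt_25k.pth appears elsewhere on this page; the refiner weight-file name is an assumption):

python test.py --trained_model=craft_mlt_25k.pth --test_folder=./test_images --poly --refine --refiner_model=craft_refiner_CTW1500.pth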


Citation

@inproceedings{baek2019character,
  title={Character Region Awareness for Text Detection},
  author={Baek, Youngmin and Lee, Bado and Han, Dongyoon and Yun, Sangdoo and Lee, Hwalsuk},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={9365--9374},
  year={2019}
}

License

Copyright (c) 2019-present NAVER Corp.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


craft-pytorch's Issues

Calculating the loss in the fine-tuning stage

How is the loss calculated in the fine-tuning stage? In the weakly-supervised training stage, I calculate the loss of each image in the batch, then sum them and divide by the batch size. Is that right?
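
For what it's worth, a minimal sketch of the scheme the question describes (per-image loss, then averaged over the batch); the MSE criterion and score-map layout are assumptions, not the authors' code:

import torch
import torch.nn.functional as F

def batch_loss(pred, gt_region, gt_affinity):
    # pred: (B, H, W, 2) with region and affinity channels (layout assumed)
    per_image = []
    for i in range(pred.size(0)):
        l = (F.mse_loss(pred[i, :, :, 0], gt_region[i]) +
             F.mse_loss(pred[i, :, :, 1], gt_affinity[i]))
        per_image.append(l)
    return torch.stack(per_image).mean()  # sum of per-image losses / batch size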

Very slow when loading the model in CPU mode via celery

I found that loading the model as a celery task in CPU mode gets stuck. This happens very frequently when the celery task starts for the first time.
With ptrace I found it stuck in "futex"; with ltrace I found all the time spent in memcpy/memset/malloc.
Another celery task, recognition with a PyTorch model, works fine.
I wonder if there is a futex issue in CPU mode when loading the model?

Pre-trained model performance on ICDAR15

Hi, thanks for your wonderful work! I ran into some issues when evaluating your model. I could only get precision 85.1, recall 79.4, h-mean 82.2 on the ICDAR 2015 dataset, which is lower than the results reported in the paper.

I am wondering whether you get similar results?

Training details about generating pseudo ground truth

Hi, thanks for your work. I have some questions about the details of generating pseudo ground truth.

  1. In Figure 6, I am confused about why we can't just input the whole image to generate the pseudo ground truth (with non-text region areas padded with 0).

  2. How is the image cropped?
    As I understand from Figure 4 of the paper, we should first crop the text region and then feed it to the network. Is that right? However, in ICDAR2015 the word ground truths are not axis-aligned rectangles, so how do we crop them (rotate and then crop?)?

I'm confused about model architecture

CRAFT-pytorch/craft.py

Lines 58 to 80 in 7afb2bd

def forward(self, x):
    """ Base network """
    sources = self.basenet(x)

    """ U network """
    y = torch.cat([sources[0], sources[1]], dim=1)
    y = self.upconv1(y)

    y = F.interpolate(y, size=sources[2].size()[2:], mode='bilinear', align_corners=False)
    y = torch.cat([y, sources[2]], dim=1)
    y = self.upconv2(y)

    y = F.interpolate(y, size=sources[3].size()[2:], mode='bilinear', align_corners=False)
    y = torch.cat([y, sources[3]], dim=1)
    y = self.upconv3(y)

    y = F.interpolate(y, size=sources[4].size()[2:], mode='bilinear', align_corners=False)
    y = torch.cat([y, sources[4]], dim=1)
    feature = self.upconv4(y)

    y = self.conv_cls(feature)

    return y.permute(0, 2, 3, 1), feature

The last stage's output shape is (H/32, W/32). F.interpolate looks like an up-sampling function, and the model has three of these up-sampling calls, yet the predicted shape is (H/2, W/2). How is this possible?

So I have a question about this:

self.slice5 = torch.nn.Sequential(
    nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
    nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6),
    nn.Conv2d(1024, 1024, kernel_size=1)
)

Does self.slice5 perform down-sampling?

train

Hi, I would like to try to train the network. Where can I download the training script?

Bounding box is shuffled. How to sort it?

First of all, thanks very much for your model. I used your pretrained model for text detection and obtained the bounding box coordinates as txt files. The resulting bounding boxes are shuffled and I could not sort them. When curved text is present on the same line, as in the image below, the order gets shuffled, and I need to sort it before passing it to the text extraction model.

image

I have not used POLY mode. For the above image, the model outputs a txt file in which the bounding-box coordinates are as follows. I have added the detected text for a better explanation of my problem. In this case the detection order is:

146,36,354,34,354,82,146,84 "Australian"

273,78,434,151,411,201,250,129 "Collection"

146,97,250,97,250,150,146,150 "vine"

77,166,131,126,154,158,99,197 "Old"

242,215,361,241,354,273,235,248 "Valley"

140,247,224,219,234,250,150,277 "Eden"

194,298,306,296,307,324,194,325 "Shiraz"

232,406,363,402,364,421,233,426 "Vintage"

152,402,216,405,215,425,151,422 "2008"

124,470,209,480,207,500,122,490 "South"

227,481,387,472,389,494,228,503 "Australia"

222,562,312,564,311,585,222,583 "Gibson"

198,564,217,564,217,584,198,584 "by"

386,570,421,570,421,600,386,600 "750 ml"

But the expected output is Australian->old->vine->collection->Eden->Valley->shiraz->2008->vintage->south->Australia->by->GIBSON->750ml.
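
There is no official answer in this thread, but a common heuristic is to cluster boxes into lines by vertical proximity and then sort each line left to right. A minimal sketch (it will still mis-order strongly curved text such as "Collection" above):

import numpy as np

def sort_reading_order(boxes, line_tol=0.5):
    # boxes: list of (4, 2) arrays of corner points
    if not boxes:
        return []
    items = []
    for b in boxes:
        b = np.asarray(b, dtype=np.float32)
        cy = b[:, 1].mean()
        h = b[:, 1].max() - b[:, 1].min()
        items.append((b[:, 0].mean(), cy, h, b))
    items.sort(key=lambda t: t[1])               # top to bottom
    lines, current = [], [items[0]]
    for it in items[1:]:
        prev = current[-1]
        if abs(it[1] - prev[1]) < line_tol * max(it[2], prev[2]):
            current.append(it)                   # same line: vertical centers are close
        else:
            lines.append(current)
            current = [it]
    lines.append(current)
    ordered = []
    for line in lines:
        line.sort(key=lambda t: t[0])            # left to right within a line
        ordered.extend(t[3] for t in line)
    return ordered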

Why are the results different between a local run of the repo and the web demo?

I tested my images first on the web demo, but when I test with the repo locally on my machine I get different results. Is there a difference in the parameters, or are you using different weights? I need to know the difference so I can apply it to my local version.

Website demo result that recognizes line by line [what I need]

Local repo result that recognizes word by word


How to get rectified polygons from polygon points?

In the paper you state:

"Moreover, with our polygon representation, the curved images can be rectified into straight text images, which are also shown in Fig. 11. We believe this ability for rectification can further be of use for recognition tasks."

My question is: given a set of polygon points, how can I reconstruct the rectified image? Can you kindly point me in the right direction? Many thanks in advance.
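
One way to approximate this (my sketch, not the authors' method) is a piecewise perspective warp: split the polygon into quadrilaterals and warp each onto a horizontal strip. This assumes a polygon ordering of 2N points, N along the top edge left to right followed by N along the bottom edge right to left:

import cv2
import numpy as np

def rectify_poly(image, poly, out_h=64):
    poly = np.asarray(poly, dtype=np.float32)
    n = len(poly) // 2
    top = poly[:n]
    bottom = poly[2 * n - 1: n - 1: -1]  # re-ordered left to right
    strips = []
    for i in range(n - 1):
        src = np.float32([top[i], top[i + 1], bottom[i + 1], bottom[i]])
        w = max(1, int(max(np.linalg.norm(top[i + 1] - top[i]),
                           np.linalg.norm(bottom[i + 1] - bottom[i]))))
        dst = np.float32([[0, 0], [w, 0], [w, out_h], [0, out_h]])
        M = cv2.getPerspectiveTransform(src, dst)
        strips.append(cv2.warpPerspective(image, M, (w, out_h)))
    return np.hstack(strips)  # the rectified, straight text image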

region map and affinity map

I found that my model's Gaussian maps are not as regular, smooth, and full as yours, and they are not good enough for detecting big words. Could you give me some advice?

Take inference using ONNX Runtime

I am referencing issue #4.

@hiepph: This issue is a reference to #4 (comment).

When I run this script I get an error:

  File "onxx_inference.py", line 27, in <module>
    boxes = craft_utils.adjustResultCoordinates(boxes, ratio_w, ratio_h)
  File "/home/ubuntu/ajinkya/CRAFT-pytorch/craft_utils.py", line 242, in adjustResultCoordinates
    polys[k] *= (ratio_w * ratio_net, ratio_h * ratio_net)

Any idea how to fix it? Also, how do I cv2.imwrite() the output image with boxes using onxx_inference.py?
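
For the cv2.imwrite() part of the question, a minimal sketch for drawing and saving the boxes once the coordinates have been adjusted (the function name is made up):

import cv2
import numpy as np

def save_boxes(image, boxes, out_path='result.jpg'):
    # image: BGR array as loaded by cv2; boxes: iterable of (4, 2) point arrays
    canvas = image.copy()
    for box in boxes:
        pts = np.asarray(box).astype(np.int32).reshape(-1, 1, 2)
        cv2.polylines(canvas, [pts], True, (0, 0, 255), 2)
    cv2.imwrite(out_path, canvas)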

The reimplementation results are not good

image
This is my reimplementation result. I found that it is not good enough, and the recall is much lower than your results. Could you give me some advice or point out things to pay attention to? Thanks a lot.

About training details

1. What is the size of the input image?
2. Did you adjust the learning rate during training?
3. What data augmentation methods are used in preprocessing?
4. Training on the SynthText dataset is very slow because of the huge number of images; can you give me any advice on how to accelerate the training process?
5. How long did you train the model, and how many GPUs did you use?

Sort the detected text based on the order of appearance

First of all, thanks very much for your model. I used your pretrained model for text detection, obtained the bounding box coordinates as txt files, and converted the polygons to rectangles for cropping the text areas in the image. The resulting bounding boxes are shuffled and I could not sort them. When curved text is present on the same line, as in the image below, the order gets shuffled, and I need to sort it before passing it to the text extraction model.

image

Converted the polygons to rectangles for cropping text areas

image

I have not used POLY mode. For the above image, the model outputs the same txt file of bounding-box coordinates listed in the issue above ("Bounding box is shuffled. How to sort it?"), and the expected reading order is the same.

Export to ONNX

I'm trying to export from pth to ONNX format:

import torch
from torch.autograd import Variable
import cv2
import imgproc
from craft import CRAFT

# load net
net = CRAFT()     # initialize
net = net.cuda()
net = torch.nn.DataParallel(net)

net.load_state_dict(torch.load('./craft_mlt_25k.pth'))
net.eval()


# load data
image = imgproc.loadImage('./misc/test.jpg')

# resize
img_resized, target_ratio, size_heatmap = imgproc.resize_aspect_ratio(image, 1280, interpolation=cv2.INTER_LINEAR, mag_ratio=1.5)
ratio_h = ratio_w = 1 / target_ratio

# preprocessing
x = imgproc.normalizeMeanVariance(img_resized)
x = torch.from_numpy(x).permute(2, 0, 1)    # [h, w, c] to [c, h, w]
x = Variable(x.unsqueeze(0))                # [c, h, w] to [b, c, h, w]
x = x.cuda()

# trace export
torch.onnx.export(net,
                  x,
                  'onnx/craft.onnx',
                  export_params=True,
                  verbose=True)

But then encountered this error:

RuntimeError: tuple appears in op that does not forward tuples (VisitNode at /opt/conda/conda-bld/pytorch_1556653114079/work/torch/csrc/jit/passes/lower_tuples.cpp:117)

Following these issues pytorch/pytorch#5315 and pytorch/pytorch#13397, it turned out that the nn.DataParallel wrapper doesn't support trace export for ONNX.

Is there a workaround for this?
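
A workaround often suggested for this class of error (an untested sketch, not an official fix) is to skip nn.DataParallel entirely and strip the "module." prefix from the checkpoint keys before exporting the bare model:

import torch
from collections import OrderedDict
from craft import CRAFT

net = CRAFT()
state = torch.load('./craft_mlt_25k.pth', map_location='cpu')
# checkpoints saved from a DataParallel model prefix every key with "module."
state = OrderedDict((k.replace('module.', '', 1), v) for k, v in state.items())
net.load_state_dict(state)
net.eval()

dummy = torch.randn(1, 3, 768, 768)  # input size chosen arbitrarily
torch.onnx.export(net, dummy, 'onnx/craft.onnx', export_params=True, verbose=True)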

The network structure in the code is different from that in the paper

In the paper, the VGG_bn output size is (w/32, h/32, 512). In the code, VGG_bn outputs 'h_fc7' (in vgg16_bn.py) like this:

relu2_2 shape: torch.Size([1, 384, 384, 128])
relu3_2 shape: torch.Size([1, 192, 192, 256])
relu4_3 shape: torch.Size([1, 96, 96, 512])
relu5_3 shape: torch.Size([1, 48, 48, 512])
fc7 shape: torch.Size([1, 48, 48, 1024])

with an input size of 768×768×3, and 768/48 = 16, not 32.

About results

I used SynthText to train CRAFT, but the results are very bad.
This is mine:
image
image

This is yours:
image
image

I don't know what caused this problem. I guess the difference is caused by the loss: I use OHEM, with all positives (gt > 0.1) and negatives (gt < 0.1), pos:neg = 1:3, and the loss is divided by the number of negatives and positives.
This is the training loss; its value looks small. Can you give me some guidance?
image

About watershed labeling

Hi, I found that it is not so easy to split the Gaussian distribution map.

Can you provide details of the watershed algorithm? For example, the binarization method used here, and the details of the initial marker passed to OpenCV's cv2.watershed() function? Or is a different function interface used entirely?

Problems detecting single characters or numbers

Hello,

I can't get bounding boxes for single-character words. The characters are detected, but because they have no link they don't get bounding boxes. Is there a way to make the link threshold not matter when there is only one character?

Gaussian heatmap ?

In the paper, you create the ground-truth labels using a Gaussian heatmap produced by another application. Can you show me the algorithm that creates the Gaussian heatmap? Thanks.
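
The repo does not ship the generator, but the paper describes computing one isotropic 2D Gaussian and perspective-warping it onto each character box. A rough sketch of that idea (the sizes and sigma are arbitrary choices of mine):

import cv2
import numpy as np

def gaussian_patch(size=64, sigma_ratio=0.25):
    # isotropic 2D Gaussian on a square canvas, peak value 1.0
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    sigma = size * sigma_ratio
    return np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))

def paint_char(heatmap, char_box, patch):
    # warp the square Gaussian onto the character quadrilateral and take the max
    size = patch.shape[0]
    src = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
    M = cv2.getPerspectiveTransform(src, np.float32(char_box))
    warped = cv2.warpPerspective(patch, M, (heatmap.shape[1], heatmap.shape[0]))
    np.maximum(heatmap, warped, out=heatmap)
    return heatmap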

Solution for error on cpu-only machine

net.load_state_dict(copyStateDict(torch.load(args.trained_model)))

When I run test.py on my computer, which does not have CUDA installed, I get the following error even though --cuda=False. I attached a picture, but I also include the error below.

image

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False.
If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.

In this case, modify the following line in the main of test.py and the code will run. Could you add this to the main code for CPU-only users? I put an example below.

    net.load_state_dict(copyStateDict(torch.load(args.trained_model)))

An example of a modified code:

    if args.cuda:
        net.load_state_dict(copyStateDict(torch.load(args.trained_model)))
    else:
        net.load_state_dict(copyStateDict(torch.load(args.trained_model, map_location='cpu')))

Thank you for publishing this good STD model.

Post-processing understanding

When reading the post-processing code, I was confused by these two parts. Your explanation is appreciated.

  1. The bounding-box padding intuition (see the sketch after this list):
        niter = int(math.sqrt(size * min(w, h) / (w * h)) * 2)
        # size / (w * h) is the bounding-box coverage ratio
        # 2 * min(w, h) is the amount of padding along the longer edge
  2. The diamond-shape alignment: is this a hotfix for the presence of single characters, typically Asian characters?
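
For context on point 1, my reading of craft_utils.py (treat this as an approximate reconstruction, not a verified quote) is that niter sets the size of a dilation kernel applied to the component's segmentation map before the rectangle is fitted, so the padding grows with how densely the component fills its bounding box:

import cv2
import math
import numpy as np

def pad_component(segmap, x, y, w, h, size):
    # size: pixel count of the connected component; (x, y, w, h): its bounding box
    niter = int(math.sqrt(size * min(w, h) / (w * h)) * 2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1 + niter, 1 + niter))
    sx, sy = max(0, x - niter), max(0, y - niter)
    ex = min(segmap.shape[1], x + w + niter + 1)
    ey = min(segmap.shape[0], y + h + niter + 1)
    segmap[sy:ey, sx:ex] = cv2.dilate(segmap[sy:ey, sx:ex], kernel)
    return segmap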

Text detection takes a long time (around 6-8 seconds) to return results, even in GPU mode

Hi,
I am using your pre-trained model craft_mlt_25k.pth for text detection. I have modified test.py for my use case, and it always processes only one image per call. In CPU mode it takes on average 12 to 14 seconds to process a single image (480×640), and on a GPU (Google Colab) it takes around 7 seconds.

In particular, the forward pass (y, _ = net(x)) through the CRAFT network takes a long time to return the probability tensor.

Is there any way I can speed it up? Thanks in advance.

How to evaluate text detection results without word level annotation

Datasets like ICDAR2015 have word-level annotations, so CRAFT can easily be evaluated by IoU.

How can we evaluate the performance of CRAFT with region-level annotations, such as labeling the entire text area "Hello World!" with a single rectangle?

Thanks in advance for any answers.

Detection areas out of image bound

Hello. I have some text right at the border of the image. The bounding box for that text contains coordinates that are out of the image bounds.

Cuda out of memory error

Hi,
I was trying to run the command
python test.py --trained_model=./craft_mlt_25k.pth --test_folder=./test_data
and got the error:

RuntimeError: CUDA out of memory. Tried to allocate 100.00 MiB (GPU 0; 3.95 GiB total capacity; 2.88 GiB already allocated; 58.25 MiB free; 42.71 MiB cached)

There is no other process running on my laptop and I have a 4GB NVIDIA 1080 TI GPU.

Evaluation Script for ICDAR 2013, 2015

I have been trying to evaluate your model on ICDAR 2013 and ICDAR 2015, and the F-scores I get are 88.74 and 70.46 respectively, which is a far cry from what is reported in the paper. Could you provide the evaluation script and also clarify which dataset(s) the provided pre-trained model was trained on?

Thank You.

Parallel inference

In the demo app I see that images are queued and inference runs on them one after another. Is it possible to run inference in parallel instead of in a queue, as shown in the demo? Please explain.

