
deepguidedfilter's Introduction

Fast End-to-End Trainable Guided Filter

[Project] [Paper] [arXiv] [Demo] [Home]

Official implementation of Fast End-to-End Trainable Guided Filter.
Faster, Better and Lighter for pixel-wise image prediction.

Overview

DeepGuidedFilter is the author's implementation of:

Fast End-to-End Trainable Guided Filter
Huikai Wu, Shuai Zheng, Junge Zhang, Kaiqi Huang
CVPR 2018

With our method, FCNs can run 10-100x faster without a performance drop.

Contact: Hui-Kai Wu ([email protected])

Get Started

Prepare Environment [Python>=3.6]

  1. Download source code from GitHub.
    git clone https://github.com/wuhuikai/DeepGuidedFilter
    
    cd DeepGuidedFilter && git checkout release
  2. Install dependencies.
    conda install opencv=3.4
    conda install pytorch=1.1 torchvision=0.2 cudatoolkit=9.0 -c pytorch
    
    pip install -r requirements.txt 
  3. (Optional) Install dependencies for MonoDepth.
    cd ComputerVision/MonoDepth
    
    pip install -r requirements.txt
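
Optionally, a quick sanity check that the pinned packages import correctly (a suggestion, not part of the official setup):

import cv2, torch, torchvision

print(cv2.__version__)              # expect 3.4.x
print(torch.__version__)            # expect 1.1.x
print(torchvision.__version__)      # expect 0.2.x
print(torch.cuda.is_available())    # True if cudatoolkit 9.0 is usable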

Ready to go!

Image Processing

cd ImageProcessing/DeepGuidedFilteringNetwork

python predict.py  --task auto_ps \
                   --img_path ../../images/auto_ps.jpg \
                   --save_folder . \
                   --model deep_guided_filter_advanced \
                   --low_size 64 \
                   --gpu 0

See Here or python predict.py -h for more details.

Semantic Segmentation with Deeplab-Resnet

  1. Enter the directory.
    cd ComputerVision/Deeplab-Resnet
  2. Download the pretrained model [Google Drive|BaiduYunPan].
  3. Run it now!
    python predict_dgf.py --img_path ../../images/segmentation.jpg --snapshots [MODEL_PATH]

Note:

  1. Result is in ../../images.
  2. Run python predict_dgf.py -h for more details.

Saliency Detection with DSS

  1. Enter the directory.
    cd ComputerVision/Saliency_DSS
  2. Download the pretrained model [Google Drive|BaiduYunPan].
  3. Try it now!
    python predict.py --im_path ../../images/saliency.jpg \
                      --netG [MODEL_PATH] \
                      --thres 161 \
                      --dgf --nn_dgf \
                      --post_sigmoid --cuda

Note:

  1. Result is in ../../images.
  2. See Here or python predict.py -h for more details.

Monocular Depth Estimation

  1. Enter the directory.
    cd ComputerVision/MonoDepth
  2. Download and unzip the pretrained model [Google Drive|BaiduYunPan].
  3. Run on an image.
    python monodepth_simple.py --image_path ../../images/depth.jpg --checkpoint_path [MODEL_PATH] --guided_filter

Note:

  1. Result is in ../../images.
  2. See Here or python monodepth_simple.py -h for more details.

Guided Filtering Layer

Install Released Version

  • PyTorch Version
    pip install guided-filter-pytorch
  • Tensorflow Version
    pip install guided-filter-tf

Usage

  • PyTorch Version
    from guided_filter_pytorch.guided_filter import FastGuidedFilter
    
    hr_y = FastGuidedFilter(r, eps)(lr_x, lr_y, hr_x)
    from guided_filter_pytorch.guided_filter import GuidedFilter
    
    hr_y = GuidedFilter(r, eps)(hr_x, init_hr_y)
    from guided_filter_pytorch.guided_filter import ConvGuidedFilter
    
    hr_y = ConvGuidedFilter(r, norm)(lr_x, lr_y, hr_x)
    
  • Tensorflow Version
    from guided_filter_tf.guided_filter import fast_guided_filter
    
    hr_y = fast_guided_filter(lr_x, lr_y, hr_x, r, eps, nhwc)
    from guided_filter_tf.guided_filter import guided_filter
    
    hr_y = guided_filter(hr_x, init_hr_y, r, eps, nhwc)
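
The call signatures above are terse; the following PyTorch sketch spells out a plausible reading of the arguments (an assumption based on the guided filter formulation, not official documentation): r is the box-filter radius, eps the regularization term, lr_x/hr_x the low-/high-resolution guidance images, and lr_y the low-resolution image to be upsampled.

import torch
from guided_filter_pytorch.guided_filter import FastGuidedFilter

# Hypothetical shapes, for illustration only.
lr_x = torch.rand(1, 3, 64, 64)     # low-resolution guidance image
lr_y = torch.rand(1, 3, 64, 64)     # low-resolution image to be filtered
hr_x = torch.rand(1, 3, 512, 512)   # high-resolution guidance image

hr_y = FastGuidedFilter(r=1, eps=1e-8)(lr_x, lr_y, hr_x)
print(hr_y.shape)                   # torch.Size([1, 3, 512, 512])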

Training from scratch

Prepare Training Environment

git checkout master

conda install opencv=3.4
conda install pytorch=1.1 torchvision=0.2 cudatoolkit=9.0 -c pytorch

pip uninstall Pillow
pip install -r requirements.txt

# (Optional) For MonoDepth
pip install -r ComputerVision/MonoDepth/requirements.txt 

Start to Train

Citation

@inproceedings{wu2017fast,
  title     = {Fast End-to-End Trainable Guided Filter},
  author    = {Wu, Huikai and Zheng, Shuai and Zhang, Junge and Huang, Kaiqi},
  booktitle = {CVPR},
  year      = {2018}
}

deepguidedfilter's People

Contributors

2wins · wuhuikai

deepguidedfilter's Issues

can't reproduce results of Saliency_DSS with dgf

Thanks for sharing! I downloaded the code and ran it as described in the readme. Without any modification, I cannot reproduce the results of Saliency_DSS with DGF; maybe the training hyperparameters need finetuning?
With the provided pretrained checkpoint I do get satisfactory results (I haven't measured them quantitatively yet).

RuntimeError: Error(s) in loading state_dict for network_dss:

When I use the pretrained model for Saliency Detection with DSS, it shows:
RuntimeError: Error(s) in loading state_dict for network_dss:
Unexpected key(s) in state_dict: "guided_map_conv1.weight", "guided_map_conv1.bias", "guided_map_conv2.weight", "guided_map_conv2.bias".

My environment is Windows 10. Could you help me?

Traceback (most recent call last):
File "D:/DeepGuidedFilter_master/ComputerVision/Saliency_DSS/predict.py", line 49, in <module>
netG.load_state_dict(torch.load(opt.netG))
File "C:\Users\86187\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 830, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for network_dss:
Unexpected key(s) in state_dict: "guided_map_conv1.weight", "guided_map_conv1.bias", "guided_map_conv2.weight", "guided_map_conv2.bias".
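
The unexpected guided_map_conv* keys suggest the checkpoint was saved from a model built with the guided-filter (DGF) branch enabled. A minimal sketch of two workarounds, assuming netG is the network predict.py constructs and 'MODEL_PATH' stands for the checkpoint path; this is not the repo's official fix:

import torch

state = torch.load('MODEL_PATH', map_location='cpu')

# Workaround 1: construct netG with the DGF options enabled (the flags
# that add guided_map_conv1/2), so the checkpoint keys match exactly.

# Workaround 2: drop the extra keys -- only sensible if you intend to
# run WITHOUT the guided filter refinement:
wanted = set(netG.state_dict().keys())
netG.load_state_dict({k: v for k, v in state.items() if k in wanted})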

Missing License

There is no license file for this repo. Any restrictions on usage? MIT License would be great ;-)

About the range of input tensor

Hi, I want to use this module in another model, and I wonder whether it requires the input tensor range to be [-1, 1] or [0, 1]?

Thanks so much!

I got "error: unrecognized arguments:" when I run test_time.py on Image processing. Could you pls help me?

=================================== ERRORS ====================================
________________________ ERROR collecting test_time.py ________________________
test_time.py:16: in <module>
args = parser.parse_args()
C:\Users\86187\anaconda3\lib\argparse.py:1758: in parse_args
self.error(msg % ' '.join(argv))
C:\Users\86187\anaconda3\lib\argparse.py:2508: in error
self.exit(2, _('%(prog)s: error: %(message)s\n') % args)
C:\Users\86187\anaconda3\lib\argparse.py:2495: in exit
_sys.exit(status)
E SystemExit: 2
------------------------------- Captured stderr -------------------------------
usage: _jb_pytest_runner.py [-h] [--gpu GPU] [--low_size LOW_SIZE]
[--full_size FULL_SIZE] [--iter_size ITER_SIZE]
[--model_id MODEL_ID]
_jb_pytest_runner.py: error: unrecognized arguments: D:/DeepGuidedFilter_master/ImageProcessing/DeepGuidedFilteringNetwork/test_time.py
!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
============================== 1 error in 1.22s ===============================

Process finished with exit code 2
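
The log shows test_time.py being collected by a pytest runner (_jb_pytest_runner.py), which passes its own arguments to the script's parser, so parse_args() exits with code 2. Running the script directly with python test_time.py avoids this; alternatively, a hedged sketch of a parser-side workaround:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--gpu', type=int, default=0)
parser.add_argument('--low_size', type=int, default=64)
# ... remaining arguments as defined in test_time.py ...

# parse_known_args() tolerates extra argv entries injected by test
# runners instead of calling sys.exit(2) the way parse_args() does.
args, _unknown = parser.parse_known_args()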

An application of the guided filter: disentangling the convolution effect from GAN images

Hi,
I have a preliminary idea for an application of the guided filter.
GANs are a popular way to generate images as a pre-processing step in areas such as object recognition and style transfer, but the generator is a decoder built from many convolution layers, so the image takes on a "convolution effect".
My thought is that FastGuidedFilter or GuidedFilter (not ConvGuidedFilter) could help disentangle this.
For example, to make a generated image look more like a real-world image, I would feed FastGuidedFilter the generated image as input and a real-world image as the guide, hoping to get a realistic picture out.
Does this theory work?

Computer crash with convert_dng_to_tiff.py script

I am having an issue with the convert_dng_to_tif.py script: my computer crashes immediately after I run it, and the only option I am left with is a hard restart. Even leaving it for a long time after running the script doesn't help. The only change I made to the script is the input file path. Any help would be appreciated. Thanks.

DGF code in segmentation

Thanks for your excellent work.
When I view and run "./ComputerVision/Deeplab-Resnet/predict_dgf.py" with the released .pth file, I observe that the guided filter layer works as follows:
[screenshot]
(breakpoint in ./ComputerVision/Deeplab-Resnet/deeplab_resnet.py)
It seems that the low-resolution image "x" is first used to get the low-resolution "output", the original high-resolution image "im" is used to get the guided map "g", then "output" is upsampled to a coarse high-resolution "output", and finally the guided layer produces the fine high-resolution "output".
In fact, inside the guided layer, A and b are simply calculated from the guided map "g" and the coarse high-resolution "output", like this:
[screenshot]
(breakpoint in ./GuidedFilteringLayer/GuidedFilter_PyTorch/guided_filter_pytorch/guided_filter.py)

So I cannot tell whether this is the end-to-end guided layer of the paper's Figure 2:
[screenshot]
I guess it is just the DGFs variant mentioned in the paper's subsection 4.2?
[screenshots]

Input is a 3-channel RGB image, output is single-channel alpha information

Following this approach,
[screenshot]
I use this project's GuidedFilter_PyTorch, and I have one point of confusion: my input is an RGB image (three channels) and I want the corresponding alpha (one channel). I found that during the computation the dimensions don't match, so it cannot proceed. How should I handle the RGB input? I tried using the mean of the RGB channels, but found that this loses information. I want to build on https://github.com/PeterL1n/RobustVideoMatting/blob/master/model/deep_guided_filter.py and output only the alpha corresponding to the RGB, and nothing else.
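
One way to reconcile the channel counts, sketched below under the assumption that FastGuidedFilter requires the guide and target to have matching channel counts (or a single-channel guide): project the RGB guide down to one channel with a small learned guided map, then filter the one-channel alpha. This mirrors the learned-guided-map idea used elsewhere in this repo and is illustrative, not an official recipe.

import torch
import torch.nn as nn
from guided_filter_pytorch.guided_filter import FastGuidedFilter

# Learned guided map: project 3-channel RGB to a 1-channel guide so the
# guide's channel count matches the 1-channel alpha.
guided_map = nn.Sequential(
    nn.Conv2d(3, 16, 1, bias=False),
    nn.ReLU(inplace=True),
    nn.Conv2d(16, 1, 1),
)

lr_rgb   = torch.rand(1, 3, 64, 64)     # low-res RGB input
lr_alpha = torch.rand(1, 1, 64, 64)     # low-res alpha from your network
hr_rgb   = torch.rand(1, 3, 512, 512)   # full-res RGB input

hr_alpha = FastGuidedFilter(r=1, eps=1e-8)(
    guided_map(lr_rgb), lr_alpha, guided_map(hr_rgb))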

Contributing a Multi-channel Guided Filter

Hello,

At the request of @mangdian in issue #14, I am sharing my implementation of a guided filter for PyTorch that supports a multi-channel guide image and a one-channel source image; the code is linked below. Could you please advise whether I should make my own repo for this, or whether we can add it here? I thought it might be more helpful to the community to keep guided filter implementations in one place.

https://gist.github.com/adgaudio/21f6aa699113c766c2c9ddd4c6144425

Also, if anyone wishes to collaborate with me on academic papers, I would be happy to meet you over a web call or email with you.

Cheers,
Alex

OutOfRangeError (see above for traceback): RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 8, current size 0) [[node shuffle_batch (defined at D:\DeepGuidedFilter_master\ComputerVision\MonoDepth\monodepth_dataloader.py:71) ]]

When I tried to train monodepth.main, it shows:
OutOfRangeError (see above for traceback): RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 8, current size 0)
[[node shuffle_batch (defined at D:\DeepGuidedFilter_master\ComputerVision\MonoDepth\monodepth_dataloader.py:71) ]]
I have no idea whether this only happens in a Windows environment. Could you help me with that?
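
This error usually means the input queue never received any elements, most often because the image paths listed in the filenames file do not resolve against the data root. A small hedged check (the two values are placeholders for your --data_path and --filenames_file arguments):

import os

data_path = 'KITTI_DATA_ROOT'        # placeholder: your --data_path
filenames = 'FILENAMES_FILE'         # placeholder: your --filenames_file

with open(filenames) as f:
    for line in f:
        left = line.split()[0]       # left-image path on each line
        if not os.path.exists(os.path.join(data_path, left)):
            print('missing:', left)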

Question about implementation of FastGuidedFilter

Hello!

Thank you for this well-written library. I have been comparing your implementation to the guided filter paper (http://kaiminghe.com/publications/pami12guidedfilter.pdf) and to the most recent note (from 2015) on arXiv describing the fast guided filter: https://arxiv.org/pdf/1505.00996.pdf

There appears to be a difference between the original paper (Section 5, EXPERIMENTS, under "Joint Upsampling") and step 5 of Algorithm 2 of the 2015 paper. Specifically, the original paper says, in the joint upsampling section, to remove step 5 (specifically, "bilinearly upsample [...] to the fine scale (replacing the mean filter on a and b),") but the 2015 paper includes the mean filter.

Your implementation follows the original paper. Just above this line in the code, one could add a box filter for A and for b just before interpolation.

mean_A = F.interpolate(A, (h_hrx, w_hrx), mode='bilinear', align_corners=True)

Is the argument in support of removing step 5 simply that bilinear interpolation (when upsampling) acts like a mean filter? (I think that isn't correct but am not sure).

I was wondering whether you were aware of this difference, and whether you have done any testing to evaluate the guided filter with and without step 5? I ran a simple test comparing FastGuidedFilter (with no down sampling) against OpenCV and found that sq. difference increases (from 160 to >600) with inclusion of step 5. But then I implemented the extension to the (fast) guided filter to support color guide image and grayscale filter image and found the mean filter decreased the error (from 2.5 to .09). So this leaves me wondering.

... Perhaps at the end of the day the difference in performance doesn't matter too much.
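
For concreteness, a sketch of the variant discussed above (reinstating step 5 of Algorithm 2 of the 2015 note), written with random stand-in tensors so it runs, and assuming the BoxFilter module shipped alongside the layer; the variable names follow guided_filter.py, but where exactly the authors would place the mean filter is an assumption:

import torch
import torch.nn.functional as F
from guided_filter_pytorch.box_filter import BoxFilter

r = 1
box = BoxFilter(r)
A = torch.rand(1, 3, 64, 64)          # stand-in for the computed A
b = torch.rand(1, 3, 64, 64)          # stand-in for the computed b
N = box(torch.ones(1, 3, 64, 64))     # normalization, as in forward()
h_hrx, w_hrx = 512, 512

A = box(A) / N                        # the reinstated mean filters (step 5)
b = box(b) / N
mean_A = F.interpolate(A, (h_hrx, w_hrx), mode='bilinear', align_corners=True)
mean_b = F.interpolate(b, (h_hrx, w_hrx), mode='bilinear', align_corners=True)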

I can't generate ground truth images in Image Processing

When I run the command matlab -r "prepare_dataset('512'); exit" -nodisplay, it only generates an empty folder named "512", without any TIF images inside.
MATLAB only gives the warning "Warning: Unrecognized command line option: nodisplay."
Could you please help me?

Coarse to fine using network

Hello! First of all, nice paper!

I was wondering whether I can use one of the networks to get a refined sky mask (the output of a segmentation network), the same way I would with the classical guidedFilter algorithm.

Thank you!
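
For that refinement use case, a minimal sketch in the classical-guided-filter style (an illustration under assumed shapes and hyperparameters, not an official recipe): treat the frame as the guide and the coarse mask as the initial output.

import torch
from guided_filter_pytorch.guided_filter import GuidedFilter

image = torch.rand(1, 1, 480, 640)   # grayscale guide (or a learned guided map)
mask  = torch.rand(1, 1, 480, 640)   # coarse sky mask from the segmentation net

refined = GuidedFilter(r=8, eps=1e-2)(image, mask)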

The usage of "AdaptiveNorm" layer in the modules

Hi Huikai, thank you for your excellent work.
I want to ask about the AdaptiveNorm layer in your module.
I found the implementation of the AdaptiveNorm layer at https://github.com/wuhuikai/DeepGuidedFilter/blob/master/ImageProcessing/DeepGuidedFilteringNetwork/module.py#L27.
But the weight factor w_1 is 0, so the output of this layer equals its input. How can this layer act as a normalization layer? Or does this weight factor change during training?
Hoping for your reply! Thank you again.
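
For reference, a sketch of what the layer in the linked module.py looks like (reconstructed from the repo, so treat the details as assumptions): the output is a learned blend w_0 * x + w_1 * BN(x), where w_0 = 1 and w_1 = 0 are only the initial values; both are nn.Parameters, so training moves the layer away from the identity.

import torch
import torch.nn as nn

class AdaptiveNorm(nn.Module):
    def __init__(self, n):
        super(AdaptiveNorm, self).__init__()
        self.w_0 = nn.Parameter(torch.Tensor([1.0]))   # initial: pass-through
        self.w_1 = nn.Parameter(torch.Tensor([0.0]))   # initial: no BN term
        self.bn  = nn.BatchNorm2d(n, momentum=0.999, eps=0.001)

    def forward(self, x):
        # Learned mix of identity and batch norm; the optimizer adjusts
        # w_0 and w_1, so the layer is only an identity at initialization.
        return self.w_0 * x + self.w_1 * self.bn(x)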

Why is Guided Filter not differentiable?

Hello!

I am having a hard time understanding a claim in the paper that the original guided filter is not differentiable.

The original guided filter has no learnable parameters, but can't gradients still flow backward through it, since its operations (mean filter, linear operations) are differentiable?

Thanks.

What is low resolution image in context of semantic segmentation

In semantic segmentation the default setup is:

  • input image
  • backbone which returns features at strides 2, 4, 8, 16, 32
  • decoder which combines features and produces logits with stride-2 (or 4 as in DeepLabV3+)
  • head which resizes logits to image size and makes predictions (conv+softmax/sigmoid)

So what should be a low resolution image for ConvGuidedFilter in this case?
Should I resize the original image to the logits size (stride 2 or 4), or should I use the stride-2/4 features from the backbone?

How to use this layer in my model?

from guided_filter_pytorch.guided_filter import FastGuidedFilter

hr_y = FastGuidedFilter(r, eps)(lr_x, lr_y, hr_x)
What do the parameters r, eps, lr_x, lr_y, and hr_x refer to?

from guided_filter_pytorch.guided_filter import GuidedFilter

hr_y = GuidedFilter(r, eps)(hr_x, init_hr_y)
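
A plausible reading of both signatures, offered as an assumption from the guided filter formulation rather than official documentation (see also the annotated example under "Usage" above):

from guided_filter_pytorch.guided_filter import FastGuidedFilter, GuidedFilter

# FastGuidedFilter(r, eps)(lr_x, lr_y, hr_x)
#   r    -- box-filter radius
#   eps  -- regularization term in the linear-coefficient solve
#   lr_x -- low-resolution guidance image
#   lr_y -- low-resolution image to be filtered (e.g. a network output)
#   hr_x -- high-resolution guidance image; the result hr_y matches its size
#
# GuidedFilter(r, eps)(hr_x, init_hr_y)
#   hr_x      -- full-resolution guidance image
#   init_hr_y -- initial full-resolution output to be smoothed/refined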

Backprop causes NaN

Using your PyTorch implementation, inference works great (output images look beautiful). Unfortunately, when I use this layer as part of a model and run an optimizer through it, I get NaN gradients. I can try to debug this, but I'm curious whether you have seen this issue before?
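
One frequent culprit, offered as a debugging sketch rather than a confirmed diagnosis: var_x = mean(x*x) - mean(x)^2 can dip slightly below zero from floating-point error, and A = cov_xy / (var_x + eps) then produces NaNs in the backward pass. A cheap guard:

import torch

def safe_coefficients(cov_xy, var_x, eps=1e-8):
    # Clamp tiny negative variances caused by floating-point error
    # before they reach the division that computes A.
    return cov_xy / (var_x.clamp(min=0.0) + eps)

# torch.autograd.set_detect_anomaly(True) is also useful for locating
# the first operation that emits a NaN during backward().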

Question about epsilon

Hi @wuhuikai, thanks for this amazing work.
I'm working on a project that needs CNN-based disparity estimation, and I'm trying to adapt your Deep Guided Filter into my networks. I want to preserve object boundaries in disparity maps while keeping object textures from affecting the disparity estimation. With the default parameters of your FastGuidedFilter implementation in PyTorch I get preliminary results like this:
[screenshot]

where I probably should have set dgf_eps to 1e-2, as suggested in your MonoDepth code.

The question is as follows:
What do you think of making this epsilon hyperparameter learnable (regulating its range inside (0, 1) with a sigmoid, etc.) and letting the network decide its value? Have you tried this in your experiments?

Thanks for your time.
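
A sketch of the idea in the question (an untested assumption, not part of this repo): parameterize eps through a sigmoid so the learned value stays inside (0, 1).

import torch
import torch.nn as nn

class LearnableEps(nn.Module):
    def __init__(self, init_logit=-4.6):    # sigmoid(-4.6) ~= 1e-2
        super().__init__()
        self.logit = nn.Parameter(torch.tensor(init_logit))

    def forward(self):
        # Effective eps, constrained to (0, 1) by the sigmoid.
        return torch.sigmoid(self.logit)

# usage sketch: eps = self.eps_module(); A = cov_xy / (var_x + eps)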

AttributeError: Can't pickle local object 'SuDataset.__init__.<locals>.append'

When I run "DeepGuidedFilteringNetwork/train_hr.py" on Windows10, It shows an AttributeError, It seems like multiprocessing problem, Could you pls help me?

File "D:/DeepGuidedFilter_master/ImageProcessing/DeepGuidedFilteringNetwork/train_hr.py", line 46, in
run(config, keep_vis=True)
File "D:\DeepGuidedFilter_master\ImageProcessing\DeepGuidedFilteringNetwork\train_base.py", line 78, in run
for idx, imgs in enumerate(train_loader):
File "C:\Users\86187\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 279, in iter
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\86187\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 719, in init
w.start()
File "C:\Users\86187\anaconda3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\Users\86187\anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\86187\anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\86187\anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 89, in init
reduction.dump(process_obj, to_child)
File "C:\Users\86187\anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'SuDataset.init..append'
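
On Windows, DataLoader workers are started with the spawn method, so everything handed to them must be picklable, and a closure defined inside SuDataset.__init__ is not. Two common workarounds, sketched with a toy dataset (how to restructure SuDataset itself is an assumption, not a patch from the authors):

import torch
import torch.utils.data as data

class ToyDataset(data.Dataset):
    def __len__(self):
        return 4
    def __getitem__(self, idx):
        return torch.zeros(1)

# Workaround 1: num_workers=0 avoids spawning (and pickling) entirely.
loader = data.DataLoader(ToyDataset(), batch_size=1, num_workers=0)

# Workaround 2: move the local `append` helper out of SuDataset.__init__
# to module level (or make it a regular method) so the dataset pickles.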

Not able to run pre-trained model on mac.

I am getting the following error:

main()
File "predict_dgf.py", line 37, in main
model.eval().cuda('cpu')
File "/Applications/anaconda/lib/python2.7/site-packages/torch/nn/modules/module.py", line 216, in cuda
return self._apply(lambda t: t.cuda(device))
File "/Applications/anaconda/lib/python2.7/site-packages/torch/nn/modules/module.py", line 146, in _apply
module._apply(fn)
File "/Applications/anaconda/lib/python2.7/site-packages/torch/nn/modules/module.py", line 146, in _apply
module._apply(fn)
File "/Applications/anaconda/lib/python2.7/site-packages/torch/nn/modules/module.py", line 152, in _apply
param.data = fn(param.data)
File "/Applications/anaconda/lib/python2.7/site-packages/torch/nn/modules/module.py", line 216, in <lambda>
return self._apply(lambda t: t.cuda(device))
File "/Applications/anaconda/lib/python2.7/site-packages/torch/_utils.py", line 61, in _cuda
with torch.cuda.device(device):
File "/Applications/anaconda/lib/python2.7/site-packages/torch/cuda/__init__.py", line 207, in __enter__
self.prev_idx = torch._C._cuda_getDevice()
AttributeError: 'module' object has no attribute '_cuda_getDevice'

I don't have CUDA on my laptop. I have set the gpu 0 attribute too. I tried a few things:

  1. changed map_location = {'cuda0':'cpu'} in torch.load(); faced the same issue again.
  2. changed map_location = lambda storage in torch.load(); faced the same issue again.
    Is there an environment change anywhere in the code that needs to be made?
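
A sketch of the standard CPU-only loading pattern. Note that the dict form of map_location needs the device string 'cuda:0' (with a colon), not 'cuda0'; the model construction below is a stand-in for what predict_dgf.py actually builds:

import torch
import torch.nn as nn

model = nn.Identity()   # stand-in; build the real model as predict_dgf.py does

# map_location='cpu' remaps every CUDA tensor while loading; equivalent
# forms are map_location={'cuda:0': 'cpu'} (colon included) or
# map_location=lambda storage, loc: storage.
state = torch.load('MODEL_PATH', map_location='cpu')  # 'MODEL_PATH': placeholder

# And never call .cuda() afterwards on a machine without CUDA:
model.eval()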

A question about setting FINE_SIZE in Config

Since the default value is -1 and it is never changed anywhere in the repo, the following fragment shows that RandomCrop will never be employed. May I ask the reason for exposing FINE_SIZE as a parameter?

RandomCrop(fine_size) if fine_size > 0 else None,

About the structure of ConvGuidedFilter

Hi, I'm confused trying to understand the structure of ConvGuidedFilter. Could you please tell me which part of this code is the "dilated conv" and which part is the "pointwise convolution block"?
And the transformation function F(I) is "conv_a" in the code, right?

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvGuidedFilter(nn.Module):
    def __init__(self, radius=1, norm=nn.BatchNorm2d):
        super(ConvGuidedFilter, self).__init__()

        # dilated depthwise 3x3 convolution acting as the box filter
        self.box_filter = nn.Conv2d(3, 3, kernel_size=3, padding=radius, dilation=radius, bias=False, groups=3)
        # pointwise (1x1) convolution block: predicts A from [cov_xy, var_x]
        self.conv_a = nn.Sequential(nn.Conv2d(6, 32, kernel_size=1, bias=False),
                                    norm(32),
                                    nn.ReLU(inplace=True),
                                    nn.Conv2d(32, 32, kernel_size=1, bias=False),
                                    norm(32),
                                    nn.ReLU(inplace=True),
                                    nn.Conv2d(32, 3, kernel_size=1, bias=False))
        self.box_filter.weight.data[...] = 1.0

    def forward(self, x_lr, y_lr, x_hr):
        _, _, h_lrx, w_lrx = x_lr.size()
        _, _, h_hrx, w_hrx = x_hr.size()

        # normalization map: the box filter applied to an all-ones image
        N = self.box_filter(x_lr.data.new().resize_((1, 3, h_lrx, w_lrx)).fill_(1.0))
        ## mean_x
        mean_x = self.box_filter(x_lr)/N
        ## mean_y
        mean_y = self.box_filter(y_lr)/N
        ## cov_xy
        cov_xy = self.box_filter(x_lr * y_lr)/N - mean_x * mean_y
        ## var_x
        var_x  = self.box_filter(x_lr * x_lr)/N - mean_x * mean_x

        ## A
        A = self.conv_a(torch.cat([cov_xy, var_x], dim=1))
        ## b
        b = mean_y - A * mean_x

        ## mean_A, mean_b: upsample the coefficients to the high resolution
        mean_A = F.interpolate(A, (h_hrx, w_hrx), mode='bilinear', align_corners=True)
        mean_b = F.interpolate(b, (h_hrx, w_hrx), mode='bilinear', align_corners=True)

        # mean_A and mean_b play the roles of A_h and b_h in the paper
        return mean_A * x_hr + mean_b

System gets hanged and f measure difference

Hi,
I am having the following issues/doubts:

  1. My system hangs when I try to run the saliency part of the code, for both test.py and predict.py, using the pretrained model you shared. Can you please help?
  2. The paper states the F-measure to be around 91% on DSS, but the DSS method itself reports a 92% F-measure. Can you please explain the reason for the difference?

where can I get the supplementary material that describe the forward algorithm?

Hi

Thanks for the great paper. I am still confused by the description of Algorithm 1. Could you tell me where I can find the supplementary material? (The paper says: "The equations for propagating the gradients through the guided filtering layer are shown in Algorithm 1, while the corresponding forward algorithm is presented in the supplementary material.")

Thanks in advance!

How is dehazing done in the paper?

I noticed that the code at https://github.com/danaberman/non-local-dehazing is in MATLAB. I would like to know how to combine the DeepGuidedFilter from the paper with it. Thank you very much.

Which guided filtering function does self.gf call?

Excuse me: in the "DeepGuidedFilter/ImageProcessing/DeepGuidedFilteringNetwork/module.py" script, in the DeepGuidedFilterGuidedMapConvGF class, which guided filtering function does self.gf call?
Also, which variables in the code correspond to the three parameters O_l, I_l, I_h in the paper?
Thank you!

class DeepGuidedFilterGuidedMapConvGF(DeepGuidedFilterConvGF):
    def __init__(self, radius=1, dilation=0, c=16, layer=5):
        super(DeepGuidedFilterGuidedMapConvGF, self).__init__(radius, layer)
        # learned guided map: transforms an RGB image into the guidance signal
        self.guided_map = nn.Sequential(
            nn.Conv2d(3, c, 1, bias=False) if dilation==0 else \
                nn.Conv2d(3, c, 3, padding=dilation, dilation=dilation, bias=False),
            AdaptiveNorm(c),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(c, 3, 1)
        )

    def forward(self, x_lr, x_hr):
        # self.gf is the guided filter set up by the DeepGuidedFilterConvGF
        # base class; its arguments map to the paper's notation as:
        #   self.guided_map(x_lr) -> I_l (low-res guidance)
        #   self.lr(x_lr)         -> O_l (low-res network output)
        #   self.guided_map(x_hr) -> I_h (high-res guidance)
        return self.gf(self.guided_map(x_lr), self.lr(x_lr), self.guided_map(x_hr)).clamp(0, 1)

About training auto-ps task on Fivek dataset

Hello, I'm a beginner and I have some questions:

Have you tried to train the auto-ps task on the FiveK dataset at the original image size?

The implementation in the paper resizes images to 512 and randomly resizes from 512 to 1672, without data augmentation (e.g. random crop & random rotate), and you set fine_size = -1 in your default setting.

If I want to train the auto-ps task at the original image size, is it better to apply a data augmentation strategy (random crop, random flip & rotate)?

Thank you very much.
