paddlepaddle / paddlegan

PaddlePaddle GAN library, including lots of interesting applications like First-Order motion transfer, Wav2Lip, picture repair, image editing, photo2cartoon, image style transfer, GPEN, and so on.

License: Apache License 2.0

Python 96.54% Shell 2.56% CMake 0.42% C++ 0.48%
gan cyclegan pix2pix super-resolution image-generation image-editing motion-transfer photo2cartoon psgan resolution

paddlegan's Introduction

English | 简体中文

PaddleGAN

PaddleGAN provides developers with high-performance implementations of classic and SOTA Generative Adversarial Networks, and helps developers quickly build, train, and deploy GANs for academic, entertainment, and industrial use.

GAN (Generative Adversarial Network) was praised by Yann LeCun, "the father of convolutional networks," as "one of the most interesting ideas in the field of computer science in the past decade," and it is one of the research areas in deep learning that AI researchers pay the most attention to.
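To get a feel for the library, here is a minimal usage sketch based on the predictor snippets that appear later on this page (DeOldifyPredictor, FirstOrderPredictor). The run() call and the input path are assumptions, not an excerpt from the official docs.

# Minimal sketch of the ppgan.apps predictor-style API (assumed usage).
from ppgan.apps import DeOldifyPredictor

predictor = DeOldifyPredictor()   # downloads pretrained weights on first use
predictor.run("old_footage.mp4")  # "old_footage.mp4" is a hypothetical input path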


🎪 Hot Activities

🚀 Recent Updates

Document Tutorial

Installation

Starter Tutorial

Model Tutorial

Composite Application

Online Tutorial

You can run these projects on AI Studio to learn how to use the models above:

  • Motion Driving: multi-person "Mai-ha-hi" (Click and Try)
  • Restore a video of Beijing from a hundred years ago (Click and Try)
  • Motion Driving: "Su Daqiang" sings "Unravel" (Click and Try)

Examples

Face Morphing

Image Translation

Old video restoration

Motion driving

Super resolution

Makeup shifter

Face cartoonization

Realistic face cartoonization

Photo animation

Lip-syncing

NEW: Try out the Lip-Syncing web demo on Hugging Face Spaces using Gradio.

Changelog

  • v2.1.0 (2021.12.8)

    • Release the video super-resolution model PP-MSVSR and multiple pre-trained weights
    • Release several SOTA video super-resolution models and their pre-trained models, such as BasicVSR, IconVSR, and BasicVSR++
    • Release a lightweight motion-driving model (model size compressed from 229 MB to 10.1 MB) and optimize the fusion effect
    • Release high-resolution FOMM and Wav2Lip pre-trained models
    • Release several interesting applications based on StyleGANv2, such as face inversion, face fusion, and face editing
    • Release Baidu's self-developed, effective style-transfer model LapStyle and its interesting applications, and launch the official website experience page
    • Release the lightweight image super-resolution model PAN
  • v2.0.0 (2021.6.2)

  • v2.0.0-beta (2021.3.1)

    • Fully switch to the Paddle 2.0.0 API.
    • Release super-resolution models: ESRGAN, RealSR, LESRCNN, DRN, etc.
    • Release the lip-sync model Wav2Lip
    • Release the street-view animation model AnimeGANv2
    • Release the face cartoonization models U-GAT-IT and Photo2Cartoon
    • Release the SOTA generative model StyleGAN2
  • v0.1.0 (2020.11.02)

    • Release the first version. Supported models include Pixel2Pixel, CycleGAN, and PSGAN; supported applications include video frame interpolation, super resolution, image and video colorization, and image animation.
    • Modular design and friendly interfaces.

Community

Scan the QR code below to join the PaddleGAN QQ group (1058398620), where you can get official technical support and communicate with other developers. We look forward to your participation!

PaddleGAN Special Interest Group(SIG)

The SIG (Special Interest Group) was first proposed and used by the ACM (Association for Computing Machinery) in 1961. Top international open-source organizations, including Kubernetes, adopt the form of SIGs so that members with the same specific interests can share and learn knowledge and develop projects together. These members do not need to be in the same country/region or the same organization; as long as they are like-minded, they can study, work, and play together toward the same goals.

PaddleGAN SIG is such a developer organization, bringing together people who are interested in GANs: frontline PaddlePaddle developers, senior engineers from the world's top 500 companies, and students from top universities at home and abroad.

We are continuing to recruit developers who are interested in and capable of joining us to build this project and explore more useful and interesting applications together.

SIG contributions:

  • zhen8838: contributed AnimeGANv2.
  • Jay9z: contributed DCGAN, updated the install docs, etc.
  • HighCWu: contributed c-DCGAN and WGAN; added support for paddle.vision.datasets.
  • hao-qiang & minivision-ai: contributed the photo2cartoon project.

Contributing

Contributions and suggestions are highly welcome. Most contributions require you to agree to a Contributor License Agreement (CLA). When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA; simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. For more details, please see the contribution guidelines.

License

PaddleGAN is released under the Apache 2.0 license.

paddlegan's People

Contributors

birdylx, ceci3, ctkindle, gbstack, hao-qiang, highcwu, hnmizuho, hysunflower, joejiong, kongdebug, larastustu, leesusu, lielinjiang, lijianshe02, littletomatodonkey, lyl120117, lzzyzlbb, mmglove, niuliling123, qingqing01, simonsliang, wanghuancoder, wangna11bd, wangnaa, wwhio, xreki, yanhuidua, yixinkristy, zhen8838, zhengleyizly


paddlegan's Issues

Bug in ppgan/metric/metric_util.py

ppgan/metric/metric_util.py:69:11: F821 undefined name '_convert_input_type_range'
    img = _convert_input_type_range(img)
          ^
ppgan/metric/metric_util.py:76:15: F821 undefined name '_convert_output_type_range'
    out_img = _convert_output_type_range(out_img, img_type)
              ^
2     F821 undefined name '_convert_input_type_range'

No module named 'ppgan'

File "tools/first-order-demo.py", line 18, in
from ppgan.apps.first_order_predictor import FirstOrderPredictor
ModuleNotFoundError: No module named 'ppgan'
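A common generic workaround for this kind of error (an assumption, not taken from the issue thread) is to make the cloned repository importable before the ppgan import, for example by putting the clone on sys.path:

# Generic sketch: add the PaddleGAN clone to sys.path so 'ppgan' can be imported.
# "/path/to/PaddleGAN" is a hypothetical clone location.
import sys
sys.path.insert(0, "/path/to/PaddleGAN")

from ppgan.apps.first_order_predictor import FirstOrderPredictor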

About SpectralNorm

According to the nn.SpectralNorm documentation, dim should be set to 0 if the input (weight) is the weight of an fc layer and to 1 if it is the weight of a conv layer (default: 0). However, ppgan/models/discriminators/nlayers.py and ppgan/models/generators/deoldify.py both use the default value 0 when applying SpectralNorm to convolution layers.
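For illustration, a minimal sketch of the documented convention, assuming paddle.nn.SpectralNorm behaves as the Paddle 2.x docs describe (this is not the PaddleGAN code):

import paddle.nn as nn

conv = nn.Conv2D(in_channels=3, out_channels=64, kernel_size=3)

# Per the documentation quoted above, dim should be 1 for a conv weight
# (the default 0 is meant for fc weights).
sn = nn.SpectralNorm(weight_shape=conv.weight.shape, dim=1, power_iters=1)
normalized_weight = sn(conv.weight)  # spectrally normalized copy of the weight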

requirements.txt?

Could you provide a requirements.txt so that the versions of all required modules are known?

Failed to build DAIN

When I'm trying to build DAIN with PaddleGAN/applications/DAIN/pwcnet/correlation_op/make.sh, I get this:
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/include
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/libs
cc: error trying to exec 'cc1plus': execvp: No such file or directory
g++: error: correlation_op.cu.o: No such file or directory

I ran this script right after cloning PaddleGAN; is there anything else I need to do first?

Python crashes

The "Mai-ha-hi" project
Windows 10
Python 3.8.8
CUDA 11.1
cuDNN 11.2
GPU: 750 Ti

PaddlePaddle has no upfirdn2d-style upsampling operator

Compiling with nvcc on AI Studio reports an error; how can I solve it?
aistudio@jupyter-93077-1071352:~$ nvcc test_nvcc.cu -c -o relu_op.cu.o -ccbin cc -DPADDLE_WITH_CUDA -DEIGEN_USE_GPU -DPADDLE_USE_DSO -DPADDLE_WITH_MKLDNN -Xcompiler -fPIC -std=c++11 -Xcompiler -fPIC -w --expt-relaxed-constexpr -O3 -DNVCC -I ${include_dir}
cc: error trying to exec 'cc1plus': execvp: No such file or directory

BUG

requirements is misspelled in setup.py

Cannot test wgan_mnist.yaml

After training a WGAN with wgan_mnist.yaml, I use the command below to test the network:

!python tools/main.py --config-file configs/wgan_mnist.yaml --evaluate-only --load output_dir/GANModel-2020-12-15-15-54/epoch_20_weight.pdparams

However, I cannot get any results in the new 'visual_train/' folder. How can I get the test results properly?

Problem when running first-order-demo

Environment:

AI Studio advanced edition

Preliminary steps:

  1. ! python3 -m pip install paddlepaddle-gpu==2.0.0b0 -i https://mirror.baidu.com/pypi/simple
  2. ! git clone https://hub.fastgit.org/PaddlePaddle/PaddleGAN.git
  3. cd PaddleGAN && python -u applications/tools/first-order-demo.py --driving_video datasets/unravel.flv --source_image datasets/ssc.jpg --relative --adapt_scale

Error message:

Traceback (most recent call last):
  File "applications/tools/first-order-demo.py", line 18, in <module>
    from ppgan.apps.first_order_predictor import FirstOrderPredictor
  File "/home/aistudio/PaddleGAN/ppgan/apps/__init__.py", line 2, in <module>
    from .deepremaster_predictor import DeepRemasterPredictor
  File "/home/aistudio/PaddleGAN/ppgan/apps/deepremaster_predictor.py", line 24, in <module>
    from ppgan.models.generators.remaster import NetworkR, NetworkC
  File "/home/aistudio/PaddleGAN/ppgan/models/__init__.py", line 20, in <module>
    from .makeup_model import MakeupModel
  File "/home/aistudio/PaddleGAN/ppgan/models/makeup_model.py", line 30, in <module>
    from ..datasets.makeup_dataset import MakeupDataset
  File "/home/aistudio/PaddleGAN/ppgan/datasets/__init__.py", line 15, in <module>
    from .unpaired_dataset import UnpairedDataset
  File "/home/aistudio/PaddleGAN/ppgan/datasets/unpaired_dataset.py", line 4, in <module>
    from .base_dataset import BaseDataset, get_transform
  File "/home/aistudio/PaddleGAN/ppgan/datasets/base_dataset.py", line 10, in <module>
    from .transforms import transforms as T
  File "/home/aistudio/PaddleGAN/ppgan/datasets/transforms/__init__.py", line 1, in <module>
    from .transforms import PairedRandomCrop, PairedRandomHorizontalFlip
  File "/home/aistudio/PaddleGAN/ppgan/datasets/transforms/transforms.py", line 23, in <module>
    TRANSFORMS.register(T.Transpose)
AttributeError: module 'paddle.vision.transforms' has no attribute 'Transpose'

Using Baidu AI Studio for restoration, the final EDVR step raises this error. What is going on?

Model EDVR proccess start..
[03/25 11:23:58] ppgan INFO: Found /home/aistudio/.cache/ppgan/edvr_infer_model.tar
[03/25 11:23:58] ppgan INFO: Decompressing /home/aistudio/.cache/ppgan/edvr_infer_model.tar...
2021-03-25 11:23:58,229-WARNING: The old way to load inference model is deprecated. model path: /home/aistudio/.cache/ppgan/edvr_infer_model/EDVR_model.pdmodel, params path: /home/aistudio/.cache/ppgan/edvr_infer_model/EDVR_params.pdparams
  0%|                                                  | 0/1427 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "tools/video-enhance.py", line 114, in <module>
    frames_path, temp_video_path = predictor.run(temp_video_path)
  File "/home/aistudio/PaddleGAN/PaddleGAN/ppgan/apps/edvr_predictor.py", line 173, in run
    outs = self.base_forward(np.array(data_feed_in))
  File "/home/aistudio/PaddleGAN/PaddleGAN/ppgan/apps/base_predictor.py", line 61, in base_forward
    feed=feed_dict)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/executor.py", line 1110, in run
    six.reraise(*sys.exc_info())
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/six.py", line 703, in reraise
    raise value
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/executor.py", line 1108, in run
    return_merged=return_merged)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/executor.py", line 1238, in _run_impl
    use_program_cache=use_program_cache)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/executor.py", line 1328, in _run_program
    [fetch_var_name])
ValueError: In user code:

    File "/root/miniconda3/envs/py37_paddle1.8/lib/python3.7/site-packages/paddle/fluid/framework.py", line 2610, in append_op
    attrs=kwargs.get("attrs", None))

    File "/root/miniconda3/envs/py37_paddle1.8/lib/python3.7/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
    return self.main_program.current_block().append_op(*args, **kwargs)

    File "/root/miniconda3/envs/py37_paddle1.8/lib/python3.7/site-packages/paddle/fluid/layers/nn.py", line 7445, in reshape
    "XShape": x_shape})

    File "/workspace/PaddleGAN/applications/EDVR/models/edvr/edvr_model.py", line 361, in EDVRArch
    L3_fea = fluid.layers.reshape(L3_fea, [-1, N, nf, shape[3]//4, shape[4]//4])

    File "/workspace/PaddleGAN/applications/EDVR/models/edvr/edvr_model.py", line 436, in net
    TSA_only = self.TSA_only)

    File "/workspace/PaddleGAN/applications/EDVR/models/edvr/edvr.py", line 104, in build_model
    out = videomodel.net(self.feature_input[0])

    File "inference_model.py", line 84, in save_inference_model
    infer_model.build_model()

    File "inference_model.py", line 123, in <module>
    save_inference_model(args)


    InvalidArgumentError: The 'shape' attribute in ReshapeOp is invalid. The input tensor X'size must be divisible by known capacity of 'shape'. But received X's shape = [5, 128, 103, 135], X's size = 8899200, 'shape' is [-1, 5, 128, 102, 135], known capacity of 'shape' is -8812800.
      [Hint: Expected output_shape[unk_dim_idx] * capacity == -in_size, but received output_shape[unk_dim_idx] * capacity:-8812800 != -in_size:-8899200.] (at /paddle/paddle/fluid/operators/reshape_op.cc:208)
      [operator < reshape2 > error]
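The numbers in the error already show why the reshape fails; a small check below reproduces the arithmetic (the reading that the input resolution is not a multiple of 4 is an assumption, not stated in the traceback):

# Arithmetic from the error message above.
x_size = 5 * 128 * 103 * 135          # 8,899,200 elements in L3_fea
known_capacity = 5 * 128 * 102 * 135  # 8,812,800 implied by [-1, 5, 128, 102, 135]

print(x_size, known_capacity)   # 8899200 8812800
print(x_size % known_capacity)  # non-zero, so the -1 dimension cannot be inferred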

Installation error.

Python version: Python 2.7.12

Collecting imageio-ffmpeg
  Downloading http://pypi.doubanio.com/packages/44/51/8a16c76b2a19ac2af82001985c80d3caca4c373528855cb27e12b39373fb/imageio-ffmpeg-0.3.0.tar.gz (13 kB)
ERROR: Package 'imageio-ffmpeg' requires a different Python: 2.7.12 not in '>=3.4'

imageio-ffmpeg

no such file xx.jpg

In the export step, with the jpg and the mp4 in the same directory, it does not complain about 'no such file xx.mp4' but reports 'no such file xx.jpg'. Has anyone run into a similar problem? Thanks.

please change CUDAPlace(0) to be CPUPlace

Project: AnimeGANv2
CPU: i7-6700K
Command: python tools/main.py --config-file configs/animeganv2_pretrain.yaml
Error: Cannot use GPU because you have installed CPU version PaddlePaddle.
If you want to use GPU, please try to install GPU version PaddlePaddle by: pip install paddlepaddle-gpu
If you only have CPU, please change CUDAPlace(0) to be CPUPlace()
How do I switch? Adding import paddle; paddle.set_device('cpu') to main.py does not work.

Wav2Lip in single-GPU mode raises

RuntimeError: Error(s) in loading state_dict for Wav2Lip

The original code uses the CPU:
#device = 'cpu'
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print('Using {} for inference.'.format(device))

Is training a cartoon2photo model possible?

Dear authors,

Thanks for sharing a great project.

As I see, you are able to train a photo2cartoon model, but I would like to train a cartoon2photo model.
Based on your knowledge, does cartoon2photo work well?

Thanks for your time.

About the downloading

>>> deoldify = DeOldifyPredictor()
W1027 06:42:46.683764 70588 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 9.0
W1027 06:42:46.730727 70588 device_context.cc:346] device: 0, cuDNN Version: 7.5.
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 853496/853496 [01:39<00:00, 8609.03it/s]
>>>
  1. There is no message indicating what the progress bar is doing.
  2. After downloading, the model is saved inside the PaddleGAN directory instead of in a cache.


Error in First Order motion model.

100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1759/1759 [00:59<00:00, 29.40it/s]
/root/miniconda3/lib/python3.7/site-packages/scikit_image-0.15.0-py3.7-linux-x86_64.egg/skimage/util/dtype.py:135: UserWarning: Possible precision loss when converting from float32 to uint8
  .format(dtypeobj_in, dtypeobj_out))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/paddle/work/github/PaddleGAN/ppgan/apps/first_order_predictor.py", line 142, in run
    fps=fps)
  File "/root/miniconda3/lib/python3.7/site-packages/imageio-2.5.0-py3.7.egg/imageio/core/functions.py", line 336, in mimwrite
    writer = get_writer(uri, format, "I", **kwargs)
  File "/root/miniconda3/lib/python3.7/site-packages/imageio-2.5.0-py3.7.egg/imageio/core/functions.py", line 174, in get_writer
    request = Request(uri, "w" + mode, **kwargs)
  File "/root/miniconda3/lib/python3.7/site-packages/imageio-2.5.0-py3.7.egg/imageio/core/request.py", line 126, in __init__
    self._parse_uri(uri)
  File "/root/miniconda3/lib/python3.7/site-packages/imageio-2.5.0-py3.7.egg/imageio/core/request.py", line 283, in _parse_uri
    raise FileNotFoundError("The directory %r does not exist" % dn)
FileNotFoundError: The directory '/paddle/work/github/PaddleGAN/test/output' does not exist

error when running `applications/tools/animeganv2.py`

The command is:
python applications/tools/animeganv2.py --input_image path/to/gakki.png --output_path path/to/animeGAN
The errors are:

--------------------------------------
C++ Traceback (most recent call last):
--------------------------------------
0   paddle::imperative::Tracer::TraceOp(std::string const&, paddle::imperative::NameVarBaseMap const&, paddle::imperative::NameVarBaseMap const&, paddle::framework::AttributeMap, std::map<std::string, std::string, std::less<std::string >, std::allocator<std::pair<std::string const, std::string > > > const&)
1   paddle::imperative::Tracer::TraceOp(std::string const&, paddle::imperative::NameVarBaseMap const&, paddle::imperative::NameVarBaseMap const&, paddle::framework::AttributeMap, paddle::platform::Place const&, bool, std::map<std::string, std::string, std::less<std::string >, std::allocator<std::pair<std::string const, std::string > > > const&)
2   paddle::imperative::PreparedOp::Run(paddle::imperative::NameVarBaseMap const&, paddle::imperative::NameVarBaseMap const&, paddle::framework::AttributeMap const&)
3   std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CPUPlace, false, 0ul, paddle::operators::GemmConvKernel<paddle::platform::CPUDeviceContext, float>, paddle::operators::GemmConvKernel<paddle::platform::CPUDeviceContext, double> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
4   paddle::operators::GemmConvKernel<paddle::platform::CPUDeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const
5   paddle::operators::math::Im2ColFunctor<(paddle::operators::math::ColFormat)0, paddle::platform::CPUDeviceContext, float>::operator()(paddle::platform::CPUDeviceContext const&, paddle::framework::Tensor const&, std::vector<int, std::allocator<int> > const&, std::vector<int, std::allocator<int> > const&, std::vector<int, std::allocator<int> > const&, paddle::framework::Tensor*, paddle::framework::DataLayout)
6   paddle::framework::SignalHandle(char const*, int)
7   paddle::platform::GetCurrentTraceBackString[abi:cxx11]()

----------------------
Error Message Summary:
----------------------
FatalError: `Segmentation fault` is detected by the operating system.
  [TimeInfo: *** Aborted at 1615879561 (unix time) try "date -d @1615879561" if you are using GNU date ***]
  [SignalInfo: *** SIGSEGV (@0x7f56573e42c0) received by PID 11689 (TID 0x7f60fec05740) from PID 1463698112 ***]

Segmentation fault (core dumped)

My environment:
Ubuntu 18.04, Python 3.8.5, paddlepaddle-gpu 2.0.0

Bug in wav2lip when loading an image

I get an error when using wav2lip:

python tools/wav2lip.py --face ..\data\test.png --audio output\audio.m4a --outfile output\output.mp4

It returns:

Traceback (most recent call last):
  File "tools/wav2lip.py", line 107, in <module>
    predictor.run()
  File "paddlegan\ppgan\apps\wav2lip_predictor.py", line 206, in run
    mel_idx_multiplier = 80. / fps
ZeroDivisionError: float division by zero

This may be caused by the ".." in the image path: the code that determines the file format in paddlegan\ppgan\apps\wav2lip_predictor.py needs to be changed (a safer sketch follows the snippet below).

def run(self):
        print(self.args.face)
        if not os.path.isfile(self.args.face):
            raise ValueError(
                '--face argument must be a valid path to video/image file')

        elif self.args.face.split('.')[1] in ['jpg', 'png', 'jpeg']: # It's unsafe to get format like this when input filename involve "./" or "../"
            full_frames = [cv2.imread(self.args.face)]
            fps = self.args.fps
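A minimal sketch of a safer format check (an assumption about one possible fix, not a patch from the repo): use os.path.splitext so relative prefixes such as ".." do not confuse the extension detection.

import os

def is_image_file(path):
    # splitext only looks at the real extension, so '..\\data\\test.png' is handled.
    ext = os.path.splitext(path)[1].lower().lstrip('.')
    return ext in ('jpg', 'jpeg', 'png')

print(is_image_file(r'..\data\test.png'))   # True
print(is_image_file(r'..\data\audio.m4a'))  # False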

ImportError: cannot import name 'Photo2CartoonPredictor'

Installation seems fine. But when I run from ppgan.apps import Photo2CartoonPredictor, I get:

ImportError Traceback (most recent call last)
in ()
----> 1 from ppgan.apps import Photo2CartoonPredictor

ImportError: cannot import name 'Photo2CartoonPredictor'

Is real-time first order motion hard to build?

The First Order Motion model's task is image animation: given a source image and a driving video, it generates a video.

If the driving video is a live portrait captured by a webcam, how difficult would it be to build real-time expression transfer? Is the compute requirement too high for the CPU of a thin-and-light laptop?

U-GAT-IT cannot be used on macOS

E0302 16:59:14.194175 332123584 pybind.cc:1415] Cannot use GPU because you have installed CPU version PaddlePaddle.
If you want to use GPU, please try to install GPU version PaddlePaddle by: pip install paddlepaddle-gpu
If you only have CPU, please change CUDAPlace(0) to be CPUPlace().

The code has already been changed to CPUPlace(), but it still reports an error.

FatalError: A serious error (Segmentation fault) is detected by the operating system. (at /paddle/paddle/fluid/platform/init.cc:303)

Environment: Ubuntu 16, CUDA 9.0, cuDNN 7.6.5. Running python tools/video-enhance.py reports:
You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.
/usr/local/python3.7.1/lib/python3.7/site-packages/pkg_resources/_vendor/pyparsing.py:943: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
collections.MutableMapping.register(ParseResults)
C++ Traceback (most recent call last):

0 paddle::framework::SignalHandle(char const*, int)
1 paddle::platform::GetCurrentTraceBackString()


Error Message Summary:

FatalError: A serious error (Segmentation fault) is detected by the operating system. (at /paddle/paddle/fluid/platform/init.cc:303)
[TimeInfo: *** Aborted at 1607998503 (unix time) try "date -d @1607998503" if you are using GNU date ***]
[SignalInfo: *** SIGSEGV (@0x8a0b00) received by PID 30462 (TID 0x7f9e73213700) from PID 9046784 ***]

Segmentation fault

Wrong size of dataloader

When I run the WGAN training with the dataset set to MNIST, the dataloader returns data shaped [64, 1, 28, 28].

But when I set it to Cifar10, it returns [64, 32, 32, 3].

Is there something wrong with the PaddleGAN dataloader, or should I always swap the dimensions myself when using RGB images?
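For comparison, a minimal sketch (an assumption based on the Paddle 2.x vision API, not the PaddleGAN dataloader) that converts the HWC images returned by Cifar10 to CHW with a Transpose transform, so batches come out as [64, 3, 32, 32]:

import paddle
from paddle.vision.transforms import Transpose

# Transpose() defaults to (2, 0, 1), i.e. HWC -> CHW.
dataset = paddle.vision.datasets.Cifar10(mode='train', transform=Transpose())
loader = paddle.io.DataLoader(dataset, batch_size=64, shuffle=True)

images, labels = next(iter(loader))
print(images.shape)  # expected [64, 3, 32, 32]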

Is Pixel2Style2Pixel already out of date?

The parameter configuration in docs/zh_CN/tutorials/pixel2style2pixel.md is:

cd applications/
python -u tools/styleganv2.py \
       --input_image <path to the input image> \
       --output_path <folder for the generated images> \
       --weight_path <path to your pretrained model> \
       --model_type ffhq-inversion \
       --seed 233 \
       --size 1024 \
       --style_dim 512 \
       --n_mlp 8 \
       --channel_multiplier 2 \
       --cpu

But tools/styleganv2.py, which is actually called, has no input_image argument?

import ppgan.datasets.transforms.functional as custom_F AttributeError: module 'ppgan' has no attribute 'datasets'

python tools/video-enhance.py --input /home/hust/PaddleGAN/applications/video/1.mp4 --process_order DAIN DeOldify EDVR --output /home/hust/output/
Traceback (most recent call last):
  File "tools/video-enhance.py", line 18, in <module>
    from ppgan.apps import DAINPredictor
  File "/home/hust/PaddleGAN/ppgan/apps/__init__.py", line 16, in <module>
    from .deepremaster_predictor import DeepRemasterPredictor
  File "/home/hust/PaddleGAN/ppgan/apps/deepremaster_predictor.py", line 24, in <module>
    from ppgan.models.generators.remaster import NetworkR, NetworkC
  File "/home/hust/PaddleGAN/ppgan/models/__init__.py", line 20, in <module>
    from .makeup_model import MakeupModel
  File "/home/hust/PaddleGAN/ppgan/models/makeup_model.py", line 32, in <module>
    from ..datasets.makeup_dataset import MakeupDataset
  File "/home/hust/PaddleGAN/ppgan/datasets/__init__.py", line 15, in <module>
    from .unpaired_dataset import UnpairedDataset
  File "/home/hust/PaddleGAN/ppgan/datasets/unpaired_dataset.py", line 18, in <module>
    from .base_dataset import BaseDataset, get_transform
  File "/home/hust/PaddleGAN/ppgan/datasets/base_dataset.py", line 24, in <module>
    from .transforms import transforms as T
  File "/home/hust/PaddleGAN/ppgan/datasets/transforms/__init__.py", line 15, in <module>
    from .transforms import PairedRandomCrop, PairedRandomHorizontalFlip, Add, ResizeToScale
  File "/home/hust/PaddleGAN/ppgan/datasets/transforms/transforms.py", line 22, in <module>
    import ppgan.datasets.transforms.functional as custom_F
AttributeError: module 'ppgan' has no attribute 'datasets'

Can you help me? What is wrong? Thanks.

Module version problems

When running !python3 -m pip install --upgrade ppgan, I get errors such as: ERROR: umap-learn 0.5.1 has requirement numba>=0.49, but you'll have numba 0.48.0 which is incompatible. ERROR: pynndescent 0.5.2 has requirement numba>=0.51.2, but you'll have numba 0.48.0 which is incompatible. How can I solve this?
I am running on Colab.
