
PaConvert's Introduction

PaConvert

PaddlePaddle Code Convert Toolkits

Important

  • This tool is officially maintained and developed by the Paddle team. All conversion code has been tested, and you are welcome to use it and give feedback.
  • It currently supports one-click conversion of roughly 1,300+ PyTorch APIs. Across 300+ PyTorch models used for testing, the average conversion rate by lines of code is above 90%.
  • The tool is built on the mapping table between the latest PyTorch release and the Paddle develop APIs; every API in the table has been compared and analyzed in detail, and you are welcome to consult it.

Overview

PaConvert (PaddlePaddle Code Convert Toolkit) automatically converts training or inference code written for other deep learning frameworks into PaddlePaddle code, enabling fast, automatic model code migration.

It currently supports automatic conversion of PyTorch code; support for other deep learning frameworks will be added later. The tool works by analyzing the Python AST: the input code is parsed into an abstract syntax tree, which is then traversed, matched, edited, rewritten, and extended until it becomes a PaddlePaddle abstract syntax tree, from which the Paddle code is generated (a minimal sketch of this idea appears below).
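The following is a minimal sketch of this AST-rewriting idea, not PaConvert's actual implementation. It assumes a hypothetical one-entry API mapping for illustration and needs Python 3.9+ for ast.unparse.

import ast

class TorchToPaddle(ast.NodeTransformer):
    # Hypothetical one-entry mapping, for illustration only.
    MAPPING = {"torch.add": "paddle.add"}

    def visit_Call(self, node):
        self.generic_visit(node)
        full_name = ast.unparse(node.func)  # e.g. "torch.add"
        if full_name in self.MAPPING:
            # Replace the callee with the mapped Paddle API.
            node.func = ast.parse(self.MAPPING[full_name], mode="eval").body
        return node

src = "import torch\ny = torch.add(a, b)\n"
tree = TorchToPaddle().visit(ast.parse(src))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))  # the call is rewritten to paddle.add(a, b); import handling is omitted here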

The conversion preserves the style and structure of the original code as much as possible, replacing calls to other deep learning frameworks with calls to the PaddlePaddle API.

It also tries to keep the original number of lines, but in some cases one original line must be expanded into several. For example:

Before conversion:

import torch
y = torch.transpose(image, 1, 0)

After conversion:

import paddle
x = image
perm_0 = list(range(x.ndim))
perm_0[1] = 0
perm_0[0] = 1
y = paddle.transpose(x=x, perm=perm_0)

This happens because the two APIs differ in usage: the behavior cannot be reproduced in a single line, so a few helper lines are added to achieve the same result.

The conversion never modifies the original files. Instead, every file in the source project is converted into the folder specified by out_dir, which makes it easy to compare before and after. Files are handled according to their type (see the sketch below):

  • Python source files: calls to other deep learning frameworks are identified and converted
  • requirements.txt: the framework dependency is replaced with paddlepaddle-gpu
  • all other files: copied verbatim
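The snippet below is a minimal sketch of this per-file policy, not PaConvert's actual implementation; convert_python_source is a hypothetical helper standing in for the AST conversion step, and the requirements.txt rewrite is deliberately simplified.

import shutil
from pathlib import Path

def transfer(in_dir, out_dir):
    for src in Path(in_dir).rglob("*"):
        if src.is_dir():
            continue
        dst = Path(out_dir) / src.relative_to(in_dir)
        dst.parent.mkdir(parents=True, exist_ok=True)
        if src.suffix == ".py":
            # Python source: run the AST-based conversion (hypothetical helper).
            dst.write_text(convert_python_source(src.read_text()))
        elif src.name == "requirements.txt":
            # Dependencies: swap the torch requirement for paddlepaddle-gpu (simplified).
            dst.write_text(src.read_text().replace("torch", "paddlepaddle-gpu"))
        else:
            # Everything else is copied verbatim.
            shutil.copy2(src, dst)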

Installation and Usage

Because the tool relies on some newer Python AST features, a Python interpreter of version 3.8 or later is required.

  1. Install with pip
python3.8 -m pip install -U paconvert
paconvert --in_dir torch_project [--out_dir paddle_project] [--exclude_dirs exclude_dirs] [--log_dir log_dir] [--log_level "DEBUG"] [--run_check 1]
  2. Install from source
git clone https://github.com/PaddlePaddle/PaConvert.git
python3.8 paconvert/main.py --in_dir torch_project [--out_dir paddle_project] [--exclude_dirs exclude_dirs] [--log_dir log_dir] [--log_level "DEBUG"] [--run_check 1]

Parameters

--in_dir        Input PyTorch project; can be a single file or a directory
--out_dir       Optional. Output Paddle project; can be a single file or a directory. Defaults to ./paddle_project/ under the current directory
--exclude_dirs  Optional. Files or directories to exclude from conversion; separate multiple entries with commas. Nothing is excluded by default
--log_dir       Optional. Path for the log output; by default the log is printed to the terminal
--log_level     Optional. Log level, either "INFO" or "DEBUG"; defaults to "INFO"
--run_check     Optional. Run the tool's self-check

Conversion Example

Take the following PyTorch code as an example. Before conversion:

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.Linear as Linear
import torch.nn.functional as F

class MyNet(nn.Module):
    test = "str"

    def __init__(self):
        self._fc1 = torch.nn.Linear(10, 10)
        self._fc2 = nn.Linear(10, 10)
        self._fc3 = Linear(10, 10)

    @torch.no_grad()
    def forward(self, x):
        x = self._fc1(x)
        x = self._fc2(x)
        x = self._fc3(x)
        y = torch.add(x, x)
        return F.relu(y)

net = MyNet()

sgd = optim.SGD(net.parameters(), lr=0.1, momentum=0.9)
lr = optim.lr_scheduler.MultiStepLR(sgd, milestones=[2, 4, 6], gamma=0.8)

After conversion:

import paddle


class MyNet(paddle.nn.Layer):
    test = 'str'

    def __init__(self):
        self._fc1 = paddle.nn.Linear(in_features=10, out_features=10)
        self._fc2 = paddle.nn.Linear(in_features=10, out_features=10)
        self._fc3 = paddle.nn.Linear(in_features=10, out_features=10)

    @paddle.no_grad()
    def forward(self, x):
        x = self._fc1(x)
        x = self._fc2(x)
        x = self._fc3(x)
        y = paddle.add(x=x, y=x)
        return paddle.nn.functional.relu(x=y)


net = MyNet()
>>>>>>sgd = torch.optim.SGD(net.parameters(), lr=0.1, momentum=0.9)
>>>>>>lr = torch.optim.lr_scheduler.MultiStepLR(sgd, milestones=[2, 4, 6], gamma=0.8)

The printed log is as follows:

===========================================
PyTorch to Paddle Convert Start ------>:
===========================================
Start convert /workspace/test_code.py --> /workspace/PaConvert/paddle_project/test_code.py
[test_code.py:1] remove 'import torch'
[test_code.py:2] remove 'import torch.nn as nn'
[test_code.py:3] remove 'import torch.optim as optim'
[test_code.py:4] remove 'import torch.nn.Linear as Linear'
[test_code.py:5] remove 'import torch.nn.functional as F'
[test_code.py] add 'import paddle' in first line
[test_code.py:25] [Not Support] convert torch.optim.SGD to Paddle is not supported currently
[test_code.py:26] [Not Support] convert torch.optim.lr_scheduler.MultiStepLR to Paddle is not supported currently
Finish convert /workspace/test_code.py --> /workspace/PaConvert/paddle_project/test_code.py


========================================
Convert Summary:
========================================
There are 10 Pytorch APIs in this Project:
 8  Pytorch APIs have been converted to Paddle successfully!
 2  Pytorch APIs are not supported to convert to Paddle currently!
 Convert Rate is: 80.000%

For these 2 Pytorch APIs that currently do not support to convert, which have been marked by >>>>>> before the line,
please refer to https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/guides/model_convert/convert_from_pytorch/pytorch_api_mapping_cn.html
and convert it by yourself manually. In addition, these APIs will be supported in future.

Thank you to use Paddle Code Convert Tool. You can make any suggestions to us.

After the conversion finishes, a conversion summary is printed with the total number of APIs, the number converted successfully, the number not converted, and the conversion rate. If out_dir is not specified, a ./paddle_project/ directory is created under the current directory and the output is written there. If log_dir is not specified, the log is printed to the terminal.

For example, the code above contains 10 PyTorch APIs, 8 of which were converted successfully, so the conversion rate is 80.00%. If the project contains multiple Python files, the summary aggregates the statistics across all files.

For APIs that are converted successfully, the code style changes slightly: full API names are spelled out, keyword arguments are added, and comments and extra blank lines are removed. This is because the source -> AST -> source round trip regenerates code in a canonical form, and elements such as comments and blank lines are not represented in the syntax tree, so they are dropped (the snippet below illustrates this).
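A quick way to see why this happens is to round-trip a snippet through Python's ast module (Python 3.9+ for ast.unparse):

import ast

src = "import paddle  # framework import\n\n\ny = paddle.add(x, x)\n"
print(ast.unparse(ast.parse(src)))
# Comments and extra blank lines are not stored in the AST, so the output is:
# import paddle
# y = paddle.add(x, x)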

For APIs that cannot be converted, the full PyTorch API name is spelled out and the line is prefixed with the >>>>>> marker. These APIs must be converted manually and the marker removed afterwards, otherwise the code will not run (a hedged manual rewrite of the two marked lines above is sketched below).
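As an illustration, the two marked optimizer lines could be rewritten manually roughly as follows. This is a hedged sketch under the assumption that paddle.optimizer.lr.MultiStepDecay and paddle.optimizer.Momentum are the intended equivalents; net is the MyNet instance from the converted example above.

import paddle

# In Paddle the scheduler owns the learning rate and is passed to the optimizer.
scheduler = paddle.optimizer.lr.MultiStepDecay(learning_rate=0.1, milestones=[2, 4, 6], gamma=0.8)
sgd = paddle.optimizer.Momentum(learning_rate=scheduler, momentum=0.9, parameters=net.parameters())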

Contributing

Contributions are welcome; see the contribution guide for detailed development steps.

PaConvert's People

Contributors

atlantisming, co63oc, enkilee, greatv, jeff41404, justld, li-fangyu, liyulingyue, lokezhou, longranger2, minleminzui, redcontritio, rockdog22, sanbuphy, shuaills, tomoko-hjf, txyugood, ustiniankw, xuxinyi389, zh-hike, zhengqiwen1997, zhwesky2010, zpceng314, zyckk4

PaConvert's Issues

got the wrong result when using paconvert

  • pytorch
import torch.nn as nn
from functools import partial


class test(nn.Module):
    def __init__(self, in_channels, out_channels, norm_func=nn.LayerNorm):
        super(test, self).__init__()
        self.norm = norm_func(in_channels)
        self.linear = nn.Linear(in_channels, out_channels)
    
    def forward(self, x):
        x = self.norm(x)
        x = self.linear(x)
        return x

if __name__ == "__main__":
    model = test(10, 10, partial(nn.LayerNorm, eps=0.2))
  • paddle
import paddle
from functools import partial


class test(paddle.nn.Layer):

    def __init__(self, in_channels, out_channels, norm_func=paddle.nn.LayerNorm
        ):
        super(test, self).__init__()
        self.norm = norm_func(in_channels)
        self.linear = paddle.nn.Linear(in_features=in_channels,
            out_features=out_channels)

    def forward(self, x):
        x = self.norm(x)
        x = self.linear(x)
        return x


if __name__ == '__main__':
    model = test(10, 10, partial(paddle.nn.LayerNorm, eps=0.2))
Traceback (most recent call last):
  File "/home/greatx/repos/PaConvert/paddle_project/test.py", line 21, in <module>
    model = test(10, 10, partial(paddle.nn.LayerNorm, eps=0.2))
  File "/home/greatx/repos/PaConvert/paddle_project/test.py", line 10, in __init__
    self.norm = norm_func(in_channels)
TypeError: LayerNorm.__init__() got an unexpected keyword argument 'eps'

eps should be converted to epsilon.
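A hedged manual fix for the example above, assuming the only difference is the keyword name (Paddle's LayerNorm takes epsilon rather than eps):

from functools import partial
import paddle

norm_factory = partial(paddle.nn.LayerNorm, epsilon=0.2)
layer = norm_factory(10)  # equivalent to paddle.nn.LayerNorm(10, epsilon=0.2)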

Torch code to Paddle code conversion fails

When running the test case with pytest tests/test_cummin.py, the test fails, and the code in test_project/paddle_temp.py is not rewritten to Paddle code. How can this be resolved?

The error message is as follows:

================================================= test session starts =================================================
platform win32 -- Python 3.9.18, pytest-7.4.2, pluggy-1.3.0 -- D:\anaconda2\envs\hackthon\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\lfy\Desktop\PaConvert\tests
configfile: pytest.ini
plugins: anyio-4.0.0, cov-4.1.0
collected 1 item

tests\test_cummin.py::test_case_1 FAILED

====================================================== FAILURES =======================================================
_____________________________________________________ test_case_1 _____________________________________________________

    def test_case_1():
        pytorch_code = textwrap.dedent(
            """
            import torch
            x = torch.tensor([[1.0, 1.0, 1.0],
                            [2.0, 2.0, 2.0],
                            [3.0, 3.0, 3.0]])
            result = torch.cummin(x, 0)
            """
        )
>       obj.run(pytorch_code, ["result"])

tests\test_cummin.py:32:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests\apibase.py:89: in run
    self.compare(
tests\apibase.py:165: in compare
    self.compare(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <apibase.APIBase object at 0x0000024B5A25EF70>, name = 'torch.cummin'
pytorch_result = tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]])
paddle_result = tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]]), check_value = True
check_dtype = True, check_stop_gradient = True, rtol = 1e-06, atol = 0.0

    def compare(
        self,
        name,
        pytorch_result,
        paddle_result,
        check_value=True,
        check_dtype=True,
        check_stop_gradient=True,
        rtol=1.0e-6,
        atol=0.0,
    ):
        """
        compare tensors' data, shape, requires_grad, dtype
        args:
            name: pytorch api name
            pytorch_result: pytorch Tensor
            paddle_result: paddle Tensor
            check_value: If false, the value will not be checked
            check_dtype: If false, the dtype will not be checked
            check_stop_gradient: If false, the stop gradient will not be checked
        """
        if isinstance(pytorch_result, dict):
            assert isinstance(paddle_result, dict), "paddle result should be dict"
            assert len(pytorch_result) == len(
                paddle_result
            ), "paddle result have different length with pytorch"
            pytorch_result_k = [k for k in pytorch_result.keys()]
            pytorch_result_v = [v for v in pytorch_result.values()]
            paddle_result_k = [k for k in paddle_result.keys()]
            paddle_result_v = [v for v in paddle_result.values()]
            self.compare(
                self.pytorch_api,
                pytorch_result_k,
                paddle_result_k,
                check_value,
                check_dtype,
                check_stop_gradient,
                rtol,
                atol,
            )
            self.compare(
                self.pytorch_api,
                pytorch_result_v,
                paddle_result_v,
                check_value,
                check_dtype,
                check_stop_gradient,
                rtol,
                atol,
            )
            return

        if isinstance(pytorch_result, (tuple, list)):
            assert isinstance(
                paddle_result, (tuple, list)
            ), "paddle result should be list/tuple"
            assert len(pytorch_result) == len(
                paddle_result
            ), "paddle result have different length with pytorch"
            for i in range(len(pytorch_result)):
                self.compare(
                    self.pytorch_api,
                    pytorch_result[i],
                    paddle_result[i],
                    check_value,
                    check_dtype,
                    check_stop_gradient,
                    rtol,
                    atol,
                )
            return

        if isinstance(pytorch_result, (bool, np.number, int, str, type(None))):
            assert type(paddle_result) == type(
                pytorch_result
            ), "paddle result's type [{}] should be the same with pytorch's type [{}]".format(
                type(paddle_result), type(pytorch_result)
            )
            if check_value:
                assert (
                    pytorch_result == paddle_result
                ), "API ({}): pytorch result is {}, but paddle result is {}".format(
                    name, pytorch_result, paddle_result
                )
            return

        if pytorch_result.requires_grad:
            pytorch_numpy, paddle_numpy = (
                pytorch_result.detach().numpy(),
                paddle_result.numpy(False),
            )
        elif pytorch_result.is_conj():
            pytorch_numpy, paddle_numpy = (
                pytorch_result.resolve_conj().numpy(),
                paddle_result.numpy(False),
            )
        else:
            (
                pytorch_numpy,
                paddle_numpy,
>           ) = pytorch_result.cpu().numpy(), paddle_result.numpy(False)
E           TypeError: numpy() takes 0 positional arguments but 1 was given

tests\apibase.py:205: TypeError
-------------------------------------------------- Captured log call --------------------------------------------------
INFO     Converter_0:utils.py:91 ===========================================
INFO     Converter_0:utils.py:91 PyTorch to Paddle Convert Start ------>:
INFO     Converter_0:utils.py:91 ===========================================
INFO     Converter_0:utils.py:91 Start convert file: C:\Users\lfy\Desktop\PaConvert\test_project\pytorch_temp.py --> C:\Users\lfy\Desktop\PaConvert\test_project\paddle_temp.py
INFO     Converter_0:utils.py:91 Finish convert C:\Users\lfy\Desktop\PaConvert\test_project\pytorch_temp.py --> C:\Users\lfy\Desktop\PaConvert\test_project\paddle_temp.py

INFO     Converter_0:utils.py:91
===========================================
INFO     Converter_0:utils.py:91 Convert Summary:
INFO     Converter_0:utils.py:91 ===========================================
INFO     Converter_0:utils.py:91 There are 0 Pytorch APIs in this Project:
INFO     Converter_0:utils.py:91  0  Pytorch APIs have been converted to Paddle successfully!
INFO     Converter_0:utils.py:91  0  Pytorch APIs are not supported to convert to Paddle currently!
INFO     Converter_0:utils.py:91  Convert Rate is: 0.000%
INFO     Converter_0:utils.py:91
Thank you to use Paddle Code Convert Tool. You can make any suggestions to us.
=============================================== short test summary info ===============================================
FAILED tests\test_cummin.py::test_case_1 - TypeError: numpy() takes 0 positional arguments but 1 was given
================================================== 1 failed in 3.25s ==================================================
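A possible local workaround (an assumption, not a confirmed patch): recent Paddle builds define Tensor.numpy() with no parameters, which is exactly what triggers the TypeError above, so the helper in tests\apibase.py could drop the positional False argument:

(
    pytorch_numpy,
    paddle_numpy,
) = pytorch_result.cpu().numpy(), paddle_result.numpy()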

got wrong output

 git clone https://github.com/facebookresearch/detectron2.git
cd PaConvert
python paconvert/main.py --in_dir ../detectron2/ --out_dir ../detectron2.paddle
# Copyright (c) Facebook, Inc. and its affiliates.
"""
This file contains primitives for multi-gpu communication.
This is useful when doing distributed training.
"""

import functools
import numpy as np
import torch
import torch.distributed as dist

_LOCAL_PROCESS_GROUP = None
_MISSING_LOCAL_PG_ERROR = (
    "Local process group is not yet created! Please use detectron2's `launch()` "
    "to start processes and initialize pytorch process group. If you need to start "
    "processes in other ways, please call comm.create_local_process_group("
    "num_workers_per_machine) after calling torch.distributed.init_process_group()."
)
  • Converted: detectron2.paddle/detectron2/utils/comm.py
import paddle
"""
This file contains primitives for multi-gpu communication.
This is useful when doing distributed training.
"""
import functools
import numpy as np
_LOCAL_PROCESS_GROUP = None
_MISSING_LOCAL_PG_ERROR = (
>>>>>>    "Local process group is not yet created! Please use detectron2's `launch()` to start processes and initialize pytorch process group. If you need to start processes in other ways, please call comm.create_local_process_group(num_workers_per_machine) after calling torch.distributed.init_process_group()."
    )

Paddle conversion for the torch_scatter library

torch_scatter is an extension library for PyTorch that is widely used in scientific computing, neural radiance fields, and other areas. For example, task No.173 Point-NeRF in the 4th Hackathon relies on the scatter_min operator from torch_scatter, which cannot be reproduced with Paddle. Simple operators such as scatter_add can be composed from existing ops (see the sketch below), but doing so significantly increases the cost of reproducing a model. Do you plan to provide Paddle conversion support for torch_scatter?
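For reference, this is roughly what composing scatter_add from stock Paddle ops looks like; a minimal sketch, assuming paddle.scatter with overwrite=False accumulates duplicate indices by summation (it is not torch_scatter's API):

import paddle

updates = paddle.to_tensor([1.0, 2.0, 3.0, 4.0])
index = paddle.to_tensor([0, 1, 0, 1])
out = paddle.zeros([2])
# Duplicate indices are summed when overwrite=False.
out = paddle.scatter(out, index, updates, overwrite=False)
print(out)  # [4.0, 6.0]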

Conversion fails when `\u` is in the code

code

import numpy as np

def extract_vertices(lines):
	'''extract vertices info from txt lines
	Input:
		lines   : list of string info
	Output:
		vertices: vertices of text regions <numpy.ndarray, (n,8)>
		labels  : 1->valid, 0->ignore, <numpy.ndarray, (n,)>
	'''
	labels = []
	vertices = []
	for line in lines:
		vertices.append(list(map(int,line.rstrip('\n').lstrip('\ufeff').split(',')[:8])))
		label = 0 if '###' in line else 1
		labels.append(label)
	return np.array(vertices), np.array(labels)

output

python paconvert/main.py --in_dir ~/repos/bug_test/ --out_dir ~/repos/bug_test_
===========================================
PyTorch to Paddle Convert Start ------>:
===========================================
Start convert file: /home/greatx/repos/bug_test/test.py --> /home/greatx/repos/bug_test_/test.py
Traceback (most recent call last):
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/sre_parse.py", line 1051, in parse_template
    this = chr(ESCAPES[this][1])
KeyError: '\\u'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/greatx/repos/PaConvert/paconvert/main.py", line 145, in <module>
    main()
  File "/home/greatx/repos/PaConvert/paconvert/main.py", line 131, in main
    converter.run(args.in_dir, args.out_dir, args.exclude_dirs)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/converter.py", line 88, in run
    self.transfer_dir(in_dir, out_dir, exclude_dir_list)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/converter.py", line 186, in transfer_dir
    self.transfer_dir(old_path, new_path, exclude_dir_list)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/converter.py", line 164, in transfer_dir
    self.transfer_file(old_path, new_path)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/converter.py", line 202, in transfer_file
    self.transfer_node(root, old_path)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/converter.py", line 242, in transfer_node
    trans.transform()
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 81, in transform
    self.visit(self.root)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
    node = super(BaseTransformer, self).visit(node)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
    return visitor(node)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 295, in visit_Module
    super(BaseTransformer, self).generic_visit(node)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 494, in generic_visit
    value = self.visit(value)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
    node = super(BaseTransformer, self).visit(node)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
    return visitor(node)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 247, in visit_FunctionDef
    super(BaseTransformer, self).generic_visit(node)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 494, in generic_visit
    value = self.visit(value)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
    node = super(BaseTransformer, self).visit(node)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
    return visitor(node)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 277, in visit_For
    super(BaseTransformer, self).generic_visit(node)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 494, in generic_visit
    value = self.visit(value)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
    node = super(BaseTransformer, self).visit(node)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
    return visitor(node)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/transformer/basic_transformer.py", line 666, in visit_Expr
    new_node = self.visit(old_value)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
    node = super(BaseTransformer, self).visit(node)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
    return visitor(node)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/transformer/basic_transformer.py", line 363, in visit_Call
    super(BasicTransformer, self).generic_visit(node)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 494, in generic_visit
    value = self.visit(value)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
    node = super(BaseTransformer, self).visit(node)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
    return visitor(node)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/transformer/basic_transformer.py", line 363, in visit_Call
    super(BasicTransformer, self).generic_visit(node)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 494, in generic_visit
    value = self.visit(value)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
    node = super(BaseTransformer, self).visit(node)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
    return visitor(node)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/transformer/basic_transformer.py", line 363, in visit_Call
    super(BasicTransformer, self).generic_visit(node)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 494, in generic_visit
    value = self.visit(value)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
    node = super(BaseTransformer, self).visit(node)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
    return visitor(node)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 503, in generic_visit
    new_node = self.visit(old_value)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 86, in visit
    node = super(BaseTransformer, self).visit(node)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/ast.py", line 418, in visit
    return visitor(node)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/transformer/basic_transformer.py", line 539, in visit_Call
    return self.trans_class_method(node, torch_class_api)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/transformer/basic_transformer.py", line 556, in trans_class_method
    node_list = matcher.get_paddle_class_nodes(
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 508, in get_paddle_class_nodes
    self.parse_func(func)
  File "/home/greatx/repos/PaConvert/paconvert/../paconvert/base.py", line 390, in parse_func
    new_paddle_api = re.sub(
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/re.py", line 209, in sub
    return _compile(pattern, flags).sub(repl, string, count)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/re.py", line 326, in _subx
    template = _compile_repl(template, pattern)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/re.py", line 317, in _compile_repl
    return sre_parse.parse_template(repl, pattern)
  File "/home/greatx/miniconda3/envs/jit-env/lib/python3.10/sre_parse.py", line 1054, in parse_template
    raise s.error('bad escape %s' % this, len(this))
re.error: bad escape \u at position 26

torch.nn.functional.pad fails to convert

The result is as follows (a hedged manual rewrite of the unconverted .view calls appears after the snippet):

    def _relative_position_to_absolute_position(self, x):
        """
        x: [b, h, l, 2*l-1]
        ret: [b, h, l, l]
        """
        batch, heads, length, _ = x.shape
>>>        x = torch.nn.functional.pad(x, commons.convert_pad_shape([[0, 0], [
            0, 0], [0, 0], [0, 1]]))
        """Class Method: *.view, can not convert, please check whether it is torch.Tensor.*/Optimizer.*/nn.Module.*/torch.distributions.Distribution.*/torch.autograd.function.FunctionCtx.*/torch.profiler.profile.*/torch.autograd.profiler.profile.*, and convert manually"""
>>>        x_flat = x.view([batch, heads, length * 2 * length])
>>>        x_flat = torch.nn.functional.pad(x_flat, commons.convert_pad_shape(
            [[0, 0], [0, 0], [0, length - 1]]))
        """Class Method: *.view, can not convert, please check whether it is torch.Tensor.*/Optimizer.*/nn.Module.*/torch.distributions.Distribution.*/torch.autograd.function.FunctionCtx.*/torch.profiler.profile.*/torch.autograd.profiler.profile.*, and convert manually"""
>>>        x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[:,
            :, :length, length - 1:]
        return x_final

    def _absolute_position_to_relative_position(self, x):
        """
        x: [b, h, l, l]
        ret: [b, h, l, 2*l-1]
        """
        batch, heads, length, _ = x.shape
>>>        x = torch.nn.functional.pad(x, commons.convert_pad_shape([[0, 0], [
            0, 0], [0, 0], [0, length - 1]]))
        """Class Method: *.view, can not convert, please check whether it is torch.Tensor.*/Optimizer.*/nn.Module.*/torch.distributions.Distribution.*/torch.autograd.function.FunctionCtx.*/torch.profiler.profile.*/torch.autograd.profiler.profile.*, and convert manually"""
>>>        x_flat = x.view([batch, heads, length ** 2 + length * (length - 1)])
>>>        x_flat = torch.nn.functional.pad(x_flat, commons.convert_pad_shape(
            [[0, 0], [0, 0], [length, 0]]))
        """Class Method: *.view, can not convert, please check whether it is torch.Tensor.*/Optimizer.*/nn.Module.*/torch.distributions.Distribution.*/torch.autograd.function.FunctionCtx.*/torch.profiler.profile.*/torch.autograd.profiler.profile.*, and convert manually"""
>>>        x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
        return x_final
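A hedged manual rewrite for the unconverted .view calls above (an assumption about the intended equivalence, not tool output): Paddle tensors use reshape where PyTorch code uses view.

import paddle

x = paddle.rand([2, 4, 3, 6])  # [batch, heads, length, 2*length]
batch, heads, length, _ = x.shape
# PyTorch: x_flat = x.view([batch, heads, length * 2 * length])
x_flat = x.reshape([batch, heads, length * 2 * length])
print(x_flat.shape)  # [2, 4, 18]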

Bug during conversion

  • torch
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import List, Tuple, Optional
import math

# Calculate asymmetric TensorFlow-like 'SAME' padding for a convolution
def get_same_padding(x: int, kernel_size: int, stride: int, dilation: int):
    if isinstance(x, torch.Tensor):
        return torch.clamp(((x / stride).ceil() - 1) * stride + (kernel_size - 1) * dilation + 1 - x, min=0)
    else:
        return max((math.ceil(x / stride) - 1) * stride + (kernel_size - 1) * dilation + 1 - x, 0)


# Dynamically pad input x with 'SAME' padding for conv with specified args
def pad_same(
        x,
        kernel_size: List[int],
        stride: List[int],
        dilation: List[int] = (1, 1),
        value: float = 0,
):
    ih, iw = x.size()[-2:]
    pad_h = get_same_padding(ih, kernel_size[0], stride[0], dilation[0])
    pad_w = get_same_padding(iw, kernel_size[1], stride[1], dilation[1])
    x = F.pad(x, (pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2), value=value)
    return x


def avg_pool2d_same(x, kernel_size: List[int], stride: List[int], padding: List[int] = (0, 0),
                    ceil_mode: bool = False, count_include_pad: bool = True):
    # FIXME how to deal with count_include_pad vs not for external padding?
    x = pad_same(x, kernel_size, stride)
    return F.avg_pool2d(x, kernel_size, stride, (0, 0), ceil_mode, count_include_pad)
  • paddlepaddle
import sys
sys.path.append('/home/greatx/repos/PaConvert/paddle_project/utils')
import paddle_aux
import paddle
from typing import List, Tuple, Optional
import math


def get_same_padding(x: int, kernel_size: int, stride: int, dilation: int):
    if isinstance(x, paddle.Tensor):
        return paddle.clip(x=((x / stride).ceil() - 1) * stride + (
            kernel_size - 1) * dilation + 1 - x, min=0)
    else:
        return max((math.ceil(x / stride) - 1) * stride + (kernel_size - 1) *
            dilation + 1 - x, 0)


def pad_same(x, kernel_size: List[int], stride: List[int], dilation: List[
    int]=(1, 1), value: float=0):
    ih, iw = x.shape[-2:]
    pad_h = get_same_padding(ih, kernel_size[0], stride[0], dilation[0])
    pad_w = get_same_padding(iw, kernel_size[1], stride[1], dilation[1])
    x = paddle_aux._FUNCTIONAL_PAD(pad=(pad_w // 2, pad_w - pad_w // 2, 
        pad_h // 2, pad_h - pad_h // 2), value=value, x=x)
    return x


def avg_pool2d_same(x, kernel_size: List[int], stride: List[int], padding:
    List[int]=(0, 0), ceil_mode: bool=False, count_include_pad: bool=True):
    x = pad_same(x, kernel_size, stride)
    return paddle.nn.functional.avg_pool2d(kernel_size=kernel_size, stride=
        stride, padding=(0, 0), ceil_mode=ceil_mode, x=x, exclusive=
        notcount_include_pad)

count_include_pad -> not count_include_pad -> notcount_include_pad
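A hedged sketch of what the last statement should look like after a manual fix (the negation needs a space: not count_include_pad), assuming avg_pool2d's exclusive flag is the intended counterpart of count_include_pad:

import paddle

def avg_pool2d_same(x, kernel_size, stride, padding=(0, 0), ceil_mode=False, count_include_pad=True):
    x = pad_same(x, kernel_size, stride)  # helper defined in the converted file above
    # exclusive is the logical negation of count_include_pad.
    return paddle.nn.functional.avg_pool2d(
        x=x, kernel_size=kernel_size, stride=stride, padding=(0, 0),
        ceil_mode=ceil_mode, exclusive=not count_include_pad)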

[Community governance] co63oc applies for Committer status

Issue description (Please describe your issue)

Basic information

Applicant GitHub ID: @co63oc
Merged PRs in the PaConvert repo: 50
Reviewed PRs in the PaConvert repo: 0
Issues reported in the PaConvert repo: 0

Community contributions

Contributions previously made to this repo:
implemented a number of operator conversion rules.

Highlighted merged PRs

  • #197, "转换规则 No. 196/197/232/233/319", reviewer: zhwesky2010
  • #198, "转换规则 No. 323/333", reviewer: zhwesky2010
  • #199, "转换规则 No. 349/350", reviewer: zhwesky2010

Sponsors' opinion

@Ligoml @luotao1

PyTorch-to-Paddle code conversion tool: open-source task

Issue description

Hello everyone. To automate rewriting PyTorch code into Paddle code and thereby speed up model migration, we built the code conversion tool PaddlePaddle Code Convert Toolkits. It already supports automatic conversion of 1,000+ PyTorch APIs, and we are now opening up the development of conversion rules for 408 additional APIs. You are welcome to submit PRs to help support automatic conversion 🎉🎉🎉.

Through this activity you can gain a more detailed understanding of how the PyTorch and Paddle frameworks differ in usage and design, and become more familiar with deep learning frameworks.

🍻 What you need to do

The tasks are listed in the online task sheet. To let everyone freely pick the APIs they are familiar with, the APIs are not grouped this time; you may pick one or more APIs to implement automatic conversion for. To claim a task, simply reply under this issue with the task IDs you claim (at least one; claiming several at once is encouraged, and the more the better). You are welcome to claim tasks and submit PRs.

  1. Fork the two GitHub repos PaddlePaddle/docs and PaddlePaddle/PaConvert.

  2. Write the API mapping: submit a PR to PaddlePaddle/docs, adding one md file per API under the corresponding directory in docs/guides/model_convert/convert_from_pytorch/api_difference, named after the PyTorch API. If a mapping for the API already exists, no new md file is needed; just check and correct the existing document, and update it if it differs from the AST rule developed later.

  3. Follow 《API映射关系-格式规范》 for the mapping documents. PR title format: 映射文档 No. xxx/yyy/zzz, and link this issue in the PR description. Please follow the format specification strictly to avoid unnecessary review cost; PRs that do not meet the specification will not be merged.

  4. The API mapping is effectively the manual conversion plan. Once it is done, develop the AST auto-conversion rule by following steps 3-5 of 《AST转换规则开发步骤》. PR title format: 转换规则 No. xxx/yyy/zzz, and link both this issue and the mapping-document PR in the description. Please follow the development guide strictly to avoid unnecessary review cost; PRs that do not meet the requirements will not be merged.

  5. Each task No. xxx consists of one mapping-document PR plus one conversion-rule PR; the task is complete only when both are merged. Please submit several tasks at a time where possible to improve review efficiency.

  6. Review process: mention @zhwesky2010 in a comment to request a review, and address review comments promptly; the code will be merged once the review passes.

✨ Notes:

  1. Timeline: PRs must be merged by 2023/10/30.

  2. Claiming: reply under this issue with the task IDs you claim (claiming several at once is recommended).

  3. Before submitting a PR, install pre-commit as described on the official site and check the code format; otherwise CI may fail.

  4. Please make sure CI passes before requesting a review, to avoid unnecessary review cost.

  5. PR title formats: 映射文档 No. xxx/yyy/zzz and 转换规则 No. xxx/yyy/zzz. Both PR descriptions must link this issue, and the PR Docs field of the conversion-rule PR must link the mapping-document PR.

  6. If an API cannot be converted because of a missing feature, a bug, or a behavioral difference in Paddle, reply directly under this issue. We will confirm and record API issues periodically. For such cases you may skip the Matcher for now, but you still need to provide a skipped version of the test case (see the development docs or existing unit tests for how to skip). Describe the problem in the following format:

    • torch.diff: Paddle only supports n=1, while PyTorch supports arbitrary n
    • torch.nonzero: behavior differs when as_tuple is specified; Paddle adds an extra, unreasonable dimension
    • torch.dstack: missing feature
  7. The task sheet contains a rough preliminary classification of the mappings, for reference only; the developer's own analysis takes precedence.

  8. PRs resolving items from the historical good first issue list are also welcome. Feel free to contact 花花 to join the community and enjoy open source together.

