tencent / forward
A library for high performance deep learning inference on NVIDIA GPUs.
License: Other
Hello, when I compile I get an error that cublas_device cannot be found. Details below:
TensorRT Version: 7.2.3.4
CUDA Version: 10.2
CUDNN Version: 7.4
Operating System: ubuntu18.04
Python Version (if applicable): 3.7
PyTorch Version (if applicable): 1.8
Error message:
[ 97%] Linking CXX shared library ../../bin/libfwd_torch.so
[ 97%] Built target fwd_torch
Scanning dependencies of target forward
[ 98%] Building CXX object source/py_fwd/CMakeFiles/forward.dir/py_forward.cpp.o
[100%] Linking CXX shared module ../../bin/forward.cpython-37m-x86_64-linux-gnu.so
/usr/bin/x86_64-linux-gnu-ld: cannot find -lCUDA_cublas_device_LIBRARY-NOTFOUND
collect2: error: ld returned 1 exit status
source/py_fwd/CMakeFiles/forward.dir/build.make:117: recipe for target 'bin/forward.cpython-37m-x86_64-linux-gnu.so' failed
make[2]: *** [bin/forward.cpython-37m-x86_64-linux-gnu.so] Error 1
CMakeFiles/Makefile2:687: recipe for target 'source/py_fwd/CMakeFiles/forward.dir/all' failed
make[1]: *** [source/py_fwd/CMakeFiles/forward.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
The cublas_device library cannot be found. As I understand it, this library was removed after CUDA 10.
[ERROR] 2021-06-30 13:49:56,627 trt_keras_parser.cpp(197): Creating FlattenDesc failed! Please Check implementation and inputs.
[ERROR] 2021-06-30 13:49:56,627 keras_engine.cpp(129): Parse Keras Graph failed
As above, please take a look. Thanks!
Describe the bug
I can convert my PyTorch model (a ResNet classification model) with Forward. But when I try to convert a TensorFlow pb model, building the saved pb model fails, even though I obtained forward.**.so successfully and can import forward. When I run the line engine = builder.build('./test_tfmodel.pb', dummy_inputs), the error "tf_graph_parser.cpp(48): Creating input desc is failed." appears, followed by an abort with a core dump.
TensorRT Version: 7.2.1.6
NVIDIA GPU: GTX1080TI
NVIDIA Driver Version: 450.80.02
CUDA Version: 11.0
CUDNN Version: 8.0.4
Operating System: 7.5
Python Version (if applicable): 3.6.13
Tensorflow Version (if applicable): tensorflow==1.15.0(cpu)
PyTorch Version (if applicable): 1.7.1
To Reproduce
Steps to reproduce the behavior:
1.
cmake .. \
  -DTensorRT_ROOT=/data/wind/TensorRT-7.2.1.6/ \
  -DENABLE_LOGGING=OFF \
  -DENABLE_PROFILING=OFF \
  -DENABLE_DYNAMIC_BATCH=OFF \
  -DBUILD_PYTHON_LIB=ON \
  -DPYTHON_EXECUTABLE=/root/anaconda3/envs/xyang/bin/python \
  -DENABLE_TORCH=OFF \
  -DENABLE_TENSORFLOW=ON \
  -DENABLE_KERAS=OFF
make -j
After this, running import forward works without error.
import numpy as np
import forward
# 1. Build the engine
builder = forward.TfBuilder()
# img = torch.randn(1, 784)
img = np.ones([1,784], dtype='float32')
dummy_inputs = {'inputs': img}
infer_mode = 'float32' # float32 / float16 / int8_calib / int8
builder.set_mode(infer_mode)
engine = builder.build('./test_tfmodel.pb', dummy_inputs)
Screenshots
When I run ./unit_test --gtest_filter=TestTfNodes.*, the same abort with a core dump happens. But I can still import forward successfully.
Thank you.
TensorFlow Slim has a layer named "Flatten" that expands into several TensorFlow operations such as "Shape", "StridedSlice", and "Reshape". The test code is as follows.
import numpy as np
import os

def create_tf_flatten(model_file):
    import tensorflow as tf
    import tf_slim as slim
    with tf.Session() as sess:
        x1 = tf.placeholder(shape=(None, 299, 299, 3), dtype=tf.float32, name='x')
        op = slim.flatten(x1)
        sess.run(tf.global_variables_initializer())
        feed_dict = {x1: np.ones((1, 299, 299, 3))}
        print(sess.run(op, feed_dict))
        graphdef = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, ['Flatten/flatten/Reshape'])
        tf.train.write_graph(graphdef, './', model_file, as_text=False)
        return feed_dict

def forward_transfer(model_file, dummy_input):
    import forward
    # 1. Build the engine
    builder = forward.TfBuilder()
    infer_mode = 'float32'
    builder.set_mode(infer_mode)
    tf_engine = builder.build(model_file, dummy_input)
    # save engine
    engine_path = os.path.splitext(model_file)[0] + '.engine'
    tf_engine.save(engine_path)

def test_forward(model_file, dummy_input):
    import forward
    engine_path = os.path.splitext(model_file)[0] + '.engine'
    # load saved engine
    tf_engine = forward.TfEngine()
    tf_engine.load(engine_path)
    inputs = dummy_input
    outputs = tf_engine.forward(inputs)
    print(outputs)

model_file = 'tf_model.pb'
create_tf_flatten(model_file)
x = {'x': np.ones([1, 299, 299, 3], dtype='float32')}
forward_transfer(model_file, x)
test_forward(model_file, x)
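For reference, the Shape/StridedSlice/Reshape subgraph that slim.flatten emits just computes a (batch, -1) reshape. A minimal numpy sketch of the equivalent computation (independent of Forward, shapes taken from the repro above):

```python
import numpy as np

# slim.flatten keeps the batch dimension and collapses the rest,
# which is what the Shape -> StridedSlice -> Reshape subgraph computes.
x = np.ones((1, 299, 299, 3), dtype=np.float32)
flat = x.reshape(x.shape[0], -1)

print(flat.shape)  # (1, 268203)
```

A parser that supports Reshape directly but not the dynamic Shape/StridedSlice pair would fail on exactly this subgraph.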
TensorRT Version: 7.1.3.4
NVIDIA GPU: T4
NVIDIA Driver Version: 410.104
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System: ubuntu 18.04
Python Version (if applicable): 3.6.9
Tensorflow Version (if applicable): 1.15.0
PyTorch Version (if applicable): 1.7.0
Describe the bug
The first problem: even though the plugin inherits from IPluginV2DynamicExt, it cannot be used with dynamic inputs, because getOutputDimensions() is implemented incorrectly.
The second problem: the plugin's padding step produces wrong output on some GPUs, for reasons unknown. For example, the output is correct on a 2080 Ti but wrong on an A100.
TensorRT Version: 7.2.1
NVIDIA GPU: 2080TI & A100
NVIDIA Driver Version:
CUDA Version: 11.0
CUDNN Version:
Operating System:
Python Version (if applicable):
Tensorflow Version (if applicable):
PyTorch Version (if applicable):
After reading through the docs and code, I understand that Forward uses TensorRT's network class directly to translate each layer of the original model one by one, effectively implementing a parser for TF and Torch (much like the ONNX parser). I also see that NVIDIA has similar open-source projects such as TRTorch and TF-TRT. What are the advantages and disadvantages of Forward compared to these projects?
Thanks!
I've done the 'cmake' step successfully. But when I run 'make', the following error happens:
[ 5%] Building NVCC (Device) object source/trt_engine/CMakeFiles/trt_engine.dir/trt_network_crt/plugins/emb_layer_norm_plugin/trt_engine_generated_emb_layer_norm_kernel.cu.o
In file included from /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/block_discontinuity.cuh:37:0,
from /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/block_histogram_sort.cuh:37,
from /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/block_histogram.cuh:36,
from /home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/cub.cuh:38,
from /home/agx/SCW/Forward-master/source/trt_engine/trt_network_crt/plugins/common/bert_plugin_util.h:33,
from /home/agx/SCW/Forward-master/source/trt_engine/trt_network_crt/plugins/emb_layer_norm_plugin/emb_layer_norm_kernel.cu:36:
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:238:61: warning: missing terminating " character
asprmt"bfi.b32 %0, %1, %2, %3;" : "=r"(ret) : ar"(x), "r"(x), b, in) - 1;
^
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:282:2: error: #else without #if
#else
^~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:284:2: error: #endif without #if
#endif
^~~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:294:2: error: #else without #if
#else
^~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:296:2: error: #endif without #if
#endif
^~~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:306:2: error: #else without #if
#else
^~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:308:2: error: #endif without #if
#endif}
^~~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:319:2: error: #else without #if
#else
^~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:321:46: warning: missing terminating " character
3;" : "word(ret) : word("(x), "rc_bit-of("(x), flags() + z;
^
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:322:2: error: #endif without #if
#endif
^~~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:334:2: error: #else without #if
#else
^~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:336:46: warning: missing terminating " character
3;" : "word(ret) : word("(x), "rc_bit-of("(x), flags() + z;
^
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:337:2: error: #endif without #if
#endif
^~~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:349:2: error: #else without #if
#else
^~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:351:44: warning: missing terminating " character
3;" : "word(ret) : word("(x), "rc_lane("(x), flags() + z;
^
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:352:2: error: #endif without #if
#endif
^~~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:371:61: warning: missing terminating " character
asfma.rz.ffi.b32 %0, %1, %2, %3;" f"(d(ret)f: ar"(xf, "r"(xf, cr) - 1;
^
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:375:2: error: #endif without #if
#endif // DOXYGEN_SHOULD_SKIP_THIS
^~~~~
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:383:20: warning: missing terminating " character
vo orileasexit;")>())
^
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:395:20: warning: missing terminating " character
vo orileasllup;")>()x;
^
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:406:18: warning: missing terminating " character
tef adIdx"urn x;
^
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/block/specializations/../../block/../util_ptx.cuh:475:1: error: unterminated comment
/**
^
In file included from /home/agx/SCW/Forward-master/source/trt_engine/trt_network_crt/plugins/common/bert_plugin_util.h:33:0,
from /home/agx/SCW/Forward-master/source/trt_engine/trt_network_crt/plugins/emb_layer_norm_plugin/emb_layer_norm_kernel.cu:36:
/home/agx/SCW/Forward-master/source/third_party/cub-1.8.0/cub/cub.cuh:54:10: fatal error: device/device_run_length_encode.cuh: No such file or directory
#include "device/device_run_length_encode.cuh"
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
CMake Error at trt_engine_generated_emb_layer_norm_kernel.cu.o.cmake:220 (message):
Error generating
/home/agx/SCW/Forward-master/build/source/trt_engine/CMakeFiles/trt_engine.dir/trt_network_crt/plugins/emb_layer_norm_plugin/./trt_engine_generated_emb_layer_norm_kernel.cu.o
source/trt_engine/CMakeFiles/trt_engine.dir/build.make:1591: recipe for target 'source/trt_engine/CMakeFiles/trt_engine.dir/trt_network_crt/plugins/emb_layer_norm_plugin/trt_engine_generated_emb_layer_norm_kernel.cu.o' failed
make[2]: *** [source/trt_engine/CMakeFiles/trt_engine.dir/trt_network_crt/plugins/emb_layer_norm_plugin/trt_engine_generated_emb_layer_norm_kernel.cu.o] Error 1
CMakeFiles/Makefile2:556: recipe for target 'source/trt_engine/CMakeFiles/trt_engine.dir/all' failed
make[1]: *** [source/trt_engine/CMakeFiles/trt_engine.dir/all] Error 2
Makefile:90: recipe for target 'all' failed
make: *** [all] Error 2
Wonder what's wrong... (From the warnings, the cub-1.8.0 sources under source/third_party appear to be corrupted or incompletely downloaded.)
Describe the bug
An error occurs when optimizing a model with Forward for PyTorch, using resnest50d as an example:
The code is as follows:
origion_model = timm.create_model('resnest50d', pretrained=True)
infer_mode = 'float16'  # float32 / float16 / int8_calib / int8
jit_model = torch.jit.script(origion_model).cpu().eval()
model_path = "resnest50d_bs_{}-half_{}_jit_cpu.pt".format(batch_size, half)
jit_model.save(model_path)
dummy = torch.randn(batch_size, 3, 244, 244)
builder = forward.TorchBuilder()
builder.set_mode(infer_mode)
engine = builder.build(model_path, dummy)
print(engine)
outputs = engine.forward(dummy)
print(outputs)
os._exit(-1)
TensorRT Version: 7.2.3.4
NVIDIA GPU: T4
NVIDIA Driver Version: 440.33.01
CUDA Version: 10.2
CUDNN Version: 7
Operating System: ubuntu 18.04
Python Version (if applicable): 3.6
Tensorflow Version (if applicable): 2.3.4
PyTorch Version (if applicable): 1.10.1
Describe the bug
Command executed:
cmake .. -DTensorRT_ROOT=/home/soft/wp/TensorRT-8.2.0.6 -DENABLE_TORCH=ON -DENABLE_TORCH_PLUGIN=ON -DCMAKE_PREFIX_PATH=/home/soft/wp/libtorch
Error log:
-- Found Threads: TRUE
-- Found CUDA: /usr/local/cuda (found version "11.1")
-- CUDA_NVCC_FLAGS: -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75
-- Using the single-header code from /home/soft/wp/Forward/source/third_party/json/single_include/
-- Found TensorRT: /home/soft/wp/TensorRT-8.2.0.6/lib/libnvinfer.so;/home/soft/wp/TensorRT-8.2.0.6/lib/libnvinfer_plugin.so;/home/soft/wp/TensorRT-8.2.0.6/lib/libnvonnxparser.so;/home/soft/wp/TensorRT-8.2.0.6/lib/libnvparsers.so (found version "8.2.0")
-- Found CUDA: /usr/local/cuda (found version "11.1")
-- Caffe2: CUDA detected: 11.1
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 11.1
-- Found CUDNN: /usr/local/cuda/lib64/libcudnn.so
-- Found cuDNN: v? (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)
CMake Error at /home/soft/wp/libtorch/share/cmake/Caffe2/public/cuda.cmake:174 (message):
PyTorch requires cuDNN 7 and above.
Call Stack (most recent call first):
/home/soft/wp/libtorch/share/cmake/Caffe2/Caffe2Config.cmake:88 (include)
/home/soft/wp/libtorch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
CMakeLists.txt:248 (find_package)
-- Configuring incomplete, errors occurred!
ls /usr/local/cuda/lib64 | grep cudnn
TensorRT Version: TensorRT-8.2.0.6
NVIDIA GPU: P8
NVIDIA Driver Version: 465.19.01
CUDA Version: 11.1
CUDNN Version: 8.2.1.32
Operating System: Ubuntu 16.04.2 LTS
Python Version (if applicable): 3.7
Tensorflow Version (if applicable): 2.6.0
PyTorch Version (if applicable): 1.9.0+cu111
Describe the bug
An error occurs when building Forward-PyTorch:
cmake command:
cmake .. -DENABLE_TORCH=ON -DBUILD_PYTHON_LIB=ON -DPYTHON_EXECUTABLE="/usr/local/bin/python"
Is something wrong with my PyTorch installation?
TensorRT Version: 7.2.3.4
NVIDIA GPU: T4
NVIDIA Driver Version: 440.33.01
CUDA Version: 10.2
CUDNN Version:
Operating System: ubuntu 18.04
Python Version (if applicable): 3.6
Tensorflow Version (if applicable): 2.3.4
PyTorch Version (if applicable): 1.10.1
Describe the bug
When converting a Keras model to TRT, the process core dumps without showing any error or warning. The Keras model is DenseNet121 from keras. The code is as follows:
import numpy as np
import os

def save_keras_model(model_file):
    from keras.models import Model
    from keras.layers import Dense, Dropout
    import keras.backend as K
    from keras.applications.densenet import DenseNet121
    import keras.layers as layers
    import keras.models as models
    import keras.utils as utils

    class densenet121(object):
        def __init__(self, image_size):
            self.base_model = DenseNet121(input_shape=(image_size, image_size, 3),
                                          include_top=False, pooling='avg',
                                          backend=K,
                                          layers=layers,
                                          models=models,
                                          utils=utils,
                                          weights=None)
            x = Dropout(0.75)(self.base_model.output)
            x = Dense(3, activation='softmax', name='top_layer')(x)
            self.model = Model(self.base_model.input, x)
            print("Densenet121")

    model = densenet121(512).model
    model.save(model_file)

def forward_transfer(model_file):
    import forward
    # 1. Build the engine
    builder = forward.KerasBuilder()
    infer_mode = 'float32'  # Infer Mode: float32 / float16 / int8_calib / int8
    batch_size = 1
    max_workspace_size = 1 << 32
    builder.set_mode(infer_mode)
    engine = builder.build(model_file, batch_size)
    engine_path = os.path.splitext(model_file)[0] + '.engine'
    engine.save(engine_path)

def test_forward(model_file, inputs):
    import forward
    engine_path = os.path.splitext(model_file)[0] + '.engine'
    engine = forward.KerasEngine()
    engine.load(engine_path)
    # inputs = np.ones((1, 24, 24, 3))
    outputs = engine.forward([inputs])  # list-type output
    print(outputs)

model_path = 'densenet121.h5'
save_keras_model(model_path)
x = np.ones((1, 512, 512, 3))
forward_transfer(model_path)
test_forward(model_path, x)
TensorRT Version: 7.1.3.4
NVIDIA GPU: T4
NVIDIA Driver Version: 410.104
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System: ubuntu 18.04
Python Version (if applicable): 3.6.9
Tensorflow Version (if applicable): 1.15.0
PyTorch Version (if applicable): 1.7.0
print info:
[INFO ] 2021-05-06 16:16:20,516 trt_keras_parser.cpp(153): Parser::CreateNHWC2NCHWLayerDesc
[INFO ] 2021-05-06 16:16:20,516 keras_activation_creator.h(50): TrtActivationDesc::Create
Segmentation fault (core dumped)
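The log line Parser::CreateNHWC2NCHWLayerDesc suggests the parser converts the Keras NHWC layout to the NCHW layout TensorRT uses internally. If the built engine ends up expecting NCHW input, the host array may need transposing before inference. A minimal numpy sketch, assuming the engine wants NCHW (this is an assumption, not confirmed Forward behavior):

```python
import numpy as np

# Keras DenseNet121 takes NHWC input: (batch, height, width, channels).
x_nhwc = np.ones((1, 512, 512, 3), dtype=np.float32)

# Reorder the axes to NCHW: (batch, channels, height, width),
# and make the result contiguous so it can be copied to the GPU directly.
x_nchw = np.ascontiguousarray(x_nhwc.transpose(0, 3, 1, 2))

print(x_nchw.shape)  # (1, 3, 512, 512)
```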
Does Forward support Transformer models for neural machine translation?
error info:
[INFO ] 2021-04-08 14:38:59,844 tf_graph_parser.cpp(116): Input = input : input
[INFO ] 2021-04-08 14:38:59,844 tf_shuffle_creator.h(50): TrtShuffleDesc::Create
[INFO ] 2021-04-08 14:38:59,844 tf_softmax_creator.h(49): TrtSoftmaxDesc::Create
[INFO ] 2021-04-08 14:38:59,844 tf_shuffle_creator.h(50): TrtShuffleDesc::Create
[INFO ] 2021-04-08 14:38:59,845 tf_shuffle_creator.h(50): TrtShuffleDesc::Create
[INFO ] 2021-04-08 14:38:59,845 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,916 tf_pooling_creator.h(51): TrtPoolingDesc::Create
[INFO ] 2021-04-08 14:38:59,916 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,916 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,929 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,941 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,941 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,941 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,941 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,941 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,941 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,946 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:38:59,947 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,954 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,954 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,954 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,954 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,954 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,954 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,960 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:38:59,961 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,966 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,966 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,966 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,966 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,967 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,967 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,973 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,977 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,977 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,977 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,977 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,977 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,977 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,979 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:38:59,980 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,982 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,982 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,982 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,982 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,982 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,982 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,984 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:38:59,985 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,988 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,988 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,988 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,988 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,989 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,989 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,991 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,992 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,992 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,992 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,992 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,992 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,992 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,993 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:38:59,994 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,995 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,995 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,995 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,995 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,995 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,995 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,996 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:38:59,997 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,998 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,998 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:38:59,998 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:38:59,998 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,998 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:38:59,998 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:38:59,999 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:38:59,999 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,000 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,000 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:39:00,001 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:39:00,001 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,001 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,001 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,002 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,002 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,002 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:39:00,002 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:39:00,002 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,003 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,003 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,004 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:39:00,004 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,004 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,004 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:39:00,004 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:39:00,004 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,004 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,004 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,005 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:39:00,005 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,005 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,005 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:39:00,005 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:39:00,005 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,005 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,005 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,005 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:39:00,006 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_element_wise_creator.h(54): TrtElementWiseDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:39:00,006 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:39:00,006 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:39:00,007 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,007 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,007 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,007 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,007 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,007 tf_batch_norm_creator.h(51): TrtNormalizationDesc::Create
[INFO ] 2021-04-08 14:39:00,007 tf_batch_norm_creator.h(78): BatchNorm Use Scale Implementation
[INFO ] 2021-04-08 14:39:00,007 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,007 tf_clamp_creator.h(50): TrtClampDesc::Create
[INFO ] 2021-04-08 14:39:00,007 tf_convolution_creator.h(72): TrtConvolutionDesc::Create
[INFO ] 2021-04-08 14:39:00,588 trt_input_creator.h(44): TrtInputDesc::CreateLayer
[INFO ] 2021-04-08 14:39:00,588 trt_convolution_creator.h(44): TrtConvolutionNdDesc::CreateLayer
[INFO ] 2021-04-08 14:39:00,589 trt_clamp_creator.h(44): TrtClampDesc::CreateLayer
[ERROR] 2021-04-08 14:39:00,589 trt_logger.cpp(64): [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input.
[ERROR] 2021-04-08 14:39:00,589 trt_logger.cpp(64): [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input.
[INFO ] 2021-04-08 14:39:00,589 trt_convolution_creator.h(44): TrtConvolutionNdDesc::CreateLayer
[INFO ] 2021-04-08 14:39:00,589 trt_scale_creator.h(43): TrtScaleDesc::CreateLayer
[INFO ] 2021-04-08 14:39:00,589 trt_clamp_creator.h(44): TrtClampDesc::CreateLayer
[ERROR] 2021-04-08 14:39:00,589 trt_logger.cpp(64): [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input.
[ERROR] 2021-04-08 14:39:00,589 trt_logger.cpp(64): [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input.
[INFO ] 2021-04-08 14:39:00,589 trt_convolution_creator.h(44): TrtConvolutionNdDesc::CreateLayer
[INFO ] 2021-04-08 14:39:00,589 trt_convolution_creator.h(44): TrtConvolutionNdDesc::CreateLayer
[INFO ] 2021-04-08 14:39:00,589 trt_clamp_creator.h(44): TrtClampDesc::CreateLayer
[ERROR] 2021-04-08 14:39:00,589 trt_logger.cpp(64): [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input.
[ERROR] 2021-04-08 14:39:00,589 trt_logger.cpp(64): [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input.
[INFO ] 2021-04-08 14:39:00,589 trt_convolution_creator.h(44): TrtConvolutionNdDesc::CreateLayer
[INFO ] 2021-04-08 14:39:00,589 trt_scale_creator.h(43): TrtScaleDesc::CreateLayer
[INFO ] 2021-04-08 14:39:00,589 trt_clamp_creator.h(44): TrtClampDesc::CreateLayer
[ERROR] 2021-04-08 14:39:00,589 trt_logger.cpp(64): [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input.
[ERROR] 2021-04-08 14:39:00,589 trt_logger.cpp(64): [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input.
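The repeated "at least 4 dimensions are required for input" errors above usually mean the input was fed as a 3-D tensor (CHW/HWC) without a batch axis; TensorRT convolution layers expect 4-D input. A minimal pre-flight check in plain NumPy (the NCHW ordering in the comment is an assumption about the model, not part of Forward's API):

```python
import numpy as np

def ensure_batched(x: np.ndarray) -> np.ndarray:
    """Prepend a batch axis if the input is only 3-D (e.g. CHW or HWC),
    so convolution layers see the 4-D shape they require."""
    if x.ndim == 3:
        x = np.expand_dims(x, axis=0)  # (C, H, W) -> (1, C, H, W)
    if x.ndim != 4:
        raise ValueError(f"expected a 3-D or 4-D input, got shape {x.shape}")
    return x

dummy = np.random.randn(3, 224, 224).astype(np.float32)
print(ensure_batched(dummy).shape)  # (1, 3, 224, 224)
```

Running this on your dummy inputs before calling the engine makes the missing batch dimension fail loudly in Python instead of inside the TensorRT parser.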
There is no problem with my environment, and the demo runs correctly.
But when using BERT, I encounter a bug. Do I need to implement aten::gelu myself?
The current process just got forked. Disabling parallelism to avoid deadlocks...
To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)
[ERROR] 2021-04-01 22:03:55,866 torch_desc_manager.cpp(128): Could not find layer create for node aten::gelu: output input.4
[ERROR] 2021-04-01 22:03:55,866 torch_engine.cpp(235): Parse torch module failed
Traceback (most recent call last):
File "tensorrt_forward.py", line 48, in <module>
outputs = engine.forward_with_name(dummy_inputs)
AttributeError: 'NoneType' object has no attribute 'forward_with_name'
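Until aten::gelu has a layer creator, one workaround is to patch the model so GELU is computed from primitives the parser is more likely to support (tanh, pow, mul, add). Below is the standard tanh approximation in plain Python compared against the exact erf form; the 1e-3 tolerance is an assumption about acceptable accuracy, and wiring the approximation into your TorchScript module is left to the model code:

```python
import math

def gelu_exact(x: float) -> float:
    """Exact GELU: 0.5 * x * (1 + erf(x / sqrt(2)))."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x: float) -> float:
    """Tanh approximation built only from tanh/pow/mul/add,
    which avoids emitting an aten::gelu node."""
    c = math.sqrt(2.0 / math.pi)
    return 0.5 * x * (1.0 + math.tanh(c * (x + 0.044715 * x ** 3)))

# the approximation tracks the exact form closely over the usual range
for v in (-3.0, -1.0, 0.0, 0.5, 2.0):
    assert abs(gelu_exact(v) - gelu_tanh(v)) < 1e-3
```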
Describe the bug
After finishing the CMake build, I copied forward.cpython-36m-aarch64-linux-gnu.so to a new directory. When I run 'python test_forward.py' or 'import forward', I get 'Segmentation fault (core dumped)'.
Device : Jetson Xavier NX
System: Jetpack4.4 [L4T 32.4.4]
TensorRT Version: 7.1.3
CUDA Version: 10.2.89
CUDNN Version: 8.0.0.180
Python Version (if applicable): 3.6.9
Tensorflow Version (if applicable): 1.15.2
Keras Version (if applicable): 2.1.5
What causes this error?
Looking forward to an answer!
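A segfault on `import forward` outside the build tree is often a dynamic-loading problem: the interpreter finds forward.cpython-*.so in the new directory, but the loader cannot resolve the TensorRT/CUDA libraries it was linked against (e.g. LD_LIBRARY_PATH no longer points at them). A quick sanity check from Python; the library names below are guesses for a Jetson setup, not an exhaustive list:

```python
from ctypes.util import find_library

def check_runtime_deps(names=("cudart", "nvinfer", "cudnn")):
    """Map each library name to the path the dynamic loader resolves it to,
    or None if it cannot be found (a likely cause of the crash on import)."""
    return {name: find_library(name) for name in names}

for name, path in check_runtime_deps().items():
    print(f"{name}: {path or 'NOT FOUND'}")
```

`ldd forward.cpython-36m-aarch64-linux-gnu.so` in the new directory gives the same answer at the shell level.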
Error during a simple test
Steps to reproduce:
1. Save the ResNet50 pretrained model with Keras:
from tensorflow.keras.applications import ResNet50  # import path for Keras 2.4.x

model = ResNet50()
model.save('./resnet50.h5')
2. Following the example, run:
import forward as fwd
import numpy as np

# 1. Build the engine
builder = fwd.KerasBuilder()
infer_mode = 'float32' # Infer Mode: float32 / float16 / int8_calib / int8
batch_size = 1
max_workspace_size = 1 << 32
builder.set_mode(infer_mode)
engine = builder.build(r'./resnet50.h5', batch_size)
# need_save = True
# if need_save:
# engine_path = 'path/to/out/engine'
# engine.save(engine_path)
# engine = fwd.KerasEngine()
# engine.load(engine_path)
# 2. Run inference
inputs = np.random.randn(1, 224, 224, 3)
outputs = engine.forward([inputs]) # list_type output
print(outputs)
The error is:
[ERROR] 2021-06-17 09:29:47,713 trt_keras_parser.cpp(90): Load Model failed.
[ERROR] 2021-06-17 09:29:47,713 keras_engine.cpp(129): Parse Keras Graph failed
Traceback (most recent call last):
File "D:/Projects/tencent_forward/workspace/resnet_forward.py", line 26, in <module>
outputs = engine.forward([inputs]) # list_type output
AttributeError: 'NoneType' object has no attribute 'forward'
The model-loading step failed. Any guidance would be appreciated, thanks!
TensorRT Version: 7.2.1.6
NVIDIA GPU: RTX 2080 SUPER
NVIDIA Driver Version: 441.22
CUDA Version: 10.2
CUDNN Version: 8.2.0.53
Operating System: Windows 10 Professional
Python Version: 3.8.5
Keras: 2.4.3
h5py: 2.10.0
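In both tracebacks above, `builder.build` returned `None` after the parser failed, and the failure only surfaced later as an AttributeError on NoneType. A small guard (a hypothetical helper, not part of the Forward API) turns that into an immediate, descriptive error:

```python
def require_engine(engine, model_path):
    """Fail fast with context if engine construction returned None,
    instead of letting a later .forward() call raise AttributeError."""
    if engine is None:
        raise RuntimeError(
            f"Failed to build engine from {model_path!r}; "
            "check the [ERROR] parser logs above for the failing layer."
        )
    return engine

# usage sketch:
# engine = require_engine(builder.build('./resnet50.h5', batch_size), './resnet50.h5')
```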
I copied libtrt_engine.so and libfwd_torch.so into demo/fwd_cpp, then built the demo and got this error:
[ 50%] Linking CXX executable test_fwd_engine
CMakeFiles/test_fwd_engine.dir/test_fwd_engine.cpp.o: in function 'main':
test_fwd_engine.cpp:(.text+0x1c8): undefined reference to 'fwd::TrtForwardEngine::Load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
collect2: error: ld returned 1 exit status
CMakeFiles/test_fwd_engine.dir/build.make:110: recipe for target 'test_fwd_engine' failed
make[2]: *** [test_fwd_engine] Error 1
CMakeFiles/Makefile2:96: recipe for target 'CMakeFiles/test_fwd_engine.dir/all' failed
make[1]: *** [CMakeFiles/test_fwd_engine.dir/all] Error 2
Makefile:102: recipe for target 'all' failed
make: *** [all] Error 2
TensorRT version: 6.0.1.5
CUDA: 10.1
Below is the log from build/CMakeFiles/CMakeError.log. How should I handle this?
Performing C SOURCE FILE Test CMAKE_HAVE_LIBC_PTHREAD failed with the following output:
Change Dir: /opt/Forward/build/CMakeFiles/CMakeTmp
Run Build Command(s):/usr/bin/gmake cmTC_fe5dd/fast && /usr/bin/gmake -f CMakeFiles/cmTC_fe5dd.dir/build.make CMakeFiles/cmTC_fe5dd.dir/build
gmake[1]: Entering directory '/opt/Forward/build/CMakeFiles/CMakeTmp'
Building C object CMakeFiles/cmTC_fe5dd.dir/src.c.o
/opt/rh/devtoolset-7/root/usr/bin/cc -fPIC -DCMAKE_HAVE_LIBC_PTHREAD -o CMakeFiles/cmTC_fe5dd.dir/src.c.o -c /opt/Forward/build/CMakeFiles/CMakeTmp/src.c
Linking C executable cmTC_fe5dd
/usr/local/bin/cmake -E cmake_link_script CMakeFiles/cmTC_fe5dd.dir/link.txt --verbose=1
/opt/rh/devtoolset-7/root/usr/bin/cc -fPIC -DCMAKE_HAVE_LIBC_PTHREAD CMakeFiles/cmTC_fe5dd.dir/src.c.o -o cmTC_fe5dd
CMakeFiles/cmTC_fe5dd.dir/src.c.o: in function 'main':
src.c:(.text+0x2f): undefined reference to 'pthread_create'
src.c:(.text+0x3b): undefined reference to 'pthread_detach'
src.c:(.text+0x47): undefined reference to 'pthread_cancel'
src.c:(.text+0x58): undefined reference to 'pthread_join'
src.c:(.text+0x6c): undefined reference to 'pthread_atfork'
collect2: error: ld returned 1 exit status
gmake[1]: *** [cmTC_fe5dd] Error 1
gmake[1]: Leaving directory '/opt/Forward/build/CMakeFiles/CMakeTmp'
gmake: *** [cmTC_fe5dd/fast] Error 2
Source file was:
#include <pthread.h>
void* test_func(void* data)
{
return data;
}
int main(void)
{
pthread_t thread;
pthread_create(&thread, NULL, test_func, NULL);
pthread_detach(thread);
pthread_cancel(thread);
pthread_join(thread, NULL);
pthread_atfork(NULL, NULL, NULL);
pthread_exit(NULL);
return 0;
}
Determining if the function pthread_create exists in the pthreads failed with the following output:
Change Dir: /opt/Forward/build/CMakeFiles/CMakeTmp
Run Build Command(s):/usr/bin/gmake cmTC_cbc42/fast && /usr/bin/gmake -f CMakeFiles/cmTC_cbc42.dir/build.make CMakeFiles/cmTC_cbc42.dir/build
gmake[1]: Entering directory '/opt/Forward/build/CMakeFiles/CMakeTmp'
Building C object CMakeFiles/cmTC_cbc42.dir/CheckFunctionExists.c.o
/opt/rh/devtoolset-7/root/usr/bin/cc -fPIC -DCHECK_FUNCTION_EXISTS=pthread_create -o CMakeFiles/cmTC_cbc42.dir/CheckFunctionExists.c.o -c /usr/local/share/cmake-3.17/Modules/CheckFunctionExists.c
Linking C executable cmTC_cbc42
/usr/local/bin/cmake -E cmake_link_script CMakeFiles/cmTC_cbc42.dir/link.txt --verbose=1
/opt/rh/devtoolset-7/root/usr/bin/cc -fPIC -DCHECK_FUNCTION_EXISTS=pthread_create CMakeFiles/cmTC_cbc42.dir/CheckFunctionExists.c.o -o cmTC_cbc42 -lpthreads
/opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: cannot find -lpthreads
collect2: error: ld returned 1 exit status
gmake[1]: *** [cmTC_cbc42] Error 1
gmake[1]: Leaving directory '/opt/Forward/build/CMakeFiles/CMakeTmp'
gmake: *** [cmTC_cbc42/fast] Error 2
Compile error
First I ran CMake as instructed; a few tests failed, but configuration completed.
Selecting Windows SDK version to target Windows 10.0.19042.
CMake Deprecation Warning at CMakeLists.txt:28 (cmake_policy):
The OLD behavior for policy CMP0074 will be removed from a future version
of CMake.
The cmake-policies(7) manual explains that the OLD behaviors of all
policies are deprecated and that a policy should be set to OLD only under
specific short-term circumstances. Projects should be ported to the NEW
behavior and not rely on setting a policy to OLD.
Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.2 (found version "10.2")
CUDA_NVCC_FLAGS: -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75
Found TensorRT: D:/Software/TensorRT-7.2.1.6.Windows10.x86_64.cuda-10.2.cudnn8.0/TensorRT-7.2.1.6/lib/nvinfer.lib;D:/Software/TensorRT-7.2.1.6.Windows10.x86_64.cuda-10.2.cudnn8.0/TensorRT-7.2.1.6/lib/nvinfer_plugin.lib;D:/Software/TensorRT-7.2.1.6.Windows10.x86_64.cuda-10.2.cudnn8.0/TensorRT-7.2.1.6/lib/nvonnxparser.lib;D:/Software/TensorRT-7.2.1.6.Windows10.x86_64.cuda-10.2.cudnn8.0/TensorRT-7.2.1.6/lib/nvparsers.lib (found version "7.2.1")
Using the single-header code from D:/Projects/tencent_forward/Forward/source/third_party/json/single_include/
Use HDF5 on third_party: D:/Projects/tencent_forward/Forward/source/third_party/hdf5
Warnings Configuration: default: /DWIN32 /D_WINDOWS /W3 : /DWIN32 /D_WINDOWS /W3 /GR /EHsc /W3 /WX-
Check for STD namespace
Check for STD namespace - found
Performing CXX Test OLD_HEADER_FILENAME - Failed
Performing CXX Test HDF_NO_NAMESPACE - Failed
Performing CXX Test HDF_NO_STD - Failed
Performing CXX Test BOOL_NOTDEFINED - Failed
Performing CXX Test NO_STATIC_CAST - Failed
Performing CXX Test CXX_HAVE_OFFSETOF - Failed
Configuring done
Generating done
Then the build fails with the following errors:
Severity  Code  Description  Project  File  Line
Error C2664 "void std::vector<fwd::NamedTensor,std::allocator<_Ty>>::push_back(const fwd::NamedTensor &)": cannot convert argument 1 from 'initializer list' to 'fwd::NamedTensor &&' trt_engine D:\Projects\tencent_forward\Forward\source\trt_engine\trt_engine\trt_buffer_manager.h 116
Error C1083 Cannot open source file: "D:\Projects\tencent_forward\Forward\build\source\third_party\hdf5\H5Tinit.c": No such file or directory hdf5 D:\Projects\tencent_forward\Forward\build\source\third_party\hdf5\src\c1 1
Error C2664 "nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer> nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::parse(nlohmann::detail::input_adapter &&,const std::function<bool (int,nlohmann::detail::parser<nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>>::parse_event_t,BasicJsonType &)>,const bool)": cannot convert argument 1 from 'std::string' to 'nlohmann::detail::input_adapter &&' fwd_keras D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp 72
Error C2679 binary '=': no operator found which takes a right-hand operand of type 'std::string' (or there is no acceptable conversion) fwd_keras D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp 147
Error C2679 binary '=': no operator found which takes a right-hand operand of type 'bool' (or there is no acceptable conversion) fwd_keras D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp 166
Error C2679 binary '=': no operator found which takes a right-hand operand of type 'std::string' (or there is no acceptable conversion) fwd_keras D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp 175
Error C2679 binary '=': no operator found which takes a right-hand operand of type 'std::basic_string<char,std::char_traits<char>,std::allocator<char>>' (or there is no acceptable conversion) fwd_keras D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp 205
Error C2679 binary '=': no operator found which takes a right-hand operand of type 'const char [11]' (or there is no acceptable conversion) fwd_keras D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp 206
Error C2679 binary '=': no operator found which takes a right-hand operand of type 'const std::string' (or there is no acceptable conversion) fwd_keras D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp 209
Error C2679 binary '=': no operator found which takes a right-hand operand of type 'std::vector<std::vector<std::vector<std::string,std::allocator<_Ty>>,std::allocator<std::vector<_Ty,std::allocator<_Ty>>>>,std::allocator<std::vector<std::vector<_Ty,std::allocator<_Ty>>,std::allocator<std::vector<_Ty,std::allocator<_Ty>>>>>>' (or there is no acceptable conversion) fwd_keras D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.cpp 213
Error C2666 'fwd::operator ==': 3 overloads have similar conversions fwd_keras D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.h 62
Error C2666 'fwd::operator ==': 3 overloads have similar conversions fwd_keras D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.h 62
Error C2666 'fwd::operator ==': 3 overloads have similar conversions fwd_keras D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\keras_cpp_api.h 62
Error C2664 "void std::vector<fwd::TrtLayerOutput,std::allocator<_Ty>>::push_back(const fwd::TrtLayerOutput &)": cannot convert argument 1 from 'initializer list' to 'fwd::TrtLayerOutput &&' fwd_keras D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\trt_keras_parser.cpp 157
Error C2679 binary '=': no operator found which takes a right-hand operand of type 'initializer list' (or there is no acceptable conversion) fwd_keras D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\trt_keras_parser.cpp 177
Error C2664 "void std::vector<fwd::TrtLayerOutput,std::allocator<_Ty>>::push_back(const fwd::TrtLayerOutput &)": cannot convert argument 1 from 'initializer list' to 'fwd::TrtLayerOutput &&' fwd_keras D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\trt_keras_parser.cpp 185
Error C2664 "void std::vector<fwd::TrtLayerOutput,std::allocator<_Ty>>::push_back(const fwd::TrtLayerOutput &)": cannot convert argument 1 from 'initializer list' to 'fwd::TrtLayerOutput &&' fwd_keras D:\Projects\tencent_forward\Forward\source\fwd_keras\keras_cvt\trt_keras_parser.cpp 201
Error C4579 'nlohmann::detail::static_const<nlohmann::detail::from_json_fn>::value': in-class initialization for type 'const T' is not yet implemented; static member will remain uninitialized at runtime but use in constant-expressions is supported fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2235
Error C2131 expression did not evaluate to a constant fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2235
Error C4579 'nlohmann::detail::static_const<nlohmann::detail::to_json_fn>::value': in-class initialization for type 'const T' is not yet implemented; static member will remain uninitialized at runtime but use in constant-expressions is supported fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2235
(the C4579/C2131 pair at json.hpp line 2235 repeats two more times)
Error C2780 'unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)': expects 2 arguments - 1 provided fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2513
Error C2893 Failed to specialize function template 'unknown-type nlohmann::adl_serializer<T,void>::from_json(BasicJsonType &&,ValueType &) noexcept(<expr>)' fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2513
(the same C2780/C2893 pair repeats dozens more times, all at json.hpp line 2513)
错误 C2783 “unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept const”: 未能为“__formal”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2783 “unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept”: 未能为“__formal”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2783 “ValueType nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept(<expr>) const”: 未能为“__formal”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2783 “BasicJsonType nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) const”: 未能为“__formal”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2783 “nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer> nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) const”: 未能为“__formal”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2893 未能使函数模板“unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept const”专用化 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2893 未能使函数模板“unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept”专用化 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2783 “unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept const”: 未能为“__formal”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2783 “unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept”: 未能为“__formal”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2783 “ValueType nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept(<expr>) const”: 未能为“__formal”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2783 “BasicJsonType nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) const”: 未能为“__formal”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2783 “nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer> nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) const”: 未能为“__formal”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2893 未能使函数模板“unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept const”专用化 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2893 未能使函数模板“unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept”专用化 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2783 “unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept const”: 未能为“__formal”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2783 “unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept”: 未能为“__formal”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2783 “ValueType nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept(<expr>) const”: 未能为“__formal”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2783 “BasicJsonType nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) const”: 未能为“__formal”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2783 “nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer> nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) const”: 未能为“__formal”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2893 未能使函数模板“unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept const”专用化 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2893 未能使函数模板“unknown-type nlohmann::basic_json<std::map,std::vector,std::string,bool,int64_t,uint64_t,double,std::allocator,nlohmann::adl_serializer>::get(void) noexcept”专用化 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 2516
错误 C2784 “const _Ty *std::begin(const std::valarray<_Ty> &)”: 未能从“add_rvalue_reference<const ContiguousContainer>::type”为“const std::valarray<_Ty> &”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2784 “_Ty *std::begin(std::valarray<_Ty> &)”: 未能从“add_rvalue_reference<const ContiguousContainer>::type”为“std::valarray<_Ty> &”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2784 “_Ty *std::begin(_Ty (&)[_Size]) noexcept”: 未能从“add_rvalue_reference<const ContiguousContainer>::type”为“_Ty (&)[_Size]”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2893 未能使函数模板“unknown-type std::begin(const _Container &)”专用化 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2893 未能使函数模板“unknown-type std::begin(_Container &)”专用化 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2784 “const _Elem *std::begin(std::initializer_list<_Elem>) noexcept”: 未能从“add_rvalue_reference<const ContiguousContainer>::type”为“std::initializer_list<_Elem>”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2039 “iterator_category”: 不是“nlohmann::detail::iterator_traits<unknown-type,void>”的成员 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2146 语法错误: 缺少“>”(在标识符“iterator_category”的前面) fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2784 “const _Ty *std::begin(const std::valarray<_Ty> &)”: 未能从“add_rvalue_reference<const ContiguousContainer>::type”为“const std::valarray<_Ty> &”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2784 “_Ty *std::begin(std::valarray<_Ty> &)”: 未能从“add_rvalue_reference<const ContiguousContainer>::type”为“std::valarray<_Ty> &”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2784 “_Ty *std::begin(_Ty (&)[_Size]) noexcept”: 未能从“add_rvalue_reference<const ContiguousContainer>::type”为“_Ty (&)[_Size]”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2893 未能使函数模板“unknown-type std::begin(const _Container &)”专用化 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2893 未能使函数模板“unknown-type std::begin(_Container &)”专用化 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2784 “const _Elem *std::begin(std::initializer_list<_Elem>) noexcept”: 未能从“add_rvalue_reference<const ContiguousContainer>::type”为“std::initializer_list<_Elem>”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2039 “iterator_category”: 不是“nlohmann::detail::iterator_traits<unknown-type,void>”的成员 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2146 语法错误: 缺少“>”(在标识符“iterator_category”的前面) fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2784 “const _Ty *std::begin(const std::valarray<_Ty> &)”: 未能从“add_rvalue_reference<const ContiguousContainer>::type”为“const std::valarray<_Ty> &”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2784 “_Ty *std::begin(std::valarray<_Ty> &)”: 未能从“add_rvalue_reference<const ContiguousContainer>::type”为“std::valarray<_Ty> &”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2784 “_Ty *std::begin(_Ty (&)[_Size]) noexcept”: 未能从“add_rvalue_reference<const ContiguousContainer>::type”为“_Ty (&)[_Size]”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2893 未能使函数模板“unknown-type std::begin(const _Container &)”专用化 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2893 未能使函数模板“unknown-type std::begin(_Container &)”专用化 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2784 “const _Elem *std::begin(std::initializer_list<_Elem>) noexcept”: 未能从“add_rvalue_reference<const ContiguousContainer>::type”为“std::initializer_list<_Elem>”推导 模板 参数 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2039 “iterator_category”: 不是“nlohmann::detail::iterator_traits<unknown-type,void>”的成员 fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
错误 C2146 语法错误: 缺少“>”(在标识符“iterator_category”的前面) fwd_keras D:\Projects\tencent_forward\Forward\source\third_party\json\single_include\nlohmann\json.hpp 4290
In total, three projects failed to build.
10>------ Rebuild All started: Project: ALL_BUILD, Configuration: Debug x64 ------
10> Building Custom Rule D:/Projects/tencent_forward/Forward/CMakeLists.txt
========== Rebuild All: 7 succeeded, 3 failed, 0 skipped ==========
What is causing these errors, and how can they be resolved?
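A hedged guess at a fix, not confirmed against this repository: this family of MSVC SFINAE/template errors often appears when a target that includes nlohmann/json is compiled without an explicit C++11-or-later language standard. One thing worth trying is forcing the standard on the failing target; the snippet below is a sketch only (the target name fwd_keras is taken from the error log, and where it lives in Forward's CMakeLists.txt is an assumption):

```cmake
# Sketch: force a modern C++ standard on the target that includes json.hpp.
# nlohmann/json requires at least C++11; MSVC defaults can be older/looser.
set_target_properties(fwd_keras PROPERTIES
  CXX_STANDARD 14
  CXX_STANDARD_REQUIRED ON
  CXX_EXTENSIONS OFF)
```

If the standard is already set, checking that the bundled json.hpp version officially supports your Visual Studio toolset would be the next step.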
TensorRT Version: 7.2.1.6
NVIDIA GPU: RTX 2080 SUPER
NVIDIA Driver Version: 441.22
CUDA Version: 10.2
CUDNN Version: 8.2.0.53
Operating System: Windows 10 Pro
Python Version: 3.8.5
Describe the bug
An error is reported at the end of the program: free(): invalid pointer.
TensorRT Version: 7.2.3.4
NVIDIA GPU: TITAN Xp
NVIDIA Driver Version: 440.33.01
CUDA Version: 10.2
CUDNN Version: 8.0.4
Operating System: CentOS 8
Python Version (if applicable): 3.6.8
Tensorflow Version (if applicable): --
PyTorch Version (if applicable): 1.3.1 (libtorch cpu)
To Reproduce
Build Forward:
cmake -DENABLE_LOGGING=OFF \
      -DENABLE_PROFILING=OFF \
      -DENABLE_DYNAMIC_BATCH=OFF \
      -DENABLE_TORCH=ON \
      -DBUILD_PYTHON_LIB=OFF \
      -DPYTHON_EXECUTABLE=/usr/bin/python3 \
      -DENABLE_TENSORFLOW=OFF \
      -DENABLE_KERAS=OFF \
      -DTORCH_CMAKE_PATH=/usr/local/lib/libtorch/share/cmake/Torch/ \
      ..
Steps to reproduce the behavior:
Expected behavior
None
Additional context
The problem occurs when Forward is built with CMake against libtorch.
When we build Forward with CMake for the Python bindings instead, the problem does not come up.
I have run CMake successfully following the contents of vs2017.bat. What is the next step to run the demo? Any guidance would be appreciated.
I see that the project uses TrtNetworkDesc for model conversion. Have you considered serializing TrtNetworkDesc to disk, so that model conversion and inference are decoupled? The same saved TrtNetworkDesc could then be reused across different TensorRT versions.
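A minimal stdlib sketch of the convert-once, load-many pattern proposed above. Everything here is a hypothetical stand-in: NetworkDesc plays the role of TrtNetworkDesc, convert() plays the role of Forward's real parser, and pickle stands in for whatever serialization format the project might choose:

```python
import pickle
from dataclasses import dataclass, field

@dataclass
class NetworkDesc:
    """Hypothetical stand-in for TrtNetworkDesc: a backend-neutral graph description."""
    layers: list = field(default_factory=list)

def convert(model_path: str) -> NetworkDesc:
    # Conversion step (offline): parse the source model into a description.
    # A real implementation would walk the PyTorch/TF graph here.
    return NetworkDesc(layers=[("input", model_path), ("conv", 64), ("relu", None)])

# Offline, once: convert the model and serialize the description.
desc = convert("resnet50.pt")
with open("resnet50.desc", "wb") as f:
    pickle.dump(desc, f)

# Online, possibly under a different TensorRT version: load the saved
# description and build the engine from it, instead of re-running conversion.
with open("resnet50.desc", "rb") as f:
    loaded = pickle.load(f)

print(loaded.layers == desc.layers)  # round-trip preserves the description
```

The design win is that the slow, framework-dependent conversion runs once, while deployment machines only need the description plus the local TensorRT build.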