
modelbox-ai / modelbox

Stars: 125 · Watchers: 5 · Forks: 36 · Size: 12.75 MB

A high-performance, highly extensible, easy-to-use framework for AI applications. It provides AI application developers with a unified, high-performance, easy-to-use programming framework to quickly build full-stack AI services and develop device-edge-cloud AI industry applications, with GPU and NPU acceleration support.

Home Page: https://modelbox-ai.com

License: Apache License 2.0

CMake 8.47% Shell 0.67% C++ 85.46% Cuda 0.25% C 1.98% Java 1.63% Python 1.53% PureBasic 0.01%
modelbox mediapipe deep-learning pipeline inference mindsopre pytorch cloud-service gpu mindspore

modelbox's People

Contributors

bingo1234588, carlosleegit, chenkanhw, dream-runner-yu, fujl, hzhyhx1117, kendychina, modelboxdeng, pansgg, pymumu, tau233, yamal-shang, zhongyfeng, zxk114


modelbox's Issues

Python test case crash

The Python test cases crash on Ubuntu 20.04; the following tests are currently skipped:

    @unittest.skip("disable thread for no delete")
    def test_flow_op_thread(self):

    @unittest.skip("disable thread for no delete")
    def test_flow_op(self):

The likely cause is the following code in py_resize.py:
an exception is raised when pushing the numpy data to the output.

    def process(self, data_ctx):
        in_bl = data_ctx.input("resize_in")
        out_bl = data_ctx.output("resize_out")

        for buffer in in_bl:
            # Wrap the input buffer without copying, resize with PIL, push the result.
            np_image = np.array(buffer, copy=False)
            resize_image = Image.fromarray(np_image).resize((self.width_config, self.height_config))
            out_bl.push_back(np.array(resize_image))

        modelbox.info("ResizeFlowunit process")
        return modelbox.Status.StatusCode.STATUS_SUCCESS

Mnist_mind example fails to run

System information (please provide as much relevant information as possible)

Using the Docker image
modelbox/modelbox-develop-mindspore_1.9.0-cann_6.0.1-d310p-ubuntu-x86_64:latest

npu -info: (screenshot attached)

Describe the current behavior:
Running the Mnist_mind example fails.
Error message: request invalid, job config is invalid, Not found, build graph failed, please check graph config. -> create flowunit 'mnist_infer' failed. -> current environment does not support the inference type: 'mindspore:cpu'
In addition, after creating a new flowunit, its downstream functionality cannot be selected in the graph editor. (screenshot attached)

Describe the expected behavior:

Standalone code to reproduce the issue:
An .om model with acl_inference cannot be run either.

Provide a reproducible test case that is the bare minimum necessary to replicate the problem, with screenshots and logs if possible.

Logs

Please provide the ModelBox runtime logs from /var/log/modelbox.

Other info

Optimize the demuxer frame-dropping mechanism

  1. Currently, when the demuxer detects an RTSP source, it applies a frame-dropping mechanism with a minimum packet buffer of 32. I added a frame-rate control unit after the decoder, but its control may not be precise, which can make the demuxer drop packets directly, so the frames of that whole GOP can no longer be processed.
  2. Proposed change (set a session-level bool flag):
    auto has_packet = std::make_shared<std::atomic<bool>>();
    *has_packet = false;
    data_ctx->GetSessionContext()->SetPrivate(HAS_PACKET_FLAG, has_packet);

If packet_cache_.size() exceeds some threshold (say 3), set *has_packet = true; otherwise set *has_packet = false.
My frame-rate control unit downstream then reads this flag and fine-tunes its rate limit (see the sketch below).
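
Below is a minimal, self-contained sketch of the proposed flag sharing. It uses a plain std::map in place of ModelBox's SessionContext private storage (SetPrivate/GetPrivate), and the HAS_PACKET_FLAG key and the threshold of 3 are just the values from the proposal above, not existing ModelBox API.

    #include <atomic>
    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>

    static const std::string HAS_PACKET_FLAG = "has_packet";

    int main() {
      // Stand-in for the per-session private storage shared by the flowunits.
      std::map<std::string, std::shared_ptr<void>> session_private;

      // Demuxer side: publish the flag once per session.
      auto has_packet = std::make_shared<std::atomic<bool>>(false);
      session_private[HAS_PACKET_FLAG] = has_packet;

      // Demuxer side: update the flag as the packet cache fills up.
      std::size_t packet_cache_size = 5;  // stand-in for packet_cache_.size()
      has_packet->store(packet_cache_size > 3);

      // Frame-rate control unit: read the flag and fine-tune the rate limit.
      auto flag = std::static_pointer_cast<std::atomic<bool>>(
          session_private[HAS_PACKET_FLAG]);
      std::cout << "demuxer still has packets cached: " << std::boolalpha
                << flag->load() << std::endl;
      return 0;
    }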

Error when running inference

[2022-10-27 15:31:15,764][ INFO][ flow.cc:97 ] run flow dectection_sedna/src/graph/graph_dectection_sedna.toml
[2022-10-27 15:31:15,793][ INFO][ driver.cc:715 ] Gather scan info success, drivers count 46
[2022-10-27 15:31:15,793][ INFO][ driver.cc:961 ] begin scan virtual drivers
[2022-10-27 15:31:16,739][ INFO][virtualdriver_inference.cc:80 ] Add virtual driver /root/dectection_sedna/src/flowunit/helm_infer/helm_infer.toml success
[2022-10-27 15:31:16,755][ INFO][virtualdriver_python.cc:78 ] Add virtual driver /root/dectection_sedna/src/flowunit/yolo3_post/yolo3_post.toml success
[2022-10-27 15:31:16,779][ INFO][ driver.cc:963 ] end scan virtual drivers
[2022-10-27 15:31:16,782][ INFO][ graph_manager.cc:304 ] graph.format : graphviz
[2022-10-27 15:31:16,785][ WARN][ driver_desc.cc:74 ] set cuda device flags 0 failed, cuda ret 35
[2022-10-27 15:31:16,786][ERROR][ device_cuda.cc:87 ] count device failed, cuda ret 35
[2022-10-27 15:31:17,333][ WARN][ flowunit.cc:117 ] inference is not match, you can use a-z, A-Z, 1-9, _ and uppercase the first character.
[2022-10-27 15:31:17,333][ WARN][virtualdriver_inference.cc:358 ] check group type failed , your group_type is inference, the right group_type is a or a/b , for instance input or input/http.
[2022-10-27 15:31:17,336][ WARN][ flowunit.cc:117 ] generic is not match, you can use a-z, A-Z, 1-9, _ and uppercase the first character.
[2022-10-27 15:31:17,336][ WARN][virtualdriver_python.cc:358 ] check group type failed , your group_type is generic, the right group_type is a or a/b , for instance input or input/http.
[2022-10-27 15:31:17,338][ INFO][ graph.cc:116 ] Build graph name:graph_dectection_sedna, id:3a8b9fb4-b241-434a-87e9-747213c50737
[2022-10-27 15:31:17,338][ INFO][ graph_manager.cc:218 ] node name : helm_infer6
[2022-10-27 15:31:17,338][ INFO][ graph_manager.cc:223 ] input port : images
[2022-10-27 15:31:17,338][ INFO][ graph_manager.cc:229 ] output port : boxes
[2022-10-27 15:31:17,338][ INFO][ graph_manager.cc:229 ] output port : classes
[2022-10-27 15:31:17,338][ INFO][ graph_manager.cc:229 ] output port : scores
[2022-10-27 15:31:17,338][ INFO][ graph_manager.cc:218 ] node name : normalize5
[2022-10-27 15:31:17,338][ INFO][ graph_manager.cc:223 ] input port : in_data
[2022-10-27 15:31:17,338][ INFO][ graph_manager.cc:229 ] output port : out_data
[2022-10-27 15:31:17,338][ INFO][ graph_manager.cc:218 ] node name : packed_planar_transpose4
[2022-10-27 15:31:17,338][ INFO][ graph_manager.cc:223 ] input port : in_image
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:229 ] output port : out_image
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:218 ] node name : resize3
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:223 ] input port : in_image
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:229 ] output port : out_image
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:218 ] node name : video_decoder2
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:223 ] input port : in_video_packet
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:229 ] output port : out_video_frame
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:218 ] node name : video_demuxer1
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:223 ] input port : in_video_url
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:229 ] output port : out_video_packet
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:218 ] node name : video_input0
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:229 ] output port : out_video_url
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:218 ] node name : videoencoder
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:223 ] input port : in_video_frame
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:218 ] node name : yolo3_post
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:223 ] input port : in_boxes
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:223 ] input port : in_classes
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:223 ] input port : in_image
[2022-10-27 15:31:17,339][ INFO][ graph_manager.cc:223 ] input port : in_scores
[2022-10-27 15:31:17,340][ INFO][ graph_manager.cc:229 ] output port : out_image
[2022-10-27 15:31:17,340][ INFO][ graph.cc:641 ] begin build node helm_infer6
[2022-10-27 15:31:17,342][ WARN][ flowunit.cc:117 ] inference is not match, you can use a-z, A-Z, 1-9, _ and uppercase the first character.
[2022-10-27 15:31:17,342][ WARN][virtualdriver_inference.cc:358 ] check group type failed , your group_type is inference, the right group_type is a or a/b , for instance input or input/http.
[2022-10-27 15:31:17,343][ INFO][ graph.cc:647 ] build node helm_infer6 success
[2022-10-27 15:31:17,344][ INFO][ graph.cc:641 ] begin build node normalize5
[2022-10-27 15:31:17,344][ INFO][ graph.cc:647 ] build node normalize5 success
[2022-10-27 15:31:17,344][ INFO][ graph.cc:641 ] begin build node packed_planar_transpose4
[2022-10-27 15:31:17,344][ INFO][ graph.cc:647 ] build node packed_planar_transpose4 success
[2022-10-27 15:31:17,344][ INFO][ graph.cc:641 ] begin build node resize3
[2022-10-27 15:31:17,345][ INFO][ graph.cc:647 ] build node resize3 success
[2022-10-27 15:31:17,345][ INFO][ graph.cc:641 ] begin build node video_decoder2
[2022-10-27 15:31:17,345][ INFO][ graph.cc:647 ] build node video_decoder2 success
[2022-10-27 15:31:17,345][ INFO][ graph.cc:641 ] begin build node video_demuxer1
[2022-10-27 15:31:17,345][ INFO][ graph.cc:647 ] build node video_demuxer1 success
[2022-10-27 15:31:17,345][ INFO][ graph.cc:641 ] begin build node video_input0
[2022-10-27 15:31:17,346][ INFO][ graph.cc:647 ] build node video_input0 success
[2022-10-27 15:31:17,346][ INFO][ graph.cc:641 ] begin build node videoencoder
[2022-10-27 15:31:17,346][ INFO][ graph.cc:647 ] build node videoencoder success
[2022-10-27 15:31:17,346][ INFO][ graph.cc:641 ] begin build node yolo3_post
[2022-10-27 15:31:17,350][ WARN][ flowunit.cc:117 ] generic is not match, you can use a-z, A-Z, 1-9, _ and uppercase the first character.
[2022-10-27 15:31:17,350][ WARN][virtualdriver_python.cc:358 ] check group type failed , your group_type is generic, the right group_type is a or a/b , for instance input or input/http.
[2022-10-27 15:31:17,351][ INFO][ graph.cc:647 ] build node yolo3_post success
[2022-10-27 15:31:17,351][ INFO][ graph.cc:368 ] add link, helm_infer6:boxes -> yolo3_post:in_boxes
[2022-10-27 15:31:17,351][ INFO][ graph.cc:368 ] add link, helm_infer6:classes -> yolo3_post:in_classes
[2022-10-27 15:31:17,351][ INFO][ graph.cc:368 ] add link, helm_infer6:scores -> yolo3_post:in_scores
[2022-10-27 15:31:17,351][ INFO][ graph.cc:368 ] add link, normalize5:out_data -> helm_infer6:images
[2022-10-27 15:31:17,351][ INFO][ graph.cc:368 ] add link, packed_planar_transpose4:out_image -> normalize5:in_data
[2022-10-27 15:31:17,351][ INFO][ graph.cc:368 ] add link, resize3:out_image -> packed_planar_transpose4:in_image
[2022-10-27 15:31:17,351][ INFO][ graph.cc:368 ] add link, video_decoder2:out_video_frame -> resize3:in_image
[2022-10-27 15:31:17,351][ INFO][ graph.cc:368 ] add link, video_decoder2:out_video_frame -> yolo3_post:in_image
[2022-10-27 15:31:17,351][ INFO][ graph.cc:368 ] add link, video_demuxer1:out_video_packet -> video_decoder2:in_video_packet
[2022-10-27 15:31:17,351][ INFO][ graph.cc:368 ] add link, video_input0:out_video_url -> video_demuxer1:in_video_url
[2022-10-27 15:31:17,351][ INFO][ graph.cc:368 ] add link, yolo3_post:out_image -> videoencoder:in_video_frame
[2022-10-27 15:31:17,353][ INFO][tensorflow_inference_common.cc:352 ] is_save_model: 0
[2022-10-27 15:31:17,353][ INFO][tensorflow_inference_common.cc:138 ] model path: /root/dectection_sedna/src/flowunit/helm_infer/model.pb
[2022-10-27 15:31:17,354][ INFO][flowunit_group.cc:393 ] node: packed_planar_transpose4 get batch size is 8
[2022-10-27 15:31:17,356][ INFO][flowunit_group.cc:393 ] node: resize3 get batch size is 8
[2022-10-27 15:31:17,356][ INFO][flowunit_group.cc:393 ] node: normalize5 get batch size is 8
[2022-10-27 15:31:17,358][ INFO][flowunit_group.cc:393 ] node: video_decoder2 get batch size is 1
[2022-10-27 15:31:17,363][ INFO][flowunit_group.cc:393 ] node: video_demuxer1 get batch size is 1
[2022-10-27 15:31:17,362][ INFO][flowunit_group.cc:393 ] node: video_input0 get batch size is 8
[2022-10-27 15:31:17,364][ INFO][session_context.cc:40 ] session context start se id:c3d57c49-96b5-4725-9f88-240685d00050
[2022-10-27 15:31:17,371][ INFO][flowunit_group.cc:393 ] node: videoencoder get batch size is 1
2022-10-27 15:31:21.637629: E tensorflow/core/common_runtime/session_factory.cc:48] Two session factories are being registered underGRPC_SESSION
[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/descriptor_database.cc:118] File already exists in database: tensorflow/core/data/service/common.proto
[libprotobuf FATAL external/com_google_protobuf/src/google/protobuf/descriptor.cc:1379] CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
[2022-10-27 15:31:21,639][ WARN][flowunit_group.cc:363 ] yolo3_post: open failed: code: Invalid argument, errmsg: import yolo3_post@Yolo3_postFlowUnit failed: CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
[2022-10-27 15:31:21,640][ INFO][flowunit_group.cc:393 ] node: yolo3_post get batch size is 1
[2022-10-27 15:31:21,640][ERROR][ node.cc:384 ] open flowunit yolo3_post failed
2022-10-27 15:31:27.630367: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

Error resolving function address at runtime: undefined symbol: DriverDescription

Before submitting a bug, please read the FAQ in the help documentation, or search existing issues for similar problems.

System information (please provide as much relevant information as possible)

  • Operating system information
  • Device information
  • ModelBox version
  • GPU or NPU information
  • Inference engine information
  • Programming language (C++, Python, Java)

Describe the current behavior:

Describe the expected behavior:

Standalone code to reproduce the issue:

Provide a reproducible test case that is the bare minimum necessary to replicate the problem, with screenshots and logs if possible.

Logs

Please provide the ModelBox runtime logs from /var/log/modelbox.

Other info

[2024-03-01 19:24:49.205][ WARN][ driver.cc:1112] /usr/local/lib/libmodelbox-common-cpu-iam_auth.so : dlsym DriverDescription failed, /usr/local/lib/libmodelbox-common-cpu-iam_auth.so: undefined symbol: DriverDescription
[2024-03-01 19:24:49.208][ WARN][ driver.cc:1112] /usr/local/lib/libmodelbox-drivers-common-filerequester.so.1.0.0 : dlsym DriverDescription failed, /usr/local/lib/libmodelbox-drivers-common-filerequester.so.1.0.0: undefined symbol: DriverDescription

Loading these dynamic libraries reports an error when resolving the function address; the function is built-in and the build followed the development manual. It does not prevent building or running the flow, though.
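
For reference, the lookup that produces this warning can be reproduced outside ModelBox with a small dlopen/dlsym check like the one below. It only uses the standard dlfcn API; the default library path is simply the one from the log above.

    #include <dlfcn.h>
    #include <iostream>

    int main(int argc, char **argv) {
      const char *path = argc > 1
          ? argv[1]
          : "/usr/local/lib/libmodelbox-common-cpu-iam_auth.so";
      void *handle = dlopen(path, RTLD_LAZY | RTLD_LOCAL);
      if (handle == nullptr) {
        std::cerr << "dlopen failed: " << dlerror() << std::endl;
        return 1;
      }
      // This is the same symbol name the driver scanner resolves with dlsym.
      void *sym = dlsym(handle, "DriverDescription");
      std::cout << path << (sym != nullptr ? " exports " : " does NOT export ")
                << "DriverDescription" << std::endl;
      dlclose(handle);
      return 0;
    }

Compile with g++ check_driver.cc -ldl (the file name is arbitrary) and pass another library path as the first argument to check other drivers.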

Has ModelBox been adapted to the IVS1800 or IVS3800?

Please make sure that this is a feature request, not a bug report or a request for help.

System information (please provide as much relevant information as possible)

  • Operating system information
  • GPU or NPU information
  • Inference engine information
  • Programming language (C++, Python, Java)
  • Are you willing to contribute it (Yes/No)

Describe the feature

Will this change the current API? How?

Benefit users

Suggested solution

Other info


Python needs int64_t meta get/set support

Verified on the Windows build: a Python flowunit fails with an error when getting a video frame's timestamp.
Two lines need to be added to the file
modelbox\src\drivers\common\python\modelbox_api\modelbox_api.cc:

  1. Add DataGet<int64_t, py::int_> to static std::vector kBufferObjectConvertFunc, and
  2. preferably also add the matching setter DataSet<Buffer, py::int_, int64_t> to static std::vector kBufferBaseObjectFunc (see the sketch below).
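
As an illustration only, the sketch below shows the int64_t <-> py::int_ round trip that the proposed DataGet/DataSet entries would perform; the Meta class and the meta_demo module name are hypothetical and are not part of the ModelBox API.

    #include <pybind11/pybind11.h>

    #include <cstdint>
    #include <map>
    #include <string>

    namespace py = pybind11;

    // Toy metadata store keyed by string, holding int64_t values (e.g. timestamps).
    struct Meta {
      std::map<std::string, int64_t> values;
      void Set(const std::string &key, py::int_ v) { values[key] = v.cast<int64_t>(); }
      py::int_ Get(const std::string &key) { return py::int_(values.at(key)); }
    };

    PYBIND11_MODULE(meta_demo, m) {
      py::class_<Meta>(m, "Meta")
          .def(py::init<>())
          .def("set", &Meta::Set)
          .def("get", &Meta::Get);
    }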

Some questions about the architecture diagram

https://github.com/modelbox-ai/modelbox/blob/main/docs/Design.md
1. APP Server
The business service component, which contains service components such as IVA and OCR; IVA exposes a C++ interface and OCR a Python interface.
The IVA service handles asynchronous business, while OCR handles synchronous data.

The description above looks rather vague. Could you elaborate, or point to where the corresponding code lives?

2. Could you also point out where the Device Adapter and Inference Adapter of the Adapter layer are located in the code base?

The latest Docker image version is 1.6.1; could you provide dev and runtime images for version 2.0 or later?

Before submitting a bug, please read the FAQ in the help documentation, or search existing issues for similar problems.

System information (please provide as much relevant information as possible)

  • Operating system information
  • Device information
  • ModelBox version
  • GPU or NPU information
  • Inference engine information
  • Programming language (C++, Python, Java)

Describe the current behavior:

Describe the expected behavior:

Standalone code to reproduce the issue:

Provide a reproducible test case that is the bare minimum necessary to replicate the problem, with screenshots and logs if possible.

Logs

Please provide the ModelBox runtime logs from /var/log/modelbox.

Other info

A flowunit implemented in C++ fails to run

Development environment: RK3568 board with rknpu, using the RK-series ModelBox SDK provided by the HiLens console.
Graph:

graphconf = """digraph video_test {
    node [shape=Mrecord]
    queue_size = 4
    batch_size = 1
    input1[type=input,flowunit=input,device=cpu,deviceid=0]
    data_source_parser[type=flowunit, flowunit=data_source_parser, device=cpu, deviceid=0]
    video_demuxer[type=flowunit, flowunit=video_demuxer, device=cpu, deviceid=0]
    video_decoder[type=flowunit, flowunit=video_decoder, device=rknpu, deviceid=0, pix_fmt=bgr]
    resize[type=flowunit,flowunit=resize,device=rknpu,deviceid=0, image_width=224, image_height=224]
    face_phone_infer[type=flowunit, flowunit=face_phone_infer, device=rknpu, deviceid=0]
    step1_post[type=flowunit,flowunit=step1_post,device=cpu,deviceid=0]
    draw_box[type=flowunit,flowunit=draw_box,device=cpu,deviceid=0]
    video_out[type=flowunit, flowunit=video_out, device=rknpu, deviceid=0]
    

    input1:input -> data_source_parser:in_data
    data_source_parser:out_video_url -> video_demuxer:in_video_url
    video_demuxer:out_video_packet -> video_decoder:in_video_packet
    video_decoder:out_video_frame -> resize:in_image
    resize:out_image -> face_phone_infer:Input
    face_phone_infer:Output -> step1_post:in_1
    video_decoder:out_video_frame -> draw_box:img_in
    step1_post:out_1 -> draw_box:box_in
    draw_box:out -> video_out:in_video_frame

}"""

Build output:

root@iTOP-RK3568:/mnt/mmc/zgm/modelbox/rk3568/workspace/video_test $ ./build_project.sh
-- Configuring done
-- Generating done
-- Build files have been written to: /mnt/mmc/zgm/modelbox/rk3568/workspace/video_test/build
Scanning dependencies of target modelbox-unit-cpu-step1_post
[ 40%] Built target modelbox-unit-cpu-draw_box
[ 60%] Building CXX object flowunit_cpp/step1_post/CMakeFiles/modelbox-unit-cpu-step1_post.dir/step1_post.cc.o
[ 80%] Linking CXX shared library libmodelbox-unit-cpu-step1_post.so
[100%] Built target modelbox-unit-cpu-step1_post
Install the project...
-- Install configuration: "Debug"
-- Up-to-date: /mnt/mmc/zgm/modelbox/rk3568/workspace/video_test/etc/flowunit/cpp/libmodelbox-unit-cpu-draw_box.so.1.0.0
-- Up-to-date: /mnt/mmc/zgm/modelbox/rk3568/workspace/video_test/etc/flowunit/cpp/libmodelbox-unit-cpu-draw_box.so.1
-- Up-to-date: /mnt/mmc/zgm/modelbox/rk3568/workspace/video_test/etc/flowunit/cpp/libmodelbox-unit-cpu-draw_box.so
-- Installing: /mnt/mmc/zgm/modelbox/rk3568/workspace/video_test/etc/flowunit/cpp/libmodelbox-unit-cpu-step1_post.so.1.0.0
-- Up-to-date: /mnt/mmc/zgm/modelbox/rk3568/workspace/video_test/etc/flowunit/cpp/libmodelbox-unit-cpu-step1_post.so.1
-- Set runtime path of "/mnt/mmc/zgm/modelbox/rk3568/workspace/video_test/etc/flowunit/cpp/libmodelbox-unit-cpu-step1_post.so.1.0.0" to ""
-- Up-to-date: /mnt/mmc/zgm/modelbox/rk3568/workspace/video_test/etc/flowunit/cpp/libmodelbox-unit-cpu-step1_post.so
dos2unix converting file /mnt/mmc/zgm/modelbox/rk3568/workspace/video_test/graph/modelbox.conf  to to Unix format...
dos2unix converting file /mnt/mmc/zgm/modelbox/rk3568/workspace/video_test/graph/video_test.toml  to to Unix format...
dos2unix converting file /mnt/mmc/zgm/modelbox/rk3568/workspace/video_test/model/face_phone_infer/face_phone_infer.toml  to to Unix format...
dos2unix converting file /mnt/mmc/zgm/modelbox/rk3568/workspace/video_test/bin/mock_task.toml  to to Unix format...

build success: you can run main.sh in ./bin folder

Error output:

root@iTOP-RK3568:/mnt/mmc/zgm/modelbox/rk3568/workspace/video_test $ ./bin/main.sh
debain os need load libgomp
[2023-06-30 15:21:01,722][ WARN][    iva_config.cc:143 ] update vas url failed. Fault, no vas projectid or iva endpoint
[2023-06-30 15:21:01,723][ WARN][         timer.cc:208 ] Schedule timer failed, timer is not running.
[2023-06-30 15:21:03,161][ERROR][        driver.cc:347 ] dlopen /mnt/mmc/zgm/modelbox/rk3568/workspace/video_test/bin/../etc/flowunit/cpp/libmodelbox-unit-cpu-draw_box.so.1.0.0 failed, error: /mnt/mmc/zgm/modelbox/rk3568/workspace/video_test/bin/../etc/flowunit/cpp/libmodelbox-unit-cpu-draw_box.so.1.0.0: undefined symbol: _ZN2cv8fastFreeEPv
[2023-06-30 15:21:03,832][ WARN][flowunit_manager.cc:342 ] CreateFlowUnit: draw_box failed, code: Not found, errmsg: can not find flowunit [type: cpu, name:draw_box], Please check if the 'device' configured correctly or if the flowunit library exists.
[2023-06-30 15:21:03,833][ERROR][         graph.cc:644 ] code: Not found, errmsg: create flowunit 'draw_box' failed.
[2023-06-30 15:21:03,833][ERROR][          flow.cc:537 ] build graph failed, Not found, build graph failed, please check graph config. -> create flowunit 'draw_box' failed. -> can not find flowunit [type: cpu, name:draw_box], Please check if the 'device' configured correctly or if the flowunit library exists.
[2023-06-30 15:21:03,833][ERROR][   iva_manager.cc:192 ] IvaManager::Start: modelbox_job Build failed, ret: code: Not found, errmsg: build graph failed, please check graph config.
[2023-06-30 15:21:03,833][ERROR][    iva_plugin.cc:50  ] IvaPlugin start failed
[2023-06-30 15:21:03,833][ERROR][        server.cc:70  ] Plugin, start failed, /mnt/mmc/zgm/modelbox/rk3568/workspace/video_test/bin/../../../modelbox-rk-aarch64/lib/modelbox-iva-plugin.so
[2023-06-30 15:21:03,833][ERROR][          main.cc:242 ] server start failed !

The draw_box flowunit:

MODELBOX_FLOWUNIT(draw_boxFlowUnit, desc) {
  /*set flowunit attributes*/
  desc.SetFlowUnitName(FLOWUNIT_NAME);
  desc.SetFlowUnitGroupType("Image");
  desc.AddFlowUnitInput(modelbox::FlowUnitInput("img_in", "cpu"));
  desc.AddFlowUnitInput(modelbox::FlowUnitInput("box_in", "cpu"));
  desc.AddFlowUnitOutput(modelbox::FlowUnitOutput("out"));
  desc.SetFlowType(modelbox::NORMAL);
  desc.SetDescription(FLOWUNIT_DESC);
  /*set flowunit parameter */
  desc.AddFlowUnitOption(modelbox::FlowUnitOption(
      "height", "int", true, "224", "model height"));
  desc.AddFlowUnitOption(modelbox::FlowUnitOption(
      "width", "int", true, "224", "model width"));
}

The generated library exists and the device is specified correctly, but I cannot locate the cause of the error.
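
For what it's worth, the missing symbol in the dlopen error can be demangled to see where it comes from. The small check below (plain C++, using the compiler's demangling API) prints cv::fastFree(void*), i.e. an OpenCV symbol, which typically points at the draw_box library not linking against, or not finding, the OpenCV libraries at load time.

    #include <cxxabi.h>

    #include <cstdlib>
    #include <iostream>

    int main() {
      int status = 0;
      // Demangle the symbol reported by dlopen in the error log above.
      char *name =
          abi::__cxa_demangle("_ZN2cv8fastFreeEPv", nullptr, nullptr, &status);
      if (status == 0 && name != nullptr) {
        std::cout << name << std::endl;  // prints: cv::fastFree(void*)
        std::free(name);
      }
      return status;
    }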

How do I build for Rockchip?

Looking at the build configuration, it needs headers under /opt/rockchip. Where can these headers be obtained? Thanks.

find_path(ROCKCHIP_RGA_INCLUDE NAMES im2d.h rga.h
          HINTS ${HINTS_ROCKCHIP_PATH}/rk-rga/include)
mark_as_advanced(ROCKCHIP_RGA_INCLUDE)

find_path(ROCKCHIP_MPP_INCLUDE NAMES rk_mpi.h rk_type.h
          HINTS ${HINTS_ROCKCHIP_PATH}/rkmpp/include/rockchip)
mark_as_advanced(ROCKCHIP_MPP_INCLUDE)

find_path(RKNN_INCLUDE NAMES rknn_api.h
          HINTS ${HINTS_ROCKCHIP_PATH}/rknn/include)
mark_as_advanced(RKNN_INCLUDE)

find_path(RKNPU2_INCLUDE NAMES rknn_api.h
          HINTS ${HINTS_ROCKCHIP_PATH}/rknnrt/include)
mark_as_advanced(RKNPU2_INCLUDE)

Emotion detection demo fails to deploy

Following the documentation at https://modelbox-ai.com/modelbox-book/cases/emotion-detection.html, I worked through the emotion detection example. In the development environment I packaged the application, producing a .deb installer and a .tar package, then copied both into a container started from the image modelbox/modelbox-runtime-libtorch_1.9.1-cuda_10.2-ubuntu-x86_64:latest and deployed as follows:
1. dpkg -i modelbox-application-1.0.0-Linux-emotion.deb
2. tar -xzvf modelbox-application-1.0.0-Linux.tar.gz
3. modelbox-tool -verbose INFO flow -run /opt/modelbox/application/emotion/graph/emotion.toml
Step 3 fails with the following errors:
[2022-11-08 06:29:15,497][ INFO][ flow.cc:97 ] run flow /opt/modelbox/application/emotion/graph/emotion.toml
[2022-11-08 06:29:15,500][ INFO][ driver.cc:152 ] wait for subprocess 67 process finished
[2022-11-08 06:29:15,974][ INFO][ driver.cc:61 ] scan process log:
[2022-11-08 06:29:15.500][ INFO][ driver.cc:889 ] Scan dir: /opt/modelbox/application/emotion/flowunit
[2022-11-08 06:29:15.500][ INFO][ driver.cc:889 ] Scan dir: /usr/local/lib
[2022-11-08 06:29:15.506][ WARN][ driver.cc:1112] /usr/local/lib/libmodelbox-common-cpu-iam_auth.so : dlsym DriverDescription failed, /usr/local/lib/libmodelbox-common-cpu-iam_auth.so: undefined symbol: DriverDescription
[2022-11-08 06:29:15.509][ WARN][ driver.cc:1112] /usr/local/lib/libmodelbox-drivers-common-filerequester.so.1.0.0 : dlsym DriverDescription failed, /usr/local/lib/libmodelbox-drivers-common-filerequester.so.1.0.0: undefined symbol: DriverDescription
[2022-11-08 06:29:15.510][ WARN][ driver.cc:1112] /usr/local/lib/libmodelbox-drivers-common-fuse.so.1.0.0 : dlsym DriverDescription failed, /usr/local/lib/libmodelbox-drivers-common-fuse.so.1.0.0: undefined symbol: DriverDescription
[2022-11-08 06:29:15.560][ WARN][ driver.cc:1112] /usr/local/lib/libmodelbox-unit-cpu-obs_client.so : dlsym DriverDescription failed, /usr/local/lib/libmodelbox-unit-cpu-obs_client.so: undefined symbol: DriverDescription
[2022-11-08 06:29:15.576][ WARN][ driver.cc:1097] /usr/local/lib/libmodelbox-unit-cpu-python.so.1.0.0 : dlopen failed, libpython3.7m.so.1.0: cannot open shared object file: No such file or directory
[2022-11-08 06:29:15,976][ INFO][ driver.cc:764 ] Gather scan info success, drivers count 45
[2022-11-08 06:29:15,976][ INFO][ driver.cc:1010] begin scan virtual drivers
[2022-11-08 06:29:15,978][ INFO][virtualdriver_inference.cc:80 ] Add virtual driver /opt/modelbox/application/emotion/flowunit/face_detect/face_detect.toml success
[2022-11-08 06:29:15,981][ INFO][virtualdriver_inference.cc:80 ] Add virtual driver /opt/modelbox/application/emotion/flowunit/emotion_infer/emotion_infer.toml success
[2022-11-08 06:29:15,985][ WARN][ driver.cc:1045] virtual driver init failed, code: Not found, errmsg: can not find python flowunit
[2022-11-08 06:29:15,986][ INFO][virtualdriver_python.cc:86 ] Add virtual driver /opt/modelbox/application/emotion/flowunit/draw_emotion/draw_emotion.toml success
[2022-11-08 06:29:15,987][ INFO][virtualdriver_python.cc:86 ] Add virtual driver /opt/modelbox/application/emotion/flowunit/face_post/face_post.toml success
[2022-11-08 06:29:15,988][ INFO][virtualdriver_python.cc:86 ] Add virtual driver /opt/modelbox/application/emotion/flowunit/collapse_emotion/collapse_emotion.toml success
[2022-11-08 06:29:15,989][ INFO][virtualdriver_python.cc:86 ] Add virtual driver /opt/modelbox/application/emotion/flowunit/custom_resize/custom_resize.toml success
[2022-11-08 06:29:15,990][ INFO][virtualdriver_python.cc:86 ] Add virtual driver /opt/modelbox/application/emotion/flowunit/expand_box/expand_box.toml success
[2022-11-08 06:29:15,999][ INFO][ driver.cc:1012] end scan virtual drivers
[2022-11-08 06:29:16,346][ WARN][virtualdriver_python.cc:308 ] the key group type is empty, so classify it into Undefined.
[2022-11-08 06:29:16,346][ WARN][ flowunit.cc:515 ] is not match, you can use a-z, A-Z, 1-9, _ and uppercase the first character.
[2022-11-08 06:29:16,346][ WARN][ flowunit.cc:396 ] check group type failed , your group_type is , the right group_type is a or a/b , for instance input or input/http.
[2022-11-08 06:29:16,347][ WARN][virtualdriver_python.cc:308 ] the key group type is empty, so classify it into Undefined.
[2022-11-08 06:29:16,347][ WARN][ flowunit.cc:515 ] is not match, you can use a-z, A-Z, 1-9, _ and uppercase the first character.
[2022-11-08 06:29:16,347][ WARN][ flowunit.cc:396 ] check group type failed , your group_type is , the right group_type is a or a/b , for instance input or input/http.
[2022-11-08 06:29:16,348][ WARN][virtualdriver_python.cc:308 ] the key group type is empty, so classify it into Undefined.
[2022-11-08 06:29:16,348][ WARN][ flowunit.cc:515 ] is not match, you can use a-z, A-Z, 1-9, _ and uppercase the first character.
[2022-11-08 06:29:16,348][ WARN][ flowunit.cc:396 ] check group type failed , your group_type is , the right group_type is a or a/b , for instance input or input/http.
[2022-11-08 06:29:16,348][ WARN][virtualdriver_python.cc:308 ] the key group type is empty, so classify it into Undefined.
[2022-11-08 06:29:16,348][ WARN][ flowunit.cc:515 ] is not match, you can use a-z, A-Z, 1-9, _ and uppercase the first character.
[2022-11-08 06:29:16,348][ WARN][ flowunit.cc:396 ] check group type failed , your group_type is , the right group_type is a or a/b , for instance input or input/http.
[2022-11-08 06:29:16,349][ WARN][virtualdriver_python.cc:308 ] the key group type is empty, so classify it into Undefined.
[2022-11-08 06:29:16,349][ WARN][ flowunit.cc:515 ] is not match, you can use a-z, A-Z, 1-9, _ and uppercase the first character.
[2022-11-08 06:29:16,349][ WARN][ flowunit.cc:396 ] check group type failed , your group_type is , the right group_type is a or a/b , for instance input or input/http.
[2022-11-08 06:29:16,352][ INFO][ graph_manager.cc:353 ] graph.format : graphviz
[2022-11-08 06:29:16,353][ INFO][ graph.cc:116 ] Build graph name:emotion_detection, id:faa75409-0052-417e-856c-81735f04d805
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : collapse_emotion
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : confidence
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : predicts
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : out_data
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : custom_resize
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_image
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : out_image
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : draw_emotion
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_emotion
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_face
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : out_data
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : emotion_infer
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : input
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : confidence
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : predicts
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : expand_box
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_data
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : roi_image
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : face_detect
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : input
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : out_cls
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : out_conf
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : out_loc
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : face_mean
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_data
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : out_data
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : face_normalize
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_data
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : out_data
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : face_post
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_cls
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_conf
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_image
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_loc
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : has_face
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : no_face
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : face_resize
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_image
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : out_image
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : face_transpose
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_image
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : out_image
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : image_transpose
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_image
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : out_image
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : mean
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_data
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : out_data
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : normalize
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_data
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : out_data
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : video_input
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : out_video_url
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : videodecoder
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_video_packet
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : out_video_frame
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : videodemuxer
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_video_url
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:271 ] output port : out_video_packet
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:260 ] node name : videoencoder
[2022-11-08 06:29:16,353][ INFO][ graph_manager.cc:265 ] input port : in_video_frame
[2022-11-08 06:29:16,353][ INFO][ graph.cc:641 ] begin build node collapse_emotion
[2022-11-08 06:29:16,354][ WARN][virtualdriver_python.cc:308 ] the key group type is empty, so classify it into Undefined.
[2022-11-08 06:29:16,354][ WARN][ flowunit.cc:515 ] is not match, you can use a-z, A-Z, 1-9, _ and uppercase the first character.
[2022-11-08 06:29:16,354][ WARN][ flowunit.cc:396 ] check group type failed , your group_type is , the right group_type is a or a/b , for instance input or input/http.
[2022-11-08 06:29:16,354][ WARN][flowunit_manager.cc:342 ] CreateFlowUnit: collapse_emotion failed, code: Not found, errmsg: not found flowunit collapse_emotion
[2022-11-08 06:29:16,354][ERROR][ graph.cc:644 ] code: Not found, errmsg: create flowunit 'collapse_emotion' failed.
[2022-11-08 06:29:16,354][ERROR][ flow.cc:537 ] build graph failed, Not found, build graph failed, please check graph config. -> create flowunit 'collapse_emotion' failed. -> not found flowunit collapse_emotion
[2022-11-08 06:29:16,354][ERROR][ flow.cc:106 ] build flow failed, Not found, build graph failed, please check graph config. -> create flowunit 'collapse_emotion' failed. -> not found flowunit collapse_emotion

What is the cause, and how can this service be deployed successfully? Thanks!

The Ascend video decoder needs a timestamp attribute

The timestamp can be obtained with uint64_t time_stamp = acldvppGetStreamDescTimestamp(input);
Note, however, that this API is awkwardly designed: the timestamp is semantically an int64_t, and for some streams the first frame carries a negative value, which then shows up here as a huge positive number.
So when assigning it, cast the value back to int64_t (see the sketch below).
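
A minimal, self-contained illustration of the sign issue and the cast (no ACL headers; the raw value below is made up):

    #include <cstdint>
    #include <iostream>

    // A negative int64_t timestamp returned through a uint64_t API shows up as a
    // huge positive number; casting back to int64_t recovers the original value.
    int64_t ToSignedTimestamp(uint64_t raw) { return static_cast<int64_t>(raw); }

    int main() {
      uint64_t raw = static_cast<uint64_t>(int64_t{-40000});  // e.g. a first-frame PTS
      std::cout << raw << " -> " << ToSignedTimestamp(raw) << std::endl;
      return 0;
    }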

Failed to pull the libpytorch image

failed to register layer: ApplyLayer exit status 1 stdout: stderr: symlink libtorch_cpu.so /usr/local/lib/�: invalid or incomplete multibyte or wide character

Make configuration generation more generic

The configuration generation could be written more generically.

Essentially it reads a TOML file, modifies it, and writes it back out as TOML.

So the in-memory representation should simply be the TOML data structure, and writing it out is just a call to the TOML serializer (see the sketch below).

#215 (comment)
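
A minimal sketch of the read -> modify -> write-back flow, assuming toml11's parse/serialize interface (the TOML library referenced elsewhere in this tracker); the file name and keys here are made up for illustration:

    #include <fstream>
    #include <sstream>

    #include <toml.hpp>

    int main() {
      // Stand-in for an existing flowunit configuration file.
      std::istringstream in(R"([base]
    name = "resize"
    device = "cpu"
    )");
      // Parse into toml11's in-memory representation.
      auto data = toml::parse(in, "flowunit.toml");

      // Modify it as a TOML structure instead of hand-formatting strings.
      data.as_table()["base"].as_table()["device"] = toml::value("cuda");

      // Serialize the whole structure back out.
      std::ofstream out("flowunit.toml");
      out << data;
      return 0;
    }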

toml11 local version is not correct

Runtime environment | none (found by code review)

  • Operating system: Linux
  • Device: N/A
  • ModelBox version: master branch
  • GPU or NPU: N/A
  • Inference engine: N/A
  • Programming language: CMake scripts

Describe the problem: the toml11 version specified in thirdparty/CMake/local-package.in is 3.5.0, which is inconsistent with version 3.7.0 specified in thirdparty/CMake/pre-download.in.

Expected behavior: update the toml11 version specified in thirdparty/CMake/local-package.in to 3.7.0 so the two stay consistent.

Steps to reproduce: N/A
Logs: N/A
Other info: N/A

Does ModelBox support the RK NPU?

I see the image modelbox/modelbox-build-rockchip-rknnrt-356x-ubuntu-aarch64 on Docker Hub. Can this SDK use the RK NPU to run inference on RKNN models? And for stream push/pull with hardware encoding, which device should be specified?

How do I build the Ascend image for Ubuntu 22.04?

Please make sure that this is a feature request, not a bug report or a request for help.

System information (please provide as much relevant information as possible)

  • Operating system information: Linux
  • GPU or NPU information: NPU
  • Inference engine information: CANN
  • Programming language (C++, Python, Java): Python
  • Are you willing to contribute it (Yes/No): Yes

Describe the feature

Use ModelBox's NPU capability on an Ascend A2 development board.

Will this change the current API? How?

Benefit users

Suggested solution

Other info

Currently the NPU driver mounted into the 20.04 container cannot be used; the problem was traced to a glib compatibility issue.

