Lite.Ai.ToolKit: A lite C++ toolkit of awesome AI models, such as Object Detection, Face Detection, Face Recognition, Segmentation, Matting, etc. See Model Zoo and ONNX Hub, MNN Hub, TNN Hub, NCNN Hub. [❤️ Star this repo to support me if it helps you, thanks ~]
English | 中文文档 (Chinese Docs) | MacOS | Linux | Windows
- Simple and user-friendly. Consistent syntax like lite::cv::Type::Class, see examples.
- Minimum Dependencies. Only OpenCV and ONNXRuntime are required by default, see build.
- Lots of Algorithm Modules. Contains 10+ modules with 80+ AI models and 500+ weights now.
Consider citing it as follows if you use Lite.Ai.ToolKit in your projects.
```bibtex
@misc{lite.ai.toolkit2021,
  title={lite.ai.toolkit: A lite C++ toolkit of awesome AI models.},
  url={https://github.com/DefTruth/lite.ai.toolkit},
  note={Open-source software available at https://github.com/DefTruth/lite.ai.toolkit},
  author={Yan Jun},
  year={2021}
}
```
A high-level training and evaluating toolkit for face landmark detection is available at torchlm.
Some prebuilt lite.ai.toolkit libs for MacOS(x64) and Linux(x64) are available; you can download them from the release links. Prebuilt libs for Windows(x64) and Android are coming soon. Please see issues#48 for more details of the prebuilt plan, and refer to releases for all available prebuilt libs.
- lite0.1.1-osx10.15.x-ocv4.5.2-ffmpeg4.2.2-onnxruntime1.8.1.zip
- lite0.1.1-osx10.15.x-ocv4.5.2-ffmpeg4.2.2-onnxruntime1.9.0.zip
- lite0.1.1-osx10.15.x-ocv4.5.2-ffmpeg4.2.2-onnxruntime1.10.0.zip
- lite0.1.1-ubuntu18.04-ocv4.5.2-ffmpeg4.2.2-onnxruntime1.8.1.zip
- lite0.1.1-ubuntu18.04-ocv4.5.2-ffmpeg4.2.2-onnxruntime1.9.0.zip
- lite0.1.1-ubuntu18.04-ocv4.5.2-ffmpeg4.2.2-onnxruntime1.10.0.zip
On Linux, in order to link the prebuilt libs, you need to add lite.ai.toolkit/lib to LD_LIBRARY_PATH first.
```shell
export LD_LIBRARY_PATH=YOUR-PATH-TO/lite.ai.toolkit/lib:$LD_LIBRARY_PATH
```
To quickly set up lite.ai.toolkit, you can follow the CMakeLists.txt listed below.
```cmake
set(LITE_AI_DIR ${CMAKE_SOURCE_DIR}/lite.ai.toolkit)
include_directories(${LITE_AI_DIR}/include)
link_directories(${LITE_AI_DIR}/lib)

set(TOOLKIT_LIBS lite.ai.toolkit onnxruntime)
set(OpenCV_LIBS opencv_core opencv_imgcodecs opencv_imgproc opencv_video opencv_videoio)

add_executable(lite_yolov5 examples/test_lite_yolov5.cpp)
target_link_libraries(lite_yolov5 ${TOOLKIT_LIBS} ${OpenCV_LIBS})
```
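With that CMakeLists.txt in place, a standard out-of-source CMake build should work. This is a minimal sketch; it assumes the lite.ai.toolkit folder sits next to your CMakeLists.txt and the prebuilt libs are already in lite.ai.toolkit/lib:

```shell
# hypothetical project layout, matching the CMakeLists.txt above
mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j4
# on Linux, remember to export LD_LIBRARY_PATH (see above) before running
./lite_yolov5
```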
- Core Features
- Quick Start
- RoadMap
- Important Updates
- Supported Models Matrix
- Build Docs
- Model Zoo
- Examples
- License
- References
- Contribute
```c++
#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);

  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete yolov5;
}
```
Click here to see details of Important Updates!
Date | Model | C++ | Paper | Code | Awesome | Type
---|---|---|---|---|---|---
2022/04/03 | MODNet | link | AAAI 2022 | code | | matting
2022/03/23 | PIPNet | link | CVPR 2021 | code | | face::align
2022/01/19 | YOLO5Face | link | arXiv 2021 | code | | face::detect
2022/01/07 | SCRFD | link | CVPR 2021 | code | | face::detect
2021/12/27 | NanoDetPlus | link | blog | code | | detection
2021/12/08 | MGMatting | link | CVPR 2021 | code | | matting
2021/11/11 | YoloV5_V_6_0 | link | doi | code | | detection
2021/10/26 | YoloX_V_0_1_1 | link | arXiv 2021 | code | | detection
2021/10/02 | NanoDet | link | blog | code | | detection
2021/09/20 | RobustVideoMatting | link | WACV 2022 | code | | matting
2021/09/02 | YOLOP | link | arXiv 2021 | code | | detection
- / = not supported now.
- ✅ = known to work and officially supported now.
- ✔️ = known to work, but unofficially supported now.
- ❔ = in my plan, but not coming soon, maybe a few months later.
Class | Size | Type | Demo | ONNXRuntime | MNN | NCNN | TNN | MacOS | Linux | Windows | Android
---|---|---|---|---|---|---|---|---|---|---|---
YoloV5 | 28M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
YoloV3 | 236M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | /
TinyYoloV3 | 33M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | /
YoloV4 | 176M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | /
SSD | 76M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | /
SSDMobileNetV1 | 27M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | /
YoloX | 3.5M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
TinyYoloV4VOC | 22M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | /
TinyYoloV4COCO | 22M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | /
YoloR | 39M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
ScaledYoloV4 | 270M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | /
EfficientDet | 15M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | /
EfficientDetD7 | 220M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | /
EfficientDetD8 | 322M | detection | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | /
YOLOP | 30M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
NanoDet | 1.1M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
NanoDetPlus | 4.5M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
NanoDetEffi... | 12M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
YoloX_V_0_1_1 | 3.5M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
YoloV5_V_6_0 | 7.5M | detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
GlintArcFace | 92M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
GlintCosFace | 92M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
GlintPartialFC | 170M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
FaceNet | 89M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
FocalArcFace | 166M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
FocalAsiaArcFace | 166M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
TencentCurricularFace | 249M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
TencentCifpFace | 130M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
CenterLossFace | 280M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
SphereFace | 80M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
PoseRobustFace | 92M | faceid | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | /
NaivePoseRobustFace | 43M | faceid | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | /
MobileFaceNet | 3.8M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
CavaGhostArcFace | 15M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
CavaCombinedFace | 250M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
MobileSEFocalFace | 4.5M | faceid | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
RobustVideoMatting | 14M | matting | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | ❔
MGMatting | 113M | matting | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | /
MODNet | 24M | matting | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
MODNetDyn | 24M | matting | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | /
BackgroundMattingV2 | 20M | matting | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | /
BackgroundMattingV2Dyn | 20M | matting | demo | ✅ | / | / | / | ✅ | ✔️ | ✔️ | /
UltraFace | 1.1M | face::detect | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
RetinaFace | 1.6M | face::detect | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
FaceBoxes | 3.8M | face::detect | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
FaceBoxesV2 | 3.8M | face::detect | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
SCRFD | 2.5M | face::detect | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
YOLO5Face | 4.8M | face::detect | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
PFLD | 1.0M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
PFLD98 | 4.8M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
MobileNetV268 | 9.4M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
MobileNetV2SE68 | 11M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
PFLD68 | 2.8M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
FaceLandmark1000 | 2.0M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
PIPNet98 | 44.0M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
PIPNet68 | 44.0M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
PIPNet29 | 44.0M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
PIPNet19 | 44.0M | face::align | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
FSANet | 1.2M | face::pose | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | ❔
AgeGoogleNet | 23M | face::attr | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
GenderGoogleNet | 23M | face::attr | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
EmotionFerPlus | 33M | face::attr | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
VGG16Age | 514M | face::attr | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
VGG16Gender | 512M | face::attr | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
SSRNet | 190K | face::attr | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | ❔
EfficientEmotion7 | 15M | face::attr | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
EfficientEmotion8 | 15M | face::attr | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
MobileEmotion7 | 13M | face::attr | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
ReXNetEmotion7 | 30M | face::attr | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | /
EfficientNetLite4 | 49M | classification | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | /
ShuffleNetV2 | 8.7M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
DenseNet121 | 30.7M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
GhostNet | 20M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
HdrDNet | 13M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
IBNNet | 97M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
MobileNetV2 | 13M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
ResNet | 44M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
ResNeXt | 95M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
DeepLabV3ResNet101 | 232M | segmentation | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
FCNResNet101 | 207M | segmentation | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | /
FastStyleTransfer | 6.4M | style | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️ | ❔
Colorizer | 123M | colorization | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | /
SubPixelCNN | 234K | resolution | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | ❔
InsectDet | 27M | detection | demo | ✅ | ✅ | / | ✅ | ✅ | ✔️ | ✔️ | ❔
InsectID | 22M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️
PlantID | 30M | classification | demo | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✔️ | ✔️
- MacOS: Build the shared lib of Lite.Ai.ToolKit for MacOS from source. Note that Lite.Ai.ToolKit uses ONNXRuntime as the default backend, because ONNXRuntime supports most ONNX operators.
```shell
git clone --depth=1 https://github.com/DefTruth/lite.ai.toolkit.git  # latest
cd lite.ai.toolkit && sh ./build.sh  # On MacOS, you can use the built OpenCV, ONNXRuntime, MNN, NCNN and TNN libs in this repo.
```
💡 Linux and Windows: copy the headers of each dependency into the matching directory:
- lite.ai.toolkit/opencv2
```shell
cp -r your-path-to-downloaded-or-built-opencv/include/opencv4/opencv2 lite.ai.toolkit/opencv2
```
- lite.ai.toolkit/onnxruntime
```shell
cp -r your-path-to-downloaded-or-built-onnxruntime/include/onnxruntime lite.ai.toolkit/onnxruntime
```
- lite.ai.toolkit/MNN
```shell
cp -r your-path-to-downloaded-or-built-MNN/include/MNN lite.ai.toolkit/MNN
```
- lite.ai.toolkit/ncnn
```shell
cp -r your-path-to-downloaded-or-built-ncnn/include/ncnn lite.ai.toolkit/ncnn
```
- lite.ai.toolkit/tnn
```shell
cp -r your-path-to-downloaded-or-built-TNN/include/tnn lite.ai.toolkit/tnn
```
Then put the libs into the lite.ai.toolkit/lib/(linux|windows) directory. Please refer to the build docs for third_party.
- lite.ai.toolkit/lib/(linux|windows)
```shell
cp your-path-to-downloaded-or-built-opencv/lib/*opencv* lite.ai.toolkit/lib/(linux|windows)/
cp your-path-to-downloaded-or-built-onnxruntime/lib/*onnxruntime* lite.ai.toolkit/lib/(linux|windows)/
cp your-path-to-downloaded-or-built-MNN/lib/*MNN* lite.ai.toolkit/lib/(linux|windows)/
cp your-path-to-downloaded-or-built-ncnn/lib/*ncnn* lite.ai.toolkit/lib/(linux|windows)/
cp your-path-to-downloaded-or-built-TNN/lib/*TNN* lite.ai.toolkit/lib/(linux|windows)/
```
Note, you also need to install ffmpeg (<=4.2.2) on Linux to support the OpenCV videoio module; see issue#203. On MacOS, ffmpeg 4.2.2 is packaged into lite.ai.toolkit, so no installation is needed. On Windows, ffmpeg is packaged into the OpenCV DLLs prebuilt by the OpenCV team. Please make sure -DWITH_FFMPEG=ON is set and check the configuration info when building OpenCV.
- First, build ffmpeg (<=4.2.2) from source:
```shell
git clone --depth=1 https://git.ffmpeg.org/ffmpeg.git -b n4.2.2
cd ffmpeg
./configure --enable-shared --disable-x86asm --prefix=/usr/local/opt/ffmpeg --disable-static
make -j8
make install
```
- Then, build OpenCV with -DWITH_FFMPEG=ON, like:
```shell
#!/bin/bash
mkdir build
cd build
cmake .. \
  -D CMAKE_BUILD_TYPE=Release \
  -D CMAKE_INSTALL_PREFIX=your-path-to-custom-dir \
  -D BUILD_TESTS=OFF \
  -D BUILD_PERF_TESTS=OFF \
  -D BUILD_opencv_python3=OFF \
  -D BUILD_opencv_python2=OFF \
  -D BUILD_SHARED_LIBS=ON \
  -D BUILD_opencv_apps=OFF \
  -D WITH_FFMPEG=ON
make -j8
make install
cd ..
```
After building OpenCV, you can follow the steps above to build lite.ai.toolkit.
- Windows: you can refer to issue#6.
- Linux: the docs and Docker image for Linux are coming soon; see issue#2.
- Happy news!!! You can download the latest official prebuilt ONNXRuntime libs for Windows, Linux, MacOS and ARM. Both CPU and GPU versions are available, so you no longer need to build it from source. Download the official prebuilt libs from v1.8.1. Lite.Ai.ToolKit currently uses version 1.7.0, which you can download from v1.7.0, but version 1.8.1 should also work. For OpenCV, try building from source (Linux) or download the official prebuilt package (Windows) from OpenCV 4.5.3. Then put the includes and libs into the specific directories of Lite.Ai.ToolKit, as sketched after this list.
- GPU compatibility for Windows: see issue#10.
- GPU compatibility for Linux: see issue#97.
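For instance, unpacking an official ONNXRuntime release into the toolkit layout might look like this. This is a hedged sketch: the archive name follows the usual onnxruntime release naming, and your version, platform, and paths will differ.

```shell
# hypothetical archive name and paths; adjust to the release you actually downloaded
tar xzf onnxruntime-linux-x64-1.8.1.tgz
cp -r onnxruntime-linux-x64-1.8.1/include/* lite.ai.toolkit/onnxruntime/
cp onnxruntime-linux-x64-1.8.1/lib/*onnxruntime* lite.ai.toolkit/lib/linux/
```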
How to link Lite.Ai.ToolKit?
* To link Lite.Ai.ToolKit, you can follow the CMakeLists.txt listed below.
```cmake
cmake_minimum_required(VERSION 3.10)
project(lite.ai.toolkit.demo)

set(CMAKE_CXX_STANDARD 11)

# setting up lite.ai.toolkit
set(LITE_AI_DIR ${CMAKE_SOURCE_DIR}/lite.ai.toolkit)
set(LITE_AI_INCLUDE_DIR ${LITE_AI_DIR}/include)
set(LITE_AI_LIBRARY_DIR ${LITE_AI_DIR}/lib)
include_directories(${LITE_AI_INCLUDE_DIR})
link_directories(${LITE_AI_LIBRARY_DIR})

set(OpenCV_LIBS
    opencv_highgui
    opencv_core
    opencv_imgcodecs
    opencv_imgproc
    opencv_video
    opencv_videoio)

# add your executable
set(EXECUTABLE_OUTPUT_PATH ${CMAKE_SOURCE_DIR}/examples/build)
add_executable(lite_rvm examples/test_lite_rvm.cpp)
target_link_libraries(lite_rvm
    lite.ai.toolkit
    onnxruntime
    MNN  # needed only if lite.ai.toolkit was built with ENABLE_MNN=ON, default OFF
    ncnn # needed only if lite.ai.toolkit was built with ENABLE_NCNN=ON, default OFF
    TNN  # needed only if lite.ai.toolkit was built with ENABLE_TNN=ON, default OFF
    ${OpenCV_LIBS}) # link lite.ai.toolkit & other libs.
```
```shell
cd ./build/lite.ai.toolkit/lib && otool -L liblite.ai.toolkit.0.0.1.dylib
liblite.ai.toolkit.0.0.1.dylib:
        @rpath/liblite.ai.toolkit.0.0.1.dylib (compatibility version 0.0.1, current version 0.0.1)
        @rpath/libopencv_highgui.4.5.dylib (compatibility version 4.5.0, current version 4.5.2)
        @rpath/libonnxruntime.1.7.0.dylib (compatibility version 0.0.0, current version 1.7.0)
        ...
```
```shell
cd ../ && tree .
├── bin
├── include
│   ├── lite
│   │   ├── backend.h
│   │   ├── config.h
│   │   └── lite.h
│   └── ort
└── lib
    └── liblite.ai.toolkit.0.0.1.dylib
```
- Run the built examples:
```shell
cd ./build/lite.ai.toolkit/bin && ls -lh | grep lite
-rwxr-xr-x  1 root  staff   301K Jun 26 23:10 liblite.ai.toolkit.0.0.1.dylib
...
-rwxr-xr-x  1 root  staff   196K Jun 26 23:10 lite_yolov4
-rwxr-xr-x  1 root  staff   196K Jun 26 23:10 lite_yolov5
...
```
```shell
./lite_yolov5
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/yolov5s.onnx
=============== Input-Dims ==============
...
detected num_anchors: 25200
generate_bboxes num: 66
Default Version Detected Boxes Num: 5
```
To link the lite.ai.toolkit shared lib, you need to make sure that OpenCV and onnxruntime are linked correctly. A minimum example showing how to link the shared lib of Lite.Ai.ToolKit for your own project can be found at CMakeLists.txt.

Lite.Ai.ToolKit contains 80+ AI models with 500+ frozen pretrained files now. Most of the files were converted by myself. You can use them through the lite::cv::Type::Class syntax, such as lite::cv::detection::YoloV5. More details can be found at Examples for Lite.Ai.ToolKit. Note: because of the 15G storage limitation, I cannot upload all of the *.onnx files to Google Drive.
File | Baidu Drive | Google Drive | Docker Hub | Hub (Docs)
---|---|---|---|---
ONNX | Baidu Drive (code: 8gin) | Google Drive | ONNX Docker v0.1.22.01.08 (28G), v0.1.22.02.02 (400M) | ONNX Hub
MNN | Baidu Drive (code: 9v63) | ❔ | MNN Docker v0.1.22.01.08 (11G), v0.1.22.02.02 (213M) | MNN Hub
NCNN | Baidu Drive (code: sc7f) | ❔ | NCNN Docker v0.1.22.01.08 (9G), v0.1.22.02.02 (197M) | NCNN Hub
TNN | Baidu Drive (code: 6o6k) | ❔ | TNN Docker v0.1.22.01.08 (11G), v0.1.22.02.02 (217M) | TNN Hub

```shell
docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.01.08  # (28G)
docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08   # (11G)
docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.01.08  # (9G)
docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.01.08   # (11G)
docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.02.02  # (400M) + YOLO5Face
docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.02.02   # (213M) + YOLO5Face
docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.02.02  # (197M) + YOLO5Face
docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.02.02   # (217M) + YOLO5Face
```
Lite.Ai.ToolKit modules.
Namespace | Details
---|---
lite::cv::detection | Object Detection. One-stage and anchor-free detectors: YoloV5, YoloV4, SSD, etc. ✅
lite::cv::classification | Image Classification. DenseNet, ShuffleNet, ResNet, IBNNet, GhostNet, etc. ✅
lite::cv::faceid | Face Recognition. ArcFace, CosFace, CurricularFace, etc. ✔️
lite::cv::face | Face Analysis. detect, align, pose, attr, etc. ✔️
lite::cv::face::detect | Face Detection. UltraFace, RetinaFace, FaceBoxes, PyramidBox, etc. ✔️
lite::cv::face::align | Face Alignment. PFLD(106), FaceLandmark1000(1000 landmarks), PRNet, etc. ✔️
lite::cv::face::pose | Head Pose Estimation. FSANet, etc. ✔️
lite::cv::face::attr | Face Attributes. Emotion, Age, Gender. EmotionFerPlus, VGG16Age, etc. ✔️
lite::cv::segmentation | Object Segmentation. Such as FCN, DeepLabV3, etc. ✔️
lite::cv::style | Style Transfer. Contains neural style transfer now, such as FastStyleTransfer. ⚠️
lite::cv::matting | Image Matting. Object and human matting. ✔️
lite::cv::colorization | Colorization. Make gray images become RGB. ⚠️
lite::cv::resolution | Super Resolution. ⚠️

The correspondence between the classes in Lite.Ai.ToolKit and pretrained model files can be found at lite.ai.toolkit.hub.onnx.md. For example, the pretrained model files for lite::cv::detection::YoloV5 and lite::cv::detection::YoloX are listed as follows.
Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size
---|---|---|---
lite::cv::detection::YoloV5 | yolov5l.onnx | yolov5 (🔥🔥🔥) | 188Mb
lite::cv::detection::YoloV5 | yolov5m.onnx | yolov5 (🔥🔥🔥) | 85Mb
lite::cv::detection::YoloV5 | yolov5s.onnx | yolov5 (🔥🔥🔥) | 29Mb
lite::cv::detection::YoloV5 | yolov5x.onnx | yolov5 (🔥🔥🔥) | 351Mb
lite::cv::detection::YoloX | yolox_x.onnx | YOLOX (🔥🔥!!) | 378Mb
lite::cv::detection::YoloX | yolox_l.onnx | YOLOX (🔥🔥!!) | 207Mb
lite::cv::detection::YoloX | yolox_m.onnx | YOLOX (🔥🔥!!) | 97Mb
lite::cv::detection::YoloX | yolox_s.onnx | YOLOX (🔥🔥!!) | 34Mb
lite::cv::detection::YoloX | yolox_tiny.onnx | YOLOX (🔥🔥!!) | 19Mb
lite::cv::detection::YoloX | yolox_nano.onnx | YOLOX (🔥🔥!!) | 3.5Mb

This means that you can load any one of the yolov5*.onnx and yolox_*.onnx files according to your application through the same Lite.Ai.ToolKit classes, such as YoloV5, YoloX, etc.

```c++
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5x.onnx");  // for server
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5l.onnx");
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5m.onnx");
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx");  // for mobile device
auto *yolox = new lite::cv::detection::YoloX("yolox_x.onnx");
auto *yolox = new lite::cv::detection::YoloX("yolox_l.onnx");
auto *yolox = new lite::cv::detection::YoloX("yolox_m.onnx");
auto *yolox = new lite::cv::detection::YoloX("yolox_s.onnx");
auto *yolox = new lite::cv::detection::YoloX("yolox_tiny.onnx");
auto *yolox = new lite::cv::detection::YoloX("yolox_nano.onnx"); // 3.5Mb only !
```
How to download the Model Zoo from Docker Hub?
- Firstly, pull the image from Docker Hub:
```shell
docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08   # (11G)
docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.01.08  # (9G)
docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.01.08   # (11G)
docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.01.08  # (28G)
```
- Secondly, run the container with a local `share` dir using `docker run -idt xxx`. A minimal example is shown below.
- Make a `share` dir on your local device:
```shell
mkdir share  # any name is OK.
```
- Write a `run_mnn_docker_hub.sh` script like:
```shell
#!/bin/bash
PORT1=6072
PORT2=6084
SERVICE_DIR=/Users/xxx/Desktop/your-path-to/share
CONTAINER_DIR=/home/hub/share
CONTAINER_NAME=mnn_docker_hub_d

docker run -idt -p ${PORT2}:${PORT1} -v ${SERVICE_DIR}:${CONTAINER_DIR} --shm-size=16gb --name ${CONTAINER_NAME} qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08
```
- Finally, copy the model weights from `/home/hub/mnn/cv` to your local `share` dir:
```shell
# activate the mnn docker container.
sh ./run_mnn_docker_hub.sh
docker exec -it mnn_docker_hub_d /bin/bash
# copy the models to the share dir.
cd /home/hub
cp -rf mnn/cv share/
```
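Once copied, the MNN interfaces can simply be pointed at the local files. This is a hedged sketch: `nanodet_m.mnn` is a hypothetical file name, so check the MNN hub docs for the actual ones.

```c++
// hypothetical path under the local share dir copied from the container
auto *nanodet = new lite::mnn::cv::detection::NanoDet("./share/cv/nanodet_m.mnn");
```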
The pretrained and converted ONNX files provided by lite.ai.toolkit are listed as follows. Also, see Model Zoo and ONNX Hub, MNN Hub, TNN Hub, NCNN Hub for more details.
ONNX Model Hub
Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size
---|---|---|---
lite::cv::detection::YoloV5 | yolov5l.onnx | yolov5 | 188Mb
lite::cv::detection::YoloV5 | yolov5m.onnx | yolov5 | 85Mb
lite::cv::detection::YoloV5 | yolov5s.onnx | yolov5 | 29Mb
lite::cv::detection::YoloV5 | yolov5x.onnx | yolov5 | 351Mb
lite::cv::detection::YoloX | yolox_x.onnx | YOLOX | 378Mb
lite::cv::detection::YoloX | yolox_l.onnx | YOLOX | 207Mb
lite::cv::detection::YoloX | yolox_m.onnx | YOLOX | 97Mb
lite::cv::detection::YoloX | yolox_s.onnx | YOLOX | 34Mb
lite::cv::detection::YoloX | yolox_tiny.onnx | YOLOX | 19Mb
lite::cv::detection::YoloX | yolox_nano.onnx | YOLOX | 3.5Mb
lite::cv::detection::YoloV3 | yolov3-10.onnx | onnx-models | 236Mb
lite::cv::detection::TinyYoloV3 | tiny-yolov3-11.onnx | onnx-models | 33Mb
lite::cv::detection::YoloV4 | voc-mobilenetv2-yolov4-640.onnx | YOLOv4... | 176Mb
lite::cv::detection::YoloV4 | voc-mobilenetv2-yolov4-416.onnx | YOLOv4... | 176Mb
lite::cv::detection::SSD | ssd-10.onnx | onnx-models | 76Mb
lite::cv::detection::YoloR | yolor-d6-1280-1280.onnx | yolor | 667Mb
lite::cv::detection::YoloR | yolor-d6-640-640.onnx | yolor | 601Mb
lite::cv::detection::YoloR | yolor-d6-320-320.onnx | yolor | 584Mb
lite::cv::detection::YoloR | yolor-e6-1280-1280.onnx | yolor | 530Mb
lite::cv::detection::YoloR | yolor-e6-640-640.onnx | yolor | 464Mb
lite::cv::detection::YoloR | yolor-e6-320-320.onnx | yolor | 448Mb
lite::cv::detection::YoloR | yolor-p6-1280-1280.onnx | yolor | 214Mb
lite::cv::detection::YoloR | yolor-p6-640-640.onnx | yolor | 160Mb
lite::cv::detection::YoloR | yolor-p6-320-320.onnx | yolor | 147Mb
lite::cv::detection::YoloR | yolor-w6-1280-1280.onnx | yolor | 382Mb
lite::cv::detection::YoloR | yolor-w6-640-640.onnx | yolor | 324Mb
lite::cv::detection::YoloR | yolor-w6-320-320.onnx | yolor | 309Mb
lite::cv::detection::YoloR | yolor-ssss-s2d-1280-1280.onnx | yolor | 90Mb
lite::cv::detection::YoloR | yolor-ssss-s2d-640-640.onnx | yolor | 49Mb
lite::cv::detection::YoloR | yolor-ssss-s2d-320-320.onnx | yolor | 39Mb
lite::cv::detection::TinyYoloV4VOC | yolov4_tiny_weights_voc.onnx | yolov4-tiny... | 23Mb
lite::cv::detection::TinyYoloV4VOC | yolov4_tiny_weights_voc_SE.onnx | yolov4-tiny... | 23Mb
lite::cv::detection::TinyYoloV4VOC | yolov4_tiny_weights_voc_CBAM.onnx | yolov4-tiny... | 23Mb
lite::cv::detection::TinyYoloV4VOC | yolov4_tiny_weights_voc_ECA.onnx | yolov4-tiny... | 23Mb
lite::cv::detection::TinyYoloV4COCO | yolov4_tiny_weights_coco.onnx | yolov4-tiny... | 23Mb
lite::cv::detection::ScaledYoloV4 | ScaledYoloV4_yolov4-p5-1280-1280.onnx | ScaledYOLOv4 | 270Mb
lite::cv::detection::ScaledYoloV4 | ScaledYoloV4_yolov4-p5-640-640.onnx | ScaledYOLOv4 | 270Mb
lite::cv::detection::ScaledYoloV4 | ScaledYoloV4_yolov4-p5-320-320.onnx | ScaledYOLOv4 | 270Mb
lite::cv::detection::ScaledYoloV4 | ScaledYoloV4_yolov4-p6-1280-1280.onnx | ScaledYOLOv4 | 487Mb
lite::cv::detection::ScaledYoloV4 | ScaledYoloV4_yolov4-p6-640-640.onnx | ScaledYOLOv4 | 487Mb
lite::cv::detection::ScaledYoloV4 | ScaledYoloV4_yolov4-p6-320-320.onnx | ScaledYOLOv4 | 487Mb
lite::cv::detection::ScaledYoloV4 | ScaledYoloV4_yolov4-p7-1280-1280.onnx | ScaledYOLOv4 | 1.1Gb
lite::cv::detection::ScaledYoloV4 | ScaledYoloV4_yolov4-p7-640-640.onnx | ScaledYOLOv4 | 1.1Gb
lite::cv::detection::ScaledYoloV4 | ScaledYoloV4_yolov4-p5_-1280-1280.onnx | ScaledYOLOv4 | 270Mb
lite::cv::detection::ScaledYoloV4 | ScaledYoloV4_yolov4-p5_-640-640.onnx | ScaledYOLOv4 | 270Mb
lite::cv::detection::ScaledYoloV4 | ScaledYoloV4_yolov4-p5_-320-320.onnx | ScaledYOLOv4 | 270Mb
lite::cv::detection::ScaledYoloV4 | ScaledYoloV4_yolov4-p6_-1280-1280.onnx | ScaledYOLOv4 | 487Mb
lite::cv::detection::ScaledYoloV4 | ScaledYoloV4_yolov4-p6_-640-640.onnx | ScaledYOLOv4 | 487Mb
lite::cv::detection::ScaledYoloV4 | ScaledYoloV4_yolov4-p6_-320-320.onnx | ScaledYOLOv4 | 487Mb
lite::cv::detection::EfficientDet | efficientdet-d0.onnx | ...EfficientDet... | 15Mb
lite::cv::detection::EfficientDet | efficientdet-d1.onnx | ...EfficientDet... | 26Mb
lite::cv::detection::EfficientDet | efficientdet-d2.onnx | ...EfficientDet... | 32Mb
lite::cv::detection::EfficientDet | efficientdet-d3.onnx | ...EfficientDet... | 49Mb
lite::cv::detection::EfficientDet | efficientdet-d4.onnx | ...EfficientDet... | 85Mb
lite::cv::detection::EfficientDet | efficientdet-d5.onnx | ...EfficientDet... | 138Mb
lite::cv::detection::EfficientDet | efficientdet-d6.onnx | ...EfficientDet... | 220Mb
lite::cv::detection::EfficientDetD7 | efficientdet-d7.onnx | ...EfficientDet... | 220Mb
lite::cv::detection::EfficientDetD8 | efficientdet-d8.onnx | ...EfficientDet... | 322Mb
lite::cv::detection::YOLOP | yolop-1280-1280.onnx | YOLOP | 30Mb
lite::cv::detection::YOLOP | yolop-640-640.onnx | YOLOP | 30Mb
lite::cv::detection::YOLOP | yolop-320-320.onnx | YOLOP | 30Mb
lite::cv::detection::NanoDet | nanodet_m_0.5x.onnx | nanodet | 1.1Mb
lite::cv::detection::NanoDet | nanodet_m.onnx | nanodet | 3.6Mb
lite::cv::detection::NanoDet | nanodet_m_1.5x.onnx | nanodet | 7.9Mb
lite::cv::detection::NanoDet | nanodet_m_1.5x_416.onnx | nanodet | 7.9Mb
lite::cv::detection::NanoDet | nanodet_m_416.onnx | nanodet | 3.6Mb
lite::cv::detection::NanoDet | nanodet_g.onnx | nanodet | 14Mb
lite::cv::detection::NanoDet | nanodet_t.onnx | nanodet | 5.1Mb
lite::cv::detection::NanoDet | nanodet-RepVGG-A0_416.onnx | nanodet | 26Mb
lite::cv::detection::NanoDetEfficientNetLite | nanodet-EfficientNet-Lite0_320.onnx | nanodet | 12Mb
lite::cv::detection::NanoDetEfficientNetLite | nanodet-EfficientNet-Lite1_416.onnx | nanodet | 15Mb
lite::cv::detection::NanoDetEfficientNetLite | nanodet-EfficientNet-Lite2_512.onnx | nanodet | 18Mb
lite::cv::detection::YoloX_V_0_1_1 | yolox_x_v0.1.1.onnx | YOLOX | 378Mb
lite::cv::detection::YoloX_V_0_1_1 | yolox_l_v0.1.1.onnx | YOLOX | 207Mb
lite::cv::detection::YoloX_V_0_1_1 | yolox_m_v0.1.1.onnx | YOLOX | 97Mb
lite::cv::detection::YoloX_V_0_1_1 | yolox_s_v0.1.1.onnx | YOLOX | 34Mb
lite::cv::detection::YoloX_V_0_1_1 | yolox_tiny_v0.1.1.onnx | YOLOX | 19Mb
lite::cv::detection::YoloX_V_0_1_1 | yolox_nano_v0.1.1.onnx | YOLOX | 3.5Mb
lite::cv::detection::YoloV5_V_6_0 | yolov5l.640-640.v.6.0.onnx | yolov5 | 178Mb
lite::cv::detection::YoloV5_V_6_0 | yolov5m.640-640.v.6.0.onnx | yolov5 | 81Mb
lite::cv::detection::YoloV5_V_6_0 | yolov5s.640-640.v.6.0.onnx | yolov5 | 28Mb
lite::cv::detection::YoloV5_V_6_0 | yolov5x.640-640.v.6.0.onnx | yolov5 | 331Mb
lite::cv::detection::YoloV5_V_6_0 | yolov5n.640-640.v.6.0.onnx | yolov5 | 7.5Mb
lite::cv::detection::YoloV5_V_6_0 | yolov5l6.640-640.v.6.0.onnx | yolov5 | 294Mb
lite::cv::detection::YoloV5_V_6_0 | yolov5m6.640-640.v.6.0.onnx | yolov5 | 128Mb
lite::cv::detection::YoloV5_V_6_0 | yolov5s6.640-640.v.6.0.onnx | yolov5 | 50Mb
lite::cv::detection::YoloV5_V_6_0 | yolov5x6.640-640.v.6.0.onnx | yolov5 | 538Mb
lite::cv::detection::YoloV5_V_6_0 | yolov5n6.640-640.v.6.0.onnx | yolov5 | 14Mb
lite::cv::detection::YoloV5_V_6_0 | yolov5l6.1280-1280.v.6.0.onnx | yolov5 | 294Mb
lite::cv::detection::YoloV5_V_6_0 | yolov5m6.1280-1280.v.6.0.onnx | yolov5 | 128Mb
lite::cv::detection::YoloV5_V_6_0 | yolov5s6.1280-1280.v.6.0.onnx | yolov5 | 50Mb
lite::cv::detection::YoloV5_V_6_0 | yolov5x6.1280-1280.v.6.0.onnx | yolov5 | 538Mb
lite::cv::detection::YoloV5_V_6_0 | yolov5n6.1280-1280.v.6.0.onnx | yolov5 | 14Mb
lite::cv::detection::NanoDetPlus | nanodet-plus-m_320.onnx | nanodet | 4.5Mb
lite::cv::detection::NanoDetPlus | nanodet-plus-m_416.onnx | nanodet | 4.5Mb
lite::cv::detection::NanoDetPlus | nanodet-plus-m-1.5x_320.onnx | nanodet | 9.4Mb
lite::cv::detection::NanoDetPlus | nanodet-plus-m-1.5x_416.onnx | nanodet | 9.4Mb
lite::cv::detection::InsectDet | quarrying_insect_detector.onnx | InsectID | 22Mb

Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size
---|---|---|---
lite::cv::classification::EfficientNetLite4 | efficientnet-lite4-11.onnx | onnx-models | 49Mb
lite::cv::classification::ShuffleNetV2 | shufflenet-v2-10.onnx | onnx-models | 8.7Mb
lite::cv::classification::DenseNet121 | densenet121.onnx | torchvision | 30Mb
lite::cv::classification::GhostNet | ghostnet.onnx | torchvision | 20Mb
lite::cv::classification::HdrDNet | hardnet.onnx | torchvision | 13Mb
lite::cv::classification::IBNNet | ibnnet18.onnx | torchvision | 97Mb
lite::cv::classification::MobileNetV2 | mobilenetv2.onnx | torchvision | 13Mb
lite::cv::classification::ResNet | resnet18.onnx | torchvision | 44Mb
lite::cv::classification::ResNeXt | resnext.onnx | torchvision | 95Mb
lite::cv::classification::InsectID | quarrying_insect_identifier.onnx | InsectID | 27Mb
lite::cv::classification::PlantID | quarrying_plantid_model.onnx | PlantID | 30Mb

Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size
---|---|---|---
lite::cv::face::detect::UltraFace | ultraface-rfb-640.onnx | Ultra-Light... | 1.5Mb
lite::cv::face::detect::UltraFace | ultraface-rfb-320.onnx | Ultra-Light... | 1.2Mb
lite::cv::face::detect::RetinaFace | Pytorch_RetinaFace_resnet50.onnx | ...Retinaface | 104Mb
lite::cv::face::detect::RetinaFace | Pytorch_RetinaFace_resnet50-640-640.onnx | ...Retinaface | 104Mb
lite::cv::face::detect::RetinaFace | Pytorch_RetinaFace_resnet50-320-320.onnx | ...Retinaface | 104Mb
lite::cv::face::detect::RetinaFace | Pytorch_RetinaFace_resnet50-720-1080.onnx | ...Retinaface | 104Mb
lite::cv::face::detect::RetinaFace | Pytorch_RetinaFace_mobile0.25.onnx | ...Retinaface | 1.6Mb
lite::cv::face::detect::RetinaFace | Pytorch_RetinaFace_mobile0.25-640-640.onnx | ...Retinaface | 1.6Mb
lite::cv::face::detect::RetinaFace | Pytorch_RetinaFace_mobile0.25-320-320.onnx | ...Retinaface | 1.6Mb
lite::cv::face::detect::RetinaFace | Pytorch_RetinaFace_mobile0.25-720-1080.onnx | ...Retinaface | 1.6Mb
lite::cv::face::detect::FaceBoxes | FaceBoxes.onnx | FaceBoxes | 3.8Mb
lite::cv::face::detect::FaceBoxes | FaceBoxes-640-640.onnx | FaceBoxes | 3.8Mb
lite::cv::face::detect::FaceBoxes | FaceBoxes-320-320.onnx | FaceBoxes | 3.8Mb
lite::cv::face::detect::FaceBoxes | FaceBoxes-720-1080.onnx | FaceBoxes | 3.8Mb
lite::cv::face::detect::SCRFD | scrfd_500m_shape160x160.onnx | SCRFD | 2.5Mb
lite::cv::face::detect::SCRFD | scrfd_500m_shape320x320.onnx | SCRFD | 2.5Mb
lite::cv::face::detect::SCRFD | scrfd_500m_shape640x640.onnx | SCRFD | 2.5Mb
lite::cv::face::detect::SCRFD | scrfd_500m_bnkps_shape160x160.onnx | SCRFD | 2.5Mb
lite::cv::face::detect::SCRFD | scrfd_500m_bnkps_shape320x320.onnx | SCRFD | 2.5Mb
lite::cv::face::detect::SCRFD | scrfd_500m_bnkps_shape640x640.onnx | SCRFD | 2.5Mb
lite::cv::face::detect::SCRFD | scrfd_1g_shape160x160.onnx | SCRFD | 2.7Mb
lite::cv::face::detect::SCRFD | scrfd_1g_shape320x320.onnx | SCRFD | 2.7Mb
lite::cv::face::detect::SCRFD | scrfd_1g_shape640x640.onnx | SCRFD | 2.7Mb
lite::cv::face::detect::SCRFD | scrfd_2.5g_shape160x160.onnx | SCRFD | 3.3Mb
lite::cv::face::detect::SCRFD | scrfd_2.5g_shape320x320.onnx | SCRFD | 3.3Mb
lite::cv::face::detect::SCRFD | scrfd_2.5g_shape640x640.onnx | SCRFD | 3.3Mb
lite::cv::face::detect::SCRFD | scrfd_2.5g_bnkps_shape160x160.onnx | SCRFD | 3.3Mb
lite::cv::face::detect::SCRFD | scrfd_2.5g_bnkps_shape320x320.onnx | SCRFD | 3.3Mb
lite::cv::face::detect::SCRFD | scrfd_2.5g_bnkps_shape640x640.onnx | SCRFD | 3.3Mb
lite::cv::face::detect::SCRFD | scrfd_10g_shape640x640.onnx | SCRFD | 16.9Mb
lite::cv::face::detect::SCRFD | scrfd_10g_shape1280x1280.onnx | SCRFD | 16.9Mb
lite::cv::face::detect::SCRFD | scrfd_10g_bnkps_shape640x640.onnx | SCRFD | 16.9Mb
lite::cv::face::detect::SCRFD | scrfd_10g_bnkps_shape1280x1280.onnx | SCRFD | 16.9Mb
lite::cv::face::detect::YOLO5Face | yolov5face-blazeface-640x640.onnx | YOLO5Face | 3.4Mb
lite::cv::face::detect::YOLO5Face | yolov5face-l-640x640.onnx | YOLO5Face | 181Mb
lite::cv::face::detect::YOLO5Face | yolov5face-m-640x640.onnx | YOLO5Face | 83Mb
lite::cv::face::detect::YOLO5Face | yolov5face-n-0.5-320x320.onnx | YOLO5Face | 2.5Mb
lite::cv::face::detect::YOLO5Face | yolov5face-n-0.5-640x640.onnx | YOLO5Face | 4.6Mb
lite::cv::face::detect::YOLO5Face | yolov5face-n-640x640.onnx | YOLO5Face | 9.5Mb
lite::cv::face::detect::YOLO5Face | yolov5face-s-640x640.onnx | YOLO5Face | 30Mb
lite::cv::face::detect::FaceBoxesV2 | faceboxesv2-640x640.onnx | FaceBoxesV2 | 4.0Mb

Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size
---|---|---|---
lite::cv::face::align::PFLD | pfld-106-lite.onnx | pfld_106_... | 1.0Mb
lite::cv::face::align::PFLD | pfld-106-v3.onnx | pfld_106_... | 5.5Mb
lite::cv::face::align::PFLD | pfld-106-v2.onnx | pfld_106_... | 5.0Mb
lite::cv::face::align::PFLD98 | PFLD-pytorch-pfld.onnx | PFLD... | 4.8Mb
lite::cv::face::align::MobileNetV268 | pytorch_face_landmarks_landmark_detection_56.onnx | ...landmark | 9.4Mb
lite::cv::face::align::MobileNetV2SE68 | pytorch_face_landmarks_landmark_detection_56_se_external.onnx | ...landmark | 11Mb
lite::cv::face::align::PFLD68 | pytorch_face_landmarks_pfld.onnx | ...landmark | 2.8Mb
lite::cv::face::align::FaceLandmarks1000 | FaceLandmark1000.onnx | FaceLandm... | 2.0Mb
lite::cv::face::align::PIPNet98 | pipnet_resnet18_10x98x32x256_wflw.onnx | PIPNet | 44.0Mb
lite::cv::face::align::PIPNet68 | pipnet_resnet18_10x68x32x256_300w.onnx | PIPNet | 44.0Mb
lite::cv::face::align::PIPNet29 | pipnet_resnet18_10x29x32x256_cofw.onnx | PIPNet | 44.0Mb
lite::cv::face::align::PIPNet19 | pipnet_resnet18_10x19x32x256_aflw.onnx | PIPNet | 44.0Mb
lite::cv::face::align::PIPNet98 | pipnet_resnet101_10x98x32x256_wflw.onnx | PIPNet | 150.0Mb
lite::cv::face::align::PIPNet68 | pipnet_resnet101_10x68x32x256_300w.onnx | PIPNet | 150.0Mb
lite::cv::face::align::PIPNet29 | pipnet_resnet101_10x29x32x256_cofw.onnx | PIPNet | 150.0Mb
lite::cv::face::align::PIPNet19 | pipnet_resnet101_10x19x32x256_aflw.onnx | PIPNet | 150.0Mb

Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size
---|---|---|---
lite::cv::face::attr::AgeGoogleNet | age_googlenet.onnx | onnx-models | 23Mb
lite::cv::face::attr::GenderGoogleNet | gender_googlenet.onnx | onnx-models | 23Mb
lite::cv::face::attr::EmotionFerPlus | emotion-ferplus-7.onnx | onnx-models | 33Mb
lite::cv::face::attr::EmotionFerPlus | emotion-ferplus-8.onnx | onnx-models | 33Mb
lite::cv::face::attr::VGG16Age | vgg_ilsvrc_16_age_imdb_wiki.onnx | onnx-models | 514Mb
lite::cv::face::attr::VGG16Age | vgg_ilsvrc_16_age_chalearn_iccv2015.onnx | onnx-models | 514Mb
lite::cv::face::attr::VGG16Gender | vgg_ilsvrc_16_gender_imdb_wiki.onnx | onnx-models | 512Mb
lite::cv::face::attr::SSRNet | ssrnet.onnx | SSR_Net... | 190Kb
lite::cv::face::attr::EfficientEmotion7 | face-emotion-recognition-enet_b0_7.onnx | face-emo... | 15Mb
lite::cv::face::attr::EfficientEmotion8 | face-emotion-recognition-enet_b0_8_best_afew.onnx | face-emo... | 15Mb
lite::cv::face::attr::EfficientEmotion8 | face-emotion-recognition-enet_b0_8_best_vgaf.onnx | face-emo... | 15Mb
lite::cv::face::attr::MobileEmotion7 | face-emotion-recognition-mobilenet_7.onnx | face-emo... | 13Mb
lite::cv::face::attr::ReXNetEmotion7 | face-emotion-recognition-affectnet_7_vggface2_rexnet150.onnx | face-emo... | 30Mb

Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size
---|---|---|---
lite::cv::faceid::GlintArcFace | ms1mv3_arcface_r100.onnx | insightface | 248Mb
lite::cv::faceid::GlintArcFace | ms1mv3_arcface_r50.onnx | insightface | 166Mb
lite::cv::faceid::GlintArcFace | ms1mv3_arcface_r34.onnx | insightface | 130Mb
lite::cv::faceid::GlintArcFace | ms1mv3_arcface_r18.onnx | insightface | 91Mb
lite::cv::faceid::GlintCosFace | glint360k_cosface_r100.onnx | insightface | 248Mb
lite::cv::faceid::GlintCosFace | glint360k_cosface_r50.onnx | insightface | 166Mb
lite::cv::faceid::GlintCosFace | glint360k_cosface_r34.onnx | insightface | 130Mb
lite::cv::faceid::GlintCosFace | glint360k_cosface_r18.onnx | insightface | 91Mb
lite::cv::faceid::GlintPartialFC | partial_fc_glint360k_r100.onnx | insightface | 248Mb
lite::cv::faceid::GlintPartialFC | partial_fc_glint360k_r50.onnx | insightface | 91Mb
lite::cv::faceid::FaceNet | facenet_vggface2_resnet.onnx | facenet... | 89Mb
lite::cv::faceid::FaceNet | facenet_casia-webface_resnet.onnx | facenet... | 89Mb
lite::cv::faceid::FocalArcFace | focal-arcface-ms1m-ir152.onnx | face.evoLVe... | 269Mb
lite::cv::faceid::FocalArcFace | focal-arcface-ms1m-ir50-epoch120.onnx | face.evoLVe... | 166Mb
lite::cv::faceid::FocalArcFace | focal-arcface-ms1m-ir50-epoch63.onnx | face.evoLVe... | 166Mb
lite::cv::faceid::FocalAsiaArcFace | focal-arcface-bh-ir50-asia.onnx | face.evoLVe... | 166Mb
lite::cv::faceid::TencentCurricularFace | Tencent_CurricularFace_Backbone.onnx | TFace | 249Mb
lite::cv::faceid::TencentCifpFace | Tencent_Cifp_BUPT_Balancedface_IR_34.onnx | TFace | 130Mb
lite::cv::faceid::CenterLossFace | CenterLossFace_epoch_100.onnx | center-loss... | 280Mb
lite::cv::faceid::SphereFace | sphere20a_20171020.onnx | sphere... | 86Mb
lite::cv::faceid::PoseRobustFace | dream_cfp_res50_end2end.onnx | DREAM | 92Mb
lite::cv::faceid::PoseRobustFace | dream_ijba_res18_end2end.onnx | DREAM | 43Mb
lite::cv::faceid::NaivePoseRobustFace | dream_cfp_res50_naive.onnx | DREAM | 91Mb
lite::cv::faceid::NaivePoseRobustFace | dream_ijba_res18_naive.onnx | DREAM | 43Mb
lite::cv::faceid::MobileFaceNet | MobileFaceNet_Pytorch_068.onnx | MobileFace... | 3.8Mb
lite::cv::faceid::CavaGhostArcFace | cavaface_GhostNet_x1.3_Arcface_Epoch_24.onnx | cavaface... | 15Mb
lite::cv::faceid::CavaCombinedFace | cavaface_IR_SE_100_Combined_Epoch_24.onnx | cavaface... | 250Mb
lite::cv::faceid::MobileSEFocalFace | face_recognition.pytorch_Mobilenet_se_focal_121000.onnx | face_recog... | 4.5Mb

Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size
---|---|---|---
lite::cv::face::pose::FSANet | fsanet-var.onnx | ...fsanet... | 1.2Mb
lite::cv::face::pose::FSANet | fsanet-1x1.onnx | ...fsanet... | 1.2Mb

Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size
---|---|---|---
lite::cv::segmentation::DeepLabV3ResNet101 | deeplabv3_resnet101_coco.onnx | torchvision | 232Mb
lite::cv::segmentation::FCNResNet101 | fcn_resnet101.onnx | torchvision | 207Mb

Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size
---|---|---|---
lite::cv::style::FastStyleTransfer | style-mosaic-8.onnx | onnx-models | 6.4Mb
lite::cv::style::FastStyleTransfer | style-candy-9.onnx | onnx-models | 6.4Mb
lite::cv::style::FastStyleTransfer | style-udnie-8.onnx | onnx-models | 6.4Mb
lite::cv::style::FastStyleTransfer | style-udnie-9.onnx | onnx-models | 6.4Mb
lite::cv::style::FastStyleTransfer | style-pointilism-8.onnx | onnx-models | 6.4Mb
lite::cv::style::FastStyleTransfer | style-pointilism-9.onnx | onnx-models | 6.4Mb
lite::cv::style::FastStyleTransfer | style-rain-princess-9.onnx | onnx-models | 6.4Mb
lite::cv::style::FastStyleTransfer | style-rain-princess-8.onnx | onnx-models | 6.4Mb
lite::cv::style::FastStyleTransfer | style-candy-8.onnx | onnx-models | 6.4Mb
lite::cv::style::FastStyleTransfer | style-mosaic-9.onnx | onnx-models | 6.4Mb

Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size
---|---|---|---
lite::cv::colorization::Colorizer | eccv16-colorizer.onnx | colorization | 123Mb
lite::cv::colorization::Colorizer | siggraph17-colorizer.onnx | colorization | 129Mb

Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size
---|---|---|---
lite::cv::resolution::SubPixelCNN | subpixel-cnn.onnx | ...PIXEL... | 234Kb

Class | Pretrained ONNX Files | Rename or Converted From (Repo) | Size
---|---|---|---
lite::cv::matting::RobustVideoMatting | rvm_mobilenetv3_fp32.onnx | RobustVideoMatting | 14Mb
lite::cv::matting::RobustVideoMatting | rvm_mobilenetv3_fp16.onnx | RobustVideoMatting | 7.2Mb
lite::cv::matting::RobustVideoMatting | rvm_resnet50_fp32.onnx | RobustVideoMatting | 100Mb
lite::cv::matting::RobustVideoMatting | rvm_resnet50_fp16.onnx | RobustVideoMatting | 50Mb
lite::cv::matting::MGMatting | MGMatting-DIM-100k.onnx | MGMatting | 113Mb
lite::cv::matting::MGMatting | MGMatting-RWP-100k.onnx | MGMatting | 113Mb
lite::cv::matting::MODNet | modnet_photographic_portrait_matting-1024x1024.onnx | MODNet | 24Mb
lite::cv::matting::MODNet | modnet_photographic_portrait_matting-1024x512.onnx | MODNet | 24Mb
lite::cv::matting::MODNet | modnet_photographic_portrait_matting-256x256.onnx | MODNet | 24Mb
lite::cv::matting::MODNet | modnet_photographic_portrait_matting-256x512.onnx | MODNet | 24Mb
lite::cv::matting::MODNet | modnet_photographic_portrait_matting-512x1024.onnx | MODNet | 24Mb
lite::cv::matting::MODNet | modnet_photographic_portrait_matting-512x256.onnx | MODNet | 24Mb
lite::cv::matting::MODNet | modnet_photographic_portrait_matting-512x512.onnx | MODNet | 24Mb
lite::cv::matting::MODNet | modnet_webcam_portrait_matting-1024x1024.onnx | MODNet | 24Mb
lite::cv::matting::MODNet | modnet_webcam_portrait_matting-1024x512.onnx | MODNet | 24Mb
lite::cv::matting::MODNet | modnet_webcam_portrait_matting-256x256.onnx | MODNet | 24Mb
lite::cv::matting::MODNet | modnet_webcam_portrait_matting-256x512.onnx | MODNet | 24Mb
lite::cv::matting::MODNet | modnet_webcam_portrait_matting-512x1024.onnx | MODNet | 24Mb
lite::cv::matting::MODNet | modnet_webcam_portrait_matting-512x256.onnx | MODNet | 24Mb
lite::cv::matting::MODNet | modnet_webcam_portrait_matting-512x512.onnx | MODNet | 24Mb
lite::cv::matting::MODNetDyn | modnet_photographic_portrait_matting.onnx | MODNet | 24Mb
lite::cv::matting::MODNetDyn | modnet_webcam_portrait_matting.onnx | MODNet | 24Mb
lite::cv::matting::BackgroundMattingV2 | BGMv2_mobilenetv2-256x256-full.onnx | BackgroundMattingV2 | 20Mb
lite::cv::matting::BackgroundMattingV2 | BGMv2_mobilenetv2-512x512-full.onnx | BackgroundMattingV2 | 20Mb
lite::cv::matting::BackgroundMattingV2 | BGMv2_mobilenetv2-1080x1920-full.onnx | BackgroundMattingV2 | 20Mb
lite::cv::matting::BackgroundMattingV2 | BGMv2_mobilenetv2-2160x3840-full.onnx | BackgroundMattingV2 | 20Mb
lite::cv::matting::BackgroundMattingV2 | BGMv2_resnet50-1080x1920-full.onnx | BackgroundMattingV2 | 20Mb
lite::cv::matting::BackgroundMattingV2 | BGMv2_resnet50-2160x3840-full.onnx | BackgroundMattingV2 | 20Mb
lite::cv::matting::BackgroundMattingV2 | BGMv2_resnet101-2160x3840-full.onnx | BackgroundMattingV2 | 154Mb
lite::cv::matting::BackgroundMattingV2Dyn | BGMv2_mobilenetv2_4k_dynamic.onnx | BackgroundMattingV2 | 157Mb
lite::cv::matting::BackgroundMattingV2Dyn | BGMv2_mobilenetv2_hd_dynamic.onnx | BackgroundMattingV2 | 230Mb

More examples can be found at examples.
Example0: Object Detection using YoloV5. Download model from Model-Zoo2.
```c++
#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);

  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete yolov5;
}
```
The output is:
Or you can use the newest 🔥 YOLO-series detectors, YOLOX or YoloR, which produce similar results.
More classes for general object detection (80 classes, COCO).
```c++
auto *detector = new lite::cv::detection::YoloX(onnx_path);  // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YoloV4(onnx_path);
auto *detector = new lite::cv::detection::YoloV3(onnx_path);
auto *detector = new lite::cv::detection::TinyYoloV3(onnx_path);
auto *detector = new lite::cv::detection::SSD(onnx_path);
auto *detector = new lite::cv::detection::YoloV5(onnx_path);
auto *detector = new lite::cv::detection::YoloR(onnx_path);  // Newest YOLO detector !!! 2021-05
auto *detector = new lite::cv::detection::TinyYoloV4VOC(onnx_path);
auto *detector = new lite::cv::detection::TinyYoloV4COCO(onnx_path);
auto *detector = new lite::cv::detection::ScaledYoloV4(onnx_path);
auto *detector = new lite::cv::detection::EfficientDet(onnx_path);
auto *detector = new lite::cv::detection::EfficientDetD7(onnx_path);
auto *detector = new lite::cv::detection::EfficientDetD8(onnx_path);
auto *detector = new lite::cv::detection::YOLOP(onnx_path);
auto *detector = new lite::cv::detection::NanoDet(onnx_path);      // Super fast and tiny!
auto *detector = new lite::cv::detection::NanoDetPlus(onnx_path);  // Super fast and tiny! 2021/12/25
auto *detector = new lite::cv::detection::NanoDetEfficientNetLite(onnx_path);  // Super fast and tiny!
```
Example1: Video Matting using RobustVideoMatting (2021) 🔥🔥🔥. Download model from Model-Zoo2.
```c++
#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
  std::string video_path = "../../../examples/lite/resources/test_lite_rvm_0.mp4";
  std::string output_path = "../../../logs/test_lite_rvm_0.mp4";
  std::string background_path = "../../../examples/lite/resources/test_lite_matting_bgr.jpg";

  auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16); // 16 threads
  std::vector<lite::types::MattingContent> contents;

  // 1. video matting.
  cv::Mat background = cv::imread(background_path);
  rvm->detect_video(video_path, output_path, contents, false, 0.4f,
                    20, true, true, background);

  delete rvm;
}
```
The output is:
More classes for matting (image matting, video matting, trimap/mask-free, trimap/mask-based)
```c++
auto *matting = new lite::cv::matting::RobustVideoMatting(onnx_path);     // WACV 2022.
auto *matting = new lite::cv::matting::MGMatting(onnx_path);              // CVPR 2021
auto *matting = new lite::cv::matting::MODNet(onnx_path);                 // AAAI 2022
auto *matting = new lite::cv::matting::MODNetDyn(onnx_path);              // AAAI 2022 Dynamic Shape Inference.
auto *matting = new lite::cv::matting::BackgroundMattingV2(onnx_path);    // CVPR 2020
auto *matting = new lite::cv::matting::BackgroundMattingV2Dyn(onnx_path); // CVPR 2020 Dynamic Shape Inference.
```
Example2: 1000 Facial Landmarks Detection using FaceLandmarks1000. Download model from Model-Zoo2.
```c++
#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";

  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);

  delete face_landmarks_1000;
}
```
The output is:
More classes for face alignment (68 points, 98 points, 106 points, 1000 points)
```c++
auto *align = new lite::cv::face::align::PFLD(onnx_path);             // 106 landmarks, 1.0Mb only!
auto *align = new lite::cv::face::align::PFLD98(onnx_path);           // 98 landmarks, 4.8Mb only!
auto *align = new lite::cv::face::align::PFLD68(onnx_path);           // 68 landmarks, 2.8Mb only!
auto *align = new lite::cv::face::align::MobileNetV268(onnx_path);    // 68 landmarks, 9.4Mb only!
auto *align = new lite::cv::face::align::MobileNetV2SE68(onnx_path);  // 68 landmarks, 11Mb only!
auto *align = new lite::cv::face::align::FaceLandmark1000(onnx_path); // 1000 landmarks, 2.0Mb only!
auto *align = new lite::cv::face::align::PIPNet98(onnx_path);         // 98 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet68(onnx_path);         // 68 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet29(onnx_path);         // 29 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet19(onnx_path);         // 19 landmarks, CVPR2021!
```
Example3: Colorization using colorization. Download model from Model-Zoo2.
```c++
#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/eccv16-colorizer.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_eccv16_colorizer_1.jpg";

  auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);

  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::ColorizeContent colorize_content;
  colorizer->detect(img_bgr, colorize_content);

  if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);

  delete colorizer;
}
```
The output is:
More classes for colorization (gray to rgb)
```c++
auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
```
Example4: Face Recognition using GlintArcFace. Download model from Model-Zoo2.
```c++
#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";
  std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
  std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
  std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";

  auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);

  lite::types::FaceContent face_content0, face_content1, face_content2;
  cv::Mat img_bgr0 = cv::imread(test_img_path0);
  cv::Mat img_bgr1 = cv::imread(test_img_path1);
  cv::Mat img_bgr2 = cv::imread(test_img_path2);
  glint_arcface->detect(img_bgr0, face_content0);
  glint_arcface->detect(img_bgr1, face_content1);
  glint_arcface->detect(img_bgr2, face_content2);

  if (face_content0.flag && face_content1.flag && face_content2.flag)
  {
    float sim01 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content1.embedding);
    float sim02 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content2.embedding);
    std::cout << "Detected Sim01: " << sim01 << " Sim02: " << sim02 << std::endl;
  }

  delete glint_arcface;
}
```
The output is:
```shell
Detected Sim01: 0.721159 Sim02: -0.0626267
```
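For reference, the cosine similarity reported above is simply the dot product of the two embedding vectors divided by the product of their norms. A minimal standalone sketch (an illustration of the math, not the toolkit's actual `lite::utils::math::cosine_similarity` implementation) looks like this:

```c++
#include <cmath>
#include <vector>

// cosine similarity between two equal-length embedding vectors:
// dot(a, b) / (||a|| * ||b||); returns a value in [-1, 1].
static float cosine_similarity_sketch(const std::vector<float> &a,
                                      const std::vector<float> &b)
{
  float dot = 0.f, norm_a = 0.f, norm_b = 0.f;
  for (size_t i = 0; i < a.size() && i < b.size(); ++i)
  {
    dot += a[i] * b[i];
    norm_a += a[i] * a[i];
    norm_b += b[i] * b[i];
  }
  // small epsilon guards against division by zero for degenerate inputs
  return dot / (std::sqrt(norm_a) * std::sqrt(norm_b) + 1e-12f);
}
```

Values close to 1 indicate the same identity, while values near 0 or below indicate different identities, which matches the Sim01/Sim02 output above.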
More classes for face recognition (face id vector extract)
```c++
auto *recognition = new lite::cv::faceid::GlintCosFace(onnx_path);           // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintArcFace(onnx_path);           // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintPartialFC(onnx_path);         // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::FaceNet(onnx_path);
auto *recognition = new lite::cv::faceid::FocalArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::FocalAsiaArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::TencentCurricularFace(onnx_path);  // Tencent(TFace)
auto *recognition = new lite::cv::faceid::TencentCifpFace(onnx_path);        // Tencent(TFace)
auto *recognition = new lite::cv::faceid::CenterLossFace(onnx_path);
auto *recognition = new lite::cv::faceid::SphereFace(onnx_path);
auto *recognition = new lite::cv::faceid::PoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::NaivePoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileFaceNet(onnx_path);          // 3.8Mb only !
auto *recognition = new lite::cv::faceid::CavaGhostArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::CavaCombinedFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileSEFocalFace(onnx_path);      // 4.5Mb only !
```
Example5: Face Detection using SCRFD 2021. Download model from Model-Zoo2.
```c++
#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/scrfd_2.5g_bnkps_shape640x640.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_detector.jpg";
  std::string save_img_path = "../../../logs/test_lite_scrfd.jpg";

  auto *scrfd = new lite::cv::face::detect::SCRFD(onnx_path);

  std::vector<lite::types::BoxfWithLandmarks> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  scrfd->detect(img_bgr, detected_boxes);

  lite::utils::draw_boxes_with_landmarks_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  std::cout << "Default Version Done! Detected Face Num: " << detected_boxes.size() << std::endl;

  delete scrfd;
}
```
The output is:
More classes for face detection (super fast face detection)
```c++
auto *detector = new lite::cv::face::detect::UltraFace(onnx_path);   // 1.1Mb only !
auto *detector = new lite::cv::face::detect::FaceBoxes(onnx_path);   // 3.8Mb only !
auto *detector = new lite::cv::face::detect::FaceBoxesV2(onnx_path); // 4.0Mb only !
auto *detector = new lite::cv::face::detect::RetinaFace(onnx_path);  // 1.6Mb only ! CVPR2020
auto *detector = new lite::cv::face::detect::SCRFD(onnx_path);       // 2.5Mb only ! CVPR2021, Super fast and accurate!!
auto *detector = new lite::cv::face::detect::YOLO5Face(onnx_path);   // 2021, Super fast and accurate!!
```
Example6: Segmentation using DeepLabV3ResNet101. Download model from Model-Zoo2.
```c++
#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/deeplabv3_resnet101_coco.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_deeplabv3_resnet101.png";
  std::string save_img_path = "../../../logs/test_lite_deeplabv3_resnet101.jpg";

  auto *deeplabv3_resnet101 = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path, 16); // 16 threads

  lite::types::SegmentContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  deeplabv3_resnet101->detect(img_bgr, content);

  if (content.flag)
  {
    cv::Mat out_img;
    cv::addWeighted(img_bgr, 0.2, content.color_mat, 0.8, 0., out_img);
    cv::imwrite(save_img_path, out_img);
    if (!content.names_map.empty())
    {
      for (auto it = content.names_map.begin(); it != content.names_map.end(); ++it)
      {
        std::cout << it->first << " Name: " << it->second << std::endl;
      }
    }
  }

  delete deeplabv3_resnet101;
}
```
The output is:
More classes for segmentation (human segmentation, instance segmentation)
```c++
auto *segment = new lite::cv::segmentation::FCNResNet101(onnx_path);
auto *segment = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path);
```
Example7: Age Estimation using SSRNet. Download model from Model-Zoo2.
```c++
#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ssrnet.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ssrnet.jpg";
  std::string save_img_path = "../../../logs/test_lite_ssrnet.jpg";

  auto *ssrnet = new lite::cv::face::attr::SSRNet(onnx_path);

  lite::types::Age age;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ssrnet->detect(img_bgr, age);
  lite::utils::draw_age_inplace(img_bgr, age);
  cv::imwrite(save_img_path, img_bgr);

  std::cout << "Default Version Done! Detected SSRNet Age: " << age.age << std::endl;

  delete ssrnet;
}
```
The output is:
More classes for face attributes analysis (age, gender, emotion)
```c++
auto *attribute = new lite::cv::face::attr::AgeGoogleNet(onnx_path);
auto *attribute = new lite::cv::face::attr::GenderGoogleNet(onnx_path);
auto *attribute = new lite::cv::face::attr::EmotionFerPlus(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Age(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Gender(onnx_path);
auto *attribute = new lite::cv::face::attr::EfficientEmotion7(onnx_path); // 7 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::EfficientEmotion8(onnx_path); // 8 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::MobileEmotion7(onnx_path);    // 7 emotions, 13Mb only!
auto *attribute = new lite::cv::face::attr::ReXNetEmotion7(onnx_path);    // 7 emotions
auto *attribute = new lite::cv::face::attr::SSRNet(onnx_path);            // age estimation, 190kb only!!!
```
Example8: 1000 Classes Classification using DenseNet. Download model from Model-Zoo2.
```c++
#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/densenet121.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_densenet.jpg";

  auto *densenet = new lite::cv::classification::DenseNet(onnx_path);

  lite::types::ImageNetContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  densenet->detect(img_bgr, content);

  if (content.flag)
  {
    const unsigned int top_k = content.scores.size();
    if (top_k > 0)
    {
      for (unsigned int i = 0; i < top_k; ++i)
        std::cout << i + 1
                  << ": " << content.labels.at(i)
                  << ": " << content.texts.at(i)
                  << ": " << content.scores.at(i)
                  << std::endl;
    }
  }

  delete densenet;
}
```
The output is:
More classes for image classification (1000 classes)
```c++
auto *classifier = new lite::cv::classification::EfficientNetLite4(onnx_path);
auto *classifier = new lite::cv::classification::ShuffleNetV2(onnx_path); // 8.7Mb only!
auto *classifier = new lite::cv::classification::GhostNet(onnx_path);
auto *classifier = new lite::cv::classification::HdrDNet(onnx_path);
auto *classifier = new lite::cv::classification::IBNNet(onnx_path);
auto *classifier = new lite::cv::classification::MobileNetV2(onnx_path);  // 13Mb only!
auto *classifier = new lite::cv::classification::ResNet(onnx_path);
auto *classifier = new lite::cv::classification::ResNeXt(onnx_path);
```
Example9: Head Pose Estimation using FSANet. Download model from Model-Zoo2.
```c++
#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/fsanet-var.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fsanet.jpg";
  std::string save_img_path = "../../../logs/test_lite_fsanet.jpg";

  auto *fsanet = new lite::cv::face::pose::FSANet(onnx_path);

  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::EulerAngles euler_angles;
  fsanet->detect(img_bgr, euler_angles);

  if (euler_angles.flag)
  {
    lite::utils::draw_axis_inplace(img_bgr, euler_angles);
    cv::imwrite(save_img_path, img_bgr);
    std::cout << "yaw:" << euler_angles.yaw
              << " pitch:" << euler_angles.pitch
              << " roll:" << euler_angles.roll << std::endl;
  }

  delete fsanet;
}
```
The output is:
More classes for head pose estimation (euler angle, yaw, pitch, roll)
```c++
auto *pose = new lite::cv::face::pose::FSANet(onnx_path); // 1.2Mb only!
```
Example10: Style Transfer using FastStyleTransfer. Download model from Model-Zoo2.
```c++
#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/style-candy-8.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fast_style_transfer.jpg";
  std::string save_img_path = "../../../logs/test_lite_fast_style_transfer_candy.jpg";

  auto *fast_style_transfer = new lite::cv::style::FastStyleTransfer(onnx_path);

  lite::types::StyleContent style_content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  fast_style_transfer->detect(img_bgr, style_content);

  if (style_content.flag) cv::imwrite(save_img_path, style_content.mat);

  delete fast_style_transfer;
}
```
The output is:
More classes for style transfer (neural style transfer, others)
```c++
auto *transfer = new lite::cv::style::FastStyleTransfer(onnx_path); // 6.4Mb only
```
The code of Lite.Ai.ToolKit is released under the GPL-3.0 License.
Many thanks to the following projects. All of Lite.Ai.ToolKit's models are sourced from these repos.
- RobustVideoMatting (🔥🔥🔥 new!)
- nanodet (🔥🔥🔥)
- YOLOX (🔥🔥🔥 new!)
- YOLOP (🔥🔥 new!)
- YOLOR (🔥🔥 new!)
- ScaledYOLOv4 (🔥🔥🔥)
- insightface (🔥🔥🔥)
- yolov5 (🔥🔥🔥)
- TFace (🔥🔥)
- YOLOv4-pytorch (🔥🔥🔥)
- Ultra-Light-Fast-Generic-Face-Detector-1MB (🔥🔥🔥)
Expand for More References.
- headpose-fsanet-pytorch (🔥)
- pfld_106_face_landmarks (🔥🔥)
- onnx-models (🔥🔥🔥)
- SSR_Net_Pytorch (🔥)
- colorization (🔥🔥🔥)
- SUB_PIXEL_CNN (🔥)
- torchvision (🔥🔥🔥)
- facenet-pytorch (🔥)
- face.evoLVe.PyTorch (🔥🔥🔥)
- center-loss.pytorch (🔥🔥)
- sphereface_pytorch (🔥🔥)
- DREAM (🔥🔥)
- MobileFaceNet_Pytorch (🔥🔥)
- cavaface.pytorch (🔥🔥)
- CurricularFace (🔥🔥)
- face-emotion-recognition (🔥)
- face_recognition.pytorch (🔥🔥)
- PFLD-pytorch (🔥🔥)
- pytorch_face_landmark (🔥🔥)
- FaceLandmark1000 (🔥🔥)
- Pytorch_Retinaface (🔥🔥🔥)
- FaceBoxes (🔥🔥)
In addition, MNN, NCNN, and TNN support for some models will be added in the future, but due to operator compatibility and other reasons, there is no guarantee that every model supported by the ONNXRuntime C++ backend will also run on MNN, NCNN, and TNN. So, if you want to use all the models supported by this repo and don't mind a performance gap of 1~2 ms, just keep ONNXRuntime as the default inference engine. However, you can follow the steps below if you want to build with MNN, NCNN, or TNN support.
- change the `build.sh` flags to `-DENABLE_MNN=ON`, `-DENABLE_NCNN=ON` or `-DENABLE_TNN=ON`, such as
```shell
cd build && cmake \
  -DCMAKE_BUILD_TYPE=MinSizeRel \
  -DINCLUDE_OPENCV=ON \
  -DENABLE_MNN=ON \
  -DENABLE_NCNN=OFF \
  -DENABLE_TNN=OFF \
  .. && make -j8
# INCLUDE_OPENCV: whether to package OpenCV into lite.ai.toolkit, default ON; otherwise, you need to set up OpenCV yourself.
# ENABLE_MNN / ENABLE_NCNN / ENABLE_TNN: whether to build with MNN / NCNN / TNN, default OFF; only some models are supported now.
```
- use the MNN, NCNN or TNN version of the interface; see demo, such as
```c++
auto *nanodet = new lite::mnn::cv::detection::NanoDet(mnn_path);
auto *nanodet = new lite::tnn::cv::detection::NanoDet(proto_path, model_path);
auto *nanodet = new lite::ncnn::cv::detection::NanoDet(param_path, bin_path);
```
How to add your own models and become a contributor? For specific steps, please refer to CONTRIBUTING.zh.md. And if you like this project, please consider ❤️ starring this repo; it is the simplest way to support me.
Many thanks to these contributors: