
videoprocessingframework's Introduction

VideoProcessingFramework

VPF stands for Video Processing Framework. It is a set of C++ libraries and Python bindings that provides full hardware acceleration for video processing tasks such as decoding, encoding, transcoding, and GPU-accelerated color space and pixel format conversions.

VPF also supports exporting GPU memory objects such as decoded video frames to PyTorch tensors without Host to Device copies.
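The zero-copy export mentioned above can be sketched as follows. This is a sketch based on the VPF samples, not a definitive implementation: the `PyNvCodec`/`PytorchNvCodec` call names (`PyNvDecoder`, `PySurfaceConverter`, `makefromDevicePtrUint8`) follow the sample code shipped with the repository, and the imports are kept inside the function so the sketch loads even on machines without a GPU.

```python
def decode_to_tensor(enc_file: str, gpu_id: int = 0):
    """Decode one frame and wrap it as a CUDA torch tensor without a
    Device-to-Host copy. Sketch only; requires a CUDA GPU, VPF, and the
    optional PytorchNvCodec extension (API names assumed from VPF samples).
    """
    import PyNvCodec as nvc          # VPF Python bindings
    import PytorchNvCodec as pnvc    # optional Torch extension

    nv_dec = nvc.PyNvDecoder(enc_file, gpu_id)
    # convert decoded NV12 surfaces to interleaved RGB on the GPU
    to_rgb = nvc.PySurfaceConverter(nv_dec.Width(), nv_dec.Height(),
                                    nvc.PixelFormat.NV12,
                                    nvc.PixelFormat.RGB, gpu_id)
    cc = nvc.ColorspaceConversionContext(nvc.ColorSpace.BT_601,
                                         nvc.ColorRange.MPEG)

    surf = nv_dec.DecodeSingleSurface()
    if surf.Empty():
        return None                  # end of stream
    rgb = to_rgb.Execute(surf, cc)
    plane = rgb.PlanePtr()
    # the tensor shares the surface's GPU memory, no host round-trip
    return pnvc.makefromDevicePtrUint8(plane.GpuMem(), plane.Width(),
                                       plane.Height(), plane.Pitch(),
                                       plane.ElemSize())
```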

Prerequisites

VPF works on Linux (Ubuntu 20.04 and Ubuntu 22.04 only) and Windows.

  • NVIDIA display driver: 525.xx.xx or above

  • CUDA Toolkit 11.2 or above

    • The CUDA Toolkit has a driver bundled with it, e.g. CUDA Toolkit 12.0 ships driver 530.xx.xx. During installation of the CUDA Toolkit you can choose to install or skip the bundled driver. Please choose the appropriate option.
  • FFMPEG

    • Compile FFMPEG with shared libraries
    • or download pre-compiled binaries from a source you trust.
      • During VPF’s “pip install” (mentioned in the sections below) you need to provide the path to the directory where FFMPEG is installed.
    • or you could install system FFMPEG packages (e.g. apt install libavfilter-dev libavformat-dev libavcodec-dev libswresample-dev libavutil-dev on Ubuntu)
  • Python 3 or later

  • Install a C++ toolchain either via Visual Studio or Tools for Visual Studio.

    • Recommended version is Visual Studio 2017 and above (Windows only)
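If you use a custom FFMPEG build on Linux, the install path can be passed to the build the same way the Windows section below does. This is an assumption that the `TC_FFMPEG_ROOT` CMake option from the Windows instructions works identically on Linux; the path is hypothetical.

```shell
# hypothetical FFMPEG prefix; it must contain bin/, include/ and lib/
export SKBUILD_CONFIGURE_OPTIONS="-DTC_FFMPEG_ROOT=/opt/ffmpeg"
pip3 install .
```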

Linux

We recommend Ubuntu 20.04 as it comes with recent enough FFmpeg system packages. If you want to build FFmpeg from source, you can follow https://docs.nvidia.com/video-technologies/video-codec-sdk/12.0/ffmpeg-with-nvidia-gpu/index.html

# Install dependencies
apt install -y \
          libavfilter-dev \
          libavformat-dev \
          libavcodec-dev \
          libswresample-dev \
          libavutil-dev \
          wget \
          build-essential \
          git

# Install CUDA Toolkit (if not already present)
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
sudo apt-get install -y cuda
# Ensure nvcc is on your $PATH (most commonly already done by the CUDA installation)
export PATH=/usr/local/cuda/bin:$PATH

# Install VPF
pip3 install git+https://github.com/NVIDIA/VideoProcessingFramework
# or if you cloned this repository
pip3 install .

To check whether VPF is correctly installed, run the following Python script:

import PyNvCodec
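A slightly more informative check is sketched below. It builds only on `GetNumGpus()`, which is part of the PyNvCodec API, and degrades gracefully when the module is missing, so it can be dropped into any environment.

```python
def check_vpf() -> bool:
    """Return True if PyNvCodec imports and at least one GPU is visible."""
    try:
        import PyNvCodec as nvc
    except ImportError as err:
        print(f"VPF is not installed correctly: {err}")
        return False
    num_gpus = nvc.GetNumGpus()
    print(f"GPUs visible to VPF: {num_gpus}")
    return num_gpus > 0

check_vpf()
```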

If using Docker via Nvidia Container Runtime, please make sure to enable the video driver capability: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/user-guide.html#driver-capabilities via the NVIDIA_DRIVER_CAPABILITIES environment variable in the container or the --gpus command line parameter (e.g. docker run -it --rm --gpus 'all,"capabilities=compute,utility,video"' nvidia/cuda:12.1.0-base-ubuntu22.04).

Please note that some examples have additional dependencies that need to be installed via pip (pip install .[samples]). Samples using PyTorch will require an optional extension which can be installed via

pip install src/PytorchNvCodec  # install Torch extension if needed (optional), requires "torch" to be installed before

After resolving those you should be able to run make run_samples_without_docker using your local pip installation.

Windows

# Indicate path to your FFMPEG installation (with subfolders `bin` with DLLs, `include`, `lib`)
$env:SKBUILD_CONFIGURE_OPTIONS="-DTC_FFMPEG_ROOT=C:/path/to/your/ffmpeg/installation/ffmpeg/" 
pip install .

To check whether VPF is correctly installed, run the following Python script:

import PyNvCodec

Please note that some examples have additional dependencies (pip install .[samples]) that need to be installed via pip. Samples using PyTorch will require an optional extension which can be installed via

pip install src/PytorchNvCodec  # install Torch extension if needed (optional), requires "torch" to be installed before

Docker

For convenience, we provide Docker images located in the docker directory that you can use to easily install all dependencies for the samples (docker and nvidia-docker are required):

DOCKER_BUILDKIT=1 docker build \
                --tag vpf-gpu \
                -f docker/Dockerfile \
                --build-arg PIP_INSTALL_EXTRAS=torch \
                .
docker run -it --rm --gpus=all vpf-gpu

PIP_INSTALL_EXTRAS can be any subset listed under project.optional-dependencies in pyproject.toml.
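For example, assuming `samples` is one of the extras defined under project.optional-dependencies (it is referenced by the `pip install .[samples]` note above), the image can be built with that extra instead:

```shell
# build with the "samples" extra instead of "torch" (extra name assumed
# to exist in pyproject.toml)
DOCKER_BUILDKIT=1 docker build \
                --tag vpf-gpu-samples \
                -f docker/Dockerfile \
                --build-arg PIP_INSTALL_EXTRAS=samples \
                .
```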

Documentation

Documentation for the Video Processing Framework can be generated from this repository:

pip install . # install Video Processing Framework
pip install src/PytorchNvCodec  # install Torch extension if needed (optional), requires "torch" to be installed before
pip install sphinx  # install documentation tool sphinx
cd docs
make html

You can then open _build/html/index.html with your browser.

Community Support

If you did not find the information you need or if you have further questions or problems, you are very welcome to join the developer community at NVIDIA. We have dedicated categories covering diverse topics related to video processing and codecs.

The forums are also a place where we would be happy to hear about how you made use of VPF in your project.


videoprocessingframework's Issues

ValueError: No AVFormatContext provided.

I built the project successfully but still hit some problems. I wonder whether there is an incompatible-version problem in the build environment below. If not, how can I fix the current problem? (Already tried both mp4 and mov format videos) 😄

sys info

PLATFORM: ubuntu18.04
CUDA Version: 10.1
Driver Version: 418.40.04
GPU: 1080ti
Video_Codec_SDK_9.0.20
ffmpeg 4.1.4

ipython code

In [1]: import PyNvCodec

In [2]: nvce = PyNvCodec.PyNvDecoder("/home/wangyulong/13923_screenrecording20191123at4.39.26pm.mov",
   ...: 0)
Decoding on GPU 0
[-----] Can't open /home/wangyulong/13923_screenrecording20191123at4.39.26pm.mov: Invalid data found when processing input

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-2-5a02a40c26e3> in <module>
----> 1 nvce = PyNvCodec.PyNvDecoder("/home/wangyulong/13923_screenrecording20191123at4.39.26pm.mov", 0)

ValueError: No AVFormatContext provided.

In [3]: PyNvCodec.GetNumGpus()
Out[3]: 1

compile

cmake .. \
  -DVIDEO_CODEC_SDK_DIR=/home/wangyulong/Video_Codec_SDK_9.0.20 \
  -DGENERATE_PYTHON_BINDINGS:BOOL="1" \
  -DFFMPEG_DIR=/data/video/ffmpeg-4.1.4/build

-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The CUDA compiler identification is NVIDIA 10.1.243
-- Check for working CUDA compiler: /data/cuda/cuda-10.1/cuda/bin/nvcc
-- Check for working CUDA compiler: /data/cuda/cuda-10.1/cuda/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython3.6m.so (found suitable version "3.6.8", minimum required is "3.5")
-- Found PythonInterp: /usr/bin/python3.6 (found version "3.6.8")
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython3.6m.so
-- pybind11 v2.3.dev0
-- Performing Test HAS_FLTO
-- Performing Test HAS_FLTO - Success
-- LTO enabled
-- Configuring done
-- Generating done
-- Build files have been written to: /home/wangyulong/VideoProcessingFramework/build

build

sudo make install
-- Performing Test HAS_FLTO - Success
Scanning dependencies of target TC_CORE
[  6%] Building CXX object PyNvCodec/TC/TC_CORE/CMakeFiles/TC_CORE.dir/src/Task.cpp.o
[ 13%] Building CXX object PyNvCodec/TC/TC_CORE/CMakeFiles/TC_CORE.dir/src/Token.cpp.o
[ 20%] Linking CXX shared library libTC_CORE.so
[ 20%] Built target TC_CORE
Scanning dependencies of target TC
[ 26%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/MemoryInterfaces.cpp.o
[ 33%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/Tasks.cpp.o
[ 40%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/TasksColorCvt.cpp.o
[ 46%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/FFmpegDemuxer.cpp.o
[ 53%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/NvDecoder.cpp.o
[ 60%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/NvEncoder.cpp.o
[ 66%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/NvEncoderCuda.cpp.o
[ 73%] Building CUDA object PyNvCodec/TC/CMakeFiles/TC.dir/src/Resize.cu.o
[ 80%] Linking CUDA device code CMakeFiles/TC.dir/cmake_device_link.o
[ 86%] Linking CXX shared library libTC.so
[ 86%] Built target TC
Scanning dependencies of target PyNvCodec
[ 93%] Building CXX object PyNvCodec/CMakeFiles/PyNvCodec.dir/src/PyNvCodec.cpp.o
[100%] Linking CXX shared library PyNvCodec.cpython-36m-x86_64-linux-gnu.so
[100%] Built target PyNvCodec
Install the project...
-- Install configuration: ""
-- Installing: /usr/local/bin/libTC_CORE.so
-- Installing: /usr/local/bin/libTC.so
-- Installing: /usr/local/bin/PyNvCodec.cpython-36m-x86_64-linux-gnu.so
-- Up-to-date: /usr/local/bin/SampleDecode.py
-- Up-to-date: /usr/local/bin/SampleEncode.py
-- Up-to-date: /usr/local/bin/SampleTranscode.py
-- Up-to-date: /usr/local/bin/SampleFrameUpload.py
-- Up-to-date: /usr/local/bin/SampleColorConversion.py
-- Up-to-date: /usr/local/bin/SampleSufraceDownload.py
-- Up-to-date: /usr/local/bin/SampleTranscodeOneToN.py

Decode RTSP stream

I'm trying to use this to decode an RTSP stream from an IP camera.
I modified the example to use the RTSP URL, but it seems to only accept files. Is it possible to decode RTSP streams somehow?

import PyNvCodec as nvc

encFile = "rtsp://user:pass@hostname:554/Streaming/Channels/101/"
decFile = open("output.nv12", "wb")

nvDec = nvc.PyNvDecoder(encFile, 0)

while True:
    rawFrame = nvDec.DecodeSingleFrame()
    # Decoder will return zero-size frame if input file is over;
    if not (rawFrame.size):
        break

    frameByteArray = bytearray(rawFrame)
    decFile.write(frameByteArray)

The error message is:

Decoding on GPU 0
rtsp://user:pass@hostname:554/Streaming/Channels/101/
[-----] General error -1330794744 at line 178 in file /VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp
[-----] No AVFormatContext provided.
Traceback (most recent call last):
  File "vpf_test.py", line 7, in <module>
    nvDec = nvc.PyNvDecoder(encFile, 0)
RuntimeError: CUDA error: CUDA_ERROR_INVALID_SOURCE

cmake can not find cuda

-- The CUDA compiler identification is unknown
CMake Error at PyNvCodec/TC/CMakeLists.txt:20 (enable_language):
No CMAKE_CUDA_COMPILER could be found.

Tell CMake where to find the compiler by setting either the environment
variable "CUDACXX" or the CMake cache entry CMAKE_CUDA_COMPILER to the full
path to the compiler, or to the compiler name if it is in the PATH.

I have set the CUDA path in my environment, but it does not work.
Could you show the complete build command for Linux? Thank you :)

cuda error

python3 SampleDecode.py
Decoding on GPU 0
[-----] Media format: QuickTime / MOV (mov,mp4,m4a,3gp,3g2,mj2)
Traceback (most recent call last):
File "SampleDecode.py", line 23, in
nvDec = nvc.PyNvDecoder(encFile, gpuID)
RuntimeError: CUDA error:

RGB to YUV420 to NV12

Thank you for making this library! I have an array of RGB images, each image stored as a Numpy array, and would like to encode them using PyNvEncoder. However, I am currently stuck converting the RGB images to the NV12 format. I am under the impression that NPP does not support this conversion directly, but please correct me if I am wrong [1]. I will thus have to convert the RGB images to YUV420 first but PySurfaceConverter(w, h, PixelFormat.RGB, PixelFormat.YUV420, 0) is unfortunately not implemented. Any suggestions? Thanks!

[1] https://docs.nvidia.com/cuda/npp/group__image__color__model__conversion.html
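Until a GPU-side RGB→YUV420 converter is available, the packing can be done on the CPU with NumPy and the result fed to PyNvEncoder as NV12 bytes. This is a sketch, not VPF code: BT.601 limited-range coefficients are assumed, and even frame dimensions are required.

```python
import numpy as np

def rgb_to_nv12(rgb: np.ndarray) -> np.ndarray:
    """Pack an interleaved (h, w, 3) uint8 RGB image into NV12 bytes.

    BT.601 limited-range coefficients assumed; h and w must be even.
    """
    h, w, _ = rgb.shape
    r, g, b = (rgb[..., i].astype(np.float32) for i in range(3))
    y = 0.257 * r + 0.504 * g + 0.098 * b + 16.0
    u = -0.148 * r - 0.291 * g + 0.439 * b + 128.0
    v = 0.439 * r - 0.368 * g - 0.071 * b + 128.0
    # average each 2x2 block to subsample the chroma planes
    u2 = u.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    v2 = v.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    nv12 = np.empty(h * w * 3 // 2, dtype=np.uint8)
    nv12[: h * w] = np.clip(y, 0, 255).astype(np.uint8).ravel()
    uv = np.empty((h // 2, w), dtype=np.uint8)   # interleaved U/V plane
    uv[:, 0::2] = np.clip(u2, 0, 255).astype(np.uint8)
    uv[:, 1::2] = np.clip(v2, 0, 255).astype(np.uint8)
    nv12[h * w:] = uv.ravel()
    return nv12
```

The NV12 buffer is 1.5 bytes per pixel: a full-resolution luma plane followed by a half-height plane of interleaved U/V pairs.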

Cuda error

Video_Codec_SDK_9.0.20
| NVIDIA-SMI 418.56 Driver Version: 418.56 CUDA Version: 10.1 |
ffmpeg 4.2.1
No problem with compilation
But when I run: python SampleDecode.py
it says:
Traceback (most recent call last):
File "SampleDecode.py", line 24, in
nvDec = nvc.PyNvDecoder(encFile, gpuID)
RuntimeError: xxx/VideoProcessingFramework/PyNvCodec/TC/src/NvDecoder.cpp:568
CUDA error with code -2072787712
No error string available

Decode Error!!!

File "SampleDecode.py", line 23, in
nvDec = nvc.PyNvDecoder(encFile, gpuID)
RuntimeError: CUDA error with code -1No error string available

gcc version: Python 3.7.6
cuda info
NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2
linux version: CentOS release 6.3 (Final)

Decode Error occurred for picture 4766?

Hello!
When I parsed an RTSP stream, I encountered the error:
Decode Error occurred for picture 4766.

My config is :
self.nvDec = nvc.PyNvDecoder(encFile, gpuID,
{'acodec': 'copy', 'vcodec': 'copy', 'rtsp_transport': 'tcp',
'max_delay': '5000000', 'bufsize': '30000k'})

Can you help me ?
What causes this problem?

compile error due to variables set to NotFound

system version

system: ubuntu 18.04
CUDA Version: 10.1
Driver Version: 418.40.04
video-codec-sdk: 9.0.20

modify the CMakeLists.txt

#include_directories(${VIDEO_CODEC_SDK_INCLUDE_DIR})
include_directories(/data/dep_lib/video_decode/Video_Codec_SDK_9.0.20/include)

How do I configure the dependency variables at compile time? The error below occurs:

-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The CUDA compiler identification is NVIDIA 10.1.243
-- Check for working CUDA compiler: /data/cuda/cuda-10.1/cuda/bin/nvcc
-- Check for working CUDA compiler: /data/cuda/cuda-10.1/cuda/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
NVCUVID_LIBRARY
    linked by target "TC" in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC
NVENCODE_LIBRARY
    linked by target "TC" in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC
VIDEO_CODEC_SDK_INCLUDE_DIR
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC/inc
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC/inc
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC/inc
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC/inc
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC/inc
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC/inc
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC/src
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC/src
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC/src
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC/src
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC/src
   used as include directory in directory /home/wangyulong/VideoProcessingFramework/PyNvCodec/TC/src

-- Configuring incomplete, errors occurred!
See also "/home/wangyulong/VideoProcessingFramework/CMakeFiles/CMakeOutput.log".

error in make: recompile with -fPIC

Another error :
[ 28%] Linking CXX shared library libTC.so
/usr/bin/ld: /usr/local/lib/libavutil.a(error.o): relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC
Should I modify the Makefile or the CMakeLists?

PyTorch runs very slowly while using "Export torch tensor"

When I export video frames to PyTorch tensors following the new master-branch code, everything is fine.

However, the post-processing PyTorch model runs quite slowly, about 6 s per picture compared with 0.1 s per picture with cv2.VideoCapture.

Does GPU decoding to a torch GPU tensor cost too many resources? It does not make sense.

nvDec = nvc.PyNvDecoder(encFile, gpuID) error

I test this code:
import PyNvCodec as nvc
gpuID = 0
encFile = "big_buck_bunny_1080p_h264.mov"
decFile = open("big_buck_bunny_1080p_h264.nv12", "wb")

nvDec = nvc.PyNvDecoder(encFile, gpuID)

AttributeError: module 'PyNvCodec' has no attribute 'PyNvDecoder'

Using the sdk in multi thread

Thanks for your work. I tried the numpy version and the torch version successfully.

However, in my case I need to decode multiple RTMP streams at the same time, but I hit some CUDA issues when I initialise the second RTMP decoder with Python multithreading. What can I do?

--- I tried the multiprocess version and that works.
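The one-decoder-per-process pattern the poster ended up with can be sketched like this. It is a sketch under assumptions: the RTMP URLs are hypothetical, and `PyNvCodec` is imported inside the worker so each process creates its own CUDA context (and so the script still loads on machines without VPF).

```python
import multiprocessing as mp

def decode_worker(args):
    """Decode one stream in its own process; returns (url, frame count)."""
    url, gpu_id = args
    # Import inside the worker so every process gets a private CUDA
    # context; sharing one decoder across Python threads is what the
    # reported CUDA issues point at.
    try:
        import PyNvCodec as nvc
    except ImportError:
        return (url, "PyNvCodec not installed")
    nv_dec = nvc.PyNvDecoder(url, gpu_id)
    frames = 0
    while True:
        surf = nv_dec.DecodeSingleSurface()
        if surf.Empty():
            break
        frames += 1
    return (url, frames)

if __name__ == "__main__":
    # hypothetical stream URLs, one worker process per stream
    urls = ["rtmp://example/stream1", "rtmp://example/stream2"]
    with mp.Pool(processes=len(urls)) as pool:
        print(pool.map(decode_worker, [(u, 0) for u in urls]))
```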

Build error!

Hello, I have a problem, please give me some advice, thank you!

[screenshot]

make error

I follow the wiki to build.

[root@prod-cloudserver-gpu080 build]# make
Scanning dependencies of target TC_CORE
[ 7%] Building CXX object PyNvCodec/TC/TC_CORE/CMakeFiles/TC_CORE.dir/src/Task.cpp.o
[ 14%] Building CXX object PyNvCodec/TC/TC_CORE/CMakeFiles/TC_CORE.dir/src/Token.cpp.o
[ 21%] Linking CXX shared library libTC_CORE.so
[ 21%] Built target TC_CORE
Scanning dependencies of target TC
[ 28%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/MemoryInterfaces.cpp.o
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'void VPF::SurfacePlane::Allocate()':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:224:55: error: 'runtime_error' was not declared in this scope
throw runtime_error("Failed to do cuMemAllocPitch");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceY::Width(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:308:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceY::WidthInBytes(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:316:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceY::Height(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:324:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceY::Pitch(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:332:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual CUdeviceptr VPF::SurfaceY::PlanePtr(uint32_t)':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:340:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceNV12::Width(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:375:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceNV12::WidthInBytes(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:386:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceNV12::Height(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:398:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceNV12::Pitch(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:409:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual CUdeviceptr VPF::SurfaceNV12::PlanePtr(uint32_t)':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:421:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceYUV420::Width(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:465:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceYUV420::WidthInBytes(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:479:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceYUV420::Height(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:493:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceYUV420::Pitch(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:507:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual CUdeviceptr VPF::SurfaceYUV420::PlanePtr(uint32_t)':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:521:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceRGB::Width(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:567:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceRGB::WidthInBytes(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:575:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceRGB::Height(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:583:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceRGB::Pitch(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:591:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual CUdeviceptr VPF::SurfaceRGB::PlanePtr(uint32_t)':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:599:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceRGBPlanar::Width(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:632:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceRGBPlanar::WidthInBytes(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:640:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceRGBPlanar::Height(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:648:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual uint32_t VPF::SurfaceRGBPlanar::Pitch(uint32_t) const':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:656:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp: In member function 'virtual CUdeviceptr VPF::SurfaceRGBPlanar::PlanePtr(uint32_t)':
/opt/VideoProcessingFramework/PyNvCodec/TC/src/MemoryInterfaces.cpp:664:48: error: 'invalid_argument' was not declared in this scope
throw invalid_argument("Invalid plane number");
^
make[2]: *** [PyNvCodec/TC/CMakeFiles/TC.dir/src/MemoryInterfaces.cpp.o] Error 1
make[1]: *** [PyNvCodec/TC/CMakeFiles/TC.dir/all] Error 2
make: *** [all] Error 2

Failed to create new PyNvDecoder instance

Hi @rarzumanyan:

After my decoder process runs for a few hours, nv_dec.DecodeSingleSurface() returns an empty surface, so I wrote a piece of code to recreate the PyNvDecoder instance:


while True:
    raw_surface = nv_dec.DecodeSingleSurface()
    if raw_surface.Empty():
        logging.info(f'failed to decode video frame')
        nv_dec = None
        while not isinstance(nv_dec, nvc.PyNvDecoder):
            try:
                time.sleep(30)
                nv_dec = nvc.PyNvDecoder(url, gpu_id, {'rtsp_transport': 'tcp', 'max_delay': '5000000'})
            except Exception as e:
                logging.info(f'failed to recreate the nv_dec: {e}')
                continue
        continue
    ...

But the PyNvDecoder instance can never be created, and it prints an error: FFmpegDemuxer: no AVFormatContext provided. Do you know why? How should I locate the error?
Looking forward to your reply ~
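The reconnect logic in the snippet above can be factored out so it is testable without a GPU. This is a sketch, not VPF code: the decoder factory is injected (e.g. `lambda: nvc.PyNvDecoder(url, gpu_id, opts)`), and the retry count and backoff values are illustrative, not recommendations.

```python
import time

def run_with_reconnect(make_decoder, max_retries=3, backoff_s=1.0):
    """Decode frames, recreating the decoder when the stream drops.

    make_decoder: zero-argument callable returning a fresh decoder with a
    DecodeSingleSurface() method; injected so the retry logic can be
    exercised with a stub.
    """
    frames = 0
    for attempt in range(max_retries + 1):
        try:
            dec = make_decoder()
        except Exception as err:
            print(f"attempt {attempt}: failed to create decoder: {err}")
            time.sleep(backoff_s)
            continue
        while True:
            surf = dec.DecodeSingleSurface()
            if surf is None or surf.Empty():
                break  # stream ended or connection dropped: reconnect
            frames += 1
        time.sleep(backoff_s)
    return frames
```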

can you support BGR format?

Hi!
Can you support the BGR format?

NV12→RGB is OK.
NV12→BGR?
If I do it on the CPU, the CPU usage is high!!!
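One low-cost CPU workaround worth noting: once an RGB frame has been downloaded into a NumPy array, reordering to BGR is a zero-copy view rather than a per-pixel conversion, so the CPU cost is negligible until the data is actually consumed. A minimal sketch:

```python
import numpy as np

def rgb_to_bgr(frame: np.ndarray) -> np.ndarray:
    """Reverse the channel order of an interleaved (h, w, 3) frame.

    Returns a NumPy view; no pixel data is copied.
    """
    return frame[..., ::-1]

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[..., 0] = 255            # pure red in RGB order
bgr = rgb_to_bgr(rgb)        # red now sits in the last channel
```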

Compiling with older Video_Codec_SDK

Unable to update the NVIDIA driver, I'm forced to use Video_Codec_SDK_8.2.16.
"cmake .." goes without a hitch. FFmpeg was compiled from source with --enable-nvenc after installing the correct version of nv-codec-headers.
[screenshot 1]
But at the "make" stage it breaks and I can't figure out what's wrong.
[screenshot 2]
Any ideas?

real-time frame encode/decode

Hi Roman, thank you for the great work! Can you please provide a short example of how to encode and decode frames from cameras in real time? Also, the frame size may change over time. Should I always create a new encoder instance when it changes? Thank you!

new make error

make[2]: *** No rule to make target /usr/lib64/libpython3.6m.so', needed by PyNvCodec/PyNvCodec.cpython-36m-x86_64-linux-gnu.so'. Stop.
make[1]: *** [PyNvCodec/CMakeFiles/PyNvCodec.dir/all] Error 2
make: *** [all] Error 2

Version conflicts

Can you please confirm which versions of the following are compatible, as I have already tried six times with different combinations of drivers, CUDA and FFmpeg. There is missing information that is causing delay and headaches; it should be provided to make this easy.

GTX 1060
Ubuntu 18.04
Nvidia Driver: 430.50
Cuda: 10.1 [Cannot change 10.1 because of some dependency with other library]
Cudnn: 7.6.5
Video_Codec_SDK_9.0.20
nv-codec-headers-n9.0.18.3

ffmpeg is the latest version, but it was configured with the following flags, because the command given in the wiki does not create shared libraries:
./configure --enable-pic --enable-shared --cc="gcc -m64 -fPIC" --prefix=$(pwd)/build_release

VideoProcessingFramework

Only command that was changed

cmake \
  -DFFMPEG_DIR:PATH="/opt/github/FFmpeg" \
  -DVIDEO_CODEC_SDK_DIR:PATH="/home/ashutosh/Downloads/Video_Codec_SDK_9.0.20" \
  -DGENERATE_PYTHON_BINDINGS:BOOL="1" \
  -DCMAKE_INSTALL_PREFIX:PATH="/usr/local/vpf" \
  -DAVCODEC_INCLUDE_DIR:PATH="/opt/github/FFmpeg/build_release/include" \
  -DAVFORMAT_INCLUDE_DIR:PATH="/opt/github/FFmpeg/build_release/include" \
  -DAVUTIL_INCLUDE_DIR:PATH="/opt/github/FFmpeg/build_release/include" \
  -DSWRESAMPLE_LIBRARY="/opt/github/FFmpeg/build_release/lib/libswresample.so" \
  -DAVFORMAT_LIBRARY="/opt/github/FFmpeg/build_release/lib/libavformat.so" \
  -DAVCODEC_LIBRARY="/opt/github/FFmpeg/build_release/lib/libavcodec.so" \
  -DAVUTIL_LIBRARY="/opt/github/FFmpeg/build_release/lib/libavutil.so" \
  .. && \

python3 SampleDecode.py

python3: Relink `/lib/x86_64-linux-gnu/libsystemd.so.0' with `/lib/x86_64-linux-gnu/librt.so.1' for IFUNC symbol `clock_gettime'
python3: Relink `/lib/x86_64-linux-gnu/libudev.so.1' with `/lib/x86_64-linux-gnu/librt.so.1' for IFUNC symbol `clock_gettime'
Segmentation fault (core dumped)

RTP: missed 169 packets

Hi!
When I parse an RTSP stream encoded as H.265, I encounter the problem below.

[rtsp @ 0x5611e3333ec0] max delay reached. need to consume packet
[rtsp @ 0x5611e3333ec0] RTP: missed 169 packets
Decode Error occurred for picture 258[rtsp @ 0x5611e3333ec0] max delay reached. need to consume packet

The resulting picture is of poor quality. Why?
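For what it's worth, the "RTP: missed N packets" messages indicate RTP-over-UDP packet loss, which shows up as exactly this kind of corrupted output. Forcing RTSP over TCP via the FFmpeg options dictionary (the same options that appear elsewhere in this thread) usually removes the artifacts, at the cost of some latency:

```python
# AVDictionary options passed through to FFmpeg's RTSP demuxer.
opts = {
    'rtsp_transport': 'tcp',   # avoid UDP loss ("RTP: missed N packets")
    'max_delay': '5000000',    # reorder/jitter tolerance, in microseconds
}
# Then (hypothetical variable names):
# nv_dec = nvc.PyNvDecoder(url, gpu_id, opts)
```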

How can I execute the resize operation directly via the CUDA interface?

Hello, thanks very much for your amazing work! I finished the build process and linked the dynamic library with the Python bindings successfully.

When a 4K RTSP stream is plugged in as the input source, CPU usage is high if I use the OpenCV resize operation.

So I tried passing hwidth, hheight to PySurfaceResizer to downsample directly, as shown in SampleDecodeMultiThread. I convert the raw image to RGB space.

width, height = self.nvDec.Width(), self.nvDec.Height()
print(width, height)
hwidth, hheight = int(width / 2), int(height / 2)
self.w = hwidth
self.h = hheight
print(self.nvDec.Format())
self.nvCvt = nvc.PySurfaceConverter(width, height, self.nvDec.Format(), nvc.PixelFormat.RGB, gpuID)
self.nvRes = nvc.PySurfaceResizer(hwidth, hheight, self.nvCvt.Format(), gpuID)
self.nvDwn = nvc.PySurfaceDownloader(hwidth, hheight, self.nvRes.Format(), gpuID)

But, I encountered issue as follow:

rawFrame = np.ndarray(shape=(resSurface.HostSize()), dtype=np.uint8)
success = self.nvDwn.DownloadSingleSurface(resSurface, rawFrame)
if not success:
    print('Failed to download surface')
    break

the program exited when download rawFrame from surface.

/home/jt1/miniconda3/bin/python /home/jt1/VideoProcessingFramework/install/bin/test.py
Decoding on GPU 0
[hls @ 0x556f553a4e40] Skip ('#EXT-X-VERSION:3')
[hls @ 0x556f553a4e40] Skip ('#EXT-X-ALLOW-CACHE:NO')
[hls @ 0x556f553a4e40] Skip ('#EXT-X-VERSION:3')
[hls @ 0x556f553a4e40] Skip ('#EXT-X-ALLOW-CACHE:NO')
[hls @ 0x556f553a4e40] Opening 'http://10.196.118.36:9054/live/cHZnNjcxLWF2LzE2LzU%3D/0.ts' for reading
[hls @ 0x556f553a4e40] Opening 'http://10.196.118.36:9054/live/cHZnNjcxLWF2LzE2LzU%3D/1.ts' for reading
3840 2160
PixelFormat.NV12
24883200
Failed to download surface

Where did I make a mistake? It would be very helpful if you could provide some information about it.
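One quick sanity check: the printed size of 24883200 bytes is exactly 3840 x 2160 x 3, i.e. a full-resolution interleaved RGB frame, not the 1920 x 1080 one the downloader was created for. That suggests the surface reaching DownloadSingleSurface is the converter's output rather than the resizer's. Verifying the byte counts before downloading makes this kind of mismatch obvious:

```python
def rgb_host_bytes(width, height):
    # Interleaved 8-bit RGB: 3 bytes per pixel.
    return width * height * 3

full = rgb_host_bytes(3840, 2160)   # converter output (full resolution)
half = rgb_host_bytes(1920, 1080)   # what the downloader was built for
# Before calling DownloadSingleSurface (hypothetical check):
# assert resSurface.HostSize() == rgb_host_bytes(self.w, self.h)
```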

Import Error!

I'm using nvidia-docker

nvidia-cuda:10.1-devel-ubuntu18.04

After a normal compilation, libnvcuvid.so.1 is missing.

root@67a27ca160f2:~/Git/VideoProcessingFramework/install/bin# python ./SampleDecode.py
Traceback (most recent call last):
  File "./SampleDecode.py", line 17, in <module>
    import PyNvCodec as nvc
ImportError: libnvcuvid.so.1: cannot open shared object file: No such file or directory

PyNvCodec error

When I run import PyNvCodec as nvc, I get the error below:

ModuleNotFoundError: No module named 'PyNvCodec'

decode format

Hi, this is a good project!
I want to know: does VPF support the RTSP stream format?

wiki typo?

In step 3 on the wiki page (https://github.com/NVIDIA/VideoProcessingFramework/wiki/Building-from-source) for Linux it says:

VPF user CMake find_library to locate ffmpeg

Since I am not super well versed in CMake, it seems like that is a typo, but it is difficult for me to tell. Trying to figure out how to

link VPF against desired ffmpeg version

would be easier if I knew what the first part was supposed to mean.

It seems like it was supposed to be:

VPF uses CMake's "find_library" command to locate ffmpeg.

thank you for your awesome work!

Still Decode Error

I followed the build guide, but there are still errors:

Decoding on GPU 0
Can't open /home/admin/VPF/VideoProcessingFramework/big_buck_bunny_720p_h264.mov: Invalid data found when processing input
Traceback (most recent call last):
  File "SampleDecode.py", line 23, in <module>
    nvDec = nvc.PyNvDecoder(encFile, gpuID)
ValueError: FFmpegDemuxer: no AVFormatContext provided.

and the SampleRTSP.py:

This sample takes rtsp stream URL as input and transcodes it to local H.264 file
Decoding on GPU 0
Can't open rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov: Protocol not found
FFmpegDemuxer: no AVFormatContext provided.

My CMake configure output:

-- The C compiler identification is GNU 7.5.0
-- The CXX compiler identification is GNU 7.5.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The CUDA compiler identification is NVIDIA 10.0.130
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Searching for FFmpeg libs in /home/admin/VPF/FFmpeg/build_x64_release_shared/lib
-- Searching for FFmpeg headers in /home/admin/VPF/FFmpeg/build_x64_release_shared/include
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython3.6m.so (found suitable version "3.6.8", minimum required is "3.5") 
-- Found PythonInterp: /usr/bin/python3.6 (found version "3.6.8") 
-- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython3.6m.so
-- pybind11 v2.3.dev0
-- Performing Test HAS_FLTO
-- Performing Test HAS_FLTO - Success
-- LTO enabled
-- Configuring done
-- Generating done
-- Build files have been written to: /home/admin/VPF/VideoProcessingFramework/build

I've removed the apt ffmpeg packages and built ffmpeg from source.

Hello, can you compile in python3.8?

Hello, I'm sorry to ask you one more question!

Can you compile with Python 3.8?

When I compiled with Python 3.8, I encountered an error:

-- Searching for FFmpeg libs in /usr/local/ffmpeg/lib
-- Searching for FFmpeg headers in /usr/local/ffmpeg/include
CMake Error at /usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:137 (message):
  Could NOT find PythonLibs (missing: PYTHON_INCLUDE_DIRS) (Required is at
  least version "3.5")
Call Stack (most recent call first):
  /usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:378 (_FPHSA_FAILURE_MESSAGE)
  /usr/share/cmake-3.10/Modules/FindPythonLibs.cmake:262 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
  PyNvCodec/CMakeLists.txt:46 (find_package)

-- Configuring incomplete, errors occurred!
See also "/home/jt1/VideoProcessingFramework/build/CMakeFiles/CMakeOutput.log".

Does the DecodeSingleFrame operation cause high CPU usage?

My code is as follows:

...
nvDec = nvc.PyNvDecoder(encFile, gpuID)
try:
    for i in tqdm(range(numFrames)):
        decFrame = nvDec.DecodeSingleFrame()
...

When I run this code, the CPU usage is 200 ~ 300% (40 CPUs on the server). Does the DecodeSingleFrame operation cause high CPU usage?
If so, is there any way to reduce CPU usage when the picture is copied to the host?
Or is there any way to convert the picture (in GPU memory) into a PyTorch tensor?
Looking forward to your reply, thanks~

pip install vpf

Not having to go through the quite involved manual installation process and instead just being able to run pip install vpf, for example, would be greatly appreciated. 🙂

rtsp/rtmp decode Error occurred for picture xxx

Hi, @rarzumanyan

Decode Error occurred for picture 800

After careful testing, I found the error is most likely related to the RTSP URL itself after one or two minutes. Is there any way to ignore that? OpenCV, which also uses ffmpeg as a backend, runs smoothly for a long time.

Just to mention: after the VPF error shows in the output, the decoding process gets stuck (actually, the empty output of DecodeSingleSurface stops the decoding thread). All I can do is restart the decoding thread, I guess.

Any good solutions?
Best wishes.

ValueError: No AVFormatContext provided.

root@ashutosh-GL553VE:/opt/github/VideoProcessingFramework/install/bin# python3 SampleDecode_rtsp.py 
Decoding on GPU 0
Can't open rtsp://ashutosh:[email protected]:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif: Protocol not found
Traceback (most recent call last):
  File "SampleDecode_rtsp.py", line 23, in <module>
    nvDec = nvc.PyNvDecoder(encFile, gpuID)
ValueError: No AVFormatContext provided.

import PyNvCodec as nvc

gpuID = 0
encFile = "rtsp://ashutosh:[email protected]:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif"
#decFile = open("big_buck_bunny_1080p_h264.nv12", "wb")

nvDec = nvc.PyNvDecoder(encFile, gpuID)

while True:
    rawFrame = nvDec.DecodeSingleFrame()
    # Decoder will return zero-size frame if input file is over;
    if not (rawFrame.size):
        break

    frameByteArray = bytearray(rawFrame)
    print(type(frameByteArray))
    #decFile.write(frameByteArray)

Set gpu devices

Hi,

nvc.PyNvDecoder(encFile, gpuID)

There are 2 GPUs. If I set gpuID to 1, the memory usage of gpu0 also increases. What causes this, and how can I make the program use only gpu1's memory? Thank you ~
PS: With 2 threads running on different GPUs, I cannot handle this in a multi-threaded setup by setting CUDA_VISIBLE_DEVICES.
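CUDA_VISIBLE_DEVICES is indeed per-process, not per-thread, and it only takes effect if set before the first CUDA call in that process. One workaround is to move each decoder into its own process and set the variable there. A sketch; the decoder construction is commented out and hypothetical:

```python
import os

def decode_worker(physical_gpu, url, out):
    # Must run before any CUDA initialization in this process, e.g. as
    # the first statement of a multiprocessing.Process target.
    os.environ['CUDA_VISIBLE_DEVICES'] = str(physical_gpu)
    # Inside this process the only visible GPU is index 0:
    # nv_dec = nvc.PyNvDecoder(url, 0)
    out.put(os.environ['CUDA_VISIBLE_DEVICES'])
```

Launched via multiprocessing.Process(target=decode_worker, args=(1, url, queue)), the process can only create a context on GPU 1, so GPU 0's memory stays untouched.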

RuntimeError: can't get unknown filter by name

ipython

import PyNvCodec as nvc 
nvDec = nvc.PyNvDecoder('rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov', 0)
Decoding on GPU 0
[-----] Media format: RTSP input (rtsp)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: can't get unknown filter by name

Fails when used in a multi-threaded application

Thank you for your effort developing this framework. It's convenient to use.

I've been able to run the samples you provided. However, when using the framework in my own application, I was confronted with the following problem.

My code is like such

import cv2
import PyNvCodec as nvc
from threading import Thread


class CNVCamCtrl:
    def __init__(self):
        self.thread = None
        self.is_running = False
        self.nvc_cap = None

    def __del__(self):
        self.stop()

    def connect(self, source, dev_id=0):
        print("openning %s" % source)
        try:
            self.nvc_cap = nvc.PyNvDecoder(source, dev_id)
            print("open video success")
            return True
        except:
            print("open video fail")
            self.nvc_cap = None
            return False

    def start(self, callback, size):
        self.stop()
        if self.nvc_cap is not None:
            print("start")
            self.is_running = True
            self.thread = Thread(target=self.getLoop, args=[callback, size])
            self.thread.setDaemon(True)
            self.thread.start()

    def stop(self):
        if self.thread is not None:
            self.is_running = False
            self.thread.join()
            self.thread = None

    def getLoop(self, callback, size):
        if (self.nvc_cap is not None) and self.is_running:
            width = self.nvc_cap.Width()
            print("frame width %d" % width)
            img = self.nvc_cap.DecodeSingleFrame()
            print("frame size %d" % img.size)
            if img.size:
                raw_size = (int(img.size / width), int(width))
            while (img.size) and self.is_running:
                img = self.nvc_cap.DecodeSingleFrame()
                img = img.reshape(raw_size)
                img = cv2.cvtColor(img, cv2.COLOR_YUV2BGR_NV12)
                img = cv2.resize(img, size)
                callback(img)


if __name__ == "__main__":
    import time

    def callback(img):
        cv2.imshow("img", img)
        cv2.waitKey(1)

    cam_ctrl = CNVCamCtrl()
    cam_ctrl.connect(
        "rtsp://admin:[email protected]:554/cam/realmonitor?channel=1&subtype=0"
    )
    cam_ctrl.start(callback, (320, 240))
    time.sleep(10)
    cam_ctrl.stop()

The log I got is

openning rtsp://admin:[email protected]:554/cam/realmonitor?channel=1&subtype=0
Decoding on GPU 0
[udp @ 0000024dfd606a40] 'circular_buffer_size' option was set but it is not supported on this build (pthread support is required)
[udp @ 0000024dfd618ec0] 'circular_buffer_size' option was set but it is not supported on this build (pthread support is required)
[-----] Media format: RTSP input (rtsp)
[-----] 27 RTSP input
open video success
start
frame width 1920
frame size 0

I can get it to work if I call getLoop directly in start rather than running getLoop in a thread.

    def start(self, callback, size):
        self.stop()
        if self.nvc_cap is not None:
            print("start")
            self.is_running = True
            self.getLoop(callback, size)
            # self.thread = Thread(target=self.getLoop, args=[callback, size])
            # self.thread.setDaemon(True)
            # self.thread.start()

Could you please give me any advice or suggestions to fix it? Thank you.
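One thing worth trying: construct the decoder inside the thread that decodes with it, so the CUDA context and demuxer state live in a single thread, instead of creating it in connect() and decoding elsewhere. A sketch; `factory` is a hypothetical stand-in for `lambda: nvc.PyNvDecoder(source, dev_id)`:

```python
from threading import Thread

class CamWorker:
    def __init__(self, factory, callback):
        self.factory = factory      # called inside the worker thread
        self.callback = callback
        self.is_running = False
        self.thread = None

    def start(self):
        self.is_running = True
        self.thread = Thread(target=self._loop, daemon=True)
        self.thread.start()

    def _loop(self):
        dec = self.factory()        # decoder born in the decoding thread
        while self.is_running:
            frame = dec.DecodeSingleFrame()
            if not frame.size:      # zero-size frame: stream is over
                break
            self.callback(frame)

    def stop(self):
        self.is_running = False
        if self.thread is not None:
            self.thread.join()
```

This mirrors your working single-threaded variant: everything that touches the decoder happens in one thread.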
