abhitronix / vidgear

A High-performance cross-platform Video Processing Python framework powerpacked with unique trailblazing features :fire:

Home Page: https://abhitronix.github.io/vidgear

License: Apache License 2.0

Python 98.76% Shell 1.24%
opencv multithreading python video-processing ffmpeg youtube video-stabilization video framework twitch webrtc-video streaming real-time screen-capture peer-to-peer hls live-streaming dash video-streaming yt-dlp

vidgear's Introduction

VidGear

VidGear tagline

Releases   |   Gears   |   Documentation   |   Installation   |   License


 

VidGear is a High-Performance Video Processing Python Library that provides an easy-to-use, highly extensible, thoroughly optimised Multi-Threaded + Asyncio API Framework on top of many state-of-the-art specialized libraries like OpenCV, FFmpeg, ZeroMQ, picamera, starlette, yt_dlp, pyscreenshot, dxcam, aiortc, and python-mss serving at its backend. It lets you flexibly exploit their internal parameters and methods, while silently delivering robust error-handling and real-time performance 🔥

VidGear primarily focuses on simplicity, and thereby lets programmers and software developers easily integrate and perform Complex Video Processing Tasks in just a few lines of code.

 

The following functional block diagram clearly depicts the generalized functioning of VidGear APIs:

VidGear Functional Block Diagram

 


 

 

TL;DR

What is vidgear?

"VidGear is a cross-platform High-Performance Framework that provides a one-stop Video-Processing solution for building complex real-time media applications in Python."

What does it do?

"VidGear can read, write, process, send & receive video files/frames/streams from/to various devices in real-time, and faster than the underlying libraries."

What is its purpose?

"Write Less and Accomplish More" - VidGear's Motto

"Built with simplicity in mind, VidGear lets programmers and software developers easily integrate and perform Complex Video-Processing Tasks in their existing or new applications, without going through hefty documentation, in just a few lines of code. Beneficial whether you're new to programming with the Python language or already a pro at it."

 

 

Getting Started

If this is your first time using VidGear, head straight to the Installation ➶ to install VidGear.

Once you have VidGear installed, check out its Well-Documented Function-Specific Gears ➶

Also, if you're already familiar with the OpenCV library, then see Switching from OpenCV Library ➶

Or, if you're just getting started with OpenCV-Python programming, then refer to this FAQ ➶

 

 

Gears: What are these?

VidGear is built with multiple APIs a.k.a Gears, each with some unique functionality.

Each API is designed exclusively to handle/control/process different data-specific & device-specific video streams, network streams, and media encoders/decoders. These APIs provide the user with an easy-to-use, dynamic, extensible, and exposed Multi-Threaded + Asyncio optimized internal layer above state-of-the-art libraries to work with, while silently delivering robust error-handling.

These Gears can be classified as follows:

A. Video-Capture Gears:

  • CamGear: Multi-Threaded API targeting various IP-USB-Cameras/Network-Streams/Streaming-Sites-URLs.
  • PiGear: Multi-Threaded API targeting various Raspberry-Pi Camera Modules.
  • ScreenGear: High-performance API targeting rapid Screencasting Capabilities.
  • VideoGear: Common Video-Capture API with internal Video Stabilizer wrapper.

B. Video-Writer Gears:

  • WriteGear: Lossless Video-Writer that handles file/stream/frames Encoding and Compression.

C. Streaming Gears:

  • StreamGear: Handles Transcoding of High-Quality, Dynamic & Adaptive Streaming Formats.

  • Asynchronous I/O Streaming Gear:

    • WebGear: ASGI Video-Server that broadcasts Live MJPEG-Frames to any web-browser on the network.
    • WebGear_RTC: Real-time Asyncio WebRTC media server for streaming directly to peer clients over the network.

D. Network Gears:

  • NetGear: Handles High-Performance Video-Frames & Data Transfer between interconnecting systems over the network.

  • Asynchronous I/O Network Gear:

    • NetGear_Async: Immensely Memory-Efficient Asyncio Video-Frames Network Messaging Framework.

 

 

CamGear

CamGear Functional Block Diagram

CamGear can grab ultra-fast frames from a diverse range of file-formats/devices/streams, which includes almost any IP-USB Camera, multimedia video file-formats (up to 4K tested), various network stream protocols such as http(s), rtp, rtsp, rtmp, mms, etc., and GStreamer's pipelines, plus direct support for live video streaming sites like YouTube, Twitch, LiveStream, Dailymotion etc.

CamGear provides a flexible, high-level, multi-threaded framework around OpenCV's VideoCapture class with access to almost all of its available parameters. CamGear internally implements a yt_dlp backend class for seamlessly pipelining live video-frames and metadata from various streaming services like YouTube, Twitch, and many more ➶. Furthermore, its framework relies exclusively on Threaded Queue mode for ultra-fast, error-free, and synchronized video-frame handling.

CamGear API Guide:

>>> Usage Guide

 

 

VideoGear

VideoGear API provides a special internal wrapper around VidGear's exclusive Video Stabilizer class.

VideoGear also acts as a Common Video-Capture API that provides internal access for both CamGear and PiGear APIs and their parameters with an exclusive enablePiCamera boolean flag.

VideoGear is ideal when you need to switch to different video sources without changing your code much. Also, it enables easy stabilization for various video-streams (real-time or not) with minimum effort and writing way fewer lines of code.

Below is a snapshot of a VideoGear Stabilizer in action (See its detailed usage here):

VideoGear Stabilizer in action!
Original Video Courtesy @SIGGRAPH2013

Code to generate above result:

# import required libraries
from vidgear.gears import VideoGear
import numpy as np
import cv2

# open any valid video stream with stabilization enabled(`stabilize = True`)
stream_stab = VideoGear(source="test.mp4", stabilize=True).start()

# open same stream without stabilization for comparison
stream_org = VideoGear(source="test.mp4").start()

# loop over
while True:

    # read stabilized frames
    frame_stab = stream_stab.read()

    # check for stabilized frame if Nonetype
    if frame_stab is None:
        break

    # read un-stabilized frame
    frame_org = stream_org.read()

    # concatenate both frames
    output_frame = np.concatenate((frame_org, frame_stab), axis=1)

    # put text over concatenated frame
    cv2.putText(
        output_frame,
        "Before",
        (10, output_frame.shape[0] - 10),
        cv2.FONT_HERSHEY_SIMPLEX,
        0.6,
        (0, 255, 0),
        2,
    )
    cv2.putText(
        output_frame,
        "After",
        (output_frame.shape[1] // 2 + 10, output_frame.shape[0] - 10),
        cv2.FONT_HERSHEY_SIMPLEX,
        0.6,
        (0, 255, 0),
        2,
    )

    # Show output window
    cv2.imshow("Stabilized Frame", output_frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close both video streams
stream_org.stop()
stream_stab.stop()

VideoGear API Guide:

>>> Usage Guide

 

 

PiGear

PiGear

PiGear is similar to CamGear but made to support various Raspberry Pi Camera Modules (such as OmniVision OV5647 Camera Module and Sony IMX219 Camera Module).

PiGear provides a flexible multi-threaded framework around the complete picamera python library, and grants the ability to exploit almost all of its parameters like brightness, saturation, sensor_mode, iso, exposure, etc. effortlessly. Furthermore, PiGear also supports multiple camera modules, such as in the case of Raspberry-Pi Compute Module IO boards.

Best of all, PiGear contains a Threaded Internal Timer that silently keeps active track of any frozen threads or hardware failures and exits safely if any occur. That means if you're running the PiGear API in your script and someone accidentally pulls the Camera-Module cable out, instead of going into a possible kernel panic, the API will exit safely to save resources.

Code to open picamera stream with variable parameters in PiGear API:

# import required libraries
from vidgear.gears import PiGear
import cv2

# add various Picamera tweak parameters to dictionary
options = {
    "hflip": True,
    "exposure_mode": "auto",
    "iso": 800,
    "exposure_compensation": 15,
    "awb_mode": "horizon",
    "sensor_mode": 0,
}

# open pi video stream with defined parameters
stream = PiGear(resolution=(640, 480), framerate=60, logging=True, **options).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output Frame", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

PiGear API Guide:

>>> Usage Guide

 

 

ScreenGear

ScreenGear is designed exclusively for targeting rapid Screencasting Capabilities, which means it can grab frames from your monitor in real-time, either by defining an area on the computer screen or full-screen, with negligible latency. ScreenGear also seamlessly supports frame capturing from multiple monitors as well as multiple backends.

ScreenGear implements a Lightning-Fast API wrapper around the dxcam, pyscreenshot & python-mss python libraries and also supports easy and flexible direct manipulation of its internal parameters.

Below is a snapshot of a ScreenGear API in action:

ScreenGear in action!

Code to generate the above results:

# import required libraries
from vidgear.gears import ScreenGear
import cv2

# open video stream with default parameters
stream = ScreenGear().start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output Frame", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

ScreenGear API Guide:

>>> Usage Guide

 

 

WriteGear

WriteGear Functional Block Diagram

WriteGear handles various powerful Video-Writer Tools that provide us the freedom to do almost anything imaginable with multimedia data.

WriteGear API provides a complete, flexible, and robust wrapper around FFmpeg, a leading multimedia framework. WriteGear can process real-time frames into a lossless compressed video-file with any suitable specifications (such as bitrate, codec, framerate, resolution, subtitles, etc.).

WriteGear also supports streaming with traditional protocols such as RTSP/RTP and RTMP. It is powerful enough to perform complex tasks such as Live-Streaming (e.g. to Twitch, YouTube, etc.) and Multiplexing Video-Audio with real-time frames in just a few lines of code.

Best of all, WriteGear grants users the complete freedom to play with any FFmpeg parameter with its exclusive Custom Commands function (see this doc) without relying on any third-party API.

In addition to this, WriteGear also provides flexible access to OpenCV's VideoWriter API tools for video-frames encoding without compression.

WriteGear primarily operates in the following two modes:

  • Compression Mode: In this mode, WriteGear utilizes powerful FFmpeg inbuilt encoders to encode lossless multimedia files. This mode provides us the ability to exploit almost any parameter available within FFmpeg, effortlessly and flexibly, and while doing that it robustly handles all errors/warnings quietly. You can find more about this mode here ➶

  • Non-Compression Mode: In this mode, WriteGear utilizes basic OpenCV's inbuilt VideoWriter API tools. This mode also supports all parameter transformations available within OpenCV's VideoWriter API, but it lacks the ability to manipulate encoding parameters and other important features like video compression, audio encoding, etc. You can learn about this mode here ➶

WriteGear API Guide:

>>> Usage Guide

 

 

StreamGear

NetGear API

StreamGear automates the transcoding workflow for generating Ultra-Low Latency, High-Quality, Dynamic & Adaptive Streaming Formats (such as MPEG-DASH and Apple HLS) in just a few lines of Python code.

StreamGear provides a standalone, highly extensible, and flexible wrapper around the FFmpeg multimedia framework for generating chunked-encoded media segments of the content.

StreamGear is an out-of-the-box solution for transcoding source videos/audio files & real-time video frames and breaking them into a sequence of multiple smaller chunks/segments of suitable lengths. These segments make it possible to stream videos at different quality levels (different bitrates or spatial resolutions) and to switch from one quality level to another in the middle of a video, if bandwidth permits, on a per-segment basis. A user can serve these segments on a web server that makes it easier to download them through HTTP standard-compliant GET requests.

StreamGear currently supports MPEG-DASH (Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1) and Apple HLS (HTTP Live Streaming). Multiple-DRM support is yet to be implemented.

StreamGear also creates a Manifest file (such as an MPD in case of DASH) or a Master Playlist (such as an M3U8 in case of Apple HLS) besides the segments. It describes the segment information (timing, URL, media characteristics like video resolution and bitrates) and is provided to the client before the streaming session.

StreamGear primarily works in two Independent Modes for transcoding which serves different purposes:

  • Single-Source Mode: In this mode, StreamGear transcodes an entire video file (as opposed to frame-by-frame) into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well when you're transcoding long-duration lossless videos (with audio) for streaming that require no interruptions. But on the downside, the provided source cannot be flexibly manipulated or transformed before being sent to the FFmpeg Pipeline for processing. Learn more about this mode here ➶

  • Real-time Frames Mode: In this mode, StreamGear directly transcodes frame-by-frame (as opposed to an entire video file) into a sequence of multiple smaller chunks/segments for streaming. This mode works exceptionally well when you want to flexibly manipulate or transform numpy.ndarray frames in real-time before sending them to the FFmpeg Pipeline for processing. But on the downside, audio has to be added manually (as a separate source) for streams. Learn more about this mode here ➶

StreamGear API Guide:

>>> Usage Guide

 

 

NetGear

NetGear API

NetGear is exclusively designed to transfer video-frames & data synchronously between interconnecting systems over the network in real-time.

NetGear implements a high-level wrapper around the PyZMQ python library that contains python bindings for ZeroMQ - a high-performance asynchronous distributed messaging library.

NetGear seamlessly supports additional bidirectional data transmission between receiver(client) and sender(server) while transferring video-frames all in real-time.

NetGear can also robustly handle Multiple Server-Systems and Multiple Client-Systems at once, thereby providing a seamless exchange of video-frames & data between multiple devices across the network at the same time.

NetGear also supports remote connections over SSH Tunnels, which let the NetGear client and server connect via a secure SSH connection over an untrusted network and access intranet services across firewalls.

NetGear also enables real-time JPEG Frame Compression capabilities for boosting performance significantly while sending video-frames over the network in real-time.

For security, NetGear implements easy access to ZeroMQ's powerful, smart & secure Security Layers that enable Strong encryption on data and unbreakable authentication between the Server and the Client with the help of custom certificates.

NetGear as of now seamlessly supports three ZeroMQ messaging patterns: zmq.PAIR, zmq.REQ/zmq.REP, and zmq.PUB/zmq.SUB, whereas the supported protocols are: tcp and ipc.

NetGear API Guide:

>>> Usage Guide

 

 

WebGear

WebGear is a powerful ASGI Video-Broadcaster API ideal for transmitting Motion-JPEG-frames from a single source to multiple recipients via the browser.

WebGear API works on Starlette's ASGI application and provides a highly extensible and flexible async wrapper around its complete framework. WebGear can flexibly interact with Starlette's ecosystem of shared middleware, mountable applications, Response classes, Routing tables, Static Files, Templating engine(with Jinja2), etc.

WebGear API uses an intraframe-only compression scheme under the hood where the sequence of video-frames are first encoded as JPEG-DIB (JPEG with Device-Independent Bit compression) and then streamed over HTTP using Starlette's Multipart Streaming Response and a Uvicorn ASGI Server. This method imposes lower processing and memory requirements, but the quality is not the best, since JPEG compression is not very efficient for motion video.

In layman's terms, WebGear acts as a powerful Video Broadcaster that transmits live video-frames to any web-browser on the network. Additionally, the WebGear API also provides a special internal wrapper around VideoGear, which itself provides internal access to both CamGear and PiGear APIs, thereby granting it the exclusive power of broadcasting frames from any incoming stream. It also allows us to define our own custom Server as source to transform frames easily before sending them across the network (see this doc example).

Below is a snapshot of a WebGear Video Server in action on Chrome browser:

WebGear in action!
WebGear Video Server at http://localhost:8000/ address.

Code to generate the above result:

# import required libraries
import uvicorn
from vidgear.gears.asyncio import WebGear

# various performance tweaks
options = {
    "frame_size_reduction": 40,
    "frame_jpeg_quality": 80,
    "frame_jpeg_optimize": True,
    "frame_jpeg_progressive": False,
}

# initialize WebGear app
web = WebGear(source="foo.mp4", logging=True, **options)

# run this app on Uvicorn server at address http://localhost:8000/
uvicorn.run(web(), host="localhost", port=8000)

# close app safely
web.shutdown()

WebGear API Guide:

>>> Usage Guide

 

 

WebGear_RTC

WebGear_RTC is similar to the WebGear API in many aspects but utilizes WebRTC technology under the hood instead of Motion JPEG, which makes it suitable for building powerful video-streaming solutions for all modern browsers as well as native clients available on all major platforms.

WebGear_RTC is implemented with the help of aiortc library which is built on top of asynchronous I/O framework for Web Real-Time Communication (WebRTC) and Object Real-Time Communication (ORTC) and supports many features like SDP generation/parsing, Interactive Connectivity Establishment with half-trickle and mDNS support, DTLS key and certificate generation, DTLS handshake, etc.

WebGear_RTC can handle multiple consumers seamlessly and provides native support for the ICE (Interactive Connectivity Establishment) protocol, STUN (Session Traversal Utilities for NAT), and TURN (Traversal Using Relays around NAT) servers that help us seamlessly establish direct media connections with remote peers for uninterrupted data flow. It also allows us to define our own custom streaming class with a suitable source to transform frames easily before sending them across the network (see this doc example).

WebGear_RTC API works in conjunction with Starlette's ASGI application and provides easy access to its complete framework. WebGear_RTC can also flexibly interact with Starlette's ecosystem of shared middleware, mountable applications, Response classes, Routing tables, Static Files, Templating engine(with Jinja2), etc.

Additionally, WebGear_RTC API also provides a special internal wrapper around VideoGear, which itself provides internal access to both CamGear and PiGear APIs.

Below is a snapshot of a WebGear_RTC Media Server in action on Chrome browser:

WebGear_RTC in action!
WebGear_RTC Video Server at http://localhost:8000/ address.

Code to generate the above result:

# import required libraries
import uvicorn
from vidgear.gears.asyncio import WebGear_RTC

# various performance tweaks
options = {
    "frame_size_reduction": 30,
}

# initialize WebGear_RTC app
web = WebGear_RTC(source="foo.mp4", logging=True, **options)

# run this app on Uvicorn server at address http://localhost:8000/
uvicorn.run(web(), host="localhost", port=8000)

# close app safely
web.shutdown()

WebGear_RTC API Guide:

>>> Usage Guide

 

 

NetGear_Async

NetGear_Async in action!

NetGear_Async can deliver the same performance as the NetGear API at about one-third the memory consumption, and also provides complete server-client handling with various options to use variable protocols/patterns similar to NetGear, but it lacks flexibility as it supports only a few of NetGear's Exclusive Modes.

NetGear_Async is built on zmq.asyncio, and powered by a high-performance asyncio event loop called uvloop to achieve unmatchable high-speed and lag-free video streaming over the network with minimal resource constraints. NetGear_Async can transfer thousands of frames in just a few seconds without causing any significant load on your system.

NetGear_Async provides complete server-client handling and options to use variable protocols/patterns similar to NetGear API. Furthermore, NetGear_Async allows us to define our custom Server as source to transform frames easily before sending them across the network(see this doc example).

NetGear_Async now supports additional bidirectional data transmission between receiver(client) and sender(server) while transferring video-frames. Users can easily build complex applications such as Real-Time Video Chat in just a few lines of code.

NetGear_Async as of now supports all four ZeroMQ messaging patterns: zmq.PAIR, zmq.REQ/zmq.REP, zmq.PUB/zmq.SUB, and zmq.PUSH/zmq.PULL, whereas the supported protocols are: tcp and ipc.

NetGear_Async API Guide:

>>> Usage Guide

 

 

Contributions

👑 Contributor Hall of Fame 👑




We're happy to meet new contributors💗


We welcome your contributions to help us improve and extend this project. If you want to get involved with VidGear development, check out the Contribution Guidelines ▶️

We're offering support for VidGear on Gitter Community Channel. Come and join the conversation over there!

 

 

Donations

PiGear

VidGear is free and open source and will always remain so. ❤️

It is something I am doing with my own free time. But so much more needs to be done, and I need your help to do this. For just the price of a cup of coffee, you can make a difference 🙂

Buy Me a Coffee at ko-fi.com

 

 

Citation

Here is a Bibtex entry you can use to cite this project in a publication:

DOI

@software{vidgear,
  author       = {Abhishek Thakur and
                  Zoe Papakipos and
                  Christian Clauss and
                  Christian Hollinger and
                  Ian Max Andolina and
                  Vincent Boivin and
                  enarche-ahn and
                  freol35241 and
                  Benjamin Lowe and
                  Mickaël Schoentgen and
                  Renaud Bouckenooghe},
  title        = {abhiTronix/vidgear: VidGear v0.3.1},
  month        = jul,
  year         = 2023,
  publisher    = {Zenodo},
  version      = {vidgear-0.3.1},
  doi          = {10.5281/zenodo.8174694},
  url          = {https://doi.org/10.5281/zenodo.8174694}
}

 

 

Copyright

Copyright © abhiTronix 2019

This library is released under the Apache 2.0 License.

vidgear's People

Contributors

abhitronix, bml1g12, bobotig, cclauss, chollinger93, enarche-ahn, freol35241, iandol, ibtsam3301, iraadit, vboivin, zpapakipos


vidgear's Issues

Set VideoGear's decoder.

Is there any way to set the video decoder for the VideoGear class? I've been looking through the docs but haven't found many details.
I would like to use the GPU to decode the video, in order to speed up the process a little.

TLS Connection was non-properly terminated

Hi, I want to get a live stream from YouTube using my Raspberry Pi. But I got this output:

The TLS connection was non-properly terminated. The specified session has been invalidated for some reason.

Please help me

[Bug]: Assertion error in CamGear API during colorspace manipulation

Description

This bug directly affects colorspace manipulation in CamGear API. Due to this bug, the CamGear API currently exits itself with (-215:Assertion failed) !_src.empty() in function 'cvtColor' error with any colorspace because of the improper handling of threaded queue structure.

Acknowledgment

  • A brief but descriptive Title of your issue
  • I have searched the issues for my issue and found nothing related or helpful.
  • I have read the FAQ.
  • I have read the Wiki.
  • I have read the Contributing Guidelines.

Environment

  • VidGear version: 0.1.6-dev
  • Branch: Development
  • Python version: All
  • pip version: All
  • Operating System and version: All

Expected Behavior

No Assertion error while self-terminating.

Actual Behavior

Throws (-215:Assertion failed) !_src.empty() in function 'cvtColor' error on self-termination.

Code to reproduce

from vidgear.gears import CamGear

stream = CamGear(source="test.mp4", colorspace="COLOR_BGR2YUV", logging=True).start()
while True:
    frame = stream.read()
    # check if frame is None
    if frame is None:
        # if True, break the infinite loop
        break
stream.stop()

On executing this code, it will self-terminate with the (-215:Assertion failed) !_src.empty() in function 'cvtColor' error.

How do I use VidGear for stabilizing a series of images?

Question

Acknowledgment

  • A brief but descriptive Title of your issue.
  • I have searched the issues for my issue and found nothing related or helpful.
  • I have read the FAQ.
  • I have read the Wiki.

Context

Your Environment

  • VidGear version:
  • Branch:
  • Python version:
  • pip version:
  • Operating System and version:

Optional

[Proposal] WebRTC Real-time video streaming with vidgear

Detailed Description

WebRTC is a free, open project that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs. The WebRTC components have been optimized to best serve this purpose. This proposal is to test whether it is possible to add WebRTC python support to transfer video frames from source using webRTC in real-time and bring this implementation under vidgear's multithreaded API environment.

Context

Our goal through this proposal is to test whether it is possible to add WebRTC python support to transfer video frames from source using webRTC in real-time and if possible, then implement this with vidgear multithreaded API.

Your Environment

  • VidGear version: all
  • Branch: Development
  • Python version: all
  • pip version: non applicable
  • Operating System and version: all

Any Other Important Information

Helpful Resource/Library : Aiortc is a library for Web Real-Time Communication (WebRTC) and Object Real-Time Communication (ORTC) in Python. It is built on top of asyncio, Python's standard asynchronous I/O framework. Source: https://github.com/aiortc/aiortc

[Proposal] Can NetGear Client/Server can send/receive data with custom certificates?

Detailed Description

I think someone in the middle can capture the frame that Server sends to the client without security mechanisms. Can NetGear Client/Server can send/receive data with custom certificates?

Context

This proposal is to add Secure the connection between Servers and Client with custom certificates. This gives us strong encryption on data, and (as far as we know) unbreakable authentication. Stonehouse is the minimum you would use over public networks and assures clients that they are speaking to an authentic server while allowing any client to connect. More information can be found in these links:
https://github.com/zeromq/pyzmq/blob/master/examples/security/stonehouse.py
https://github.com/zeromq/pyzmq/blob/master/examples/security/generate_certificates.py

Your Environment

  • VidGear version: latest
  • Branch: PyPi
  • Python version: all
  • pip version: latest
  • Operating System and version: not applicable

Any Other Important Information

Not available

How can I send this video to javascript

Question

I was trying to use a socket but found this library. My question is: how can I send this video to the web using JavaScript and this lib?

This is my JavaScript; this is how I receive an image:

    ws.onmessage = function (msg) {
        var arrayBuffer = msg.data;
        var bytes = new Uint8Array(arrayBuffer);

        var image = document.getElementById('image');
        image.src = 'data:image/png;base64,' + encode(bytes);
    };

WriteGear Bare-Minimum example (Non-Compression) not working

Description

  1. I followed the demo here: https://github.com/abhiTronix/vidgear/wiki/Non-Compression-Mode:-OpenCV#1-writegear-bare-minimum-examplenon-compression-mode

  2. Run the code

  3. The following error showed:

Compression Mode is disabled, Activating OpenCV In-built Writer!
InputFrame => Height:360 Width:640 Channels:1
FILE_PATH: /******/Output.mp4, FOURCC = 1196444237, FPS = 30.0, WIDTH = 640, HEIGHT = 360, BACKEND =
OpenCV: FFMPEG: tag 0x47504a4d/'MJPG' is not supported with codec id 7 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
Warning: RGBA and 16-bit grayscale video frames are not supported by OpenCV yet, switch to `compression_mode` to use them!
Traceback (most recent call last):
  File "cam_demo.py", line 31, in <module>
    writer.write(gray)
  File "/Users/*****/lib/python3.7/site-packages/vidgear/gears/writegear.py", line 221, in write
    raise ValueError('All frames in a video should have same size')
ValueError: All frames in a video should have same size

Acknowledgment

  • A brief but descriptive Title of your issue
  • I have searched the issues for my issue and found nothing related or helpful.
  • I have read the FAQ.
  • I have read the Wiki.
  • I have read the Contributing Guidelines.

Environment

  • VidGear version: 0.1.5
  • Branch: PyPi
  • Python version: 3.7.3
  • pip version: 19.1.1
  • Operating System and version: macOS 10.14.3

Expected Behavior

Write frame to file.

Actual Behavior

Frames are of different sizes, so writing raises a ValueError.

Possible Fix

Could WriteGear allow specifying the output frame size explicitly?
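One likely cause of the ValueError above is mixing single-channel grayscale frames with the 3-channel frames the writer saw first. As a possible workaround, the grayscale frame can be expanded back to three channels before writing. A NumPy sketch (equivalent to cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)); gray_to_bgr is a hypothetical helper, not part of the WriteGear API:

```python
import numpy as np

def gray_to_bgr(gray: np.ndarray) -> np.ndarray:
    """Replicate a single-channel (H, W) frame into an (H, W, 3) frame
    so its shape matches what the writer was initialized with."""
    return np.repeat(gray[:, :, np.newaxis], 3, axis=2)

gray = np.zeros((360, 640), dtype=np.uint8)  # same size as in the log above
bgr = gray_to_bgr(gray)
print(bgr.shape)  # (360, 640, 3)
```

Then `writer.write(gray_to_bgr(gray))` keeps every frame the same shape.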

Steps to reproduce

(Write your steps here:)

See description.

Optional

Enhancement: Multithreaded Live Screen Cast support in vidgear

Multithreaded Live Screen Cast

Introduction:

A screencast is a digital recording of computer screen output, also known as a video screen capture. The term screencast compares with the related term screenshot; whereas screenshot generates a single picture of a computer screen, a screencast is essentially a movie of the changes over time that a user sees on a computer screen.

Available Resources:

Python MSS:

MSS stands for Multiple ScreenShots. It is an ultra-fast, cross-platform, multiple-screenshots module in pure Python using ctypes. With MSS we can easily define an area of the computer screen, or an open window, from which to record the live screen.

Goal

Our goal is to implement live screencast support in vidgear by implementing a high-level wrapper around Python MSS, with little to no added latency, in Python.

TODO

  • Implement Live Screen Cast support in vidgear
  • Prepare a Multi-Threaded wrapper around Python-MSS
  • Create new ScreenGear class from scratch to implement this feature
  • Make ScreenGear compatible with existing vidgear Classes.
  • Must provide higher framerate at low latency with fewer resources
  • Optimize overall performance
  • Fix related bugs
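The threaded-wrapper part of this TODO can be prototyped independently of MSS. Here is a stdlib-only sketch of the pattern such a ScreenGear class could use; the fake_grab function is a stand-in for an actual mss screen grab:

```python
import threading
import time
from collections import deque

class ThreadedCapture:
    """Continuously call `grab` on a daemon thread and keep only the most
    recent frames in a small deque for low-latency reads."""

    def __init__(self, grab, maxlen=2):
        self._grab = grab
        self._frames = deque(maxlen=maxlen)  # old frames are dropped automatically
        self._running = True
        self._thread = threading.Thread(target=self._update, daemon=True)
        self._thread.start()

    def _update(self):
        while self._running:
            self._frames.append(self._grab())

    def read(self):
        # non-blocking: return the latest frame, or None if nothing grabbed yet
        return self._frames[-1] if self._frames else None

    def stop(self):
        self._running = False
        self._thread.join()

# dummy grab() standing in for e.g. mss.mss().grab(monitor)
counter = {"n": 0}
def fake_grab():
    counter["n"] += 1
    time.sleep(0.001)
    return counter["n"]

cap = ThreadedCapture(fake_grab)
time.sleep(0.05)          # let the background thread grab a few frames
frame = cap.read()
cap.stop()
print(frame is not None)  # True
```

Swapping fake_grab for a real mss grab call is all the wrapper would need, plus frame conversion to a NumPy array.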

How to Get Frames from a YouTube Live Stream

Hello,
This code works with some YouTube videos,
but it does not work with the following YouTube live streams:

https://www.youtube.com/watch?v=17Deeq8N2e4
https://www.youtube.com/watch?v=1y5dcfnv-Ss
https://www.youtube.com/watch?v=tbLXWVhu8-Q

  1. Method:
     vPafy = pafy.new(videoUrl)
     play = vPafy.getbest(preftype="mp4")
     return play.url

  2. Method:
     streams = streamlink.streams(videoUrl)
     return streams["best"].url

cap = cv.VideoCapture(videoUrl)

Neither method works.

The error output looks like this:

[ERROR:0] global C:\projects\opencv-python\opencv\modules\videoio\src\cap.cpp (116) cv::VideoCapture::open VIDEOIO(CV_IMAGES): raised OpenCV exception:

OpenCV(4.1.1) C:\projects\opencv-python\opencv\modules\videoio\src\cap_images.cpp:235: error: (-5:Bad argument) CAP_IMAGES: error, expected '0?[1-9][du]' pattern, got: https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1574433696/ei/QJ_XXfqBHNOTgQOBqpGwBw/ip/222.112.215.2/id/1EiC9bvVGnk.1/itag/96/source/yt_live_broadcast/requiressl/yes/ratebypass/yes/live/1/goi/160/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D137/hls_chunk_host/r5---sn-3u-bh2ll.googlevideo.com/playlist_type/DVR/initcwndbps/7760/mm/44/mn/sn-3u-bh2ll/ms/lva/mv/m/mvi/4/pl/23/dover/11/keepalive/yes/fexp/23842630/mt/1574412027/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,goi,sgoap,sgovp,playlist_type/sig/ALgxI2wwRAIgG6xA4SgrD4PZGAfyup1jpL003-U3CQomrURDSKrCbZQCIElX0iQYvSGuZK-aoDsbY9Zv6SVTCNHOXGoUXhPCj0bN/lsparams/hls_chunk_host,initcwndbps,mm,mn,ms,mv,mvi,pl/lsig/AHylml4wRQIhAO1AS0iv1JaOu9igx-i3uGV-52UNCvd1Kd4Fu9SSC6OqAiAVjNjYYdr37w4Id111zdRsu8csAkIAfynBOk4SEO9f3w%3D%3D/playlist/index.m3u8 in function 'cv::icvExtractPattern'

Traceback (most recent call last):
  File "D:/VideoFeedProcessing/VideoFeed/main.py", line 226, in <module>
    out = cv.VideoWriter('_output.avi', fourcc, 15, (ori_wid, ori_hei))
TypeError: must be real number, not tuple
[tcp @ 000001601c64fbc0] Connection to tcp://manifest.googlevideo.com:443 failed: Error number -138 occurred

How to read the length of a file in frames, and how to jump to a specific frame

Hi,
I want to use vidgear for the webcam and also for file loading.
For webcam or YouTube everything runs well; the problem is when I load a video file.
My program is based on frame position, so I use code like this:

video_pos = int(cap.get(cv2.CAP_PROP_POS_FRAMES))

but I get the error:
AttributeError: 'CamGear' object has no attribute 'get'. How can I fix it?

A related question is how I can jump to a specific frame.

Thanks for the answer


Video sending using multithreading

For context: I transfer video from a camera on a Raspberry Pi 3 to a local computer. The local network is used only for this purpose, so there is no congestion, and both the computer and the Raspberry Pi are connected by cable (LAN). The sending is slow, and I had an idea to increase the transfer rate by sending over two channels in parallel.

Question

I have a question, to verify whether this is possible or whether someone else has already tried it.

For context: I transfer video from a camera on a Raspberry Pi 3 to a local computer. The local network is used only for this purpose, so there is no congestion, and both the computer and the Raspberry Pi are connected by cable (LAN).
The video I transfer has a size of 1920x1080.

Using the NetGear API I transfer the video correctly, but at an average speed of 3 FPS.

For my purposes I need video at a higher speed, but without reducing its quality or resizing it.

server = NetGear(address='192.168.x.xxx', port='5454', protocol='tcp', pattern=1, receive_mode=False, logging=True, **options)  # define NetGear server at Server IP address
server2 = NetGear(...)  # same settings, but on a different port

I tried to implement communication using 2 threads, where each thread sends a video frame to the computer, so that while one frame is being sent, a subsequent frame can be sent at the same time. In theory this could help me increase the FPS.

But what I have observed is that internally, even when using threads, frames are only sent one by one; even with two sending objects, one will wait internally for the other to finish its send before making its own.

For example, I implemented 3 threads to send video at the same time with consecutive frames, executing the code for 10 seconds. Each thread could only send 10 frames, so in total over those ten seconds 30 frames were sent: 3 FPS.

Each thread constantly checks a corresponding frame queue for a frame to send:

while True:
    if len(queue1) > 0:
        server.send(frame)

Acknowledgment

  • [*] A brief but descriptive Title of your issue.
  • [*] I have searched the issues for my issue and found nothing related or helpful.
  • [*] I have read the FAQ.
  • [*] I have read the Wiki.

Context

I mainly want to increase the number of video frames I receive.

For context: I transfer 1920x1080 video from a camera on a Raspberry Pi 3 to a local computer over a dedicated, uncongested wired LAN.

Your Environment

  • VidGear version: 0.1.5
  • Branch:
  • Python version: 3.5
  • pip version: current version
  • Operating System and version: Ubuntu 16.04

Optional

multithreading

New Feature Request: Bi-Directional Messaging

Hello!
Firstly, thank you very much for your work, VidGear is working really well with my project! I'm able to send one way video and other information from a Raspberry Pi (server) to a second computer (client).

Looking at the docs, the "pattern" used to configure Netgear is ZMQ.pair by default - which is said to be bidirectional. Could this be used to send information back to the server?

For instance, the server could send the frame and a string of information over to the client, which could reply with confirmation that it has received the data, as well as any other information. In my case, it could send data back to a Raspberry Pi, which can be parsed to turn on an LED or drive a motor.

That would be super useful to my project, and may help others with theirs!

Thank you!
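To illustrate why the PAIR pattern permits this, here is a minimal pyzmq sketch (plain zmq.PAIR sockets over an inproc transport, not NetGear's actual API; the JSON payloads are stand-ins for a frame plus message):

```python
import zmq

ctx = zmq.Context.instance()

server = ctx.socket(zmq.PAIR)
server.bind("inproc://demo")

client = ctx.socket(zmq.PAIR)
client.connect("inproc://demo")

# server -> client: frame metadata (a stand-in for frame + message)
server.send_json({"frame_id": 1, "msg": "LED_ON"})

# client receives, acts on it, and replies on the SAME socket
data = client.recv_json()
client.send_json({"ack": data["frame_id"], "status": "received"})

# server -> gets the confirmation back
reply = server.recv_json()
print(reply)  # {'ack': 1, 'status': 'received'}

client.close()
server.close()
ctx.term()
```

PAIR is symmetric, so either side can send at any time; over a real network you would replace "inproc://demo" with a tcp:// address.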

Python 2 MultiServer Mode: ValueError: [ERROR]: Failed to connect address

I use the testing branch of vidgear in order to use MultiServer Mode.

I am running this code on a Raspberry Pi 3B+ that transmits video from a USB camera to a computer.

from vidgear.gears import NetGear
from vidgear.gears import CamGear
import cv2

#Open live video stream from device at index 0
stream = cv2.VideoCapture(0) 

#activate multiserver_mode
options = {'multiserver_mode': True, 'flag' : 0, 'copy' : False, 'track' : False}

#change following IP address '192.168.1.xxx' with Client's IP address and assign unique port address(for e.g 5566).
server = NetGear(address='192.168.x.x', port='5566', protocol='tcp', pattern=2, receive_mode=False, **options)  # and keep the rest of the settings similar to the Client

# infinite loop until [Ctrl+C] is pressed
while True:
	try: 
		(grabbed, frame) = stream.read()
		# read frames

		# check if frame is not grabbed
		if not grabbed:
			#if True break the infinite loop
			break

		# do something with frame and data(to be sent) here

		text = "Hello, I'm Server-1 at Port Address: 5566."

		# send frame and data through server
		server.send(frame, message = text)
	
	except KeyboardInterrupt:
		#break the infinite loop
		break

# safely close video stream.
stream.release()
# safely close server-1
server.close()

I get this error:
ValueError: [ERROR]: Failed to connect address: tcp://192.168.0.107:5566 and pattern: 2! Kindly recheck all parameters.

Tracking the problem, it originates at line 477:
self.msg_socket.bind(protocol+'://' + str(address) + ':' + str(port))

Testing this code in Python 3 works fine.

Is the problem due to the Python version?

Enhancement: Real-time Video Stabilization in vidgear

Real-time Video Stabilization in vidgear

Introduction:

Video stabilization refers to a family of methods used to reduce the blurring and distortion associated with the motion of the camera. In other words, it compensates for any angular movement, equivalent to yaw, pitch, and roll, and for x and y translations of the camera. A related problem is common in videos shot on mobile phones: the camera sensors in these phones contain what is known as an electronic rolling shutter. When taking a picture with a rolling-shutter camera, the image is not captured instantaneously. Instead, the camera captures the image one row of pixels at a time, with a small delay when going from one row to the next. Consequently, if the camera moves during capture, it will cause image distortions ranging from shear in the case of low-frequency motion (for instance, an image captured from a drone) to wobbly distortions in the case of high-frequency perturbations (think of a person walking while recording video). These distortions are especially noticeable in videos where the camera shake is independent across frames.

The ability to locate, identify, track, and stabilize objects at different poses and against different backgrounds is important in many real-time video applications. Object detection, tracking, alignment, and stabilization have been research areas of great interest in computer vision and pattern recognition due to the challenging nature of certain objects, such as faces, where algorithms should be precise enough to identify, track, and focus on one individual among the rest.

Real-Time Video Stabilization:

A few months back, while working on my humanoid, I experienced significant jitter at the output due to motion in the cameras/servos/platform, which caused tracked features to get lost along the way and thus resulted in false-positive movement of the humanoid's eyes. In order to eliminate this problem, I decided to implement a real-time video stabilizer. I studied and experimented with various methods published in research papers and online resources, and finally came to the conclusion that some state-of-the-art video stabilization methods achieve quite a good visual effect but always cost a lot of time, while other, real-time video stabilization methods cannot generate satisfactory results.

Goal:

Our goal is to implement real-time video stabilization for vidgear that provides a good balance between stabilization and latency, with little to no extra computational requirement, making it ideal for the Raspberry Pi too. Secondly, it must be implemented using the OpenCV computer vision library for open-source considerations.

Resources:

  • Going through the various methods published in research papers and online resources, I think the Simple video stabilization using OpenCV approach by nghiaho12 works best on my Raspberry Pi. It is less computationally expensive, and there is a C++ implementation available to get started with.

TODO

  • Implement a Real-Time Video Stabilizer from scratch in python
  • No extra dependencies may be used beyond the existing ones
  • Must provide good stabilization and low latency with no extra resources
  • Merge stabilizer with VideoGear Class
  • Must be compatible with any video stream and able to perform at High FPS.
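For reference, the core of the low-cost approach mentioned above estimates per-frame (dx, dy, d_angle) transforms, integrates them into a trajectory, smooths that trajectory with a moving average, and re-applies the corrected transforms. A NumPy sketch of just the smoothing step follows; the transform estimation via optical flow is omitted, and the motion data here is synthetic:

```python
import numpy as np

def smooth_trajectory(transforms: np.ndarray, radius: int = 5) -> np.ndarray:
    """transforms: (N, 3) array of per-frame (dx, dy, d_angle).
    Returns corrected per-frame transforms whose cumulative path
    follows a moving-average-smoothed version of the original."""
    trajectory = np.cumsum(transforms, axis=0)           # integrate motion over time
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)  # box filter
    padded = np.pad(trajectory, ((radius, radius), (0, 0)), mode="edge")
    smoothed = np.stack(
        [np.convolve(padded[:, i], kernel, mode="valid") for i in range(3)],
        axis=1,
    )
    # nudge each raw transform by the gap between smooth and raw paths
    return transforms + (smoothed - trajectory)

rng = np.random.default_rng(0)
jittery = rng.normal(0.0, 1.0, size=(100, 3))  # synthetic shaky inter-frame motion
stable = smooth_trajectory(jittery)
print(stable.shape)  # (100, 3)
```

Each corrected transform would then be converted back into a 2x3 affine matrix and applied with cv2.warpAffine; only NumPy and OpenCV are needed, matching the no-extra-dependencies constraint.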

How to set the framerate with ScreenGear

Question

I would like to set the frame rate with which the ScreenGear module acquires the frames from my monitor.
How can I do it?

Other details:

If I grab a small frame, the video will appear greatly slowed down (i.e. I record for 10 seconds but the written video is 1 minute), since ScreenGear is able to grab many frames per second while WriteGear stores them at a fixed FPS (e.g. 25).

On the other hand, if I grab a large frame (e.g. my whole 4K monitor), the video will appear very fast (i.e. I record for 30 seconds but the written video is 5 seconds), since only a few frames per second can be grabbed because of the large images.

How can I manage this?
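Until the capture framerate can be set directly, one workaround is to pace the grab-and-write loop yourself, so that roughly one frame per writer-FPS interval is written per wall-clock second. A stdlib sketch of such a pacer; grab and write are placeholders for stream.read() and writer.write():

```python
import time

def paced_loop(grab, write, fps: float, duration: float) -> int:
    """Call grab()+write() at most `fps` times per second for `duration`
    seconds, sleeping away any surplus capture speed."""
    interval = 1.0 / fps
    written = 0
    start = time.perf_counter()
    next_tick = start
    while time.perf_counter() - start < duration:
        frame = grab()
        write(frame)
        written += 1
        next_tick += interval              # absolute schedule: no drift
        delay = next_tick - time.perf_counter()
        if delay > 0:                      # grabbed faster than fps: wait
            time.sleep(delay)              # (if slower, loop runs flat out)
    return written

frames = []
n = paced_loop(grab=lambda: object(), write=frames.append, fps=100, duration=0.5)
print(n)  # ~50 frames for a 0.5 s recording at 100 FPS
```

With fps matched to the WriteGear output FPS, the small-frame case stops producing slow-motion output; the large-frame case (capture slower than fps) still needs a faster grab or a lower output FPS.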

Enhancement: Real-Time Video Frame Transferring over the network through Messaging

Real-Time Video Frame Transferring over the network through Messaging

Messaging

PatternHierarchy

Messaging makes applications loosely coupled by communicating asynchronously, which also makes the communication more reliable because the two applications do not have to be running at the same time. Messaging makes the messaging system responsible for transferring data from one application to another, so the applications can focus on what data they need to share but not worry so much about how to share it.

Message Oriented protocols

Message-oriented protocols send data in distinct chunks or groups, so the receiver of data can determine where one message ends and another begins. Message protocols are usually built over streams, with one layer in between that takes care of separating each logical part from the others. It parses the input stream for you and gives you a result only when the whole dataset arrives, not the intermediate states.
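As a concrete illustration of that in-between layer, here is a stdlib-only sketch of length-prefixed framing over a byte stream (ZeroMQ and MQTT implement this idea, plus much more, internally):

```python
import socket
import struct

def send_msg(sock: socket.socket, payload: bytes) -> None:
    # 4-byte big-endian length header, then the payload itself
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    # the receiver learns where one message ends and the next begins
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

a, b = socket.socketpair()
send_msg(a, b"frame-1")
send_msg(a, b"frame-2")
m1, m2 = recv_msg(b), recv_msg(b)
print(m1, m2)  # b'frame-1' b'frame-2'
a.close()
b.close()
```

Two sends arrive as two distinct messages even though TCP itself only guarantees a byte stream; this is exactly the property that makes frame transfer practical.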

Available Resources

  1. MQTT is a machine-to-machine (M2M) / "Internet of Things" connectivity protocol that works on top of TCP/IP, and Mosquitto is a lightweight message broker implementing it. It is designed for connections with remote locations where a "small code footprint" is required or the network bandwidth is limited. Its publish-subscribe messaging pattern requires a message broker.

  2. ZeroMQ (also spelled ØMQ, 0MQ or ZMQ) is a high-performance asynchronous brokerless messaging library, aimed at use in distributed or concurrent applications. It provides a message queue, but unlike message-oriented middleware, a ZeroMQ system can run without a dedicated message broker.

Since ZeroMQ outperformed MQTT in various tests and is well documented, I decided to go with ZeroMQ for the messaging implementation in vidgear.

Goal

Our goal is to implement real-time video-frame transferring over the network in vidgear by implementing a high-level wrapper around PyZMQ, which contains the Python bindings for ZeroMQ. This wrapper will provide both read and write functionality, and the read function will be multi-threaded for high-speed frame capturing with minimal latency and memory constraints.

TODO

  • Implement a new Netgear class: a high-level wrapper around ZeroMQ
  • Add both send() and recv() function for transferring frames
  • Make send() function multi-threaded and error-free with Threaded Queue Mode.
  • Add support for various possible messaging synchronous patterns
  • frame-transferring between server/client must be synchronized and ultrafast with minimum latency
  • Robustly handle the server and client ends, even if one of them is started at a different instant.
  • The server end must be able to terminate the stream at the client(s) end automatically.
  • The server and client must be able to talk/send messages at any instant while transferring frames.

I want to get a live stream from YouTube, and for that I have used OpenCV along with the vidgear package. But while running the code, I am getting the following error. I am sure there is no problem with the URL. I have tried pafy and streamlink; both gave results, but after a few frames the stream got stuck, and I want sequential frames without any pause.

import cv2
from vidgear.gears import CamGear

stream = CamGear(source="https://www.youtube.com/watch?v=VIk_6OuYkSo", y_tube=True, time_delay=1, logging=True).start()  # YouTube Video URL as input

while True:
    frame = stream.read()
    if frame is None:
        break

    cv2.imshow("Output Frame", frame)

    key = cv2.waitKey(30)
    if key == ord("q"):
        break

cv2.destroyAllWindows()
stream.stop()

'NoneType' object has no attribute 'extension'
Traceback (most recent call last):
  File "C:\Users\CamfyVision\AppData\Local\Programs\Python\Python36\lib\site-packages\vidgear\gears\camgear.py", line 120, in __init__
    print('Extension: {}'.format(_source.extension))
AttributeError: 'NoneType' object has no attribute 'extension'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "DronrStream.py", line 4, in <module>
    stream = CamGear(source="https://www.youtube.com/watch?v=VIk_6OuYkSo", y_tube=True, time_delay=1, logging=True).start()  # YouTube Video URL as input
  File "C:\Users\CamfyVision\AppData\Local\Programs\Python\Python36\lib\site-packages\vidgear\gears\camgear.py", line 125, in __init__
    raise ValueError('YouTube Mode is enabled and the input YouTube Url is invalid!')

Bug: sys.stderr.close() throws ValueError bug in WriteGear class

Bug Description:

The use of sys.stderr.close() at this line:

sys.stderr.close()

breaks stderr and throws ValueError: I/O operation on closed file at the output. This also results in hidden tracebacks, since tracebacks are written to stderr, thereby making this bug difficult to identify.

Code to reproduce:

import sys
import traceback

from vidgear.gears import WriteGear
import numpy as np

try:
    np.random.seed(0)
    # generate random data for 10 frames
    random_data = np.random.random(size=(10, 1080, 1920, 3)) * 255
    input_data = random_data.astype(np.uint8)
    writer = WriteGear("garbage.garbage")
    writer.write(input_data)
    writer.close()
except Exception as e:
    print(e)
finally:
    einfo = sys.exc_info()
    print(''.join(traceback.format_exception(*einfo)))
    print('finally?')

This code results in a traceback that ultimately leads to ValueError: I/O operation on closed file at the output.

Error With YouTube Live

I'm running the example code to use a live YouTube stream as a video source, but am receiving an error:

"YouTube Mode is enabled and the input YouTube Url is invalid!"

I'm using the sample code verbatim, and only changed the YouTube URL to a live stream (https://youtu.be/Nk4HAS-HOr4) instead of the Rick Roll video.

I'm pretty sure all of my packages are up to date, including youtube-dl

Dropped support for Python 2.7 legacy

Hello everyone,

VidGear currently supports both Python 3 and legacy Python 2.7. However, support for Python 2.7 will be dropped in the upcoming major release, v0.1.6, as most of vidgear's critical dependencies have already migrated, or are in the process of migrating, their source code to Python 3. This issue is therefore a reminder for everyone using vidgear to start migrating to Python 3 as soon as possible.

[Proposal] Can NetGear Client receive input from multiple Servers?

[Enhancement] NetGear Client can receive input from multiple Servers and show them in a Grid View

Detailed Description

The same concept is shown here: https://www.pyimagesearch.com/2019/04/15/live-video-streaming-over-network-with-opencv-and-imagezmq/
Besides the frame, the NetGear Server could send other information, e.g. frame type, camera name; something like this: server.send(frame, dict(msg_type=2, context='camera continue frame')), and the NetGear Client could receive the data like this: frame, extra_data = client.recv()

Bug: Non-Blocking frame handling of Video Files in CamGear Class

Bug Insights

Currently, CamGear (or any common VidGear class) uses a separate thread to process frames from the given source at a certain high speed, one after another, say 50fps (50 frames per second). Now, for a video source, imagine we are performing a heavy computational task where frames are being processed at only 5fps. Under these conditions, due to multi-threaded frame capturing, CamGear will keep cycling frames from the source at 50fps in the background even while no one is requesting them, and will finally end up returning a NoneType frame at the output if the video file (as source) is of fixed length. This erroneous behavior also leads to the wrong frame being processed at any given instant, and to another undesired behavior called Frame Skipping.

Affected VideoCapture Streams:

  • This bug is present in any video stream of fixed length including network streams
  • This bug doesn't affect camera (H.W) devices since input is automatically managed by IO resource handler

Code to Reproduce Bug:

This code calls the sleep function for a duration of 2 seconds on each loop cycle to imitate a heavy computational task being performed, and counts the number of frames that get processed:

from vidgear.gears import CamGear
import cv2
import time

stream = CamGear(source='test.mp4').start()  # open any video file stream of fixed length

# infinite loop
frame_num = 0
while True:

    frame = stream.read()
    # read frames

    # check if frame is None
    if frame is None:
        # if True break the infinite loop
        print(frame_num)
        break

    # sleep for 2 seconds (imitating a heavy computational task)
    time.sleep(2)

    # show output window
    cv2.imshow("Output Frame", frame)

    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key-press
    if key == ord("q"):
        # if 'q' key-pressed break out
        break

    frame_num += 1

cv2.destroyAllWindows()
# close output window

stream.stop()
# safely close video stream

Current Behavior(Output)

Due to the above-mentioned bug, when this algorithm is run it will exit almost immediately without processing every frame, or skipped frames will be visible, depending upon the length of the input video. The final frame_num value will be around 0~10 (when a 20-120 second video is used), which is far less than the actual frame count.

TODO

  • Implement a Blocking Mode in CamGear Class with a threaded queue
  • Use collections.deque() for performance consideration
  • Add a test for this Mode
  • Fix Bugs and robust testing of this implementation.
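The Blocking Mode proposed above can be sketched with the stdlib alone: a bounded queue makes the capture thread block once its buffer is full, so a slow consumer never lets frames race ahead or run out in the background. The frame source here is a plain list standing in for a fixed-length video file (queue.Queue is used for brevity; collections.deque with a condition variable would behave similarly with less locking overhead):

```python
import queue
import threading

def capture_thread(frames, out_q):
    """Stand-in for CamGear's update loop reading a fixed-length file."""
    for f in frames:
        out_q.put(f)      # BLOCKS when the queue is full: no frame races ahead
    out_q.put(None)       # sentinel: end of stream

source = list(range(100))              # pretend 100-frame video file
q = queue.Queue(maxsize=4)             # small bounded buffer
t = threading.Thread(target=capture_thread, args=(source, q), daemon=True)
t.start()

processed = []
while True:
    frame = q.get()
    if frame is None:                  # end of file, reached strictly in order
        break
    # (heavy computation would happen here; the producer simply waits)
    processed.append(frame)
t.join()

print(len(processed))  # 100: every frame processed, none skipped
```

However slow the consumer is, every frame is delivered exactly once and in order, which is the behavior the Threaded Queue Mode aims for.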

The output video is slower than the input

Hi,

I don't know if I'm the only one experiencing this, but the output video after framewise processing using OpenCV is slower than the input video.

Basically the workflow is this:

  1. Read frame from stream;
  2. Apply OpenCV functions over it
  3. Write the frame back using Writer.

In my case, a one-minute video (4K 60FPS) ends up being two and a half minutes long.

[Proposal] OSX environment support for Travis CI tests

Detailed Description

VidGear does not, as of now, officially support macOS (OSX) systems/environments. This proposal is therefore about bringing official OSX environment support to VidGear by implementing an automated Travis CI test environment for it (similar to Linux).

Context

This proposal aims at bringing OSX Python support to VidGear by implementing automated Travis CI pytest runs similar to the Linux environments, and fixing the bugs encountered along the way, so that VidGear can work seamlessly on OSX systems too.

Your Environment

  • VidGear version: 0.16.0
  • Branch: Development
  • Python version: All
  • pip version: Latest
  • Operating System and version: Not applicable

How to record color video?

Question

I tried the example in the documentation. The gray image can be recorded, and the output video is fine.
But when I comment out the color-conversion code, the video seems corrupted.

# import libraries
from vidgear.gears import ScreenGear
from vidgear.gears import WriteGear
import cv2

options = {'top': 40, 'left': 0, 'width': 100, 'height': 100}  # define dimensions of screen w.r.t. the given monitor to be captured

output_params = {"-vcodec": "libx264", "-crf": 0, "-preset": "fast"}  # define (codec, CRF, preset) FFmpeg tweak parameters for writer

stream = ScreenGear(monitor=1, logging=True, **options).start()  # open live screencast on current monitor

writer = WriteGear(output_filename='Output.mp4', compression_mode=True, logging=True, **output_params)  # define writer with output filename 'Output.mp4'

# infinite loop
while True:

    frame = stream.read()
    # read frames

    # check if frame is None
    if frame is None:
        # if True break the infinite loop
        break

    # {do something with frame here}
    # gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # write a modified frame to writer
    writer.write(frame)

    # show output window
    cv2.imshow("Output Frame", frame)

    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key-press
    if key == ord("q"):
        # if 'q' key-pressed break out
        break

cv2.destroyAllWindows()
# close output window

stream.stop()
# safely close video stream
writer.close()
# safely close writer

Writegear - use hardware encoder

WriteGear uses the libx264 or libx265 encoders. Is it possible to make use of existing hardware encoders, for example the h264_vaapi encoder on an Intel CPU? I ask because, using libx264, my CPU saturates and the output video drops frames. In tests I have done outside of WriteGear, using the h264_vaapi encoder significantly reduces the CPU load.
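WriteGear's compression mode appears to pass output_params through to FFmpeg, so a hardware encoder can in principle be requested the same way as libx264. A hedged config sketch follows; the device path, the filter chain, and even whether your FFmpeg build includes h264_vaapi are assumptions you should verify against WriteGear's logs:

```python
# assumed mapping of the usual FFmpeg VAAPI flags onto WriteGear's
# output_params dict; every value here is system-dependent
output_params = {
    "-vaapi_device": "/dev/dri/renderD128",  # your VAAPI render node (assumption)
    "-vcodec": "h264_vaapi",                 # hardware H.264 encoder
    "-vf": "format=nv12,hwupload",           # upload raw frames to the GPU first
}

# intended usage (untested sketch):
# writer = WriteGear(output_filename="Output.mp4", logging=True, **output_params)
print(sorted(output_params))
```

Checking `ffmpeg -encoders | grep vaapi` first confirms the encoder exists in your build before wiring it into WriteGear.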

Speed up video sending by compressing the image

Accelerate video streaming by compressing information

Detailed Description

NetGear works correctly for sending video between clients and servers, but its speed could be improved in situations where high-quality video is handled or the network is saturated, by reducing the size of the video sent through compression.

What I propose is to internally add a preprocessing step that compresses the image in JPEG format, with a variable indicating the quality to compress to; it could even be compressed as PNG.

Something like this: before sending, apply the compression:

encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 90]
result, encimg = cv2.imencode('.jpg', img, encode_param)

and upon receiving, decompress it:

decimg = cv2.imdecode(encimg, 1)

More information here:
https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga292d81be8d76901bff7988d18d2b42ac

Context

I think it would be helpful for transmitting video whose size in memory causes slow delivery, and for processes where image quality is not of great importance.
It would not affect the existing VidGear APIs much, since compression and decompression happen point to point, on send and on receive.
It can benefit other users by helping them speed up the process of sending information.

Your Environment

  • VidGear version: v0.1.5
  • Branch: Master
  • Python version: 3.5
  • pip version: 19.3.1
  • Operating System and version: Linux

Any Other Important Information

enhancement

How can we make frame transmission faster?

I want to use this project's NetGear. When I use a 10x10 pixel frame, the sending time is 0.0065 s per frame. With 600x600 pixels it is 0.095 s, and with 1000x1000 it is 0.26 s. This is too slow; I want a real-time display of the transmitted frames. What can we do to speed up the transmission?

I tried to use asyncio to improve the transmission speed, but I failed. The following is my code:

async def send_img(server):
    # print('send the number is: ', queue.qsize())
    while True:
        if not queue.empty():
            frame = queue.get()
            print('sending....')
            frame = cv2.resize(frame, (600, 600))  # cv2.resize() requires a target size; (600, 600) assumed here
            server.send(frame)


def main():
    print(os.getpid())
    pid = os.fork()
    if pid == 0:
        detection()
    else:
        options = {'copy': False, 'track': False}
        # change following IP address '192.168.1.xxx' with yours
        server = NetGear(address='192.168.10.189', port='5454', protocol='tcp', pattern=0, receive_mode=False,
                         logging=True, **options)  # define NetGear server at your system IP address

        loop = asyncio.get_event_loop()
        loop.run_until_complete(send_img(server))

This is a code segment; can anyone give some suggestions?
Thank you

Camera_num option in PiCamera

Question

I'm using a Raspberry Pi Compute module with two RPI cameras. I'm using the example code for the Raspberry Pi camera and I'm trying to use the camera_num option from here:

class picamera.PiCamera(camera_num=0, stereo_mode='none', stereo_decimate=False, resolution=None, framerate=None, sensor_mode=0, led_pin=None, clock_mode='reset', framerate_range=None)

I tried putting it in

stream = PiGear(camera_num=1, resolution=...

and also in the options dictionary, but both times it says:
'PiCamera' object has no attribute 'camera_num'.

Is this supported? If I use the PiCamera module directly, this works.

Acknowledgment

  • A brief but descriptive Title of your issue.
  • I have searched the issues for my issue and found nothing related or helpful.
  • I have read the FAQ.
  • I have read the Wiki.

Context

I'm trying to synchronize the two Raspberry Pi cameras. If I use the PiCamera library and thread them, they are still slightly out of sync. I'm hoping to use this library to simply write a timestamp to a log file for each video, and then I'll attempt to sync them offline. The built-in stereoscopic option in the PiCamera library doesn't support anything over about 640x480, and I'm looking to record at 1920x1080 on each camera.

Your Environment

  • VidGear version: 0.1.5
  • Branch:
  • Python version: 3.7
  • pip version: 3
  • Operating System and version: Raspbian

Optional

Processing (OpenCV) and streaming multiple IP cameras to the client

Since I am new to this library, I would like someone to help/advise me on how to stream video from multiple connected IP cameras to a web client and process it (e.g. face detection, motion detection, etc.).

Your Environment

  • VidGear version: latest
  • Branch:
  • Python version: 3.6.8
  • pip version:
  • Operating System and version: Raspberry Pi 3 and Pi 4 (4 GB)

Please help

module 'zmq' has no attribute '0'

When I use NetGear, I get this error: module 'zmq' has no attribute '0'
I installed:
pyzmq==17.1.2
numpy==1.15.4
opencv-contrib-python==3.4.2.16
vidgear==0.1.5

And I run this server in a container based on Ubuntu 16.04.

here is my code:

# import libraries
from vidgear.gears import NetGear
import cv2

stream = cv2.VideoCapture('hamilton_clip.mp4') #Open any video stream

options = {'flag': 0, 'copy': False, 'track': False}  # 'flag' must be an integer ZMQ flag, not the string '0'
#change following IP address '192.168.1.xxx' with yours
server = NetGear(address = '192.168.10.189', port = '5454', protocol = 'tcp',  pattern = 0, receive_mode = False, logging = True, **options) #Define netgear server at your system IP address.

# infinite loop until [Ctrl+C] is pressed
while True:
	try: 
		(grabbed, frame) = stream.read()
		# read frames

		# check if frame is not grabbed
		if not grabbed:
			#if True break the infinite loop
			break

		# do something with frame here

		# send frame to server
		server.send(frame)
	
	except KeyboardInterrupt:
		#break the infinite loop
		break

# safely close video stream
stream.release()
# safely close server
server.close()

It comes from the NetGear demo.

How to add audio to WriteGear class?

Many thanks for your VidGear code. I am processing webcam video input in real time using OpenCV (on Ubuntu), then writing the output to a file using
WriteGear (no audio at this point), and all works fine.
The problem comes when I try to add the audio from the webcam to the output. I thought
inserting something simple like
-f alsa -ac 1 -i hw:0
so the FFmpeg command line looks like
ffmpeg -y -f rawvideo -vcodec rawvideo -s 1280x720 -pix_fmt gray -i - -f alsa -ac 1 -i hw:0 -vcodec libx264 -crf 0 -preset fast output.mp4
at the point where the FFmpeg command string is built (line 318 in WriteGear) would do the trick, but that
simply throws an FFmpeg error. However I structure the FFmpeg command, I
can't get FFmpeg to accept the command string.
Is it possible to add audio to the WriteGear FFmpeg output?
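For reference, FFmpeg requires each input's own options to appear before that input's `-i`, and all output options to come after the last input. A minimal sketch of that ordering, assembled as a Python argument list; the ALSA device `hw:0` and the frame geometry are taken from the question above, not a verified WriteGear configuration:

```python
# Sketch of the FFmpeg argument order needed to mux ALSA audio with
# raw video piped on stdin. Each input's options precede its own -i.
video_input = ["-f", "rawvideo", "-vcodec", "rawvideo",
               "-s", "1280x720", "-pix_fmt", "gray", "-i", "-"]
audio_input = ["-f", "alsa", "-ac", "1", "-i", "hw:0"]
# All output options come after the last input.
output_opts = ["-vcodec", "libx264", "-crf", "0", "-preset", "fast",
               "output.mp4"]

cmd = ["ffmpeg", "-y"] + video_input + audio_input + output_opts
```

If the command still fails, check whether WriteGear is inserting its own options between these groups, since any reordering breaks the input/output option association.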

Replace print with a logging module

Detailed Description

Proposal to completely replace print statements with Python's logging module, and to add severity levels for dynamic usage.

Context

The proposal is to replace printed errors with Python's logging module. The logging library has a lot of useful features:

  • Easy to see where and when (even what line no.) a logging call is being made from.
  • You can log to files, sockets, pretty much anything, all at the same time.
  • You can differentiate your logging based on severity.

Print has none of these. Also, if vidgear is meant to be imported by other Python tools, it's bad practice to print things to stdout, since the user likely won't know where the print messages are coming from. With the logging module, a user can choose whether or not to propagate logging messages from vidgear.
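As an illustration of the proposal, a library-style logger with a `NullHandler` lets importing applications opt in to vidgear's messages; the logger name below is hypothetical:

```python
import logging

# Library-style logger; the name "vidgear.gears.camgear" is hypothetical.
logger = logging.getLogger("vidgear.gears.camgear")
logger.addHandler(logging.NullHandler())  # silent unless the app configures logging

def open_stream(source):
    """Toy function showing severity levels instead of print()."""
    logger.debug("opening source %r", source)
    if source is None:
        logger.error("no source given")
        return False
    logger.info("source %r opened", source)
    return True
```

An application would enable the output with `logging.basicConfig(level=logging.DEBUG)`, or silence vidgear entirely by disabling propagation on the `"vidgear"` logger.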

Your Environment

  • VidGear version: All
  • Branch: Testing
  • Python version: All 3+
  • pip version: Latest
  • Operating System and version: All

WriteGear from ScreenGear

Follow this code:

# IMPORT  
from vidgear.gears import ScreenGear
from vidgear.gears import WriteGear
import cv2

# SHOW WINDOW
cv2.namedWindow('Output_Frame',cv2.WINDOW_NORMAL)

# SCREEN
options = {'top': 300, 'left': 300, 'width': 300, 'height': 200} 
stream = ScreenGear(monitor=1, logging=True, **options).start()

# WRITE
output_params = {"-input_framerate":25}
writer = WriteGear(output_filename = 'video_Screen2Write_example.mp4', logging = True, **output_params) 

# MAIN LOOP
while True: 
	
    # read frame from SCREEN
    frame = stream.read()
    if frame is None:
        break

    # show frame in window and WRITE
    cv2.imshow("Output_Frame", frame)
    writer.write(frame)

    # if 'q' then EXIT
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break


# CLOSE ALL
cv2.destroyAllWindows()
stream.stop()
writer.close()

After running for 30 seconds, press "q" to stop. Then open the saved video file and check Properties → Duration: it will be much more than 30 seconds, at a framerate of 25.
(In fact, during the writing process I see speed > 1.0x and fps > 25.)

I suppose the reason is that my writer is faster than 25 fps, so it captures the screen at a higher framerate, generating a longer output video.


NOTE:
If I set output_params = {"-input_framerate": stream.framerate}, where stream = ScreenGear(monitor=1, logging=True, **options).start(),

I get:
AttributeError: 'ScreenGear' object has no attribute 'framerate'


How can I write video taken from the screen with temporal coherence, at a fixed framerate?

FFmpeg backend

FFmpeg parameters for opencv capture

I have started looking through CamGear and am hoping it will allow more control of the FFmpeg backend.

Question

Is it possible to launch an OpenCV capture object with an FFmpeg command?

The reason is that I would like to use cuvid/h264_cuvid (hardware decoding). I know you can partly do it with OPENCV_FFMPEG_CAPTURE_OPTIONS="video_codec;h264_cuvid|rtsp_transport;tcp", but this doesn't give access to all the other FFmpeg features.

I would also like to pipe that capture to another .mp4 container with codec copy, so we don't waste resources, and potentially map it to another sink like an ffserver feed.

Regards, Andrew
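The environment-variable route mentioned above only takes effect if the variable is set before the capture is opened, since OpenCV's FFmpeg backend reads it at capture creation. A minimal sketch; the RTSP source is a placeholder:

```python
import os

# Must be set *before* constructing cv2.VideoCapture / CamGear, because
# OpenCV's FFmpeg backend reads it when the capture is opened.
os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = (
    "video_codec;h264_cuvid|rtsp_transport;tcp"
)

# stream = CamGear(source="rtsp://<camera-url>").start()  # placeholder source
```

Options not covered by this variable (container remuxing, extra output sinks) still need a separate FFmpeg process.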

[Proposal] Crop and Zoom feature for Stabilizer class

Detailed Description

A new parameter to handle cropping and zooming frames, in order to reduce the black borders that arise when stabilization is too noticeable.

Context

Currently the border_size parameter in the Stabilizer class only adds a border to the output frame to reduce the effect of warping during stabilization. This proposal is to additionally introduce a new parameter that can handle cropping and zooming frames to reduce the black borders (similar to the feature available in Adobe After Effects).

Your Environment

  • VidGear version: 0.1.6-dev
  • Branch: development
  • Python version: all 3+
  • pip version: all
  • Operating System and version: all

Remove duplicate code to import MSS

Detailed Description

The code to import the MSS module is duplicated. You first try to import OS-specific code, and if no OS matched, you still do from mss import mss.
But the current code duplicates the mss.mss() factory, which already handles OS-specific imports for you :)

The current code simply doubles the checks, and the result would be the same in any case (when the OS is not handled).

The patch would be quite simple:

-			import platform			
-			if platform.system() == 'Linux':
-				from mss.linux import MSS as mss
-			elif platform.system() == 'Windows':
-				from mss.windows import MSS as mss
-			elif platform.system() == 'Darwin':
-				from mss.darwin import MSS as mss
-			else:
-				from mss import mss
+			from mss import mss

Are you interested in a PR?

BTW I am the MSS author, glad to see the module in use there 👍

Context

Your Environment

  • VidGear version: 0.1.5
  • Branch: all
  • Python version: all
  • pip version: all
  • Operating System and version: all

Any Other Important Information

How to synchronize between two cameras?

Thank you for that great repository.

Is it possible to have two cameras taking images synchronized with each other? Or will there be an offset due to the multithreaded nature?

Multi-camera frame refresh issue

Description

I have four USB cameras connected to an RPi 4. The while loop captures a fresh frame from each camera on every iteration. The issue is that 3 of the 4 cameras always capture a fresh frame; sometimes the fourth camera does for a few frames, and then it stops. I have no idea what's causing this issue.

Acknowledgment

  • A brief but descriptive Title of your issue
  • I have searched the issues for my issue and found nothing related or helpful.
  • I have read the FAQ.
  • I have read the Wiki.
  • I have read the Contributing Guidelines.

Environment

  • VidGear version: 0.14.0
  • Branch:
  • Python version: 3.7
  • pip version: 0.18.1
  • Operating System and version: Raspbian Buster

Expected Behavior

All four cameras should capture a fresh frame each loop

Actual Behavior

Three of the four cameras capture a fresh frame.

Possible Fix

I know a lot of workarounds, but have no idea of a fix.

Steps to reproduce

(Write your steps here:)

You'd likely need an RPi 4 and these same ELP USB cameras. The odd thing is that sometimes the fourth camera works for the first few frames and then stops. Also, I've placed markers throughout the loop to make sure the variable holding the frame data is cleared on each iteration, yet somehow the Python script presents the same image over and over.
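To confirm whether the fourth camera has really stopped refreshing (rather than, say, the display lagging), one cheap check is to fingerprint each frame buffer and count consecutive repeats. A sketch independent of VidGear; any bytes-like frame buffer works:

```python
import hashlib

def frame_digest(frame_bytes):
    """Cheap fingerprint of a frame buffer (e.g. frame.tobytes())."""
    return hashlib.blake2b(frame_bytes, digest_size=8).hexdigest()

def count_stale(digests):
    """Count how many frames in a sequence are identical to the
    previous one; a run of repeats means the camera stopped refreshing."""
    stale = 0
    prev = None
    for d in digests:
        if d == prev:
            stale += 1
        prev = d
    return stale
```

Logging the per-camera digest each iteration would show exactly when the fourth camera's frames freeze.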

Optional

How to recover from picamera I/O operation on closed file?

Question

Sometimes, when fetching frames from PiGear in an infinite loop, my program encounters this async exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/vidgear/gears/pigear.py", line 158, in update
    for stream in self.stream:
  File "/usr/local/lib/python3.7/dist-packages/picamera-1.13-py3.7.egg/picamera/camera.py", line 1889, in capture_continuous
    if not encoder.wait(self.CAPTURE_TIMEOUT):
  File "/usr/local/lib/python3.7/dist-packages/picamera-1.13-py3.7.egg/picamera/encoders.py", line 395, in wait
    self.stop()
  File "/usr/local/lib/python3.7/dist-packages/picamera-1.13-py3.7.egg/picamera/encoders.py", line 419, in stop
    self._close_output()
  File "/usr/local/lib/python3.7/dist-packages/picamera-1.13-py3.7.egg/picamera/encoders.py", line 349, in _close_output
    mo.close_stream(output, opened)
  File "/usr/local/lib/python3.7/dist-packages/picamera-1.13-py3.7.egg/picamera/mmalobj.py", line 428, in close_stream
    stream.flush()
  File "/usr/local/lib/python3.7/dist-packages/picamera-1.13-py3.7.egg/picamera/array.py", line 237, in flush
    super(PiRGBArray, self).flush()
ValueError: I/O operation on closed file.

When it happens, there seems to be no obvious way for the caller program to know about it, and there are no instructions on how to recover from this exception. Is there any way to detect or recover from this situation?
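Until the library surfaces this failure, one defensive pattern is to treat a None frame (or an I/O error) as a dead stream and rebuild it. A sketch with a hypothetical `stream_factory` callable that returns a fresh PiGear-like object exposing `read()`/`stop()`:

```python
def read_with_restart(stream_factory, max_restarts=3):
    """Yield frames from a stream; if the stream dies (read() raises
    or returns None), rebuild it via stream_factory, up to
    max_restarts times. stream_factory is a hypothetical callable
    returning an object with read()/stop()."""
    restarts = 0
    stream = stream_factory()
    while True:
        try:
            frame = stream.read()
        except (ValueError, OSError):
            frame = None  # treat the I/O error as a dead stream
        if frame is None:
            try:
                stream.stop()
            except Exception:
                pass  # stream may already be closed
            if restarts >= max_restarts:
                return
            restarts += 1
            stream = stream_factory()
            continue
        yield frame
```

The consuming loop then simply iterates over the generator and decides what to do when it ends (all restarts exhausted).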

Acknowledgment

  • A brief but descriptive Title of your issue.
  • I have searched the issues for my issue and found nothing related or helpful.
  • I have read the FAQ.
  • I have read the Wiki.

Context

Your Environment

  • VidGear version: 0.1.5
  • Branch: PyPi
  • Python version: 3.7.3
  • pip version: 18.1
  • Operating System and version: Raspbian Buster

How to set webcam resolution?

Thank you very much for this amazing work, it came just when I needed!

I'm trying to set the resolution to 1080p but it isn't working. My code looks like this:

Read the video files

options = {"hflip": True, "exposure_mode": "auto", "iso": 'auto', 
           "exposure_compensation": 'auto', "awb_mode": "horizon",
           "sensor_mode": 0} 

video1 = VideoGear(enablePiCamera = False, resolution= (1920, 1080),
                   framerate=24, time_delay=2, **options).start() 

Support for rtmp broadcast

Is there any way I can implement RTMP broadcast using your API, NetGear for example?

Question

Acknowledgment

  • A brief but descriptive Title of your issue.
  • I have searched the issues for my issue and found nothing related or helpful.
  • I have read the FAQ.
  • I have read the Wiki.

Context

I already have an RTMP server running and a VLC RTMP subscriber. Now I need an RTMP publisher using your library or OpenCV.

Your Environment

  • VidGear version: 0.1.5
  • Branch:
  • Python version: 3.6
  • pip version: 19.1.1
  • Operating System and version: ubuntu 18.04

Optional

The solution I expect would look like this:

from vidgear.gears import VideoGear
from vidgear.gears import NetGear

stream = VideoGear(source=0).start() 
options = {'flag': 0, 'copy': False, 'track': False}
server = NetGear(address = '127.0.0.1', port = '1935', protocol = 'rtmp',  pattern = 0, receive_mode = False, logging = True, **options) 

while True:
	try: 
		frame = stream.read()
		if frame is None:
			break
		server.send(frame)
	
	except KeyboardInterrupt:
		break

stream.stop()
server.close()
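Note that NetGear speaks ZeroMQ patterns, not RTMP, so `protocol = 'rtmp'` as sketched above won't work; an RTMP publisher is usually an FFmpeg job (raw frames piped to ffmpeg with the FLV muxer, which is what a WriteGear-style approach would do). A sketch of the required command; the URL and frame geometry are placeholders:

```python
def rtmp_publish_cmd(width, height, fps, url):
    """Assemble an FFmpeg command that reads raw BGR frames on stdin
    and publishes them to an RTMP server (URL/size are placeholders)."""
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "bgr24",
        "-s", f"{width}x{height}", "-r", str(fps), "-i", "-",
        "-c:v", "libx264", "-preset", "veryfast",
        "-f", "flv", url,  # RTMP requires the FLV muxer
    ]

cmd = rtmp_publish_cmd(640, 480, 25, "rtmp://127.0.0.1:1935/live/stream")
```

Frames from VideoGear would then be written to the subprocess's stdin as raw bytes (`frame.tobytes()`).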

Pulling Youtube Video

I copied and pasted your code from GitHub, but it fails to work. It seems to be returning some None frames, similar to feeding cv2 with links generated using Pafy.

CODE:

import cv2
from vidgear.gears import CamGear

stream = CamGear(source='https://youtu.be/dQw4w9WgXcQ', y_tube=True, time_delay=1, logging=True).start() # YouTube Video URL as input

# infinite loop
while True:

    frame = stream.read()
    # read frames

    # check if frame is None
    if frame is None:
        #if True break the infinite loop
        break

    # do something with frame here

    cv2.imshow("Output Frame", frame)
    # Show output window

    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key-press
    if key == ord("q"):
        #if 'q' key-pressed break out
        break

cv2.destroyAllWindows()
# close output window

stream.stop()
# safely close video stream.

OUTPUT:

python vidgear_test.py 
Title: Rick Astley - Never Gonna Give You Up (Video)
Extension: webm

pipenv config:

[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[dev-packages]

[packages]
numpy = "*"
opencv-python = "*"
jupyter = "*"
tensorflow = "*"
pillow = "*"
matplotlib = "*"
youtube-dl = "*"
pafy = "*"
vidgear = "*"

[requires]
python_version = "3.6"

Video didn't write when packaging the program with PyInstaller

I use PyInstaller to package my app as a one-file, windowed application, and I use a path function I wrote myself:
https://github.com/Seraphli/TestField/blob/master/Examples/logging_and_tqdm/util.py#L4-L43
I create a config file in the subfolder config, and that works fine.
I then pass the absolute path to WriteGear and it creates the video file. But after closing the WriteGear, the video file is 0 KB; it seems the video is never written to it.

# import libraries
from vidgear.gears import ScreenGear
from vidgear.gears import WriteGear
import cv2
import time

options = {'top': 40, 'left': 0, 'width': 100,
           'height': 100}  # define dimensions of screen w.r.t to given monitor to be captured

output_params = {"-vcodec": "libx264", "-crf": 0,
                 "-preset": "fast"}  # define (Codec,CRF,preset) FFmpeg tweak parameters for writer

stream = ScreenGear(monitor=1,
                    logging=True).start()  # Open Live Screencast on current monitor

writer = WriteGear(output_filename='Output.mp4', compression_mode=True,
                   logging=True,
                   **output_params)  # Define writer with output filename 'Output.mp4'

# infinite loop
while True:

    frame = stream.read()
    # read frames

    # check if frame is None
    if frame is None:
        # if True break the infinite loop
        break

    # {do something with frame here}
    # gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # write a modified frame to writer
    writer.write(frame)

    # Show output window
    cv2.imshow("Output Frame", frame)

    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key-press
    if key == ord("q"):
        # if 'q' key-pressed break out
        break

    # delay of about 0.1 seconds
    time.sleep(0.1)

cv2.destroyAllWindows()
# close output window

stream.stop()
# safely close video stream
writer.close()
# safely close writer

I create a one file app for this small demo.
https://drive.google.com/file/d/1Wgd4bm9bzK079TL73XfEuwGxsZpLcExt/view?usp=sharing
