
watsor's People

Contributors

asmirnou


watsor's Issues

Zones as config?

Would it be possible to set up zones via the config with coordinates, rather than fiddling with images?
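There's nothing like this in Watsor today (zones come from a mask image, as the comments in the sample configs further down this page note). Purely to illustrate the request, a coordinate-based syntax might look something like the sketch below, where the polygon-style zones block is entirely invented:

# Hypothetical syntax only: named zones with polygon coordinates are an
# invented illustration of this request, not a supported Watsor option.
cameras:
  - porch:
      width: 1280
      height: 720
      zones:
        - name: driveway
          polygon: [[0, 400], [640, 380], [1280, 500], [1280, 720], [0, 720]]
      detect:
        - car:
            zones: [driveway]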

Jetson Nano 2GB - Only CPU Detection Works!

I followed the guide posted here to install Watsor on my Jetson Nano 2GB.

The Jetson Nano 2GB struggles to build the TensorRT model; I therefore followed @asmirnou's advice and used his prebuilt model posted on Google Drive.

Despite having copied the gpu_fp16.buf file to the model/ directory, the detector automatically defaults to CPU. How do I get Watsor to use the TensorRT engine instead of CPU?

This is my metrics output:

{ "cameras": [ { "name": "porch", "fps": { "decoder": 15.1, "sieve": 7.0, "visual_effects": 0.0, "snapshot": 7.0, "mqtt": 7.0 }, "buffer_in": 10, "buffer_out": 0 } ], "detectors": [ { "name": "CPU", "fps": 7.0, "fps_max": 7, "inference_time": 137.2 } ] }

This is the line I used to run Watsor:

python3 -m watsor.main_for_gpu --config config/config.yaml --model-path model/

This is the relevant section from my config.yaml:

ffmpeg:
  decoder:
    - -hide_banner
    - -loglevel
    -  error
    - -nostdin
    - -fflags
    -  nobuffer
    - -flags
    -  low_delay
    - -fflags
    -  +genpts+discardcorrupt
    - -c:v
    -  h264_nvv4l2dec
    - -i
    - -f
    -  rawvideo
    - -pix_fmt
    -  rgb24

detect:
  - person:
      area: 20                    # Minimum area of the bounding box an object should have in
                                  # order to be detected. Defaults to 10% of entire video resolution.
      confidence: 60              # Confidence threshold that a detection is what it's guessed to be,
                                  # otherwise it's ruled out. 50% if not set.
  - car:
      zones: [1, 3, 5]            # Limit the zones on mask image, where detection is allowed.
                                  # If not set or empty, all zones are allowed.
                                  # Run zones.py -m mask.png to figure out a zone number.
  - truck:

cameras:
  - porch:                        # Camera name
      width: 1280                 #
      height: 720                 # Video feed resolution in pixels

      input: "rtsp://[redacted]:[redacted]@192.168.xx.xxx:554/cam/realmonitor?channel=1&subtype=0"

      detect:                     # The values below override
        - person:                 # detection defaults for just
        - car:                    # this camera

My Environment:

  • Jetson Nano 2GB
  • Linux jetson 4.9.201-tegra, L4T R32 (release), REVISION: 5.0
  • JetPack 4.5
  • TensorRT 7.1.3.0-1+cuda10.2

Add support for Intel Neural Compute Stick 2

Watsor supports multiple USB accelerators and balances the load equally among them. With at least one accelerator connected, Watsor uses the hardware for computation, reducing the load on the CPU. This is a feature request to add a detector that performs inference on the Intel Neural Compute Stick 2. Software support comes from the open-source distribution of OpenVINO™.


There are no plans to work on this feature at the moment unless someone decides to contribute to the project. Vote on this issue to raise interest in it. I'll consider adding support if it gets 100 positive reactions. If someone can ship me the device for free, I'll add support sooner.

Coral Stick

Not sure why, but my Coral USB stick doesn't seem to work. I'm seeing errors like this in the logs.

watchdog WatchDog DEBUG : Thread frontlawn (Snapshot) is alive
watchdog WatchDog DEBUG : Process detector1 (ObjectDetector) is alive
detector1 ObjectDetector ERROR : Detection failure

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/watsor/detection/detector.py", line 86, in _run
    with detector_class(*detector_args) as object_detector:
  File "/usr/local/lib/python3.6/dist-packages/watsor/detection/edge_tpu.py", line 15, in __init__
    device_path=device_path)
  File "/usr/lib/python3/dist-packages/edgetpu/detection/engine.py", line 71, in __init__
    super().__init__(model_path, device_path)
  File "/usr/lib/python3/dist-packages/edgetpu/basic/basic_engine.py", line 90, in __init__
    model_path, device_path)
RuntimeError: Error in device opening (/sys/bus/usb/devices/1-5)!

I see a nameless detector noted in the metrics endpoint:

    "detectors": [
        {
            "name": "",
            "fps": 0.0,
            "fps_max": 0.0,
            "inference_time": 0.0
        }
    ]

The USB bus is mounted into the container via:

/dev/bus/usb:/dev/bus/usb

and I can confirm from the host that it's mounted.
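Errors like RuntimeError: Error in device opening often come down to USB permissions inside the container, or to the Edge TPU re-enumerating on the bus after its firmware loads, rather than to anything in Watsor itself. Purely as a sketch (the image tag, host paths and the privileged flag are assumptions, not a confirmed fix), a Compose service that passes the whole USB bus through might look like:

# Hypothetical docker-compose.yml fragment; image tag and host paths are
# assumptions mirroring the docker run examples elsewhere on this page.
version: "3"
services:
  watsor:
    image: smirnou/watsor:latest
    privileged: true              # assumption: rules out USB permission problems
    shm_size: 512m
    ports:
      - "8080:8080"
    volumes:
      - /etc/watsor:/etc/watsor:ro
      - /dev/bus/usb:/dev/bus/usb # whole bus, so the stick stays reachable after re-enumeration

If that helps, privileged: true can usually be narrowed down to a more specific devices: mapping afterwards.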

Jetson Nano 2GB

Good day. When running the Docker Jetson image:

docker run -t -i \
    --rm \
    --env LOG_LEVEL=info \
    --volume /etc/localtime:/etc/localtime:ro \
    --volume /etc/watsor/config:/etc/watsor:ro \
    --publish 8080:8080 \
    --shm-size=512m \
    --runtime nvidia \
    smirnou/watsor.jetson:latest

I am getting an error:

Building TensorRT engine. This may take few minutes.
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/watsor/engine.py", line 80, in
    model_width=args.model_width, model_height=args.model_height)
  File "/usr/local/lib/python3.6/dist-packages/watsor/engine.py", line 21, in build_engine
    builder.max_workspace_size = 1 << 30
AttributeError: 'tensorrt.tensorrt.Builder' object has no attribute 'max_workspace_size'
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.6/dist-packages/watsor/main_for_gpu.py", line 24, in
    ], check=True)
  File "/usr/lib/python3.6/subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['python3', '-u', '/usr/local/lib/python3.6/dist-packages/watsor/engine.py', '-i', '/usr/share/watsor/model/gpu.uff', '-o', '/usr/share/watsor/model/gpu.buf', '-p', '16']' returned non-zero exit status 1.
vnet@vnet-jetson:~$ AttributeError: 'tensorrt.tensorrt.Builder' object has no attribute 'max_workspace_size'

Can you help me, please?

Broken Pipe

Hi,

Running the 64-bit Pi 4 add-on on a Pi 4 with a Coral USB stick. I get the error below on startup, or sometimes soon afterwards. The metrics then all report 0 and there is no camera feed on the UI page.

mount: /tmp/tmpmount/console: special device /dev/console does not exist.
umount: /dev: target is busy.
MainThread       werkzeug                 INFO    : Listening on ('0.0.0.0', 8080)
MainThread       root                     INFO    : Starting Watsor on bf72466c-watsor.pi4 with PID 7
front            FFmpegDecoder            INFO    : av_interleaved_write_frame(): Broken pipe
front            FFmpegDecoder            INFO    :     Last message repeated 61 times
front            FFmpegDecoder            INFO    : Error writing trailer of pipe:: Broken pipe

[FR] Multiple Redundant Detectors

I'm currently using Frigate (on Coral USB) + Deepstack (on CPU) + HA to get mobile notifications with a snapshot and video of people detected on my cameras. I combine Frigate with Deepstack because TFLite models raise too many false positives. Coral is great for keeping CPU load negligible, but it is very limited in detector capability and tuning; using the CPU just to confirm the Coral detections keeps CPU usage low. Combining multiple detectors as a pipeline makes false positives disappear for me.

Would it be possible to run multiple detectors on the same event/image to confirm a detection?
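Purely to illustrate the request (Watsor has no such option today; every key below except confidence is invented), a cascaded-detector config might look like:

# Hypothetical syntax only: 'cascade' and 'detector' are invented keys
# illustrating this feature request, not supported Watsor options.
detect:
  - person:
      confidence: 50              # first pass on the Coral: fast, but more false positives
      cascade:
        detector: cpu             # second pass re-runs the same frame on the CPU model
        confidence: 70            # keep the event only if the CPU model also agrees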

Home Assistant Add-On crashes at boot

I just installed the Home Assistant Add-On from the repository, and after configuring the bare minimum in /config/watsor/config.yaml I get this error:

jq: error: Could not open file /data/options.json: Permission denied
MainThread root ERROR : Either filename or data should be defined as input

Can't seem to change any configuration parameters to resolve this. For reference I'm running HASS on an x86 laptop inside a virtual machine and installed the standard (not RasPi) add-on. No problems currently with any other add-ons.

Running on snap images

How can I grab still images from a camera?

It would be interesting to run detection on grabbed single JPEG images rather than on a video stream. At a configured interval in seconds, ffmpeg could grab an image from the camera over HTTP. For example, UniFi cameras have a snapshot URL such as http://192.168.2.43/snap.jpeg.

ffmpeg error while decoding rtsp stream

Hello. I have three cameras all running h264 streams via RTSP. I have an NVIDIA 970 GPU. I'm looking to use the GPU for decoding the h264 stream as well as performing the object detection. I am seeing some decoding errors in the logs.

Building TensorRT engine. This may take few minutes.
TensorRT engine saved to /usr/share/watsor/model/gpu.buf
MainThread       werkzeug                 INFO    : Listening on ('0.0.0.0', 8080)
MainThread       root                     INFO    : Starting Watsor on 61057dbf1f8b with PID 1
inside           FFmpegDecoder            INFO    : [h264 @ 0x55a411e1dfe0] error while decoding MB 35 1, bytestream -27
backdoor         FFmpegDecoder            INFO    : [h264 @ 0x563a6b961340] error while decoding MB 11 29, bytestream -15
inside           FFmpegDecoder            INFO    : [h264 @ 0x55a411e1dfe0] error while decoding MB 8 4, bytestream -7
frontdoor        FFmpegDecoder            INFO    : [h264 @ 0x55770bab6a40] error while decoding MB 4 23, bytestream -7

Shelling into the container, I see cuvid as an option.

watsor@57b132249a48:/$ ffmpeg -hwaccels
ffmpeg version 3.4.8-0ubuntu0.2 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
  configuration: --prefix=/usr --extra-version=0ubuntu0.2 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100
  libavdevice    57. 10.100 / 57. 10.100
  libavfilter     6.107.100 /  6.107.100
  libavresample   3.  7.  0 /  3.  7.  0
  libswscale      4.  8.100 /  4.  8.100
  libswresample   2.  9.100 /  2.  9.100
  libpostproc    54.  7.100 / 54.  7.100
Hardware acceleration methods:
vdpau
vaapi
cuvid

nvidia-smi and nvtop on the host OS show Watsor leveraging the GPU.

The health endpoint shows the following:

{
    "cameras": [
        {
            "name": "inside",
            "fps": {
                "decoder": 5.1,
                "sieve": 5.1,
                "visual_effects": 0.0,
                "snapshot": 5.1,
                "mqtt": 5.1
            },
            "buffer_in": 0,
            "buffer_out": 0
        },
        {
            "name": "frontdoor",
            "fps": {
                "decoder": 5.1,
                "sieve": 5.1,
                "visual_effects": 0.0,
                "snapshot": 5.1,
                "mqtt": 5.1
            },
            "buffer_in": 0,
            "buffer_out": 0
        },
        {
            "name": "backdoor",
            "fps": {
                "decoder": 122.1,
                "sieve": 11.9,
                "visual_effects": 0.0,
                "snapshot": 11.9,
                "mqtt": 11.9
            },
            "buffer_in": 0,
            "buffer_out": 0
        }
    ],
    "detectors": [
        {
            "name": "GeForce GTX 970",
            "fps": 24.9,
            "fps_max": 191,
            "inference_time": 5.2
        }
    ]
}

My guess is it has something to do with what I'm passing as parameters to ffmpeg (see the note on -rtsp_transport after the config). Here is my config.yaml:

# Optional HTTP server configuration and authentication.
http:
  port: 8080


# Optional MQTT client configuration and authentication.
mqtt:
  host: 192.168.1.226
  port: 1883


# Default FFmpeg arguments for decoding video stream before detection and encoding back afterwards.
# Optional, can be overwritten per camera.
ffmpeg:
  decoder:
    - -hide_banner              # hide build options and library versions
    - -loglevel
    -  error
    - -nostdin
    - -hwaccel                   # These options enable hardware acceleration, check what's available with: ffmpeg -hwaccels
    -  cuvid
    - -hwaccel_output_format
    -  yuv420p
    - -fflags
    -  nobuffer
    - -flags
    -  low_delay
    - -fflags
    -  +genpts+discardcorrupt
    - -i                          # camera input field will follow '-i' ffmpeg argument automatically
    - -f
    -  rawvideo
    - -pix_fmt
    -  rgb24
    - -rtsp_transport             # try to prevent lost packets/frames via TCP
    -  tcp
  # encoder:                        # Encoder is optional, remove the entire list to disable.
  #   - -hide_banner
  #   - -loglevel
  #   -  error
  #   - -f
  #   -  rawvideo
  #   - -pix_fmt
  #   -  rgb24
  #   - -i                          # detection output stream will follow '-i' ffmpeg argument automatically
  #   - -an
  #   - -f
  #   -  mpegts
  #   - -vcodec
  #   -  libx264
  #   - -pix_fmt
  #   -  yuv420p
  #   - -vf
  #   - "drawtext='text=%{localtime\\:%c}': x=w-tw-lh: y=h-2*lh: fontcolor=white: box=1: [email protected]"


# Detect the following labels of the object detection model.
# Optional, can be overwritten per camera.
detect:
  - person:
      area: 10                    # Minimum area of the bounding box an object should have in
                                  # order to be detected. Defaults to 10% of entire video resolution.
      confidence: 50              # Confidence threshold that a detection is what it's guessed to be,
                                  # otherwise it's ruled out. 50% if not set.


# List of cameras and their configurations.
cameras:
  - inside:
      width: 640
      height: 480
      input: rtsp://admin:[email protected]:554/cam/realmonitor?channel=1&subtype=1
  - frontdoor:
      width: 704
      height: 480
      input: rtsp://admin:[email protected]:554/cam/realmonitor?channel=1&subtype=1
  - backdoor:
      width: 704
      height: 480
      input: rtsp://admin:[email protected]:554/cam/realmonitor?channel=1&subtype=1
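One detail that may matter here: ffmpeg applies an option to an input only when it appears before the -i it belongs to, so -rtsp_transport tcp at the end of the decoder list above is most likely ignored and the streams stay on UDP, which is a classic cause of "error while decoding MB ... bytestream" messages. A reordered decoder list to try (same arguments, only the position of -rtsp_transport changes):

ffmpeg:
  decoder:
    - -hide_banner
    - -loglevel
    -  error
    - -nostdin
    - -hwaccel
    -  cuvid
    - -hwaccel_output_format
    -  yuv420p
    - -fflags
    -  nobuffer
    - -flags
    -  low_delay
    - -fflags
    -  +genpts+discardcorrupt
    - -rtsp_transport             # input option: must precede -i to affect the RTSP input
    -  tcp
    - -i
    - -f
    -  rawvideo
    - -pix_fmt
    -  rgb24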

Car detection

First of all, this is great. I had been waiting on Frigate to get GPU support for a while now, but that effort seems to be languishing (and I believe you'd contributed it). The docs for this project are significantly better, so I've been looking at switching. I have a pretty good working setup with Deepstack currently, but would like some of Watsor's functionality (in particular, its good Home Assistant integration).

I've got some cameras set up, and all seems to be working well. Person detection works very well, with low latency, but for some reason I can't seem to trigger a match for car or truck with either stationary or moving vehicles. My config is pretty sparse:

...
# Detect the following labels of the object detection model.
# Optional, can be overwritten per camera.
detect:
  - person:
      confidence: 60
  - car:
  - truck:


# List of cameras and their configurations.
cameras:
  - pumphouse:
      width: 640
      height: 480
      input: rtsp://foo:[email protected]:554/Streaming/Channels/2/
...

Any pointers?
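One thing that might explain it: the sample configs on this page note that area defaults to 10% of the frame and confidence to 50%, and vehicles that are far away or only partly in frame can easily fall below both thresholds. A sketch of values to experiment with (the numbers are guesses, not known-good settings):

detect:
  - person:
      confidence: 60
  - car:
      area: 2                     # guess: allow far smaller boxes than the 10% default
      confidence: 40              # guess: accept lower-confidence vehicle detections
  - truck:
      area: 2
      confidence: 40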

Concatenating to Output File

Great job with this. Super easy to use. I'm successfully using it with an Nvidia Pascal GPU, pushing MQTT to Home Assistant.

I'm struggling with something I think should be simple. I don't think this is a bug, just a lack of understanding on my part.

I'd like the filename to include the date and time so recordings can be easily found, but I can't seem to get the syntax right.

I'm trying to do something like output: !ENV "/recordings/south/202011231928.mp4"

Date formatted like YYYYMMDDHHMM.

Any help is appreciated! Thanks.
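Assuming the !ENV tag substitutes ${VAR}-style environment variables once, when the config is loaded (an assumption about how the tag behaves, and the variable name is invented), one workaround is to export a timestamp just before Watsor starts, with the obvious limitation that it reflects startup time rather than the start of each recording:

# Assumes RECORDING_STAMP is exported before Watsor starts, e.g.
#   export RECORDING_STAMP=$(date +%Y%m%d%H%M)
# and that !ENV performs ${VAR} substitution at config load time.
cameras:
  - south:
      # width, height, input as already configured ...
      output: !ENV "/recordings/south/${RECORDING_STAMP}.mp4"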

[FR] Seekable video archive

I've been looking at building something incredibly similar and stumbled across Watsor.

I'm interested in building a system that takes in live video, has real time person detection with messaging to HomeAssistant, and then also keeps that video around and has a web interface that lets a user seek around and look through older videos, for example when a person was detected.

Would you be interested in collaborating on this? I was planning to build the backend and frontend both in TypeScript. I can try my hand at Python, but I can't promise that it will be good Python. My rough plan was to have ffmpeg take in the RTSP streams and output HLS that is then consumed by a backend that keeps track of all of the HLS chunks that get written out. I'm also happy to try to make something that sits alongside Watsor and just requires a few ffmpeg flags to output the HLS in a way that I can consume.
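For the "few ffmpeg flags" part, below is a sketch of what the optional encoder list (shown commented out in other configs on this page) might look like if it wrote HLS instead of MPEG-TS. It assumes the camera's output: value ends up as ffmpeg's output argument (so it would point at the playlist), and the segment path is invented for illustration:

ffmpeg:
  encoder:
    - -hide_banner
    - -loglevel
    -  error
    - -f
    -  rawvideo
    - -pix_fmt
    -  rgb24
    - -i                          # detection output stream will follow '-i' automatically
    - -an
    - -vcodec
    -  libx264
    - -pix_fmt
    -  yuv420p
    - -f
    -  hls
    - -hls_time                   # segment length in seconds
    -  4
    - -hls_list_size              # 0 keeps every segment in the playlist
    -  0
    - -hls_segment_filename       # invented path for illustration
    -  /recordings/front/segment_%05d.ts

cameras:
  - front:
      # width, height, input ...
      output: /recordings/front/index.m3u8   # assumption: this becomes ffmpeg's output argument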

IPcam VCA/Dual VCA

The IP camera VCA feature allows offloading some types of recognition to the camera itself (the outermost edge?).
With Dual VCA, the camera sends the regions of interest on a second stream, so software can make use of them.
Do IP cameras have a dedicated chipset for motion recognition that could help with performance in frigate?

Related issue for frigate: blakeblackshear/frigate#2278
