
galliot-us / smart-social-distancing

Social Distancing Detector using deep learning, capable of running on edge AI devices such as NVIDIA Jetson, Google Coral, and more.

Home Page: https://neuralet.com

License: Apache License 2.0

Languages: Dockerfile 3.21%, Shell 0.70%, Python 93.62%, JavaScript 0.04%, HTML 2.43%
coral-dev-board coral-tpu deep-learning deep-neural-networks edge-ai edge-computing edge-tpu jetson jetson-nano jetson-tx2 nvidia-jetson nvidia-jetson-nano nvidia-jetson-tx2 social-distance-monitoring social-distancing social-distancing-detection

smart-social-distancing's People

Contributors

alpha-carinae29, dependabot[bot], emmawdev, gpicart, jfer11, jsonsadler, kkrampa, lucabenvenuto, mats-claassen, mdegans, mhejrati, mohammad7t, mrn-mln, mrupgrade, pgrill, renzodgc, robert-p97, sasikiran, undefined-references


smart-social-distancing's Issues

AlphaPose ONNX and TensorRT models

Hi, I've been working on converting AlphaPose (the model which contributors have developed for x86 devices in #113) to ONNX and TensorRT.
I will send a PR after finishing it. Here I want to share my experience with the procedure.
I tried to convert a PyTorch-based model of HRNet (one of the available backbones of AlphaPose) to ONNX, but the path was not straightforward for me.
First of all, I used PyTorch 1.1.0 for exporting the ONNX model, but that version was buggy and I ran into several errors. After trying a few versions of PyTorch, the versions below finally worked for me:

```
torch==1.5.1
torchvision==0.6.1
```

```python
# Exporting parameters
import torch

# pose_model is the AlphaPose model instance; "MODEL.pt" is its checkpoint file
pose_model.load_state_dict(torch.load("MODEL.pt", map_location=map_location))
dummy_input = torch.randn(1, 3, 256, 192, requires_grad=True).cuda()
torch.onnx.export(pose_model, dummy_input, "alphapose.onnx",
                  export_params=True, opset_version=11)
```
As a next step, I will check the ONNX model's outputs and compare them with the x86 model's outputs to make sure the ONNX model was exported successfully.
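
(For reference, a minimal sketch of such a comparison using onnxruntime; the input shape matches the dummy input above, but the tolerance is my own arbitrary choice, not a project value:)

```python
import numpy as np
import onnxruntime as ort
import torch

# Hypothetical sanity check: run the same input through PyTorch and ONNX Runtime.
dummy = torch.randn(1, 3, 256, 192)
with torch.no_grad():
    torch_out = pose_model(dummy.cuda()).cpu().numpy()

sess = ort.InferenceSession("alphapose.onnx")
input_name = sess.get_inputs()[0].name
onnx_out = sess.run(None, {input_name: dummy.numpy()})[0]

# atol=1e-4 is an arbitrary tolerance for this check.
print("max abs diff:", np.abs(torch_out - onnx_out).max())
assert np.allclose(torch_out, onnx_out, atol=1e-4)
```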

I will update the procedure here.
please feel free to ask for more details if you need them.

Black feed previews when input feed is from IP Camera

Hello, I'm testing smart-social-distancing using a Jetson Nano and a D-Link DCS-5222LB IP camera.
The frontend is running locally on my laptop and the Processor is running on the Jetson; the laptop, the Jetson, and the D-Link camera are all on the same subnet.

Laptop: 192.168.188.20
Dlink camera: 192.168.188.23
Jetson: 192.168.188.81

Here are my configurations:

config-frontend.ini

[App]
Host: 0.0.0.0
Port: 8000

[Processor]
Host: 192.168.188.81
Port: 8000

config-jetson.ini

[App]
VideoPath = rtsp://username:[email protected]/live1.sdp
Resolution = 640,360

Encoder: videoconvert ! video/x-raw,format=I420 ! x264enc speed-preset=ultrafast

[API]
Host = 0.0.0.0
Port = 8000

[CORE]
Host = 0.0.0.0
QueuePort = 8010
QueueAuthKey = shibalba
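
(As an aside, a quick standalone way to confirm that the RTSP URL in VideoPath is readable at all; a minimal sketch using OpenCV, independent of this project, assuming an OpenCV build with FFmpeg or GStreamer support:)

```python
import cv2

# Hypothetical sanity check: open the configured RTSP source and grab one frame.
cap = cv2.VideoCapture("rtsp://username:[email protected]/live1.sdp")
ok, frame = cap.read()
print("opened:", cap.isOpened(), "frame shape:", frame.shape if ok else None)
cap.release()
```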

I'm running the Processor on the Jetson with this command:

docker build -f jetson-nano.Dockerfile -t "neuralet/smart-social-distancing:latest-jetson-nano" .
docker run -it --runtime nvidia --privileged -p 8000:8000 -v "$PWD/data":/repo/data neuralet/smart-social-distancing:latest-jetson-nano

and this is the output I get on the console:

ok video is going to be processed
[TensorRT] ERROR: Could not register plugin creator:  FlattenConcat_TRT in namespace:
INFO:libs.detectors.jetson.mobilenet_ssd_v2:allocated buffers
Device is:  Jetson
Detector is:  ssd_mobilenet_v2_pedestrian_softbio
image size:  [300, 300, 3]
INFO:libs.distancing:opened video rtsp://admin:[email protected]/live2.sdp
error: XDG_RUNTIME_DIR not set in the environment.
0:00:02.002293373    68   0x55a7901730 ERROR                default gstvaapi.c:254:plugin_init: Cannot create a VA display
INFO:libs.distancing:processed frame 1 for rtsp://username:[email protected]/live1.sdp
INFO:libs.distancing:processed frame 11 for rtsp://username:[email protected]/live1.sdp
INFO:libs.distancing:processed frame 21 for rtsp://username:[email protected]/live1.sdp
INFO:libs.distancing:processed frame 31 for rtsp://username:[email protected]/live1.sdp
INFO:libs.distancing:processed frame 41 for rtsp://username:[email protected]/live1.sdp
INFO:libs.distancing:processed frame 51 for rtsp://username:[email protected]/live1.sdp
INFO:libs.distancing:processed frame 61 for rtsp://username:[email protected]/live1.sdp
INFO:libs.distancing:processed frame 71 for rtsp://username:[email protected]/live1.sdp

[...] and so on

I'm running the webapp locally on my laptop with this command:

docker build -f frontend.Dockerfile -t "neuralet/smart-social-distancing:latest-frontend" .
docker build -f web-gui.Dockerfile -t "neuralet/smart-social-distancing:latest-web-gui" .
docker run -it -p 8000:8000 --rm neuralet/smart-social-distancing:latest-web-gui 

This is the output I get from the console:

Successfully built 0047284f1577
Successfully tagged neuralet/smart-social-distancing:latest-web-gui
INFO:     Started server process [1]
INFO:uvicorn.error:Started server process [1]
INFO:     Waiting for application startup.
INFO:uvicorn.error:Waiting for application startup.
INFO:     Application startup complete.
INFO:uvicorn.error:Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:uvicorn.error:Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

When I browse the frontend locally with the latest Chrome on http://0.0.0.0:8000/panel/#/live I see the Camera Feed and the Bird's View box completely black. See the screenshot below:

screen_0

The plot underneath the camera feeds is working: if I step back and forth in front of the camera, it seems to recognize me.

If I open Chrome's inspector I can see that there are two errors in the console.

screen_errors

And if I try performing a GET to that URL (or any similar path) I get a 404.

errors details

and a "Not Found" response.

screen_not_found

Have any of you experienced something similar when working with IP cams? Any ideas on how to fix this?

Thank you very much and big ups for the beautiful project!

Cheers

[TensorRT] ERROR: Could not register plugin creator: FlattenConcat_TRT in namespace: INFO:libs.detectors.jetson.mobilenet_ssd_v2:allocated buffers

After a fresh installation of JetPack 4.3, I built the Docker files, but I am facing the following issue on a Jetson Nano device:

[TensorRT] ERROR: Could not register plugin creator: FlattenConcat_TRT in namespace:
INFO:libs.detectors.jetson.mobilenet_ssd_v2:allocated buffers

I did not make any changes to the config file and I am running the following command:
sudo docker run -it --runtime nvidia --privileged -p 8080:8000 -v "$PWD/data":/repo/data neuralet/smart-social-distancing:latest-jetson-nano

Screenshot from 2020-07-29 19-12-35

Pedestrian plot and environment score plot: yet to be implemented?

The tutorial page (https://neuralet.com/docs/tutorials/smart-social-distancing/) mentions the output view and presents two nice analytics graphs: the pedestrian plot and the environment score plot. Using the newest version of the master branch on a Jetson Nano with JetPack 4.3, I see the video stream with detections and the bird's eye view, yet I do not see the pedestrian and environment plots. Are they yet to be implemented, or am I perhaps missing something? Any hints would be greatly appreciated!

Edit: In #25 I see a picture of the plot graph; strange that it is not displaying for me.

Import error for Detector

I am trying to run Smart Distance on a Jetson TX2 and am facing the issue below. Please advise. Thank you.

Env -

  1. Jetson TX2
  2. JetPack 4.3
  3. CUDA 10.2

Steps to reproduce -

  1. Follow all instructions for Jetson TX2.
  2. Run the Docker command below.
docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v "$PWD/data":/repo/data neuralet/smart-social-distancing:latest-jetson-nano

Output -

image
image

Expected -
Live feed with smart distancing for the video present in the data folder.
I also tried to correct the download path, as per this PR.

ahmet@ahmet-desktop:~/smart-social-distancing$ docker run -it --runtime nvidia --privileged -p 8000:8000 -v "$PWD/data":/repo/data neuralet/smart-social-distancing:latest-jetson-nano
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request\\\\n\\\"\"": unknown.
ERRO[0001] error waiting for container: context canceled

Please help me. I didn't understand how to solve it.

Issue with video feeds buffering and becoming out-of-sync with each other

Hi all,

I managed to get this up and running on a Jetson Nano a couple of months ago.
Coming back to it now, everything has changed! 👍

I'm running both the front-end and the processor on the Nano. I have no experience with Docker, so I was not sure how to get the two parts communicating, but eventually I got something working. I detail the steps here just in case.

I built and ran everything on the Nano itself in headless mode as follows:

Modified the config-frontend.ini file as follows:

[App]
Host: 0.0.0.0
Port: 8000

[Processor]
; The IP and Port on which your Processor node is running (according to the -p HOST_PORT:8000 option of the processor's docker run command)
Host: 192.168.1.104 <-- changed this line to reflect the IP address of the Nano
Port: 8001

Build the docker files:

docker build -f frontend.Dockerfile -t "neuralet/smart-social-distancing:latest-frontend" .
docker build -f jetson-web-gui.Dockerfile -t "neuralet/smart-social-distancing:latest-web-gui-jetson" .

docker build -f jetson-nano.Dockerfile -t "neuralet/smart-social-distancing:latest-jetson-nano" .

Start the front-end:

sudo docker run -it -p 8000:8000 --rm neuralet/smart-social-distancing:latest-web-gui-jetson

# Output
INFO:     Started server process [1]
INFO:uvicorn.error:Started server process [1]
INFO:     Waiting for application startup.
INFO:uvicorn.error:Waiting for application startup.
INFO:     Application startup complete.
INFO:uvicorn.error:Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:uvicorn.error:Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

Start the processor:

sudo docker run -it --runtime nvidia --privileged -p 8001:8000 -v "$PWD/data":/repo/data -v "$PWD/config-jetson.ini":/repo/config-jetson.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-jetson-nano

# Output
video file at  /repo/data/softbio_vid.mp4 not exists, downloading...
--2020-10-15 10:46:41--  https://social-distancing-data.s3.amazonaws.com/softbio_vid.mp4
Resolving social-distancing-data.s3.amazonaws.com (social-distancing-data.s3.amazonaws.com)... 52.217.65.116
Connecting to social-distancing-data.s3.amazonaws.com (social-distancing-data.s3.amazonaws.com)|52.217.65.116|:443... connected.
HTTP request sent, awaiting response... INFO:__main__:Reporting disabled!
200 OK
Length: 25371423 (24M) [video/mp4]
Saving to: 'data/softbio_vid.mp4'

data/softbio_vid.mp4            2%[>                                                 ] 534.65K   730KB/s               INFO:libs.processor_core:Core's queue has been initiated
INFO:__main__:Core Started.
INFO:root:Starting processor core
INFO:libs.processor_core:Core is listening for commands ...
data/softbio_vid.mp4            7%[==>                                               ]   1.81M  1.17MB/s               INFO:api.processor_api:Connection established to Core's queue
data/softbio_vid.mp4            8%[===>                                              ]   2.15M  1.23MB/s               INFO:__main__:API Started.
data/softbio_vid.mp4           10%[====>                                             ]   2.47M  1.26MB/s               INFO:     Started server process [11]
INFO:uvicorn.error:Started server process [11]
INFO:     Waiting for application startup.
INFO:uvicorn.error:Waiting for application startup.
INFO:     Application startup complete.
INFO:uvicorn.error:Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:uvicorn.error:Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
data/softbio_vid.mp4          100%[=================================================>]  24.20M  1.62MB/s    in 16s

2020-10-15 10:46:57 (1.52 MB/s) - 'data/softbio_vid.mp4' saved [25371423/25371423]

running curl 0.0.0.0:8000/process-video-cfg
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0INFO:api.processor_api:process-video-cfg requests on api
INFO:api.processor_api:waiting for core's response...
INFO:libs.processor_core:command received: Commands.PROCESS_VIDEO_CFG
INFO:libs.processor_core:Setup scheduled tasks
INFO:libs.processor_core:should not send notification for camera default
INFO:libs.processor_core:started to process video ...
100     4  100     4    0     0     59      0 --:--:-- --:--:-- --:--:--    60
ok video is going to be processed
INFO:libs.engine_threading:[68] taking on 1 cameras
[TensorRT] ERROR: Could not register plugin creator:  FlattenConcat_TRT in namespace:
INFO:libs.detectors.jetson.mobilenet_ssd_v2:allocated buffers
Device is:  Jetson
Detector is:  ssd_mobilenet_v2_coco
image size:  [300, 300, 3]
INFO:libs.distancing:opened video /repo/data/softbio_vid.mp4
error: XDG_RUNTIME_DIR not set in the environment.
0:00:00.812662754    79   0x55b7599660 ERROR                default gstvaapi.c:254:plugin_init: Cannot create a VA display
INFO:libs.distancing:processed frame 1 for /repo/data/softbio_vid.mp4
INFO:libs.distancing:processed frame 101 for /repo/data/softbio_vid.mp4
INFO:libs.distancing:processed frame 201 for /repo/data/softbio_vid.mp4
INFO:libs.distancing:processed frame 301 for /repo/data/softbio_vid.mp4

This seems good so far; the graph starts appearing in my browser when I visit 192.168.1.104:8000.

However, the video feeds seem to buffer and stutter a lot. Furthermore, the birds-eye view and the video feed quickly become out-of-sync with one another. If I look back at the running processor I observe something along the following lines (which happens an awful lot):

INFO:libs.distancing:processed frame 3301 for /repo/data/softbio_vid.mp4
INFO:libs.distancing:processed frame 3401 for /repo/data/softbio_vid.mp4
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/uvicorn/protocols/http/h11_impl.py", line 389, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/usr/local/lib/python3.6/dist-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.6/dist-packages/fastapi/applications.py", line 181, in __call__
    await super().__call__(scope, receive, send)  # pragma: no cover
  File "/usr/local/lib/python3.6/dist-packages/starlette/applications.py", line 111, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc from None
  File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/cors.py", line 86, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/cors.py", line 142, in simple_response
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.6/dist-packages/starlette/exceptions.py", line 82, in __call__
    raise exc from None
  File "/usr/local/lib/python3.6/dist-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.6/dist-packages/starlette/routing.py", line 566, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.6/dist-packages/starlette/routing.py", line 376, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.6/dist-packages/starlette/staticfiles.py", line 94, in __call__
    await response(scope, receive, send)
  File "/usr/local/lib/python3.6/dist-packages/starlette/responses.py", line 314, in __call__
    "more_body": more_body,
  File "/usr/local/lib/python3.6/dist-packages/starlette/exceptions.py", line 68, in sender
    await send(message)
  File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/cors.py", line 148, in send
    await send(message)
  File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/errors.py", line 156, in _send
    await send(message)
  File "/usr/local/lib/python3.6/dist-packages/uvicorn/protocols/http/h11_impl.py", line 483, in send
    output = self.conn.send(event)
  File "/usr/local/lib/python3.6/dist-packages/h11/_connection.py", line 469, in send
    data_list = self.send_with_data_passthrough(event)
  File "/usr/local/lib/python3.6/dist-packages/h11/_connection.py", line 502, in send_with_data_passthrough
    writer(event, data_list.append)
  File "/usr/local/lib/python3.6/dist-packages/h11/_writers.py", line 78, in __call__
    self.send_data(event.data, write)
  File "/usr/local/lib/python3.6/dist-packages/h11/_writers.py", line 98, in send_data
    raise LocalProtocolError("Too much data for declared Content-Length")
h11._util.LocalProtocolError: Too much data for declared Content-Length
ERROR:uvicorn.error:Exception in ASGI application
[... the same traceback is then repeated verbatim by the uvicorn.error logger]

My questions are: Is this expected behavior? Is the software not robust/mature enough at this stage? Is there something wrong with how I have configured the Docker images or how I am running them? Or is the Nano just not powerful enough (e.g. in terms of fps, or for running the processor and serving the front-end at the same time)?

I only ask because I seem to recall that the simpler version I used some months back worked better for me. The front-end was very basic, but it didn't suffer from these issues. I'm happy to roll back to that older version if required, but first I wanted to check that this is not something that can be addressed.

Many thanks

libnvinfer and wrong data directory, not writable?

Hello

I've been trying to get this to run for use in our facility, but to no avail. I feel I'm close though. I have two issues when trying to run the docker command:

sudo docker run -it --runtime nvidia --privileged -p 8000:8000 -v "$PWD":/repo neuralet/smart-social-distancing:latest-jetson-nano

INFO:libs.area_threading:[70] taking on notifications for 1 areas
ERROR:root:libnvinfer.so.6: cannot open shared object file: No such file or directory
Traceback (most recent call last):
  File "/repo/libs/engine_threading.py", line 49, in run
    self.engine = CvEngine(self.config, self.source["section"], self.live_feed_enabled)
  File "/repo/libs/cv_engine.py", line 23, in __init__
    self.detector = Detector(self.config)
  File "/repo/libs/detectors/detector.py", line 23, in __init__
    self.detector = JetsonDetector(self.config)
  File "/repo/libs/detectors/jetson/detector.py", line 22, in __init__
    from . import mobilenet_ssd_v2
  File "/repo/libs/detectors/jetson/mobilenet_ssd_v2.py", line 3, in <module>
    import tensorrt as trt
  File "/usr/local/lib/python3.6/dist-packages/tensorrt/__init__.py", line 1, in <module>
    from .tensorrt import *
ImportError: libnvinfer.so.6: cannot open shared object file: No such file or directory
100 4 100 4 0 0 8 0 --:--:-- --:--:-- --:--:-- 8
ok video is going to be processed

The processor keeps going, though, so I'm not sure this is an actual problem. Then it starts looping this:

INFO:root:Exception processing area Kitchen
INFO:root:Restarting the area processing
INFO:libs.area_engine:Enabled processing area - area0: Kitchen with 1 cameras
INFO:libs.area_engine:Area reporting on - area0: Kitchen is waiting for reports to be created
ERROR:root:[Errno 2] No such file or directory: '/repo/data/processor/static/data/sources/default/objects_log/2020-12-30.csv'
Traceback (most recent call last):
  File "/repo/libs/area_threading.py", line 46, in run
    self.engine.process_area()
  File "/repo/libs/area_engine.py", line 67, in process_area
    with open(os.path.join(camera["file_path"], str(date.today()) + ".csv"), "r") as log:
FileNotFoundError: [Errno 2] No such file or directory: '/repo/data/processor/static/data/sources/default/objects_log/2020-12-30.csv'
INFO:root:Exception processing area Kitchen
ERROR:root:[Errno 2] No such file or directory: '/repo/data/processor/static/data/sources/default/objects_log/2020-12-30.csv'
Traceback (most recent call last):
  File "/repo/libs/area_threading.py", line 54, in run
    raise e
  File "/repo/libs/area_threading.py", line 46, in run
    self.engine.process_area()
  File "/repo/libs/area_engine.py", line 67, in process_area
    with open(os.path.join(camera["file_path"], str(date.today()) + ".csv"), "r") as log:
FileNotFoundError: [Errno 2] No such file or directory: '/repo/data/processor/static/data/sources/default/objects_log/2020-12-30.csv'

The 'sources' directory that it is looking for does not exist. There is a directory data/processor/static/data/default/objects_log, but even when I create the sources directory, the bug persists. When I create the CSV manually (which surely can't be intended), I get this:

IndexError: deque index out of range
INFO:root:Exception processing area Kitchen
INFO:root:Restarting the area processing
INFO:libs.area_engine:Enabled processing area - area0: Kitchen with 1 cameras
ERROR:root:deque index out of range
Traceback (most recent call last):
  File "/repo/libs/area_threading.py", line 46, in run
    self.engine.process_area()
  File "/repo/libs/area_engine.py", line 68, in process_area
    last_log = deque(csv.DictReader(log), 1)[0]
IndexError: deque index out of range
INFO:root:Exception processing area Kitchen
ERROR:root:deque index out of range
Traceback (most recent call last):
  File "/repo/libs/area_threading.py", line 54, in run
    raise e
  File "/repo/libs/area_threading.py", line 46, in run
    self.engine.process_area()
  File "/repo/libs/area_engine.py", line 68, in process_area
    last_log = deque(csv.DictReader(log), 1)[0]
IndexError: deque index out of range
IndexError: deque index out of range
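
(For what it's worth, this IndexError is what line 68 of area_engine.py produces when the CSV exists but contains no rows; a minimal standalone reproduction with a hypothetical empty file:)

```python
import csv
import io
from collections import deque

# A CSV with no data rows yields an empty iterator, so the deque is empty
# and indexing [0] raises "IndexError: deque index out of range".
empty_csv = io.StringIO("")
last_log = deque(csv.DictReader(empty_csv), 1)[0]  # raises IndexError
```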

I'm by no means a Linux or Docker expert, so this might be a small issue to resolve, but I need help.

Thanks in advance and great work so far!

Include JetPack 4.4 support for Jetson Nano

The master branch is broken because the Jetson Nano and Jetson TX2 devices use the same config file but different Dockerfiles. The TX2 image uses JetPack 4.4, while the Nano image uses JetPack 4.3.

To fix the issue, I see two approaches:

  • Split the config file into 2 (#117)
  • Make the jetson-nano image compatible with JetPack 4.4.

@mhejrati , what do you think?

How to use a Pi Camera v2.0 as a video input?

Thanks for such a great and useful project.

Recently, I've been trying to test it with my Pi Camera v2.0 (CSI camera) on a Jetson Nano with JetPack 4.3, Ubuntu 18.04, and CUDA 10.0.

I modified the Jetson Nano Dockerfile you provide to disable running the default command. Then I tried to run the GStreamer command inside the container, but I got the error shown here:

image

And the same command works on the same machine outside the Docker container.
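
(For context, CSI cameras on the Nano are usually read through an nvarguscamerasrc GStreamer pipeline; below is a minimal Python sketch via OpenCV's GStreamer backend. The pipeline parameters are assumptions, OpenCV must be built with GStreamer support, and the container needs access to the camera device for this to work.)

```python
import cv2

# Hypothetical CSI-camera pipeline for a Pi Camera v2 on a Jetson Nano.
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()
print("opened:", cap.isOpened(), "got frame:", ok)
cap.release()
```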

Could you help me get it running with this CSI camera?

Crash on launch

Hi, I followed the instructions. I have JetPack 4.3 installed on a TX2. This is the error I get. I tried the other repo and the result is the same.

INFO:__main__:Services Started.
INFO:     Started server process [14]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
[TensorRT] ERROR: Could not register plugin creator:  FlattenConcat_TRT in namespace: 
[TensorRT] ERROR: INVALID_CONFIG: The engine plan file is generated on an incompatible device, expecting compute 6.2 got compute 5.3, please rebuild.
[TensorRT] ERROR: engine.cpp (1324) - Serialization Error in deserialize: 0 (Core engine deserialization failure)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "neuralet-distancing.py", line 14, in start_engine
    engine = CvEngine(config)
  File "/repo/libs/core.py", line 25, in __init__
    self.detector = Detector(self.config)
  File "/repo/libs/detectors/jetson/detector.py", line 18, in __init__
    self.net = mobilenet_ssd_v2.Detector(self.config)
  File "/repo/libs/detectors/jetson/mobilenet_ssd_v2.py", line 66, in __init__
    self.context = self._create_context()
  File "/repo/libs/detectors/jetson/mobilenet_ssd_v2.py", line 34, in _create_context
    for binding in self.engine:
TypeError: 'NoneType' object is not iterable

Master is broken for x86-gpu

Hi,

Master does not work well for x86-gpu. It seems the prints from the Detector class are not showing (the first thing that made me think it is not working), but I can see some output in the Lanthorn UI, which stops after a few frames; the fps then becomes zero.

The weird thing is that lines like:
INFO:libs.distancing:processed frame 1 for /repo/data/softbio_vid.mp4 INFO:openpifpaf.decoder.generator.cifcaf:3 annotations: [13, 10, 9] ...
are showing for x86 but not for x86-gpu (even though it processes some frames initially). What has changed?

XDG_RUNTIME_DIR Error and Video Loading Problem

Hello, and thank you to everyone who contributed to this project.
When I try it on a Jetson Nano, the video seems to load continuously: it plays for 5 seconds, loads again, plays for another 5 seconds, then pauses. I refresh the page, but the same problem continues. The console also shows errors related to XDG_RUNTIME_DIR.

I created the docker image with the command below.
docker build -f jetson-nano.Dockerfile -t "neuralet/smart-social-distancing:latest-jetson-nano" .

I also tried it on my PC, and the result is the same. What am I missing?

Screen Shot 2020-06-26 at 20 45 45

Error in face anonymizer when using PoseNet

I got the following error in the face anonymizer when I ran the app with the posenet_1281x721 model on a sample video file:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner
    self.run()
  File "/repo/libs/engine_threading.py", line 46, in run
    self.engine.process_video(self.source['url'])
  File "/repo/libs/distancing.py", line 291, in process_video
    cv_image, objects, distancings = self.__process(cv_image)
  File "/repo/libs/distancing.py", line 127, in __process
    cv_image = self.anonymize_image(cv_image, objects_list)
  File "/repo/libs/distancing.py", line 557, in anonymize_image
    roi = self.anonymize_face(roi)
  File "/repo/libs/distancing.py", line 573, in anonymize_face
    return cv.GaussianBlur(image, (kernel_w, kernel_h), 0)
cv2.error: OpenCV(4.3.0) /tmp/opencv-4.3.0/modules/imgproc/src/smooth.dispatch.cpp:296: error: (-215:Assertion failed) ksize.width > 0 && ksize.width % 2 == 1 && ksize.height > 0 && ksize.height % 2 == 1 in function 'createGaussianKernels'
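
(For context, cv2.GaussianBlur requires both kernel dimensions to be positive odd integers, and a very small face ROI can make a kernel size computed from the ROI come out as 0 or an even number. A defensive sketch with a hypothetical kernel computation, not the repo's actual code:)

```python
import cv2 as cv

def anonymize_face(image, factor=3.0):
    # Derive the kernel size from the ROI, then force it to be odd and >= 1
    # so createGaussianKernels' assertion cannot fire on tiny ROIs.
    h, w = image.shape[:2]
    kernel_w = max(1, int(w / factor)) | 1  # `| 1` bumps even values to the next odd
    kernel_h = max(1, int(h / factor)) | 1
    return cv.GaussianBlur(image, (kernel_w, kernel_h), 0)
```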

docker: invalid publish opts format (should be name=value but got 'HOST_PORT:8000')

Hi, I am new to Docker and followed the instructions exactly:
$ sudo docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v "$PWD/data":/data neuralet/smart-social-distancing:latest-jetson-nano
Here's what I got:
docker: invalid publish opts format (should be name=value but got 'HOST_PORT:8000')

Please advise what I might be missing.

thank you

Interface for analytics storage

I thought about an interface for analytics storage. The main Logger class would obtain these values and send them to whichever concrete logger is configured, which would be responsible for saving them in some supported format (CSV, protobuf, JSON, etc.) following this spec:

  • Version
  • Timestamp
  • Number of detections
  • List of detections:
    • Position (x, y, z)
    • Optional:
      • Wearing face mask
      • BBox (Xmin, Ymin, Xmax, Ymax)
      • Id (Movement Tracking)
      • Orientation (needs pose detection)
      • Body Pose keypoints (needs pose detection)

An efficient way of storing this would be something like protobuf. Here is an example in JSON to show how it could look:

{
   "version": "1.0",
   "timestamp": "2020-07-08T13:36:53+0000",
   "detection_number": 1,
   "detections": [
      {
         "position": [2.32, 3.10, 0.24], // x,y,z
         "face_mask": true,
         "tracking_id": 3241,
         "bbox": [23, 50, 120, 80], // pixel values of box
         "orientation": 60, // degrees, explained below
         "keypoints": [
            [2.32, 3.02, 0.24], // x,y,z of each point
            "..."
         ]
      }
   ]
}

For both position and orientation, we could define a line through the middle of the image that serves both as the z axis and as orientation 0 (when the person looks into the camera). Orientation would then mean degrees from that line, measured to the right from the person's perspective. It could be in radians as well.

Float values could be saved with a precision of 0.1 or 0.01.
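
(To make the spec concrete, a minimal Python sketch of these records; the dataclass names are my own suggestions, not anything already in the codebase:)

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Detection:
    position: Tuple[float, float, float]               # x, y, z
    face_mask: Optional[bool] = None
    bbox: Optional[Tuple[int, int, int, int]] = None   # xmin, ymin, xmax, ymax
    tracking_id: Optional[int] = None
    orientation: Optional[float] = None                # degrees, as described above
    keypoints: Optional[List[Tuple[float, float, float]]] = None

@dataclass
class AnalyticsRecord:
    version: str
    timestamp: str                                     # ISO 8601
    detections: List[Detection] = field(default_factory=list)

    @property
    def detection_number(self) -> int:
        return len(self.detections)
```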

Any comments and suggestions are welcome. What do you think, @mhejrati?
