
deepocli's Introduction

deepomatic-command-line-interface

Deepomatic Command Line Interface.

This command line interface has been made to help you interact with our services via the command line.


CLI Documentation

Find the complete documentation at docs.deepomatic.com/deepomatic-cli/

Installation

pip install deepomatic-cli

If you need RPC support, prefer:

# requires the deepomatic-rpc package to be installed
pip install deepomatic-cli[rpc]

Autocompletion

The easiest way to activate autocompletion is to add the following line to your shell config file:

eval "$(register-python-argcomplete deepo)"

For example if you use bash:

cat <<"EOF" >> ~/.bashrc

# activate deepomatic-cli autocomplete
eval "$(register-python-argcomplete deepo)"
EOF

If it slows down your shell startup too much, you can pre-generate the completion into a static file and source that file from your .bashrc. The static file does not change when deepomatic-cli is updated, only when argcomplete itself is updated.
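For example (the completion file path below is an arbitrary choice, not an official convention):

```shell
# Pre-generate the completion into a static file once
# (re-run this whenever argcomplete itself is updated):
register-python-argcomplete deepo > ~/.deepo-completion.bash

# Then source the static file from your shell config instead of
# running the eval at every shell startup:
echo 'source ~/.deepo-completion.bash' >> ~/.bashrc
```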

For more information, check out the documentation of argcomplete.

FAQ

opencv-python (-headless) installation takes forever

Depending on your pip version, pip might rebuild it from source. pip 19.3 is the minimum supported version:

  • Check version with pip -V
  • Update with pip install 'pip>=19.3'

Window output doesn't work. I get a cv2.error.

deepomatic-cli ships with opencv-python-headless because most of the features don't need a GUI. This also avoids requiring libGL on the system (for example, it is usually absent from Docker containers). If you want to use the GUI features, we recommend installing opencv-python after installing deepomatic-cli:

pip install deepomatic-cli
# derive the matching opencv-python version from the installed headless package
opencv_install=$(pip3 freeze | grep opencv-python-headless | sed 's/-headless//g')
pip uninstall -y opencv-python-headless
pip install "$opencv_install"

About the output video codec

The CLI makes heavy use of OpenCV, which does not provide the ability to configure the video encoder settings. We can choose the codec (FourCC), but we can't choose the bitrate, quality, number of passes, profile, or any other setting. The quality chosen by OpenCV remains a small mystery; it seems to vary depending on the codec.

If for some reason the output video encoding does not suit you (too heavy, bad quality), here are your options:

Changing the FourCC

Set the --fourcc option of the CLI. The opencv-python package only provides codecs with a free license, which means you will not be able to choose avc1 or hevc. We provide a Dockerfile and an installation script to rebuild opencv-python with the x264 encoder (corresponding to the avc1 FourCC). The readme can be found here. Please make sure you are allowed to use it (it is patented and not free).

If you are on Windows, there is an alternative using openh264:

  • Download the library (should work with openh264-1.7.0-win64.dll.bz2)
  • Extract the archive in C:\Windows\System32 or in the same directory where the CLI command is launched

Piping to ffmpeg or cvlc

If you want more freedom over the encoding settings, we suggest piping the CLI into ffmpeg or cvlc using the option -o stdout (works with the infer, draw, blur and noop commands). In both cases you need to tell ffmpeg or cvlc the resolution, framerate and color space of the input stream. You can use ffprobe or mediainfo to get the resolution and framerate of your input video. The color space (chroma) does not depend on your input video but on our CLI, which outputs the BGR color space by default.

Again, make sure you can legally use the codec specified in the command.

Example using ffmpeg

deepo platform model draw -i $input_video_path -o stdout -r $model_id | ffmpeg -f rawvideo -pixel_format bgr24 -video_size 1280x720 -framerate 15 -i - -c:v $codec $output_video_path

Example using cvlc

The BGR color space is not supported by cvlc, so we have to convert the stream to RGB.

deepo platform model draw -i $input_video_path -o stdout -r $model_id --output_color_space RGB | cvlc --demux=rawvideo --rawvid-fps=15 --rawvid-width=1280 --rawvid-height=720 --rawvid-chroma=RV24 - --sout "#transcode{vcodec=$codec}:std{access=file,dst=$output_video_path}" vlc://quit


deepocli's Issues

Allow to send images as bmp to increase throughput

We currently re-encode images to JPEG before sending them. In some cases it might be faster to send images as BMP (no re-encoding, less CPU time on the customer's machine). Let's add an option and see how it goes in practice.

Verbose debug doesn't work

It probably only happens when the rpc package is present.
Other packages such as deepomatic-api and deepomatic-rpc might call logging.basicConfig before cli_parser.run() is called. Thus,

logging.basicConfig(level=log_level, format=log_format)

doesn't work in some cases.
Either we remove the previous loggers' handlers, or we import those packages after the run() function.
The second proposal is probably cleaner, but just in case, here is the first method:

for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)

Cannot import all deepomatic packages in a python script

Whatever the import order, we cannot import all the packages together; there must be an issue with the way we handle the namespace. It probably needs a fix in all packages.

python3
Python 3.6.6 (default, Sep 12 2018, 18:26:19) 
[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import deepomatic.api
>>> import deepomatic.rpc
>>> import deepomatic.cli
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'deepomatic.cli'
>>> 

All three packages are obviously installed.

More debug information when detecting input format

We need more visibility on what is going on when detecting the input format (at DEBUG level):

  • except Exception:

    If the file has a json extension, we should be able to open it and JSON-decode it. If that is not the case, we should let the exception raise.
  • Log before and after is_valid() methods
  • Log when JSON detection failed and the cause (the jsonschema library should give a human-readable error)

ZMQ support

Sometimes we want to interact directly with deepomatic-cli without touching the flow of the program.
Because of monkey patching and the current pipeline, it is hard to guarantee that it will work as a library, and it implies a good understanding of the internal implementation of deepomatic-cli.

To overcome this, I suggest implementing ZMQ connectors to send commands and retrieve results easily. We would then only have to spawn a deepomatic-cli in server mode and send commands.

On the client side, this would look like:

from deepomatic.cli.zmq.client import Client
from deepomatic.cli.zmq.queue import Queue

client = Client(deepomatic_cli_url)

response = client.send_command({
    'input': "rtsp://localhost:554",
    'output': "zmq_queue"  # either generated by server or forced by client
})

queue_url = response['output']
total = response['total']
queue = Queue(queue_url)
for i in range(total):
    frame_results = queue.pop()

Draw font size

When using deepo draw, it would be great if we could control the font size with an option, or at least make it proportional to the image.

Implement communication with the worker-nn by using the Workflow command

Currently, we need to fill in the recognition_id in the Operate showcase UI. This was required because we use the "Recognition" RPC command, which requires this number.

See the number 40030 in this screenshot:
(screenshot: Capture d'écran 2019-11-12 à 15 21 47)

However, the spirit of what we are building is to rely on Workflows: they take no arguments in the RPC command (no show_discarded, no max_num_results --> these should be parameters of the Recognition node of the workflow).

We thus need to switch to the "Workflow" RPC command, which will have the benefit of removing the "Recognition" field in the UI.

How to fix?

As soon as the workflows are released, we probably just need to switch to create_workflow_command_mix here:

self._command_mix = rpc.helpers.v07_proto.create_recognition_command_mix(recognition_version_id,

Commands (draw/infer) stopped when only half is processed

When trying to run a simple command such as deepo draw -i video.avi -o frames%04d.jpg -r xx, only half of the video is processed: the process stops when reaching around 50% without any error message. The same happens with deepo infer.

On Windows it returns the following error:
UserWarning: libuv only supports millisecond timer resolution; all times less will be set to 1ms
No errors on the other OSes.

deepomatic-cli: 0.3.5
OS: Windows / Ubuntu 16.04 / MacOS Mojave 10.14.4

None frame when drawing?

A None frame should only be received in the Infer thread when the program has to exit. Yet it seems some None values are received before the program is done processing an entire directory?

Summary displayed before end of progress bar.

The summary is displayed before the progress bar is closed.

$ deepo infer -i images/ -o predictions.json -r fashion-v4
[INFO 2019-10-11 13:27:11,221] Input processing:   0%|          | 0/615 [00:00<?, ?it/s]
[INFO 2019-10-11 13:27:35,996] Input processing:   0%|          | 1/615 [00:24<4:13:31, 24.77s/it]
[INFO 2019-10-11 13:27:37,897] Input processing:   0%|          | 2/615 [00:26<2:16:16, 13.34s/it]
...
[INFO 2019-10-11 13:38:20,066] Input processing:  98%|#########8| 603/615 [11:08<00:13,  1.11s/it]
[INFO 2019-10-11 13:38:20,471] Input processing:  99%|#########8| 607/615 [11:09<00:08,  1.10s/it]
[INFO 2019-10-11 13:38:20,711] Input processing: 100%|#########9| 612/615 [11:09<00:03,  1.09s/it]
[INFO 2019-10-11 13:38:20,805] Summary: errors=0 uncompleted=0 successful=615 total=615.
[INFO 2019-10-11 13:38:20,808] Input processing: 100%|##########| 615/615 [11:09<00:00,  1.09s/it]

Improve tests

We should check that outputs work well (all files exist on disk and are well formatted).

Manage extensions collision

Today we don't handle collisions due to extensions.

The two files img.png and img.jpg will result in the same prediction files, drawn images, JSON key, etc.

We might want to incorporate the extension in the frame.name to have a proper unique identifier.
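A minimal sketch of the idea (the frame_name helper below is hypothetical, not the CLI's actual code):

```python
from pathlib import Path

def frame_name(path, keep_extension=True):
    """Build a frame identifier from a file path (illustrative sketch).

    With keep_extension=False, img.png and img.jpg both map to 'img'
    and collide; incorporating the extension makes the name unique."""
    p = Path(path)
    if keep_extension:
        return "{}_{}".format(p.stem, p.suffix.lstrip("."))
    return p.stem

print(frame_name("img.png", keep_extension=False))  # img  (collides with img.jpg)
print(frame_name("img.png"))                        # img_png
print(frame_name("img.jpg"))                        # img_jpg
```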

Rate limiting

By default deepo tries to go as fast as possible. However, in some cases we might want to throttle it to a maximum number of iterations per second. We should add an option for this.

Probably we should adjust the Queue maxsize accordingly.
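A sleep-based throttle could look like the sketch below (illustrative only; the class name and option are assumptions, not the CLI's implementation):

```python
import time

class RateLimiter:
    """Throttle a loop to at most max_per_second iterations per second."""

    def __init__(self, max_per_second):
        self.min_interval = 1.0 / max_per_second
        self.last = 0.0

    def wait(self):
        # Sleep just long enough to respect the minimum interval
        # between two consecutive iterations.
        delay = self.min_interval - (time.monotonic() - self.last)
        if delay > 0:
            time.sleep(delay)
        self.last = time.monotonic()

limiter = RateLimiter(max_per_second=10)
# for frame in frames:
#     limiter.wait()
#     process(frame)
```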

Reproduce directory structure with recursive option and string wildcard json

When using the recursive option with a JSON string wildcard, we should reproduce the input directory structure. Otherwise files can collide.

With directory:

dir
├── subdir1
│   └── img.png
└── subdir2
    └── img.png

And command:

deepo infer -i dir -R -o predictions/pred_%s.json -r 123

Will result in:

predictions
└── pred_img_123.json

Stronger tests

  • Check filename and content of generated files
  • Test robustness on expected errors (invalid images, timeouts, bad status)
  • Test new commands (camera/workflow)

Improve HTTP user_agent

deepocli should properly identify itself via the user_agent.

Where

  • calls to api.deepomatic.com via deepomatic-client-python should use the already existing Client user_agent_suffix parameter
  • calls to studio.deepomatic.com should replace deepomatic-client-python/(vesta) by deepocli/xxx

What

  • the string deepocli
  • the deepocli version (maybe with git revision too if possible, would help for development versions)

Cannot use urls if using a json input file

The infer command does not work when using a Studio JSON containing image URLs (stored on Google Storage for instance) instead of paths to locally stored images.
The command run is

deepo infer -i $PATH_TO_STUDIO_JSON -o $PATH_TO_OUTPUT_JSON -r 40521 -t 0.85

and it gives the error

[WARNING 2019-11-18 11:28:32,005] Could not find file https://storage.googleapis.com/dp-vesta-prod/vesta/datasets/261-adrien/pb-sout-foa-implantation-chambre/b9fcdd69-27c8-472f-b4ad-edd9b6ee0170/1be175f0-e397-4e99-b5e5-74bb6053df58.jpg referenced in JSON /Users/adriengregorj/Documents/Deepomatic/dev/projects/sogetrel/data/implantation.json, skipping it
[INFO 2019-11-18 11:28:32,011] Input processing: 0it [00:00, ?it/s]

deepomatic-cli: 0.4.0
OS: MacOS Mojave 10.14.6

Cleanup feedback class

I guess now you can do something like:

from ..version import __title__, __version__
DEFAULT_USER_AGENT_PREFIX = '{}/{}'.format(__title__, __version__)

def __init__(self, **kwargs):
    kwargs['user_agent_prefix'] = kwargs.get('user_agent_prefix', DEFAULT_USER_AGENT_PREFIX)
    self.http_helper = HTTPHelper(version=None, **kwargs)

And later on:

client = Client(api_key=api_key)

We can also increase the number of greenlets (45).

Originally posted by @maingoh in #125

Deepocli doesn't deal with all input images

Airtable url: https://api.airtable.com/v0/appgWOl8ecHRBewEz/Bugs%20%26%20Issues/rec6J8rAgSrwAkmVw
It happened at INA on their local site.
"The inference tests stop after a while."
Here is the configuration:
deepomatic-api 0.8.0
deepomatic-cli 0.3.5
deepomatic-rpc 0.8.0

Here is the command that was executed:

deepo infer -t 0.5 -i C+N_20190425_050000_68400/json/C+N_20190425_050000_68400.json -o C+N_20190425_050000_68400/json/C+N_20190425_050000_68400_res.json -r worker_queue_ina_cnews -k 39810 -u amqp://user:[email protected]:5672/deepomatic

Below are a few logs, I will attach some others.

[INFO deepomatic.cli.input_data 2019-11-05 16:43:40,704 18985 139883595167488 common.py:35] Input processing:  74%|#######3  | 50565/68400 [12:50<04:31, 65.66it/s]
[INFO deepomatic.cli.input_data 2019-11-05 16:43:41,060 18985 139883595167488 common.py:35] Input processing:  74%|#######3  | 50594/68400 [12:50<04:31, 65.67it/s]
[INFO deepomatic.cli.input_data 2019-11-05 16:43:41,518 18985 139883595167488 common.py:35] Input processing:  74%|#######4  | 50623/68400 [12:50<04:30, 65.66it/s]
[INFO deepomatic.cli.input_data 2019-11-05 16:44:00,864 18985 139883832067840 common.py:35] Input processing:  74%|#######4  | 50648/68400 [13:10<04:36, 64.09it/s]
[INFO deepomatic.rpc.amqp.client 2019-11-05 16:44:20,112 18985 139884890478336 client.py:127] Sending heartbeat
[INFO deepomatic.cli.input_data 2019-11-05 16:44:20,170 18985 139884890478336 common.py:35] Input processing:  74%|#######4  | 50648/68400 [13:29<04:43, 62.56it/s]

They can stop at 25%, 2%, ...

In the deepomatic log, I observe the following message. Is it related?

neural-worker_1    | [INFO start_common 2019-11-05 17:05:17,853 20 139747494717184 start_common.py:118] Starting maya-worker-nn ['--workflows', '/var/lib/deepomatic/services/worker-nn/resources/workflows.json']
neural-worker_1    | I1105 17:05:18.144289    20 Resources.cpp:100] Default resource values :
neural-worker_1    | I1105 17:05:18.144327    20 Resources.cpp:101] Maximum memory : 1e+02Gb
neural-worker_1    | I1105 17:05:18.144346    20 Resources.cpp:102] Number of threads : 16
neural-worker_1    | I1105 17:05:18.144356    20 Resources.cpp:103] GPU used : 0
neural-worker_1    | I1105 17:05:18.144358    20 Resources.cpp:110] Using GPU 0 with 15079 MB of global memory.
neural-worker_1    | I1105 17:05:18.177417    20 License.cpp:92] Licensing env established
neural-worker_1    | I1105 17:05:18.178443    20 AbstractWorkerNN.cpp:35] No DBPg found. If you are not running offline, this is an issue.
neural-worker_1    | I1105 17:05:18.178463    20 MapResourcePool.hpp:58] Creating new 'AMQP Channel' with key '140314963634688' (count = 1)
neural-worker_1    | I1105 17:05:18.178485    20 AmqpWrapper.cpp:51] Creating Main channel on AMQP server: user@rabbitmq:5672/deepomatic
neural-worker_1    | E1105 17:05:18.179595    20 MapResourcePool.hpp:64] Could not create resource 'AMQP Channel' with key '140314963634688': a socket error occurred

Support user shutdown request: SIGTERM

Sometimes we want to abort the current execution, but with the deepomatic-rpc backend it cannot be interrupted: Ctrl-C (SIGINT) does nothing (is it stuck waiting for a reply on the rabbitmq reply queue?).

Studio: wrong summary uncompleted when adding images

Airtable url: https://api.airtable.com/v0/appgWOl8ecHRBewEz/Bugs%20%26%20Issues/rec6lnTlkcztCtiHR
When I upload images through deepocli, I get a wrong summary at the end. See the last line.

[INFO 2019-11-26 09:34:38,494] Uploading images:  98%|#########8| 1750/1782 [11:41<00:12,  2.49it/s]
[INFO 2019-11-26 09:34:40,599] Uploading images:  99%|#########8| 1760/1782 [11:43<00:08,  2.50it/s]
[INFO 2019-11-26 09:34:41,914] Uploading images:  99%|#########9| 1770/1782 [11:45<00:04,  2.51it/s]
[INFO 2019-11-26 09:34:50,608] Uploading images: 100%|##########| 1782/1782 [11:53<00:00,  2.50it/s]
[INFO 2019-11-26 09:34:50,661] Summary: errors=0 uncompleted=-1782 successful=1782 total=1782.

Wrong value for cap.get(cv2.CAP_PROP_FPS)

Sometimes, the .get(cv2.CAP_PROP_FPS) call on cv2.VideoCapture instances gives a very wrong estimation of the input stream (video?) fps. We have seen a value of 180000 fps for a 15 fps camera.
This can mess up the command if the input_fps option is set.
It would be nice to use the cv2.CAP_PROP_FPS value as a first estimation of the fps, but keep measuring it to get a more accurate estimate afterwards. This would also let cameras have a varying framerate without impacting the input_fps option.
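One possible approach is an exponential moving average seeded with the CAP_PROP_FPS value; the sketch below uses assumed names and is not the CLI's implementation:

```python
class FpsEstimator:
    """Refine a (possibly bogus) initial fps with measured inter-frame delays."""

    def __init__(self, initial_fps, alpha=0.1):
        self.fps = float(initial_fps)  # e.g. cap.get(cv2.CAP_PROP_FPS)
        self.alpha = alpha             # weight given to each new measurement
        self.last_ts = None

    def on_frame(self, timestamp):
        if self.last_ts is not None:
            dt = timestamp - self.last_ts
            if dt > 0:
                # Blend the instantaneous rate into the running estimate.
                self.fps = (1 - self.alpha) * self.fps + self.alpha * (1.0 / dt)
        self.last_ts = timestamp
        return self.fps

# Even starting from a wildly wrong 180000 fps, the estimate converges
# towards the real 15 fps after a few hundred frames.
est = FpsEstimator(180000.0)
for i in range(500):
    fps = est.on_frame(i / 15.0)
```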

Keyboard interrupt is ignored when rpc retry to connect

We can't stop deepocli smoothly when the amqp URL is bad. This might be an issue in deepomatic-rpc as well.

[INFO deepomatic.rpc.amqp.client 2019-09-03 13:02:29,977 289 140399470933824 client.py:265] New connection to amqp://user:**@192.168.176.4:5672/deepomatic
[INFO deepomatic.rpc.amqp.client 2019-09-03 13:02:29,983 289 140399470933824 client.py:265] New connection to amqp://user:**@192.168.176.4:5672/deepomatic
[WARNING deepomatic.rpc.amqp.client 2019-09-03 13:02:29,990 289 140399470933824 client.py:126] [Errno 111] Connection refused
[WARNING deepomatic.rpc.amqp.client 2019-09-03 13:02:30,001 289 140399470933824 client.py:126] [Errno 111] Connection refused
[WARNING deepomatic.rpc.amqp.client 2019-09-03 13:02:30,012 289 140399470933824 client.py:28] [Errno 111] Connection refused
[INFO deepomatic.rpc.amqp.client 2019-09-03 13:02:30,012 289 140399470933824 client.py:29] Retry in 0 seconds.
[WARNING deepomatic.rpc.amqp.client 2019-09-03 13:02:30,022 289 140399470933824 client.py:28] [Errno 111] Connection refused
[INFO deepomatic.rpc.amqp.client 2019-09-03 13:02:30,023 289 140399470933824 client.py:29] Retry in 0 seconds.
[WARNING deepomatic.rpc.amqp.client 2019-09-03 13:02:30,034 289 140399470933824 client.py:28] [Errno 111] Connection refused
[INFO deepomatic.rpc.amqp.client 2019-09-03 13:02:30,034 289 140399470933824 client.py:29] Retry in 0.05 seconds.
[WARNING deepomatic.rpc.amqp.client 2019-09-03 13:02:30,094 289 140399470933824 client.py:28] [Errno 111] Connection refused
[INFO deepomatic.rpc.amqp.client 2019-09-03 13:02:30,095 289 140399470933824 client.py:29] Retry in 0.05 seconds.
[WARNING deepomatic.rpc.amqp.client 2019-09-03 13:02:30,156 289 140399470933824 client.py:28] [Errno 111] Connection refused
[INFO deepomatic.rpc.amqp.client 2019-09-03 13:02:30,156 289 140399470933824 client.py:29] Retry in 1.05 seconds.
[WARNING deepomatic.rpc.amqp.client 2019-09-03 13:02:31,216 289 140399470933824 client.py:28] [Errno 111] Connection refused
[INFO deepomatic.rpc.amqp.client 2019-09-03 13:02:31,216 289 140399470933824 client.py:29] Retry in 1.05 seconds.
[WARNING deepomatic.rpc.amqp.client 2019-09-03 13:02:32,279 289 140399470933824 client.py:28] [Errno 111] Connection refused
[INFO deepomatic.rpc.amqp.client 2019-09-03 13:02:32,279 289 140399470933824 client.py:29] Retry in 2.05 seconds.
^C[INFO deepomatic.cli.thread_base 2019-09-03 13:02:34,340 289 140399470933824 thread_base.py:286] Stop asked, waiting for threads to process queued messages.
[WARNING deepomatic.rpc.amqp.client 2019-09-03 13:02:34,341 289 140399470933824 client.py:28] [Errno 111] Connection refused
[INFO deepomatic.rpc.amqp.client 2019-09-03 13:02:34,341 289 140399470933824 client.py:29] Retry in 2.05 seconds.
^C^C^C^C^C^C[INFO deepomatic.cli.thread_base 2019-09-03 13:02:36,396 289 140399470933824 thread_base.py:290] Hard stop
[WARNING deepomatic.rpc.amqp.client 2019-09-03 13:02:36,397 289 140399470933824 client.py:28] [Errno 111] Connection refused
[INFO deepomatic.rpc.amqp.client 2019-09-03 13:02:36,397 289 140399470933824 client.py:29] Retry in 3.05 seconds.


^C^C
[WARNING deepomatic.rpc.amqp.client 2019-09-03 13:02:39,455 289 140399470933824 client.py:28] [Errno 111] Connection refused
[INFO deepomatic.rpc.amqp.client 2019-09-03 13:02:39,456 289 140399470933824 client.py:29] Retry in 3.05 seconds.
^C^C[WARNING deepomatic.rpc.amqp.client 2019-09-03 13:02:42,514 289 140399470933824 client.py:28] [Errno 111] Connection refused
[INFO deepomatic.rpc.amqp.client 2019-09-03 13:02:42,514 289 140399470933824 client.py:29] Retry in 4.05 seconds.
^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^Cc[WARNING deepomatic.rpc.amqp.client 2019-09-03 13:02:46,572 289 140399470933824 client.py:28] [Errno 111] Connection refused
[INFO deepomatic.rpc.amqp.client 2019-09-03 13:02:46,572 289 140399470933824 client.py:29] Retry in 4.05 seconds.
[WARNING deepomatic.rpc.amqp.client 2019-09-03 13:02:50,637 289 140399470933824 client.py:28] [Errno 111] Connection refused

studio add_images command should be more integrated

Currently the add_images command is separated from the rest in terms of code, which makes maintenance harder (we have to duplicate the shutdown part, and everything we fix in other commands must be fixed there as well). It should be integrated into the InputData and OutputData design and use the same loop as the other commands.

This probably implies a refactoring of the input_loop function.

Regulate the number of requests pushed

Since #43 we push requests far in advance, which makes them accumulate in rabbitmq. In order to avoid a disk/memory leak in rabbitmq for long streams, we should regulate the number of messages in the queue by synchronizing SendInferenceThread and ResultInferenceThread.

Handle special characters

Today, when we encounter a non-ASCII character in a label name, it is replaced by ?.

Ideally, we'd like to handle all of them, or at least find the nearest ASCII representation (é -> e).
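Unicode NFKD normalization gives that nearest-ASCII behavior for accented letters. A sketch (note that characters with no ASCII decomposition are simply dropped here rather than replaced):

```python
import unicodedata

def to_ascii(label):
    # Decompose accented characters (é -> e + combining accent),
    # then drop everything that is not ASCII.
    return (unicodedata.normalize("NFKD", label)
            .encode("ascii", "ignore")
            .decode("ascii"))

print(to_ascii("é"))     # e
print(to_ascii("café"))  # cafe
```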

Test RPC

We should test RPC workflows (using run images). Also test with AMQP output.

Deepocli doesn't run until the end

Airtable url: https://api.airtable.com/v0/appgWOl8ecHRBewEz/Bugs%20%26%20Issues/rectqktwPIxsnRQEd
INA ran a test on a server equipped with a T4 GPU. I think they are running inference with a directory as input.
It was working, but the command was randomly stopping at around 2%, 25% or 70%.

Here are the versions installed on the machine:
deepomatic-api 0.8.0
deepomatic-cli 0.3.5
deepomatic-rpc 0.8.0

They do get a result, but with only part of the images analysed.

They never experienced this problem on another machine (version 0.8.3 of deepo-cli).

Here is what they see in the logs:

neural-worker_1    | [INFO start_common 2019-11-05 17:05:17,853 20 139747494717184 start_common.py:118] Starting maya-worker-nn ['--workflows', '/var/lib/deepomatic/services/worker-nn/resources/workflows.json']
neural-worker_1    | I1105 17:05:18.144289    20 Resources.cpp:100] Default resource values :
neural-worker_1    | I1105 17:05:18.144327    20 Resources.cpp:101] Maximum memory : 1e+02Gb
neural-worker_1    | I1105 17:05:18.144346    20 Resources.cpp:102] Number of threads : 16
neural-worker_1    | I1105 17:05:18.144356    20 Resources.cpp:103] GPU used : 0
neural-worker_1    | I1105 17:05:18.144358    20 Resources.cpp:110] Using GPU 0 with 15079 MB of global memory.
neural-worker_1    | I1105 17:05:18.177417    20 License.cpp:92] Licensing env established
neural-worker_1    | I1105 17:05:18.178443    20 AbstractWorkerNN.cpp:35] No DBPg found. If you are not running offline, this is an issue.
neural-worker_1    | I1105 17:05:18.178463    20 MapResourcePool.hpp:58] Creating new 'AMQP Channel' with key '140314963634688' (count = 1)
neural-worker_1    | I1105 17:05:18.178485    20 AmqpWrapper.cpp:51] Creating Main channel on AMQP server: user@rabbitmq:5672/deepomatic
neural-worker_1    | E1105 17:05:18.179595    20 MapResourcePool.hpp:64] Could not create resource 'AMQP Channel' with key '140314963634688': a socket error occurred

Command to dump an rtsp stream

For debugging purposes, it is often useful to dump an rtsp stream into an mp4 file.
We could also display it in a window (if X is available).

This probably already works, but it would be more useful without requiring inference arguments.

Draw background color for tags

When using deepo draw with a tagging/classification model, all tags are displayed on a red background. It would be great if we could use a simple red/green/orange color code depending on the score and threshold.

Allow to log predictions

When using the DEBUG log level, we should be able to:

  • display predictions in the DEBUG log level
  • log when we skip frames (or flush queues, with the queue qsize before the flush)
  • maybe also log when frames are sent to the worker, with their track_id or task_id

Add compatibility checks between input and output

We should have a matrix of which input is compatible with which output, instead of lazily warning "no predictions to output" or "no frame to output".

That way we can raise an error and quit early if the two are not compatible (if possible at the argparse level).
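Such a matrix could be a simple dict checked right after argument parsing. The kinds and names below are hypothetical, not the CLI's real internals:

```python
# Hypothetical input/output compatibility matrix.
COMPATIBLE = {
    "image": {"image", "json", "window"},
    "video": {"video", "json", "window", "stdout"},
    "json": {"json"},
}

def check_compatibility(input_kind, output_kind):
    """Raise early instead of lazily warning later in the pipeline."""
    if output_kind not in COMPATIBLE.get(input_kind, set()):
        raise ValueError("input '{}' is not compatible with output '{}'"
                         .format(input_kind, output_kind))
```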

Debug mode

It would be nice if we could activate a debug mode, such as:

if os.getenv('DEBUG'):
    import requests  # imported so that urllib3/requests debug logs are emitted
    import logging
    logging.basicConfig(level=logging.DEBUG)

Draw predictions of one image and show in window

It currently raises:

deepo draw -i https://sites.create-cdn.net/siteimages/28/4/9/284928/15/7/9/15798435/761x1000.jpg -r fashion_v4 -o window
Traceback (most recent call last):
  File "/usr/local/bin/deepo", line 11, in <module>
    sys.exit(main())
  File "/home/hugo/deepomatic/deepocli/deepomatic/cli/__main__.py", line 9, in main
    run(args)
  File "/home/hugo/deepomatic/deepocli/deepomatic/cli/cli_parser.py", line 81, in run
    return args.func(vars(args))
  File "/home/hugo/deepomatic/deepocli/deepomatic/cli/cli_parser.py", line 34, in <lambda>
    draw_parser.set_defaults(func=lambda args: input_loop(args, DrawImagePostprocessing(**args)))
  File "/home/hugo/deepomatic/deepocli/deepomatic/cli/input_data.py", line 88, in input_loop
    inputs = iter(get_input(kwargs.get('input', 0), kwargs))
  File "/home/hugo/deepomatic/deepocli/deepomatic/cli/input_data.py", line 40, in get_input
    return StreamInputData(descriptor, **kwargs)
  File "/home/hugo/deepomatic/deepocli/deepomatic/cli/input_data.py", line 375, in __init__
    super(StreamInputData, self).__init__(descriptor, **kwargs)
  File "/home/hugo/deepomatic/deepocli/deepomatic/cli/input_data.py", line 204, in __init__
    self._open_video()
  File "/home/hugo/deepomatic/deepocli/deepomatic/cli/input_data.py", line 217, in _open_video
    raise Exception("Could not open video {}".format(self._descriptor))
Exception: Could not open video https://sites.create-cdn.net/siteimages/28/4/9/284928/15/7/9/15798435/761x1000.jpg
