hass-deepstack-face's Issues

FR: save_timestamped_file event.

Good day.

Please consider adding the event attribute below, available in HASS-Deepstack-object, for save_file_folder: in HASS-Deepstack-face:

saved_file: the path to the saved annotated image, which is the timestamped file if save_timestamped_file is True, or the default saved image if False

Thank you.

HA Entity state not updating

Hi, I am trying to run Deepstack face recognition with my HA.
HASS is running on one RPi 4 and Deepstack on another RPi 4. The issue I am facing at the moment is that I don't get any recognition update on the entity inside HA; the state of this entity always stays Unknown. However, if I check Deepstack on the other RPi 4 I can see my recognition request arriving from HA, so I am not sure what is wrong. Are there any log files on the HA or Deepstack side?
So either recognition is not occurring or the recognition result is not getting back to HA.
I am also running a Frigate container together with Deepstack on the same RPi 4; not sure if that can have some effect.

HA config:

```yaml
image_processing:
  - platform: deepstack_face
    ip_address: 192.168.31.13
    port: 80
    timeout: 5
    detect_only: False
    save_file_folder: /config/www/snapshots/
    save_timestamped_file: True
    save_faces: True
    save_faces_folder: /config/www/faces/
    show_boxes: True
    source:
      - entity_id: camera.entrance_person
        name: deepstack_face_recognition
```
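One way to narrow this down is to hit the DeepStack face endpoint directly and inspect the JSON it returns, bypassing HA entirely. A minimal sketch (the URL reuses the IP/port from the config above; the image path and helper names are my own):

```python
DEEPSTACK_URL = "http://192.168.31.13:80/v1/vision/face/recognize"


def summarize(response_json: dict) -> str:
    """Condense a DeepStack face response into a one-line summary."""
    if not response_json.get("success"):
        return "request failed: {}".format(response_json.get("error", "unknown"))
    preds = response_json.get("predictions", [])
    names = [p.get("userid", "unknown") for p in preds]
    return "{} face(s): {}".format(len(preds), ", ".join(names) or "-")


def check_deepstack(image_path: str) -> None:
    """POST a test image and print the summary (live request)."""
    import requests  # third-party: pip install requests

    with open(image_path, "rb") as img:
        resp = requests.post(DEEPSTACK_URL, files={"image": img}, timeout=10)
    print(summarize(resp.json()))
```

If `check_deepstack("/config/www/snapshots/test.jpg")` prints a face count here but the entity still shows Unknown, the problem is on the HA side; enabling debug logging for custom_components.deepstack_face in configuration.yaml should then show more.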


Can this be used together with HASS-Deepstack-object?

Should I deploy 2 containers?
Because:
HASS-Deepstack-object:

docker run -e VISION-DETECTION=True -e API-KEY="Mysecretkey" -v localstorage:/datastore -p 5000:5000 --name deepstack -d deepquestai/deepstack

HASS-Deepstack-face:

sudo docker run -e VISION-FACE=True -e API-KEY="Mysecretkey" -v localstorage:/datastore -p 5000:5000 deepquestai/deepstack

Do I have to change port?

FR: save as png

save_faces saves images with very poor quality. Is it possible to save them in better quality, so they can be used for training?

Question regarding setup

I've set up several image processors in the past, and after testing I've decided on this one, running on a Jetson Nano, to do real-time facial recognition for secure entry and notification at my house. I've got the Nano ordered, but thought I'd give it a try through HACS first to see how well it would run on my i5 laptop with VMware and the Home Assistant OVA.

Adding it was easy enough, and I only added 2 camera entities for it to scan. I can see the entities in HA and they currently show "Unknown", like what you show in the setup docs. But if I or anyone goes in front of the camera, the state of the processor entity never changes. I've tried teaching it faces; well, I didn't receive any error anyway.

So my question is: do I need to run this completely separately from HA to get it to work? It seems adding the component through HACS and adding the proper configuration.yaml entries isn't working for me, and I haven't yet set up a separate Docker server outside of what HA has built in. Am I missing something? I've been able to get DOODS, Facebox, and TensorFlow all set up before without any issue. Thanks in advance for all your hard work, and let me know if there is anything I can do in the future to contribute.

Not always take new photo

I have the following script, but for some reason it sometimes doesn't take a new photo when executed and reuses the photo from the previous run.

```yaml
alias: Face Recognition Timbre
sequence:
  - service: image_processing.scan
    data: {}
    target:
      entity_id: image_processing.face
  - delay:
      hours: 0
      minutes: 0
      seconds: 2
      milliseconds: 0
  - service: notify.mobile_app_androidluis
    data_template:
      message: >-
        {{ state_attr('image_processing.face', 'faces').0.name }} fue detectado
        con un nivel de precisión de {{ state_attr('image_processing.face',
        'faces').0.confidence | round(1) }}
      data:
        image: /media/local/deepstack/snapshots/face_latest.jpg
  - service: notify.mobile_app_iphone_luis
    data_template:
      message: >-
        {{ state_attr('image_processing.face', 'faces').0.name }} fue detectado
        con un nivel de precisión de
        {{ state_attr('image_processing.face', 'faces').0.confidence | round(1) }}
      data:
        image: /media/local/deepstack/snapshots/face_latest.jpg
  - service: notify.mobile_app_ipad_de_luisr
    data_template:
      message: >-
        {{ state_attr('image_processing.face', 'faces').0.name }} fue detectado
        con un nivel de precisión de
        {{ state_attr('image_processing.face', 'faces').0.confidence | round(1) }}
      data:
        image: /media/local/deepstack/snapshots/face_latest.jpg
mode: single
```

This script is executed when my doorbell is pressed, so in theory there is a person every time.

Update for image_processing.face_counter_XXXX fails

After upgrading to latest version of HA and Deepstack I'm starting to get this error:

Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 278, in async_update_ha_state
    await self.async_device_update()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 474, in async_device_update
    raise exc
  File "/usr/src/homeassistant/homeassistant/components/image_processing/__init__.py", line 132, in async_update
    await self.async_process_image(image.content)
  File "/usr/src/homeassistant/homeassistant/components/image_processing/__init__.py", line 112, in async_process_image
    return await self.hass.async_add_executor_job(self.process_image, image)
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/deepstack_face/image_processing.py", line 210, in process_image
    self._dsface.recognise(image)
AttributeError: 'DeepstackFace' object has no attribute 'recognise'

Versions as follows:

HA: 2020.12.1
Deep Stack (docker): Latest (Open source models version)
HASS DeepStack: v3.5

One face, one name, and different angles

It does not seem possible to train the system so that it can detect faces from different angles. For example, I took photos of a face in 5 variants: straight on, and at angles of 45 and 90 degrees. I upload the photos and assign each photo the same name. When I then run a check against each photo, some photos no longer match the name I previously assigned. For example, I uploaded a photo where the face looks straight ahead and assigned the name Divan, then I uploaded a photo at 45 degrees and assigned the name Divan, then a photo at 90 degrees also named Divan. When I then check a photo of the face looking directly into the camera, the system no longer recognizes it; it only recognizes the last photo. I understand that you can only upload one photo per face, is that right, or is it still possible to train the system on one face from different angles?

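As far as I can tell from the DeepStack docs, the register endpoint does accept several photos for one userid in a single call, using multipart fields image1, image2, and so on, which is the usual way to cover several angles. A hedged sketch (the field naming is an assumption; verify against your DeepStack version):

```python
def register_payload(userid: str, image_paths: list) -> tuple:
    """Build the form fields for registering several photos under one name."""
    data = {"userid": userid}
    files = {"image{}".format(i + 1): path for i, path in enumerate(image_paths)}
    return data, files


def register_faces(base_url: str, userid: str, image_paths: list) -> None:
    """POST all photos to /v1/vision/face/register (live request)."""
    import requests  # third-party: pip install requests

    data, field_paths = register_payload(userid, image_paths)
    files = {field: open(path, "rb") for field, path in field_paths.items()}
    requests.post(base_url + "/v1/vision/face/register", data=data, files=files)
```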

HomeAssistant - Deepstack GPU v.2021.02.1 - Face detection strange image_processing output

Dear Mark,
thanks for your great Deepstack integration. During my tests everything seems to be working: face recognition seems OK from both the Deepstack and HA sides, but the output of image_processing is this:
```yaml
faces:
  - name: dimitri
    confidence: 55.067
    bounding_box:
      height: 0.36
      width: 0.165
      y_min: 0.258
      x_min: 0.308
      y_max: 0.619
      x_max: 0.473
    prediction:
      confidence: 0.55067384
      userid: dimitri
      y_min: 279
      x_min: 592
      y_max: 668
      x_max: 908
entity_id: image_processing.face_counter
total_faces: null
total_matched_faces: 0
matched_faces: {}
last_detection: 2021-04-29_17-36-01
friendly_name: face_counter
device_class: face
```

I have "registered" my face, and a curl request to Deepstack seems OK (the confidence value is right), but from image_processing I still see "total_faces=0".
The picture grabbed by the image scan also seems to work; I can see the red square on my face.
Do you know why? Has something changed in the Deepstack REST API?
Thanks

UnicodeEncodeError

I am getting the error below when the image-processing face service is called. I had no such problem with the object integration. The error indicates that local language-specific characters are not passed successfully. I do indeed use such a character in the name under the source config. The \u0131 character is a dotless "i", as per https://www.compart.com/en/unicode/U+0131.

  source:
    - entity_id: camera.laxdiskstation_mutfak_kamera
      name: "Mutfak Kamera Yüz Tanıma (GS60)"
2021-10-22 02:07:50 ERROR (MainThread) [homeassistant.helpers.entity] Update for image_processing.mutfak_kamera_yuz_tanima_gs60 fails
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/PIL/ImageDraw.py", line 414, in draw_text
    mask, offset = font.getmask2(
AttributeError: 'ImageFont' object has no attribute 'getmask2'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 438, in async_update_ha_state
    await self.async_device_update()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 625, in async_device_update
    raise exc
  File "/usr/src/homeassistant/homeassistant/components/image_processing/__init__.py", line 138, in async_update
    await self.async_process_image(image.content)
  File "/usr/src/homeassistant/homeassistant/components/image_processing/__init__.py", line 118, in async_process_image
    return await self.hass.async_add_executor_job(self.process_image, image)
  File "/usr/local/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/deepstack_face/image_processing.py", line 247, in process_image
    self.save_image(
  File "/config/custom_components/deepstack_face/image_processing.py", line 335, in save_image
    draw_box(
  File "/usr/src/homeassistant/homeassistant/util/pil.py", line 45, in draw_box
    draw.text(
  File "/usr/local/lib/python3.9/site-packages/PIL/ImageDraw.py", line 469, in text
    draw_text(ink)
  File "/usr/local/lib/python3.9/site-packages/PIL/ImageDraw.py", line 429, in draw_text
    mask = font.getmask(
  File "/usr/local/lib/python3.9/site-packages/PIL/ImageFont.py", line 149, in getmask
    return self.font.getmask(text, mode)
UnicodeEncodeError: 'latin-1' codec can't encode character '\u0131' in position 10: ordinal not in range(256) 
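The root cause is that PIL's default bitmap font only covers latin-1, and 'ı' (U+0131) is outside it. Until the component draws labels with a Unicode-capable TrueType font, one workaround is to sanitize the name before it reaches the drawing code; a sketch (the helper name is my own):

```python
def latin1_safe(text: str) -> str:
    """Replace characters PIL's default font cannot render with '?'."""
    return text.encode("latin-1", "replace").decode("latin-1")
```

Note this keeps 'ü' (which latin-1 covers) but replaces the dotless 'ı', so "Yüz Tanıma" becomes "Yüz Tan?ma".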

Basic plan only supports 5 faces

Hi,
I wanted to try your component because the README states "Choose the basic plan which will give us unlimited access for one installation."

However, on the website it says for the Basic Plan: "Face Recognition API (5 Faces)".

Which of them is true?

Thanks!

Error from Deepstack request, status code: 400

After upgrading to v0.8 I can now save stored faces, but I'm getting this error when trying to teach new ones:

deepstack.core.DeepstackException: Error from Deepstack request, status code: 400

Detailed trace here:

Logger: homeassistant.components.websocket_api.http.connection
Source: custom_components/deepstack_face/image_processing.py:259
Integration: Home Assistant WebSocket API (documentation, issues)
First occurred: 13:48:57 (8 occurrences)
Last logged: 14:01:52

[23254560460912] Error from Deepstack request, status code: 400
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/websocket_api/commands.py", line 135, in handle_call_service
    await hass.services.async_call(
  File "/usr/src/homeassistant/homeassistant/core.py", line 1445, in async_call
    task.result()
  File "/usr/src/homeassistant/homeassistant/core.py", line 1484, in _execute_service
    await self._hass.async_add_executor_job(handler.job.target, service_call)
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/deepstack_face/image_processing.py", line 161, in service_handle
    classifier.teach(name, file_path)
  File "/config/custom_components/deepstack_face/image_processing.py", line 259, in teach
    self._dsface.register(name, image)
  File "/usr/local/lib/python3.8/site-packages/deepstack/core.py", line 277, in register
    response = process_image(
  File "/usr/local/lib/python3.8/site-packages/deepstack/core.py", line 130, in process_image
    raise DeepstackException(
deepstack.core.DeepstackException: Error from Deepstack request, status code: 400

Face and object detection are working fine

deepstack_teach_face

I got this error while running Home Assistant and Deepstack in Docker:

Falha ao chamar o serviço image_processing/deepstack_teach_face ("Failed to call the service image_processing/deepstack_teach_face"). required key not provided @ data['name']

But I have already tried entering this config in the Home Assistant service deepstack_teach_face:

service: deepstack_teach_face
name: Test
file_path: /config/test.jpeg

PS: sorry for the language, I'm from Brazil.
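The missing-key error usually means the fields were not nested under data: in the service call, and the service is registered in the image_processing domain. A sketch of a call that should validate (values unchanged from above):

```yaml
service: image_processing.deepstack_teach_face
data:
  name: Test
  file_path: /config/test.jpeg
```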

403 Error on Docker Container

I have installed the component as per the README; I have Deepstack Face running in a Docker container via Portainer on HA.

Everything seems like it should be working: I can reach Deepstack via localip:port (I get the splash screen) and can confirm that when I attempt to call the service from HA it reaches Deepstack.

But I'm getting a 403 error that I can't resolve. It applies to both the scan and face-learning services.

Example Log output from container:

[GIN] 2021/05/24 - 23:22:51 | 403 | 86.952µs | 192.168.1.64| POST /v1/vision/face/register
[GIN] 2021/05/24 - 23:38:24 | 403 | 29.444µs | 192.168.1.64| POST /v1/vision/face/register
[GIN] 2021/05/24 - 23:40:18 | 403 | 28.929µs | 192.168.1.64| POST /v1/vision/face/recognize

Example log output from supervisor portainer add on log in HA:

2021/05/24 16:31:08 [ERROR] [http,client] [message: unexpected status code] [status_code: 403]

Any advice would be welcome. I've spent most of today trying to identify the issue, let alone resolve it!

Create a sensor for scan_interval

I configured my deepstack with scan_interval: 30, but it apparently doesn't work.

How can I create a sensor in the HA to scan every 10 minutes?
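If scan_interval is not behaving, an automation can drive the scans explicitly. A sketch using a time_pattern trigger every 10 minutes (the image_processing entity id is hypothetical and depends on your config):

```yaml
automation:
  - alias: "Scan for faces every 10 minutes"
    trigger:
      - platform: time_pattern
        minutes: "/10"
    action:
      - service: image_processing.scan
        target:
          entity_id: image_processing.deepstack_face  # hypothetical entity id
```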

Breaking change Home Assistant 2021.12

See robmarkcole/HASS-Deepstack-object#261

Logger: homeassistant.helpers.entity
Source: helpers/entity.py:549
First occurred: 13:14:39 (6 occurrences)
Last logged: 13:14:39

    Entity image_processing.deepstack_face_xxxx (<class 'custom_components.deepstack_face.image_processing.FaceClassifyEntity'>) implements device_state_attributes. Please report it to the custom component author.
    [... cut for brevity ...]
    Entity image_processing.deepstack_object_xxx (<class 'custom_components.deepstack_object.image_processing.ObjectClassifyEntity'>) implements device_state_attributes. Please report it to the custom component author.

Face endpoints not activated

I have object detection working well, but face detection gives me an error in Home Assistant:
[custom_components.deepstack_face.image_processing] Depstack error : Error from Deepstack: Face endpoints not activated

  - platform: deepstack_face
    ip_address: ***.***.*.***
    port: 85
    api_key: /key/
    timeout: 30
    detect_only: False
    scan_interval: 20
    source:
      - entity_id: camera.hoov
        name: face_counter_hoov

Where and how should I activate the endpoint?
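The face endpoint is activated by the container's environment, not from HA: the Deepstack container must be started with VISION-FACE=True. A command adapted from the README examples elsewhere on this page (host port 85 matches the config above; add -e API-KEY=... to match your api_key):

```
docker run -e VISION-FACE=True -v localstorage:/datastore -p 85:5000 -d deepquestai/deepstack
```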

Implement force_update config variable

Hey,

first of all, thank you for this cool integration!
Could you please add the ability to fire an event on each successful response, no matter whether a face is detected or recognised?
My use case is that I want to send the next camera picture immediately after the last one has been processed; the scan_interval config entry is too static for me.
Another possibility could be to implement this as a feature instead of firing an event, i.e. send the new camera image directly when the response arrives, and add a config entry for that.

Thanks in advance!

FR: Add config for minimum confidence level

Hello! Great integration, loving it so far.
The thing I'm missing is a setting to establish a minimum confidence level, so that snapshots which don't rank above that minimum are not saved.
It would be great if that function could be added, or if there were instructions on how to achieve it another way.
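Until such an option exists, the gist of the requested behavior is a simple threshold filter applied before saving. A sketch (names are illustrative, not the component's internals):

```python
def faces_above_threshold(predictions: list, min_confidence: float = 0.6) -> list:
    """Keep only face predictions at or above min_confidence (0-1 scale)."""
    return [p for p in predictions if p.get("confidence", 0.0) >= min_confidence]

# A snapshot would then only be written when the filtered list is non-empty.
```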

Big thanks!

Possible face attributes

Hi,

I'm curious whether this integration will allow me to detect attributes of a face.

What I mean by this is, for example, whether the eyes are closed.

I'm looking to implement a baby monitor with this, if that's possible.

Looking forward to your response!

Kind regards,
Rob

image_processing.scan results in "Unable to find referenced entities"

Following the tutorials of Everything Smart Home.
Got to the point I set up the automation to start deepstack with the image of the camera.

For testing purposes I set up a local camera in hass:

camera:
  - platform: local_file
    name: test_local_file_face
    file_path: /media/frigate/clips/yoosee_frigate-1630426277.427424-9ztudw.jpg

This is just a random detected person image frame of me.

And invoking the service as follows:

service: image_processing.scan
target:
  entity_id: camera.test_local_file_face

And the result in the log is:

WARNING (MainThread) [homeassistant.helpers.service] Unable to find referenced entities camera.test_local_file_face

Looking at the Deepstack output, it also does not show any activity of any sort.
Using something like deepstack-ui or deepstack-trainer shows activity with status code 200; I have already taught it ~187 face images without any problem. Also, deepstack-ui returns the proper person when I manually scan an image of a taught person.

Is the problem within the custom_component or is it with the image I'm using?
Currently using deepquestai/deepstack:cpu-x5-beta as others failed to work properly (giving 500's).
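The warning itself points at the likely cause: image_processing.scan targets the image_processing entity created by this integration, not the camera entity. A sketch (the exact entity id is hypothetical and depends on the name: you configured):

```yaml
service: image_processing.scan
target:
  entity_id: image_processing.deepstack_face  # hypothetical entity id
```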

FR - Only teach a face to deepstack if a single face is detected in the picture

Greetings. For my needs, I would like the Deepstack face integration to only learn a face IF the input image passed to the teach_face service contains a single face. In addition, it would be nice for an event to be fired when a face is taught (or when you try to teach a face). I'm not sure whether you left these features out by design or not. Either way, I have a quick and dirty implementation of this where I first scan the picture before telling Deepstack to learn a face. Let me know if this is something you are interested in adding to your repository.

Something like this (detect_number_of_faces stands in for the preliminary scan):

```python
n_face = detect_number_of_faces(image)
if n_face == 0:
    _LOGGER.info("No face detected in %s", file_path)
elif n_face > 1:
    _LOGGER.info("Multiple faces detected in %s", file_path)
else:
    self._dsface.register(name, image)
    _LOGGER.info("Deepstack face taught name : %s", name)

# Fire an event to notify the frontend
event_data = {
    "person_name": name,
    "image": file_path,
    "faces": n_face,
}
self.hass.bus.fire("deepstack.face_taught", event_data)  # event name illustrative
```

Add run all to docs

```
docker run -e VISION-DETECTION=True -e VISION-SCENE=True -e VISION-FACE=True -e API-KEY="" -v localstorage:/datastore -d -p 5000:5000 --name deepstack deepquestai/deepstack:noavx
```

processing a portrait image turns it in landscape

Manually processing a portrait image with faces from a camera: local_file source recognizes the faces correctly, but saves the image rotated to landscape, causing a second manual process of it to fail to see any faces...

FR: Add a list of names to the integration

I made corrections to the code so that the names are output on one line. Please add this code to your integration, and if you see something that needs fixing, fix it. I'm not a programmer; I've just read some Python literature and tried to implement this, so forgive my mistakes.


```python
import re


def get_valid_filename(name: str) -> str:
    return re.sub(r"(?u)[^-\w.]", "", str(name).strip().replace(" ", "_"))


def get_faces(predictions: list, img_width: int, img_height: int):
    """Return faces with formatting for annotating images."""
    faces = []
    names_list = []
    decimal_places = 3
    for pred in predictions:
        if "userid" not in pred:
            name = "unknown"
        else:
            name = pred["userid"]
        confidence = round(pred["confidence"] * 100, decimal_places)
        box_width = pred["x_max"] - pred["x_min"]
        box_height = pred["y_max"] - pred["y_min"]
        box = {
            "height": round(box_height / img_height, decimal_places),
            "width": round(box_width / img_width, decimal_places),
            "y_min": round(pred["y_min"] / img_height, decimal_places),
            "x_min": round(pred["x_min"] / img_width, decimal_places),
            "y_max": round(pred["y_max"] / img_height, decimal_places),
            "x_max": round(pred["x_max"] / img_width, decimal_places),
        }
        faces.append(
            {"name": name, "confidence": confidence, "bounding_box": box, "prediction": pred}
        )
        # Map the registered (Latin) userids to display names
        if name in ("Valentina", "valentina"):
            name = "Валентина"
        elif name in ("Svetlana", "svetlana"):
            name = "Светлана"
        elif name in ("Aleksandr", "aleksandr"):
            name = "Александр"
        elif name in ("Oleg", "oleg"):
            name = "Олег"
        elif name in ("Artem", "artem"):
            name = "Артем"
        elif name in ("Igor", "igor"):
            name = "Игорь"
        names_list.append(name)
    if faces:  # guard against an empty prediction list
        faces[0]["bounding_box"]["names"] = ", ".join(names_list)
    return faces


def setup_platform(hass, config, add_devices, discovery_info=None):
    """Set up the classifier."""
    if DATA_DEEPSTACK not in hass.data:
        hass.data[DATA_DEEPSTACK] = []
```
Output the sensor:

```yaml
# The sensor displays the name of the identified person
identified_person:
  friendly_name: 'Name of the identified person'
  value_template: >
    {{ states.image_processing.detect_face_smartphone_camera.attributes.faces[0].bounding_box.names }}
```

Error 500 for Face detection, works for object detection

Hi,

Great plugins! Thank you for all the effort. I have been trying to implement these following the guides from the Everything Smart Home tutorials. I'm running Home Assistant on an Ubuntu VM and have installed Deepstack in a container using Portainer on the HA VM. The object detection works perfectly, but I am getting an error on the face detection/recognition.
Here are the logs from the container for a pass on the object but an error on the face:

[screenshot of container logs]

When I look into the stderr.txt file in the logs on the container I get the following:

[screenshot of stderr.txt]

my HA config is like this:

[screenshot of HA config]

can you please point me in the direction of what I am missing?

Training event not firing.

Hi,

I am running Deepstack on my TrueNAS Core server in a Docker container.
My Home Assistant seems to make a proper connection; I get images in my folders with faces labeled as unknown and boxes drawn around them, as advertised.

However, the one thing I can't get to work is training any faces.
It looks like the deepstack_face.teach_face event never fires.

When I listen for this event and then proceed to train the model, no events are found.
I scan the logs of my Docker container and nothing appears.

I am not sure what I have done wrong.

Current config:

image_processing:
  - platform: deepstack_face
    ip_address: 10.2.69.20
    port: 10049
    detect_only: False
    timeout: 30
    save_file_folder: /config/snapshots/
    save_timestamped_file: True
    save_faces: True
    save_faces_folder: /config/faces/
    show_boxes: True
    source:
      - entity_id: camera.voordeur_high
        name: face_counter

not a directory for dictionary value

Trying to get this set up, but I keep getting the following error. I've tried /media and /config/media but the error doesn't go away.

2021-11-09 21:02:43 ERROR (MainThread) [homeassistant.config] Invalid config for [image_processing.deepstack_face]: not a directory for dictionary value @ data['save_faces_folder']. Got '/config/media/deepstack/faces/'
not a directory for dictionary value @ data['save_file_folder']. Got '/config/media/deepstack/snapshots/'. (See /config/configuration.yaml, line 276). Please check the docs at https://github.com/robmarkcole/HASS-Deepstack-face

/config/media is mounted from my NAS, as that's where I want to store all my NVR-related stuff: Dafang, Frigate, Deepstack, etc. The folders are mounted fine.

➜  media ls -l /config/media
total 1
drwxrwxrwx    5 42949672 42949672         5 Nov  9 14:16 Dafang
drwxrwxrwx    4 42949672 42949672         4 Nov  9 14:36 deepstack
➜  media ls -l /config/media/deepstack
total 1
drwxrwxrwx    2 42949672 42949672         2 Nov  9 14:36 faces
drwxrwxrwx    2 42949672 42949672         2 Nov  9 14:36 snapshots
➜  media

Any ideas where I could be going wrong here?

How do I display a list of names in one line?

Can you add an attribute to the integration that outputs the list of displayed faces?

I created a sensor that outputs the list of names on one line, but there is a problem: no commas are inserted, only spaces, so it shows user01_RUS user02_RUS user03_RUS user04_RUS

```yaml
persons_hall_rus:
  friendly_name: 'Persons in hall rus'
  value_template: >
    {% for names in states.image_processing.detect_face_camera.attributes.faces %}
        {% if names.name == 'user01_EN' %} user01_RUS
        {% elif names.name == 'user02_EN' %} user02_RUS
        {% elif names.name == 'user03_EN' %} user03_RUS
        {% elif names.name == 'user04_EN' %} user04_RUS
        {% elif names.name == 'user05_EN' %} user05_RUS
        {% endif %}
    {% endfor %}
```

If I put commas after each name, it will display user01_RUS, user02_RUS, user03_RUS, user04_RUS, with a comma at the end after the last name, which is not correct.

```yaml
value_template: >
    {% for names in states.image_processing.detect_face_camera.attributes.faces %}
        {% if names.name == 'user01_EN' %} user01_RUS,
        {% elif names.name == 'user02_EN' %} user02_RUS,
        {% elif names.name == 'user03_EN' %} user03_RUS,
        {% elif names.name == 'user04_EN' %} user04_RUS,
        {% elif names.name == 'user05_EN' %} user05_RUS,
        {% endif %}
    {% endfor %}
```
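The trailing comma disappears if the names are collected into a list first and then joined. In Python terms, mirroring the template's mapping (in Jinja, collecting into a list and applying the join filter achieves the same):

```python
NAME_MAP = {
    "user01_EN": "user01_RUS",
    "user02_EN": "user02_RUS",
    "user03_EN": "user03_RUS",
    "user04_EN": "user04_RUS",
    "user05_EN": "user05_RUS",
}


def names_line(faces: list) -> str:
    """Join mapped names with ', ' so no comma trails the last name."""
    return ", ".join(NAME_MAP.get(f["name"], f["name"]) for f in faces)
```

For example, names_line([{'name': 'user01_EN'}, {'name': 'user02_EN'}]) yields "user01_RUS, user02_RUS" with no trailing comma.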

Do something like this:

{
    "event_type": "image_processing.detect_face",
    "data": {
        "names_list": [
            "User01, User02, User03, User04, User05 "
        ],
        "entity_id": "image_processing.detect_face_eufy_camera"
    },
    "origin": "LOCAL",
    "time_fired": "2021-05-08T15:33:50.506890+00:00",
    "context": {
        "id": "40fd38a4eee09da31624fcdf186ed0da",
        "parent_id": null,
        "user_id": null
    }
}

Face recognition not working if object recognition is installed too

I have installed both the object and face recognition components.
When I try to run face recognition, in the Deepstack log I see /v1/vision/detection.
This is my configuration for face recognition:

  - platform: deepstack_face
    ip_address: 192.168.1.200
    port: 80
    api_key: !secret deepstack_api_key
    timeout: 5
    detect_only: False
    save_file_folder: /config/www/snapshots/
    save_timestamped_file: False
    save_faces: True
    save_faces_folder: /config/www/faces/
    show_boxes: True
    source:
      - entity_id: camera.camera_11
        name: face_counter

Not getting events sometimes - no faces in the picture?

Hi,

Just set this up to try and recognize people from my unifi protect cameras, I want to replace the large number of person detected notifications I get from their system to something a bit smarter that won't spam me every time I step outside, I'd like to ignore known people and alert only on unknown faces.

The G4 Pros already do person detection, so I created an automation to run image_processing.scan every second on cameras reporting a person in frame, until their detected-object entity goes back to none.
I then went and walked by one and saw some image_processing.detect_face events flow in: great.

Now someone else just walked by a camera, and the automation did trigger a lot of scans, but no events got generated.
Looking at the recorded footage, I suspect the issue is simply the angle: the person's face was almost always facing away from the camera, which unfortunately is going to be true a lot of the time with this.

Is that correct, that running a scan on a picture with no visible faces does nothing at all? If so, I'm thinking a separate event could be triggered when the scan completes saying no faces were found; that might be handy.

If there is indeed nothing returned when the face isn't visible, then my plan B would be something like this:

  • One automation with a trigger on the camera reporting a person in frame, running a deepstack scan per second
  • Another automation with a trigger on the camera reporting a person in frame, with an action to wait on the image_processing.detect_face event with a timeout.
    • If it gets a known face event, do nothing
    • If it gets an unknown face event, notify our phones that an unknown person was seen + picture
    • If it times out after X seconds or minutes with no events, notify our phones that a person was seen but no face could be found + picture.

A 'no face found' event could be nice here to avoid relying on a timeout, but that should be fine.
From the perspective of someone who's been using this integration for more than 5 minutes, does the above make sense or am I missing something?
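The plan-B flow above might be sketched roughly like the automation below. This is only a sketch under assumptions: the person-detection sensor, notifier service, and timeout are placeholders, and the event field names follow this integration's documented `image_processing.detect_face` event.

```yaml
# Sketch only: entity IDs, notifier, and timeout are hypothetical placeholders.
- alias: Notify on unknown or missing face
  trigger:
    - platform: state
      entity_id: binary_sensor.driveway_person  # hypothetical "person in frame" sensor
      to: "on"
  action:
    # Wait for the face-recognition event fired by the other automation's scans
    - wait_for_trigger:
        - platform: event
          event_type: image_processing.detect_face
      timeout: "00:02:00"
    - choose:
        # A face was found but it isn't a taught one
        - conditions: "{{ wait.trigger is not none and wait.trigger.event.data.name == 'unknown' }}"
          sequence:
            - service: notify.mobile_app_phone
              data:
                message: "Unknown person seen at the driveway"
        # Timed out: a person was detected but no face event ever arrived
        - conditions: "{{ wait.trigger is none }}"
          sequence:
            - service: notify.mobile_app_phone
              data:
                message: "Person seen but no face could be found"
      # Known face recognized: fall through and do nothing
```

If the face belongs to a known person, neither branch matches and the automation ends silently, which matches the "do nothing" case in the list above.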

Thanks!

Update for image_processing.face_detection fails

Hi @robmarkcole,

I've run the Deepstack-face custom component (installed via HACS) for a while and it has always worked flawlessly. Today it just stopped working; I'm not sure why. I constantly get the error message 'Update for image_processing.office_face_detection fails'.

This is the full error message from Home Assistant's log:

Source: custom_components/deepstack_face/image_processing.py:210 
First occurred: 2:49:33 PM (1 occurrences) 
Last logged: 2:49:33 PM

Update for image_processing.office_face_detection fails
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 278, in async_update_ha_state
    await self.async_device_update()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 474, in async_device_update
    raise exc
  File "/usr/src/homeassistant/homeassistant/components/image_processing/__init__.py", line 132, in async_update
    await self.async_process_image(image.content)
  File "/usr/src/homeassistant/homeassistant/components/image_processing/__init__.py", line 112, in async_process_image
    return await self.hass.async_add_executor_job(self.process_image, image)
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/deepstack_face/image_processing.py", line 210, in process_image
    self._dsface.recognise(image)
AttributeError: 'DeepstackFace' object has no attribute 'recognise'

Also, since this issue has started occurring, I am unable to teach new faces:

image

This is the image_processing YAML in HA:

- platform: deepstack_face
  ip_address: 192.168.86.5
  port: 5000
  api_key: !secret deepstack_api_key
  save_file_folder: /config/deepstack/
  save_timestamped_file: True
  source:
    - entity_id: camera.office_cam
      name: office_face_detection
homeassistant:
  allowlist_external_dirs: 
    - /config/deepstack/

This is my HA installation:

  • version: 2020.12.1
  • installation_type: Home Assistant OS
  • dev: false
  • hassio: true
  • docker: true
  • virtualenv: false
  • python_version: 3.8.6
  • os_name: Linux
  • os_version: 5.4.84
  • arch: x86_64
  • HASS-Deepstack-face 0.6

I have tried deleting the Docker container and re-installing it with all the below variations and the issue still occurs every time:

  • sudo docker run -e VISION-FACE=True -e VISION-DETECTION=True -e API-KEY="*****" -v localstorage:/datastore -p 5000:5000 --name deepstack -d deepquestai/deepstack:noavx
  • sudo docker run -e VISION-FACE=True -e VISION-DETECTION=True -e API-KEY="*****" -v localstorage:/datastore -p 5000:5000 --name deepstack -d deepquestai/deepstack:latest
  • sudo docker run -e VISION-FACE=True -e VISION-DETECTION=True -e API-KEY="*****" -v localstorage:/datastore -p 5000:5000 --name deepstack -d deepquestai/deepstack

I have tried without 'sudo', with port 80:5000, and with and without VISION-DETECTION. Nothing helped. When launching the container in Docker, everything seems to be working correctly:

image

Any assistance is greatly appreciated!

CodeProject.AI works but gives an error

I switched to CodeProject.AI, changing only the port number under image_processing. Faces are matched and framed as before, but every scan logs the following error:


Logger: homeassistant.helpers.entity
Source: custom_components/deepstack_face/image_processing.py:226
Integration: deepstack_face (documentation)
First occurred: 12:14:14 PM (50 occurrences)
Last logged: 1:16:10 PM

Update for image_processing.bedroom_faces fails
Update for image_processing.kitchen_faces fails
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 515, in async_update_ha_state
    await self.async_device_update()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 710, in async_device_update
    raise exc
  File "/usr/src/homeassistant/homeassistant/components/image_processing/__init__.py", line 139, in async_update
    await self.async_process_image(image.content)
  File "/usr/src/homeassistant/homeassistant/components/image_processing/__init__.py", line 119, in async_process_image
    return await self.hass.async_add_executor_job(self.process_image, image)
  File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/deepstack_face/image_processing.py", line 226, in process_image
    self._predictions = self._dsface.recognize(image)
  File "/usr/local/lib/python3.10/site-packages/deepstack/core.py", line 305, in recognize
    return response["predictions"]
KeyError: 'predictions'

Face was recognized even below the minimum confidence threshold; how can I avoid this?

Dear Rob,

please help me with your fantastic piece of software. I taught my face to deepstack with the deepstack_teach_face service. I sadly noted that deepstack identified my garbage man as myself :)

What I mean is that during the latest recognition, it gave me back my name as recognized, but with a confidence of 59%, lower than the default threshold (which is 67% if I'm not mistaken). I guess the face registration process is done with this default confidence threshold, right? I couldn't find a way to set a minimum threshold with the service during the registration process.

image

Why did it return my name when the confidence is so low? How can I avoid situations like this in the future?
Many thanks for your efforts, Rob!
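One workaround (a sketch, not a confirmed fix) is to gate your own automations on the confidence reported in the `image_processing.detect_face` event, ignoring matches below a threshold you choose. The notifier entity here is a hypothetical placeholder, and the `name`/`confidence` fields are assumed to follow this integration's event data:

```yaml
# Sketch: treat a recognized name as valid only above 80% confidence.
- alias: Announce recognized face
  trigger:
    - platform: event
      event_type: image_processing.detect_face
  condition:
    - condition: template
      value_template: "{{ trigger.event.data.confidence | float(0) > 80 }}"
  action:
    - service: notify.mobile_app_phone  # hypothetical notifier
      data:
        message: "Recognized {{ trigger.event.data.name }}"
```

This doesn't change what Deepstack reports, but it keeps low-confidence matches like the 59% one above from triggering anything.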

Teach face service ( home assistance )

Hi,
I am trying to add faces but I am getting this error:

Failed to call service image_processing.deepstack_teach_face. extra keys not allowed @ data['sequence'][0]['file_path']. Got '/config/www/faces/moh_1.jpg' extra keys not allowed @ data['sequence'][0]['name']. Got 'mohammed'

The service code:
service: image_processing.deepstack_teach_face
name: mohammed
file_path: /config/www/faces/moh_1.jpg

There are no spaces, and the YAML is typed rather than copied and pasted.
Adding { or " or ' gives me the same error.
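The "extra keys not allowed @ data['sequence'][0]" message suggests the keys are at the wrong level: inside a script or automation sequence (and in Developer Tools in YAML mode), the service parameters usually need to sit under a `data:` key rather than alongside `service:`. A sketch of the corrected call, using the same values as above:

```yaml
service: image_processing.deepstack_teach_face
data:
  name: mohammed
  file_path: /config/www/faces/moh_1.jpg
```

This is a guess based on the error text, not a confirmed fix; the file must also live under an allowed directory (e.g. `allowlist_external_dirs`) for Home Assistant to read it.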
