
video-analyzer's Introduction

Deprecated - Azure Video Analyzer

We’re retiring the Azure Video Analyzer preview service; you are advised to transition your applications off Video Analyzer by 01 December 2022. This repo is no longer being maintained.

Introduction

Azure Video Analyzer (AVA) provides a platform for you to build intelligent video applications that span the edge and the cloud. The platform consists of an IoT Edge module and an Azure service. It offers the capability to capture, record, and analyze live video and to publish the results (video and insights from video) to the edge or the cloud.

Azure Video Analyzer on IoT Edge

Azure Video Analyzer is an IoT Edge module whose functionality can be combined with other Azure edge modules, such as Stream Analytics on IoT Edge and Cognitive Services on IoT Edge, as well as Azure services in the cloud, such as Event Hubs and Cognitive Services, to build powerful hybrid (edge + cloud) applications. Video Analyzer is designed to be a pluggable platform, enabling you to plug in video analysis edge modules (e.g. Cognitive Services containers, or custom edge modules built by you with open-source machine learning models or models trained with your own data) and use them to analyze live video without worrying about the complexity of building and running a live video pipeline.
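
To make the pluggable-pipeline idea concrete, analysis is described declaratively as a pipeline topology: a JSON document wiring sources, processors, and sinks together. The sketch below is illustrative only; it is trimmed from the shapes used in the public samples, the bracketed URLs and names are placeholders, and required fields may be omitted, so treat it as an outline rather than a deployable topology.

    {
        "@apiVersion": "1.1",
        "name": "TopologySketch",
        "properties": {
            "sources": [
                {
                    "@type": "#Microsoft.VideoAnalyzer.RtspSource",
                    "name": "rtspSource",
                    "endpoint": {
                        "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
                        "url": "rtsp://<your-camera>:554/<stream-path>"
                    }
                }
            ],
            "processors": [
                {
                    "@type": "#Microsoft.VideoAnalyzer.HttpExtension",
                    "name": "httpExtension",
                    "endpoint": {
                        "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
                        "url": "http://<your-inference-module>:80/score"
                    },
                    "inputs": [ { "nodeName": "rtspSource" } ]
                }
            ],
            "sinks": [
                {
                    "@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
                    "name": "hubSink",
                    "hubOutputName": "inferenceOutput",
                    "inputs": [ { "nodeName": "httpExtension" } ]
                },
                {
                    "@type": "#Microsoft.VideoAnalyzer.VideoSink",
                    "name": "videoSink",
                    "videoName": "<your-video-name>",
                    "localMediaCachePath": "/var/lib/videoanalyzer/tmp",
                    "localMediaCacheMaximumSizeMiB": "1024",
                    "inputs": [ { "nodeName": "rtspSource" } ]
                }
            ]
        }
    }

Plugging in a different analysis module then amounts to swapping the extension node (or the inference container behind it) while the rest of the pipeline stays intact.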

With Video Analyzer, you can continue to use your CCTV cameras with your existing video management systems (VMS) and build video analytics apps independently. Video Analyzer can be used in conjunction with existing computer vision SDKs and toolkits to build cutting-edge, hardware-accelerated IoT solutions for live video analytics. Apart from analyzing live video, the edge module also lets you optionally record video locally on the edge or to the cloud, and publish video insights to Azure services (on the edge and/or in the cloud). If video and video insights are recorded to the cloud, the Video Analyzer cloud service can be used to manage them.

The Video Analyzer cloud service can therefore also be used to enhance IoT solutions with VMS capabilities such as recording, playback, and exporting (generating video files that can be shared externally). It can also be used to build a cloud-native solution with the same capabilities, as shown in the diagram below, with cameras connecting directly to the cloud.



This repo

This repository is a starting point to learn about and engage in Video Analyzer open source projects. It is not an official Video Analyzer product support location; however, we will respond to issues filed here as best we can.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

To find opportunities to contribute, look for a "Contributions needed" section in the Readme.md of any folder.

License

This repository is licensed under the MIT License.

Microsoft Open Source Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct.

video-analyzer's People

Contributors

anilmur, bhargaviannadevara, fvneerden, gadamilan, jasonxian-msft, microsoftopensource, mskeith, naiteeks, nicolasbotto, nikitapitliya, nofaredanoren, russell-cooks


video-analyzer's Issues

Deployment of edge module failing with InternalServerError

Deployment of the sample fails for me. The deploy-video-analyzer-resources deployment's videoAnalyzer, storage, and identity deployments succeed, but the Microsoft.Media/videoAnalyzers/edgeModules resource named avasample2friogalnq3x2/avaedge fails with this error:

{ "status": "Failed", "error": { "code": "ResourceDeploymentFailure", "message": "The response for resource had empty or invalid content." } }

It worked initially (yesterday), but 3 subsequent attempts at deployment have failed with this error.

Tutorial: Record and stream inference metadata with video

I'm trying to implement this tutorial with the Visual Studio Code AVA extension:
https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-docs/record-stream-inference-data-with-video

The tutorial shows a schematic in which the object tracking processor feeds into the video sink. I tried to implement it, but two errors occurred, including: "There are no matching media types between nodes 'objectTrackingProcessor' and 'videoSink'".

I read in this document that Video Sink has limitations: "Must be immediately downstream from RTSP source or signal gate processor."

https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-docs/quotas-limitations

So is it not possible to link the video sink to the object tracker, as the tutorial shows?
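
For reference, one wiring that satisfies the stated limitation is to feed the video sink directly from the RTSP source (or a signal gate) and route the tracker's results to an IoT Hub message sink instead. This is a hedged sketch, not the tutorial's exact topology; the node names come from the error above and the rest are placeholders:

    "sinks": [
        {
            "@type": "#Microsoft.VideoAnalyzer.VideoSink",
            "name": "videoSink",
            "videoName": "<your-video-name>",
            "inputs": [ { "nodeName": "rtspSource" } ]
        },
        {
            "@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
            "name": "hubSink",
            "hubOutputName": "inferenceOutput",
            "inputs": [ { "nodeName": "objectTrackingProcessor" } ]
        }
    ]

Whether this also records the inference metadata alongside the video, as the tutorial intends, is worth verifying against the tutorial's published topology file.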

USB-to-RTSP not working on Linux arm64

Hi there,

I hope you are doing well. I have been trying to convert my USB camera feed to RTSP for over a month and haven't found a suitable way of doing it. I am using a Linux arm64 OS on a Raspberry Pi 4 (4 GB RAM). I have followed the steps given in https://github.com/Azure/video-analyzer/tree/main/edge-modules/sources/USB-to-RTSP . Everything goes smoothly without any errors, but when I run the main Docker file, which is supposed to convert the USB camera feed to RTSP, it doesn't do it: I can't access the output 'rtsp://127.0.0.1/stream1' in VLC or any other streaming platform. I have tried my best to ensure every step is followed correctly, but I haven't seen any results. When I tried to make it work on my PC (Linux amd64), it actually works. I would appreciate your help; it's urgent, as I need to make my AI solution work on the Raspberry Pi. I am using Azure Video Analyzer, which only supports RTSP streams. Kindly educate me if there is a way to make my USB camera stream work with Azure Video Analyzer.

Regards
Sheraz Faisal
[email protected]

Can't set environment variables in layered deployment

Hi,

We've started deploying the video analyzer module using layered deployment. Using non-layered deployment everything works fine. But when we use layered deployment, the LOCAL_USER_ID and LOCAL_USER_GROUP are ignored.

The error message we get at module startup:

<4> 2022-03-21 14:18:44.751 +00:00 Application: LOCAL_USER_ID and LOCAL_GROUP_ID environment variables are not set. The program will run as root! For optimum security, make sure to set LOCAL_USER_ID and LOCAL_GROUP_ID environment variables to a non-root user and group. See https://aka.ms/ava-iot-edge-prod-checklists-user-accounts for more information.; Context: activityId=cbc78a97-41f1-4c51-afc6-439fa8190cac

Our layered deployment looks like this:

{
    "Id": "mtningviol2-devops-deployment-test-layered-2",
    "SchemaVersion": "1.0",
    "Labels": null,
    "Content": {
        "ModulesContent": {
            "$edgeAgent": {
                "properties.desired.modules.azurevideoanalyzeredge": {
                    "version": "1.0",
                    "type": "docker",
                    "status": "running",
                    "restartPolicy": "always",
                    "settings": {
                        "image": "mcr.microsoft.com/media/video-analyzer:1.1",
                        "createOptions": "{\u0022HostConfig\u0022:{\u0022LogConfig\u0022:{\u0022Type\u0022:\u0022\u0022,\u0022Config\u0022:{\u0022max-size\u0022:\u002210m\u0022,\u0022max-file\u0022:\u002210\u0022}},\u0022Binds\u0022:[\u0022/var/media/:/var/media/\u0022,\u0022/var/lib/videoanalyzer:/var/lib/videoanalyzer\u0022,\u0022/var/lib/videoanalyzer/logs:/var/lib/videoanalyzer/logs\u0022],\u0022IpcMode\u0022:\u0022host\u0022,\u0022ShmSize\u0022:1536870912}}"
                    },
                    "env": {
                        "LOCAL_USER_ID": {
                            "value": "1010"
                        },
                        "LOCAL_USER_GROUP": {
                            "value": "1010"
                        }
                    }
                }
            },
            "$edgeHub": {
                "properties.desired.routes.azurevideoanalyzeredgeEdgeToIoTHub": "FROM /messages/modules/azurevideoanalyzeredge/outputs/* INTO $upstream"
            },
            "azurevideoanalyzeredge": {
                "properties.desired.ProvisioningToken": "eyJhbGciOiJSUzI1NiIsImtpZCI6ImROdl91aS02dG84YlUzbHNoX0NPb0pWZEFYVSIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJCSU9CMTI0IiwiYXVkIjpbImh0dHBzOi8vZGQ5MmMyYzRkYTE3NDJiZWIzY2EyNDg2YWYyMGRjYjIuZWRnZWFwaS5ub3J0aGV1cm9wZS52aWRlb2FuYWx5emVyLmF6dXJlLm5ldC9lZGdlTW9kdWxlcy9CSU9CMTI0Il0sImV4cCI6MTY0NDQzNzY4NiwidG9rZW5UeXBlIjoiUHJvdmlzaW9uaW5nVG9rZW4iLCJtb2R1bGVBcm1JZCI6Ii9zdWJzY3JpcHRpb25zLzY4NzI0NmI1LTQ4MmItNDM1ZC1iYjNjLWQ4MGIxYjBjNTI2MC9yZXNvdXJjZUdyb3Vwcy9iaW8tdXR2LWtwLXJnL3Byb3ZpZGVycy9NaWNyb3NvZnQuTWVkaWEvdmlkZW9BbmFseXplcnMvYmlvdXR2a3B2aWRlb2FuYWx5emVyL2VkZ2VNb2R1bGVzL0JJT0IxMjQiLCJtb2R1bGVJZCI6IjZhNWMxNWYxLWFmNmItNDJkYy1hYjYyLWQwOTM5MjVmMmI5YSIsImlzcyI6Imh0dHBzOi8vbm9ydGhldXJvcGUudmlkZW9hbmFseXplci5henVyZS5uZXQvIn0.pooC62UHvXz-psdtXeV_s5zNn6hVa5PAWtbudUaih4mRxrWkKCYqUKsK4_3HXM0lCrglQIYWoEvSDU9HXeiZPewloBQJdvzQ_-sXIn-wumQ0UvOg-sbmAtMQQjdNDdmrhwQbvFHL6Bp0E-GCsnWC_JiPA3_JhZmBCL_WMja6vFKXwVzsdzDL2xOcCvdwaCIkmuADrR8yXhAFaTHmeDIrBSPH2t-zaJbxesQIcwLAb-8gLE59t3Q1BNSUaSUr0TdrbLVpX4BJruxbTij6m_mdpMwDP0PC0yxxQlI132HRhDT9fIqXLzPx_fwQ8bnGgDB24qxIiI9vTMfWefiL5DkRSQ",
                "properties.desired.DebugLogsDirectory": "/var/lib/videoanalyzer/logs",
                "properties.desired.ApplicationDataDirectory": "/var/lib/videoanalyzer",
                "properties.desired.DiagnosticsEventsOutputName": "diagnostics",
                "properties.desired.OperationalEventsOutputName": "operational",
                "properties.desired.LogLevel": "Verbose",
                "properties.desired.LogCategories": "Application,Events,MediaPipeline",
                "properties.desired.AllowUnsecuredEndpoints": "false",
                "properties.desired.TelemetryOptOut": "false"
            }
        },
        "ModuleContent": null,
        "DeviceContent": null
    },
    "ContentType": "assignment",
    "TargetCondition": "deviceId=\u0027MST299D\u0027",
    "CreatedTimeUtc": "0001-01-01T00:00:00",
    "LastUpdatedTimeUtc": "0001-01-01T00:00:00",
    "Priority": 20,
    "SystemMetrics": null,
    "Metrics": {
        "Results": {},
        "Queries": {}
    },
    "

Versions

IoT Edge version: 1.1.8
EFLOW version: 1.1.2112.20121
Hyper-V version: 10.0.19041.1
Windows Host: Windows 10 Pro 20H2, 19042.1288
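
Two things may be worth double-checking here, offered as guesses rather than confirmed fixes. First, the module log above asks for LOCAL_USER_ID and LOCAL_GROUP_ID, while the deployment sets LOCAL_USER_GROUP. Second, if the layered deployment's env section is not being merged, the variables can also be passed inside the container create options, which Docker applies directly. A minimal sketch of the latter (binds and log options omitted for brevity):

    "settings": {
        "image": "mcr.microsoft.com/media/video-analyzer:1.1",
        "createOptions": "{\"Env\":[\"LOCAL_USER_ID=1010\",\"LOCAL_GROUP_ID=1010\"],\"HostConfig\":{\"IpcMode\":\"host\",\"ShmSize\":1536870912}}"
    }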

Offline Support of Solution

Our customers range from the DOD to the USDA to the National Institutes of Health (NIH) and more. Most of our use cases for deploying edge solutions require those devices to be offline for a period of time, and for production deployments the short window that is currently set won't work for most of them. We like the product and see good use for it across various customers, but unless the offline support window is extended, or the solution is somehow designed to run fully offline, we can't deploy it on a majority of our projects. Even a minimum of 72 hours offline might work; preferably, the solution would be able to run offline for a few weeks as well.

A case in point: the 82nd Airborne and their immediate response team were just deployed to Afghanistan to assist in the evacuation of the embassy. The ASE devices they have deployed could use some of these features, but they will have very limited connectivity for the few weeks they are deployed.

Running grpcExtensionOpenVINO tutorial should not give permission denied error on accessing shared memory

I'm running the grpcExtensionOpenVINO topology tutorial against the deployment.openvino.grpc.template deployment. All modules look to be running. I get a protocol error when avaextension:5001 is called.

      [IoTHubMonitor] [2:50:57 PM] Message received from [avasample-iot-edge-device/avaedge]:
      {
        "body": {
          "code": "connection_terminated",
          "target": "avaextension:5001",
          "protocol": "grpc"
        },
        "properties": {
          "topic": "/subscriptions/14bbbdfe-3992-46e7-9a9e-65aae95e071a/resourceGroups/ava-record2cloud/providers/Microsoft.Media/videoAnalyzers/avasamplepa4h3hcmivqhq",
          "subject": "/edgeModules/avaedge/livePipelines/Sample-Pipeline-1/processors/grpcExtension",
          "eventType": "Microsoft.VideoAnalyzer.Diagnostics.ProtocolError",
          "eventTime": "2021-11-27T14:50:57.118Z",
          "dataVersion": "1.0"
        },

It seems to be a permissions error with access to shared memory

    PermissionError: [Errno 13] Permission denied: '/dev/shm/inference_client_share_memory_8406712821321267488'
    Exception ignored in: <function SharedMemoryManager.__del__ at 0x7f2e11b1a5e0>
    Traceback (most recent call last):
      File "/home/video-analytics-serving/samples/lva_ai_extension/common/shared_memory.py", line 145, in __del__
        self._shm_file.close()
    AttributeError: 'SharedMemoryManager' object has no attribute '_shm_file'
    {"levelname": "INFO", "asctime": "2021-11-27 14:41:57,941", "message": "Exception:\n\tFile name: /home/video-analytics-serving/samples/lva_ai_extension/server/media_graph_extension.py\n\tLine number: 85\n\tLine: self.shared_memory_manager = SharedMemoryManager(\n\tValue: [Errno 13] Permission denied: '/dev/shm/inference_client_share_memory_8406712821321267488'", "module": "exception_handler"}
    ERROR:grpc._server:Exception iterating responses: [Errno 13] Permission denied: '/dev/shm/inference_client_share_memory_8406712821321267488'
    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/grpc/_server.py", line 453, in _take_response_from_response_iterator
        return next(response_iterator), True
      File "/home/video-analytics-serving/samples/lva_ai_extension/server/media_graph_extension.py", line 330, in ProcessMediaStream
        client_state = State(request.media_stream_descriptor)
      File "/home/video-analytics-serving/samples/lva_ai_extension/server/media_graph_extension.py", line 85, in __init__
        self.shared_memory_manager = SharedMemoryManager(
      File "/home/video-analytics-serving/samples/lva_ai_extension/common/shared_memory.py", line 54, in __init__
        self._shm_file = open(self._shm_file_full_path, 'r+b')

I've tried adding a binding for /dev/shm to the deployment and also attempted to use the latest tag, 0.6.1-dlstreamer-edge-ai-extension, but I still get permission denied on the shared memory. For example, I added the following:

    "avaextension": {
                "version": "1.0",
                "type": "docker",
                "status": "running",
                "restartPolicy": "always",
                "settings": {
                  "image": "intel/video-analytics-serving:0.6.1-dlstreamer-edge-ai-extension",
                  "createOptions": {
                    "ExposedPorts": {
                      "80/tcp": {},
                      "5001/tcp": {}
                    },
                    "HostConfig": {
                      "Binds": ["/tmp:/tmp", "/dev/shm:/dev/shm"],

Need option to set timezone for the video player

Currently all timings in the player are in UTC, which makes the timeline and time container pretty useless.

We need an option to configure the user's timezone so that the timings are adjusted accordingly.

AVA on Azure Stack Edge GPU

Is there new documentation about the necessary steps/adjustments to run AVA on Azure Stack Edge, similar to:
LVA on Stack Edge

The module works for some time but then goes down, sometimes with error code 139 and sometimes without an error code. I have already created mounts for /var/media and /var/lib/videoanalyzer in the device twin, but it does not seem to fix the issue.

Ava number of messages

Hi, I'm trying to implement Azure Video Analyzer on Azure IoT Edge, but the AVA module is communicating with Azure IoT Hub and consuming thousands of my daily messages in a few hours.
Can anyone explain why it is sending or receiving so many messages when I'm not sending any to the cloud? And is there anything I can do to reduce it?
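
Much of that volume is typically the module's own diagnostics and operational event outputs being routed upstream: a wildcard route such as FROM /messages/modules/avaedge/outputs/* INTO $upstream forwards all of them. As a hedged sketch, assuming the output names from the layered deployment shown earlier on this page and a module named avaedge, a narrower route forwards only operational events and leaves diagnostics events off IoT Hub entirely:

    "$edgeHub": {
        "properties.desired.routes.operationalToHub": "FROM /messages/modules/avaedge/outputs/operational INTO $upstream"
    }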

Is this repo working or not?

I'm trying to build a Docker file from this repo locally and want to do inference. I built the Docker image and ran the container, but when testing with a POST method it throws a 502 Bad Gateway error.

At the same time, when I run the image from MCR (docker run --name my_yolo_container -p 8080:80 -d -i mcr.microsoft.com/ava-utilities/avaextension:http-yolov3-onnx-v1.0), it works.

Can anyone help me figure out whether the repository is not updated, or help by sharing the MCR repo?

Thank you

VideoSink should store video to LocalMediaCachePath when internet connectivity is lost

When specifying localMediaCachePath as below, I am not seeing any videos saved when the edge device loses internet connectivity. The edge device is an Azure Stack Edge Mini-R. When connectivity is restored, video is recorded to the Video Analyzer storage account.

          {
              "@type": "#Microsoft.VideoAnalyzer.VideoSink",
              "localMediaCachePath": "/var/lib/videoanalyzer/tmp",
              "localMediaCacheMaximumSizeMiB": "2048",
              "videoName": "motion-with-multiple-grpc-extensions",
              "videoCreationProperties": {
                  "title": "motion-with-multiple-grpc-extensions",
                  "description": "Sample video using motion with gRPC extension",
                  "segmentLength": "PT30S"
              },
              "name": "videoSink",
              "inputs": [
                  {
                      "nodeName": "signalGateProcessor",
                      "outputSelectors": [
                          {
                              "property": "mediaType",
                              "operator": "is",
                              "value": "video"
                          }
                      ]
                  }
              ]
          }

I verified that the avaedge container has a bind for /var/lib/videoanalyzer (mapped to /media/appdata on the host) and that tmp exists.

Mounts:
/var/lib/videoanalyzer from mediaappdata (rw)

What could I be missing?
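
For comparison, the deployment manifests elsewhere on this page pair localMediaCachePath with a host bind covering the same directory, so that /var/lib/videoanalyzer/tmp resolves to persistent host storage. This is a sketch of that pairing, not a diagnosis of the problem above:

    "HostConfig": {
        "Binds": [
            "/var/media/:/var/media/",
            "/var/lib/videoanalyzer:/var/lib/videoanalyzer"
        ]
    }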

Cannot get AVA working in IoT Edge Simulator - System.ArgumentNullException

For development purposes we would like to use AVA in the IoT Edge Simulator (iotedgehubdev), but we are encountering some issues. After finally being able to start it (we had to remove the Binds, for example, since iotedgehubdev doesn't support them), we got the following crash:

avaedge       | Unhandled exception. System.ArgumentNullException: String reference not set to an instance of a String. (Parameter 's')
avaedge       |    at System.Text.Encoding.GetBytes(String s)
avaedge       |    at Microsoft.Media.LiveVideoAnalytics.Common.Utilities.StableHash.GetStableHash(String input)
avaedge       |    at Microsoft.Media.LiveVideoAnalytics.Edge.Modules.MediaEdge.Hosts.IoTEdge.Core.DataIntegrityEnforcer.EnsureApplicationDirectoryDataIsValidAsync()
avaedge       |    at Microsoft.Media.LiveVideoAnalytics.Edge.Modules.MediaEdge.Hosts.IoTEdge.Core.MediaEdgeModule.InitializeAsync()
avaedge       |    at Microsoft.Media.LiveVideoAnalytics.Edge.Modules.MediaEdge.Hosts.IoTEdge.Core.MediaEdgeModule.InitializeAsync()
avaedge       |    at Microsoft.Media.LiveVideoAnalytics.Edge.Common.Hosts.IoTEdge.EdgeHost.SetModuleAsync(IEdgeModule module)
avaedge       |    at Microsoft.Media.LiveVideoAnalytics.Edge.Modules.MediaEdge.Hosts.IoTEdge.Program.Main(String[] args)
avaedge       |    at Microsoft.Media.LiveVideoAnalytics.Edge.Modules.MediaEdge.Hosts.IoTEdge.Program.Main(String[] args)
avaedge       |    at Microsoft.Media.LiveVideoAnalytics.Edge.Modules.MediaEdge.Hosts.IoTEdge.Program.<Main>(String[] args)

There is a parameter not set, but we are not sure which one. Could anyone help us with this? These are the properties we put in the module twin:

{
    "applicationDataDirectory": "/var/lib/videoanalyzer",
    "provisioningToken": "MASKED",
    "diagnosticsEventsOutputName": "diagnostics",
    "operationalEventsOutputName": "operational",
    "logLevel": "information",
    "LogCategories": "Application,Events",
    "allowUnsecuredEndpoints": true,
    "telemetryOptOut": false
}

Show more than one instance of Video Analyzer Widget in Angular app.

Hi,

I am trying to show more than one instance of the Video Analyzer widget in an Angular web app, i.e. more than one video on the same page.

I can see both widgets, but they get the same video resource.

I am creating each instance as follows, in typescript, with different videoNames:

        const config: IAvaPlayerConfig = {
          videoName: <videoName>,
          clientApiEndpointUrl: <clientApiEndpointUrl>,
          token: <token>
        };

        const avaPlayer = new Player(config);
        this.playerContainer.nativeElement.append(avaPlayer);

        avaPlayer.load();

I can see two calls to /videos/?api-version=2021-05-01-preview, each with a different videoName, but then two calls to /videos//listStreamingToken?api-version=2021-05-01-preview with the same videoName.

Is there a limit in the Video Analyzer widget such that only one instance can be rendered on a page, or am I doing something upside down?

VideoSink-VideoPublishingOptions : DisableRtspPublishing set to true still makes widget enter Live-mode

Hi,

I've been testing cloud pipelines and the sample topology: Live capture, record, and stream from RTSP camera behind firewall

When setting the DisableRtspPublishing setting to true on the videoSink, the expected behavior is that the widget cannot enter Live mode, because the low-latency RTSP URL is not published.

However, when the pipeline is active, the Video Analyzer widget still goes into Live mode.

Is there another way to disable Live mode in Video Analyzer or the Video Analyzer widget?

Regards

deploy-container-registry error on deploying the sample deployment via Deploy To Azure

I have tried a few times, but I keep getting an error when it deploys the container registry in the sample deployment solution:

The registry DNS name avasampleregistrysxn3dqx3nhjli.azurecr.io is already in use. You can check if the name is already claimed using the following API: https://docs.microsoft.com/en-us/rest/api/containerregistry/registries/checknameavailability

It looks to me like the randomization of the ACR name isn't working?

Updated topology file for OpenVINO DL Streamer Extension

Problem turning this into modules on the IoT device

I have one IoT Edge device, an Intel mini PC. After a lot of experimentation I am able to use the topology and JSON to do two things:

  1. openvino face detection
  2. yolo v3 object detection

My question is, how do I turn this into a module? The example operations JSON has a manual input feature; I have bypassed that and separated the single JSON into two, one for starting and one for ending the live pipeline.

  1. Can we have a docker module where this is being used?
  2. Can we run two separate live pipelines on the same device (see the sketch below)? For example, out of the above two modules only YOLO is reporting objects; however, upon disabling YOLO, OpenVINO starts giving faces. So I have a feeling that maybe two separate topologies cannot be deployed, or something.
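
On question 2: live pipelines are created and started via direct methods on the avaedge module, and nothing in the public samples suggests a one-pipeline limit; each pipeline just needs its own name and, typically, its own topology or parameter values. A hedged sketch of a livePipelineSet payload with placeholder names:

    {
        "@apiVersion": "1.1",
        "name": "<pipeline-1-name>",
        "properties": {
            "topologyName": "<topology-name>",
            "parameters": [
                { "name": "rtspUrl", "value": "rtsp://<camera-1>" }
            ]
        }
    }

A second livePipelineSet call with a different name (pointing at the same or another topology), followed by livePipelineActivate for each, should run two pipelines side by side, subject to the device's resources.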

Provisioning token for multiple devices

Hi,

We will be deploying the Video Analyzer module on hundreds of IoT Edge devices. Do we have to create one edge module entry (and its accompanying provisioning token) for each of these devices in the Video Analyzer settings? Or can we use the same provisioning token for all devices?

Best regards,
Björn

Unable to view stream on Azure Video Analyzer portal due to RtspIngestionSourceDisconnected

Hello. I am using Azure Video Analyzer to connect cameras to the cloud using the remote device adapter method. I have done the following steps:
• I have attached the IoT Hub to Azure Video Analyzer.
• I have successfully installed the AVA edge module on my IoT Edge device.
• I have tested that the AVA module is working fine via a successful direct method response.
For the remote adapter method I have done the following steps:

  1. Created an IoT device on the IoT Hub.

  2. Got a successful response from the remoteDeviceAdapterSet method to connect the IoT device to the IoT Edge device as a gateway, using the newly created device ID, the IP address of the camera, and the primary connection string.

  3. In the Azure Video Analyzer portal, I created the record-behind-the-firewall topology.

  4. I used TCP as the transport, an unsecured endpoint, and the IoT device remote tunnel in the topology.

  5. After creating the topology, I successfully created a pipeline by setting the camera's parameters.

  6. I gave the RTSP URL as rtsp://localhost:554/cam/realmonitor?channel=1&subtype=0.

  7. The above stream was working fine in VLC media player.

  8. After creating the pipeline, I activated it and got an error in the video livestream:
    Errors related to streaming. Shaka Error STREAMING.FAILED_TO_CREATE_SESSION (404,RTSP/1.0 404 NotFound CSeq: 1 Server: Azure_Media_Services_RTSP_Server/1.0 Date: Sat, 15 Jan 2022 16:56:32 GMT )

  9. I checked the device troubleshooting logs and I am getting the following error:
    An error occurred when processing the request. Error message: An error has occurred while communicating with an upstream server. Error details: Code: RemoteDeviceAdapterTunnelConnectionError, Target: wss://xxxxxxxxxxxxxxxx.device-tunnel.westeurope.videoanalyzer.azure.net/livePipelines/rtsp15014/nodes/rtspSource, Message: DestinationError(NotAWebSocket) The server returned status code '401' when status code '101' was expected...; Context: activityId=xxxxxxxxxxxxxxxxx <6> 2022-01-15 16:57:39.852 +00:00 Application: Request completed: Method name=tunnelOpen Status=BadGateway Elapsed=971.5773ms.; Context: activityId=0b5ea324-25fb-4ff8-a82a-a4255896e7e4 <6> 2022-01-15 16:57:43.499 +00:00 Application: Request started: Method name=tunnelOpen.; Context: activityId=f7b86f3f-99cb-4c5e-9d3f-732a83d9287f

  10. Also, by enabling logs in the service, I am receiving logs in my storage account. I received the following error:
    { "time": "2022-01-15T16:56:46.5831670Z", "resourceId": "/SUBSCRIPTIONS/xxxxxxxxxxx/RESOURCEGROUPS/RG-IOTHW-LABS/PROVIDERS/MICROSOFT.MEDIA/VIDEOANALYZERS/ANLYDEMO", "region": "westeurope", "category": "Diagnostics", "operationName": "Microsoft.VideoAnalyzer.Diagnostics.RtspIngestionSessionEnded", "operationVersion": "1.0", "level": "Error", "uri": "rtsp://localhost:554/cam/realmonitor?channel=1&subtype=0", "traceContext": { "traceId": "76fc8168-0bc0-4f29-bafd-d196fdbc52a8"}, "properties": { "subject": "/livePipelines/rtsp15014/sources/rtspSource", "body": { "error": { "code": "RtspIngestionSourceDisconnected", "message": "The RTSP source was disconnected" } }}}

  11. Meanwhile, this stream works well in my VLC media player.
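
For reference, the RTSP source node for this scenario can be sketched as below. This is pieced together from the cloud pipeline samples; the tunnel type name and its fields are recalled from those samples rather than verified against the current reference, and all bracketed values are placeholders:

    {
        "@type": "#Microsoft.VideoAnalyzer.RtspSource",
        "name": "rtspSource",
        "transport": "tcp",
        "endpoint": {
            "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
            "url": "rtsp://localhost:554/cam/realmonitor?channel=1&subtype=0",
            "credentials": {
                "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
                "username": "<camera-username>",
                "password": "<camera-password>"
            },
            "tunnel": {
                "@type": "#Microsoft.VideoAnalyzer.SecureIotDeviceRemoteTunnel",
                "deviceId": "<iot-device-id>",
                "iotHubName": "<iot-hub-name>"
            }
        }
    }

The 401-where-101-expected error in step 9 suggests the tunnel's websocket upgrade is being rejected, so the device ID and the connection string given to remoteDeviceAdapterSet are the first things to cross-check.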

Video Analyzer: Motion detection within Zones.

Hi,

I am trying out Video Analyzer on IoT Edge.
Is it possible to specify a zone for motion detection?

I.e. construct a pipeline which records and creates events on motion detection within the specified zone.

Or is it planned as a feature in the future?
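
For context, the motion detection node in the public samples is configured as in the hedged sketch below. It exposes a sensitivity setting; no zone or polygon parameter appears in those samples, so zone-restricted detection would presumably need a custom extension module downstream that filters detections by region:

    {
        "@type": "#Microsoft.VideoAnalyzer.MotionDetectionProcessor",
        "name": "motionDetection",
        "sensitivity": "medium",
        "inputs": [ { "nodeName": "rtspSource" } ]
    }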

No ARMV7 Manifest for Video Analyzer

Hi,
When I try to deploy the AVA module to my IoT Edge device (ARMv7, Ubuntu 20.04), I get this error:

Could not create module avaedge
caused by: Could not pull image mcr.microsoft.com/media/video-analyzer:latest
caused by: no matching manifest for linux/arm/v7 in the manifest list entries

Is ARMv7 supported?

Continuous video recording to same VideoSink after reinstallation of edge/VA-module

Hi,

I know there's a disclaimer, for security reasons, regarding the video sink:

Due to security reasons, a given Video Analyzer edge module instance can only record content to new video entries, or previously recorded video entries by the same module. Any attempt to record content to an existing video which has not been created by the same edge module instance will result in failure to record.

This is the case for us: after we reinstalled the Video Analyzer edge module, it creates a long-lived pipeline that we had expected would record to the same video (video name/resource) as the uninstalled Video Analyzer edge module instance.

Is it somehow possible to enable a VA module to record to a video name/resource not created by itself?

Tunneling in IoT Edge Device - Record Camera Behind Firewall

Hi there,

I wanted to reach out with the following questions regarding the new release of Azure Video Analyzer. I want to implement cloud-based video recording behind a firewall.

  1. How do we enable tunneling in Azure Video Analyzer modules running on edge devices?
  2. Do we have to implement the IoT PnP interface when working with IoT Edge modules?
  3. Does IoT Plug and Play have any role when using the record-camera-behind-firewall topology?
  4. When trying to set up the cloud-record-camera-behind-firewall topology, the existing AVA module doesn't recognize the tunneling parameters. What kind of installation does it need?

It would be great if you could give me explanations to these questions. Thank you

Regards
Sheraz Faisal
[email protected]

"This video is not ready for playback right now" after video deletion and recreation

We are doing some RTSP recording on the edge with Azure Video Analyzer.

Today we noticed that the AVA edge module threw continuous errors of type "Microsoft.VideoAnalyzer.Diagnostics.VideoFormatError".
Because we couldn't identify any cause, we deleted the entire video reference via the Azure Portal.
After this we recreated the pipelines and restarted recording. The edge module error is now resolved and it seems to be recording just fine; files are being added to storage.

However, when trying to watch the video in the player (also via the Azure Portal), we always get the message "This video is not ready for playback right now. Check whether your live pipeline is in an active state and is connected to the RTSP camera", although the actual status of the video is "Recording" and it has been recording for several minutes.
