
deepstream_360_d_smart_parking_application's People

Contributors

aiyerganapathy, nvbrupde, sujitbiswas


deepstream_360_d_smart_parking_application's Issues

scala.reflect.internal.MissingRequirementError

I was running the spark job to generate playback data and I got this:

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.inject.internal.cglib.core.$ReflectUtils$1 (file:/usr/share/maven/lib/guice.jar) to method java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain)
WARNING: Please consider reporting this to the maintainers of com.google.inject.internal.cglib.core.$ReflectUtils$1
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for com.nvidia.ds:stream-360:jar:1.0
[WARNING] 'build.plugins.plugin.version' for org.scala-tools:maven-scala-plugin is missing. @ line 125, column 12
[WARNING] 
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING] 
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING] 
[INFO] 
[INFO] ----------------------< com.nvidia.ds:stream-360 >----------------------
[INFO] Building stream-360 1.0
[INFO] --------------------------------[ jar ]---------------------------------
[WARNING] The POM for harsha2010:magellan:jar:1.0.5-s_2.11 is invalid, transitive dependencies (if any) will not be available, enable debug logging for more details
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ stream-360 ---
[INFO] Deleting /home/hasher/GIT/deepstream_360_d_smart_parking_application/stream/target
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ stream-360 ---
[WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] Copying 6 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ stream-360 ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- maven-scala-plugin:2.15.2:compile (default) @ stream-360 ---
[INFO] Checking for multiple versions of scala
[WARNING] Invalid POM for harsha2010:magellan:jar:1.0.5-s_2.11, transitive dependencies (if any) will not be available, enable debug logging for more details
[WARNING]  Expected all dependencies to require Scala version: 2.11.8
[WARNING]  com.twitter:chill_2.11:0.8.0 requires scala version: 2.11.7
[WARNING] Multiple versions of scala libraries detected!
[INFO] includes = [**/*.java,**/*.scala,]
[INFO] excludes = []
[INFO] /home/hasher/GIT/deepstream_360_d_smart_parking_application/stream/src/main/scala:-1: info: compiling
[INFO] Compiling 14 source files to /home/hasher/GIT/deepstream_360_d_smart_parking_application/stream/target/classes at 1558086975704
[ERROR] error: scala.reflect.internal.MissingRequirementError: object java.lang.Object in compiler mirror not found.
[ERROR] 	at scala.reflect.internal.MissingRequirementError$.signal(MissingRequirementError.scala:17)
[ERROR] 	at scala.reflect.internal.MissingRequirementError$.notFound(MissingRequirementError.scala:18)
[INFO] 	at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:53)
[INFO] 	at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:45)
[INFO] 	at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:45)
[INFO] 	at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:66)
[INFO] 	at scala.reflect.internal.Mirrors$RootsBase.getClassByName(Mirrors.scala:102)
[INFO] 	at scala.reflect.internal.Mirrors$RootsBase.getRequiredClass(Mirrors.scala:105)
[INFO] 	at scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass$lzycompute(Definitions.scala:257)
[INFO] 	at scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass(Definitions.scala:257)
[INFO] 	at scala.reflect.internal.Definitions$DefinitionsClass.init(Definitions.scala:1394)
[INFO] 	at scala.tools.nsc.Global$Run.<init>(Global.scala:1215)
[INFO] 	at scala.tools.nsc.Driver.doCompile(Driver.scala:31)
[INFO] 	at scala.tools.nsc.MainClass.doCompile(Main.scala:23)
[INFO] 	at scala.tools.nsc.Driver.process(Driver.scala:51)
[INFO] 	at scala.tools.nsc.Driver.main(Driver.scala:64)
[INFO] 	at scala.tools.nsc.Main.main(Main.scala)
[INFO] 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[INFO] 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[INFO] 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[INFO] 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
[INFO] 	at org_scala_tools_maven_executions.MainHelper.runMain(MainHelper.java:161)
[INFO] 	at org_scala_tools_maven_executions.MainWithArgsInFile.main(MainWithArgsInFile.java:26)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  9.890 s
[INFO] Finished at: 2019-05-17T15:26:17+05:30
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scala-tools:maven-scala-plugin:2.15.2:compile (default) on project stream-360: wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1(Exit value: 1) -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
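The `object java.lang.Object in compiler mirror not found` error is the classic symptom of compiling Scala 2.11 code with a JDK 9-or-newer toolchain (the illegal-reflective-access warnings at the top of the log also point to a modern JDK). A sketch of a workaround, assuming a JDK 8 package is installed at the usual Debian/Ubuntu path:

```shell
# Scala 2.11's compiler cannot locate the platform classes on JDK 9+,
# so point Maven at a JDK 8 installation before building.
# The path below is an assumption; adjust it to your JDK 8 location.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"

# Helper to read the major version out of a version string, so you can
# confirm the switch worked ("1.8.0_292" -> 8, "11.0.2" -> 11).
java_major() {
  case "$1" in
    1.*) echo "$1" | cut -d. -f2 ;;
    *)   echo "$1" | cut -d. -f1 ;;
  esac
}

java_major "1.8.0_292"   # -> 8
# then rebuild: mvn clean package
```

Compare against `java -version` on your machine; if it reports 9 or higher, the `stream-360` build will fail exactly as shown above.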

VMS for managing hundreds of cameras

Thanks for this great repository.
Do I need a VMS to manage hundreds of cameras, as you describe, in a DeepStream application?
In the setup you describe, how are the cameras precisely synchronized so that their timestamps stay in sync, the video is encoded in the correct format, everything is configured, etc.?
I thought only a VMS could provide that functionality.
Thanks for your clarifications :)

ERROR: <create_render_bin:90>: Failed to create 'sink_sub_bin_sink1'

Hi, I pulled the container from https://ngc.nvidia.com/catalog/containers/nvidia:deepstream_360d
and used the following command:
nvidia-docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /root nvcr.io/nvidia/deepstream-360d:4.0.1-19.11

I got this error:
(gst-plugin-scanner:15): GStreamer-WARNING **: 06:51:21.855: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_eglglessink.so': libcuda.so.1: cannot open shared object file: No such file or directory
** ERROR: main:601: Specify config file with -c option
Quitting
App run failed
root@e35d64642179:~/DeepStream360d_Release/samples# deepstream-360d-app -c configs/deepstream-360d-app/source10_gpu0.txt

(deepstream-360d-app:16): GLib-GObject-CRITICAL **: 06:51:27.571: g_object_set: assertion 'G_IS_OBJECT (object)' failed
** ERROR: <create_render_bin:90>: Failed to create 'sink_sub_bin_sink1'
** ERROR: <create_render_bin:168>: create_render_bin failed
** ERROR: <create_sink_bin:670>: create_sink_bin failed
** ERROR: <create_processing_instance:709>: create_processing_instance failed
** ERROR: <create_pipeline:1121>: create_pipeline failed
** ERROR: main:631: Failed to create pipeline
Quitting
App run failed

please help, thanks
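The `libcuda.so.1: cannot open shared object file` warning means the container was started without GPU access, so the CUDA-dependent elements (including the EGL render sink) fail to load and the sink bin cannot be created. A sketch of a corrected invocation, assuming a recent Docker with the NVIDIA Container Toolkit installed (image tag taken from the issue):

```shell
# Build the docker run command with explicit GPU access (--gpus all);
# on older setups, `nvidia-docker run` or `--runtime=nvidia` plays the same role.
build_run_cmd() {
  img="$1"
  printf 'docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /root %s\n' "$img"
}
build_run_cmd "nvcr.io/nvidia/deepstream-360d:4.0.1-19.11"
# Also run `xhost +local:` on the host so the container may open the X display.
```

If the container still cannot render, switching the config's render sink to `type=1` (FakeSink) isolates whether the remaining failure is display-related.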

How to run without tracker?

How do I run the DeepStream application without the tracker? I only need parking-spot occupancy; tracking is not mandatory, so what configuration needs to be set up for this?
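If the goal is occupancy only, a first thing to try is disabling the tracker group in the app config. This is a sketch based on the standard deepstream-app config-file format (verify the group name against your config file, e.g. source10_gpu0.txt):

```ini
[tracker]
enable=0
```

Note that if the downstream analytics keys events off tracked object ids, the occupancy logic may need adjusting as well.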

Integration with other inference platforms

Hi,

Can we integrate this parking application with another inference platform such as NVIDIA Triton Inference Server? I want to integrate the inference/detection/tracking module; how can I achieve this?

Thanks,

ERROR: Service 'kafka' failed to build

Trying to run this application on AWS. Details and reproducible steps below.

Instance: NVIDIA Volta Deep Learning AMI 18.09.1 - ami-09affadd70a78d6cb

# clone repository
git clone https://github.com/NVIDIA-AI-IOT/deepstream_360_d_smart_parking_application.git
cd deepstream_360_d_smart_parking_application

#  set exports
export IP_ADDRESS=<IP_ADDRESS>
export GOOGLE_MAP_API_KEY=<GOOGLE_MAP_API_KEY>

#  install docker-compose
sudo curl -L https://github.com/docker/compose/releases/download/1.21.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

# setup analytics server
cd ./analytics_server_docker
sudo -E docker-compose up -d

This is where I get an error (shown below). It looks like Kafka cannot be pulled from any of the open repositories. Any thoughts?

ubuntu@<IP ADDRESS>:~/deepstream_360_d_smart_parking_application/analytics_server_docker$ sudo -E docker-compose up -d
Building kafka
Step 1/11 : FROM openjdk:8u171-jre-alpine
 ---> 0fe3f0d1ee48
Step 2/11 : ARG kafka_version=1.1.0
 ---> Using cache
 ---> f78b62f19d0e
Step 3/11 : ARG scala_version=2.12
 ---> Using cache
 ---> 0e203acfbd3a
Step 4/11 : ARG glibc_version=2.27-r0
 ---> Using cache
 ---> e879b981f445
Step 5/11 : MAINTAINER wurstmeister
 ---> Using cache
 ---> 64f1e9c8de72
Step 6/11 : ENV KAFKA_VERSION=$kafka_version     SCALA_VERSION=$scala_version     KAFKA_HOME=/opt/kafka     GLIBC_VERSION=$glibc_version
 ---> Using cache
 ---> 674fa76840fb
Step 7/11 : ENV PATH=${PATH}:${KAFKA_HOME}/bin
 ---> Using cache
 ---> 79f87916e194
Step 8/11 : COPY download-kafka.sh start-kafka.sh broker-list.sh create-topics.sh versions.sh /tmp/
 ---> Using cache
 ---> db1c177e89f0
Step 9/11 : RUN apk add --no-cache bash curl jq docker  && mkdir /opt  && chmod a+x /tmp/*.sh  && mv /tmp/start-kafka.sh /tmp/broker-list.sh /tmp/create-topics.sh /tmp/versions.sh /usr/bin  && sync && /tmp/download-kafka.sh  && tar xfz /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz -C /opt  && rm /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz  && ln -s /opt/kafka_${SCALA_VERSION}-${KAFKA_VERSION} /opt/kafka  && rm /tmp/*  && wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/${GLIBC_VERSION}/glibc-${GLIBC_VERSION}.apk  && apk add --no-cache --allow-untrusted glibc-${GLIBC_VERSION}.apk  && rm glibc-${GLIBC_VERSION}.apk
 ---> Running in c2d65bca3b14
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
(1/19) Installing ncurses-terminfo-base (6.1_p20180818-r1)
(2/19) Installing ncurses-terminfo (6.1_p20180818-r1)
(3/19) Installing ncurses-libs (6.1_p20180818-r1)
(4/19) Installing readline (7.0.003-r0)
(5/19) Installing bash (4.4.19-r1)
Executing bash-4.4.19-r1.post-install
(6/19) Installing nghttp2-libs (1.32.0-r0)
(7/19) Installing libssh2 (1.8.0-r3)
(8/19) Installing libcurl (7.61.1-r1)
(9/19) Installing curl (7.61.1-r1)
(10/19) Installing libmnl (1.0.4-r0)
(11/19) Installing jansson (2.11-r0)
(12/19) Installing libnftnl-libs (1.1.1-r0)
(13/19) Installing iptables (1.6.2-r0)
(14/19) Installing device-mapper-libs (2.02.178-r0)
(15/19) Installing libltdl (2.4.6-r5)
(16/19) Installing libseccomp (2.3.3-r1)
(17/19) Installing docker (18.06.1-r0)
Executing docker-18.06.1-r0.pre-install
(18/19) Installing oniguruma (6.8.2-r0)
(19/19) Installing jq (1.6_rc1-r1)
Executing busybox-1.28.4-r1.trigger
OK: 269 MiB in 71 packages
Downloading kafka 1.1
Downloading Kafka from http://mirror.reverse.net/pub/apache/kafka/1.1.0/kafka_2.12-1.1.0.tgz
Connecting to mirror.reverse.net (208.100.14.200:80)
wget: server returned error: HTTP/1.1 404 Not Found
ERROR: Service 'kafka' failed to build: The command '/bin/sh -c apk add --no-cache bash curl jq docker  && mkdir /opt  && chmod a+x /tmp/*.sh  && mv /tmp/start-kafka.sh /tmp/broker-list.sh /tmp/create-topics.sh /tmp/versions.sh /usr/bin  && sync && /tmp/download-kafka.sh  && tar xfz /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz -C /opt  && rm /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz  && ln -s /opt/kafka_${SCALA_VERSION}-${KAFKA_VERSION} /opt/kafka  && rm /tmp/*  && wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/${GLIBC_VERSION}/glibc-${GLIBC_VERSION}.apk  && apk add --no-cache --allow-untrusted glibc-${GLIBC_VERSION}.apk  && rm glibc-${GLIBC_VERSION}.apk' returned a non-zero code: 1

Edit: formatted to make it easier to read.
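The 404 happens because Apache mirrors only host current releases; old Kafka tarballs move to the permanent archive. A sketch of the fix, pointing the download at `archive.apache.org` instead (patch `download-kafka.sh` in the wurstmeister image accordingly):

```shell
# Old releases such as 1.1.0 live only on the Apache archive,
# not on the rotating mirrors that download-kafka.sh resolves.
kafka_archive_url() {
  scala="$1"; kafka="$2"
  echo "https://archive.apache.org/dist/kafka/${kafka}/kafka_${scala}-${kafka}.tgz"
}
kafka_archive_url 2.12 1.1.0
# -> https://archive.apache.org/dist/kafka/1.1.0/kafka_2.12-1.1.0.tgz
# e.g.: curl -fSL "$(kafka_archive_url 2.12 1.1.0)" -o /tmp/kafka_2.12-1.1.0.tgz
```

Alternatively, bumping the base image to a newer wurstmeister/kafka tag that already downloads from the archive sidesteps the patch.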

libtrtserver.so: cannot open shared object file: No such file or directory

I ran the Docker container on my PC, but I got this error:

(gst-plugin-scanner:554): GStreamer-WARNING **: 02:31:21.877: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtrtserver.so: cannot open shared object file: No such file or directory
** ERROR: main:650: Failed to set pipeline to PAUSED
Quitting
ERROR from sink_sub_bin_sink2: Could not configure supporting library.
Debug info: gstnvmsgbroker.c(388): legacy_gst_nvmsgbroker_start (): /GstPipeline:pipeline/GstBin:sink_sub_bin2/GstNvMsgBroker:sink_sub_bin_sink2:
unable to connect to broker library
ERROR from sink_sub_bin_sink2: GStreamer error: state change failed and some element failed to post a proper error message with the reason for the failure.
Debug info: gstbasesink.c(5265): gst_base_sink_change_state (): /GstPipeline:pipeline/GstBin:sink_sub_bin2/GstNvMsgBroker:sink_sub_bin_sink2:
Failed to start
App run failed

If I set sink1 to fakesink (type=1), the app runs OK:
[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=1

Did I miss something?
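The `libtrtserver.so` warning comes from the unrelated Triton inference plugin and can usually be ignored; the fatal part is the msgbroker sink failing to connect. When `type=6` (MsgConvBroker) is enabled, the sink needs a reachable broker plus a protocol adapter library. A sketch of the relevant keys, based on the standard deepstream-app sink group (the adapter path and broker host are assumptions; adjust them to your install):

```ini
[sink2]
enable=1
type=6
# Kafka protocol adapter shipped with DeepStream (path is an assumption):
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
# host;port of a broker that is actually running and reachable:
msg-broker-conn-str=localhost;9092
topic=metromind-start
```

With fakesink the pipeline runs because the broker connection is never attempted, which is consistent with the broker (or its connection string) being the missing piece.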

How to run with custom detection/perception module?

How can I use only the analytics module together with our own detection/perception module? Also, what is the exact format of the Kafka message that a custom module should send in place of the NVIDIA perception module?
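The analytics module consumes JSON events from the `metromind-start` topic, so a custom perception module only needs to publish messages in that envelope. A rough sketch of such an event (field names follow the DeepStream 360d metadata schema as I understand it; treat the exact structure as an assumption and check the schema files in this repo):

```shell
# Hand-build one illustrative event; all values below are made up.
MSG='{"messageid":"6e1e7a32-0001-4b8a-9d3c-000000000001","mdsversion":"1.0","@timestamp":"2019-05-17T10:30:00.000Z","object":{"id":"1","vehicle":{"type":"sedan","make":"Toyota","model":"Camry","color":"black","license":"ABC123"}},"event":{"id":"e1","type":"entry"}}'
echo "$MSG"
# Publish it with the stock Kafka CLI (requires a running broker):
# echo "$MSG" | kafka-console-producer.sh --broker-list localhost:9092 --topic metromind-start
```

As long as the custom module emits events that validate against the repo's schema on that topic, the Spark streaming job should pick them up without the NVIDIA perception container.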

unable to run analytics_server_docker

While running the start.sh script for analytics_server_docker, I am facing the errors below:

ERROR: Service 'kafka' failed to build: The command '/bin/sh -c apk add --no-cache bash curl jq docker && mkdir /opt && chmod a+x /tmp/*.sh && mv /tmp/start-kafka.sh /tmp/broker-list.sh /tmp/create-topics.sh /tmp/versions.sh /usr/bin && sync && /tmp/download-kafka.sh && tar xfz /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz -C /opt && rm /tmp/kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz && ln -s /opt/kafka_${SCALA_VERSION}-${KAFKA_VERSION} /opt/kafka && rm /tmp/* && wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/${GLIBC_VERSION}/glibc-${GLIBC_VERSION}.apk && apk add --no-cache --allow-untrusted glibc-${GLIBC_VERSION}.apk && rm glibc-${GLIBC_VERSION}.apk' returned a non-zero code: 4
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: "./bin/spark-submit": stat ./bin/spark-submit: no such file or directory": unknown

No Source Code For Perception Server

I saw that NVIDIA noted:
This is a “deployment” container, meaning that any compiler toolchains, header files, build libraries, etc. that are required to build samples and other software are not included in the container image.

But for a playground project, the key is to understand how DeepStream works, with source-code examples.

Could you please share the code?

Question About Streaming

Hi everyone,

I hope all is fine,

Thank you for sharing this repository; it's useful. I haven't gone deep into it yet. Can you explain, at a big-picture level, how you deal with 150 cameras, i.e., how the frames are read from these cameras (the pipeline you used, the technologies, etc.)?

Error with Perception Docker

I have followed the steps as suggested.

When I run the following command:

deepstream-360d-app -c DeepStream360d_Release/samples/configs/deepstream-360d-app/source10_gpu0.txt

I get the following error:

bash: deepstream-360d-app: command not found

I believe that the folder path

    DeepStream360d_Release/samples/configs/deepstream-360d-app/source10_gpu0.txt
is wrong, as I can't find it on my system.

Am I supposed to pull it from here?
nvcr.io/nvidia/deepstream_360d:4.0.1-19.11

Running application on Nvidia Jetson TX2 / Xavier

I've tried running this application on Nvidia Jetson TX2 and Nvidia Xavier.
I've faced difficulty running the run.sh file in the perception server, as 'nvidia-docker' is not supported on the arm64 architecture. Is there a workaround for that?

Also, will DeepStream 3.0 be supported on these two devices?

Image Calibration

Hi, I'm having some issues with calibrating my snapshots to generate the perception spreadsheet. I'm following this guide: https://devblogs.nvidia.com/calibration-translate-video-data/ and the one in the DeepStream SDK guide.

  • "The first step in calibration is to get snapshot images from all cameras."
    How do I get these dewarped snapshots and what are the resolutions needed?
  • And with QGIS, there's this line "Make a note of the longitude and latitude of the origin (in this case, the center of the building)."
    Does this mean that I have to get the coordinate of the center of the referenced TIF image?
  • On the nvaisle/nvspot.csv files, what do ROI and the H values represent?
  • How do I find the gx,gy coordinates? I tried just importing the latitude and longitude, but they don't work.

Thank you.

CPU support

Hi,

Just want to know: can we run this whole application on CPU-only servers? Please specify which modules we can use and which require a GPU.

Thanks,
Muhammad Ajmal Siddiqui

Analytics Docker is not delivering data from test_app_4

Hi,

I ran the DeepStream test_app_4 on the machine along with the DeepStream analytics server.

I am not getting the metadata (car make/model and LPR) from deepstream-test4; instead, I am getting the output below. Is it working?

19/01/11 07:46:18 INFO streaming.StreamExecution: Streaming query made progress: {
  "id" : "b20155a1-ba18-48e1-962c-e555f9092361",
  "runId" : "d330ab4b-adf9-4719-be93-ef52f9371dc2",
  "name" : "trafficFlowRate",
  "timestamp" : "2019-01-11T07:46:18.150Z",
  "numInputRows" : 0,
  "inputRowsPerSecond" : 0.0,
  "processedRowsPerSecond" : 0.0,
  "durationMs" : {
    "getOffset" : 1,
    "triggerExecution" : 1
  },
  "eventTime" : {
    "watermark" : "1970-01-01T00:00:00.000Z"
  },
  "stateOperators" : [ {
    "numRowsTotal" : 0,
    "numRowsUpdated" : 0
  } ],
  "sources" : [ {
    "description" : "KafkaSource[Subscribe[metromind-start]]",
    "startOffset" : {
      "metromind-start" : {
        "2" : 0,
        "5" : 0,
        "4" : 0,
        "7" : 0,
        "1" : 0,
        "3" : 0,
        "6" : 0,
        "0" : 0
      }
    },
    "endOffset" : {
      "metromind-start" : {
        "2" : 0,
        "5" : 0,
        "4" : 0,
        "7" : 0,
        "1" : 0,
        "3" : 0,
        "6" : 0,
        "0" : 0
      }
    },
    "numInputRows" : 0,
    "inputRowsPerSecond" : 0.0,
    "processedRowsPerSecond" : 0.0
  } ],
  "sink" : {
    "description" : "org.apache.spark.sql.execution.streaming.ForeachSink@720e630c"
  }
}

What all has to be changed to get it up and running?

cassandra: Name or service not known

Hi,

Running - ./bin/spark-submit --class com.nvidia.ds.stream.StreamProcessor --master spark://master:7077 --executor-memory 8G --total-executor-cores 4 /tmp/data/stream-360-1.0-jar-with-dependencies.jar

gives me the following error:

Exception in thread "main" java.net.UnknownHostException: cassandra: Name or service not known
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at java.net.InetAddress.getByName(InetAddress.java:1076)
at com.nvidia.ds.stream.StreamProcessor$$anonfun$1.apply(StreamProcessor.scala:70)
at com.nvidia.ds.stream.StreamProcessor$$anonfun$1.apply(StreamProcessor.scala:70)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
at com.nvidia.ds.stream.StreamProcessor$.main(StreamProcessor.scala:70)
at com.nvidia.ds.stream.StreamProcessor.main(StreamProcessor.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

When I checked whether the container was running, it was not shown in the list. When I tried to run the container manually, it gave me an error that the cassandra service is not available.

Is there a way to fix this?
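`UnknownHostException: cassandra` usually means spark-submit was run outside the compose network, where the service name `cassandra` does not resolve. A sketch of running it from inside the spark-master container instead (container name taken from this repo's compose setup):

```shell
# Inside spark-master, compose's embedded DNS resolves "cassandra",
# "kafka", etc. to the right containers. Built as a string here so the
# full corrected invocation is visible in one place:
CMD='docker exec -it spark-master ./bin/spark-submit \
  --class com.nvidia.ds.stream.StreamProcessor --master spark://master:7077 \
  --executor-memory 8G --total-executor-cores 4 \
  /tmp/data/stream-360-1.0-jar-with-dependencies.jar'
echo "$CMD"
# Also confirm the container exists and is up: docker ps | grep cassandra
```

If `docker ps` shows no cassandra container at all, the compose stack likely failed earlier (e.g. the kafka build error above) and needs `docker-compose up` to succeed first.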

Error while starting a spark streaming job (when deploying the analytical server)

Hi,

I am following the setup instructions and have hit an issue. I run the following command:

sudo docker exec -it spark-master /bin/bash
./bin/spark-submit --class com.nvidia.ds.stream.StreamProcessor --master spark://master:7077 --executor-memory 8G --total-executor-cores 4 /tmp/data/stream-360-1.0-jar-with-dependencies.jar

Then, in the course of its execution, I am getting errors of the following nature:

19/02/13 15:35:55 WARN clients.NetworkClient: Error while fetching metadata with correlation id XX: {metromind-start=LEADER_NOT_AVAILABLE}

Is anybody familiar with this?

Thanks!

Why logstash?

Hello,
This may not be an issue, but I don't see the need for Logstash; wouldn't Spark Streaming be enough to ingest the data and send it to Elasticsearch for indexing?
I have looked everywhere on the internet for an answer to this question but unfortunately, I found none. I really hope someone can help with this.

ui fails to build

After running start.sh:
Building ui
Step 1/17 : FROM node:8 as ui-builder
ERROR: Service 'ui' failed to build: Error parsing reference: "node:8 as ui-builder" is not a valid repository/tag: invalid reference format
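`"node:8 as ui-builder" is not a valid repository/tag` is what old Docker engines print for a multi-stage `FROM ... AS ...` line: multi-stage builds were introduced in Docker 17.05, and older parsers treat `as ui-builder` as part of the image reference. The fix is upgrading the Docker engine; a sketch of a version check:

```shell
# Returns success when a "MAJOR.MINOR..." Docker version supports
# multi-stage builds (>= 17.05).
supports_multistage() {
  major=$(echo "$1" | cut -d. -f1)
  minor=$(echo "$1" | cut -d. -f2)
  [ "$major" -gt 17 ] || { [ "$major" -eq 17 ] && [ "$minor" -ge 5 ]; }
}

supports_multistage "17.03.2" && echo "ok" || echo "too old: upgrade docker"
# -> too old: upgrade docker
# compare against your engine: docker version --format '{{.Server.Version}}'
```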

Support for cameras that are NOT 360d

Hi,

Just wanted to know: can we use the same application with cameras that are not 360d? Are there any limitations, or does it require changes at the code level?

Thanks,

Error while fetching metadata with correlation id 1355 : {metromind-start=LEADER_NOT_AVAILABLE}

After successfully running the analytics server start script, I started facing the errors below continuously.

20/03/07 11:13:29 WARN clients.NetworkClient: Error while fetching metadata with correlation id 1355 : {metromind-start=LEADER_NOT_AVAILABLE}
20/03/07 11:13:29 WARN clients.NetworkClient: Error while fetching metadata with correlation id 1354 : {metromind-start=LEADER_NOT_AVAILABLE}
20/03/07 11:13:29 WARN clients.NetworkClient: Error while fetching metadata with correlation id 1354 : {metromind-start=LEADER_NOT_AVAILABLE}
20/03/07 11:13:29 WARN clients.NetworkClient: Error while fetching metadata with correlation id 1356 : {metromind-start=LEADER_NOT_AVAILABLE}
20/03/07 11:13:29 WARN clients.NetworkClient: Error while fetching metadata with correlation id 1356 : {metromind-start=LEADER_NOT_AVAILABLE}
20/03/07 11:13:29 WARN clients.NetworkClient: Error while fetching metadata with correlation id 1355 : {metromind-start=LEADER_NOT_AVAILABLE}
20/03/07 11:13:29 WARN clients.NetworkClient: Error while fetching metadata with correlation id 1355 : {metromind-start=LEADER_NOT_AVAILABLE}
20/03/07 11:13:29 WARN clients.NetworkClient: Error while fetching metadata with correlation id 1357 : {metromind-start=LEADER_NOT_AVAILABLE}
20/03/07 11:13:29 WARN clients.NetworkClient: Error while fetching metadata with correlation id 1357 : {metromind-start=LEADER_NOT_AVAILABLE}
20/03/07 11:13:29 WARN clients.NetworkClient: Error while fetching metadata with correlation id 1356 : {metromind-start=LEADER_NOT_AVAILABLE}
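`LEADER_NOT_AVAILABLE` typically means the topic does not yet exist (the consumer is racing Kafka's auto-creation on first use) or the broker is advertising a hostname the client cannot reach. A sketch of two checks, assuming this repo's compose service names and the wurstmeister Kafka image:

```shell
# Pre-create the topic so the consumer does not race auto-creation
# (partition/replication counts here are assumptions; match your setup):
TOPIC=metromind-start
CREATE_CMD="docker exec kafka kafka-topics.sh --zookeeper zookeeper:2181 \
  --create --topic $TOPIC --partitions 8 --replication-factor 1"
echo "$CREATE_CMD"
# And verify the advertised listener is the host's reachable IP
# (set via the IP_ADDRESS export before docker-compose up in this repo):
# docker exec kafka env | grep ADVERTISED
```

If the warnings are only transient right after startup, they often stop on their own once the topic leader election finishes.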

Error with logstash

This is for Analytics Server Docker
I'm getting the following error when running the start.sh script.
I believe it's because logstash requires spark-master but spark-master is not built.

The error is as follows:

standard_init_linux.go:211: exec user process caused "exec format error" ERROR: Service 'logstash' failed to build: The command '/bin/sh -c logstash-plugin install logstash-filter-json logstash-output-kafka' returned a non-zero code: 1 Error: No such container: spark-master
