
daf's Introduction

Data & Analytics Framework (DAF)

Welcome to the project homepage.

Infrastructure setup

To see how the DAF is set up, please take a look at the setup documentation.

Development Guidelines

In general, we prefer to keep all the projects at version 1.0.0-SNAPSHOT. This means you may need to publish some local projects to your local Nexus Ivy repository. This part is experimental and may change in the future.

Submodules

We use submodules for Kong and other projects. To initialize the repository's submodules, run:

$ git submodule init
$ git submodule update

Internal Team

Each time you start working on the DAF, the convention is:

  • For a new feature, create a branch with a meaningful name, such as feature_some_meaningful_name.
  • For a bug fix, create a branch named bug_number_of_the_bug.

Whenever the work on the branch is finished, you need to:

  1. squash all your commits into one commit;
  2. create a pull request against master and assign it to another member of the team.

If you are not familiar with branching, squashing and merging, you can use git-extras as a helper. Git extras provides commands such as git feature to create a feature branch and git squash to squash your commits.

The aim of this workflow is to share your work.

Releases will be tagged, and each release will also have its own branch.

External Team

Please fork the project and open a pull request when you are done. Pull requests are very welcome! :)

Dev Doc

  1. Metrics setup
  2. Java configuration setup

daf's People

Contributors

acherici, aijanai, bianchi74, bvhme, dgreco, fabiana001, fabiofumarola, giux78, gruggiero, gvarisco, lilloraffa, luca-pic, luzzu-lab, mariaclaudia, panelladavide, ruphy, seralf


daf's Issues

"Change password" feature not available

The change-password feature is not available in the front end. It is not possible to change the password associated with one's own user; the operator has to be contacted manually to change it.
See issue #239

[storage_manager] Connection exception during test

Connection exception with HBase, while openTsdbContext tries to load existing data in HBase (openTSDBContext.loadDataFrame(metric, tags, interval), PhysicalDatasetController line 228):

org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Thu Nov 09 15:06:09 CET 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68252: row 'tsdb-uid,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=10.137.1.15,56135,1510236117824, seqNum=0

Password policy improvements

NIST’s new guidelines say you need a minimum of 8 characters. (That’s not a maximum minimum – you can increase the minimum password length for more sensitive accounts.) Better yet, NIST says you should allow a maximum length of at least 64. Also, applications must allow all printable ASCII characters, including spaces, and should accept all UNICODE characters, too, including emoji!

This issue therefore tracks the improvements to be made to the DAF's password policies.
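As a sketch of the NIST-style rule described above (the constants and the function name are illustrative, not an existing DAF API):

```python
# Minimal sketch of a NIST-style password length check.
# MIN_LEN/MAX_LEN are assumed policy values, not the DAF's actual settings.
MIN_LEN = 8    # NIST minimum length
MAX_LEN = 64   # NIST: allow a maximum length of at least 64

def password_ok(password: str) -> bool:
    # Only length is checked: all printable ASCII and Unicode characters,
    # including spaces and emoji, are accepted as-is.
    return MIN_LEN <= len(password) <= MAX_LEN

print(password_ok("correct horse battery staple"))  # True: spaces are allowed
print(password_ok("short"))                         # False: fewer than 8 characters
```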

No error message loading a sample for a new dataset

Dataset -> Carica -> Upload sample
No error message is displayed when the sample file is uploaded, even if processing fails.
I attached a sample that apparently was fine, but in the back end Kylo didn't process it because it was not compliant.

Also add a requirement describing the kind of sample that has to be uploaded.

None.get in get-dataset in Storage Manager

When you call the following URL with a GET request, passing a valid authentication header,

http://storage-manager.default.svc.cluster.local:9000/dataset-manager/v1/dataset/daf%253A%252F%252Fdataset%252Ford%252Falessandro%252Fdefault_org%252FAGRI%252Forganizzazioni%252Fagency_infer_ale

you should get a JSON with the dataset as the result, but instead it returns a None.get error.
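For illustration, the dataset URI in that URL is percent-encoded twice; decoding it with the standard library (a sketch, not the storage manager's actual code) shows the logical URI the service has to resolve:

```python
# The dataset URI from the request above is URL-encoded twice:
# %253A -> %3A -> ':', %252F -> %2F -> '/'.
from urllib.parse import unquote

encoded = ("daf%253A%252F%252Fdataset%252Ford%252Falessandro%252Fdefault_org"
           "%252FAGRI%252Forganizzazioni%252Fagency_infer_ale")

once = unquote(encoded)   # first pass: %25 -> %
twice = unquote(once)     # second pass: the remaining %XX escapes
print(twice)
# daf://dataset/ord/alessandro/default_org/AGRI/organizzazioni/agency_infer_ale
```

If the service decodes this path incorrectly, the lookup could presumably miss and surface as the observed None.get.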

Iot event structure

Old Version

  • version: Version of this schema (1)
  • id: A globally unique identifier for this event instance. Optional
  • ts: Epoch timestamp in millis. Required.
  • event_type_id: ID indicating the type of event (integer). Required.
  • source: Deprecated event source. Optional.
  • location: Location from which the event was generated. Required.
  • host: Hostname, IP, or other device identifier from which the event was generated. Required.
  • service: Service or process from which the event was generated. Required.
  • body: Raw event content in bytes. Optional.
  • attributes: Event type-specific key/value pairs, usually extracted from the event body. Required.

New Version

  • version: Version of this schema (2)
  • id: A globally unique identifier for this event instance. Optional
  • ts: Epoch timestamp in millis. Required.
  • temporal_granularity: atom of time from a particular application’s point of view, for example: second, minute, hour, or day. Optional.
  • event_certainty: denotes an estimate of the certainty of this particular event [0,1]. 1.0 if it is certain that this event occurred in reality in the way described by the event, down to a value of 0 if it is certain that the event did not occur as described. Required.
  • event_annotation: provides a free-text explanation of what happened in this particular event. Optional.
  • event_type_id: ID indicating the type of event (string). Required.
  • source: The event source attribute is the name of the entity that originated this event. This can be either an event producer or an event processing agent. Required.
  • location: Location from which the event was generated. Required.
  • body: Raw event content in bytes. Optional.
  • metric: quantitative measurement associated to an event. Optional
  • attributes: Event type-specific key/value pairs, usually extracted from the event body. Required.
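As a sketch, a version-2 event could look like the following (the field names come from the list above; the concrete values are invented for the example):

```python
# Illustrative version-2 event; field names follow the schema above,
# while all values are made up for illustration.
event_v2 = {
    "version": 2,
    "id": "00000000-0000-0000-0000-000000000001",  # optional unique id
    "ts": 1509643200000,                        # epoch timestamp in millis (required)
    "temporal_granularity": "minute",           # optional
    "event_certainty": 1.0,                     # required, in [0, 1]
    "event_annotation": "temperature reading",  # optional free text
    "event_type_id": "temperature",             # required, a string in version 2
    "source": "sensor-gateway-01",              # required: originating entity
    "location": "45.0703,7.6869",               # required
    "body": b"raw payload",                     # optional raw bytes
    "metric": 21.5,                             # optional quantitative measurement
    "attributes": {"unit": "celsius"},          # required key/value pairs
}

REQUIRED = {"ts", "event_certainty", "event_type_id", "source", "location", "attributes"}
assert REQUIRED <= event_v2.keys()
```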

PySparkX.X has python 2 kernel without "conda standard" libraries

It looks like the environments:

  • PySpark
  • PySpark3

actually share the same Python 2.7 kernel, which lacks packages such as numpy, pandas, etc.
Even some Spark DataFrame functions rely on those, so usability is quite limited at the moment.
A Docker container update might be necessary.
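A quick way to verify the claim from inside a kernel is to probe for the packages (the package list here is just the one mentioned in the issue):

```python
# Check which of the expected scientific packages are importable in the
# current kernel.
import importlib.util

expected = ["numpy", "pandas"]
missing = [name for name in expected
           if importlib.util.find_spec(name) is None]
print("missing packages:", missing or "none")
```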

Enable report notifications after pushing a dataset

When the user uploads a copy of the dataset via sftp/scp, no information is displayed about the ETL process status.
The user should be informed with a message/report containing some sanity checks (rows failed, rows loaded, warnings).
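A hypothetical shape for such a report (the field names and the build_report helper are an assumption, not an existing DAF API):

```python
# Sketch of the sanity-check report the user could receive after an ETL run.
def build_report(rows_loaded: int, rows_failed: int, warnings: list) -> dict:
    total = rows_loaded + rows_failed
    return {
        "rows_loaded": rows_loaded,
        "rows_failed": rows_failed,
        "success_rate": rows_loaded / total if total else 1.0,
        "warnings": warnings,
        "status": "ok" if rows_failed == 0 else "partial_failure",
    }

report = build_report(990, 10, ["10 rows had a malformed date column"])
print(report["status"])  # partial_failure
```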

[iot-ingestion-manager] goes down due to OutOfMemoryError

After one day of running, the Spark streaming consumer went down with the following error:

[Stage 5:===============================================> (10 + 2) / 12]Exception in thread "Yarn application state monitor" java.lang.OutOfMemoryError: Java heap space
[Stage 5:===============================================> (10 + 2) / 12]Exception in thread "dispatcher-event-loop-2" java.lang.OutOfMemoryError: Java heap space
[Stage 5:===============================================> (10 + 2) / 12]Exception in thread "JobGenerator" java.lang.OutOfMemoryError: Java heap space
[Stage 5:===============================================> (10 + 2) / 12]Exception in thread "dispatcher-event-loop-1" java.lang.OutOfMemoryError: Java heap space
2017-09-24 16:32:01.064 [netty-rpc-env-timeout] ERROR o.a.s.s.c.YarnSchedulerBackend$YarnSchedulerEndpoint.logError(91) - Error requesting driver to remove executor 27 for reason Container marked as failed: container_1501855380192_0278_01_000028 on host: slave2.platform.daf.gov.it. Exit status: 1. Diagnostics: Exception from container-launch. Container id: container_1501855380192_0278_01_000028 Exit code: 1

Each time an RDD is cached, it needs to be unpersisted when no longer needed (i.e. rdd.unpersist()).
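A sketch of that fix in PySpark terms (it assumes a DStream `stream` and a hypothetical sink function `store_to_opentsdb`; neither is defined here):

```python
# Cache an RDD that is used by more than one action, and always release it
# in a finally block so the cached blocks do not accumulate on the heap.
def process_batch(rdd):
    rdd.cache()                            # cached because the RDD is used twice
    try:
        if rdd.count() > 0:                # first action on the cached RDD
            rdd.foreach(store_to_opentsdb) # second action (hypothetical sink)
    finally:
        rdd.unpersist()                    # release the cached blocks

# stream.foreachRDD(process_batch)        # register the handler on the DStream
```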

[IoT_ingestion_manager] Kafka consumer offsets out of range with no configured reset policy for partitions

Running the iot_ingestion_manager, I get the following exception:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, slave1.platform.daf.gov.it, executor 4): org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions: {daf-iot-events-10=46123606}
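A possible mitigation, sketched as a consumer configuration fragment: the parameter names follow the standard Kafka consumer configuration, while the broker address and group id are placeholders.

```python
# Setting an offset reset policy lets the consumer fall back to a valid
# offset instead of failing when the stored offset is out of range.
kafka_params = {
    "bootstrap.servers": "broker1:9092",
    "group.id": "iot-ingestion-manager",
    # "earliest" restarts from the oldest retained offset when the stored
    # offset is out of range (e.g. deleted by log retention);
    # "latest" would skip ahead to the newest messages instead.
    "auto.offset.reset": "earliest",
}
```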

[iot_ingestion_manager] improving logging

Add timestamp/date information.
From:

[info] - common.TransformersStream$ - Saved Offsets: (10,1854),(2,1856),(9,1855),(5,1854),(4,1854),(1,1855),(8,1856),(0,1856),(3,1856),(11,1854),(6,1856),(7,1854)

To:

2017 Sep 19 17:57:42.115 +0000 - [info] - common.TransformersStream$ - Saved Offsets: (10,1854),(2,1856),(9,1855),(5,1854),(4,1854),(1,1855),(8,1856),(0,1856),(3,1856),(11,1854),(6,1856),(7,1854)
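A sketch of the desired format using Python's logging module; the real service uses a JVM logger, so this only approximates the pattern (the level name is upper-case here, and milliseconds/timezone are omitted):

```python
# Prefix every log line with a date and time, as in the "To:" example above.
import logging

logging.basicConfig(
    format="%(asctime)s - [%(levelname)s] - %(name)s - %(message)s",
    datefmt="%Y %b %d %H:%M:%S",
    level=logging.INFO,
)
logging.getLogger("common.TransformersStream").info(
    "Saved Offsets: (10,1854),(2,1856),(9,1855)")
```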

Update logical uri path

Change the logic in the class it.gov.daf.common.utils.UriConverter in the common project.

Error while uploading a new dataset to the Dataportal

The error message "Errore durante il caricamento. Si prega di riprovare più tardi." ("Error during upload. Please try again later.") is displayed when uploading a data sample in CSV format.
The same sample works correctly when loaded directly into a Kylo feed, so the problem is presumably in the dataportal rather than in the data inference.

Create a new pipeline for ingesting data into DAF

Data & Analytics Framework (DAF in short) is an infrastructure to consume and distribute many different kinds of datasets coming from many different sources.

You can read more here: http://daf-docs.readthedocs.io/en/latest/architecture/logical_arch/logicalView.html

Find an interesting data source (on a given platform) and convert it so it's usable within the DAF. We have several mentors available with different areas of expertise, so make sure you discuss with your mentor which kind of application would be useful.

A (now archived) ingestion pipeline you can take as inspiration: https://github.com/italia/daf-replicate-ingestion

Mentor: This is a very broad task, with many possible mentors: ask on the #daf channel on https://slack.developers.italia.it once you start having a basic idea

[iot-ingestion-manager] OpenTsdb connector throws exception if a tag contains special symbols

Currently, if an exception is thrown, we lose all information about it.
This occurs, for example, when strings with special symbols are used as tags in OpenTSDB. It generates an exception that causes a Spark job failure:

java.lang.IllegalArgumentException: Invalid tag value ("Ponte Regina Margherita(TO)"): illegal character: ...

The whole system fails, yet analyzing the log of the iot-ingestion-manager service, it looks as if the system were running correctly.
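One hypothetical workaround (the sanitize_tag helper is an illustration, not existing DAF code) is to sanitize tag values before sending them to OpenTSDB, which rejects characters outside alphanumerics and "-", "_", ".", "/":

```python
# Replace any character OpenTSDB does not accept in tag values with "_".
import re

_DISALLOWED = re.compile(r"[^A-Za-z0-9\-_./]")

def sanitize_tag(value: str) -> str:
    return _DISALLOWED.sub("_", value)

print(sanitize_tag("Ponte Regina Margherita(TO)"))  # Ponte_Regina_Margherita_TO_
```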

Conversion error from json to client case classes

When you try to convert a JSON returned by the catalog manager API into a MetaCatalog case class, all fields containing the symbol "_" (e.g. logical_uri, group_own) are not parsed.

The problem is due to the fact that swagger converts fields containing "_" to camelCase (e.g. logical_uri -> logicalUri).
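The mismatch can be illustrated with a small conversion sketch (the snake_to_camel helper is just for demonstration):

```python
# Swagger turns the snake_case field names in the JSON into camelCase,
# so the generated parser looks for keys the JSON does not contain.
def snake_to_camel(name: str) -> str:
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

print(snake_to_camel("logical_uri"))  # logicalUri
print(snake_to_camel("group_own"))    # groupOwn
```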

Create a data product based on a DAF dataset

Create an application based on a DAF dataset

You can choose any technology you want, but you should be creating an interesting dashboard, visualization, or useful program which exploits the possibilities of the DAF framework. We have several mentors available with different areas of expertise, so make sure you discuss with your mentor which kind of application would be useful.

Example applications:

Mentor: This is a very broad task: ask on the #daf channel on https://slack.developers.italia.it

Note: Creativity is a big part of the deliverable here. Try to imagine new, beautiful, intelligent ways to make sense of our datasets.

Error using Jupyter Hub

Steps to reproduce the bug:

  1. create a user
  2. log in to the private data portal
  3. open the data science section
  4. JupyterHub opens with a "start my server" button
  5. click on it
  6. a 500 Internal Server Error is returned
[E 2018-01-23 10:38:19.205 JupyterHub gen:914] Exception in Future <tornado.concurrent.Future object at 0x7fbb51b08358> after timeout
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.5/site-packages/tornado/gen.py", line 910, in error_callback
        future.result()
      File "/opt/conda/lib/python3.5/site-packages/jupyterhub/handlers/base.py", line 444, in finish_user_spawn
        yield spawn_future
      File "/opt/conda/lib/python3.5/site-packages/jupyterhub/user.py", line 476, in spawn
        raise e
      File "/opt/conda/lib/python3.5/site-packages/jupyterhub/user.py", line 450, in spawn
        resp = yield server.wait_up(http=True, timeout=spawner.http_timeout)
      File "/opt/conda/lib/python3.5/site-packages/jupyterhub/utils.py", line 180, in wait_for_http_server
        timeout=timeout
      File "/opt/conda/lib/python3.5/site-packages/jupyterhub/utils.py", line 135, in exponential_backoff
        raise TimeoutError(fail_message)
    TimeoutError: Server at http://127.0.0.1:39616/user/fabiofumarola/ didn't respond in 30 seconds
    

[iot_ingestion_manager] connection exceptions during test

2017-11-09 16:22:42.884 [New I/O boss #18] ERROR org.apache.kudu.client.TabletClient.exceptionCaught(723) - [Peer master-127.0.0.1:64030] Unexpected exception from downstream on [id: 0x3de71906]
java.net.ConnectException: Connection refused: /127.0.0.1:64030
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
	at org.apache.kudu.client.shaded.org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
	at org.apache.kudu.client.shaded.org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
	at org.apache.kudu.client.shaded.org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
2017-11-09 16:22:42.955 [Thread-452] ERROR o.a.z.server.NIOServerCnxnFactory.uncaughtException(44) - Thread Thread[Thread-452,5,main] died
java.security.PrivilegedActionException: null
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:360)
	at org.apache.kudu.spark.kudu.KuduContext.<init>(KuduContext.scala:74)
	at iot_ingestion_manager.yaml.Iot_ingestion_managerYaml$$anonfun$4$$anonfun$apply$3$$anonfun$apply$1$$anonfun$apply$mcV$sp$6.apply(iot_ingestion_manager.yaml.scala:293)
	at iot_ingestion_manager.yaml.Iot_ingestion_managerYaml$$anonfun$4$$anonfun$apply$3$$anonfun$apply$1$$anonfun$apply$mcV$sp$6.apply(iot_ingestion_manager.yaml.scala:290)
	at scala.util.Success.foreach(Try.scala:236)
	at iot_ingestion_manager.yaml.Iot_ingestion_managerYaml$$anonfun$4$$anonfun$apply$3$$anonfun$apply$1.apply$mcV$sp(iot_ingestion_manager.yaml.scala:289)
	at iot_ingestion_manager.yaml.Iot_ingestion_managerYaml$$anon$1.run(iot_ingestion_manager.yaml.scala:121)
Caused by: org.apache.kudu.client.NoLeaderFoundException: Master config (127.0.0.1:64030) has no leader. Exceptions received: org.apache.kudu.client.RecoverableException: [Peer master-127.0.0.1:64030] Connection reset
	at org.apache.kudu.client.ConnectToCluster.incrementCountAndCheckExhausted(ConnectToCluster.java:240)
	at org.apache.kudu.client.ConnectToCluster.access$000(ConnectToCluster.java:47)
	at org.apache.kudu.client.ConnectToCluster$ConnectToMasterErrCB.call(ConnectToCluster.java:309)
	at org.apache.kudu.client.ConnectToCluster$ConnectToMasterErrCB.call(ConnectToCluster.java:298)
	at com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
	at com.stumbleupon.async.Deferred.addCallbacks(Deferred.java:688)
	at org.apache.kudu.client.ConnectToCluster.run(ConnectToCluster.java:158)
	at org.apache.kudu.client.AsyncKuduClient.getMasterTableLocationsPB(AsyncKuduClient.java:1169)
	at org.apache.kudu.client.AsyncKuduClient.exportAuthenticationCredentials(AsyncKuduClient.java:553)
	at org.apache.kudu.client.KuduClient.exportAuthenticationCredentials(KuduClient.java:293)
	at org.apache.kudu.spark.kudu.KuduContext$$anon$1.run(KuduContext.scala:75)
	at org.apache.kudu.spark.kudu.KuduContext$$anon$1.run(KuduContext.scala:74)
	... 8 common frames omitted
Caused by: org.apache.kudu.client.RecoverableException: [Peer master-127.0.0.1:64030] Connection reset
	at org.apache.kudu.client.TabletClient.sendRpc(TabletClient.java:195)
	at org.apache.kudu.client.ConnectToCluster.connectToMaster(ConnectToCluster.java:96)
	at org.apache.kudu.client.ConnectToCluster.run(ConnectToCluster.java:156)
	... 13 common frames omitted
2017-11-09 16:22:55.770 [application-akka.actor.play-api-first-oauth-token-checker-9] INFO  i.yaml.Iot_ingestion_managerYaml.apply(356) - Begin operation 'stop'
2017-11-09 16:22:55.781 [application-akka.actor.play-api-first-oauth-token-checker-9] INFO  i.yaml.Iot_ingestion_managerYaml.apply(362) - About to stop the streaming context
2017-11-09 16:22:55.784 [application-akka.actor.play-api-first-oauth-token-checker-9] INFO  i.yaml.Iot_ingestion_managerYaml.apply(365) - Thread stopped
2017-11-09 16:22:55.784 [application-akka.actor.play-api-first-oauth-token-checker-9] INFO  i.yaml.Iot_ingestion_managerYaml.apply(368) - End operation 'stop'
[info] ServiceSpec
[info] 
[info] The iot-ingestion-manager should
[error]   ! receive and store the metric timeseries in OpenTSDB correctly
[error]    java.lang.RuntimeException: Unexpected exception (TSQuery.java:218)
[error] net.opentsdb.core.TSQuery.buildQueries(TSQuery.java:218)
[error] ServiceSpec$$anonfun$1$$anonfun$apply$13$$anon$1.delayedEndpoint$ServiceSpec$$anonfun$1$$anonfun$apply$13$$anon$1$1(ServiceSpec.scala:271)
[error] ServiceSpec$$anonfun$1$$anonfun$apply$13$$anon$1$delayedInit$body.apply(ServiceSpec.scala:169)
[error] play.api.test.WithServer$$anonfun$around$3.apply(Specs.scala:69)
[error] play.api.test.WithServer$$anonfun$around$3.apply(Specs.scala:69)
[error] play.api.test.PlayRunners$class.running(Helpers.scala:76)
[error] play.api.test.Helpers$.running(Helpers.scala:382)
[error] play.api.test.WithServer.around(Specs.scala:69)
[error] play.api.test.WithServer.delayedInit(Specs.scala:57)
[error] ServiceSpec$$anonfun$1$$anonfun$apply$13$$anon$1.<init>(ServiceSpec.scala:169)
[error] ServiceSpec$$anonfun$1$$anonfun$apply$13.apply(ServiceSpec.scala:169)
[error] ServiceSpec$$anonfun$1$$anonfun$apply$13.apply(ServiceSpec.scala:169)
[error] com.stumbleupon.async.DeferredGroup.done(DeferredGroup.java:169)
[error] com.stumbleupon.async.DeferredGroup.recordCompletion(DeferredGroup.java:142)
[error] com.stumbleupon.async.DeferredGroup.access$000(DeferredGroup.java:36)
[error] com.stumbleupon.async.DeferredGroup$1Notify.call(DeferredGroup.java:82)
[error] com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
[error] com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
[error] com.stumbleupon.async.Deferred.access$300(Deferred.java:430)
[error] com.stumbleupon.async.Deferred$Continue.call(Deferred.java:1366)
[error] com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
[error] com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
[error] com.stumbleupon.async.Deferred.access$300(Deferred.java:430)
[error] com.stumbleupon.async.Deferred$Continue.call(Deferred.java:1366)
[error] com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
[error] com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
[error] com.stumbleupon.async.Deferred.callback(Deferred.java:1005)
[error] shaded.org.hbase.async.HBaseRpc.callback(HBaseRpc.java:698)
[error] shaded.org.hbase.async.RegionClient.decode(RegionClient.java:1516)
[error] shaded.org.hbase.async.RegionClient.decode(RegionClient.java:88)
[error] shaded.org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500)
[error] shaded.org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
[error] shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
[error] shaded.org.hbase.async.RegionClient.handleUpstream(RegionClient.java:1206)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
[error] shaded.org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)
[error] shaded.org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
[error] shaded.org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler.handleUpstream(IdleStateAwareChannelHandler.java:36)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
[error] shaded.org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294)
[error] shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
[error] shaded.org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:3108)
[error] shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
[error] shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
[error] shaded.org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
[error] shaded.org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
[error] shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
[error] shaded.org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
[error] shaded.org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
[error] shaded.org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
[error] shaded.org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
[error] net.opentsdb.uid.UniqueId$1GetIdCB.call(UniqueId.java:356)
[error] net.opentsdb.uid.UniqueId$1GetIdCB.call(UniqueId.java:353)
[error] com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
[error] com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
[error] com.stumbleupon.async.Deferred.access$300(Deferred.java:430)
[error] com.stumbleupon.async.Deferred$Continue.call(Deferred.java:1366)
[error] com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
[error] com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
[error] com.stumbleupon.async.Deferred.access$300(Deferred.java:430)
[error] com.stumbleupon.async.Deferred$Continue.call(Deferred.java:1366)
[error] com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
[error] com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
[error] com.stumbleupon.async.Deferred.callback(Deferred.java:1005)
[error] shaded.org.hbase.async.HBaseRpc.callback(HBaseRpc.java:698)
[error] shaded.org.hbase.async.RegionClient.decode(RegionClient.java:1516)
[error] shaded.org.hbase.async.RegionClient.decode(RegionClient.java:88)
[error] shaded.org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500)
[error] shaded.org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
[error] shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
[error] shaded.org.hbase.async.RegionClient.handleUpstream(RegionClient.java:1206)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
[error] shaded.org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)
[error] shaded.org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
[error] shaded.org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler.handleUpstream(IdleStateAwareChannelHandler.java:36)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
[error] shaded.org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294)
[error] shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
[error] shaded.org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:3108)
[error] shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
[error] shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
[error] shaded.org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
[error] shaded.org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
[error] shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
[error] shaded.org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
[error] shaded.org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
[error] shaded.org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
[error] shaded.org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
[error] CAUSED BY
[error]  com.stumbleupon.async.DeferredGroupException: At least one of the Deferreds failed, first exception: (DeferredGroup.java:169)
[error] com.stumbleupon.async.DeferredGroup.done(DeferredGroup.java:169)
[error] com.stumbleupon.async.DeferredGroup.recordCompletion(DeferredGroup.java:142)
[error] com.stumbleupon.async.DeferredGroup.access$000(DeferredGroup.java:36)
[error] com.stumbleupon.async.DeferredGroup$1Notify.call(DeferredGroup.java:82)
[error] com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
[error] com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
[error] com.stumbleupon.async.Deferred.access$300(Deferred.java:430)
[error] com.stumbleupon.async.Deferred$Continue.call(Deferred.java:1366)
[error] com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
[error] com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
[error] com.stumbleupon.async.Deferred.access$300(Deferred.java:430)
[error] com.stumbleupon.async.Deferred$Continue.call(Deferred.java:1366)
[error] com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
[error] com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
[error] com.stumbleupon.async.Deferred.callback(Deferred.java:1005)
[error] shaded.org.hbase.async.HBaseRpc.callback(HBaseRpc.java:698)
[error] shaded.org.hbase.async.RegionClient.decode(RegionClient.java:1516)
[error] shaded.org.hbase.async.RegionClient.decode(RegionClient.java:88)
[error] shaded.org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500)
[error] shaded.org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
[error] shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
[error] shaded.org.hbase.async.RegionClient.handleUpstream(RegionClient.java:1206)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
[error] shaded.org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)
[error] shaded.org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
[error] shaded.org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler.handleUpstream(IdleStateAwareChannelHandler.java:36)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
[error] shaded.org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294)
[error] shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
[error] shaded.org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:3108)
[error] shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
[error] shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
[error] shaded.org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
[error] shaded.org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
[error] shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
[error] shaded.org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
[error] shaded.org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
[error] shaded.org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
[error] shaded.org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
[error] net.opentsdb.uid.UniqueId$1GetIdCB.call(UniqueId.java:356)
[error] net.opentsdb.uid.UniqueId$1GetIdCB.call(UniqueId.java:353)
[error] com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
[error] com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
[error] com.stumbleupon.async.Deferred.access$300(Deferred.java:430)
[error] com.stumbleupon.async.Deferred$Continue.call(Deferred.java:1366)
[error] com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
[error] com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
[error] com.stumbleupon.async.Deferred.access$300(Deferred.java:430)
[error] com.stumbleupon.async.Deferred$Continue.call(Deferred.java:1366)
[error] com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
[error] com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
[error] com.stumbleupon.async.Deferred.callback(Deferred.java:1005)
[error] shaded.org.hbase.async.HBaseRpc.callback(HBaseRpc.java:698)
[error] shaded.org.hbase.async.RegionClient.decode(RegionClient.java:1516)
[error] shaded.org.hbase.async.RegionClient.decode(RegionClient.java:88)
[error] shaded.org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500)
[error] shaded.org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
[error] shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
[error] shaded.org.hbase.async.RegionClient.handleUpstream(RegionClient.java:1206)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
[error] shaded.org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)
[error] shaded.org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
[error] shaded.org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler.handleUpstream(IdleStateAwareChannelHandler.java:36)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
[error] shaded.org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294)
[error] shaded.org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
[error] shaded.org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
[error] shaded.org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:3108)
[error] shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
[error] shaded.org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
[error] shaded.org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
[error] shaded.org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
[error] shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
[error] shaded.org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
[error] shaded.org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
[error] shaded.org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
[error] shaded.org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
[error] CAUSED BY
[error]  net.opentsdb.uid.NoSuchUniqueName: No such name for 'metrics': 'speed' (UniqueId.java:356)
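The root cause in the trace above is `net.opentsdb.uid.NoSuchUniqueName`, which OpenTSDB raises when a data point references a metric (here `speed`) that has no registered UID. A possible fix, assuming a standard OpenTSDB deployment (command and configuration key taken from OpenTSDB's documentation), is to register the metric manually or enable automatic UID creation:

```shell
# Register the missing metric UID manually (run on the TSD host):
tsdb mkmetric speed

# Alternatively, let OpenTSDB auto-assign UIDs for new metrics by
# setting this in opentsdb.conf and restarting the TSD:
#   tsd.core.auto_create_metrics = true
```

Auto-creation is convenient during development but can pollute the UID table with typo'd metric names, so registering metrics explicitly is often preferred in production.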

Integrating GovPay with the Data & Analytics Framework (DAF)

The goal is to integrate GovPay, the open-source platform that connects public administrations and technology partners to the PagoPA payment circuit, with the Data & Analytics Framework, the data ingestion, processing, and analytics platform promoted by the Digital Transformation Team. The following steps are required:

  • identify a dataset of PagoPA transactions of interest to the DAF
  • define the metadata for the dataset
  • analyze or define an integration API for the DAF data ingestion services
  • build a client that feeds the DAF with the PagoPA data collected by GovPay

References:

Edit dataset target path on HDFS

Dataset directories follow the pattern /daf/ordinary/organization/domain__subdomain/dataset_name, where domain__subdomain maps to the Hive db_name and dataset_name to the table_name. Example:

/daf/ordinary/pac_cdc/tecnologia__monitoraggio/cdc_o_apm
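The naming convention above can be sketched as a small helper (the function name and signature are hypothetical, for illustration only):

```python
def dataset_path(organization: str, domain: str, subdomain: str, dataset_name: str) -> str:
    """Build the HDFS target path for an ordinary DAF dataset.

    Follows the convention /daf/ordinary/organization/domain__subdomain/dataset_name,
    where domain__subdomain maps to the Hive db_name and dataset_name to the table_name.
    """
    return f"/daf/ordinary/{organization}/{domain}__{subdomain}/{dataset_name}"

# Reproduces the example directory from this section:
print(dataset_path("pac_cdc", "tecnologia", "monitoraggio", "cdc_o_apm"))
# → /daf/ordinary/pac_cdc/tecnologia__monitoraggio/cdc_o_apm
```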
