
Comments (9)

sduskis commented:

RetriesExhaustedWithDetailsException is too generic to be helpful in figuring out what the specific problem is. We need to improve the logging in https://github.com/GoogleCloudPlatform/cloud-bigtable-client/blob/master/bigtable-hbase-dataflow/src/main/java/com/google/cloud/bigtable/dataflow/CloudBigtableIO.java#L615

Part of the issue is my relatively weak understanding of logging in Dataflow. Let me reach out to the Dataflow team for advice. If you have experience with Dataflow logging, feel free to submit a pull request with better exception handling.
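
For reference, a minimal sketch of what more detailed logging could look like, assuming we catch the exception around the mutate/flush calls and walk its per-action details via the public RetriesExhaustedWithDetailsException API (the helper class and logger names below are illustrative, not the actual CloudBigtableIO code):

```java
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative helper: log every underlying cause instead of only the
// generic "Failed N actions" summary.
class MutationErrorLogger {
  private static final Logger LOG = LoggerFactory.getLogger(MutationErrorLogger.class);

  static void logDetails(RetriesExhaustedWithDetailsException e) {
    for (int i = 0; i < e.getNumExceptions(); i++) {
      // One entry per failed action: the row, the host, and the real cause.
      LOG.warn("Failed mutation on row {} against {}",
          e.getRow(i), e.getHostnamePort(i), e.getCause(i));
    }
    // Aggregated, human-readable view of the same information.
    LOG.warn("Exhaustive description: {}", e.getExhaustiveDescription());
  }
}
```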


derjust commented:

It now also occurs here: com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnProcessContext.output(DoFnRunner.java:483) in our processElement, not only in finishBundle:

com.google.cloud.dataflow.sdk.util.UserCodeException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 4 actions: StatusRuntimeException: 4 times,
 at com.google.cloud.dataflow.sdk.util.DoFnRunner.processElement(DoFnRunner.java:171) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase.processElement(ParDoFnBase.java:193) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.util.common.worker.OutputReceiver.process(OutputReceiver.java:52) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.util.DoFnRunner$DoFnProcessContext.output(DoFnRunner.java:483) ~[google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.util.DoFnRunner.processElement(DoFnRunner.java:171) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.util.common.worker.OutputReceiver.process(OutputReceiver.java:52) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:171) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation.start(ReadOperation.java:117) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:66) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:137) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:132) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60-ea]
 Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 4 actions: StatusRuntimeException: 4 times,
 at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.handleExceptions(BigtableBufferedMutator.java:207) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-6X7wujMPcabYnFblG0Dueg.jar:na]
 at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.mutate(BigtableBufferedMutator.java:141) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-6X7wujMPcabYnFblG0Dueg.jar:na]
 at com.google.cloud.bigtable.dataflow.CloudBigtableIO$CloudBigtableSingleTableWriteFn.processElement(CloudBigtableIO.java:605) ~[bigtable-hbase-dataflow-0.2.2-SNAPSHOT-5Z7IdGEQM5njbVmGbZ3fAQ.jar:na]


derjust commented:

Here are the contained exceptions, generated with this code: #479
This one is an example of an exception caused by the line c.output(mutation) - the same one we see in the finishBundle message:

 2015-09-10 19:54:59,953 WARN  | [pool-1-thread-3] (c.g.c.b.d.CloudBigtableIO:137) | c089d9e1-29f8-4c03-9069-ce092a1aabfe: processElement see cause: UNKNOWN: extracted status from HTTP :status 502

 Headers(path=null,authority=null,metadata={:status=[502], alt-svc=[quic=":443"; p="1"; ma=604800], alternate-protocol=[443:quic,p=1], content-length=[0], content-type=[text/html; charset=UTF-8], date=[Thu, 10 Sep 2015 19:54:59 GMT], server=[GFE/2.0]})
 DATA-----------------------------

 io.grpc.StatusRuntimeException: UNKNOWN: extracted status from HTTP :status 502

 Headers(path=null,authority=null,metadata={:status=[502], alt-svc=[quic=":443"; p="1"; ma=604800], alternate-protocol=[443:quic,p=1], content-length=[0], content-type=[text/html; charset=UTF-8], date=[Thu, 10 Sep 2015 19:54:59 GMT], server=[GFE/2.0]})
 DATA-----------------------------

 at io.grpc.Status.asRuntimeException(Status.java:428) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:264) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.ClientCallImpl$ClientStreamListenerImpl$3.run(ClientCallImpl.java:293) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60-ea]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60-ea]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
 2015-09-10 19:54:59,966 WARN  | [pool-1-thread-3] (c.g.c.b.d.CloudBigtableIO:137) | c089d9e1-29f8-4c03-9069-ce092a1aabfe: processElement see cause: UNKNOWN: extracted status from HTTP :status 502

 Headers(path=null,authority=null,metadata={:status=[502], alt-svc=[quic=":443"; p="1"; ma=604800], alternate-protocol=[443:quic,p=1], content-length=[0], content-type=[text/html; charset=UTF-8], date=[Thu, 10 Sep 2015 19:54:59 GMT], server=[GFE/2.0]})
 DATA-----------------------------

 io.grpc.StatusRuntimeException: UNKNOWN: extracted status from HTTP :status 502

 Headers(path=null,authority=null,metadata={:status=[502], alt-svc=[quic=":443"; p="1"; ma=604800], alternate-protocol=[443:quic,p=1], content-length=[0], content-type=[text/html; charset=UTF-8], date=[Thu, 10 Sep 2015 19:54:59 GMT], server=[GFE/2.0]})
 DATA-----------------------------

 at io.grpc.Status.asRuntimeException(Status.java:428) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:264) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.ClientCallImpl$ClientStreamListenerImpl$3.run(ClientCallImpl.java:293) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60-ea]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60-ea]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
 2015-09-10 19:54:59,973 WARN  | [pool-1-thread-3] (c.g.c.b.d.CloudBigtableIO:137) | c089d9e1-29f8-4c03-9069-ce092a1aabfe: processElement see cause: UNKNOWN: extracted status from HTTP :status 502

 Headers(path=null,authority=null,metadata={:status=[502], alt-svc=[quic=":443"; p="1"; ma=604800], alternate-protocol=[443:quic,p=1], content-length=[0], content-type=[text/html; charset=UTF-8], date=[Thu, 10 Sep 2015 19:54:59 GMT], server=[GFE/2.0]})
 DATA-----------------------------

 io.grpc.StatusRuntimeException: UNKNOWN: extracted status from HTTP :status 502

 Headers(path=null,authority=null,metadata={:status=[502], alt-svc=[quic=":443"; p="1"; ma=604800], alternate-protocol=[443:quic,p=1], content-length=[0], content-type=[text/html; charset=UTF-8], date=[Thu, 10 Sep 2015 19:54:59 GMT], server=[GFE/2.0]})
 DATA-----------------------------

 at io.grpc.Status.asRuntimeException(Status.java:428) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:264) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.ClientCallImpl$ClientStreamListenerImpl$3.run(ClientCallImpl.java:293) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60-ea]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60-ea]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
 2015-09-10 19:54:59,982 WARN  | [pool-1-thread-3] (c.g.c.b.d.CloudBigtableIO:137) | c089d9e1-29f8-4c03-9069-ce092a1aabfe: processElement see cause: UNKNOWN: extracted status from HTTP :status 502

 Headers(path=null,authority=null,metadata={:status=[502], alt-svc=[quic=":443"; p="1"; ma=604800], alternate-protocol=[443:quic,p=1], content-length=[0], content-type=[text/html; charset=UTF-8], date=[Thu, 10 Sep 2015 19:54:59 GMT], server=[GFE/2.0]})
 DATA-----------------------------

 io.grpc.StatusRuntimeException: UNKNOWN: extracted status from HTTP :status 502

 Headers(path=null,authority=null,metadata={:status=[502], alt-svc=[quic=":443"; p="1"; ma=604800], alternate-protocol=[443:quic,p=1], content-length=[0], content-type=[text/html; charset=UTF-8], date=[Thu, 10 Sep 2015 19:54:59 GMT], server=[GFE/2.0]})
 DATA-----------------------------

 at io.grpc.Status.asRuntimeException(Status.java:428) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:264) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.ClientCallImpl$ClientStreamListenerImpl$3.run(ClientCallImpl.java:293) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60-ea]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60-ea]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]

In parallel, the same exception also appears in the Dataflow log:

(f1d5b828c020b572): io.grpc.StatusRuntimeException: UNKNOWN: extracted status from HTTP :status 502 Headers(path=null,authority=null,metadata={:status=[502], alt-svc=[quic=":443"; p="1"; ma=604800], alternate-protocol=[443:quic,p=1], content-length=[0], content-type=[text/html; charset=UTF-8], date=[Thu, 10 Sep 2015 19:56:14 GMT], server=[GFE/2.0]}) DATA-----------------------------
 at io.grpc.Status.asRuntimeException(Status.java:428)
 at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:264) 
 at io.grpc.ClientCallImpl$ClientStreamListenerImpl$3.run(ClientCallImpl.java:293)
 at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)


sduskis commented:

Thanks for the detailed logs. The 502s were caused by a server-side issue, and we believe they are unrelated to your testing. We're tracking down the offending issue now.


derjust commented:

OK, the 502 is gone. Now we see the following in the logs.
It appears only twice, at the end of the execution, and we lost roughly 5k elements out of 500M:

 2015-09-10 21:03:51,665 INFO  | [pool-1-thread-5] (c.g.c.b.g.BigtableSession:45) | Opening connection for projectId sungard-cat-demo, zoneId us-central1-b, clusterId sept-poc, on data host bigtable.googleapis.com, table admin host bigtabletableadmin.googleapis.com.
 2015-09-10 21:03:51,152 INFO  | [pool-1-thread-1] (c.g.c.d.s.r.w.DataflowWorkerHarness:321) | Finished processing stage s01 with 0 errors in 513.397 seconds
 2015-09-10 21:03:51,193 INFO  | [pool-1-thread-1] (c.g.c.d.s.r.w.DataflowWorkerHarness:288) | Starting MapTask stage s01
 2015-09-10 21:03:51,213 INFO  | [pool-1-thread-1] (c.g.c.b.g.BigtableSession:45) | Opening connection for projectId sungard-cat-demo, zoneId us-central1-b, clusterId sept-poc, on data host bigtable.googleapis.com, table admin host bigtabletableadmin.googleapis.com.
 2015-09-10 21:03:51,976 INFO  | [pool-1-thread-6] (c.g.c.d.s.r.w.DataflowWorkerHarness:321) | Finished processing stage s01 with 0 errors in 359.665 seconds
 at com.google.bigtable.repackaged.io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:287) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:799) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:766) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1234) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.ChannelHandlerInvokerUtil.invokeFlushNow(ChannelHandlerInvokerUtil.java:165) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:272) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.handler.codec.http2.Http2ConnectionHandler.exceptionCaught(Http2ConnectionHandler.java:409) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.ChannelHandlerInvokerUtil.invokeExceptionCaughtNow(ChannelHandlerInvokerUtil.java:64) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.DefaultChannelHandlerInvoker.invokeExceptionCaught(DefaultChannelHandlerInvoker.java:110) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.ChannelHandlerInvokerUtil.invokeExceptionCaughtNow(ChannelHandlerInvokerUtil.java:64) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.DefaultChannelHandlerInvoker.invokeExceptionCaught(DefaultChannelHandlerInvoker.java:110) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:86) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:158) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:467) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:353) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
 javax.net.ssl.SSLException: SSLEngine closed already
 javax.net.ssl.SSLException: SSLEngine closed already
 2015-09-10 21:03:51,263 ERROR | [bigtable-grpc-elg-3] (c.g.b.r.i.n.h.c.h.Http2ConnectionHandler:181) | Sending GOAWAY failed: lastStreamId '0', errorCode '2', debugData 'SSLEngine closed already'. Forcing shutdown of the connection.
 javax.net.ssl.SSLException: SSLEngine closed already
 2015-09-10 21:03:51,268 ERROR | [bigtable-grpc-elg-3] (c.g.b.r.i.n.h.c.h.Http2ConnectionHandler:181) | Sending GOAWAY failed: lastStreamId '0', errorCode '2', debugData 'SSLEngine closed already'. Forcing shutdown of the connection.
 2015-09-10 21:03:51,268 ERROR | [bigtable-grpc-elg-3] (c.g.b.r.i.n.h.c.h.Http2ConnectionHandler:181) | Sending GOAWAY failed: lastStreamId '0', errorCode '2', debugData 'SSLEngine closed already'. Forcing shutdown of the connection.
 javax.net.ssl.SSLException: SSLEngine closed already
 2015-09-10 21:03:51,269 ERROR | [bigtable-grpc-elg-3] (c.g.b.r.i.n.h.c.h.Http2ConnectionHandler:181) | Sending GOAWAY failed: lastStreamId '0', errorCode '2', debugData 'SSLEngine closed already'. Forcing shutdown of the connection.
 javax.net.ssl.SSLException: SSLEngine closed already
 2015-09-10 21:03:51,271 WARN  | [pool-1-thread-8] (c.g.c.b.d.CloudBigtableIO:134) | processElement: c31166c0-11a7-45b7-ab3a-dfa6c8c42677 occured during finishing: Failed 3 actions: StatusRuntimeException: 3 times,
 javax.net.ssl.SSLException: SSLEngine closed already
 at io.grpc.Status.asRuntimeException(Status.java:428) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.ClientCallImpl$ClientStreamListenerImpl$3.run(ClientCallImpl.java:293) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60-ea]
 at io.grpc.Status.asRuntimeException(Status.java:428) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:264) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.ClientCallImpl$ClientStreamListenerImpl$3.run(ClientCallImpl.java:293) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 Caused by: javax.net.ssl.SSLException: SSLEngine closed already
 2015-09-10 21:03:51,334 WARN  | [pool-1-thread-8] (c.g.c.b.d.CloudBigtableIO:137) | c31166c0-11a7-45b7-ab3a-dfa6c8c42677: processElement see cause: UNKNOWN
 io.grpc.StatusRuntimeException: UNKNOWN
 at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:264) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.ClientCallImpl$ClientStreamListenerImpl$3.run(ClientCallImpl.java:293) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60-ea]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60-ea]
 Caused by: java.io.IOException: Connection reset by peer
 at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.8.0_60-ea]
 at com.google.bigtable.repackaged.io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:311) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:115) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:353) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 ... 1 common frames omitted
 2015-09-10 21:03:51,338 WARN  | [pool-1-thread-8] (c.g.c.d.s.r.w.DataflowWorker:246) | Uncaught exception occurred during work unit execution:
 at com.google.cloud.dataflow.sdk.util.DoFnRunner.invokeProcessElement(DoFnRunner.java:193) ~[google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 2015-09-10 21:03:51,241 ERROR | [bigtable-grpc-elg-3] (c.g.b.r.i.n.h.c.h.Http2ConnectionHandler:181) | Sending GOAWAY failed: lastStreamId '0', errorCode '2', debugData ''. Forcing shutdown of the connection.
 java.io.IOException: Broken pipe
 at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.8.0_60-ea]
 at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.8.0_60-ea]
 at sun.nio.ch.IOUtil.write(IOUtil.java:51) ~[na:1.8.0_60-ea]
 at com.google.bigtable.repackaged.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:311) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.ChannelHandlerInvokerUtil.invokeFlushNow(ChannelHandlerInvokerUtil.java:165) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.DefaultChannelHandlerInvoker.invokeFlush(DefaultChannelHandlerInvoker.java:355) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:272) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.handler.ssl.SslHandler.flush(SslHandler.java:478) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.DefaultChannelHandlerInvoker.invokeFlush(DefaultChannelHandlerInvoker.java:355) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.handler.codec.http2.Http2ConnectionHandler.onException(Http2ConnectionHandler.java:491) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.netty.NettyClientHandler.exceptionCaught(NettyClientHandler.java:259) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:142) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.handler.ssl.SslHandler.exceptionCaught(SslHandler.java:705) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:142) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:934) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:510) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:381) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 2015-09-10 21:03:51,252 ERROR | [bigtable-grpc-elg-3] (c.g.b.r.i.n.h.c.h.Http2ConnectionHandler:181) | Sending GOAWAY failed: lastStreamId '0', errorCode '2', debugData 'SSLEngine closed already'. Forcing shutdown of the connection.
 2015-09-10 21:03:51,258 ERROR | [bigtable-grpc-elg-3] (c.g.b.r.i.n.h.c.h.Http2ConnectionHandler:181) | Sending GOAWAY failed: lastStreamId '0', errorCode '2', debugData 'SSLEngine closed already'. Forcing shutdown of the connection.
 javax.net.ssl.SSLException: SSLEngine closed already
 2015-09-10 21:03:51,279 ERROR | [bigtable-grpc-elg-3] (c.g.b.r.i.n.h.c.h.Http2ConnectionHandler:181) | Sending GOAWAY failed: lastStreamId '0', errorCode '2', debugData ''. Forcing shutdown of the connection.
 2015-09-10 21:03:51,326 WARN  | [pool-1-thread-8] (c.g.c.b.d.CloudBigtableIO:137) | c31166c0-11a7-45b7-ab3a-dfa6c8c42677: processElement see cause: UNKNOWN
 io.grpc.StatusRuntimeException: UNKNOWN
 at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:264) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60-ea]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
 Caused by: javax.net.ssl.SSLException: SSLEngine closed already
 2015-09-10 21:03:51,329 WARN  | [pool-1-thread-8] (c.g.c.b.d.CloudBigtableIO:137) | c31166c0-11a7-45b7-ab3a-dfa6c8c42677: processElement see cause: UNKNOWN
 io.grpc.StatusRuntimeException: UNKNOWN
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60-ea]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60-ea]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
 at io.grpc.Status.asRuntimeException(Status.java:428) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
 at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.8.0_60-ea]
 at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.8.0_60-ea]
 at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.8.0_60-ea]
 at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[na:1.8.0_60-ea]
 at com.google.bigtable.repackaged.io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:854) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:510) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:467) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:381) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 com.google.cloud.dataflow.sdk.util.UserCodeException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 3 actions: StatusRuntimeException: 3 times,
 at com.google.cloud.dataflow.sdk.util.DoFnRunner.processElement(DoFnRunner.java:171) ~[google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[na:1.8.0_60-ea]
 at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.8.0_60-ea]
 at sun.nio.ch.IOUtil.write(IOUtil.java:51) ~[na:1.8.0_60-ea]
 at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) ~[na:1.8.0_60-ea]
 at com.google.bigtable.repackaged.io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:287) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:799) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:311) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:766) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1234) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.ChannelHandlerInvokerUtil.invokeFlushNow(ChannelHandlerInvokerUtil.java:165) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.DefaultChannelHandlerInvoker.invokeFlush(DefaultChannelHandlerInvoker.java:355) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:272) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.handler.ssl.SslHandler.flush(SslHandler.java:478) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.ChannelHandlerInvokerUtil.invokeFlushNow(ChannelHandlerInvokerUtil.java:165) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.DefaultChannelHandlerInvoker.invokeFlush(DefaultChannelHandlerInvoker.java:355) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:272) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.handler.codec.http2.Http2ConnectionHandler.onException(Http2ConnectionHandler.java:491) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.handler.codec.http2.Http2ConnectionHandler.exceptionCaught(Http2ConnectionHandler.java:409) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.netty.NettyClientHandler.exceptionCaught(NettyClientHandler.java:259) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.ChannelHandlerInvokerUtil.invokeExceptionCaughtNow(ChannelHandlerInvokerUtil.java:64) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.DefaultChannelHandlerInvoker.invokeExceptionCaught(DefaultChannelHandlerInvoker.java:110) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:142) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.handler.ssl.SslHandler.exceptionCaught(SslHandler.java:705) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.ChannelHandlerInvokerUtil.invokeExceptionCaughtNow(ChannelHandlerInvokerUtil.java:64) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.DefaultChannelHandlerInvoker.invokeExceptionCaught(DefaultChannelHandlerInvoker.java:110) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:142) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:934) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:86) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:158) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:510) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:467) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:381) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:353) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703) [bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
 2015-09-10 21:03:51,252 ERROR | [bigtable-grpc-elg-3] (c.g.b.r.i.n.h.c.h.Http2ConnectionHandler:181) | Sending GOAWAY failed: lastStreamId '0', errorCode '2', debugData 'SSLEngine closed already'. Forcing shutdown of the connection.
 javax.net.ssl.SSLException: SSLEngine closed already
 2015-09-10 21:03:51,258 ERROR | [bigtable-grpc-elg-3] (c.g.b.r.i.n.h.c.h.Http2ConnectionHandler:181) | Sending GOAWAY failed: lastStreamId '0', errorCode '2', debugData 'SSLEngine closed already'. Forcing shutdown of the connection.
 javax.net.ssl.SSLException: SSLEngine closed already
 2015-09-10 21:03:51,263 ERROR | [bigtable-grpc-elg-3] (c.g.b.r.i.n.h.c.h.Http2ConnectionHandler:181) | Sending GOAWAY failed: lastStreamId '0', errorCode '2', debugData 'SSLEngine closed already'. Forcing shutdown of the connection.
 javax.net.ssl.SSLException: SSLEngine closed already
 2015-09-10 21:03:51,268 ERROR | [bigtable-grpc-elg-3] (c.g.b.r.i.n.h.c.h.Http2ConnectionHandler:181) | Sending GOAWAY failed: lastStreamId '0', errorCode '2', debugData 'SSLEngine closed already'. Forcing shutdown of the connection.
 javax.net.ssl.SSLException: SSLEngine closed already
 2015-09-10 21:03:51,268 ERROR | [bigtable-grpc-elg-3] (c.g.b.r.i.n.h.c.h.Http2ConnectionHandler:181) | Sending GOAWAY failed: lastStreamId '0', errorCode '2', debugData 'SSLEngine closed already'. Forcing shutdown of the connection.
 javax.net.ssl.SSLException: SSLEngine closed already
 2015-09-10 21:03:51,269 ERROR | [bigtable-grpc-elg-3] (c.g.b.r.i.n.h.c.h.Http2ConnectionHandler:181) | Sending GOAWAY failed: lastStreamId '0', errorCode '2', debugData 'SSLEngine closed already'. Forcing shutdown of the connection.
 javax.net.ssl.SSLException: SSLEngine closed already
 2015-09-10 21:03:51,271 WARN  | [pool-1-thread-8] (c.g.c.b.d.CloudBigtableIO:134) | processElement: c31166c0-11a7-45b7-ab3a-dfa6c8c42677 occured during finishing: Failed 3 actions: StatusRuntimeException: 3 times,
 2015-09-10 21:03:51,279 ERROR | [bigtable-grpc-elg-3] (c.g.b.r.i.n.h.c.h.Http2ConnectionHandler:181) | Sending GOAWAY failed: lastStreamId '0', errorCode '2', debugData ''. Forcing shutdown of the connection.
 javax.net.ssl.SSLException: SSLEngine closed already
 2015-09-10 21:03:51,326 WARN  | [pool-1-thread-8] (c.g.c.b.d.CloudBigtableIO:137) | c31166c0-11a7-45b7-ab3a-dfa6c8c42677: processElement see cause: UNKNOWN
 io.grpc.StatusRuntimeException: UNKNOWN
 at io.grpc.Status.asRuntimeException(Status.java:428) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:264) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.ClientCallImpl$ClientStreamListenerImpl$3.run(ClientCallImpl.java:293) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60-ea]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60-ea]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
 Caused by: javax.net.ssl.SSLException: SSLEngine closed already
 2015-09-10 21:03:51,329 WARN  | [pool-1-thread-8] (c.g.c.b.d.CloudBigtableIO:137) | c31166c0-11a7-45b7-ab3a-dfa6c8c42677: processElement see cause: UNKNOWN
 io.grpc.StatusRuntimeException: UNKNOWN
 at io.grpc.Status.asRuntimeException(Status.java:428) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:264) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.ClientCallImpl$ClientStreamListenerImpl$3.run(ClientCallImpl.java:293) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60-ea]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60-ea]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
 Caused by: javax.net.ssl.SSLException: SSLEngine closed already
 2015-09-10 21:03:51,334 WARN  | [pool-1-thread-8] (c.g.c.b.d.CloudBigtableIO:137) | c31166c0-11a7-45b7-ab3a-dfa6c8c42677: processElement see cause: UNKNOWN
 io.grpc.StatusRuntimeException: UNKNOWN
 at io.grpc.Status.asRuntimeException(Status.java:428) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:264) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.ClientCallImpl$ClientStreamListenerImpl$3.run(ClientCallImpl.java:293) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60-ea]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60-ea]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
 Caused by: java.io.IOException: Connection reset by peer
 at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.8.0_60-ea]
 at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.8.0_60-ea]
 at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.8.0_60-ea]
 at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.8.0_60-ea]
 at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[na:1.8.0_60-ea]
 at com.google.bigtable.repackaged.io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:311) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:854) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:115) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:510) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:467) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:381) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:353) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.bigtable.repackaged.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 ... 1 common frames omitted
 2015-09-10 21:03:51,338 WARN  | [pool-1-thread-8] (c.g.c.d.s.r.w.DataflowWorker:246) | Uncaught exception occurred during work unit execution:
 com.google.cloud.dataflow.sdk.util.UserCodeException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 3 actions: StatusRuntimeException: 3 times,
 at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase.processElement(ParDoFnBase.java:193) ~[google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.util.common.worker.ParDoOperation.process(ParDoOperation.java:52) ~[google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.util.common.worker.OutputReceiver.process(OutputReceiver.java:52) ~[google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:171) ~[google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation.start(ReadOperation.java:117) ~[google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:66) ~[google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.executeWork(DataflowWorker.java:234) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.doWork(DataflowWorker.java:171) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:137) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:147) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:132) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_60-ea]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60-ea]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60-ea]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
 Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 3 actions: StatusRuntimeException: 3 times,
 at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.handleExceptions(BigtableBufferedMutator.java:207) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.mutate(BigtableBufferedMutator.java:141) ~[bigtable-hbase-1.0-0.2.2-SNAPSHOT-W6q8ewaHrRe17HwPiEws9w.jar:na]
 at com.google.cloud.bigtable.dataflow.CloudBigtableIO$CloudBigtableSingleTableWriteFn.processElement(CloudBigtableIO.java:621) ~[bigtable-hbase-dataflow-0.2.2-SNAPSHOT-HwygXQ7wIv-cP9RMiDRYUA.jar:na]


sduskis commented:

This is another symptom of closing the buffered mutator before all of the requests complete. I'll do what I can to fix this issue over the next few business days.
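
For context, a minimal sketch of the kind of change this implies (not the actual patch): flush the buffered mutator and surface its accumulated asynchronous failures before it is closed at the end of a bundle, so late failures fail the bundle instead of being dropped. The class and method names below are illustrative only:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative bundle-completion step: wait for all outstanding mutations
// before closing, so nothing is still in flight when the connection goes away.
class SafeBundleFinisher {
  private static final Logger LOG = LoggerFactory.getLogger(SafeBundleFinisher.class);

  static void finishBundle(BufferedMutator mutator) throws IOException {
    try {
      // flush() sends everything still buffered and reports accumulated
      // asynchronous failures as RetriesExhaustedWithDetailsException.
      mutator.flush();
    } catch (RetriesExhaustedWithDetailsException e) {
      LOG.warn("Mutations failed during flush: {}", e.getExhaustiveDescription());
      throw e; // let the runner fail and retry the bundle instead of losing data
    } finally {
      // Close only after the flush attempt has completed.
      mutator.close();
    }
  }
}
```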


derjust commented:

What is the impact on the worker itself? Are we just losing some pipeline events, or does this result in an illegal state that affects other execution within the pipeline?
It looks to me like it affects a single element that might be in the middle of the total data; i.e., we saw it appear on elements around 65 million while importing 1 billion.
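
For what it's worth, the affected elements are at least identifiable from the exception itself: RetriesExhaustedWithDetailsException carries the failed Row objects, so a pipeline could log or re-submit them rather than lose them silently. A hedged sketch (the helper name is hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.apache.hadoop.hbase.client.Row;

// Illustrative recovery step: collect the mutations that actually failed so
// they can be retried or reported, instead of only seeing the aggregate count.
class FailedRowCollector {
  static List<Mutation> collectFailedMutations(RetriesExhaustedWithDetailsException e) {
    List<Mutation> failed = new ArrayList<>();
    for (int i = 0; i < e.getNumExceptions(); i++) {
      Row row = e.getRow(i);
      if (row instanceof Mutation) {
        failed.add((Mutation) row); // the original Put/Delete that was rejected
      }
    }
    return failed;
  }
}
```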

We now saw this as well - since it comes from finishBundle, does it also fall into the same category?

com.google.cloud.dataflow.sdk.util.UserCodeException: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: Error while mutating the row 'd49a9b1b-652f-4cd8-8f7c-465c1f9ccf01|QMCB' (projects/XXXX/zones/us-central1-b/clusters/YYYY/tables/1billion)
 at com.google.cloud.dataflow.sdk.util.DoFnRunner.finishBundle(DoFnRunner.java:205) ~[google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.runners.worker.ParDoFnBase.finishBundle(ParDoFnBase.java:198) ~[google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.executeWork(DataflowWorker.java:234) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:147) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:132) [google-cloud-dataflow-java-sdk-all-1.0.0-rHz39Me5Bgx6ma4iSO7HHQ.jar:na]
 at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_60-ea]
 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
Caused by: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: Error while mutating the row 'd49a9b1b-652f-4cd8-8f7c-465c1f9ccf01|QMCB' (projects/XXXX/zones/us-central1-b/clusters/YYYY/tables/1billion)


sduskis commented:

I'm not really sure what impact an exception has on a worker. I'm more familiar with the Bigtable side of things.


kevinsi4508 commented:

There seem to be multiple issues involved here. If possible, please break this into separate issues. Regardless, more logging is needed.

