gojekfarm / beast
[Deprecated] Load data from Kafka to any data warehouse. The BQ sink is now supported in Firehose: https://github.com/odpf/firehose
License: Apache License 2.0
We can't change a BigQuery schema once it is created, specifically by renaming a field, but in the protobuf context renaming a field is allowed.
Since the data type does not change, we should be able to update the ColumnMapping without breaking the data flow.
message order {
  string order_number = 1;
}
becomes
message order {
  string order_no = 1;
}
Beast should update its column mapping and continue pushing the renamed field into the same BigQuery column, order_number.
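A minimal sketch of the idea, assuming the mapping is keyed by protobuf field number so a rename does not affect the BigQuery column (the structure here is hypothetical, not beast's actual ColumnMapping class):

```java
import java.util.HashMap;
import java.util.Map;

public class ColumnMappingSketch {
    // protobuf field number -> BigQuery column name (hypothetical structure)
    private final Map<Integer, String> mapping = new HashMap<>();

    public ColumnMappingSketch() {
        // field 1 always lands in the order_number column,
        // whether the proto names it order_number or order_no
        mapping.put(1, "order_number");
    }

    public String columnFor(int protoFieldNumber) {
        return mapping.get(protoFieldNumber);
    }

    public static void main(String[] args) {
        ColumnMappingSketch m = new ColumnMappingSketch();
        System.out.println(m.columnFor(1)); // still order_number after the rename
    }
}
```

Because protobuf field numbers are stable across renames, keying the mapping by number keeps the data flow intact.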
Describe the default configuration for the application, and the need to override each part, e.g. queue size, poll time, ack timeout, thread pool size.
The current check for BQInsertErrors, in the case where messages are outside the streaming insert range, relies on the response message from the BigQuery API.
Since the BigQuery streaming insert ranges have changed, as mentioned here, such messages are treated as failures rather than OutOfBound messages.
This is why messages are not sent to the DLQ even when they fall outside the streaming insert range.
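Rather than parsing BigQuery's response message, the bounds could be checked locally before the insert. A minimal sketch, with illustrative bounds (the real limits should be taken from BigQuery's current quota documentation and kept configurable):

```java
import java.time.Duration;
import java.time.Instant;

public class StreamingRangeCheck {
    // Assumed bounds; BigQuery's actual streaming-insert range for partitioned
    // tables should be looked up and made configurable, not hardcoded.
    static final Duration MAX_PAST = Duration.ofDays(365);
    static final Duration MAX_FUTURE = Duration.ofDays(183);

    // Decide locally whether a record's partition timestamp is outside the
    // streamable range, instead of relying on the API response message.
    static boolean isOutOfBounds(Instant eventTime, Instant now) {
        return eventTime.isBefore(now.minus(MAX_PAST))
                || eventTime.isAfter(now.plus(MAX_FUTURE));
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        System.out.println(isOutOfBounds(now.minus(Duration.ofDays(400)), now)); // true -> route to DLQ
        System.out.println(isOutOfBounds(now, now)); // false -> stream normally
    }
}
```

With such a check, out-of-range messages can be routed to the DLQ deterministically, even if BigQuery changes its error wording again.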
Right now, Beast assumes the data is in protobuf format. We should make this customizable to support other formats:
create a parser based on configuration (className) and use reflection to instantiate it.
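A minimal sketch of a configuration-driven parser factory; the Parser interface and class names are illustrative, not beast's real API:

```java
// Hypothetical parser abstraction; a real one would return parsed records.
interface Parser {
    Object parse(byte[] payload);
}

// Example implementation that could be selected via configuration.
class JsonParser implements Parser {
    public Object parse(byte[] payload) {
        return new String(payload); // placeholder: a real impl would build a JSON tree
    }
}

public class ParserFactory {
    // className would come from configuration, e.g. PARSER_CLASS=JsonParser
    public static Parser create(String className) throws Exception {
        return (Parser) Class.forName(className)
                .getDeclaredConstructor()
                .newInstance();
    }

    public static void main(String[] args) throws Exception {
        Parser parser = create("JsonParser");
        System.out.println(parser.parse("{}".getBytes()));
    }
}
```

New formats would then require only a new Parser implementation plus a config change, with no changes to the consumer pipeline.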
Currently the Deploy stage is failing in Travis for branches; it should run only on the master branch.
gradlew build includes checkstyle too. Add code coverage with https://github.com/codecov/example-gradle
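A sketch of the fix in .travis.yml, assuming build stages with a script deploy (the deploy command is illustrative):

```yaml
jobs:
  include:
    - stage: deploy
      if: branch = master        # run the Deploy stage only on master
      script: ./gradlew publish  # illustrative deploy command
```

Travis evaluates the `if:` condition per job, so branch builds skip the Deploy stage entirely instead of failing it.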
BigQuery provides support for adding a partition expiry so that data for a partition is deleted after the expiry time.
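With the google-cloud-bigquery Java client, partition expiry can be set roughly like this (dataset/table names and the 30-day expiry are illustrative, and the call needs valid credentials):

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.StandardTableDefinition;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.TableInfo;
import com.google.cloud.bigquery.TimePartitioning;

public class PartitionExpiry {
    public static void main(String[] args) {
        long thirtyDaysMs = 30L * 24 * 60 * 60 * 1000; // expiry in milliseconds

        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        TimePartitioning partitioning = TimePartitioning.newBuilder(TimePartitioning.Type.DAY)
                .setExpirationMs(thirtyDaysMs)
                .build();
        StandardTableDefinition definition = StandardTableDefinition.newBuilder()
                .setTimePartitioning(partitioning)
                .build();
        // Update the existing table so partitions older than 30 days are dropped.
        bigquery.update(TableInfo.of(TableId.of("dataset", "table"), definition));
    }
}
```

Beast could expose the expiry as a configuration value and apply it when it creates or updates the table.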
The package declaration "package com.gojek.beast.com.gojek.beast.protomapping;" is wrong; it should be "package com.gojek.beast.protomapping;".
Stencil points to a server which holds file descriptors for the protobufs. We could add doc/script to setup a SimpleHTTPServer for the given protobufs.
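A possible doc/script sketch, assuming protoc is installed (file name and port are illustrative):

```shell
# Generate a descriptor set for the given protobufs and serve it over HTTP,
# so Stencil can be pointed at http://localhost:8000/beast-descriptors.desc
protoc --include_imports --descriptor_set_out=beast-descriptors.desc ./*.proto
python3 -m http.server 8000
```

This gives contributors a local Stencil-compatible server without any extra infrastructure.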
Problem
Currently, there is only one offset committer thread that acknowledges the successful consumption back to Kafka. As per beast architecture, the Consumer, BQ Workers, and Acknowledger threads work independently and are connected by blocking queues.
The push operation that puts Kafka messages onto a blocking queue does not block indefinitely; instead, a timeout governs how long it waits for a free slot to push the batch of Kafka messages.
Since we can spawn any number of BQ Workers, the commit queue processed by the Acknowledger fills up, and even with sufficiently high timeouts it stays full because of the high load of messages on the Acknowledger.
We need a mechanism to increase the processing capacity of the Acknowledger thread so that it does not become the bottleneck for the application.
Approaches
Disadvantages:
We push data onto the queue synchronously, so if the push to the commit queue takes a long time, we are bottlenecked on it and effectively use only one thread to push data to BigQuery. This causes a big performance degradation and diverges from Beast's philosophy of scaling.
With this approach of batch commits, we need to make sure that there is no data loss.
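One possible mechanism is a pool of acknowledger threads draining the commit queue, sketched below. Thread counts, capacities, and timeouts are illustrative, and real offset commits would also need per-partition ordering, which this sketch ignores:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AcknowledgerPool {
    // Drains `batches` commit batches with `threads` acknowledger threads
    // and returns how many batches were acknowledged.
    static int drain(int batches, int threads) throws Exception {
        BlockingQueue<String> commitQueue = new LinkedBlockingQueue<>(10);
        AtomicInteger acked = new AtomicInteger();

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                try {
                    while (true) {
                        String batch = commitQueue.poll(100, TimeUnit.MILLISECONDS);
                        if (batch == null) return; // queue drained, worker exits
                        acked.incrementAndGet();   // stand-in for consumer.commitSync(...)
                    }
                } catch (InterruptedException ignored) {
                }
            });
        }

        // Producer side: offer with a timeout instead of blocking indefinitely,
        // mirroring beast's bounded push onto the commit queue.
        for (int i = 0; i < batches; i++) {
            if (!commitQueue.offer("batch-" + i, 1, TimeUnit.SECONDS)) {
                break; // in beast, this timeout would surface as a failure status
            }
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return acked.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(drain(50, 4));
    }
}
```

Scaling the acknowledgers the same way BQ Workers are scaled keeps the commit queue from staying permanently full.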
Currently, when BigQuery throws an error because of rate limiting, the error message shown is:
StopEvent{reason='FailureStatus{cause=java.lang.RuntimeException: Push failed, message='null'}', source='BqQueueWorker'}
Instead of constructing the error message ourselves, we should surface the message carried in FailureStatus.
For a recursive protobuf schema like below:
message TestRecursiveMessage {
  string string_value = 1;
  TestRecursiveMessage recursive_message = 2;
}
The BigQuery schema generation logic fails with a StackOverflowError because of the recursive data type. Since recursive data types are supported in protobuf, the recursive protobuf should be converted to a BigQuery schema with a limit on the nesting depth.
BigQuery supports nested fields up to 15 levels deep, so 15 can be used as the depth limit.
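A minimal sketch of the depth cut-off, using a hypothetical FieldNode in place of real protobuf descriptors:

```java
import java.util.ArrayList;
import java.util.List;

public class DepthLimitedSchema {
    static final int MAX_DEPTH = 15; // BigQuery's nested-field limit

    // Stand-in for a protobuf message descriptor; `nested` pointing back to
    // the same node models a recursive message type.
    static class FieldNode {
        String name;
        FieldNode nested;
    }

    // Collects column names, cutting recursion off at MAX_DEPTH instead of
    // overflowing the stack on a self-referencing type.
    static List<String> toColumns(FieldNode node, int depth) {
        List<String> columns = new ArrayList<>();
        if (node == null || depth > MAX_DEPTH) {
            return columns;
        }
        columns.add(node.name + "_" + depth);
        columns.addAll(toColumns(node.nested, depth + 1));
        return columns;
    }

    public static void main(String[] args) {
        FieldNode recursive = new FieldNode();
        recursive.name = "recursive_message";
        recursive.nested = recursive; // TestRecursiveMessage refers to itself
        System.out.println(toColumns(recursive, 1).size()); // 15, no StackOverflowError
    }
}
```

The same depth counter can be threaded through beast's real descriptor-to-schema conversion.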
Add support for google.protobuf.Struct fields.
How can we map a message with a dynamic schema to a BigQuery column?
Failed to push records to sink FailureStatus{cause=com.google.cloud.bigquery.BigQueryException: www.googleapis.com, message='null'}
We have cases where the push failed (which is retriable), but we don't have the logs; this needs to be fixed.
The Beast process hangs when it is not able to find GOOGLE_CREDENTIALS.
Here is the stack trace for the same:
java.io.FileNotFoundException: /<some-secret-file> (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at com.gojek.beast.factory.BeastFactory.getBigQueryInstance(BeastFactory.java:96)
at com.gojek.beast.factory.BeastFactory.createBigQuerySink(BeastFactory.java:88)
at com.gojek.beast.factory.BeastFactory.createBqWorkers(BeastFactory.java:81)
Exception in thread "main" java.lang.NullPointerException
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
at com.google.cloud.ServiceOptions$Builder.setCredentials(ServiceOptions.java:207)
at com.gojek.beast.factory.BeastFactory.getBigQueryInstance(BeastFactory.java:103)
at com.gojek.beast.factory.BeastFactory.createBigQuerySink(BeastFactory.java:88)
at com.gojek.beast.factory.BeastFactory.createBqWorkers(BeastFactory.java:81)
at com.gojek.beast.launch.Main.main(Main.java:27)
Expected outcome:
The Beast process should fail fast and be restarted if the configuration is not valid.
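A fail-fast sketch, assuming the process exits with a non-zero code so an orchestrator (e.g. Kubernetes) restarts it; the method names are hypothetical:

```java
public class FailFast {
    // Wraps the hypothetical bootstrap (building BigQuery/Kafka components
    // from configuration) and maps any startup failure to an exit code.
    static int runOrExitCode(Runnable bootstrap) {
        try {
            bootstrap.run();
            return 0;
        } catch (Exception e) {
            // Log and return non-zero so the supervisor restarts the process
            // instead of leaving it hung.
            System.err.println("startup failed: " + e.getMessage());
            return 1;
        }
    }

    public static void main(String[] args) {
        int code = runOrExitCode(() -> {
            throw new IllegalStateException("GOOGLE_CREDENTIALS not found"); // simulated bad config
        });
        System.out.println(code); // a real main would call System.exit(code)
    }
}
```

Exiting on invalid configuration converts the current silent hang into a crash loop that is visible in monitoring.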
Getting the following error while running the runConsumer task:
Exception in thread "main" java.lang.ExceptionInInitializerError
at com.gojek.beast.com.gojek.beast.protomapping.ProtoUpdateListener.createStencilClient(ProtoUpdateListener.java:49)
at com.gojek.beast.com.gojek.beast.protomapping.ProtoUpdateListener.<init>(ProtoUpdateListener.java:34)
at com.gojek.beast.factory.BeastFactory.<init>(BeastFactory.java:76)
at com.gojek.beast.launch.Main.main(Main.java:26)
java.lang.UnsupportedOperationException: Cannot convert 'false' to java.lang.Boolean
at org.aeonbits.owner.Util.unsupported(Util.java:128)
at org.aeonbits.owner.Converters.unsupportedConversion(Converters.java:276)
at org.aeonbits.owner.Converters.access$200(Converters.java:42)
at org.aeonbits.owner.Converters$4.tryConvert(Converters.java:160)
at org.aeonbits.owner.Converters.doConvert(Converters.java:268)
at org.aeonbits.owner.Converters.convert(Converters.java:263)
at org.aeonbits.owner.PropertiesInvocationHandler.resolveProperty(PropertiesInvocationHandler.java:87)
at org.aeonbits.owner.PropertiesInvocationHandler.invoke(PropertiesInvocationHandler.java:67)
at com.sun.proxy.$Proxy5.isStatsdEnabled(Unknown Source)
at com.gojek.beast.stats.Stats.<init>(Stats.java:24)
at com.gojek.beast.stats.Stats.<clinit>(Stats.java:16)
... 4 more
Caused by: java.lang.IllegalArgumentException: false
at java.desktop/com.sun.beans.editors.BooleanEditor.setAsText(BooleanEditor.java:59)
at org.aeonbits.owner.Converters$4.tryConvert(Converters.java:157)
... 11 more