itachi's Introduction

itachi

itachi brings useful functions from modern database management systems to Apache Spark :)

For example, you can import the Postgres extensions and write Spark code that looks just like Postgres.

The functions are implemented as native Spark functions, so they're performant.

In general, only functions that are difficult for the Apache Spark community to maintain in the master branch will be added to this library.

Installation

Fetch the JAR file from Maven.

libraryDependencies += "com.github.yaooqinn" %% "itachi" % "0.1.0"

Here's the Maven link where the JAR files are stored.

itachi requires Spark 3+.

Simple function registration

Access the Postgres / Teradata functions with these commands::

org.apache.itachi.registerPostgresFunctions
org.apache.itachi.registerTeradataFunctions
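
For example, here's a minimal sketch of enabling the Postgres functions in a Spark application. The builder settings are illustrative; the registration call is taken verbatim from above and appears to act on the active session::

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")      // illustrative: run locally
  .appName("itachi-demo")  // hypothetical app name
  .getOrCreate()

// Register the Postgres-compatible functions, as shown above
org.apache.itachi.registerPostgresFunctions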

Simple example

Suppose you have the following table and would like to concatenate the two array columns with the familiar array_cat function from Postgres:

+------+------+
|  arr1|  arr2|
+------+------+
|[1, 2]|    []|
|[1, 2]|[1, 3]|
+------+------+
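
For reference, here's a minimal sketch that builds this table as a temp view (the view name some_data matches the query below; the construction itself is illustrative)::

import spark.implicits._

// Two rows of paired arrays, matching the table above
Seq(
  (Seq(1, 2), Seq.empty[Int]),
  (Seq(1, 2), Seq(1, 3))
).toDF("arr1", "arr2")
  .createOrReplaceTempView("some_data")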

Concatenate the two arrays::

spark
  .sql("select array_cat(arr1, arr2) as both_arrays from some_data")
  .show()

+------------+
| both_arrays|
+------------+
|      [1, 2]|
|[1, 2, 1, 3]|
+------------+

itachi lets you write Spark SQL code that looks just like Postgres SQL!

Spark SQL extensions installation

Configure your Spark applications with spark.sql.extensions, e.g. spark.sql.extensions=org.apache.spark.sql.extra.PostgreSQLExtensions. The available extensions are:

  • org.apache.spark.sql.extra.PostgreSQLExtensions
  • org.apache.spark.sql.extra.TeradataExtensions
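
If you build the session yourself, here's a minimal sketch of setting the extension programmatically, which is equivalent to passing the config on the command line::

import org.apache.spark.sql.SparkSession

// The extension config must be set before the session is created
val spark = SparkSession.builder()
  .config("spark.sql.extensions",
    "org.apache.spark.sql.extra.PostgreSQLExtensions")
  .getOrCreate()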

Databricks Installation

Create an init script in DBFS::

dbutils.fs.mkdirs("dbfs:/databricks/scripts/")

dbutils.fs.put("/databricks/scripts/itachi-install.sh","""
#!/bin/bash
wget --quiet -O /mnt/driver-daemon/jars/itachi_2.12-0.1.0.jar https://repo1.maven.org/maven2/com/github/yaooqinn/itachi_2.12/0.1.0/itachi_2.12-0.1.0.jar""", true)

Before starting the cluster, set the Spark Config::

spark.sql.extensions org.apache.spark.sql.extra.PostgreSQLExtensions

Also point the cluster at the init script's DBFS path before starting it::

dbfs:/databricks/scripts/itachi-install.sh

You can now attach a notebook to the cluster and use Postgres SQL syntax.
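
As a quick smoke test from a notebook (a hypothetical query, just to confirm the functions are registered)::

spark.sql("SELECT array_cat(array(1, 2), array(3)) AS res").show()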

Spark SQL Compliance

This is a Spark SQL extension that supplies add-on or aliased functions beyond the Apache Spark SQL builtin standard functions.

The functions in this library take precedence over the native Spark functions in the event of a name conflict.

Contributing

Functions from other popular modern database systems can be added with your help.

itachi's People

Contributors

mrpowers, yaooqinn

itachi's Issues

Great work & publishing to Maven

Great work on this project. Looks really useful. I will help you get users ;)

Are you publishing this to Maven yet?

Once we get this lib in Maven, I can submit a README PR with an installation / quick start section that grabs the reader's attention and motivates them to start using the lib.

It might be cool to give this lib a single-word name - something like kyuubi would be pretty sweet ;)

Thanks for all your open source Spark contributions. You do a lot to help the community!

Error messages on sbt test (main branch)

Here's the error message I'm seeing:

[info] TeradataExtensionsTest:
[info] - char2hexint
[info] - EDITDISTANCE
[info] - index
21/05/09 22:11:31 ERROR Executor: Exception in task 0.0 in stage 4.0 (TID 4)
java.lang.RuntimeException: '(1 < 0)' is not true!
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
21/05/09 22:11:31 ERROR TaskSetManager: Task 0 in stage 4.0 failed 1 times; aborting job
21/05/09 22:11:31 ERROR Executor: Exception in task 0.0 in stage 5.0 (TID 5)
java.lang.RuntimeException: '(1 < 0)' is not true!
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:755)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
21/05/09 22:11:31 ERROR TaskSetManager: Task 0 in stage 5.0 failed 1 times; aborting job

Using the library in Databricks environment

I did a bit of experimentation and it looks like it's tricky to use this lib in Databricks.

Any way we can provide an interface that doesn't require the user to set a configuration option?

Perhaps we can let the user run an import statement like import org.apache.spark.sql.itachi.postgres._ to get all the functions? The function registration process is still a little fuzzy for me. Let me know if you think this would be possible!

Rename default branch to main

Calling the default branch "main" is the current thing to do.

I've switched over all my other projects and it wasn't too bad, see this guide.

I'm trying to get all the projects I work on switched so I don't need to remember what the main branch is called for every project.
