
Hive SerDe

This project includes tools to build a Hive SerDe to index Hive tables to Solr.

Features

  • Index Hive table data to Solr.

  • Read Solr index data to a Hive table.

  • Kerberos support for securing communication between Hive and Solr.

  • As of v2.2.4 of the SerDe, integration with Lucidworks Fusion is supported.

    • Fusion’s index pipelines can be used to index data to Fusion.

    • Fusion’s query pipelines can be used to query Fusion’s Solr instance for data to insert into a Hive table.

Build the SerDe Jars

This project has a dependency on the solr-hadoop-common submodule (contained in a separate GitHub repository, https://github.com/lucidworks/solr-hadoop-common). This submodule must be initialized before building the SerDe .jar.

To initialize the submodule, pull this repo, then:

git submodule init

Once the submodule has been initialized, the command git submodule update will fetch all the data from that project and check out the appropriate commit listed in the superproject. You must initialize and update the submodule before attempting to build the SerDe jar.
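For example, from the root of the cloned repository, the full sequence is:

git submodule init
git submodule update

The single command git submodule update --init is an equivalent one-step shortcut.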

The build uses Gradle, but you do not need to have Gradle installed before building; the included Gradle wrapper (./gradlew) downloads the required version automatically.

To build the jar files, run this command:

./gradlew clean shadowJar --info

This produces the two .jar files listed below: one that can be used with Hive 0.12, 0.14, and 0.15, and another for use with Hive 0.13 only.

  • solr-hive-serde/build/libs/solr-hive-serde-2.2.6.jar

  • solr-hive-serde-0.13/build/libs/solr-hive-serde-0.13-2.2.6.jar
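After a successful build, you can confirm that both jars exist with a quick listing (paths as shown above; the version number will match the version you built):

ls solr-hive-serde/build/libs/ solr-hive-serde-0.13/build/libs/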

Troubleshooting Clone Issues

If SSH access to GitHub is not configured, the following error will be thrown when cloning the submodule:

    Cloning into 'solr-hadoop-common'...
    Permission denied (publickey).
    fatal: Could not read from remote repository.

    Please make sure you have the correct access rights
    and the repository exists.
    fatal: clone of 'git@github.com:LucidWorks/solr-hadoop-common.git' into submodule path 'solr-hadoop-common' failed

To fix this error, follow GitHub's generating an SSH key tutorial.
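As a rough sketch (these are standard OpenSSH and GitHub commands, not specific to this project), generating a key and verifying SSH access looks like this:

ssh-keygen -t ed25519 -C "your_email@example.com"
# add the new public key to your GitHub account, then test the connection:
ssh -T git@github.com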

Using the SerDe

Install the SerDe Jar to Hive

In order for Hive to work with Solr, the Hive SerDe jar must be added as a plugin to Hive.

From a Hive prompt, use the ADD JAR command and reference the path and filename of the SerDe jar for your Hive version.

   hive> ADD JAR solr-hive-serde-2.2.6.jar;

The ADD JAR statement can also be issued in the same Hive session or script that creates the table, as in the full example later in this document.

Indexing Data with a Hive External Table

Indexing data to Solr or Fusion requires creating a Hive external table. An external table allows the data in the table to be read and written by another system or application outside of Hive.

Indexing Data to Solr

For integration with Solr, the external table allows Hive to write table data to Solr and to read data from a Solr index into Hive.

To create an external table for Solr, you can use a command similar to below. The properties available are described after the example.

hive> CREATE EXTERNAL TABLE solr (id string, field1 string, field2 int)
      STORED BY 'com.lucidworks.hadoop.hive.LWStorageHandler'
      LOCATION '/tmp/solr'
      TBLPROPERTIES('solr.server.url' = 'http://localhost:8888/solr',
                    'solr.collection' = 'collection1',
                    'solr.query' = '*:*');

In this example, we have created an external table named "solr", and defined a custom storage handler (STORED BY 'com.lucidworks.hadoop.hive.LWStorageHandler').

The LOCATION indicates the location in HDFS where the table data will be stored. In this example, we have chosen to use /tmp/solr.

In the section TBLPROPERTIES, we define several properties for Solr so the data can be indexed to the right Solr installation and collection:

solr.zkhost

The location of the ZooKeeper quorum, used when running Solr in SolrCloud mode. If this property is set along with the solr.server.url property, the solr.server.url property will take precedence.

solr.server.url

The location of the Solr instance, used when not running Solr in SolrCloud mode. If this property is set along with the solr.zkhost property, this property will take precedence.

solr.collection

The Solr collection for this table. If not defined, an exception will be thrown.

solr.query

The specific Solr query to execute to read this table. If not defined, a default of *:* will be used. This property is not needed when loading data to a table, but is needed when defining the table so Hive can later read the table.

lww.commit.on.close

If true, inserts will be automatically committed when the connection is closed. True is the default.

lww.jaas.file

Used only when indexing to or reading from a Solr cluster secured with Kerberos.

This property defines the path to a JAAS file that contains a service principal and keytab location for a user who is authorized to read from and write to Solr and Hive.

The JAAS configuration file must be copied to the same path on every node where a Node Manager is running (i.e., every node where map/reduce tasks are executed). Here is a sample section of a JAAS file:

Client { --(1)
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/data/solr-indexer.keytab" --(2)
  storeKey=true
  useTicketCache=true
  debug=true
  principal="solr-indexer@EXAMPLE.COM"; --(3)
};
  1. The name of this section of the JAAS file. This name will be used with the lww.jaas.appname parameter.

  2. The location of the keytab file.

  3. The service principal name. This should be a different principal than the one used for Solr, but must have access to both Solr and Hive.

lww.jaas.appname

Used only when indexing to or reading from a Solr cluster secured with Kerberos.

This property provides the name of the section in the JAAS file that includes the correct service principal and keytab path.
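Putting the Kerberos properties together, a table definition for a Kerberized Solr cluster might look like the following sketch; it reuses the ZooKeeper connection string, JAAS file path, and JAAS section name from the examples in this document:

hive> CREATE EXTERNAL TABLE solr (id string, field1 string, field2 int)
      STORED BY 'com.lucidworks.hadoop.hive.LWStorageHandler'
      LOCATION '/tmp/solr'
      TBLPROPERTIES('solr.zkhost' = 'zknode1:2181,zknode2:2181,zknode3:2181/solr',
                    'solr.collection' = 'collection1',
                    'solr.query' = '*:*',
                    'lww.jaas.file' = '/data/jaas-client.conf',
                    'lww.jaas.appname' = 'Client');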

If the table needs to be dropped at a later time, you can use the DROP TABLE command in Hive. This will remove the metadata stored in the table in Hive, but will not modify the underlying data (in this case, the Solr index).
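For example, to drop the external table created above (the Solr index itself is left intact):

hive> DROP TABLE solr;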

Indexing Data to Fusion

If you use Lucidworks Fusion, you can index data from Hive to Solr via Fusion’s index pipelines. These pipelines allow you several options for further transforming your data.

Tip

If you are using Fusion, you already have the Hive SerDe in Fusion’s ./apps/connectors/resources/lucid.hadoop/jobs directory. The SerDe jar that supports Fusion is v2.2.4 or higher. This was released with Fusion 3.0.

A 2.2.4 or higher jar built from this repository will also work with Fusion 2.4.x releases.

This is an example Hive command to create an external table to index documents in Fusion and to query the table later.

hive> CREATE EXTERNAL TABLE fusion (id string, field1 string, field2 int)
      STORED BY 'com.lucidworks.hadoop.hive.FusionStorageHandler'
      LOCATION '/tmp/fusion'
      TBLPROPERTIES('fusion.endpoints' = 'http://localhost:8764/api/apollo/index-pipelines/<pipeline>/collections/<collection>/index',
                    'fusion.realm' = 'KERBEROS',
                    'fusion.user' = 'fusion-indexer@EXAMPLE.COM',
                    'java.security.auth.login.config' = '/path/to/JAAS/file',
                    'fusion.jaas.appname' = 'FusionClient',
                    'fusion.query.endpoints' = 'http://localhost:8764/api/apollo/query-pipelines/<pipeline>/collections/<collection>/<handler>/?',
                    'fusion.query' = '*:*');

In this example, we have created an external table named "fusion", and defined a custom storage handler (STORED BY 'com.lucidworks.hadoop.hive.FusionStorageHandler').

The LOCATION indicates the location in HDFS where the table data will be stored. In this example, we have chosen to use /tmp/fusion.

In the section TBLPROPERTIES, we define several properties for Fusion so the data can be indexed to the right Fusion installation and collection:

fusion.endpoints

The full URL to the index pipeline in Fusion. The URL should include the pipeline name and the collection the data will be indexed to.

fusion.realm

This is used with fusion.user and fusion.password to authenticate to Fusion for indexing data. Two options are supported, KERBEROS or NATIVE.

Kerberos authentication is supported with the additional definition of a JAAS file. The properties java.security.auth.login.config and fusion.jaas.appname are used to define the location of the JAAS file and the section of the file to use.

Native authentication uses a Fusion-defined username and password. This user must exist in Fusion, and have the proper permissions to index documents.

fusion.user

The Fusion username or Kerberos principal to use for authentication to Fusion. If a Fusion username is used ('fusion.realm' = 'NATIVE'), the fusion.password must also be supplied.

fusion.password

This property is not shown in the example above. It is the password for the fusion.user when fusion.realm is NATIVE (see the sketch at the end of this property list).

java.security.auth.login.config

This property defines the path to a JAAS file that contains a service principal and keytab location for a user who is authorized to read from and write to Fusion and Hive.

The JAAS configuration file must be copied to the same path on every node where a Node Manager is running (i.e., every node where map/reduce tasks are executed). Here is a sample section of a JAAS file:

Client { --(1)
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/data/fusion-indexer.keytab" --(2)
  storeKey=true
  useTicketCache=true
  debug=true
  principal="fusion-indexer@EXAMPLE.COM"; --(3)
};
  1. The name of this section of the JAAS file. This name will be used with the fusion.jaas.appname parameter.

  2. The location of the keytab file.

  3. The service principal name. This should be a different principal than the one used for Fusion, but must have access to both Fusion and Hive. This name is used with the fusion.user parameter described above.

fusion.jaas.appname

Used only when indexing to or reading from Fusion when it is secured with Kerberos.

This property provides the name of the section in the JAAS file that includes the correct service principal and keytab path.

fusion.query.endpoints

The full URL to a query pipeline in Fusion. The URL should include the pipeline name and the collection the data will be read from. You should also specify the request handler to be used.

If you do not intend to query your Fusion data from Hive, you can skip this parameter.

fusion.query

The query to run in Fusion to select records to be read into Hive. This is *:* by default, which selects all records in the index.

If you do not intend to query your Fusion data from Hive, you can skip this parameter.
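For comparison with the Kerberos example above, a sketch of a table using NATIVE authentication replaces the Kerberos-related properties with a Fusion username and password (the username and password shown here are placeholders):

hive> CREATE EXTERNAL TABLE fusion (id string, field1 string, field2 int)
      STORED BY 'com.lucidworks.hadoop.hive.FusionStorageHandler'
      LOCATION '/tmp/fusion'
      TBLPROPERTIES('fusion.endpoints' = 'http://localhost:8764/api/apollo/index-pipelines/<pipeline>/collections/<collection>/index',
                    'fusion.realm' = 'NATIVE',
                    'fusion.user' = 'some-fusion-user',
                    'fusion.password' = 'some-password',
                    'fusion.query.endpoints' = 'http://localhost:8764/api/apollo/query-pipelines/<pipeline>/collections/<collection>/<handler>',
                    'fusion.query' = '*:*');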

Query and Insert Data to Hive

Once the table is configured, any syntactically correct Hive query will be able to query the index.

For example, to select three fields named "id", "field1", and "field2" from the "solr" table, you would use a query such as:

hive> SELECT id, field1, field2 FROM solr;

Replace the table name as appropriate to use this example with your data.

To join data from tables, you can make a request such as:

hive> SELECT s.id, s.field1, s.field2 FROM solr s
      JOIN sometable t
      ON (s.id = t.id);

And finally, to insert data to a table, simply use the Solr table as the target for the Hive INSERT statement, such as:

hive> INSERT INTO solr
      SELECT id, field1, field2 FROM sometable;

Example Indexing Hive to Solr

Solr includes a small number of sample documents for use when getting started. One of these is a CSV file containing book metadata. This file is found in your Solr installation, at $SOLR_HOME/example/exampledocs/books.csv.

Using the sample books.csv file, we can see a detailed example of creating a table, loading data to it, and indexing that data to Solr.

CREATE TABLE books (id STRING, cat STRING, title STRING, price FLOAT, in_stock BOOLEAN, author STRING, series STRING, seq INT, genre STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','; --(1)

LOAD DATA LOCAL INPATH '/solr/example/exampledocs/books.csv' OVERWRITE INTO TABLE books; --(2)

ADD JAR solr-hive-serde-2.2.6.jar; --(3)

CREATE EXTERNAL TABLE solr (id STRING, cat_s STRING, title_s STRING, price_f FLOAT, in_stock_b BOOLEAN, author_s STRING, series_s STRING, seq_i INT, genre_s STRING) --(4)
     STORED BY 'com.lucidworks.hadoop.hive.LWStorageHandler' --(5)
     LOCATION '/tmp/solr' --(6)
     TBLPROPERTIES('solr.zkhost' = 'zknode1:2181,zknode2:2181,zknode3:2181/solr',
                   'solr.collection' = 'gettingstarted',
                   'solr.query' = '*:*', --(7)
                   'lww.jaas.file' = '/data/jaas-client.conf'); --(8)


INSERT OVERWRITE TABLE solr SELECT b.* FROM books b;
  1. Define the table books, and provide the field names and field types that will make up the table.

  2. Load the data from the books.csv file.

  3. Add the solr-hive-serde-2.2.6.jar file to Hive. The version number in the jar name should match the version you built. If you are using Hive 0.13, you must instead use the jar built specifically for Hive 0.13.

  4. Create an external table named solr, and provide the field names and field types that will make up the table. These will be the same field names as in your local Hive table, so we can index all of the same data to Solr.

  5. Define the custom storage handler provided by the solr-hive-serde-2.2.6.jar.

  6. Define storage location in HDFS.

  7. The query to run in Solr to read records from Solr for use in Hive.

  8. Define the location of a JAAS configuration file that will be used to authenticate to the Kerberized Solr cluster. Together with the other TBLPROPERTIES, this defines the location of ZooKeeper for the SolrCloud cluster, the collection in Solr to index the data to, and the query to use when reading the table.
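Once the INSERT completes, the indexed data can be read back through the same external table; a quick check using the fields defined above:

hive> SELECT id, title_s, author_s FROM solr LIMIT 10;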

How to contribute

  1. Fork this repo, i.e. <username|organization>/hive-solr, following the fork a repo tutorial. Then, clone the forked repo on your local machine:

    $ git clone https://github.com/<username|organization>/hive-solr.git
  2. Configure remotes with the configuring remotes tutorial.

  3. Create a new branch:

    $ git checkout -b new_branch
    $ git push origin new_branch

    Use the creating branches tutorial to create the branch from GitHub UI if you prefer.

  4. Develop on the new_branch branch only; do not merge new_branch into your master. Commit changes to new_branch as often as you like:

    $ git add <filename>
    $ git commit -m 'commit message'
  5. Push your changes to GitHub.

    $ git push origin new_branch
  6. Repeat the commit & push steps until your development is complete.

  7. Before submitting a pull request, fetch upstream changes that were done by other contributors:

    $ git fetch upstream
  8. And update master locally:

    $ git checkout master
    $ git pull upstream master
  9. Merge master branch into new_branch in order to avoid conflicts:

    $ git checkout new_branch
    $ git merge master
  10. If conflicts happen, use the resolving merge conflicts tutorial to fix them.

  11. Push the merged changes to the new_branch branch:

    $ git push origin new_branch
  12. Add JUnit tests, as appropriate, to test your changes.

  13. When all testing is done, use the create a pull request tutorial to submit your change to the repo.

Note

Please be sure that your pull request sends only your changes, and no others. Check it using the command:

git diff new_branch upstream/master
