Kite SDK Examples

The Kite Examples project provides examples of how to use the Kite SDK.

Each example is a standalone Maven module with associated documentation.

Basic Examples

  • dataset is the closest thing to a "Hello, World" example for Kite. It shows how to create datasets and perform streaming writes and reads over them; a rough command-line sketch of the same flow follows this list.
  • dataset-hbase shows how to store entities in HBase using the RandomAccessDataset API.
  • dataset-staging shows how to use two datasets to manage Parquet-formatted data.
  • logging is an example of logging events from a command-line program to Hadoop via Flume, using log4j as the logging API.
  • logging-webapp is like logging, but the logging source is a webapp.
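
The dataset example itself uses the Kite Java API. As a rough command-line approximation of the same create/write/read flow, the kite-dataset CLI (if installed) can be used. This is only a sketch, not the example's actual code: the users.csv file, the dataset name, and the exact flags are assumptions, so check kite-dataset help before relying on them.

# users.csv is a hypothetical input file with a header row, e.g. "id,name,email"
kite-dataset csv-schema users.csv --class User -o user.avsc   # infer an Avro schema from the CSV
kite-dataset create users --schema user.avsc                  # create a dataset named "users"
kite-dataset csv-import users.csv users                       # write the records into the dataset
kite-dataset show users                                       # read a few records back
kite-dataset delete users                                     # clean up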

Advanced Examples

  • demo is a full end-to-end example of a webapp that logs events using Flume and performs session analysis using Crunch and Hive.

Getting Started

The easiest way to run the examples is on the Cloudera QuickStart VM, which has all the necessary Hadoop services pre-installed, configured, and running locally. See the notes below for any initial setup steps you should take.

The current examples run on version 5.1.0 of the QuickStart VM.

Check out the latest branch of this repository in the VM:

git clone git://github.com/kite-sdk/kite-examples.git
cd kite-examples

Then choose the example you want to try and refer to the README in the relevant subdirectory.

Setting up the QuickStart VM

There are two ways to run the examples with the QuickStart VM:

  1. Logged in to the VM guest (username and password are both cloudera).
  2. From your host computer.

The advantage of the first approach is that you don't need to install anything extra on your host computer, such as Java or Maven, and there are fewer setup steps.

For either approach, you need to make the following changes while logged into the VM:

  • Sync the system clock. For some of the examples it's important that the host and guest times are in sync. To synchronize the guest, log in and type sudo ntpdate pool.ntp.org.
  • Configure the NameNode to listen on all interfaces. In order to access the cluster from the host computer, the NameNode must be configured to listen on all network interfaces. This is done by setting the dfs.namenode.rpc-bind-host property in /etc/hadoop/conf/hdfs-site.xml:
  <property>
    <name>dfs.namenode.rpc-bind-host</name>
    <value>0.0.0.0</value>
  </property>
  • Configure the History Server to listen on all interfaces. In order to access the cluster from the host computer, the History Server must be configured to listen on all network interfaces. This is done by setting the mapreduce.jobhistory.address property in /etc/hadoop/conf/mapred-site.xml:
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>0.0.0.0:10020</value>
  </property>
  • Configure HBase to listen on all interfaces. In order to access the cluster from the host computer, HBase must be configured to listen on all network interfaces. This is done by setting the hbase.master.ipc.address and hbase.regionserver.ipc.address properties in /etc/hbase/conf/hbase-site.xml:
  <property>
    <name>hbase.master.ipc.address</name>
    <value>0.0.0.0</value>
  </property>

  <property>
    <name>hbase.regionserver.ipc.address</name>
    <value>0.0.0.0</value>
  </property>
  • Restart the VM. Restart with sudo shutdown -r now; a quick way to verify the settings after the reboot is sketched below.
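
After the reboot, a rough way to confirm the changes took effect is to check the effective configuration and the listening sockets from inside the VM. This is only a sanity-check sketch: 10020 comes from the History Server setting above, while 8020 (NameNode RPC) and 60000/60020 (HBase master/regionserver) are the usual CDH defaults and may differ on your setup.

hdfs getconf -confKey dfs.namenode.rpc-bind-host           # should print 0.0.0.0
sudo netstat -tlnp | grep -E ':(8020|10020|60000|60020) '  # services should be bound to 0.0.0.0, not 127.0.0.1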

The second approach is preferable when you want to use tools from your own development environment (browser, IDE, command line). However, there are a few extra steps you need to take to configure the QuickStart VM, listed below:

  • Add a host entry for quickstart.cloudera. Add or edit a line like the following in /etc/hosts on the host machine:
127.0.0.1       localhost.localdomain   localhost       quickstart.cloudera
  • Enable port forwarding. Most of the ports that need to be forwarded are pre-configured on the QuickStart VM, but there are a few that we need to add. For VirtualBox, open the Settings dialog for the VM, select the Network tab, and click the Port Forwarding button. Map the following ports; in each case the host port and the guest port should be the same. Also, your VM should not be running when you make these changes.
    • 8032 (YARN ResourceManager)
    • 10020 (MapReduce JobHistoryServer)

If you have VBoxManage installed on your host machine, you can do this from the command line as well (a quick connectivity check is sketched after the script). In bash, this would look something like:

# Set VM_NAME to the name of your VM as it appears in VirtualBox
VM_NAME="QuickStart VM"
PORTS="8032 10020"
for port in $PORTS; do
  VBoxManage modifyvm "$VM_NAME" --natpf1 "Rule $port,tcp,,$port,,$port"
done
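
Once the VM is running again, a quick connectivity check from the host can confirm that the host entry and the forwarded ports are working. This is a rough sketch using standard networking tools, not part of the official setup:

ping -c 1 quickstart.cloudera     # should resolve to 127.0.0.1 via the /etc/hosts entry above
nc -zv quickstart.cloudera 8032   # YARN ResourceManager, forwarded above
nc -zv quickstart.cloudera 10020  # MapReduce JobHistoryServer, forwarded above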

Running integration tests

Some of the examples include integration tests. You can run them all with the following command:

for module in */; do
  (cd "$module" && mvn clean verify) || break   # stop at the first failing module
done
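
To run the integration tests for a single module instead of every example, the same Maven invocation can be run from that module's directory (module name taken from the list above):

(cd dataset && mvn clean verify)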

Troubleshooting

Working with the VM

  • What are the usernames/passwords for the VM?

    • Cloudera manager: cloudera/cloudera
    • HUE: cloudera/cloudera
    • Login: cloudera/cloudera
  • I can't find the file in VirtualBox (or VMWare)!

    • You probably need to unpack it: on Windows, install 7zip and extract the VM files from the .7z file; on Linux or Mac, cd to where you copied the file and run 7zr e cloudera-quickstart-vm-4.3.0-kite-vbox-4.4.0.7z
    • You should be able to import the extracted files to VirtualBox or VMWare
  • How do I open a .ovf file?

    • Install and open VirtualBox on your computer
    • Under the menu "File", select "Import..."
    • Navigate to where you unpacked the .ovf file and select it
  • What is a .vmdk file?

    • The .vmdk file is the virtual machine disk image that accompanies a .ovf file, which is a portable VM description.
  • How do I open a .vbox file?

    • Install and open VirtualBox on your computer
    • Under the menu "Machine", select "Add..."
    • Navigate to where you unpacked the .vbox file and select it
  • How do I fix "VTx" errors?

    • Reboot your computer and enter BIOS
    • Find the "Virtualization" settings, usually under "Security" and enable all of the virtualization options
  • How do I get my mouse back?

    • If your mouse/keyboard is stuck in the VM (captured), you can usually release it by pressing the right CTRL key. If you don't have one (or that didn't work), the current release key is shown in the lower-right corner of the VirtualBox window.
  • Other problems

    • Using VirtualBox? Try using VMWare.
    • Using VMWare? Try using VirtualBox.

kite-examples's Issues

FileNotFoundException when morphline files are in src/main/resources

Hi everybody, I'm trying to write some unit tests for my morphline files and I'm facing a problem related to the location of the morphline files.

The files are under the "src/main/resources" folder, so after compilation they end up in the "target/classes" folder, but the AbstractMorphlineTest class looks for them in "target/test-classes". I understand that the obvious solution is to move my files to "src/test/resources", but I wonder if there is any way to override this setting.

Thanks in advance

P.S. I have also looked for AbstractMorphlineTest in order to make some changes and submit a PR, but I'm not able to find it in any repo.

Question about kite.repo.uri

Hi everyone

I want to adapt the JSON example provided, but I got this error:

15/05/20 08:09:47 INFO conf.FlumeConfiguration: Processing:UFOKiteDS
15/05/20 08:09:47 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [UFOAgent]
15/05/20 08:09:47 INFO node.AbstractConfigurationProvider: Creating channels
15/05/20 08:09:47 INFO channel.DefaultChannelFactory: Creating instance of channel archivo type file
15/05/20 08:09:47 INFO node.AbstractConfigurationProvider: Created channel archivo
15/05/20 08:09:47 INFO source.DefaultSourceFactory: Creating instance of source UFODir, type spooldir
15/05/20 08:09:47 INFO interceptor.StaticInterceptor: Creating StaticInterceptor: preserveExisting=true,key=flume.avro.schema.url,value=file:/home/itam/schemas/ufos.avsc
15/05/20 08:09:47 INFO api.MorphlineContext: Importing commands
15/05/20 08:09:52 INFO api.MorphlineContext: Done importing commands
15/05/20 08:09:52 INFO sink.DefaultSinkFactory: Creating instance of sink: UFOKiteDS, type: org.apache.flume.sink.kite.DatasetSink
15/05/20 08:09:52 ERROR node.AbstractConfigurationProvider: Sink UFOKiteDS has been removed due to an error during configuration
java.lang.IllegalArgumentException
        at org.kitesdk.shaded.com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
        at org.kitesdk.data.URIBuilder.<init>(URIBuilder.java:106)
        at org.kitesdk.data.URIBuilder.<init>(URIBuilder.java:90)
        at org.apache.flume.sink.kite.DatasetSink.configure(DatasetSink.java:188)
        at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
        at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:413)
        at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:98)
        at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
15/05/20 08:09:52 INFO node.AbstractConfigurationProvider: Channel archivo connected to [UFODir]
15/05/20 08:09:52 INFO node.Application: Starting new configuration:{ sourceRunners:{UFODir=EventDrivenSourceRunner: { source:Spool Directory source UFODir: { spoolDir: /opt/ufos } }} sinkRunners:{} channels:{archivo=FileChannel archivo { dataDirs: [/opt/ufos/log/data] }} }
15/05/20 08:09:52 INFO node.Application: Starting Channel archivo
15/05/20 08:09:52 INFO file.FileChannel: Starting FileChannel archivo { dataDirs: [/opt/ufos/log/data] }...
15/05/20 08:09:52 INFO file.Log: Encryption is not enabled

I ran the Flume agent with:

flume-ng agent -n UFOAgent -Xmx100m --conf ingestion -f ingestion/spooldir_example.conf

The spooldir_example.conf is


# Components
UFOAgent.sources = UFODir
UFOAgent.channels = archivo
UFOAgent.sinks = UFOKiteDS

# Channel
UFOAgent.channels.archivo.type = file
UFOAgent.channels.archivo.checkpointDir = /opt/ufos/log/checkpoint/
UFOAgent.channels.archivo.dataDirs = /opt/ufos/log/data/

# Source
UFOAgent.sources.UFODir.type = spooldir
UFOAgent.sources.UFODir.channels = archivo
UFOAgent.sources.UFODir.spoolDir = /opt/ufos
UFOAgent.sources.UFODir.fileHeader = true
UFOAgent.sources.UFODir.deletePolicy = immediate

# Interceptor
UFOAgent.sources.UFODir.interceptors = attach-schema morphline

UFOAgent.sources.UFODir.interceptors.attach-schema.type = static
UFOAgent.sources.UFODir.interceptors.attach-schema.key = flume.avro.schema.url
UFOAgent.sources.UFODir.interceptors.attach-schema.value = file:/home/itam/schemas/ufos.avsc

UFOAgent.sources.UFODir.interceptors.morphline.type = org.apache.flume.sink.solr.morphline.MorphlineInterceptor$Builder
UFOAgent.sources.UFODir.interceptors.morphline.morphlineFile = /home/itam/ingestion/morphline.conf
UFOAgent.sources.UFODir.interceptors.morphline.morphlineId = convertUFOFileToAvro


# Sink
UFOAgent.sinks.UFOKiteDS.type = org.apache.flume.sink.kite.DatasetSink
UFOAgent.sinks.UFOKiteDS.channel = archivo
UFOAgent.sinks.UFOKiteDS.kite.repo.uri = dataset:hive
UFOAgent.sinks.UFOKiteDS.kite.dataset.name = ufos
UFOAgent.sinks.UFOKiteDS.kite.batchSize = 10

I created the dataset as follows:

kite-dataset create ufos --schema /home/itam/schemas/ufos.avsc --format avro

Finally, the morphline.conf is

morphlines: [
  {
    id: convertUFOFileToAvro
    importCommands: ["com.cloudera.**", "org.kitesdk.**"]
    commands: [
      {
        tryRules {
          catchExceptions : false
          throwExceptionIfAllRulesFailed : true
          rules : [
            # next rule of tryRules cmd:
            {
              commands : [
                {
                  readCSV: {
                    separator : "\t"
                    columns : [Timestamp, City, State, Shape, Duration, Summary, Posted]
                    trim: true
                    charset : UTF-8
                    quoteChar : "\""
                  }
                }
                {
                  toAvro {
                    schemaFile: /home/itam/schemas/ufos.avsc
                  }
                }
                {
                  writeAvroToByteArray: {
                    format: containerlessBinary
                  }
                }
              ]
            }
            # next rule of tryRules cmd:
            {
              commands : [
                { dropRecord {} }
              ]
            }
          ]
        }
      }
      { logTrace { format : "output record: {}", args : ["@{}"] } }
    ]
  }
]

What am I doing wrong?

Cannot build datasets with Maven

Hi,
I'm trying to run the demos, but when I try to create the datasets with the mvn command, I get an error:
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] demo ............................................... SUCCESS [ 5.167 s]
[INFO] demo-core .......................................... SKIPPED
[INFO] demo-crunch ........................................ SKIPPED
[INFO] demo-logging-webapp ................................ SKIPPED
[INFO] demo-reports-webapp ................................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5.426 s
[INFO] Finished at: 2016-06-02T11:06:06+02:00
[INFO] Final Memory: 35M/1370M
[INFO] ------------------------------------------------------------------------
Exception in thread "Thread-2" java.lang.NoClassDefFoundError: org/apache/hadoop/util/ShutdownHookManager$2
        at org.apache.hadoop.util.ShutdownHookManager.getShutdownHooksInOrder(ShutdownHookManager.java:124)
        at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:52)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.util.ShutdownHookManager$2
        at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass(SelfFirstStrategy.java:50)
        at org.codehaus.plexus.classworlds.realm.ClassRealm.unsynchronizedLoadClass(ClassRealm.java:271)
        at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:247)
        at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass(ClassRealm.java:239)
        ... 2 more

The datasets are not created.

We don't use the Cloudera VM; we use Cloudera Enterprise in cluster mode.

Thanks in advance.
Martin
