
reference-apps's Introduction

Databricks Reference Apps

At Databricks, we are developing a set of reference applications that demonstrate how to use Apache Spark. This book/repo contains the reference applications.

The reference applications are aimed at those who want to learn Spark and learn best by example. Browse the applications, see which features of the reference applications are similar to the features you want to build, and refashion the code samples for your needs. This is also meant to be a practical guide for using Spark in your systems, so the applications mention other technologies that are compatible with Spark - such as which file systems to use for storing your massive data sets.

  • Log Analysis Application - The log analysis reference application contains a series of tutorials for learning Spark by example, as well as a final application that can be used to monitor Apache access logs. The examples cover Spark in batch mode, Spark SQL, and Spark Streaming.

  • Twitter Streaming Language Classifier - This application demonstrates how to fetch and train a language classifier for Tweets using Spark MLlib. Then Spark Streaming is used to call the trained classifier and filter out live tweets that match a specified cluster. For directions on how to build and run this app - see twitter_classifier/scala/README.md.

  • Weather TimeSeries Data Application with Cassandra - This reference application works with weather data recorded for a given weather station at a given point in time. The app demonstrates several strategies for leveraging Spark Streaming integrated with Apache Cassandra and Apache Kafka for fast, fault-tolerant, streaming computations with time series data.

These reference apps are covered by the license terms described here.

reference-apps's People

Contributors

bllchmbrs, cheffpj, holdenk, jkbradley, jshipper, lenards, mengxr, miguelperalvo, mslinn, nchammas, otobrglez, patmcdonough, rohanbhanderi, tashoyan, vidaha


reference-apps's Issues

Exception in thread "main" java.lang.NoClassDefFoundError:org/apache/spark/mllib/feature/HashingTF

I am trying to run Collect.scala from the Twitter Streaming Language Classifier and keep getting the error below. I think I followed all the instructions, but I am still stuck with this error. I would really appreciate it if someone could help.

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/mllib/feature/HashingTF
at com.databricks.apps.twitter_classifier.Utils$.<init>(Utils.scala:12)
at com.databricks.apps.twitter_classifier.Utils$.<clinit>(Utils.scala)
at com.databricks.apps.twitter_classifier.Collect$.main(Collect.scala:26)
at com.databricks.apps.twitter_classifier.Collect.main(Collect.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:292)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:55)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.mllib.feature.HashingTF
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 11 more

SparkSQL count

In the twitter classifier's ExamineAndTrain, the SQL query that counts users based on lang returns 0.

SparkException: Task not serializable in logs_analyzer/chapter1/java6

When running logs_analyzer/chapter1/java6/src/main/java/com/databricks/apps/logs/chapter1/LogAnalyzer.java as per the instructions, I get the error "org.apache.spark.SparkException: Task not serializable" (at com.databricks.apps.logs.chapter1.LogAnalyzer.main(LogAnalyzer.java:51)). This is a known issue, mentioned in Databricks' Spark-KnowledgeBase/Troubleshooting. The Scala, Python, and Java 8 versions of this example work fine.

After doing some digging, I think the problem is with the Functions.LONG_NATURAL_ORDER_COMPARATOR function object in the Functions.java file. It is an instance of an anonymous Comparator, and Comparator doesn't implement the Serializable interface. When we pass Functions.LONG_NATURAL_ORDER_COMPARATOR to the different actions of LogAnalyzer.java, Spark tries to serialize the object, but since it isn't Serializable, it can't.

There are different solutions to this problem, but we can create a new Serializable Comparator class, instantiate it, and reference that instance from Functions.LONG_NATURAL_ORDER_COMPARATOR, instead of instantiating Comparator directly. I've tested this and it works; I'll propose the fix soon.
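
A minimal sketch of the fix described above (the class name is illustrative, not the actual patch):

import java.io.Serializable;
import java.util.Comparator;

// A Comparator that also implements Serializable, so Spark's closure serializer
// can ship it to the executors without throwing NotSerializableException.
public class SerializableLongComparator implements Comparator<Long>, Serializable {
    @Override
    public int compare(Long a, Long b) {
        return a.compareTo(b);
    }
}

// Functions.LONG_NATURAL_ORDER_COMPARATOR would then reference an instance of
// this class instead of an anonymous, non-serializable Comparator:
//   public static final Comparator<Long> LONG_NATURAL_ORDER_COMPARATOR =
//       new SerializableLongComparator();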

This is the complete original exception:

Exception in thread "main" org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:1242)
    at org.apache.spark.rdd.RDD.reduce(RDD.scala:844)
    at org.apache.spark.rdd.RDD.min(RDD.scala:1152)
    at org.apache.spark.api.java.JavaRDDLike$class.min(JavaRDDLike.scala:549)
    at org.apache.spark.api.java.JavaRDD.min(JavaRDD.scala:32)
    at com.databricks.apps.logs.chapter1.LogAnalyzer.main(LogAnalyzer.java:51)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.NotSerializableException: com.databricks.apps.logs.Functions$2
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:42)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:73)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:164)
    ... 14 more

"java.lang.RuntimeException: Table Not Found: com.databricks.app.logs" in LogAnalyzerSQL.scala

When running LogAnalyzerSQL.scala, a "java.lang.RuntimeException: Table Not Found: com.databricks.app.logs" is produced.

I think the problem is in line 28:

accessLogs.registerAsTable("com/databricks/app/logs")

When I changed it to:

accessLogs.registerAsTable("com.databricks.app.logs")

It worked. I expect to produce a pull request later.

This is the complete exception:

Exception in thread "main" java.lang.RuntimeException: Table Not Found: com.databricks.app.logs
    at scala.sys.package$.error(package.scala:27)
    at org.apache.spark.sql.catalyst.analysis.SimpleCatalog$$anonfun$1.apply(Catalog.scala:90)
    at org.apache.spark.sql.catalyst.analysis.SimpleCatalog$$anonfun$1.apply(Catalog.scala:90)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.sql.catalyst.analysis.SimpleCatalog.lookupRelation(Catalog.scala:90)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$2.applyOrElse(Analyzer.scala:84)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$2.applyOrElse(Analyzer.scala:82)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:165)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:183)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:212)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:168)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:183)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:212)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:168)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:156)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:82)
    at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:81)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:61)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:59)
    at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
    at scala.collection.immutable.List.foldLeft(List.scala:84)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:59)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:51)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.apply(RuleExecutor.scala:51)
    at org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:397)
    at org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:397)
    at org.apache.spark.sql.SQLContext$QueryExecution.optimizedPlan$lzycompute(SQLContext.scala:398)
    at org.apache.spark.sql.SQLContext$QueryExecution.optimizedPlan(SQLContext.scala:398)
    at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:402)
    at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:400)
    at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:406)
    at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:406)
    at org.apache.spark.sql.SchemaRDD.collect(SchemaRDD.scala:438)
    at org.apache.spark.sql.SchemaRDD.take(SchemaRDD.scala:440)
    at org.apache.spark.sql.SchemaRDD.take(SchemaRDD.scala:103)
    at org.apache.spark.rdd.RDD.first(RDD.scala:1092)
    at com.databricks.apps.logs.chapter1.LogAnalyzerSQL$.main(LogAnalyzerSQL.scala:33)
    at com.databricks.apps.logs.chapter1.LogAnalyzerSQL.main(LogAnalyzerSQL.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Exception in thread "main" java.lang.StackOverflowError

https://github.com/databricks/reference-apps/blob/master/twitter_classifier/scala/src/main/scala/com/databricks/apps/twitter_classifier/ExamineAndTrain.scala

Exception in thread "main" java.lang.StackOverflowError
scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)

while executing the following code
sqlContext.sql("SELECT text FROM tweetTable LIMIT 10").collect().foreach(println)

File not found - API to submit Apache Spark jobs on ec2

Aim: an API call from any machine that submits a Spark job to a Spark EC2 cluster. The job (a Python file) runs perfectly well on localhost with Apache Spark; however, I am unable to run it on the Apache Spark EC2 cluster.

API call

 curl -X POST http://ec2-54-209-108-127.compute-1.amazonaws.com:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{
  "action" : "CreateSubmissionRequest",
  "appArgs" : [ "" ],
  "appResource" : "wordcount.py",
  "clientSparkVersion" : "1.5.0",
  "environmentVariables" : {
    "SPARK_ENV_LOADED" : "1"
  },
  "mainClass" : "",
  "sparkProperties" : {
    "spark.jars" : "wordcount.py",
    "spark.driver.supervise" : "true",
    "spark.app.name" : "MyJob",
    "spark.eventLog.enabled": "true",
    "spark.submit.deployMode" : "cluster",
    "spark.master" : "spark://ec2-54-209-108-127.compute-1.amazonaws.com:6066"
  }}'

{
  "action" : "CreateSubmissionResponse",
  "message" : "Driver successfully submitted as driver-20160712145703-0003",
  "serverSparkVersion" : "1.6.1",
  "submissionId" : "driver-20160712145703-0003",
  "success" : true
}

To get the status of the submission, the following API call returns a "File not found" error:

curl  http://ec2-54-209-108-127.compute-1.amazonaws.com:6066/v1/submissions/status/driver-20160712145703-0003
{
  "action" : "SubmissionStatusResponse",
  "driverState" : "ERROR",
  "message" : "Exception from the cluster:\njava.io.FileNotFoundException: wordcount.py (No such file or directory)\n\tjava.io.FileInputStream.open(Native Method)\n\tjava.io.FileInputStream.<init>(FileInputStream.java:146)\n\torg.spark-project.guava.io.Files$FileByteSource.openStream(Files.java:124)\n\torg.spark-project.guava.io.Files$FileByteSource.openStream(Files.java:114)\n\torg.spark-project.guava.io.ByteSource.copyTo(ByteSource.java:202)\n\torg.spark-project.guava.io.Files.copy(Files.java:436)\n\torg.apache.spark.util.Utils$.org$apache$spark$util$Utils$$copyRecursive(Utils.scala:539)\n\torg.apache.spark.util.Utils$.copyFile(Utils.scala:510)\n\torg.apache.spark.util.Utils$.doFetchFile(Utils.scala:595)\n\torg.apache.spark.util.Utils$.fetchFile(Utils.scala:394)\n\torg.apache.spark.deploy.worker.DriverRunner.org$apache$spark$deploy$worker$DriverRunner$$downloadUserJar(DriverRunner.scala:150)\n\torg.apache.spark.deploy.worker.DriverRunner$$anon$1.run(DriverRunner.scala:79)",
  "serverSparkVersion" : "1.6.1",
  "submissionId" : "driver-20160712145703-0003",
  "success" : true,
  "workerHostPort" : "172.31.17.189:59433",
  "workerId" : "worker-20160712083825-172.31.17.189-59433"
}

Awaiting suggestions and improvements. P.S. I am a newbie in Apache Spark.

Updated API call (with mainClass, appArgs, appResource, and clientSparkVersion set to updated values):

curl -X POST http://ec2-54-209-108-127.compute-1.amazonaws.com:6066/v1/submissions/create
{
  "action" : "CreateSubmissionRequest",
  "appArgs" : [ "/wordcount.py" ],
  "appResource" : "file:/wordcount.py",
  "clientSparkVersion" : "1.6.1",
  "environmentVariables" : {
    "SPARK_ENV_LOADED" : "1"
  },
  "mainClass" : "org.apache.spark.deploy.SparkSubmit",
  "sparkProperties" : {
    "spark.driver.supervise" : "false",
    "spark.app.name" : "Simple App",
    "spark.eventLog.enabled": "true",
    "spark.submit.deployMode" : "cluster",
    "spark.master" : "spark://ec2-54-209-108-127.compute-1.amazonaws.com:6066"
  }
}

Can't import Maven project Log-Analyzer

Using jdk8, Maven 3.3.3, Eclipse Mars Release (4.5.0), Log-Analyzer application

Trying to import a Maven project into Eclipse (Import 'existing Maven project'). Getting this error: CoreException: Could not calculate build plan: Plugin org.apache.maven.plugins:maven-compiler-plugin:2.3.2 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.apache.maven.plugins:maven-compiler-plugin:jar:2.3.2: ArtifactResolutionException: Failure to transfer org.apache.maven.plugins:maven-compiler-plugin:pom:2.3.2 from https://repo.maven.apache.org/maven2

I have tried to install this plugin; whatever I tried via 'Install New Software' did not work. I also tried adding a couple of M2E connectors. I now have:
m2e connector for maven-remote-resources-plugin
m2e connector for the Maven Dependency Plugin

How can I solve this? Thanks.

How to run the Log Analyzer app

I am able to run all the chapter applications, but when I tried to run the Log Analyzer app I could not run it properly. I eventually resolved all errors and built it with Maven using the information in issue #53. Beyond that, please specify a step-by-step procedure for running the applications on single-node and multi-node clusters. I have seen that Flags.java specifies paths for LOGS_DIRECTORY, OUTPUT_HTML_FILE, and CHECKPOINT_DIRECTORY; could you tell me specifically which paths we should use with single-node and multi-node clusters?

https://github.com/databricks/reference-apps/blob/master/twitter_classifier/scala/src/main/scala/com/databricks/apps/twitter_classifier/ExamineAndTrain.scala

Exception in thread "main" java.lang.StackOverflowError
scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)

while executing the following code:
sqlContext.sql("SELECT text FROM tweetTable LIMIT 10").collect().foreach(println)

Please help me.

Typos in the text in section 'First Logs Analyzer in Spark'

The following text contains a small typo:

Again, call cache on the context size RDD...

The word context should be replaced with content.

Also, the following sentence in the text is not properly ended:

...there could be many invalid response codes to cause an.

new reference apps mentioned @ Spark Summit East 2015?

Hi @vidaha,

Just wondering when the new apps you mentioned 2 weeks ago would be made available?
Namely:
Wikipedia Dataset
Facebook API

I really enjoyed your talk at the event, and it would be appreciated if they were published in the repo.

Thanks in advance,
Charles

twitter_classifier compilation fails

$ sbt clean assembly
[info] Loading global plugins from /Users/elizachang/.sbt/0.13/plugins
[info] Loading project definition from /Users/elizachang/oss/reference-apps/twitter_classifier/scala/project
[info] Set current project to spark-twitter-lang-classifier (in build file:/Users/elizachang/oss/reference-apps/twitter_classifier/scala/)
[success] Total time: 0 s, completed Apr 29, 2015 3:47:00 PM
[info] Updating {file:/Users/elizachang/oss/reference-apps/twitter_classifier/scala/}scala...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Compiling 4 Scala sources to /Users/elizachang/oss/reference-apps/twitter_classifier/scala/target/scala-2.10/classes...
[error] /Users/elizachang/oss/reference-apps/twitter_classifier/scala/src/main/scala/com/databricks/apps/twitter_classifier/ExamineAndTrain.scala:51: value head is not a member of org.apache.spark.sql.Row
[error]     val texts = sqlContext.sql("SELECT text from tweetTable").map(_.head.toString)
[error]                                                                     ^
[error] /Users/elizachang/oss/reference-apps/twitter_classifier/scala/src/main/scala/com/databricks/apps/twitter_classifier/ExamineAndTrain.scala:62: value foreach is not a member of Array[Nothing]
[error]       some_tweets.foreach { t =>
[error]                   ^
[error] two errors found
[error] (compile:compileIncremental) Compilation failed
[error] Total time: 14 s, completed Apr 29, 2015 3:47:15 PM

Improvement to the Log Analyzer SQL example

The following code can be improved by better leveraging SQL:

// Calculate statistics based on the content size.
Tuple4<Long, Long, Long, Long> contentSizeStats =
    sqlContext.sql("SELECT SUM(contentSize), COUNT(*), MIN(contentSize), MAX(contentSize) FROM logs")
        .map(row -> new Tuple4<>(row.getLong(0), row.getLong(1), row.getLong(2), row.getLong(3)))
        .first();
System.out.println(String.format("Content Size Avg: %s, Min: %s, Max: %s",
    contentSizeStats._1() / contentSizeStats._2(),
    contentSizeStats._3(),
    contentSizeStats._4()));

Namely, SQL already supports calculating an average via the AVG function. Therefore, the improved code snippet may look as follows:

// Calculate statistics based on the content size.
Tuple3<Double, Long, Long> contentSizeStats =
    sqlContext.sql("SELECT AVG(contentSize), MIN(contentSize), MAX(contentSize) FROM logs")
        .map(row -> new Tuple3<>(row.getDouble(0), row.getLong(1), row.getLong(2)))
        .first();
System.out.println(String.format("Content Size Avg: %s, Min: %s, Max: %s",
    contentSizeStats._1(),
    contentSizeStats._2(),
    contentSizeStats._3()));

Wrong order for arguments in logs_analyzer/chapter1/python/README.md spark-submit example

The spark-submit example in
logs_analyzer/chapter1/python/README.md

says:
% ${YOUR_SPARK_HOME}/bin/spark-submit databricks/apps/logs/log_analyzer.py --py-files databricks/apps/logs/apache_access_log.py ../../data/apache.access.log

but instead it should say:
% ${YOUR_SPARK_HOME}/bin/spark-submit --py-files databricks/apps/logs/apache_access_log.py databricks/apps/logs/log_analyzer.py ../../data/apache.access.log

Reason: the arguments are in the wrong order, and when executed you get an "Input path does not exist: file:... reference-apps/logs_analyzer/chapter1/python/--py-files..." exception.

spark-submit thinks that --py-files is a file because it doesn't expect it after log_analyzer.py. log_analyzer.py (the Python application) should go after the Python files specified with the --py-files option.

See an example in the Python Programming Guide ("Standalone Programs" section).

WeatherApp

While running the sample WeatherApp, I get the following exception while the (embedded) Kafka is trying to connect to the ZooKeeper instance. Please advise me on how to resolve this issue.

[INFO] [2016-11-28 17:53:49,727] [org.apache.zookeeper.ClientCnxn]: Opening socket connection to server 192.168.0.8/192.168.0.8:2181. Will not attempt to authenticate using SASL (unknown error)
[INFO] [2016-11-28 17:53:55,834] [org.apache.zookeeper.ZooKeeper]: Session: 0x0 closed
[INFO] [2016-11-28 17:53:55,834] [org.apache.zookeeper.ClientCnxn]: EventThread shut down
[ERROR] [2016-11-28 17:53:55,838] [org.apache.zookeeper.server.NIOServerCnxnFactory]: Thread Thread[main,5,main] died
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 6000
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:880) ~[zkclient-0.3.jar:0.3]
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:98) ~[zkclient-0.3.jar:0.3]
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:84) ~[zkclient-0.3.jar:0.3]
at com.datastax.spark.connector.embedded.EmbeddedKafka.<init>(EmbeddedKafka.scala:29) ~[spark-cassandra-connector-embedded_2.10-1.1.0.jar:1.1.0]
at com.datastax.spark.connector.embedded.EmbeddedKafka.<init>(EmbeddedKafka.scala:18) ~[spark-cassandra-connector-embedded_2.10-1.1.0.jar:1.1.0]
at com.datastax.spark.connector.embedded.EmbeddedKafka.<init>(EmbeddedKafka.scala:23) ~[spark-cassandra-connector-embedded_2.10-1.1.0.jar:1.1.0]
at com.databricks.apps.WeatherApp$delayedInit$body.apply(WeatherApp.scala:46) ~[classes/:na]
at scala.Function0$class.apply$mcV$sp(Function0.scala:40) ~[scala-library.jar:na]
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12) ~[scala-library.jar:na]
at scala.App$$anonfun$main$1.apply(App.scala:71) ~[scala-library.jar:na]
at scala.App$$anonfun$main$1.apply(App.scala:71) ~[scala-library.jar:na]
at scala.collection.immutable.List.foreach(List.scala:318) ~[scala-library.jar:na]
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32) ~[scala-library.jar:na]
at scala.App$class.main(App.scala:71) ~[scala-library.jar:na]
at com.databricks.apps.WeatherApp$.main(WeatherApp.scala:40) ~[classes/:na]
at com.databricks.apps.WeatherApp.main(WeatherApp.scala) ~[classes/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_45]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_45]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_45]
at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_45]
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147) ~[idea_rt.jar:na]

Session concept

Hello,
Thanks for this tutorial!

I was wondering whether it would be possible to compute the session metric like Google Analytics does, using Apache Spark in offline mode? https://support.google.com/analytics/answer/2731565?hl=en

The algorithm should aggregate requests for the same IP and check whether the time between two requests (based on the timestamp in the log files, not on when the message is received) has been more than 30 minutes, to determine inactivity of the user, as sketched below.
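
A rough sketch of that sessionization pass with the Spark Java API, assuming the access logs have already been parsed into (ip, timestampMillis) pairs; the class and method names and the 30-minute threshold below just illustrate the description above:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.spark.api.java.JavaPairRDD;

public class SessionCounter {
    private static final long SESSION_GAP_MS = 30L * 60L * 1000L; // 30 minutes of inactivity

    // Counts sessions per IP: consecutive requests from the same IP belong to the
    // same session as long as they are less than 30 minutes apart.
    public static JavaPairRDD<String, Integer> sessionsPerIp(
            JavaPairRDD<String, Long> ipAndTimestamp) {
        return ipAndTimestamp
            .groupByKey()
            .mapValues(timestamps -> {
                List<Long> sorted = new ArrayList<>();
                timestamps.forEach(sorted::add);
                Collections.sort(sorted);
                int sessions = sorted.isEmpty() ? 0 : 1;
                for (int i = 1; i < sorted.size(); i++) {
                    if (sorted.get(i) - sorted.get(i - 1) > SESSION_GAP_MS) {
                        sessions++;
                    }
                }
                return sessions;
            });
    }
}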

Exception in thread "main" java.lang.NullPointerException

I am getting the error below while trying to run ExamineAndTrain.scala. Any help with this error would be greatly appreciated.

Exception in thread "main" java.lang.NullPointerException
at java.util.Hashtable.put(Hashtable.java:514)
at java.util.Properties.setProperty(Properties.java:161)
at java.lang.System.setProperty(System.java:787)
at com.databricks.apps.twitter_classifier.Utils$.parseCommandLineWithTwitterCredentials(Utils.scala:32)
at com.databricks.apps.twitter_classifier.Collect$.main(Collect.scala:26)
at com.databricks.apps.twitter_classifier.Collect.main(Collect.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:292)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:55)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Compile error in Renderer.java from Log Analyzer (IOUtils)

I cloned the Databricks project, updated my JDK and JRE to version 8, and got the following compilation error:

Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.3.2:compile (default-compile) on project log-analyzer: Compilation failure
[ERROR] ${user}/databricks/reference-apps/logs_analyzer/app/java8/src/main/java/com/databricks/apps/logs/Renderer.java:[19,28] error: no suitable method found for toString(InputStream,Charset)

So I added the dependency to the pom.xml, and downloaded and installed the Apache IOUtils JAR to my local .m2 repository, but I am still getting this same error.

Has anyone seen this before? Any ideas what could be causing it out of the box like this? I didn't remove Java 7, because I have other development work that depends on it, but it is disabled.
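
One possible cause, offered only as a guess: the (InputStream, Charset) overload of IOUtils.toString was introduced in commons-io 2.3, so an older commons-io on the build classpath would produce exactly this "no suitable method found" error. A small sketch of a call that also compiles against older commons-io versions (everything except IOUtils.toString is an illustrative name):

import java.io.IOException;
import java.io.InputStream;

import org.apache.commons.io.IOUtils;

public class TemplateReader {
    // Uses the encoding-name overload of IOUtils.toString, which exists in older
    // commons-io releases as well (the Charset overload only appeared in 2.3).
    public static String readAll(InputStream in) throws IOException {
        return IOUtils.toString(in, "UTF-8");
    }
}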

train.md

I'm not a big fan of how "train.md" obscures the important points of the code. It was much simpler for students to comprehend the featurizing before this code was refactored.

wrong usage in README.md [logs_analyzer/app/java8]

  • spark-submit uses the wrong Java class: it should be LogAnalyzerAppMain instead of LogsAnalyzerReferenceAppMain
  • The README should use uber-log-analyzer-1.0.jar instead of log-analyzer-1.0.jar to avoid a ClassNotFound exception (java.lang.NoClassDefFoundError: com/google/common/base/Charsets)

twitter_classifier/Collect.scala: Pending TODO can be completed: SPARK-3390 was fixed in Spark 1.2.0.

Line 42 of reference-apps/twitter_classifier/scala/src/main/scala/com/databricks/apps/twitter_classifier/Collect.scala can now be safely removed, as SPARK-3390 was fixed in pull request #2364 for Apache Spark 1.2.0.

If you use Spark 1.2.0, this is the code that can be removed:

.filter(!_.contains("boundingBoxCoordinates")) // TODO(vida): Remove this workaround when SPARK-3390 is fixed.

If you remove it while on Spark 1.1.0, Collect.scala won't break when run, but ExamineAndTrain.scala will fail with a "scala.MatchError: StructType(List())" exception. It will be caused by the "boundingBoxCoordinates" JSON entries, which Spark 1.1.0 doesn't handle properly.

twitter_classifier hangs

When I run the twitter_classifier example, it seems to hang when it tries to get the rdd.count() inside the foreachRDD loop. When it hangs, it prints out something similar to the following in my terminal:

[Stage 1:>                                                          (0 + 0) / 3]

spark-submit wrong java class: It should be LogAnalyzer instead of LogsAnalyzer

logs_analyzer/chapter1/java6/README.md and logs_analyzer/chapter1/java8/README.md are referring to com.databricks.apps.logs.LogsAnalyzer whereas they should refer to com.databricks.apps.logs.LogAnalyzer. This is causing a "java.lang.ClassNotFoundException: com.databricks.apps.logs.chapter1.LogsAnalyzer" error.

They should say

%  ${YOUR_SPARK_HOME}/bin/spark-submit
   --class "com.databricks.apps.logs.LogAnalyzer"
   --master local[4]
   target/log-analyzer-1.0.jar
   ../../data/apache.access.log

instead of

%  ${YOUR_SPARK_HOME}/bin/spark-submit
   --class "com.databricks.apps.logs.LogsAnalyzer"
   --master local[4]
   target/log-analyzer-1.0.jar
   ../../data/apache.access.log

java.lang.NoClassDefFoundError: kafka/admin/CreateTopicCommand in com.databricks.apps.WeatherApp

After successfully running part [1] com.databricks.apps.WeatherClientApp via sbt/sbt weather/run (with some edits to increment ports), I tried to start [2] com.databricks.apps.WeatherApp and ran into the following problem.

~/software/spark/spark-1.3.0-bin-hadoop2.4/work/databricksSparkRef/reference-apps/timeseries/scala$ sbt/sbt weather/run
Launching sbt from sbt/sbt-launch-0.13.6.jar
[info] Loading project definition from /home/wynrs1/software/spark/spark-1.3.0-bin-hadoop2.4/work/databricksSparkRef/reference-apps/timeseries/scala/project
[info] Set current project to timeseries samples with cassandra and kafka (in build file:/home/wynrs1/software/spark/spark-1.3.0-bin-hadoop2.4/work/databricksSparkRef/reference-apps/timeseries/scala/)
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list

Multiple main classes detected, select one to run:

[1] com.databricks.apps.WeatherClientApp
[2] com.databricks.apps.WeatherApp

Enter number: 2

[info] Running com.databricks.apps.WeatherApp
[INFO] [2015-05-27 16:25:07,722] [com.databricks.apps.weather.WeatherSettings]: Starting up with spark master 'local[*]' cassandra hosts '127.0.0.1'
[INFO] [2015-05-27 16:25:07,735] [com.databricks.apps.weather.WeatherSettings]: Found 4 data files to load.
ZooKeeperServer isRunning: true
ZooKeeper Client connected.
Starting the Kafka server at 127.0.1.1:2181
error java.lang.NoClassDefFoundError: kafka/admin/CreateTopicCommand$
java.lang.NoClassDefFoundError: kafka/admin/CreateTopicCommand$
at com.datastax.spark.connector.embedded.EmbeddedKafka.createTopic(EmbeddedKafka.scala:61)
at com.databricks.apps.WeatherApp$delayedInit$body.apply(WeatherApp.scala:49)
at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
at scala.App$class.main(App.scala:71)
at com.databricks.apps.WeatherApp$.main(WeatherApp.scala:40)
at com.databricks.apps.WeatherApp.main(WeatherApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
Caused by: java.lang.ClassNotFoundException: kafka.admin.CreateTopicCommand$
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at com.datastax.spark.connector.embedded.EmbeddedKafka.createTopic(EmbeddedKafka.scala:61)
at com.databricks.apps.WeatherApp$delayedInit$body.apply(WeatherApp.scala:49)
at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
at scala.App$class.main(App.scala:71)
at com.databricks.apps.WeatherApp$.main(WeatherApp.scala:40)
at com.databricks.apps.WeatherApp.main(WeatherApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
[trace] Stack trace suppressed: run last weather/compile:run for the full output.

Any suggestions?

logs_analyzer/chapter1/scala/build.sbt - "provided" should go inside quotes

logs_analyzer/chapter1/scala/build.sbt has a typo when appending new values to the libraryDependencies variable: It should use "provided" instead of provided.

As of now, when you run "sbt package" it produces this error message:

logs_analyzer/chapter1/scala/build.sbt:7: error: not found: value provided
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.1" % provided'''

Therefore, the assignment should be:

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.1" % "provided"

libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.0.1" % "provided"

libraryDependencies += "org.apache.spark" %% "spark-streaming" % "1.0.1" % "provided"

Instead of:

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.1" % provided

libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.0.1" % provided

libraryDependencies += "org.apache.spark" %% "spark-streaming" % "1.0.1" % provided

sbt/sbt weather/run [error]

How can I get past this error?
reference-apps/timeseries/scala$ sbt/sbt weather/run

[error] /home/wynrs1/software/spark/spark-1.3.0-bin-hadoop2.4/work/databricksSparkRef/reference-apps/timeseries/scala/timeseries-weather/src/main/scala/com/databricks/apps/WeatherClientApp.scala:18: object deploy is not a member of package com.sun
[error] import com.sun.deploy.config.ConfigFactory

Spark 2 compatibility

As Spark 2 is no longer breaking news, it's time to make the reference apps work with it.
I can see at least the following things to do (a minimal migration sketch follows the list):

  • Use SparkSession instead of SQLContext
  • Use Dataset API
  • Update JavaDStream.foreachRDD() invocations to new contract (returns nothing)
  • Update JavaPairDStream.updateStateByKey() to new contract (uses Spark implementation of Optional)
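
For the first two items, a minimal Spark 2 sketch of what the migration could look like in the Java apps (the app name, input path, and query are illustrative, not the actual refactoring):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkSessionMigrationSketch {
    public static void main(String[] args) {
        // Spark 2: a single SparkSession replaces SQLContext (and HiveContext).
        SparkSession spark = SparkSession.builder()
            .appName("Log Analyzer")
            .getOrCreate();

        // Dataset/DataFrame API instead of SchemaRDD and registerAsTable/registerTempTable.
        Dataset<Row> logs = spark.read().json("/path/to/logs.json");
        logs.createOrReplaceTempView("logs");
        spark.sql("SELECT COUNT(*) FROM logs").show();

        spark.stop();
    }
}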

RuntimeException when running the code

When I follow the README's steps one by one and run spark-submit, I get this error:
ERROR Executor: Exception in task 1.0 in stage 0.0 (TID 1) java.lang.RuntimeException: Error parsing logline. I know where it comes from, but I have no idea how to resolve it. Any tips?

Problem in SBT Assembly: ivyStylePatterns Resolver URL is down

I am new to Scala and Spark, so this may not be an issue; maybe I just don't know how to use it.
I am trying to build the "twitter_classifier" project.
I am on CDH 5.5.1, and in build.sbt I have correctly set the Spark version to 1.5.0.
When I do sbt compile, it succeeds.

When I do "sbt assembly", it fails with following error
[info] Set current project to sbt (in build file:/home/anirusharma/reference-apps/twitter_classifier/scala/sbt/)
[error] Not a valid command: assembly
[error] Not a valid project ID: assembly
[error] Expected ':' (if selecting a configuration)
[error] Not a valid key: assembly
[error] assembly
[error] ^

I notice that plugins.sbt mentions the URL
"http://scalasbt.artifactoryonline.com/scalasbt/sbt-plugin-releases"

This repository URL is down now. I tried to replace it with the following URL:
"https://repo.typesafe.com/typesafe/ivy-releases"

But that does not work and it still gives an error. I have tried a few other URLs there, but they may not work either.
I don't know whether this error is really because the above-mentioned URL does not work, or whether something else is the root cause (i.e., why it still does not work when I replace the URL). Maybe I am replacing it with an incorrect URL, or there is some other problem.

Kindly help me resolve this.

Algorithm constraints check failed: SHA1withRSA while running Twitter Classifier Example

Hi,
I have successfully compiled the Twitter classifier sample and I am trying to run the first program to collect the tweets. When I run the example I am running into this issue:

16/06/13 21:52:43 ERROR scheduler.ReceiverTracker: Deregistered receiver for stream 0: Restarting receiver with delay 2000ms: Error receiving tweets - sun.security.validator.ValidatorException: PKIX path validation failed: java.security.cert.CertPathValidatorException: Algorithm constraints check failed: SHA1withRSA
Relevant discussions can be found on the Internet at: http://www.google.co.jp/search?q=d0031b0b or http://www.google.co.jp/search?q=1db75522
TwitterException{exceptionCode=[d0031b0b-1db75522 db667dea-99334ae4 db667dea-99334ae4 db667dea-99334ae4], statusCode=-1, message=null, code=-1, retryAfter=-1, rateLimitStatus=null, version=3.0.3}
at twitter4j.internal.http.HttpClientImpl.request(HttpClientImpl.java:192)
at twitter4j.internal.http.HttpClientWrapper.request(HttpClientWrapper.java:61)
at twitter4j.internal.http.HttpClientWrapper.get(HttpClientWrapper.java:89)
at twitter4j.TwitterStreamImpl.getSampleStream(TwitterStreamImpl.java:176)
at twitter4j.TwitterStreamImpl$4.getStream(TwitterStreamImpl.java:164)
at twitter4j.TwitterStreamImpl$TwitterStreamConsumer.run(TwitterStreamImpl.java:462)
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path validation failed: java.security.cert.CertPathValidatorException: Algorithm constraints check failed: SHA1withRSA

My java is
/usr/jdk64/java-1.8.0-openjdk-1.8.0.45-28.b13.el6_6.x86_64/jre/bin/java

minor spelling error

The line:
"Click here to read this content one in html format."

Should be:

"Click here to read this content in HTML format."

Row class

Hi, Guys!

I tried the Log Analyzer examples, in particular the LogAnalyzerSql one. I include spark-core_2.10 1.6.1 as well as spark-sql_2.10 1.6.2 in my classpath, but I keep getting compilation errors: cannot access Row class... If I use other jar versions (1.1.0, etc.), I keep getting missing classes (DataFrames, etc.). Could anyone tell me which versions of these jars I should include to run the LogAnalyzerSql examples? I have googled this, but nobody else seems to have a similar issue...

thanks!

error in running ExamineAndTrain.scala

15/03/26 15:01:02 ERROR Executor: Exception in task 4.0 in stage 2.0 (TID 16)
scala.MatchError: StructType(List()) (of class org.apache.spark.sql.catalyst.types.StructType)
at org.apache.spark.sql.json.JsonRDD$.enforceCorrectType(JsonRDD.scala:348)
at org.apache.spark.sql.json.JsonRDD$$anonfun$enforceCorrectType$1.apply(JsonRDD.scala:350)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at org.apache.spark.sql.json.JsonRDD$.enforceCorrectType(JsonRDD.scala:350)
at org.apache.spark.sql.json.JsonRDD$$anonfun$enforceCorrectType$1.apply(JsonRDD.scala:350)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at org.apache.spark.sql.json.JsonRDD$.enforceCorrectType(JsonRDD.scala:350)
at org.apache.spark.sql.json.JsonRDD$$anonfun$org$apache$spark$sql$json$JsonRDD$$asRow$1$$anonfun$apply$12.apply(JsonRDD.scala:381)
at scala.Option.map(Option.scala:145)
at org.apache.spark.sql.json.JsonRDD$$anonfun$org$apache$spark$sql$json$JsonRDD$$asRow$1.apply(JsonRDD.scala:380)
at org.apache.spark.sql.json.JsonRDD$$anonfun$org$apache$spark$sql$json$JsonRDD$$asRow$1.apply(JsonRDD.scala:365)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.sql.json.JsonRDD$.org$apache$spark$sql$json$JsonRDD$$asRow(JsonRDD.scala:365)
at org.apache.spark.sql.json.JsonRDD$$anonfun$org$apache$spark$sql$json$JsonRDD$$asRow$1$$anonfun$apply$7.apply(JsonRDD.scala:369)
at org.apache.spark.sql.json.JsonRDD$$anonfun$org$apache$spark$sql$json$JsonRDD$$asRow$1$$anonfun$apply$7.apply(JsonRDD.scala:369)
at scala.Option.map(Option.scala:145)
at org.apache.spark.sql.json.JsonRDD$$anonfun$org$apache$spark$sql$json$JsonRDD$$asRow$1.apply(JsonRDD.scala:368)
at org.apache.spark.sql.json.JsonRDD$$anonfun$org$apache$spark$sql$json$JsonRDD$$asRow$1.apply(JsonRDD.scala:365)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.sql.json.JsonRDD$.org$apache$spark$sql$json$JsonRDD$$asRow(JsonRDD.scala:365)
at org.apache.spark.sql.json.JsonRDD$$anonfun$jsonStringToRow$1.apply(JsonRDD.scala:38)
at org.apache.spark.sql.json.JsonRDD$$anonfun$jsonStringToRow$1.apply(JsonRDD.scala:38)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1167)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:904)
at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:904)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
at org.apache.spark.scheduler.Task.run(Task.scala:54)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)

Examples force a fixed number of cores for the Spark local master

The guidelines for running the examples force the user to run the Spark local master with 4 cores:
--master local[4]
The real number of logical cores may differ from user to user. It would be better to let Spark decide how many cores it needs:
--master local[*]

Remove most of the relative links throughout the docs and push people to gitbook menus instead

All the links throughout the docs cause two problems:

  • Maintainability (see #8)
  • They encourage people not to use the gitbooks and instead to click around the GitHub project

The latter is a less-than-ideal experience for reading the doc; the GitHub project is better suited to developers looking to make a contribution.

Let's remove all the relative links from the doc pages and just let the gitbook menu do the work.

An example on Structured Streaming

There are examples illustrating Streaming with SQL processing. I suppose that in Spark 2 the preferred way to process streaming data with SQL queries is Spark Structured Streaming. It makes sense to rework at least one example to illustrate this technique (a minimal sketch follows the list). Candidates are:

  • LogAnalyzerStreamingSQL
  • LogAnalyzerStreamingImportDirectory
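
A minimal Structured Streaming sketch along the lines of LogAnalyzerStreamingImportDirectory: watch a directory for new log files and keep a running count. The input path and the trivial aggregation below are illustrative stand-ins for the real log parsing and statistics:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class StructuredStreamingSketch {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
            .appName("Structured Streaming Log Counts")
            .getOrCreate();

        // Stream text files dropped into a directory; each line becomes a row
        // with a single "value" column (the directory path is illustrative).
        Dataset<Row> lines = spark.readStream().text("/tmp/logs");

        // A trivial aggregation as a stand-in for the real per-endpoint statistics.
        Dataset<Row> counts = lines.groupBy("value").count();

        // Continuously print the updated counts to the console.
        StreamingQuery query = counts.writeStream()
            .outputMode("complete")
            .format("console")
            .start();

        query.awaitTermination();
    }
}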
