Comments (4)
@ccwgit Me too. How can this issue be solved? Thanks.
from spark-sql-perf.
Got exactly the same problem with Spark 2.1.1. Wondering if anyone has managed to fix it.
I updated my code, but I don't know whether it is correct.
val breakdownResults = if (includeBreakdown) {
  // Enumerate the physical operators in the executed plan, pairing each
  // operator with its index so child times can be looked up later.
  val physicalOperators = queryExecution.executedPlan
    .collect { case p: SparkPlan => p }
    .zipWithIndex
    .map { case (op, index) => (index, op) }
  val indexMap = physicalOperators.map { case (index, op) => (op, index) }.toMap
  val timeMap = new mutable.HashMap[Int, Double]

  // Walk the operators bottom-up so every node's children have been timed
  // before the node itself; a node's exclusive time is its measured time
  // minus the sum of its children's measured times.
  physicalOperators.reverse.map { case (index, node) =>
    messages += s"Breakdown: ${node.simpleString}"
    val executionTime = measureTimeMs {
      node.execute().foreach(_ => ())
    }
    timeMap += ((index, executionTime))
    val childIndexes = node.children.map(indexMap)
    val childTime = childIndexes.map(timeMap).sum
    messages += s"Breakdown time: $executionTime (+${executionTime - childTime})"
    BreakdownResult(
      node.nodeName,
      node.simpleString.replaceAll("#\\d+", ""),
      index,
      childIndexes,
      executionTime,
      executionTime - childTime)
  }
} else {
  Seq.empty[BreakdownResult]
}
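To make the bookkeeping above concrete, here is a minimal plain-Scala sketch (no Spark; the `Node` case class and `exclusiveTimes` helper are hypothetical names, not part of spark-sql-perf) of how the per-operator exclusive time is derived from the measured wall times:

```scala
// Sketch of the exclusive-time computation: each node's own cost is its
// measured time minus the sum of its children's measured times.
object ExclusiveTimeSketch {
  // A plan node identified by index, with the indexes of its children.
  case class Node(index: Int, children: Seq[Int])

  // Given measured wall times per node index, derive exclusive times.
  def exclusiveTimes(nodes: Seq[Node], measured: Map[Int, Double]): Map[Int, Double] =
    nodes.map { n =>
      val childTime = n.children.map(measured).sum
      n.index -> (measured(n.index) - childTime)
    }.toMap

  def main(args: Array[String]): Unit = {
    // A tiny plan: node 0 is the root with children 1 and 2.
    val plan = Seq(Node(0, Seq(1, 2)), Node(1, Nil), Node(2, Nil))
    val measured = Map(0 -> 10.0, 1 -> 3.0, 2 -> 4.0)
    // Node 0's exclusive time is 10.0 - (3.0 + 4.0) = 3.0.
    println(exclusiveTimes(plan, measured))
  }
}
```

This mirrors what the `timeMap`/`childIndexes` logic computes, which is why the operators must be processed in reverse (children first).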
Failed to compile the benchmark on Spark 2.3.2. It looks like some classes are missing.
sh-4.2# bin/run --help
Using as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[info] Loading project definition from /opt/spark/bin/spark-sql-perf-master/project
Missing bintray credentials /root/.bintray/.credentials. Some bintray features depend on this.
[info] Set current project to spark-sql-perf (in build file:/opt/spark/bin/spark-sql-perf-master/)
[warn] Credentials file /root/.bintray/.credentials does not exist
[info] Compiling 66 Scala sources to /opt/spark/bin/spark-sql-perf-master/target/scala-2.11/classes...
[warn] /opt/spark/bin/spark-sql-perf-master/src/main/scala/com/databricks/spark/sql/perf/CpuProfile.scala:107: non-variable type argument String in type pattern Seq[String] (the underlying of Seq[String]) is unchecked since it is eliminated by erasure
[warn] case Row(stackLines: Seq[String], count: Long) => stackLines.map(toStackElement) -> count :: Nil
[warn] ^
[warn] /opt/spark/bin/spark-sql-perf-master/src/main/scala/com/databricks/spark/sql/perf/tpcds/TPCDS.scala:30: no valid targets for annotation on value sqlContext - it is discarded unused. You may specify targets with meta-annotations, e.g. @(transient @param)
[warn] class TPCDS(@transient sqlContext: SQLContext)
[warn] ^
[warn] /opt/spark/bin/spark-sql-perf-master/src/main/scala/com/databricks/spark/sql/perf/tpch/TPCH.scala:167: no valid targets for annotation on value sqlContext - it is discarded unused. You may specify targets with meta-annotations, e.g. @(transient @param)
[warn] class TPCH(@transient sqlContext: SQLContext)
[warn] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:48: not found: type ClassificationNode
[error] .asInstanceOf[ClassificationNode]
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:59: not found: type RegressionNode
[error] .asInstanceOf[RegressionNode]
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:180: not found: type RegressionLeafNode
[error] new RegressionLeafNode(prediction, impurity, impurityStats)
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:182: not found: type ClassificationLeafNode
[error] new ClassificationLeafNode(prediction, impurity, impurityStats)
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:196: not found: type RegressionInternalNode
[error] new RegressionInternalNode(prediction, impurity, gain,
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:197: not found: type RegressionNode
[error] leftChild.asInstanceOf[RegressionNode], rightChild.asInstanceOf[RegressionNode],
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:197: not found: type RegressionNode
[error] leftChild.asInstanceOf[RegressionNode], rightChild.asInstanceOf[RegressionNode],
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:200: not found: type ClassificationInternalNode
[error] new ClassificationInternalNode(prediction, impurity, gain,
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:201: not found: type ClassificationNode
[error] leftChild.asInstanceOf[ClassificationNode], rightChild.asInstanceOf[ClassificationNode],
[error] ^
[error] /opt/spark/bin/spark-sql-perf-master/src/main/scala/org/apache/spark/ml/ModelBuilderSSP.scala:201: not found: type ClassificationNode
[error] leftChild.asInstanceOf[ClassificationNode], rightChild.asInstanceOf[ClassificationNode],
[error] ^
[warn] three warnings found
[error] 10 errors found
[error] (compile:compileIncremental) Compilation failed
[error] Total time: 10 s, completed Oct 31, 2018 11:52:58 PM
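The `not found: type` errors all point at `ModelBuilderSSP.scala` referencing MLlib tree-node classes (`ClassificationNode`, `RegressionLeafNode`, etc.) that do not exist in the Spark version the build compiles against. One hedged workaround, assuming the build resolves Spark from `libraryDependencies` (the setting names and version numbers below are illustrative assumptions, not taken from this repository's build.sbt), is to make the compile-time Spark version match what the checked-out sources expect:

```scala
// build.sbt fragment (illustrative only): compile against the same Spark
// release you will run on, and mark Spark artifacts "provided" so the
// cluster's jars are used at runtime. Version numbers are placeholders.
val sparkVer = "2.3.2"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"   % sparkVer % "provided",
  "org.apache.spark" %% "spark-mllib" % sparkVer % "provided"
)
```

If the errors persist after aligning versions, the checked-out spark-sql-perf sources likely target a newer Spark than the one installed; building from a tag or commit contemporary with that Spark release is the usual remedy.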
Related Issues (20)
- How to put data into external storage?
- suitable exector-memory for spark-sql-perf testing
- Validating the correctness of results HOT 2
- The Query and Generate mismatch
- For spark-3.0.0, there is no method called org.apache.spark.sql.SQLContext.createExternalTable
- build errors due to dependencies HOT 1
- Spark 3.0.0 compile error
- Getting error when analyzing the columns
- Use CHAR/VARCHAR types in TPCDSTables HOT 2
- Error when trying to create binary from source code
- sbt run error with unresolved dependency
- NoSuchMethodError on Spark 3.1 in Databricks HOT 1
- sbt package failed with unresolved dependency HOT 5
- executor_per_core is fixed to 1 vCores in spark-sql-perf on EMR
- Build failed
- Compilation failed for Spark 3.2.0
- command "build/sbt .." failed with unresolved dependency HOT 1
- genData, the data isn`t stored the location I set. HOT 1
- genData,the tpchdata always stored in the dbgen directory.
- does it has plan to support tpcds 3.2