
archived-sansa-query's Introduction

Archived Repository - Do not use this repository anymore!

SANSA got easier to use! All its code has been consolidated into a single repository at https://github.com/SANSA-Stack/SANSA-Stack

SANSA Query


Description

SANSA Query is a library for performing SPARQL queries over RDF data using the big data engines Spark and Flink. It can query RDF data residing both in HDFS and in a local file system. Queries are executed in a distributed, parallel fashion across Spark RDDs/DataFrames or Flink DataSets. Furthermore, SANSA-Query can query non-RDF data stored in databases, e.g. MongoDB, Cassandra and MySQL, or in the Parquet file format, using Spark.

For RDF data, SANSA uses a vertical partitioning (VP) approach and is designed to support extensible partitioning of RDF data. Instead of dealing with a single triple table (s, p, o), the data is partitioned into multiple tables based on the RDF predicates used, the RDF term types and the literal datatypes. The first column of these tables is always a string representing the subject. The second column always represents the literal value as a Scala/Java datatype. Tables storing literals with language tags have an additional third string column for the language tag. SANSA uses Sparqlify as a scalable SPARQL-to-SQL rewriter.
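
As an illustration of this layout, the following minimal sketch shows hypothetical row shapes for three such partition tables; the case classes are illustrative only and are not the actual SANSA types.

// Illustrative only: hypothetical row shapes for vertically partitioned tables.

// A predicate with xsd:int objects -> table of (subject, Int)
case class AgeRow(s: String, o: Int)

// A predicate with language-tagged string literals -> (subject, lexical value, language tag)
case class NameRow(s: String, o: String, lang: String)

// A predicate with IRI objects -> (subject, object IRI)
case class KnowsRow(s: String, o: String)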

For heterogeneous data sources (data lake), SANSA uses virtual property-table (PT) partitioning, whereby the data relevant to a query is loaded on the fly into Spark DataFrames whose attributes correspond to the properties of the query.
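
To make this concrete, here is a minimal, hypothetical sketch of loading one relevant source into such a property-table DataFrame; the file path and column names are assumptions for illustration, not part of SANSA.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("pt-sketch").master("local[*]").getOrCreate()

// Suppose the query asks for ?label and ?price of ?product. A relevant source
// (a Parquet file here; a MongoDB or Cassandra table accessed via a Spark
// connector works the same way) is loaded on the fly, keeping only the
// attributes that match the query's properties.
val products = spark.read.parquet("hdfs:///data/products.parquet")
  .select("id", "label", "price")
products.show()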

SANSA Query SPARK - RDF

In SANSA Query Spark for RDF, the method for partitioning an RDD[Triple] is located in RdfPartitionUtilsSpark. It uses an RdfPartitioner, which maps a Triple to a single RdfPartition instance.

  • RdfPartition - as the name suggests, represents a partition of the RDF data and defines two methods:
    • matches(Triple): Boolean: This method is used to test whether a triple fits into the partition.
    • layout: TripleLayout: This method returns the TripleLayout associated with the partition, as explained below.
    • Furthermore, RdfPartitions are expected to be serializable and to define equals and hashCode.
  • TripleLayout instances are used to obtain framework-agnostic, compact tabular representations of triples according to a partition. For this purpose, it defines two methods:
    • fromTriple(triple: Triple): Product: This method must, for a given triple, return its representation as a Product (the superclass of all Scala tuples).
    • schema: Type: This method must return the exact Scala type of the objects returned by fromTriple, such as typeOf[Tuple2[String, Double]]. Hence, a layout is expected to only yield instances of one specific type.

See the available layouts for details.
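
The interfaces described above roughly correspond to the following sketch; the method signatures follow the description, but the actual SANSA traits may differ in detail.

import scala.reflect.runtime.universe.Type
import org.apache.jena.graph.Triple

// Sketch only; names and signatures follow the textual description above.
trait TripleLayout {
  def schema: Type                        // exact Scala type returned by fromTriple, e.g. typeOf[(String, Double)]
  def fromTriple(triple: Triple): Product // compact tabular representation of one triple
}

trait RdfPartition extends Serializable {
  def matches(triple: Triple): Boolean    // does this triple fit into the partition?
  def layout: TripleLayout                // layout used to tabulate the partition's triples
}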

SANSA Query SPARK - Heterogeneous Data Sources

SANSA Query Spark for heterogeneous data sources (data lake) is composed of four main components:

  • Analyser: extracts the SPARQL triple patterns and groups them by subject; it also extracts any operations on subjects such as filters, GROUP BY, ORDER BY, DISTINCT and LIMIT.
  • Planner: extracts the joins between the subject-based triple pattern groups and generates a join plan accordingly. The join order followed is left-deep.
  • Mapper: accesses the (RML) mappings and matches the properties of a subject-based triple pattern group against the attributes of the individual data sources. If a match exists for every property of the triple pattern group, the respective data source is declared relevant and loaded into a Spark DataFrame. The loading into DataFrames is performed using Spark connectors.
  • Executor: analyses the SPARQL query and generates equivalent Spark SQL operations over the DataFrames for SELECT, WHERE, GROUP BY, ORDER BY and LIMIT. Connections between subject-based triple pattern groups are translated into joins between the relevant Spark DataFrames (see the illustrative sketch after this list).
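
A minimal, hypothetical sketch of the Executor step is shown below; it is not the actual SANSA code, and the DataFrames and column names are assumed for illustration.

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

// Two subject-based triple pattern groups have been loaded into DataFrames by the
// Mapper; the Planner's join and the query's modifiers become Spark SQL operations.
def executeQuery(products: DataFrame, producers: DataFrame): DataFrame = {
  products
    .filter(col("price") > 100)                                   // FILTER
    .join(producers, products("producerId") === producers("id"))  // join between the groups
    .orderBy(col("price"))                                        // ORDER BY
    .select(products("label"), producers("name"), col("price"))   // SELECT (projection)
    .limit(10)                                                     // LIMIT
}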

Usage

The following Scala code shows how to query an RDF file with SPARQL syntax (be it a local file or a file residing in HDFS):

val spark: SparkSession = ...

// Load the N-Triples file into an RDD[Triple] (works for local paths and HDFS)
val lang = Lang.NTRIPLES
val triples = spark.rdf(lang)("path/to/rdf.nt")

// Vertically partition the triples and set up the Sparqlify SPARQL-to-SQL rewriter
val partitions = RdfPartitionUtilsSpark.partitionGraph(triples)
val rewriter = SparqlifyUtils3.createSparqlSqlRewriter(spark, partitions)

val qef = new QueryExecutionFactorySparqlifySpark(spark, rewriter)

// Expose the query execution factory as a SPARQL endpoint
val port = 7531
val server = FactoryBeanSparqlServer.newInstance.setSparqlServiceFactory(qef).setPort(port).create()
server.join()
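
Once the server is running, the endpoint can be queried with any SPARQL client, for example with plain Jena over HTTP as sketched below; the endpoint URL path is an assumption and may differ depending on the server configuration.

import org.apache.jena.query.QueryExecutionFactory

// Hypothetical client-side check against the endpoint started above
val qe = QueryExecutionFactory.sparqlService(
  "http://localhost:7531/sparql",            // assumed endpoint path
  "SELECT * WHERE { ?s ?p ?o } LIMIT 10")
val results = qe.execSelect()
while (results.hasNext) println(results.next())
qe.close()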

An overview is given in the FAQ section of the SANSA project page. Further documentation about the builder objects can also be found on the ScalaDoc page.

For querying heterogeneous data sources, refer to the documentation of the dedicated SANSA-DataLake component.

How to Contribute

We always welcome new contributors to the project! Please see our contribution guide for more details on how to get started contributing to SANSA.

archived-sansa-query's People

Contributors

aklakan, cescwang1991, dependabot[bot], dgraux, gezimsejdiu, imransilvake, lorenzbuehmann, mnmami, patrickwestphal, simonbin


archived-sansa-query's Issues

Exception in QuerySystem with valid SPARQL query

When trying to run the QuerySystem like so

import java.io.File

import org.apache.commons.io.FileUtils
import org.apache.jena.graph.Triple
import org.apache.jena.riot.Lang
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

import net.sansa_stack.query.spark.semantic.QuerySystem

object Foo {
  val symbol = Map(
    "space" -> " " * 5,
    "blank" -> " ",
    "tabs" -> "\t",
    "newline" -> "\n",
    "colon" -> ":",
    "comma" -> ",",
    "hash" -> "#",
    "slash" -> "/",
    "question-mark" -> "?",
    "exclamation-mark" -> "!",
    "curly-bracket-left" -> "{",
    "curly-bracket-right" -> "}",
    "round-bracket-left" -> "(",
    "round-bracket-right" -> ")",
    "less-than" -> "<",
    "greater-than" -> ">",
    "at" -> "@",
    "dot" -> ".",
    "dots" -> "...",
    "asterisk" -> "*",
    "up-arrows" -> "^^")

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .master("local[*]")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .appName("SANSA - Semantic Partitioning")
      .getOrCreate()

    import net.sansa_stack.rdf.spark.io._
    import net.sansa_stack.rdf.spark.partition.semantic.RdfPartition

    val log: RDD[Triple] = spark.rdf(Lang.NTRIPLES)("/tmp/log.nt")
    val partition: RDD[String] = new RdfPartition(
      symbol, log, "/tmp/sem_partitions",
      spark.sparkContext.defaultMinPartitions).partitionGraph()

    val resultsDir = new File("/tmp/results")
    FileUtils.deleteDirectory(resultsDir)

    val qs = new QuerySystem(
        symbol,
        partition,
        "/tmp/query.sparql",
        resultsDir.getAbsolutePath,
        spark.sparkContext.defaultMinPartitions)
    qs.run()
  }
}

with /tmp/query.sparql containing the simple SPARQL query

SELECT ?s
WHERE
  { 
    ?s   ?p  ?o .
  }

I get an IndexOutOfBoundsException:

Exception in thread "main" java.lang.IndexOutOfBoundsException: 5
	at scala.collection.mutable.ResizableArray$class.apply(ResizableArray.scala:43)
	at scala.collection.mutable.ArrayBuffer.apply(ArrayBuffer.scala:48)
	at net.sansa_stack.query.spark.semantic.QuerySystem$$anonfun$refactorUnionQueries$1.apply$mcVI$sp(SparqlQuerySystem.scala:167)
	at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
	at net.sansa_stack.query.spark.semantic.QuerySystem.refactorUnionQueries(SparqlQuerySystem.scala:142)
	at net.sansa_stack.query.spark.semantic.QuerySystem$$anonfun$run$1.apply$mcVI$sp(SparqlQuerySystem.scala:48)
	at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
	at net.sansa_stack.query.spark.semantic.QuerySystem.run(SparqlQuerySystem.scala:46)

Modifying the query to

SELECT ?s
WHERE {
    ?s   ?p  ?o .
  }

at least makes the error disappear.

Queries with variables for predicates do not work; e.g. SELECT ?s { ?s ?p ?o }

DataFrames are created per predicate, and the appropriate DataFrames are looked up based on a query's predicates. Therefore, queries with variables in the predicate position fail. The simplest fix would be to create a UNION over all existing per-predicate DataFrames. A more sophisticated solution would be to try to re-use the candidate selector from Saleem's query federation work and/or my RDB2RDF work.
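
A hedged sketch of the simplest fix mentioned above, assuming the per-predicate DataFrames share a compatible (s, o) schema; the names are illustrative, not the actual SANSA code.

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.lit

// Union all per-predicate DataFrames and tag each row with its predicate, so a
// triple pattern with a predicate variable can be answered from the result.
def unionAllPredicates(byPredicate: Map[String, DataFrame]): DataFrame =
  byPredicate
    .map { case (predicate, df) => df.select("s", "o").withColumn("p", lit(predicate)) }
    .reduce(_ union _)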

Problem with a SPARQL query containing DISTINCT and ORDER BY

I'll demonstrate the problem using the Sparqlify example: https://github.com/SANSA-Stack/SANSA-Examples/blob/develop/sansa-examples-spark/src/main/scala/net/sansa_stack/examples/spark/query/Sparqlify.scala.

Run the Sparqlify class in the server/endpoint mode pointing to the rdf.nt as input (--input src/main/resources/rdf.nt). Execute the following query:

SELECT DISTINCT ?x ?y WHERE {
    ?x <http://xmlns.com/foaf/0.1/givenName> ?y .
}
ORDER BY ?y

Observe the error in the server console.

Exception in thread "Thread-31" java.lang.RuntimeException: java.lang.RuntimeException: org.apache.spark.sql.AnalysisException: cannot resolve '`a_1.o`' given input columns: [o, o_2, s, l, o_1]; line 4 pos 9;
'Sort ['a_1.o ASC NULLS FIRST, 'a_1.l ASC NULLS FIRST], true
+- Distinct
   +- Project [o#55 AS o#296, l#56 AS l#297, s#54 AS s#298, o#55 AS o_1#299, l#56 AS o_2#300]
      +- SubqueryAlias `a_1`
         +- SubqueryAlias `http://xmlns.com/foaf/0.1/givenname_xmlschema#string_lang`
            +- LogicalRDD [s#54, o#55, l#56], false

	at org.aksw.jena_sparql_api.web.utils.RunnableAsyncResponseSafe.run(RunnableAsyncResponseSafe.java:29)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: org.apache.spark.sql.AnalysisException: cannot resolve '`a_1.o`' given input columns: [o, o_2, s, l, o_1]; line 4 pos 9;
'Sort ['a_1.o ASC NULLS FIRST, 'a_1.l ASC NULLS FIRST], true
+- Distinct
   +- Project [o#55 AS o#296, l#56 AS l#297, s#54 AS s#298, o#55 AS o_1#299, l#56 AS o_2#300]
      +- SubqueryAlias `a_1`
         +- SubqueryAlias `http://xmlns.com/foaf/0.1/givenname_xmlschema#string_lang`
            +- LogicalRDD [s#54, o#55, l#56], false

	at org.aksw.jena_sparql_api.web.servlets.SparqlEndpointBase$3.run(SparqlEndpointBase.java:352)
	at org.aksw.jena_sparql_api.web.utils.RunnableAsyncResponseSafe.run(RunnableAsyncResponseSafe.java:26)
	... 1 more
Caused by: org.apache.spark.sql.AnalysisException: cannot resolve '`a_1.o`' given input columns: [o, o_2, s, l, o_1]; line 4 pos 9;
'Sort ['a_1.o ASC NULLS FIRST, 'a_1.l ASC NULLS FIRST], true
+- Distinct
   +- Project [o#55 AS o#296, l#56 AS l#297, s#54 AS s#298, o#55 AS o_1#299, l#56 AS o_2#300]
      +- SubqueryAlias `a_1`
         +- SubqueryAlias `http://xmlns.com/foaf/0.1/givenname_xmlschema#string_lang`
            +- LogicalRDD [s#54, o#55, l#56], false

	at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$3.applyOrElse(CheckAnalysis.scala:110)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$3.applyOrElse(CheckAnalysis.scala:107)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:278)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:278)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:277)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:275)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:275)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
	at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
	at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:275)
	at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:93)
	at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:93)
	at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:105)
	at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:105)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
	at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:104)
	at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:116)
	at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1$2.apply(QueryPlan.scala:121)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:121)
	at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$2.apply(QueryPlan.scala:126)
	at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
	at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:126)
	at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:93)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:107)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:85)
	at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:85)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:95)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:108)
	at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
	at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
	at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:78)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
	at net.sansa_stack.query.spark.sparqlify.QueryExecutionUtilsSpark.createQueryExecution(QueryExecutionUtilsSpark.java:23)
	at net.sansa_stack.query.spark.sparqlify.QueryExecutionSparqlifySpark.executeCoreSelect(QueryExecutionSparqlifySpark.java:38)
	at org.aksw.jena_sparql_api.core.QueryExecutionBaseSelect.execSelect(QueryExecutionBaseSelect.java:407)
	at org.aksw.jena_sparql_api.web.servlets.ProcessQuery.processQuery(ProcessQuery.java:117)
	at org.aksw.jena_sparql_api.web.servlets.ProcessQuery.processQuery(ProcessQuery.java:75)
	at org.aksw.jena_sparql_api.web.servlets.SparqlEndpointBase$3.run(SparqlEndpointBase.java:349)
	... 2 more

No results from QuerySystem with simple s-p-o query

When trying to run the QuerySystem like so

import java.io.File

import org.apache.commons.io.FileUtils
import org.apache.jena.graph.Triple
import org.apache.jena.riot.Lang
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

import net.sansa_stack.query.spark.semantic.QuerySystem

object Foo {
  val symbol = Map(
    "space" -> " " * 5,
    "blank" -> " ",
    "tabs" -> "\t",
    "newline" -> "\n",
    "colon" -> ":",
    "comma" -> ",",
    "hash" -> "#",
    "slash" -> "/",
    "question-mark" -> "?",
    "exclamation-mark" -> "!",
    "curly-bracket-left" -> "{",
    "curly-bracket-right" -> "}",
    "round-bracket-left" -> "(",
    "round-bracket-right" -> ")",
    "less-than" -> "<",
    "greater-than" -> ">",
    "at" -> "@",
    "dot" -> ".",
    "dots" -> "...",
    "asterisk" -> "*",
    "up-arrows" -> "^^")

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .master("local[*]")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .appName("SANSA - Semantic Partitioning")
      .getOrCreate()

    import net.sansa_stack.rdf.spark.io._
    import net.sansa_stack.rdf.spark.partition.semantic.RdfPartition

    val log: RDD[Triple] = spark.rdf(Lang.NTRIPLES)("/tmp/log.nt")
    val partition: RDD[String] = new RdfPartition(
      symbol, log, "/tmp/sem_partitions",
      spark.sparkContext.defaultMinPartitions).partitionGraph()

    val resultsDir = new File("/tmp/results")
    FileUtils.deleteDirectory(resultsDir)

    val qs = new QuerySystem(
        symbol,
        partition,
        "/tmp/query.sparql",
        resultsDir.getAbsolutePath,
        spark.sparkContext.defaultMinPartitions)
    qs.run()
  }
}

with /tmp/query.sparql containing the simple s-p-o SPARQL query

SELECT ?s
WHERE {
    ?s   ?p ?o .
  }

I only get empty files in the result directory, even though neither log nor partition are empty.

Cannot run the Query example

Here is my example code:

val sc = sparkSession.sparkContext
val sqlc = sparkSession.sqlContext

val filepath = "./data/xxxxx.ttl"

val triples = sparkSession.rdf(lang)(filepath)

// Query
import net.sansa_stack.query.spark.query._
val sparqlQuery = "SELECT * WHERE {?s ?p ?o} LIMIT 10"
val result = triples.sparql(sparqlQuery)
result.rdd.foreach(println)

I get the following error:

CAST TO string
CAST TO string
CAST TO double precision
CAST TO string
CAST TO string
Exception in thread "main" org.apache.spark.sql.catalyst.parser.ParseException:
mismatched input 'FROM' expecting <EOF>(line 2, pos 0)

== SQL ==
SELECT a_45.C_3 C_3, a_45.C_4 C_4, a_45.C_5 C_5, a_45.C_11 C_11, a_45.C_6 C_6, a_45.C_10 C_10, a_45.C_7 C_7, a_45.C_8 C_8, a_45.C_9 C_9, a_45.C_14 C_14, a_45.C_13 C_13, a_45.C_12 C_12
FROM
^^^
( SELECT a_1.s C_14, CAST(NULL AS string) C_13, CAST(NULL AS bigint) C_12, CAST(NULL AS string) C_11, a_1.o C_10, CAST('https://tac.nist.gov/tracks/SM-KBP/2019/ontologies/InterchangeOntology#justifiedBy' AS string) C_3, CAST(NULL AS string) C_5, CAST(NULL AS string) C_4, CAST(NULL AS string) C_7, CAST(NULL AS string) C_6, CAST(NULL AS double precision) C_9, CAST(NULL AS string) C_8, CAST('urn:x-arq:DefaultGraph' AS string) C_15

Am I missing something here? I am using the 0.6.1-SNAPSHOT version of "sansa-rdf" and "sansa-query".

Thanks

I get an error every time I try to load a large ttl file

I get "org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 28.0 failed 4 times, most recent failure: Lost task 0.3 in stage 28.0 (TID 43, 10.3.1.9, executor 20): java.lang.OutOfMemoryError: Requested array size exceeds VM limit" every time I load a large ttl file (450 MB).

Do you know why?

"null" values in Query result format

Hi

SANSA-Query returns strange "null" values in addition to the expected variables.
For the attached sample file (extension is changed as .ttl is not allowed) and for the simple query
"SELECT ?o WHERE {?s ?p ?o} "
test2.ttl.txt

I get the following values for ?o:

[Alice,,null] [Bob,,null] [Clare,,null] [null,null,c922def1e6fa0a4aec50621290367fd1] [null,null,94f03f04f0cef66952474449e02a1942] [CT,,null]

Is there an issue with variable binding?

Thanks

Graph partitioning-based query system doesn't support SPARQL functions which may take N arguments

I tried to run the graph partitioning-based query system as exemplified in the SANSA-Examples repository. The query I tried to execute is

SELECT DISTINCT  ?s
WHERE
  { ?s   <http://www.specialprivacy.eu/langs/splog#logEntryContent>  ?s0 .
    ?s0  <http://www.specialprivacy.eu/langs/usage-policy#hasData>  ?allSuperClassesVar1 ;
         <http://www.specialprivacy.eu/langs/usage-policy#hasPurpose>  <http://www.specialprivacy.eu/vocabs/purposes#Admin> ;
         <http://www.specialprivacy.eu/langs/usage-policy#hasRecipient>  <http://www.specialprivacy.eu/vocabs/recipients#Public> ;
         <http://www.specialprivacy.eu/langs/usage-policy#hasStorage>  <http://www.specialprivacy.eu/vocabs/locations#ThirdParty> .
    ?s   <http://www.specialprivacy.eu/langs/splog#dataSubject>  <http://www.example.com/users/433a4347-e2c7-4e07-a0fd-a054a62ba37f>
    FILTER(?allSuperClassesVar1 NOT IN (<http://www.specialprivacy.eu/vocabs/data#Activity>, <http://www.specialprivacy.eu/langs/usage-policy#AnyData>))
  }

And what I get is this stack trace

Exception in thread "main" java.lang.UnsupportedOperationException: Not support the expression of ExprFunctionN
	at net.sansa_stack.query.spark.graph.jena.ExprParser.visit(ExprParser.scala:72)
	at org.apache.jena.sparql.expr.ExprFunctionN.visit(ExprFunctionN.java:120)
	at org.apache.jena.sparql.algebra.walker.WalkerVisitor.visitExprFunction(WalkerVisitor.java:265)
	at org.apache.jena.sparql.algebra.walker.WalkerVisitor.visit(WalkerVisitor.java:252)
	at org.apache.jena.sparql.expr.ExprFunctionN.visit(ExprFunctionN.java:120)
	at org.apache.jena.sparql.algebra.walker.WalkerVisitor.walk(WalkerVisitor.java:91)
	at org.apache.jena.sparql.algebra.walker.Walker.walk$(Walker.java:104)
[...]

pointing to this match-case expression which explicitly restricts supported filters to Expressions (i.e. basically everything that takes only up to two arguments).

Problem querying RDF containing a triple with an object of type XSD double

I'll demonstrate the problem using the Sparqlify example: https://github.com/SANSA-Stack/SANSA-Examples/blob/develop/sansa-examples-spark/src/main/scala/net/sansa_stack/examples/spark/query/Sparqlify.scala.

Add a triple to src/main/resources/rdf.nt with an object of type XSD double. E.g., simply replace the following triple:

<http://commons.dbpedia.org/resource/Category:People> <http://commons.dbpedia.org/property/width> "100.0"^^<http://dbpedia.org/datatype/perCent> .

with:

<http://commons.dbpedia.org/resource/Category:People> <http://commons.dbpedia.org/property/width> "100.0"^^<http://www.w3.org/2001/XMLSchema#double> . 

Run the Sparqlify class in the server/endpoint mode pointing to the rdf.nt as input (--input src/main/resources/rdf.nt). Execute the simple SELECT * WHERE {?s ?p ?o} query in the browser. Observe the error in the server console.

Exception in thread "Thread-37" java.lang.RuntimeException: java.lang.RuntimeException: org.apache.spark.sql.catalyst.parser.ParseException: 
mismatched input 'FROM' expecting <EOF>(line 2, pos 0)

No results for query that matches blank node

Hi SANSA-Query team,

SANSA-Query does not return any result if the query matches a blank node.
Using following query
""" prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> SELECT ?stm ?en ?enAtr ?enAtrVal WHERE { ?stm rdf:subject ?en . ?stm rdf:predicate ?enAtr . ?stm rdf:object ?enAtrVal . } """

test2.ttl
returns expected results. But
test.ttl does not show ANY result, not even the non-blank-node results. I also see that in the blank node case, it outputs lots of debug statements such as
CAST TO string CAST TO string CAST TO string CAST TO string CAST TO string CAST TO string CAST TO string CAST TO string CAST TO string CAST TO string CAST TO string
I do set the valid language as
val lang = Lang.TURTLE

I see the same behavior for the ".nt" format. Please suggest what is causing this issue and whether there is a way to fix it.

Thanks
Sumit
