Welcome to Apache Jena, a Java framework for writing Semantic Web applications.
See https://jena.apache.org/ for the project website, including documentation.
The codebase for the active modules is in git:
Apache Jena
Home Page: https://jena.apache.org/
License: Apache License 2.0
Jena 4.5.0
Tracks PR #1375
See PR #1273.
The cause seems to be javacc/javacc#72
We encountered this issue when a SPARQL SERVICE clause sent a largish geometry literal (covering the USA) to Fuseki: the server stalls seemingly forever while parsing the query.
Ideally, the buffer would expand exponentially, or the alternative fix linked in the javacc issue could be applied. Currently, the parsing buffer is apparently grown in fixed steps of 2 KiB.
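The cost difference between the two growth policies can be sketched in plain Java (illustrative only; SimpleCharStream is generated javacc code, and the sizes here are hypothetical):

```java
// Sketch of the two growth policies, not Jena/javacc code. Growing a buffer
// by a fixed 2048 chars means O(n^2) total copying to reach an n-char
// literal; doubling the capacity keeps the total copying cost O(n).
public class BufferGrowth {
    // number of expansions needed to reach 'target' capacity, fixed step
    static int expansionsFixed(int start, int step, int target) {
        int cap = start, n = 0;
        while (cap < target) { cap += step; n++; }
        return n;
    }
    // number of expansions needed to reach 'target' capacity, doubling
    static int expansionsDoubling(int start, int target) {
        int cap = start, n = 0;
        while (cap < target) { cap *= 2; n++; }
        return n;
    }
    public static void main(String[] args) {
        int target = 16 * 1024 * 1024;   // e.g. a 16 MiB geometry literal
        System.out.println(expansionsFixed(4096, 2048, target));  // thousands
        System.out.println(expansionsDoubling(4096, target));     // about a dozen
    }
}
```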
"qtp771666241-136" #136 prio=5 os_prio=0 cpu=13385,35ms elapsed=5538,68s tid=0x00007fa188007800 nid=0x15730d runnable [0x00007fa1341f9000]
java.lang.Thread.State: RUNNABLE
at org.apache.jena.sparql.lang.arq.SimpleCharStream.ExpandBuff(SimpleCharStream.java:42)
at org.apache.jena.sparql.lang.arq.SimpleCharStream.FillBuff(SimpleCharStream.java:103)
at org.apache.jena.sparql.lang.arq.SimpleCharStream.readChar(SimpleCharStream.java:197)
at org.apache.jena.sparql.lang.arq.ARQParserTokenManager.jjMoveNfa_0(ARQParserTokenManager.java:4369)
at org.apache.jena.sparql.lang.arq.ARQParserTokenManager.jjMoveStringLiteralDfa0_0(ARQParserTokenManager.java:211)
at org.apache.jena.sparql.lang.arq.ARQParserTokenManager.getNextToken(ARQParserTokenManager.java:4793)
at org.apache.jena.sparql.lang.arq.ARQParser.jj_ntk_f(ARQParser.java:8162)
at org.apache.jena.sparql.lang.arq.ARQParser.PathElt(ARQParser.java:3603)
at org.apache.jena.sparql.lang.arq.ARQParser.PathEltOrInverse(ARQParser.java:3635)
at org.apache.jena.sparql.lang.arq.ARQParser.PathSequence(ARQParser.java:3565)
at org.apache.jena.sparql.lang.arq.ARQParser.PathAlternative(ARQParser.java:3544)
at org.apache.jena.sparql.lang.arq.ARQParser.Path(ARQParser.java:3538)
at org.apache.jena.sparql.lang.arq.ARQParser.VerbPath(ARQParser.java:3493)
at org.apache.jena.sparql.lang.arq.ARQParser.PropertyListPathNotEmpty(ARQParser.java:3418)
at org.apache.jena.sparql.lang.arq.ARQParser.TriplesSameSubjectPath(ARQParser.java:3365)
at org.apache.jena.sparql.lang.arq.ARQParser.TriplesBlock(ARQParser.java:2512)
at org.apache.jena.sparql.lang.arq.ARQParser.GroupGraphPatternSub(ARQParser.java:2425)
at org.apache.jena.sparql.lang.arq.ARQParser.GroupGraphPattern(ARQParser.java:2387)
at org.apache.jena.sparql.lang.arq.ARQParser.WhereClause(ARQParser.java:858)
at org.apache.jena.sparql.lang.arq.ARQParser.SelectQuery(ARQParser.java:137)
at org.apache.jena.sparql.lang.arq.ARQParser.Query(ARQParser.java:31)
at org.apache.jena.sparql.lang.arq.ARQParser.QueryUnit(ARQParser.java:22)
at org.apache.jena.sparql.lang.ParserARQ$1.exec(ParserARQ.java:48)
at org.apache.jena.sparql.lang.ParserARQ.perform(ParserARQ.java:95)
at org.apache.jena.sparql.lang.ParserARQ.parse$(ParserARQ.java:52)
at org.apache.jena.sparql.lang.SPARQLParser.parse(SPARQLParser.java:33)
at org.apache.jena.query.QueryFactory.parse(QueryFactory.java:144)
at org.apache.jena.query.QueryFactory.create(QueryFactory.java:83)
at org.apache.jena.fuseki.servlets.SPARQLQueryProcessor.execute(SPARQLQueryProcessor.java:251)
at org.apache.jena.fuseki.servlets.SPARQLQueryProcessor.executeBody(SPARQLQueryProcessor.java:234)
at org.apache.jena.fuseki.servlets.SPARQLQueryProcessor.execute(SPARQLQueryProcessor.java:213)
at org.apache.jena.fuseki.servlets.ActionService.executeLifecycle(ActionService.java:58)
at org.apache.jena.fuseki.servlets.SPARQLQueryProcessor.execPost(SPARQLQueryProcessor.java:83)
at org.apache.jena.fuseki.servlets.ActionProcessor.process(ActionProcessor.java:34)
at org.apache.jena.fuseki.servlets.ActionBase.process(ActionBase.java:55)
at org.apache.jena.fuseki.servlets.ActionExecLib.execActionSub(ActionExecLib.java:125)
at org.apache.jena.fuseki.servlets.ActionExecLib.execAction(ActionExecLib.java:99)
at org.apache.jena.fuseki.server.Dispatcher.dispatchAction(Dispatcher.java:164)
at org.apache.jena.fuseki.server.Dispatcher.process(Dispatcher.java:156)
at org.apache.jena.fuseki.server.Dispatcher.dispatch(Dispatcher.java:83)
at org.apache.jena.fuseki.servlets.FusekiFilter.doFilter(FusekiFilter.java:48)
at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:202)
at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1600)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61)
at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:450)
at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:387)
at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:202)
at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1600)
at org.apache.jena.fuseki.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:284)
at org.apache.jena.fuseki.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:247)
at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:210)
at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1600)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:506)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:131)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:578)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:122)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:223)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1571)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:221)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1378)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:176)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:463)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1544)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:174)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1300)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:129)
at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:717)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:122)
at org.eclipse.jetty.server.Server.handle(Server.java:562)
at org.eclipse.jetty.server.HttpChannel.lambda$handle$0(HttpChannel.java:505)
at org.eclipse.jetty.server.HttpChannel$$Lambda$636/0x000000084084d040.dispatch(Unknown Source)
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:762)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:497)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:282)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:319)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:100)
at org.eclipse.jetty.io.SelectableChannelEndPoint$1.run(SelectableChannelEndPoint.java:53)
at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.runTask(AdaptiveExecutionStrategy.java:412)
at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.consumeTask(AdaptiveExecutionStrategy.java:381)
at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.tryProduce(AdaptiveExecutionStrategy.java:268)
at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy.lambda$new$0(AdaptiveExecutionStrategy.java:138)
at org.eclipse.jetty.util.thread.strategy.AdaptiveExecutionStrategy$$Lambda$624/0x0000000840830c40.run(Unknown Source)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:407)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:894)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1038)
at java.lang.Thread.run([email protected]/Thread.java:829)
The query is something as simple as:
{ ?c ^<http://www.opengis.net/ont/geosparql#sfContains> "<?xml version=\"1.0\" encoding=\"UTF-8\"?><gml:MultiSurface xmlns:gml=\"http://www.opengis.net/ont/gml\" gml:id=\"g2015_2014_0.104.wkb_geometry\" srsDimension=\"2\" srsName=\"urn:ogc:def:crs:EPSG::3857\"><gml:surfaceMember><gml:Polygon gml:id=\"g2015_2014_0.104.wkb_geometry.1\"><gml:exterior><gml:LinearRing><gml:posList>HUGE POS LIST</gml:posList></gml:LinearRing></gml:exterior></gml:Polygon></gml:surfaceMember></gml:MultiSurface>"^^<http://www.opengis.net/ont/geosparql#gmlLiteral> }
with the literal automatically injected from a SERVICE clause.
4.6.0
If we want to execute a query in an isolated context with its own set of independent functions and p-functions, we will encounter some difficulties:
org.apache.jena.sparql.util.Context#copy performs only shallow copying, not deep, and there is no org.apache.jena.query.QueryExecution factory that takes a Graph together with a context.
So we have to spend some time finding the proper way.
I think this can be simplified by adding two new helper methods:
Context#copyWithFunctionRegistries(Context) - creates a copy of a Context with copied function and p-function registries
QueryExecutionFactory#create(Query, Graph, Context) - creates a QueryExecution for a Graph with the given context
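A stdlib-only sketch of why the shallow copy causes trouble (the map-of-maps stands in for a Context holding mutable registries; all names are hypothetical, not Jena code):

```java
import java.util.HashMap;
import java.util.Map;

// A shallow copy creates a new map but shares the registry objects, so a
// function registered through the "isolated" copy leaks into the original.
// A deep copy gives each copy its own registries.
public class ShallowContext {
    static Map<String, Map<String, String>> shallowCopy(Map<String, Map<String, String>> ctx) {
        return new HashMap<>(ctx);                       // new map, shared registries
    }

    static Map<String, Map<String, String>> deepCopy(Map<String, Map<String, String>> ctx) {
        Map<String, Map<String, String>> out = new HashMap<>();
        ctx.forEach((key, registry) -> out.put(key, new HashMap<>(registry)));
        return out;                                      // fresh registry per entry
    }

    // Register a function through a copy; report whether the original sees it.
    static boolean mutationLeaks(boolean deep) {
        Map<String, Map<String, String>> ctx = new HashMap<>();
        ctx.put("functionRegistry", new HashMap<>());
        Map<String, Map<String, String>> copy = deep ? deepCopy(ctx) : shallowCopy(ctx);
        copy.get("functionRegistry").put("my:fn", "impl");
        return ctx.get("functionRegistry").containsKey("my:fn");
    }

    public static void main(String[] args) {
        System.out.println(mutationLeaks(false)); // true: shallow copy leaks
        System.out.println(mutationLeaks(true));  // false: deep copy isolates
    }
}
```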
I'm investigating a performance degradation that seems to happen only in combination with HTTP compression and text/csv.
I'm opening this issue here first without a complete demo, in case anyone has an idea where to look.
I am running a request to fuseki like this:
curl http://fuseki/sparql -H 'Accept: text/csv' -d 'query=select *{?s ?p ?o.}limit 100000' | wc -l
100001
If I only add the -H 'Accept-Encoding: gzip'
header, as browsers do, the same query:
curl http://fuseki/sparql -H 'Accept: text/csv' -H 'Accept-Encoding: gzip' -d 'query=select *{?s ?p ?o.}limit 100000' | zcat | wc -l
100001
it takes 22 times as long!
Interestingly, though, the other result formats (JSON, TSV) are fast both with and without the Accept-Encoding header.
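One guess at where to look (purely a hypothesis, not confirmed from this report): if the CSV writer flushes once per row and the gzip stream is in sync-flush mode, every flush emits a deflate sync block, which costs both time and output size. A stdlib sketch of that effect:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

// Hypothetical illustration (not Fuseki code): flushing a sync-flushing
// GZIPOutputStream once per CSV row forces a deflate sync block per row,
// inflating the output compared to a single flush at the end.
public class GzipFlushCost {
    static int compressedSize(int rows, boolean flushPerRow) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bytes, true)) { // syncFlush=true
            for (int i = 0; i < rows; i++) {
                gz.write(("s" + i + ",p,o\n").getBytes(StandardCharsets.UTF_8));
                if (flushPerRow)
                    gz.flush();              // emits a sync block per row
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bytes.size();
    }

    public static void main(String[] args) {
        System.out.println(compressedSize(10_000, true));  // much larger output
        System.out.println(compressedSize(10_000, false)); // normal compression
    }
}
```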
In backward mode, it's impossible to retrieve functors, since they are always filtered out:
While in the forward and forward-RETE modes, the filterFunctors option is honored:
Would it be possible to honor the filterFunctors option in backward mode too?
Application-added servlet filters are installed after the FusekiFilter, despite the javadoc saying they come before it.
This means they can't preprocess Fuseki data service requests.
Jena's current approach to fetching bindings for a SERVICE clause is to instantiate the service pattern for each binding.
This issue is about adding support for bulk requests.
Opening this issue to allocate an issue number for my corresponding PR draft.
Hi,
Can Fuseki be used with the Jena Model API, similar to Virtuoso's VirtGraph?
VirtGraph graph = new VirtGraph("xxx/sparql", "jdbc:virtuoso://xxx:1111", "dba", "xxx");
VirtModel model = new VirtModel(graph);
model.add(...);
That is, rather than by writing SPARQL INSERT/DELETE:
INSERT DATA {
<xxx> <xxx> <xxx> .
<xxx> <xxx> <xxx> .
}
I find the Model API more convenient than SPARQL; for example, Model.createTypedLiteral does not require me to spell out a specific type.
I know Jena, TDB, and Fuseki, and I have read the Apache Jena Fuseki documentation, but I didn't find the API I wanted.
Best Regards
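For what it's worth, Jena does ship a model-level remote API: RDFConnection (in jena-rdfconnection) can connect to a Fuseki endpoint and transfer whole models without hand-written SPARQL. As a stdlib-only sketch of the mapping the question describes - turning Model-style add(s, p, o) calls into one SPARQL update - with hypothetical names:

```java
import java.util.List;

// Hypothetical stdlib-only sketch: how model-style add(s, p, o) calls can
// be batched into a single SPARQL INSERT DATA body. Jena users would
// normally use RDFConnection from jena-rdfconnection, which handles the
// transfer (and more efficient formats) for them.
public class InsertDataBuilder {
    record Triple(String s, String p, String o) {}

    static String toInsertData(List<Triple> triples) {
        StringBuilder sb = new StringBuilder("INSERT DATA {\n");
        for (Triple t : triples)
            sb.append("  <").append(t.s()).append("> <").append(t.p())
              .append("> <").append(t.o()).append("> .\n");
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        String update = toInsertData(List.of(
            new Triple("http://ex/s", "http://ex/p", "http://ex/o")));
        System.out.println(update);
    }
}
```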
Something is wrong in OpAsQuery that causes expressions to get lost:
Query before = QueryFactory.create("SELECT ?z { BIND('x' AS ?y) BIND(?y AS ?z) }");
Op op = Algebra.compile(before);
Query after = OpAsQuery.asQuery(op);
System.out.println(after);
Expected result:
SELECT ?z {
SELECT ('x' AS ?y) (?y AS ?z)
WHERE
{ }
}
or equivalently
SELECT (?y AS ?z) {
SELECT ('x' AS ?y)
WHERE
{ }
}
Actual result:
SELECT (?y AS ?z) // Missing ('x' AS ?y)
WHERE
{ }
It seems it's related to projecting only some of the variables, and
it may be related to some magic with unit tables - but I'm not sure.
If possible, I'd prefer OpAsQuery to not try to optimize 'unused' binds (isn't that something the optimizer should take care of?).
For example, if the constant 'x'
was replaced with the variable ?x
as in
Query before = QueryFactory.create("SELECT ?z { BIND(?x AS ?y) BIND(?y AS ?z) }");
then IMO the expected result should nonetheless be a query that includes all the expressions such as:
SELECT ?z {
SELECT (?x AS ?y) (?y AS ?z)
WHERE
{ }
}
W.r.t. the 'the transformed query must yield the same output' contract, an equivalent transformation would actually be
SELECT ?z
WHERE
{ }
because the other variables are not bound. But then the expressions get lost, which might be undesired.
The reason is that it may be desirable to treat such graph patterns as templates and apply substitutions afterwards - so values would eventually be supplied even if there is a unit table now.
First off, thanks for the great software; we use it a lot and it's brilliant ;)
I stumbled upon something today while trying to parse a JSON-LD response from Jena/Fuseki. I have a test store with a few books whose URIs look like this: <http://onbetween.ch/3ms/cms#book_1>.
If there's a custom predicate <http://onbetween.ch/3ms/cms#book>, it overlaps with the book URIs and the JSON-LD serialisation is no longer valid :-/ as I get book URIs like this: book:_1.
Here's the very simple CONSTRUCT query that I send to Fuseki (version 4.4.0):
In [67]: response = requests.post('http://localhost:3030/threems_example', data={'query': """
...: PREFIX cms: <http://onbetween.ch/3ms/cms#>
...:
...: CONSTRUCT {
...: ?a cms:book ?c
...: }
...: FROM <http://example/cmstest_data>
...: WHERE {
...: ?a a cms:Collection.
...: VALUES ?a { cms:collection_1 }.
...: ?c a cms:Book.
...: }
...: """})
In [68]: print(response.content.decode())
@prefix schema: <http://schema.org/> .
@prefix threems: <http://onbetween.ch/3ms/core#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix cms: <http://onbetween.ch/3ms/cms#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix cmsapi: <http://onbetween.ch/3ms/cmsapi#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xml: <http://www.w3.org/XML/1998/namespace> .
@prefix cmsui: <http://onbetween.ch/3ms/cmsui#> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
cms:collection_1 cms:book cms:book_B , cms:book_C , cms:book_2 , cms:book_1 , cms:book_A .
Which is correct: a simple collection with 5 books.
Now, if I request JSON-LD from Fuseki (just adding the header Accept: application/ld+json to the query):
In [73]: response = requests.post('http://localhost:3030/threems_example', data={'query': """
...: PREFIX cms: <http://onbetween.ch/3ms/cms#>
...:
...: CONSTRUCT {
...: ?a cms:book ?c
...: }
...: FROM <http://example/cmstest_data>
...: WHERE {
...: ?a a cms:Collection.
...: VALUES ?a { cms:collection_1 }.
...: ?c a cms:Book.
...: }
...: """}, headers={'Accept': 'application/ld+json'})
In [74]: print(response.content.decode())
{
"@id" : "cms:collection_1",
"book" : [ "book:_B", "book:_C", "book:_2", "book:_1", "book:_A" ],
"@context" : {
"book" : {
"@id" : "http://onbetween.ch/3ms/cms#book",
"@type" : "@id"
},
"schema" : "http://schema.org/",
"threems" : "http://onbetween.ch/3ms/core#",
"owl" : "http://www.w3.org/2002/07/owl#",
"cms" : "http://onbetween.ch/3ms/cms#",
"xsd" : "http://www.w3.org/2001/XMLSchema#",
"skos" : "http://www.w3.org/2004/02/skos/core#",
"rdfs" : "http://www.w3.org/2000/01/rdf-schema#",
"cmsapi" : "http://onbetween.ch/3ms/cmsapi#",
"xml" : "http://www.w3.org/XML/1998/namespace",
"rdf" : "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
"cmsui" : "http://onbetween.ch/3ms/cmsui#",
"dc" : "http://purl.org/dc/elements/1.1/"
}
}
Fuseki serializes the books like this:
"book" : [ "book:_B", "book:_C", "book:_2", "book:_1", "book:_A" ],
I wasn't sure whether that's actually correct JSON-LD serialisation, but trying it on the JSON-LD playground I get this interpretation:
<http://onbetween.ch/3ms/cms#collection_1> <http://onbetween.ch/3ms/cms#book> <book:_1> .
<http://onbetween.ch/3ms/cms#collection_1> <http://onbetween.ch/3ms/cms#book> <book:_2> .
<http://onbetween.ch/3ms/cms#collection_1> <http://onbetween.ch/3ms/cms#book> <book:_A> .
<http://onbetween.ch/3ms/cms#collection_1> <http://onbetween.ch/3ms/cms#book> <book:_B> .
<http://onbetween.ch/3ms/cms#collection_1> <http://onbetween.ch/3ms/cms#book> <book:_C> .
Which is incorrect. Should be:
<http://onbetween.ch/3ms/cms#collection_1> <http://onbetween.ch/3ms/cms#book> <http://onbetween.ch/3ms/cms#book_1> .
<http://onbetween.ch/3ms/cms#collection_1> <http://onbetween.ch/3ms/cms#book> <http://onbetween.ch/3ms/cms#book_2> .
<http://onbetween.ch/3ms/cms#collection_1> <http://onbetween.ch/3ms/cms#book> <http://onbetween.ch/3ms/cms#book_A> .
<http://onbetween.ch/3ms/cms#collection_1> <http://onbetween.ch/3ms/cms#book> <http://onbetween.ch/3ms/cms#book_B> .
<http://onbetween.ch/3ms/cms#collection_1> <http://onbetween.ch/3ms/cms#book> <http://onbetween.ch/3ms/cms#book_C> .
It's a bit of an edge case, but for us it can actually happen when working with organisation-specific ontologies.
I'm more of a Python dev nowadays, but if I can help, let me know. If you can confirm it's a bug, I can also look into it, though I'm not 100% sure this isn't a JSON-LD spec issue.
Also, if the predicate serialisation used the prefixes, i.e. cms:book, this wouldn't happen; we would have cms:book_1, something like:
{
"@id": "cms:collection_1",
"cms:book": [
"cms:book_B",
"cms:book_C",
"cms:book_2",
"cms:book_1",
"cms:book_A"
],
"@context": {
"cms:book": {
"@id": "http://onbetween.ch/3ms/cms#book",
"@type": "@id"
},
"schema": "http://schema.org/",
"threems": "http://onbetween.ch/3ms/core#",
"owl": "http://www.w3.org/2002/07/owl#",
"cms": "http://onbetween.ch/3ms/cms#",
"xsd": "http://www.w3.org/2001/XMLSchema#",
"skos": "http://www.w3.org/2004/02/skos/core#",
"rdfs": "http://www.w3.org/2000/01/rdf-schema#",
"cmsapi": "http://onbetween.ch/3ms/cmsapi#",
"xml": "http://www.w3.org/XML/1998/namespace",
"rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
"cmsui": "http://onbetween.ch/3ms/cmsui#",
"dc": "http://purl.org/dc/elements/1.1/"
}
}
What do you think ?
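For reference, my reading of the JSON-LD 1.1 spec is that a term with an expanded definition (like "book" in the @context above) is not usable as a compact-IRI prefix unless it sets "@prefix": true, so a conforming reader keeps "book:_1" as an absolute IRI with scheme book. A stdlib illustration of that asymmetry (not Jena's JSON-LD code):

```java
import java.util.Map;

// Minimal compact-IRI expansion: only declared prefixes expand; any other
// value containing a colon is kept as-is, i.e. read as an absolute IRI.
// This mirrors why "book:_1" round-trips to <book:_1>, not cms:book_1.
public class CompactIriClash {
    static String expand(String value, Map<String, String> prefixes) {
        int colon = value.indexOf(':');
        if (colon > 0) {
            String prefix = value.substring(0, colon);
            if (prefixes.containsKey(prefix))
                return prefixes.get(prefix) + value.substring(colon + 1);
        }
        return value; // unknown prefix: stays as-is, e.g. scheme "book"
    }

    public static void main(String[] args) {
        Map<String, String> prefixes = Map.of("cms", "http://onbetween.ch/3ms/cms#");
        // What the writer meant:
        System.out.println(expand("cms:book_1", prefixes)); // http://onbetween.ch/3ms/cms#book_1
        // What a conforming reader recovers from the emitted value:
        System.out.println(expand("book:_1", prefixes));    // book:_1  (information lost)
    }
}
```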
Minor.
This character is missing in atlas/lib/Chars.java and TokenType.
The following issue came up using the Maven release of Jena ARQ 4.4.0. I see it was just updated to 4.5.0.
In the StreamRDFWriter class, calling:
public static StreamRDF getWriterStream(OutputStream output, RDFFormat format)
causes a hang / lock-up condition. However, calling:
public static StreamRDF getWriterStream(OutputStream output, RDFFormat format, Context context)
with a null context does not hang the process,
...at least in the application I've developed as an extension to OpenRefine. See RDF Transform.
Reviewing the code doesn't reveal any issue, as getWriterStream(output, format)
simply calls getWriterStream(output, format, null).
Very odd. Perhaps a test pattern can help.
Additionally, there are some comment issues and/or possible code corrections for these functions. For:
getWriterStream(OutputStream output, RDFFormat format, Context context)
the comments declare:
@return StreamRDF, or null if format is not registered for streaming.
No mention of exceptions. However, the code clearly throws an exception:
if ( x == null ) throw new RiotException("Failed to find a writer factory for "+format) ;
As documented, a return null;
would be enough.
Some light humor...
Why are these StreamRDF... classes in ...riot/system/ and not in .../riot/writer/stream/?
And what's up with ...riot/system/stream only holding Locator... classes?
There is a lot of good resource material for StreamRDF that could use some attention. The documentation on RIOT streaming (see Working with RDF Streams in Apache Jena) needs some love to describe the access to and use of the various stream classes - particularly the use of RDFFormat vs Lang (RDFFormat seems to be the new hotness).
Most of the documentation is centered on using datasets, models, and graphs. Fair enough. However, there are pressing use cases for processing large RDF datasets where the "pretty" printers just don't scale, as documented. An iterative, streaming service is needed that doesn't first load everything into a structure (i.e., duplicate the data), whether "in memory" or "persistent". Sequentially reading non-RDF data, processing discrete units into an RDF-compliant form, and writing (preferably in BLOCKS form) directly to an RDF file (or to a repository) is more performant, even if there are some duplicative results.
Hmmm, the C-coded Serd library seems to be very performant and small, and it converts between several formats. Could its code be reviewed and ported to Java to help speed up this kind of processing?
Class issues...documentation issues...a little frustration. I do plan on spending some time contributing to this effort...at least the documentation part.
Thanks for Jena.
Report: users@ email.
--file=someFile.ttl --update
still results in a read-only dataset being configured.
This applies to both Fuseki binaries with a command-line interface.
This file would benefit from some updating to mention use of GH issues.
In version 4.5.0 of Apache Jena Fuseki, line 117 is:
exec "$JAVA" $JVM_ARGS $LOGGING -cp "$CP" org.apache.jena.fuseki.cmd.FusekiCmd "$@"
$LOGGING is set using $FUSEKI_HOME. I define the FUSEKI_HOME variable in my environment, and it has an embedded space. Consequently, the fuseki-server script expands the LOGGING variable into multiple arguments, and Java doesn't parse the arguments as the script intends. Can/should $LOGGING be surrounded by double quotes?
Include parsing based on what is out there, not just the specs. For bearer tokens, some authentication mechanisms issue tokens that do not conform to the grammar rules in the RFCs (e.g. JWTs with padded components).
Switch to a builder style for creating AuthHeader records directly, without requiring use of the header parser.
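A sketch of the builder style being proposed (all names here are hypothetical, not Jena's actual AuthHeader API):

```java
// Hypothetical builder sketch: construct the auth-header record directly
// instead of round-tripping a string through the header parser, so tokens
// that violate the RFC grammar (e.g. padded JWTs) can still be carried.
public class AuthHeaderBuilderSketch {
    record AuthHeader(String scheme, String token) {}

    static class Builder {
        private String scheme;
        private String token;
        Builder scheme(String s) { this.scheme = s; return this; }
        Builder token(String t)  { this.token = t; return this; }
        AuthHeader build() {
            if (scheme == null) throw new IllegalStateException("scheme required");
            return new AuthHeader(scheme, token);
        }
    }

    public static void main(String[] args) {
        // A JWT with "=" padding would fail strict RFC 7235 token68 parsing,
        // but can be set directly:
        AuthHeader h = new Builder().scheme("Bearer")
                                    .token("eyJhbGciOi.eyJzdWIi.c2ln==")
                                    .build();
        System.out.println(h.scheme() + " " + h.token());
    }
}
```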
If I register a WatchStatementListener on this model and then call the above function to remove a sub-graph, the listener is not notified.
The test below fails for jena-shacl but passes for topbraid-shacl.
Test:
public class JenaShaclBugTest {
@Test
public void testJenaShacl() {
testShaclRules(JenaShaclBugTest::runJenaShaclRules);
}
@Test
public void testTopbraidShacl() {
testShaclRules(JenaShaclBugTest::runTopbraidShaclRules);
}
private void testShaclRules(BinaryOperator<Model> ruleEngine) {
Model dataModel = createDataGraph();
dataModel.write(System.out, "ttl");
Model rules = createShapeGraph();
rules.write(System.out, "ttl");
Model result = ruleEngine.apply(dataModel, rules);
result.write(System.out, "ttl");
List<Statement> reports = result
.listStatements(null, RDF.type, SHACLM.ValidationReport).toList();
Assertions.assertEquals(1, reports.size());
List<Resource> results = reports.get(0).getSubject()
.listProperties(SHACLM.result).mapWith(Statement::getResource).toList();
Assertions.assertEquals(1, results.size());
}
private static Model runJenaShaclRules(Model dataModel, Model rules) {
ValidationReport res = ShaclValidator.get().validate(rules.getGraph(), dataModel.getGraph());
return res.getModel();
}
private static Model runTopbraidShaclRules(Model dataModel, Model shapesModel) {
Resource report = org.topbraid.shacl.validation.ValidationUtil.validateModel(dataModel, shapesModel, false);
return report.getModel();
}
private static Model createDataGraph() {
String ns = "http://example#";
Model m = ModelFactory.createDefaultModel()
.setNsPrefixes(PrefixMapping.Standard)
.setNsPrefix("ex", ns);
Resource classA = m.createResource(ns + "A");
Resource classB = m.createResource(ns + "B");
m.createResource(ns + "Individual", classA);
classA.addProperty(RDFS.subClassOf, classB);
return m;
}
private static Model createShapeGraph() {
String ns = "http://example#";
Model m = ModelFactory.createDefaultModel()
.setNsPrefixes(PrefixMapping.Standard)
.setNsPrefix("sh", SHACL.getURI())
.setNsPrefix("ex", ns);
m.createProperty("http://example/test").addProperty(OWL.imports, m.createProperty(SHACL.getURI()));
Property customPredicate = m.createProperty(ns + "custom");
Resource specialTarget = m.createResource(ns + "SpecialTarget");
// a sh:NodeShape
m.createResource(ns + "AShape", SHACLM.NodeShape)
.addProperty(SHACLM.target, specialTarget)
.addProperty(customPredicate, RDFS.subClassOf)
;
// a sh:ConstraintComponent
m.createResource(ns + "CustomConstraintComponent")
.addLiteral(SHACLM.message, "Fail: can't find object for predicate")
.addProperty(RDF.type, SHACLM.ConstraintComponent)
.addProperty(SHACLM.parameter,
m.createResource().addProperty(SHACLM.path, customPredicate))
.addProperty(SHACLM.nodeValidator, m.createResource()
.addProperty(RDF.type, SHACLM.SPARQLSelectValidator)
.addProperty(SHACLM.prefixes, m.createResource()
.addProperty(SHACLM.declare, m.createResource()
.addLiteral(SHACLM.namespace, RDF.getURI())
.addLiteral(SHACLM.prefix, "rdf")
)
)
.addLiteral(SHACLM.select, "" +
"SELECT $this (rdf:type AS ?path) (?class as ?value)\n" +
"WHERE {\n" +
" $this a ?class .\n" +
" FILTER NOT EXISTS {\n" +
" ?any $custom ?class\n" +
" }\n" +
"}")
)
;
// a sh:SPARQLTarget
specialTarget
.addProperty(RDF.type, SHACLM.SPARQLTarget)
.addProperty(SHACLM.prefixes, m.createResource()
.addProperty(SHACLM.declare, m.createResource()
.addLiteral(SHACLM.namespace, ns)
.addLiteral(SHACLM.prefix, "ex")
)
)
.addLiteral(SHACLM.select, "" +
"SELECT ?node\n" +
"WHERE {\n" +
" ?node a ex:A\n" +
"}")
;
return m;
}
}
dependencies:
<dependency>
<groupId>org.apache.jena</groupId>
<artifactId>jena-arq</artifactId>
<version>4.4.0</version>
</dependency>
<dependency>
<groupId>org.apache.jena</groupId>
<artifactId>jena-shacl</artifactId>
<version>4.4.0</version>
</dependency>
<dependency>
<groupId>org.topbraid</groupId>
<artifactId>shacl</artifactId>
<version>1.4.2</version>
</dependency>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-api</artifactId>
<version>5.8.2</version>
<scope>test</scope>
</dependency>
resources:
Adding templates to customize the GitHub experience.
This is not to preclude receiving pull requests, issues, and bug reports via other channels such as JIRA.
I have a Java program that, when run inside my IDE (NetBeans), makes the call RIOT.getContext() as its first line and returns:
symbol:http://jena.apache.org/ARQ#regexImpl = symbol:http://jena.apache.org/ARQ#javaRegex
symbol:http://jena.apache.org/ARQ#registryFunctions = org.apache.jena.sparql.function.FunctionRegistry@4659191b
symbol:http://jena.apache.org/ARQ#constantBNodeLabels = true
symbol:http://jena.apache.org/ARQ#registryPropertyFunctions = org.apache.jena.sparql.pfunction.PropertyFunctionRegistry@55634720
symbol:http://jena.apache.org/ARQ#stageGenerator = org.apache.jena.tdb2.solver.StageGeneratorDirectTDB@4b0d79fc
symbol:http://jena.apache.org/ARQ#enablePropertyFunctions = true
symbol:http://jena.apache.org/ARQ#registryServiceExecutors = org.apache.jena.sparql.service.ServiceExecutorRegistry@4c1909a3
symbol:http://jena.apache.org/ARQ#strictSPARQL = false
But when I run the program using java -jar myprog.jar, ARQ.getContext() returns null.
My JDK:
openjdk version "17.0.3" 2022-04-19
OpenJDK Runtime Environment GraalVM CE 22.1.0 (build 17.0.3+7-jvmci-22.1-b06)
OpenJDK 64-Bit Server VM GraalVM CE 22.1.0 (build 17.0.3+7-jvmci-22.1-b06, mixed mode, sharing)
Any thoughts on why this is happening?
Currently (4.5.0) the Fuseki UI is built in jena-fuseki-ui and then pulled into jena-fuseki-webapp.
This issue is to change to delivering the UI in the jena-fuseki-ui
artifact jar file. The change is a step towards having the UI available in Fuseki Main.
This issue is not about changing the way the UI is delivered to apache-jena-fuseki, where it is the webapp directory, nor how it is delivered to jena-fuseki-war.
If I run shex p shapes.shex, and parsing fails, an error message is printed, but the Unix exit code is 0. I expect it to be non-zero.
The shacl command behaves as expected, and exits with code 1 when parsing fails.
I'm using this Jena version on Mac, installed via Homebrew:
Jena: VERSION: 4.4.0
Jena: BUILD_DATE: 2022-01-30T15:09:41Z
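The fix presumably amounts to propagating the parse outcome into System.exit, as shacl does. A hypothetical stdlib sketch (not the actual shex command code; the stand-in "parser" just rejects blank input):

```java
// Hypothetical sketch: compute an exit status from the parse outcome and
// pass it to System.exit, instead of printing the error and falling
// through to a 0 status. Not the actual shex command code.
public class ExitCodes {
    static int runParse(String input) {
        try {
            parse(input);
            return 0;               // success -> exit code 0
        } catch (RuntimeException ex) {
            System.err.println("Parse error: " + ex.getMessage());
            return 1;               // failure -> non-zero, visible to scripts
        }
    }

    // Stand-in parser for the sketch: rejects empty/blank input.
    static void parse(String input) {
        if (input == null || input.isBlank())
            throw new RuntimeException("empty shapes document");
    }

    public static void main(String[] args) {
        System.exit(runParse(args.length > 0 ? args[0] : ""));
    }
}
```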
We are currently stumbling over an issue when consuming GML exported from e.g. WFS:
it cannot be used (directly) in Jena because the official GML standard uses a different xmlns from the one Jena searches for, even though the local names of the elements are the same.
I've also opened opengeospatial/ogc-geosparql#330 for clarification
Preparation for PR #1273 .
Creating an issue so I don't lose track of this.
We are using Bootstrap Vue for the UI. It gives developers the same features that vanilla Bootstrap offers, but instead of using CSS classes directly, it provides ready-to-use Vue components.
While it was great for the rapid migration from Backbone.js to Vue.js, we are unfortunately now stuck on Vue 2, as Bootstrap Vue has not been upgraded to Vue 3 yet - bootstrap-vue/bootstrap-vue#5196
I think we might be able to move to vanilla Bootstrap and create simple components, assuming we are not using too many Bootstrap Vue components.
Alternatively, we can move to another UI library, with the disadvantage that unless it's a variation of Bootstrap, we would have a different UI look and feel.
Once this is solved, we might be ready to move to Vue 3 and Vite.
Currently, from javacc 6.0.
This parser is only for jena-core, standalone, running tests.
See also #1328.
Query query = QueryFactory.create("select ?s where {?s ?p ?o} limit 10");
AuthEnv.get().registerUsernamePassword(new URI("https://myserver.edu/sparql-auth"), Settings.user, Settings.password);
HttpClient client = HttpClient.newBuilder()
.connectTimeout(Duration.ofSeconds(10))
.version(HttpClient.Version.HTTP_1_1)
.build();
RDFConnection con = RDFConnectionRemote.create()
.destination(host)
.queryEndpoint("sparql-auth")
.httpClient(client)
.build();
QueryExecution qe = con.query(query);
ResultSet results = qe.execSelect();
is throwing an error:
Exception in thread "main" java.lang.IllegalArgumentException: invalid header value: "Digest username="username", realm="SPARQL", nonce="b7ef0b701772c22b88db561ac58bd55f", uri="/sparql-auth?query=SELECT ?s
WHERE
{ ?s ?p ?o }
LIMIT 10
", qop=auth, cnonce="D8DA17AFEAD93444", nc=00000000, response="4d8ba8cef8aaa7f22d1adf811c6ac903", opaque="5ebe2294ecd0e0f08eab7690d2a6ee69""
at java.net.http/jdk.internal.net.http.common.Utils.newIAE(Utils.java:286)
at java.net.http/jdk.internal.net.http.HttpRequestBuilderImpl.checkNameAndValue(HttpRequestBuilderImpl.java:113)
at java.net.http/jdk.internal.net.http.HttpRequestBuilderImpl.setHeader(HttpRequestBuilderImpl.java:119)
at java.net.http/jdk.internal.net.http.HttpRequestBuilderImpl.setHeader(HttpRequestBuilderImpl.java:43)
at org.apache.jena.http.auth.DigestLib.lambda$buildDigest$0(DigestLib.java:119)
at org.apache.jena.http.auth.AuthLib.handle401(AuthLib.java:124)
at org.apache.jena.http.auth.AuthLib.authExecute(AuthLib.java:54)
at org.apache.jena.http.HttpLib.execute(HttpLib.java:536)
at org.apache.jena.http.HttpLib.execute(HttpLib.java:493)
at org.apache.jena.sparql.exec.http.QueryExecHTTP.executeQuery(QueryExecHTTP.java:497)
at org.apache.jena.sparql.exec.http.QueryExecHTTP.performQuery(QueryExecHTTP.java:471)
at org.apache.jena.sparql.exec.http.QueryExecHTTP.execRowSet(QueryExecHTTP.java:168)
at org.apache.jena.sparql.exec.http.QueryExecHTTP.select(QueryExecHTTP.java:160)
at org.apache.jena.sparql.exec.QueryExecutionAdapter.execSelect(QueryExecutionAdapter.java:117)
at org.apache.jena.sparql.exec.QueryExecutionCompat.execSelect(QueryExecutionCompat.java:97)
at com.mycompany.rad.RDF.<init>(RDF.java:90)
at com.mycompany.rad.RDF.main(RDF.java:105)
Changing
QueryExecution qe = con.query(query);
to
QueryExecution qe = con.query("select ?s where {?s ?p ?o} limit 10");
makes it work.
My environment is:
java -version
openjdk version "17.0.3" 2022-04-19
OpenJDK Runtime Environment GraalVM CE 22.1.0 (build 17.0.3+7-jvmci-22.1-b06)
OpenJDK 64-Bit Server VM GraalVM CE 22.1.0 (build 17.0.3+7-jvmci-22.1-b06, mixed mode, sharing)
Perhaps the SPARQL string isn't being URL-encoded properly?
When a query string longer than the GET request threshold is sent, i.e. POST send mode is used, the body content isn't marked as UTF-8 encoded:
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX geo: <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX coy: <https://schema.coypu.org/#>
PREFIX data: <https://data.coypu.org/country/>
PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX bd: <http://www.bigdata.com/rdf#>
PREFIX mwapi: <https://www.mediawiki.org/ontology#API/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
SELECT * {
BIND("Curaçao" AS ?str)
SERVICE <https://query.wikidata.org/sparql> {
SELECT ?item ?itemLabel ?typeLabel ?str {
SERVICE wikibase:mwapi {
bd:serviceParam wikibase:endpoint "www.wikidata.org";
wikibase:api "EntitySearch";
mwapi:search ?str ;
mwapi:language "en";
wikibase:limit 5 .
?item wikibase:apiOutputItem mwapi:item.
?num wikibase:apiOrdinal true.
}
?item (wdt:P279|wdt:P31) ?type
FILTER(?type not in (wd:Q4167410, wd:Q13442814, wd:Q13433827))
FILTER (EXISTS {?type wdt:P279* wd:Q618123} || EXISTS {?type wdt:P279* wd:Q1048835 })
SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
}
}
Ignore the meaning of the query; it just does an entity lookup in Wikidata via a SERVICE clause. The important thing is the BIND with the string "Curaçao", which contains a non-ASCII character.
The result of this query is empty with (at least) Jena 4.4.0 and 4.5.0-SNAPSHOT - it works with Jena 4.1.0 though. It also works if we remove one of the FILTERs in the query, which leads to a simple GET request.
I remember that the HTTP API was switched to the Java 11 built-in one; that might be the point where the behavior changed.
Note, I know that according to the standard the body should always be treated as UTF-8; at least it is stated:
Note that UTF-8 is the only valid charset here.
so it looks more like a Blazegraph issue in the end.
Nevertheless, the UTF-8 encoding was probably set explicitly in the old HTTP API implementation.
I tried a quick fix in the method QueryExecHTTP::executeQueryPostBody:
// Use SPARQL query body and MIME type.
private HttpRequest.Builder executeQueryPostBody(Params thisParams, String acceptHeader) {
// Use thisParams (for default-graph-uri etc)
String requestURL = requestURL(service, thisParams.httpString());
HttpRequest.Builder builder = HttpLib.requestBuilder(requestURL, httpHeaders, readTimeout, readTimeoutUnit);
contentTypeHeader(builder, WebContent.contentTypeSPARQLQuery + "; charset=UTF-8"); // this line has been changed
acceptHeader(builder, acceptHeader);
return builder.POST(BodyPublishers.ofString(queryString));
}
This solved the issue. I don't think it should be necessary when following the standard, but I doubt it's harmful to set the encoding explicitly?
4.5.0
As of now, a Model can be written in many different RDFFormats.
However, I noticed there are no JSON-LD 1.1 Compact options, as there are for JSON-LD 1.0.
Is this something that could be added?
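For reference, this is how the existing JSON-LD 1.0 compacted output is selected today. This is a minimal sketch; the `RDFFormat.JSONLD_COMPACT_PRETTY` constant is the JSON-LD 1.0 variant as I understand the current API, and the example data is made up:

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.riot.RDFFormat;

public class JsonLdWrite {
    public static void main(String[] args) {
        // Build a tiny example model (placeholder data).
        Model model = ModelFactory.createDefaultModel();
        model.createResource("http://example.org/s")
             .addProperty(model.createProperty("http://example.org/p"), "o");
        // JSON-LD 1.0 compacted output; a 1.1 equivalent is what this issue asks for.
        RDFDataMgr.write(System.out, model, RDFFormat.JSONLD_COMPACT_PRETTY);
    }
}
```

A JSON-LD 1.1 counterpart would presumably be a matter of adding the corresponding RDFFormat variants for the newer writer.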
and convert the Java modifications of ViolationCodes.java to the XML-driven update process.
would be nice to have:
just a suggestion!
At the moment the only blocker for Vue 3, I think, is Bootstrap Vue that hasn't upgraded to Vue 3 yet.
bootstrap-vue/bootstrap-vue#5196
The same happened to me in Cylc UI, where we adopted Vuetify. But Jena UI is a much smaller and simpler app, so I'm thinking of doing the following: remove bootstrap-vue and simply use bootstrap.
That should prevent us from being locked into a version until another library upgrades to Vue 4, 5, etc. Furthermore, using plain Bootstrap makes it easy too in case we someday have to move to React/Angular/Svelte/etc. (as we did from Backbone → Vue).
I'm on version 4.3.2 through the Docker image secoresearch/fuseki and have a dataset named '/fedora'. The database is growing like crazy after deletes and posts of the same element.
So I have understood that I need to proactively compact the database, as described here: https://jena.apache.org/documentation/tdb2/tdb2_admin.html#compaction
DatabaseMgr.compact(dataSet.asDatasetGraph(), true);
But how do I get the first parameter, dataSet?
I have an RDFConnection object initialized with username/password.
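One important caveat: DatabaseMgr.compact works on the local TDB2 database files, so it cannot be driven through a remote RDFConnection. A minimal sketch, assuming direct access to the database directory (the path below is hypothetical, matching a typical Fuseki Docker layout):

```java
import org.apache.jena.query.Dataset;
import org.apache.jena.tdb2.DatabaseMgr;
import org.apache.jena.tdb2.TDB2Factory;

public class CompactExample {
    public static void main(String[] args) {
        // Connect to the on-disk TDB2 database (assumed location inside the container).
        Dataset dataSet = TDB2Factory.connectDataset("/fuseki/databases/fedora");
        // Compact; the second argument requests deletion of old generations.
        DatabaseMgr.compact(dataSet.asDatasetGraph(), true);
    }
}
```

If I recall correctly, newer Fuseki versions also expose a compact operation through the admin protocol, which may be more convenient for a Docker deployment than running code against the files directly.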
Provide various ways to customize the behaviour of AuthBearerFilter
by subclassing.
The class itself provides a framework for bearer token handling as described in RFCs 7235, 6750 and others.
Some environments pass bearer token information in other ways.
Some environments do not assume challenge-response (401), but assume the token is always present if needed.
In order to accommodate these variations and other styles, AuthBearerFilter
can have protected methods to modify its behaviour.
A quick request to make the following single method public so that I can inject a bearer token - if possible for 4.5.0-SNAPSHOT. On the main branch I already get 4.6.0-SNAPSHOT - so I am not sure if PRs for 4.5.0 are still possible.
Because of the lack of interceptors in the Java HttpClient (I am underwhelmed...) this seems to be the only reliable way to make bearer token auth work with Jena. De facto, this AuthEnv is a kind of interceptor framework after all...
Of course adding a bearer token helper method to AuthLib would also be nice, but from my side this can happen in 4.6.0 if I can register my own auth mod for now.
public class AuthEnv {
void registerAuthModifier(String requestTarget, AuthRequestModifier authModifier) {
// Without query string or fragment.
String serviceEndpoint = HttpLib.endpoint(requestTarget);
//AuthEnv.LOG.info("Setup authentication for "+serviceEndpoint);
authModifiers.put(serviceEndpoint, authModifier);
}
}
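For illustration, this is the kind of usage making the method public would enable. This sketch assumes AuthRequestModifier is a functional interface taking and returning an HttpRequest.Builder, and the endpoint URL and token are placeholders; it only compiles once registerAuthModifier is public, which is exactly the request:

```java
import org.apache.jena.http.auth.AuthEnv;

public class BearerAuthExample {
    public static void main(String[] args) {
        String token = "my-opaque-token"; // hypothetical token, e.g. obtained from an OIDC provider
        // Register a modifier that injects the bearer token on every request
        // sent to this endpoint (URL is a placeholder).
        AuthEnv.get().registerAuthModifier("https://example.org/dataset/sparql",
                builder -> builder.setHeader("Authorization", "Bearer " + token));
    }
}
```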
This line in RefEval.evalDS(...):
List<Binding> list = new ArrayList<>((int)dsg.size()) ;
But according to the Javadoc for DatasetGraph.size():
/** Get the size (number of named graphs) - may be -1 for unknown */
So -1 should be expected and treated specially to avoid a potential IllegalArgumentException for negative initial size.
(We do not use RefEval and I have not seen this exception actually occur. I noticed the potential problem when nosing around the source code, looking at callers of DatasetGraph.size().)
Hello folks,
More a question than an issue, but did I miss something, or is there no changelog for the releases?
It's always a bit mysterious to upgrade dependencies when no changelog is provided. It would be nice if there was one (not necessarily on GitHub, on the official website is good as well).
Thanks :)
When switching to the latest approach to authenticate against a remote SPARQL endpoint using digest authentication I noticed that authentication no longer worked.
Looking into the jena-arq code the problem seems to be with class org.apache.jena.http.auth.AuthLib and specifically method handle401(). In here, when method DigestLib.buildDigest() is called, the request method and request target parameters seem to be passed in inverse order. Doing a simple debug on my end and switching these values around seems to resolve the issue.
Could you please confirm on your end? If yes, then I would be happy to submit a PR that resolves this.
Tracks PR #1268.
node-forge 1.3.0 is now available.
The plugin does not appear to be configured - no <configuration><instructions>.
The following query works as expected:
BASE <https://foo.bar/> SELECT ?x { BIND(IRI('baz') AS ?x) }
-------------------------
| x |
=========================
| <https://foo.bar/baz> |
-------------------------
However, using the same graph pattern in the following update request ...
BASE <https://foo.bar/> INSERT { ?x ?x ?x } WHERE { BIND(IRI('baz') AS ?x) }
CONSTRUCT WHERE { ?s ?p ?o }
... yields a result that suggests that the base IRI was ignored:
<file:///tmp/baz> <file:///tmp/baz> <file:///tmp/baz> .
The SPARQL Update spec does not say anything about the IRI function, so my assumption is that the behavior should be consistent with that for querying. As a consequence, the issue is with the behavior of Jena.
The reason is that the E_IRI function is only implemented to extract the base IRI from a currently running query - it lacks the logic to work with update requests.
class E_IRI {
@Override
public NodeValue eval(NodeValue v, FunctionEnv env)
{
String baseIRI = null ;
if ( env.getContext() != null )
{
Query query = (Query)env.getContext().get(ARQConstants.sysCurrentQuery) ;
if ( query != null )
baseIRI = query.getBaseURI() ;
}
return NodeFunctions.iri(v, baseIRI) ;
}
}
One solution would be to add another ARQConstants.sysCurrentUpdate(Request) constant.
Alternatively, UpdateRequest and Query both implement Prologue - so instead of sysCurrentQuery another constant such as ARQConstants.sysCurrentPrologue could be used.
Either way, running an update request would then have to set the appropriate context attribute and the E_IRI function would have to consider it.
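The Prologue-based variant could look roughly like this. This is a sketch only: ARQConstants.sysCurrentPrologue is a hypothetical new constant, and the surrounding wiring (setting the attribute when an update runs) is assumed:

```java
// Sketch: both Query and UpdateRequest implement Prologue, so either kind
// of request could publish its base IRI under one shared context key.
class E_IRI {
    public NodeValue eval(NodeValue v, FunctionEnv env) {
        String baseIRI = null;
        if ( env.getContext() != null ) {
            // sysCurrentPrologue is hypothetical; queries and updates would both set it.
            Prologue prologue = (Prologue)env.getContext().get(ARQConstants.sysCurrentPrologue);
            if ( prologue != null )
                baseIRI = prologue.getBaseURI();
        }
        return NodeFunctions.iri(v, baseIRI);
    }
}
```

This keeps the existing query behavior while letting INSERT/WHERE requests resolve IRI('baz') against their BASE as well.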
4.6.0
For the query
SELECT ?s WHERE {
?s <http://test#fun> ?o .
}
where <http://test#fun> is a custom property function, this code works as expected:
DatasetGraph dg = new DatasetGraphWrapper(DatasetGraphFactory.wrap(data.getGraph()), context);
try (QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(query), dg)) {
...
}
but this one does not work:
try (QueryExecution qe = QueryExecution.create().query(query).model(data).context(context).build()) {
...
}
from #1374
full test is attached:
TestQueryExecution.java.txt
No response
No response
4.6.0-SNAPSHOT
When trying to re-activate one of my sparql benchmark processors (LSQ) I noticed that the following line no longer works with a remote HTTP connection:
try(QueryExecution qe = conn.query(query)) {
qe.setTimeout(connectionTimeoutForRetrieval, executionTimeoutForRetrieval);
}
Initial/connect timeout may be a bit of a niche feature but when working with third-party endpoints it can be a very handy thing - I once had a case where a downed sparql endpoint would hold the connection for several minutes; that's why eventually I made use of this Jena feature.
In my case, the line above ends up at QueryExecHTTPBuilder, which raises an UnsupportedOperationException:
class QueryExecHTTPBuilder {
@Override
public QueryExecHTTPBuilder initialTimeout(long timeout, TimeUnit timeUnit) {
throw new UnsupportedOperationException();
}
}
Attempting to use the more recent conn.newQuery().setTimeout() API also does not work, because it lacks a method to set the initial (aka connect) timeout altogether.
According to Stack Overflow, the Java HTTP client actually supports this feature, so it seems it's just a matter of wiring this up again:
// set the connection timeout value to 30 seconds (30000 milliseconds)
final HttpParams httpParams = new BasicHttpParams();
HttpConnectionParams.setConnectionTimeout(httpParams, 30000);
client = new DefaultHttpClient(httpParams);
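Note that the snippet above is the old Apache HttpClient 4.x API. With the java.net.http client that Jena now uses, the connect timeout is set on the client builder, so re-exposing the initial timeout should be a matter of passing it through. A stdlib-only sketch:

```java
import java.net.http.HttpClient;
import java.time.Duration;

public class ConnectTimeoutExample {
    public static void main(String[] args) {
        // java.net.http supports a connect (initial) timeout on the client builder.
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(30))
                .build();
        System.out.println(client.connectTimeout().get()); // PT30S
    }
}
```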
No response
Yes