dbpedia / archivo
DBpedia Archivo - Augmented Ontology Archive powered by Databus
Home Page: https://archivo.dbpedia.org/
License: GNU Affero General Public License v3.0
As we switched to Mercurial, there are now several other repositories. This is the default repository, and it contains miscellaneous things like the logo. The Extraction Framework, for example, is in the "extraction_framework" repository. Please look here: http://dbpedia.hg.sourceforge.net/hgweb/dbpedia/
Use known version properties if available for ontology versioning.
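One way to read this suggestion: before falling back to Archivo's own crawl timestamp, check the ontology header for well-known version properties. A minimal sketch, assuming a priority order of owl:versionIRI, owl:versionInfo, then dct:modified (the order and property set are assumptions, not Archivo's actual configuration); triples are plain string tuples here:

```python
# Sketch: pick a version for an ontology from well-known properties,
# in priority order. Triples are (subject, predicate, object) strings;
# the property list and ordering are assumptions.
VERSION_PROPS = [
    "http://www.w3.org/2002/07/owl#versionIRI",
    "http://www.w3.org/2002/07/owl#versionInfo",
    "http://purl.org/dc/terms/modified",
]

def known_version(ontology_uri, triples):
    props = {p: o for s, p, o in triples if s == ontology_uri}
    for prop in VERSION_PROPS:
        if prop in props:
            return props[prop]
    return None  # caller falls back to Archivo's own timestamp
```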
Right now the big ontology table here can't be scrolled horizontally, so if the user adds more columns they are cut off and cannot be seen.
Hello archivo developers,
Is there some kind of schedule for when new snapshots are taken? Currently, you don't refer to the latest released version of the OEO, for example.
Is there a way to inform you about new releases so that a further snapshot is taken? Thanks!
Use the ontology resource (e.g. http://dataid.dbpedia.org/ns/core.rdf) as a backup if the non-information resource is not available.
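The fallback described above could look like the following sketch: try the non-information resource first, then the backup resource. The `fetch` callable is injected (so the logic stays testable offline), and the function name and return shape are illustrative assumptions, not Archivo's actual code:

```python
# Sketch: try the non-information resource first, then fall back to a
# backup resource URL (e.g. the .rdf file). `fetch` is injected; it
# should return the response body or raise OSError on failure.
def fetch_with_backup(primary_url, backup_url, fetch):
    for url in (primary_url, backup_url):
        try:
            body = fetch(url)
        except OSError:
            continue  # try the next candidate
        if body:
            return url, body
    return None, None  # neither resource was reachable
```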
Which timezone do you use in your timestamps? For instance, the Cinelab ontology has 2020.06.10-175249 as the value for Latest Timestamp. Is it UTC?
https://github.com/dbpedia/Archivo/blob/master/shacl-library/LODE.ttl checks for rdfs:label, rdfs:comment.
But some ontologies use alternatives:
Please allow these as alternative properties in the SHACL shapes.
Example for https://databus.dbpedia.org/ontologies/vocab.getty.edu/ontology/:
sh:focusNode <http://vocab.getty.edu/ontology#ulan1316_principal_was> ;
sh:resultMessage "rdfs:label is missing or is no Literal"@en ;
skos:prefLabel "ulan1316_principal_was";
dc:title "principal was - person";
sh:focusNode <http://vocab.getty.edu/ontology#aat2216_require> ;
sh:resultMessage "rdfs:comment is missing or is no Literal"@en ;
dct:description """things - require - things [in order to exist or work].
Example: broderie anglaise requires eyelets; compact disc players require compact discs""" .
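The intended relaxation of the check can be sketched in plain Python: a term counts as labeled (or commented) if any of several alternative properties is present, not only rdfs:label / rdfs:comment. The property sets below mirror the examples in this issue but are assumptions, not the shapes' current contents:

```python
# Sketch of the relaxed check: accept any of several alternative
# annotation properties. The two sets are assumptions based on the
# Getty examples above (skos:prefLabel, dc:title, dct:description).
LABEL_PROPS = {"rdfs:label", "skos:prefLabel", "dc:title", "dct:title"}
COMMENT_PROPS = {"rdfs:comment", "dc:description", "dct:description", "skos:definition"}

def has_label(term_props):
    # term_props: dict of property -> literal for one focus node
    return any(p in term_props for p in LABEL_PROPS)

def has_comment(term_props):
    return any(p in term_props for p in COMMENT_PROPS)
```

In the SHACL shapes themselves, the equivalent change would be an sh:or over alternative property paths instead of a single sh:path on rdfs:label.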
Right now Archivo does not correctly handle ontologies that do not contain the class definitions themselves but link to them (example: the DBpedia ontology) via http://open.vocab.org/terms/defines.
That is the reason the "official" version has far fewer triples (8,035) than the "develop" version (32,177).
It should be possible to include the RDF content of the linked classes/datatypes etc. (e.g. from dbo:Person) in the ontology.
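A minimal sketch of that inclusion step, assuming a resolvable `defines` link per term and an injected `fetch_triples` callable (both hypothetical; real code would need deduplication and error handling):

```python
# Sketch: pull in the RDF of terms the ontology merely links to via a
# "defines"-style property. `fetch_triples(uri)` is injected and
# returns the triples of the linked term (e.g. dbo:Person).
DEFINES = "http://open.vocab.org/terms/defines"

def expand_defined_terms(triples, fetch_triples):
    expanded = list(triples)
    for s, p, o in triples:
        if p == DEFINES:
            expanded.extend(fetch_triples(o))  # inline the linked term
    return expanded
```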
Start using pyLODE again in v3.
Archivo does have documentation distributions; it would be nice to make them more easily accessible, so that Archivo could be a one-stop ontology lookup tool for humans, like https://devdocs.io/ is for programming languages.
When adding/suggesting an ontology via http://archivo.dbpedia.org/add, there is no informative message upon a successful suggestion. For example, if the ontology is successfully accepted, show "The suggested ontology has been successfully accepted."
Also, upon a successful addition, the UI shows a set of messages, some in green, some in orange/red.
Maybe add a title above the messages, e.g. "Processing log:", and some explanation, e.g. "Note that the red/orange warnings are not critical, but it is highly recommended to fix them in the near future."
Maybe also, instead of showing "The suggested ontology has been successfully accepted.", show "The suggested ontology has been successfully accepted with some non-critical issues/warnings."
I tried to suggest another ontology for Archivo and it was rejected. It loads just fine in OWL version 5 or 3.5.
Please check this, figure out the problem, and let me know what to do.
http://micra.com/COSMO/COSMO.owl
Pat Cassidy
[email protected]
and jump from version to version easily in the diff viewer.
Optionally it would also be good to show only local changes by default, but additionally have an option to show changes of imported ontologies as well (as "virtual" versions, see #26).
Using CORS compatibility for ontologies as the 5th star of archivo:
Proposed qualifying criteria for the 5th star:
As suggested in the forum.
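A possible shape for this 5th-star check, assuming the qualifying criterion is a permissive CORS response header on the ontology's download URL (the exact acceptance rule here, wildcard-only, is an assumption):

```python
# Sketch of a possible 5th-star check: the ontology must be served
# with a permissive CORS header so browser clients can fetch it.
# `headers` is the HTTP response-header dict of the ontology URL;
# accepting only the wildcard value is an assumption.
def passes_cors_star(headers):
    allow = headers.get("Access-Control-Allow-Origin", "")
    return allow.strip() == "*"
```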
Currently only the Pellet reasoner is used for checking the consistency of an ontology for the 4th Archivo star.
But unfortunately Pellet has some performance issues, e.g. for https://archivo.dbpedia.org/info?o=http://purl.obolibrary.org/obo/envo.owl. Checking consistency with additional or better reasoners could prevent this issue, and would also better serve the purpose of the star (checking compliance with common reasoners).
Greetings all,
Checking the ENVO entry, we noted that a Pellet timeout is reported as a fail. This doesn't seem quite right; wouldn't this be more of a "we don't know"? So a "?" rather than a star or non-star?
You may also be interested in the OBO Dashboard checks we run on OBO resources.
I am trying to add this ontology but am not able to do so. Please check the issue.
For example, the Bioregistry saves its ontologies in the .obo format.
It would be nice to allow parsing/loading such files into Archivo, but currently rapper/rdflib do not support them.
The discovery process for the Bioregistry would then also need to be adapted (add a new key download_obo).
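For reference, the OBO flat-file format is line-based and easy to read at a basic level; a minimal sketch of extracting [Term] stanzas (id/name pairs) is below. This is a simplification for illustration only; a full conversion to OWL would need a real converter, and the function name is an assumption:

```python
# Minimal sketch of reading [Term] stanzas from an .obo file, enough
# to list term ids and names. Real .obo handling (escapes, trailing
# modifiers, [Typedef] semantics) is deliberately out of scope.
def parse_obo_terms(text):
    terms, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line == "[Term]":
            current = {}
            terms.append(current)
        elif line.startswith("["):
            current = None  # other stanza types, e.g. [Typedef]
        elif current is not None and ": " in line:
            key, value = line.split(": ", 1)
            current.setdefault(key, value)
    return terms
```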
Currently the MOD endpoint does not support such large queries:
Virtuoso 42000 Error: The estimated execution time 869218 (sec) exceeds the limit of 40000 (sec)
So either refactor the query or raise the limit on the MOD endpoint.
Refactoring the query would be far simpler (e.g. just split it to avoid the UNION).
Until then, discovery via the void mod will be switched off.
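The split suggested above can be sketched as: run each UNION branch as its own query and merge the bindings client-side, deduplicating rows. The query runner is injected and the branch queries are placeholders, not the actual MOD queries:

```python
# Sketch: instead of one query with a large UNION, run each branch
# separately and merge the result bindings. `run_query` is injected
# and returns rows as dicts (variable -> value).
def run_split_union(branch_queries, run_query):
    results, seen = [], set()
    for query in branch_queries:
        for row in run_query(query):
            key = tuple(sorted(row.items()))
            if key not in seen:  # drop duplicates across branches
                seen.add(key)
                results.append(row)
    return results
```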
to add later:
Right now the skos:ConceptSchemes in Archivo pass the LODE conformity tests because those are focused on owl:Ontology and owl:Classes, but LODE does not document skos:ConceptSchemes or skos:Concepts.
When I initially suggested this ontology, I accidentally entered https:// instead of http://. The resulting check failed at the equality step because the ontology's about statement did not match.
I retried submitting the IRI http://purl.org/dsw/, but the ontology was immediately rejected without going through all of the checks (as opposed to showing all of the checks and giving a final panel with the reason for rejection). I'm not sure why the IRI was rejected the second time without actually running any checks beyond the initial "Index check".
If you can try rechecking the IRI http://purl.org/dsw/, that would be great.
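One way to avoid the initial failure mode would be a scheme-tolerant comparison at the equality step, treating http and https as equivalent. A minimal sketch; whether Archivo should adopt this is the suggestion here, not current behaviour:

```python
# Sketch: compare a suggested IRI with the declared ontology IRI
# while tolerating an http/https mismatch. Everything else (host,
# path, query, fragment) must still match exactly.
from urllib.parse import urlsplit, urlunsplit

def same_ontology_iri(a, b):
    def norm(iri):
        parts = urlsplit(iri)
        scheme = "http" if parts.scheme in ("http", "https") else parts.scheme
        return urlunsplit((scheme, parts.netloc, parts.path, parts.query, parts.fragment))
    return norm(a) == norm(b)
```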
see e.g. https://bioregistry.io/registry/ero
Offer JSON output including download links and the URI scheme as a starting point for Archivo.
Hello,
I tried to parse the turtle of your RDF Schema ontology file at https://databus.dbpedia.org/ontologies/w3.org/2000--01--rdf-schema/2020.06.10-215336/2000--01--rdf-schema_type=parsed.ttl with a library that uses the N3.js parser.
There, I ran into a problem with the base IRI resolution. I opened an issue with the maintainers of that library, and it seems that there might be an error in the RDF Schema turtle file: rdfjs-base/parser-n3#15
The problem is in the beginning of the turtle code:
@base <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rdf: <../../1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <> .
@prefix owl: <../../2002/07/owl#> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
<>
dc:title "The RDF Schema vocabulary (RDFS)" ;
a owl:Ontology ;
rdfs:seeAlso <rdf-schema-more> .
rdfs:Class
a rdfs:Class ;
rdfs:comment "The class of classes." ;
rdfs:isDefinedBy <> ;
rdfs:label "Class" ;
rdfs:subClassOf rdfs:Resource .
The hash symbol (#) at the end of the base IRI is apparently stripped by the parser (?) and the resulting triples then contain invalid IRIs:
Unfortunately, I don't have time right now to look too deeply into whether your turtle or the N3.js implementation is correct but I at least wanted to let you know about the issue.
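For context on the disagreement: under strict RFC 3986 resolution, the empty relative reference (`<>` or the empty prefix IRI) inherits the base's scheme, authority, path, and query, but never its fragment. A strict resolver therefore resolves `@prefix rdfs: <>` under the base above to `...rdf-schema` without the trailing `#`, so `rdfs:Class` concatenates into an unintended IRI, which matches the behaviour reported for N3.js. A minimal illustration (not a full RFC 3986 resolver):

```python
# Per RFC 3986 sect. 5.2.2, resolving the empty reference against a
# base keeps scheme/authority/path/query but NOT the fragment. With
# @base <http://www.w3.org/2000/01/rdf-schema#> a strict resolver
# thus makes the rdfs: prefix end without '#'.
def resolve_empty_reference(base_iri):
    return base_iri.split("#", 1)[0]  # fragment is never inherited

base = "http://www.w3.org/2000/01/rdf-schema#"
prefix_rdfs = resolve_empty_reference(base)
# Concatenating a local name now yields a broken IRI:
broken = prefix_rdfs + "Class"  # http://www.w3.org/2000/01/rdf-schemaClass
```

Resolvers genuinely differ here: Python's urllib.parse.urljoin(), for instance, special-cases the empty reference and returns the base verbatim, fragment included, while strict RFC 3986 resolution drops it, which would explain why some parsers accept this file and N3.js does not.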
It would be nice to have range filters for the archive, e.g. show only ontologies with 3 or 4 stars.
Other filters should also be possible, like triple counts between 100 and 1000 or something similar.
We need a strategy to resolve that somehow. It can happen that the Databus does not work for a day, and this would give an HTTP 500 error on the info page, and maybe even more, for all ontologies that had an update during the outage. I don't know how long this error will persist until the next new version is found; maybe it will stay forever until resolved manually.
Hope that this can be fixed with the Databus 2.1 migration.
HTTP 500 happened when the new Databus replaced the old one for the following ontologies on Jun 30th:
https://archivo.dbpedia.org/info?o=https://w3id.org/sense
https://archivo.dbpedia.org/info?o=https://w3id.org/lbd/aec3po/ontology
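One workable strategy is to keep the last successful Databus response per ontology and serve it when the upstream query fails, instead of surfacing a 500. A sketch under those assumptions (the function names, the dict-as-cache, and the injected `query_databus` are all illustrative):

```python
# Sketch: serve the last known-good Databus response on upstream
# failure. `query_databus(ontology)` is injected and raises on error;
# `cache` is a plain dict standing in for persistent storage.
def get_info(ontology, query_databus, cache):
    try:
        data = query_databus(ontology)
    except Exception:
        data = None  # Databus outage or 5xx
    if data is not None:
        cache[ontology] = data  # refresh the known-good copy
        return data, "live"
    if ontology in cache:
        return cache[ontology], "cached"
    raise RuntimeError("Databus unavailable and no cached version")
```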
Currently Archivo has a problem setting the base URI correctly, e.g. with http://eunis.eea.europa.eu/rdf/sites-schema.rdf#Site
Detect imported ontologies (based on owl:imports statements) and
There is a problem with proxying to the DBpedia Archivo frontend.