
json-wikipedia's Introduction

json-wikipedia

Json Wikipedia contains code to convert the Wikipedia XML dump into a JSON or Apache Avro dump.

  • Please be aware that this tool does not work with the multistream dump.

Setup

Compile the project by running:

mvn package

The command will produce a JAR file containing all the dependencies in the target folder.

Convert the Wikipedia XML

You can convert the Wikipedia dump to JSON format by running either of the following commands:

java -jar target/json-wikipedia-*.jar -input wikipedia-dump.xml.bz2 -output wikipedia-dump.json[.gz] -lang [en|it]

or

./scripts/convert-xml-dump-to-json.sh [en|it] wikipedia-dump.xml.bz2 wikipedia-dump.json[.gz]

Or to Apache Avro:

java -jar target/json-wikipedia-*.jar -input wikipedia-dump.xml.bz2 -output wikipedia-dump.avro -lang [en|it]

or

./scripts/convert-xml-dump-to-json.sh [en|it] wikipedia-dump.xml.bz2 wikipedia-dump.avro
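
For example, to convert the latest English dump (as published on dumps.wikimedia.org; the file name below is illustrative) to compressed JSON:

java -jar target/json-wikipedia-*.jar -input enwiki-latest-pages-articles.xml.bz2 -output enwiki-latest-pages-articles.json.gz -lang en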

Content of the output

Both commands produce a file containing one record per article. In the JSON format, each line of the file contains one article of the dump, encoded in JSON. Each record can be deserialized into an Article object, which represents an enriched version of the wikitext page. The Article object contains the following (an illustrative record is shown after the list):

  • the title (e.g., Leonardo Da Vinci);
  • the wikititle (used in Wikipedia as key, e.g., Leonardo_Da_Vinci);
  • the namespace and the integer namespace in the dump;
  • the timestamp of the article;
  • the type (standard article, redirect, category, and so on);
  • if the article is not in English, the title of the corresponding English article;
  • a list of tables that appear in the article;
  • a list of lists that appear in the article;
  • a list of internal links that appear in the article (each link records its type, e.g. image, link, or table link, and its position in the page);
  • a list of external links that appear in the article;
  • if the article is a redirect, the pointed article;
  • a list of section titles in the article;
  • the text of the article, divided in paragraphs (PLAIN, no wikitext);
  • the categories and the templates of the article;
  • the list of attributes found in the templates;
  • a list of terms highlighted in the article;
  • if present, the infobox.
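
As a rough illustration, a single record is one line of JSON (the field names and values below are abridged and illustrative, not the exact schema):

{"title": "Leonardo da Vinci", "wikiTitle": "Leonardo_da_Vinci", "lang": "en", "type": "ARTICLE", "categories": ["..."], "links": ["..."], "paragraphs": ["Leonardo da Vinci was an Italian polymath..."]}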

Usage

Once you have created (or downloaded) the JSON dump (say wikipedia.json), you can easily iterate over the articles in the collection using this snippet:

// RecordReader and JsonRecordParser come from the hpc-utils dependency
// (package paths may differ by version)
import it.cnr.isti.hpc.io.reader.JsonRecordParser;
import it.cnr.isti.hpc.io.reader.RecordReader;
import it.cnr.isti.hpc.wikipedia.article.Article;

RecordReader<Article> reader = new RecordReader<Article>(
        "wikipedia.json", new JsonRecordParser<Article>(Article.class)
);

for (Article a : reader) {
    // do what you want with your articles
}

In order to use these classes, you will have to install json-wikipedia in your local Maven repository:

mvn install

and import it into your new Maven project by adding the dependency:

<dependency>
    <groupId>it.cnr.isti.hpc</groupId>
    <artifactId>json-wikipedia</artifactId>
    <version>2.0.0-SNAPSHOT</version>
</dependency>

Schema

The full schema of a record is encoded in Avro.
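
To inspect the schema of a generated Avro dump, the standard avro-tools utility can print it (the avro-tools version in the jar name is illustrative):

java -jar avro-tools-1.11.1.jar getschema wikipedia-dump.avro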

json-wikipedia's People

Contributors

bitdeli-chef, dav009, dependabot[bot], diegoceccarelli, h-shariati, sirrujak, sparrowv, tgalery


json-wikipedia's Issues

hpc-utils 0.0.7

There is no hpc-utils 0.0.7 available.

[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Failed to resolve artifact.

Missing:
----------
1) it.cnr.isti.hpc:hpc-utils:jar:0.0.7

  Try downloading the file manually from the project website.

  Then, install it using the command: 
      mvn install:install-file -DgroupId=it.cnr.isti.hpc -DartifactId=hpc-utils -Dversion=0.0.7 -Dpackaging=jar -Dfile=/path/to/file

  Alternatively, if you host your own repository you can deploy the file there: 
      mvn deploy:deploy-file -DgroupId=it.cnr.isti.hpc -DartifactId=hpc-utils -Dversion=0.0.7 -Dpackaging=jar -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]

  Path to dependency: 
    1) it.cnr.isti.hpc:json-wikipedia:jar:1.0.0
    2) it.cnr.isti.hpc:hpc-utils:jar:0.0.7

----------
1 required artifact is missing.

for artifact: 
  it.cnr.isti.hpc:json-wikipedia:jar:1.0.0

from the specified remote repositories:
  central (http://repo1.maven.org/maven2),
  sonatype-snapshots (https://oss.sonatype.org/content/repositories/snapshots),
  de.tudarmstadt.ukp.wikipedia (http://zoidberg.ukp.informatik.tu-darmstadt.de/artifactory/public-snapshots/)

Can you please update the POM, or give more information on how to solve this issue with a hotfix?

how to convert dumps other than EN or IT?

Can you add documentation to the readme on how to extend this tool to other languages? FR and ES did not work.
E.g.:
$ ./scripts/convert-xml-dump-to-json.sh fr /u01/wikip/dumps.wikipedia/frwiki/frwiki-latest-pages-articles.xml.bz2 ./frwiki-latest-pages-articles.json

Converting mediawiki xml dump to json dump (./frwiki-latest-pages-articles.json)
2021-12-15 00:25:50,990 1086 [main] ERROR it.cnr.isti.hpc.wikipedia.cli.MediawikiToJsonCLI - Parsing the mediawiki
java.lang.IllegalArgumentException: No enum constant it.cnr.isti.hpc.wikipedia.article.Language.FR
at java.base/java.lang.Enum.valueOf(Enum.java:240) ~[na:na]
at it.cnr.isti.hpc.wikipedia.article.Language.valueOf(Language.java:8) ~[json-wikipedia-2.0.0-SNAPSHOT.jar:na]
at it.cnr.isti.hpc.wikipedia.parser.ArticleParser.<init>(ArticleParser.java:66) ~[json-wikipedia-2.0.0-SNAPSHOT.jar:na]
at it.cnr.isti.hpc.wikipedia.reader.WikipediaArticleReader.<init>(WikipediaArticleReader.java:95) ~[json-wikipedia-2.0.0-SNAPSHOT.jar:na]
at it.cnr.isti.hpc.wikipedia.cli.MediawikiToJsonCLI.call(MediawikiToJsonCLI.java:55) ~[json-wikipedia-2.0.0-SNAPSHOT.jar:na]
at it.cnr.isti.hpc.wikipedia.cli.MediawikiToJsonCLI.call(MediawikiToJsonCLI.java:27) ~[json-wikipedia-2.0.0-SNAPSHOT.jar:na]
at picocli.CommandLine.executeUserObject(CommandLine.java:1953) ~[json-wikipedia-2.0.0-SNAPSHOT.jar:na]
at picocli.CommandLine.access$1300(CommandLine.java:145) ~[json-wikipedia-2.0.0-SNAPSHOT.jar:na]
at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2352) ~[json-wikipedia-2.0.0-SNAPSHOT.jar:na]
at picocli.CommandLine$RunLast.handle(CommandLine.java:2346) ~[json-wikipedia-2.0.0-SNAPSHOT.jar:na]
at picocli.CommandLine$RunLast.handle(CommandLine.java:2311) ~[json-wikipedia-2.0.0-SNAPSHOT.jar:na]
at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2179) ~[json-wikipedia-2.0.0-SNAPSHOT.jar:na]
at picocli.CommandLine.execute(CommandLine.java:2078) ~[json-wikipedia-2.0.0-SNAPSHOT.jar:na]
at it.cnr.isti.hpc.wikipedia.cli.MediawikiToJsonCLI.main(MediawikiToJsonCLI.java:65) ~[json-wikipedia-2.0.0-SNAPSHOT.jar:na]
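
Judging from the stack trace, the supported languages are constants of the it.cnr.isti.hpc.wikipedia.article.Language enum, so a minimal sketch of a fix is to add the missing constants there and rebuild (the enum body below is illustrative, not the actual source; language-specific parsing rules may also be required):

public enum Language {
    EN, IT,
    // hypothetical additions:
    FR, ES;
}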

Links with empty Anchors

There are links with empty descriptions.
Looking at the code, there is a comment: "When anchor is empty, it means it is the same as the id."
This sort of makes sense, since such links look like [[wikipedia_article]]. [1]

So when one calls getDescription(), links with empty descriptions return their id. However, when the object gets serialised to JSON, the description field will be empty.

As a user of json-wikipedia this looked a bit weird at first. I think the serialised description should equal getDescription().
If you agree on this point I can make a PR.

[1] https://github.com/diegoceccarelli/json-wikipedia/blob/master/src/main/java/it/cnr/isti/hpc/wikipedia/article/Link.java#L53
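
A minimal sketch of the accessor behaviour described above (illustrative; the real implementation is in Link.java, linked at [1]):

// When the anchor text is empty (e.g. [[wikipedia_article]]),
// fall back to the link id instead of returning an empty description.
public String getDescription() {
    if (description == null || description.isEmpty()) {
        return id;
    }
    return description;
}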

Create jar with dependency using `maven-shade-plugin` instead of assembly

At the moment we use the goal assembly:assembly to produce a jar with all the dependencies.

maven-shade-plugin is better than assembly (see http://2mohitarora.blogspot.com/2011/12/maven-shade-plugin-better-than-maven.html) and it will produce that jar when running the normal Maven target package.

Also set it up so that the main class can be run from the jar. Example here: http://maven.apache.org/plugins/maven-shade-plugin/examples/executable-jar.html
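
A minimal sketch of the plugin configuration, following the executable-jar example linked above (the main class is taken from the stack traces elsewhere on this page; adjust as needed):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <!-- make the jar executable by setting Main-Class in the manifest -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>it.cnr.isti.hpc.wikipedia.cli.MediawikiToJsonCLI</mainClass>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>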

Java API usage: how to parse a single article ?

Hi,

Initially posted here idio#43, but should have started here as it is the main repo :)

I'm trying to parse a single Wikipedia XML file, like the mercedes.xml in the tests of this repo. Following the code in the test section, I tried something like:

import it.cnr.isti.hpc.wikipedia.article.Article
import it.cnr.isti.hpc.wikipedia.parser.ArticleParser
// IOUtils comes from the hpc-utils dependency
import it.cnr.isti.hpc.io.IOUtils

val parser = new ArticleParser("en")
val testXml = IOUtils.getFileAsUTF8String("./mercedes.xml")
val testArticle = new Article()
parser.parse(testArticle, testXml)

But the result is strange. Many properties are blank, like title, wikiTitle, ..., and the paragraphs / clean text are also wrongly parsed. I guess I'm doing something wrong ^^ If you could show some usage of the API to process a single article in XML, that would be great :)

Thanks,
Thomas

Installation

Hello,

Can you please make the guide a little more descriptive? As a newbie I have no clue how the hell I should go about installing this.

Thanks in advance!

Multiprocessing ability with Apache spark

Once you get to the main XML content of the wiki dump, transforming the XML into JSON can get a significant speed-up by running on Spark. This has already been done in the idio fork of this repo, so this PR serves as a basis for introducing it: https://github.com/idio/json-wikipedia/pull/3/files. A few pointers:

  • since the forks have diverged severely, it's easier to start a new PR (from a branch)
  • use the latest Spark
  • benchmark with the Simple English Wikipedia

hpc-utils dependency is missing

(see also #17 discussion)

hpc-utils is also missing from the Dropbox repo specified in the POM. I fixed that in our fork by using JitPack.

The artifactory was a fixed URL in my Dropbox folder, and Dropbox decided to disable absolute paths in September :/ Let's move it to JitPack (as soon as I discover what it is and how it works :))
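
For reference, using JitPack amounts to adding its repository to the POM and pointing the dependency at a GitHub coordinate (the groupId below is hypothetical; check where hpc-utils is actually hosted):

<repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>

<!-- hypothetical coordinate: com.github.<owner>:hpc-utils:<version> -->
<dependency>
    <groupId>com.github.OWNER</groupId>
    <artifactId>hpc-utils</artifactId>
    <version>0.0.7</version>
</dependency>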

Invalid json

The example JSON you have provided is invalid. I was hesitant to use this tool, as the Wikipedia dump is very large and the output would be invalid.

Can you please get back to me on this.

Thank you

Re-enable codecov on travis

Commit d3d6398 disabled sending the report to Codecov, because we had to switch from Cobertura to JaCoCo (Cobertura wasn't compatible with OpenJDK 11).

Find a way to send the report to Codecov (the code-coverage framework can also be changed if needed).

Fix issue with de.tudarmstadt.ukp.wikipedia

When compiling the project with mvn clean compile there are issues with this repository. Please resolve them, making sure that the project still compiles and passes the tests.

[WARNING] Could not transfer metadata de.tudarmstadt.ukp.wikipedia:de.tudarmstadt.ukp.wikipedia.timemachine:1.2.0-SNAPSHOT/maven-metadata.xml from/to ossrh (https://oss.sonatype.org/service/local/staging/deploy/maven2/): Not authorized , ReasonPhrase:Unauthorized.
[WARNING] Failure to transfer de.tudarmstadt.ukp.wikipedia:de.tudarmstadt.ukp.wikipedia.timemachine:1.2.0-SNAPSHOT/maven-metadata.xml from https://oss.sonatype.org/service/local/staging/deploy/maven2/ was cached in the local repository, resolution will not be reattempted until the update interval of ossrh has elapsed or updates are forced. Original error: Could not transfer metadata de.tudarmstadt.ukp.wikipedia:de.tudarmstadt.ukp.wikipedia.timemachine:1.2.0-SNAPSHOT/maven-metadata.xml from/to ossrh (https://oss.sonatype.org/service/local/staging/deploy/maven2/): Not authorized , ReasonPhrase:Unauthorized.
