
jwiki's People

Contributors

fastily, intracer, tigerfell, xbony2


jwiki's Issues

Add support for skipping some cookies from saving in JwikiCookieJar

JwikiCookieJar saves all new cookies without any filtering, but after each page edit there is a new cookie like "mediawikiPostEditRevision98996=saved" (MediaWiki sends a post-edit cookie indicating that the user just saved a particular revision).

After several calls to wiki.edit(), there are a lot of "mediawikiPostEditRevision=saved" cookies in each new HTTP request, so the web server starts to return "400 Bad Request - request header or cookie too large".

It would be great if JwikiCookieJar skipped such cookies when saving.
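The filtering rule itself would be simple: the throwaway cookies share a fixed name prefix. A minimal sketch of the predicate such a cookie jar could apply before saving (the class and method names here are hypothetical, not part of jwiki):

```java
// Hypothetical helper; jwiki's JwikiCookieJar is not actually structured like this.
public class CookieFilter {
    private static final String POST_EDIT_PREFIX = "mediawikiPostEditRevision";

    // True for cookies worth persisting; false for the throwaway post-edit
    // markers that accumulate after every wiki.edit() call.
    public static boolean shouldSave(String cookieName) {
        return !cookieName.startsWith(POST_EDIT_PREFIX);
    }
}
```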

Double redirect oddness

On my wiki, I have a double redirect that looks like this:

1. Getting Started (IndustrialCraft)/fr (edit) →‎ IndustrialCraft 2/Guide/fr →‎ Getting Started (IndustrialCraft 2)/fr

This is structured as double redirect -> normal redirect -> target.

I have been working on a command in my bot used to fix double redirects. My code looks something like this:

wiki.querySpecialPage("DoubleRedirects", -1).each {
	def target = wiki.getPageText(it).replaceAll("#REDIRECT \\[\\[", "").replaceAll("]]", "")
	def targetsTarget = wiki.getPageText(target).replaceAll("#REDIRECT \\[\\[", "").replaceAll("]]", "")
	wiki.edit(target, "testy testy"/*"#REDIRECT [[${targetsTarget}]]"*/, "Fixed double redirect.")
}

(This code might be confusing if you don't know Groovy, and I have a habit of compacting things, but it's just for reference.)

My bot made this edit.

Basically, the querySpecialPage method with the "DoubleRedirects" special page returns the "normal redirect" pages listed on the DoubleRedirects page (in this case "IndustrialCraft 2/Guide/fr") instead of the actual double redirects ("Getting Started (IndustrialCraft)/fr").
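As an aside, the redirect target can be pulled out with a single regex instead of chained replaceAll calls. A small stand-alone sketch (the class and method names are mine, not jwiki's):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extracts the target of a redirect page from its wikitext.
public class RedirectParser {
    // Case-insensitive; tolerates whitespace and stops at a pipe or anchor.
    private static final Pattern REDIRECT =
        Pattern.compile("(?i)#REDIRECT\\s*\\[\\[([^\\]|#]+)");

    // Returns the redirect target, or null if the text is not a redirect.
    public static String target(String wikitext) {
        Matcher m = REDIRECT.matcher(wikitext);
        return m.find() ? m.group(1).trim() : null;
    }
}
```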

Unable to upload to mediawiki: illegal-filename and NullPointerException

jwiki version: 1.8.0

MediaWiki versions:

  • MediaWiki 1.36.1
  • PHP 7.4.21 (apache2handler)
  • MariaDB 10.6.3-MariaDB-1:10.6.3+maria~focal
  • ICU 63.1

I'm trying to get upload to work, but it's failing with a NullPointerException due to a JSON "illegal-filename" error.

Here is my test code:

import okhttp3.HttpUrl;
import org.fastily.jwiki.core.Wiki;

import java.nio.file.Path;

public class WikiTest {
  public static void main(String[] args) throws Exception {
    HttpUrl mediawikiUrl = HttpUrl.parse("http://localhost:8087/api.php");

    Wiki wiki = new Wiki.Builder()
        .withApiEndpoint(mediawikiUrl)
        .withLogin("wikiuser", "*********")
        .build();

    System.out.println(wiki.getPageText("Main Page") );
    
    Path testFile = Path.of("test");

    wiki.upload(testFile, "File:test", "test description", "test reason");
  }
}

The file test is at the root and contains the word "test".

Here is the output:

Aug 06, 2021 05:06:28 PM
INFO: [<Anonymous> @ localhost]: Try login for wikiuser
Aug 06, 2021 05:06:29 PM
INFO: [Wikiuser @ localhost]: Getting user rights for Wikiuser
Aug 06, 2021 05:06:29 PM
INFO: [Wikiuser @ localhost]: Logged in as wikiuser
Aug 06, 2021 05:06:29 PM
INFO: [Wikiuser @ localhost]: Fetching Namespace List
Aug 06, 2021 05:06:29 PM
INFO: [Wikiuser @ localhost]: Getting page text of Main Page
<strong>MediaWiki has been installed.</strong>

Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User's Guide] for information on using the wiki software.

== Getting started ==
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]
* [[Test page]]
Aug 06, 2021 05:06:29 PM
INFO: [Wikiuser @ localhost]: Uploading test
Aug 06, 2021 05:06:29 PM
FYI: [Wikiuser @ localhost]: Uploading chunk [1 of 1] of 'test'
Aug 06, 2021 05:06:29 PM
ERROR: [Wikiuser @ localhost]: Encountered an error, retrying - 0
java.lang.NullPointerException
	at org.fastily.jwiki.util.GSONP.getStr(GSONP.java:189)
	at org.fastily.jwiki.core.WAction.upload(WAction.java:223)
	at org.fastily.jwiki.core.Wiki.upload(Wiki.java:611)
	at com.joliciel.cnes.kiwi.WikiTest.main(WikiTest.java:30)
Aug 06, 2021 05:06:29 PM
ERROR: [Wikiuser @ localhost]: Encountered an error, retrying - 1
java.lang.NullPointerException
	at org.fastily.jwiki.util.GSONP.getStr(GSONP.java:189)
	at org.fastily.jwiki.core.WAction.upload(WAction.java:223)
	at org.fastily.jwiki.core.Wiki.upload(Wiki.java:611)
	at com.joliciel.cnes.kiwi.WikiTest.main(WikiTest.java:30)
Aug 06, 2021 05:06:29 PM
ERROR: [Wikiuser @ localhost]: Encountered an error, retrying - 2
java.lang.NullPointerException
	at org.fastily.jwiki.util.GSONP.getStr(GSONP.java:189)
	at org.fastily.jwiki.core.WAction.upload(WAction.java:223)
	at org.fastily.jwiki.core.Wiki.upload(Wiki.java:611)
	at com.joliciel.cnes.kiwi.WikiTest.main(WikiTest.java:30)
Aug 06, 2021 05:06:29 PM
ERROR: [Wikiuser @ localhost]: Encountered an error, retrying - 3
java.lang.NullPointerException
	at org.fastily.jwiki.util.GSONP.getStr(GSONP.java:189)
	at org.fastily.jwiki.core.WAction.upload(WAction.java:223)
	at org.fastily.jwiki.core.Wiki.upload(Wiki.java:611)
	at com.joliciel.cnes.kiwi.WikiTest.main(WikiTest.java:30)
Aug 06, 2021 05:06:29 PM
ERROR: [Wikiuser @ localhost]: Encountered an error, retrying - 4
java.lang.NullPointerException
	at org.fastily.jwiki.util.GSONP.getStr(GSONP.java:189)
	at org.fastily.jwiki.core.WAction.upload(WAction.java:223)
	at org.fastily.jwiki.core.Wiki.upload(Wiki.java:611)
	at com.joliciel.cnes.kiwi.WikiTest.main(WikiTest.java:30)
Aug 06, 2021 05:06:29 PM
INFO: [Wikiuser @ localhost]: Unstashing 'null' as 'File:test'
Aug 06, 2021 05:06:29 PM
ERROR: [Wikiuser @ localhost]: Encountered an error while unstashing, retrying - 0
Aug 06, 2021 05:06:29 PM
INFO: [Wikiuser @ localhost]: Unstashing 'null' as 'File:test'
Aug 06, 2021 05:06:29 PM
ERROR: [Wikiuser @ localhost]: Encountered an error while unstashing, retrying - 1
Aug 06, 2021 05:06:29 PM
INFO: [Wikiuser @ localhost]: Unstashing 'null' as 'File:test'
Aug 06, 2021 05:06:29 PM
ERROR: [Wikiuser @ localhost]: Encountered an error while unstashing, retrying - 2

Process finished with exit code 0

As you can see, the login and retrieve page work, but the upload fails.

When I place a breakpoint in gson-2.8.6.jar, com.google.gson.JsonParser, line 47:

public static JsonElement parseString(String json) throws JsonSyntaxException {
    return parseReader(new StringReader(json));
  }

I see the following JSON:

{
  "error": {
    "code": "illegal-filename",
    "info": "The filename is not allowed.",
    "*": "See http://localhost:8087/api.php for API usage. Subscribe to the mediawiki-api-announce mailing list at &lt;https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce&gt; for notice of API deprecations and breaking changes."
  }
}

The NullPointerException seems to be caused by the illegal-filename error. However, I cannot find any way of discovering what is illegal about the filename "File:test".

I also tried the filename "test" (although the class says to prefix with "File:"), but the identical error occurs.

First, the NullPointerException should be replaced with a meaningful exception.

Second, the input seems valid, so why is MediaWiki returning an "illegal-filename" error? It would seem jwiki is doing something wrong.
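One detail worth checking (a guess at the trigger, not a confirmed diagnosis): MediaWiki's upload path rejects file names that lack a recognized extension, and the file here is literally named "test". A tiny pre-flight check would catch that before calling wiki.upload():

```java
// Heuristic pre-check: MediaWiki refuses uploads whose file names have no
// extension. This is a guess at the cause of the error above, not a fix
// confirmed against jwiki's upload code.
public class UploadPreflight {
    public static boolean hasExtension(String fileName) {
        int dot = fileName.lastIndexOf('.');
        return dot > 0 && dot < fileName.length() - 1;
    }
}
```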

Allow for different login name as username

Hello. I am a user creating a custom bot for my home wiki, the Official FTB Wiki. We are hosted on a wiki farm called Gamepedia, which sets up its wikis in a strange way with regard to bots: bots log in with a bot name that is different from their username. For this reason, I request some built-in way to deal with this in jwiki. Thanks for your time ^^

In the meantime I'm using this hacky solution (written in Groovy, I should note):

wiki = new Wiki(null, null, (HttpUrl)HttpUrl.parse("https://ftb.gamepedia.com/api.php"))
WAction.postAction(wiki, "login", false, FL.pMap("lgname", args[1], "lgpassword", args[2], "lgtoken", wiki.getTokens(WQuery.TOKENS_LOGIN, "logintoken")))
wiki.conf.uname = "ESAEBSAD"
wiki.conf.token = wiki.getTokens(WQuery.TOKENS_CSRF, "csrftoken")
def wlField = Wiki.class.getDeclaredField("wl")
wlField.setAccessible(true)
wlField.get(wiki).put(wiki.conf.hostname, wiki)
		
wiki.conf.isBot = wiki.listUserRights(wiki.conf.uname).contains("bot")

Example Page no longer exists

The example page where you say to make edits does not exist. The structure of the project has changed since you created it. I'm not certain on how to use this project as there is not a particular file to write code in. Could you please offer some assistance with this? Thanks.

Creating a non-continuing MQuery with a specified limit

Hello,

jwiki has helped me out a lot, since I'm a beginner programmer and very uncomfortable with MediaWiki.

I'm having trouble figuring out how to create a terminating query that stops making requests once it has received a certain number of items. For example, whatLinksHere("United States") will take forever, since as far as I know MediaWiki only returns 500 items per request.

I want to be able to make a query that terminates just after a few requests to MediaWiki and returns whatever's been received. Is there a good way to do this?

Thank you!
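Until the library supports this natively, the stopping logic can be wrapped around whatever issues the requests. A sketch under the assumption that each call to fetchBatch stands in for one continuation request to the API (all names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Collects items batch by batch and stops as soon as `limit` is reached,
// instead of exhausting every continuation the API offers.
public class CappedQuery {
    public static List<String> collect(Supplier<List<String>> fetchBatch, int limit) {
        List<String> out = new ArrayList<>();
        while (out.size() < limit) {
            List<String> batch = fetchBatch.get();
            if (batch.isEmpty())
                break; // no continuation left
            out.addAll(batch);
        }
        return out.size() > limit ? out.subList(0, limit) : out;
    }
}
```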

Code style: using interfaces instead of implementation

In my opinion (and in the opinion of other Java developers) it would be better to let the methods of this API return collection interfaces instead of their implementations. For example, the Wiki class has a lot of methods which return ArrayList<String> where it would be better to return List<String>. The same goes for Map instead of HashMap, and so forth.

What do you think @fastily ? It could be that I am overlooking some design consideration, in which case I would love you to enlighten me.

This issue is intended as constructive criticism, not merely bashing on you ;)

Login problems

I'm probably just doing this wrong, but doing...

wiki = new Wiki(args[1], args[2], "ftb.gamepedia.com")

gives me...

INFO: [<Anonymous> @ ftb.gamepedia.com]: Try login for ESAEBSAD@ESAEBSAD-bot
com.google.gson.JsonSyntaxException: com.google.gson.stream.MalformedJsonException: Use JsonReader.setLenient(true) to accept malformed JSON at line 1 column 12 path $
	at com.google.gson.JsonParser.parse(JsonParser.java:65)
	at com.google.gson.JsonParser.parse(JsonParser.java:45)
	at fastily.jwiki.core.WQuery.next(WQuery.java:297)
	at fastily.jwiki.core.Wiki.getTokens(Wiki.java:218)
	at fastily.jwiki.core.Wiki.login(Wiki.java:179)
	at fastily.jwiki.core.Wiki.<init>(Wiki.java:86)
	at fastily.jwiki.core.Wiki.<init>(Wiki.java:106)
	at fastily.jwiki.core.Wiki.<init>(Wiki.java:145)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
	at org.codehaus.groovy.reflection.CachedConstructor.invoke(CachedConstructor.java:83)
	at org.codehaus.groovy.runtime.callsite.ConstructorSite$ConstructorSiteNoUnwrapNoCoerce.callConstructor(ConstructorSite.java:105)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallConstructor(CallSiteArray.java:60)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:235)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:263)
	at xbony2.esaebsad2.ESAEBSAD2.main(ESAEBSAD2.groovy:37)
Caused by: com.google.gson.stream.MalformedJsonException: Use JsonReader.setLenient(true) to accept malformed JSON at line 1 column 12 path $
	at com.google.gson.stream.JsonReader.syntaxError(JsonReader.java:1568)
	at com.google.gson.stream.JsonReader.checkLenient(JsonReader.java:1409)
	at com.google.gson.stream.JsonReader.doPeek(JsonReader.java:542)
	at com.google.gson.stream.JsonReader.peek(JsonReader.java:425)
	at com.google.gson.JsonParser.parse(JsonParser.java:60)
	... 17 more
java.lang.NullPointerException
	at fastily.jwiki.core.Wiki.getTokens(Wiki.java:218)
	at fastily.jwiki.core.Wiki.login(Wiki.java:179)
	at fastily.jwiki.core.Wiki.<init>(Wiki.java:86)
	at fastily.jwiki.core.Wiki.<init>(Wiki.java:106)
	at fastily.jwiki.core.Wiki.<init>(Wiki.java:145)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
	at org.codehaus.groovy.reflection.CachedConstructor.invoke(CachedConstructor.java:83)
	at org.codehaus.groovy.runtime.callsite.ConstructorSite$ConstructorSiteNoUnwrapNoCoerce.callConstructor(ConstructorSite.java:105)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallConstructor(CallSiteArray.java:60)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:235)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:263)
	at xbony2.esaebsad2.ESAEBSAD2.main(ESAEBSAD2.groovy:37)
Exception in thread "main" java.lang.SecurityException: Failed to log-in as null @ ftb.gamepedia.com
	at fastily.jwiki.core.Wiki.<init>(Wiki.java:87)
	at fastily.jwiki.core.Wiki.<init>(Wiki.java:106)
	at fastily.jwiki.core.Wiki.<init>(Wiki.java:145)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
	at org.codehaus.groovy.reflection.CachedConstructor.invoke(CachedConstructor.java:83)
	at org.codehaus.groovy.runtime.callsite.ConstructorSite$ConstructorSiteNoUnwrapNoCoerce.callConstructor(ConstructorSite.java:105)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallConstructor(CallSiteArray.java:60)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:235)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:263)
	at xbony2.esaebsad2.ESAEBSAD2.main(ESAEBSAD2.groovy:37)

The arguments should be right ("ESAEBSAD@ESAEBSAD-bot" and my password).

method wiki.basicGET

Hi,

could you provide an example for the method wiki.basicGET, please? That would be awesome! I tried to use it, but I only got empty responses.
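While waiting for an official example, it may help to see the kind of raw action-API request such a GET boils down to. This stand-alone sketch builds the query URL with only the standard library; the endpoint and parameters are illustrative, not jwiki's own:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Assembles an action-API URL from alternating key/value pairs.
public class ApiUrl {
    public static String build(String endpoint, String... kv) {
        StringBuilder sb = new StringBuilder(endpoint).append('?');
        for (int i = 0; i < kv.length; i += 2) {
            if (i > 0) sb.append('&');
            sb.append(kv[i]).append('=')
              .append(URLEncoder.encode(kv[i + 1], StandardCharsets.UTF_8));
        }
        return sb.toString();
    }
}
```

If a URL like build("https://en.wikipedia.org/w/api.php", "action", "query", "meta", "siteinfo", "format", "json") returns data in a browser but wiki.basicGET seems empty, comparing the two requests is a good first debugging step.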

Unable to write to Scribunto Module namespace

Hello,

I am working on an automated task handler to do various update and maintenance routines, and I ran into an issue with jwiki only when attempting to write to Module:<pagename>...

I'm honestly stumped as to what is causing it. I confirmed that bot permissions are set and added ".withDebug(true)" to the builder for the Wiki object.

I can see it is loading the namespace as:

"828": {
        "id": 828,
        "case": "first-letter",
        "subpages": "",
        "canonical": "Module",
        "*": "Module"
      },

When it attempts to create a page or edit an existing page (overwrite) with the String "testpage" it gives:

DEBUG: [L1ghtsword @ wiki.<domain>.net]: {
  "error": {
    "code": "scribunto-lua-error-location",
    "info": "Lua error at line 1: \u0027\u003d\u0027 expected near \u0027\u0027.",
    "*": "See https://wiki.<domain>.net/wiki/api.php for API usage. Subscribe to the mediawiki-api-announce mailing list at \u0026lt;https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce\u0026gt; for notice of API deprecations and breaking changes."
  }
}

I worked out that "\u0027" is "'" and "\u003d" is "=", but I'm not sure why these show up in the error when editing... Is it possible to prevent this from happening when trying to write to namespace id 828?

Cleaning the markup from the text

Hi,

Thanks for your project, really good one. I tried your jwiki starter. While I get the text from the wikipedia article, it would be good to have something that removes the wikipedia markup and outputs clean text.

Example output:
Some native animals are also adept at surviving bushfires.Effects of fire on plants and animals {{cite web......

Desired output:
Some native animals are also adept at surviving bushfires
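jwiki does not parse wikitext, so any cleanup has to happen client-side. A very naive first pass is sketched below; a real wikitext parser (or the TextExtracts API, where available) is the robust route, since this only handles flat, non-nested markup:

```java
public class MarkupCleaner {
    // Rough first pass: drops {{...}} templates and <ref>...</ref> tags, and
    // unwraps [[link|label]] markup. Nested templates and tables would need
    // a real wikitext parser.
    public static String clean(String wikitext) {
        String s = wikitext;
        s = s.replaceAll("\\{\\{[^{}]*\\}\\}", "");                       // simple templates
        s = s.replaceAll("(?s)<ref[^>]*>.*?</ref>", "");                  // inline references
        s = s.replaceAll("\\[\\[(?:[^\\]|]*\\|)?([^\\]]*)\\]\\]", "$1");  // links
        return s.trim();
    }
}
```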

getTextExtract not working after implementing external Authentification

Hi, I used jwiki's getTextExtract to get the first part of a page. It worked fine, but after I implemented external authentication via PluggableAuth, this method always returns null.

My class looks like this:

	private final String botUsername = "Admin1@bot";
	private final String botPw = "<bot-password>";

	private final Wiki wiki;

	public PageServiceImpl() {
		HttpUrl.Builder urlBuilder = HttpUrl.parse(url).newBuilder();

		this.wiki = new Wiki.Builder()
				.withApiEndpoint(urlBuilder.build())
				.withLogin(botUsername, botPw)
				.build();

		this.restTemplate = new RestTemplate();

	}
...

        public String getShortText(String title) {
		String result = wiki.getTextExtract(title);
		
		System.out.println(wiki.exists(title));
		System.out.println(result);
		
		return result;
	}

There is no error in the logs, and the login seems to work fine, but the two System.out.println statements print "true" and "null". So the page is found and exists, but for some reason the extract of the wiki page cannot be read.
As I said, the only thing that changed is that I configured MediaWiki to use external authentication via PluggableAuth; it worked just fine before that.

Login error when using a Fandom wiki with the builder

I managed to use a Fandom wiki with the builder, and I can fetch all the info and such, but when I try to log in, it fails. I tried:

  • my real username and password
  • creating a bot account
  • the two username/password pairs that come with a bot account

Here is the log:

    INFO: [ @ laboulangerie.fandom.com]: Try login for CroissantBot@CroissantBot
    false

Here is my simple test code (with username and password hidden):

public class App {
    public static void main( String[] args ){
    	Wiki wiki = new Wiki.Builder().withDomain("laboulangerie.fandom.com/fr/api.php").build();
    	System.out.println(wiki.login("myusername", "mypassword"));
    }
}

Crash when trying to create wiki object with Wiki.Builder().build()

NOTE: I am trying to run this code as part of an Android app using Android Studio and Gradle.
I also noticed someone opened an issue about this problem in the past, but no solution was found.

This is my code:

import io.github.fastily.jwiki.core.*;
import io.github.fastily.jwiki.dwrap.*;
import okhttp3.HttpUrl;

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Wiki wiki = new Wiki.Builder().build();
    }
}

When creating the wiki object the program crashes with the error being:
"java.lang.NullPointerException: Attempt to read from field 'com.google.gson.JsonObject io.github.fastily.jwiki.core.WQuery$QReply.input' on a null object reference in method 'void io.github.fastily.jwiki.core.Wiki.refreshNS()'"

Add http version of Builder.withDomain()

return withApiEndpoint(HttpUrl.parse(String.format("https://%s/w/api.php", domain)));

If I call Wiki.Builder().withDomain(), the format string builds the API URL with https.
But my private test wiki is not configured for https, and there is no convenient way to set an http URL with the builder; calling withDomain() then causes a NullPointerException because there is no https URL for the private wiki.

Connecting to a local wiki

Hello everyone,
What is the correct procedure for connecting to a wiki running locally on your PC?
When I do:
Wiki wiki = new Wiki.Builder().build();
it connects to en.wikipedia.org.
And when I do:
Wiki wiki = new Wiki.Builder().withDomain("mediawiki").build();
(mediawiki being my hostname), it connects to 127.0.0.1:443 instead of 127.0.0.1:80 or directly to my wiki.

Regards!

GPL with classpath exception

Is there a chance you could add a classpath exception to your licensing? That way we could use this fine lib in internal products without the legal department screaming at us. ;)

`Wiki#prefixIndex` is improperly documented

The documentation for Wiki#prefixIndex states:

Does the same thing as Special:PrefixIndex.

However, this is incorrect. Looking at the MediaWiki documentation, it states:

Perform a prefix search for page titles.

Despite the similarity in names, this module is not intended to be equivalent to Special:PrefixIndex; for that, see action=query&list=allpages with the apprefix parameter. The purpose of this module is similar to action=opensearch: to take user input and provide the best-matching titles. Depending on the search engine backend, this might include typo correction, redirect avoidance, or other heuristics. 

This means that Wiki#prefixIndex should have its documentation updated to state this fact. The actual method that should note it does the same thing as Special:PrefixIndex should be Wiki#allPages, as noted in the MediaWiki documentation.

I also maintain a fork with updates to jwiki at SizableShrimp/jwiki, e.g. caching login values and retrying requests after logging in if a badtoken error is received, along with other general updates and improvements. I haven't made a PR for these changes because they are breaking API changes, and my fork is slightly modified to be compatible with JitPack so that I don't have to host my own Maven repo. Let me know if you would like a PR with these changes in the near future.

Fandom usage

Is it possible to use this to create fandom pages ?

MalformedJsonException in getLinksOnPage

I get many of the following errors when calling getLinksOnPage many times.
It's similar to the login error from another issue, but when I change
new Wiki("en.wikipedia.org"); to
new Wiki("en.wikipedia.org/w/api.php");,
nothing changes.
Here's the code that triggers the exception: https://hastebin.com/bopagefewo.cs
The strange thing is that the code still works after a while, but a stack trace full of thousands of these is not desirable.

com.google.gson.JsonSyntaxException: com.google.gson.stream.MalformedJsonException: Use JsonReader.setLenient(true) to accept malformed JSON at line 1 column 12 path $
at com.google.gson.JsonParser.parse(JsonParser.java:65)
at com.google.gson.JsonParser.parse(JsonParser.java:45)
at fastily.jwiki.core.WQuery.next(WQuery.java:303)
at fastily.jwiki.core.MQuery.getContProp(MQuery.java:72)
at fastily.jwiki.core.MQuery.getLinksOnPage(MQuery.java:289)
at fastily.jwiki.core.Wiki.getLinksOnPage(Wiki.java:799)
at cpen221.mp3.wikimediator.WikiMediator.lambda$getConnectedPages$0(WikiMediator.java:175)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: com.google.gson.stream.MalformedJsonException: Use JsonReader.setLenient(true) to accept malformed JSON at line 1 column 12 path $
at com.google.gson.stream.JsonReader.syntaxError(JsonReader.java:1568)
at com.google.gson.stream.JsonReader.checkLenient(JsonReader.java:1409)
at com.google.gson.stream.JsonReader.doPeek(JsonReader.java:542)
at com.google.gson.stream.JsonReader.peek(JsonReader.java:425)
at com.google.gson.JsonParser.parse(JsonParser.java:60)
... 11 more

Getting the url of a page

How do I get the URL of a page? I went through the docs multiple times but still didn't find a method for it. Kindly let me know if there is one.
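jwiki may not expose this directly, but a page URL can usually be derived from the title: spaces become underscores and the result is percent-encoded, appended to the wiki's article path. A sketch assuming the default "/wiki/" article path (note that URLEncoder over-encodes some characters, e.g. "/" in subpage titles):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Derives a page URL from a base host and a page title, assuming the
// default MediaWiki article path of "/wiki/".
public class PageUrl {
    public static String of(String base, String title) {
        String t = URLEncoder.encode(title.replace(' ', '_'), StandardCharsets.UTF_8);
        return base + "/wiki/" + t;
    }
}
```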

VisualEditor Preview API

Can you please add new methods to interact with the VisualEditor API?
I didn't find any documentation for it (but didn't look for long).
I am particularly interested in what happens when you click the Preview button in VisualEditor: it lets you preview any wikicode sent to it by returning the corresponding HTML.

Here is the client code of the VisualEditor extension:
https://github.com/wikimedia/mediawiki-extensions-VisualEditor/blob/ef25e831b99471bde1510d925d8fcc46da115e54/modules/ve-mw/init/ve.init.mw.ArticleTarget.js#L964

Here is the API server code:
https://github.com/wikimedia/mediawiki-extensions-VisualEditor/blob/ef25e831b99471bde1510d925d8fcc46da115e54/includes/ApiVisualEditor.php#L224

Filter to get contributions that are new pages only

I don't know if this is possible with the MediaWiki API, but on Special:Contributions I can filter by pressing the button labeled "Only show edits that are page creations." My bot currently looks through every contribution, which is very taxing (I crashed the wiki multiple times with "429 Too Many Requests"), so if this can be filtered through the API, that would be a very good thing.
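For what it's worth, the API can apply this filter server-side: list=usercontribs accepts ucshow=new, which restricts the results to edits that created a page. I have not checked whether jwiki wraps this, but the raw query would look like the URL assembled below (endpoint and user are placeholders):

```java
// Builds a usercontribs query restricted to page creations via ucshow=new.
public class NewPagesQuery {
    public static String url(String endpoint, String user) {
        return endpoint + "?action=query&list=usercontribs&ucuser=" + user
             + "&ucshow=new&uclimit=max&format=json";
    }
}
```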

Login token

I noticed I am limited to only 500 results and wanted to fix this by logging in; I believe a logged-in bot can retrieve 5000 results. Unfortunately, Wikia uses MediaWiki version 1.19.24, so retrieving a login token via this URL fails: https://tibia.wikia.com/api.php?meta=tokens&format=json&action=query&type=login with the response:
{"warnings":{"query":{"*":"Unrecognized value for parameter 'meta': tokens"}}}

This seems to be because requesting a token with "meta=tokens" was introduced in MediaWiki 1.20.0. It seems Wikia is stuck on this horribly old version of MediaWiki :(

Do you think there is some workaround to request a token and log in the old way, which works for MediaWiki 1.19? Or would that technically not be possible?

resolveRedirect causes an NPE

When I call resolveRedirect I get a NullPointerException:

java.lang.NullPointerException: null
	at java.util.ArrayList.addAll(ArrayList.java:581)
	at fastily.jwiki.core.MQuery.getNoContList(MQuery.java:127)
	at fastily.jwiki.core.MQuery.resolveRedirects(MQuery.java:418)
	at fastily.jwiki.core.Wiki.resolveRedirect(Wiki.java:1030)

To reproduce:

String titre = "Java EE";
Wiki wiki = new Wiki("fr.wikipedia.org");
String redirection = wiki.resolveRedirect(titre);

Retrieving multiple page contents in one mediawiki API call

Hello,

I tried out this library today for my GitHub project TibiaWikiApi. I am exploring whether to use your library; currently I'm using jwbf.
I want to do the following thing:

  1. Get a list of titles in a category, this is no problem with your wiki.getCategoryMembers() method.
  2. Get all those pages individually.

For this second step I have not found an efficient method yet. It is of course possible to make multiple subsequent calls to the wiki API using your wiki.getPageText() for every page title, but I would like to do this more efficiently. The MediaWiki API makes this possible, e.g.:
https://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=jsonfm&formatversion=2&titles=Foo|Bar|Cat
All page titles are listed in the titles parameter, separated by a pipe. The only limit is how many characters fit in a URL before it gets too long. As this results in fewer API calls, this method is a lot faster.

What method from this library can be used to achieve this? If this is not possible yet, I can make an attempt to build this myself and offer a pull request.
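The action API accepts multiple pipe-joined titles per request (50 by default, 500 with the apihighlimits right), so the client-side piece is just batching the title list. A sketch of that batching (the class name is mine); jwiki's MQuery class appears to batch its queries along these lines, so it is worth checking whether an MQuery method already covers this case:

```java
import java.util.ArrayList;
import java.util.List;

// Splits a title list into pipe-joined batches; each batch string is what
// would go into the titles= parameter of one query request.
public class TitleBatcher {
    public static List<String> batches(List<String> titles, int perRequest) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < titles.size(); i += perRequest)
            out.add(String.join("|",
                titles.subList(i, Math.min(i + perRequest, titles.size()))));
        return out;
    }
}
```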

Android Issue

Trying to run Wiki wiki = new Wiki.Builder().build(); causes the error "Caused by: java.lang.NullPointerException: Attempt to read from field 'com.google.gson.JsonObject io.github.fastily.jwiki.core.WQuery$QReply.input' on a null object reference".

This code should work with Android, correct?


Accessing wiki pages from proxy environment

I am not able to use jwiki because my PC accesses the internet through a proxy. Given proxy credentials, is there any way to access wiki pages, and how could it be done?
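jwiki's builder does not appear to expose the underlying OkHttp client, but OkHttp's default proxy selection honours the standard JVM proxy properties, so setting them before constructing the Wiki is worth trying. Host and port below are placeholders; an authenticated proxy would additionally need a java.net.Authenticator:

```java
// Points the JVM-wide default proxy selector at an HTTP proxy, which the
// HTTP client should pick up when no explicit proxy is configured.
public class ProxySetup {
    public static void configure(String host, int port) {
        System.setProperty("http.proxyHost", host);
        System.setProperty("http.proxyPort", Integer.toString(port));
        System.setProperty("https.proxyHost", host);
        System.setProperty("https.proxyPort", Integer.toString(port));
    }
}
```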

Request to make the logging optional

Hi,

I'd like to thank you for this great and easy wiki API; it really works well! It logs every call, which is great, but I want to get rid of the logs when the library is used inside an application, and there is currently no way to turn them off. Could you please add a feature to disable logging when it's not needed?
Thank you!

Regards

idea to make contributions easier

Hi,
Thank you for a really useful library. I need to add a "fetch all pages of a given category" feature, so I looked at the code to see how I could contribute.
My first impression is that, even though this is trivial, some naming conventions could change to help contributors find their way: NS, FS, etc. could be spelled out as longer names.
I forked the repo and am making these changes for my own convenience. Would you accept a pull request along these lines?

getTextExtract returns null

Hi,

I want to use getTextExtract, but it keeps returning null.

I have a simple wiki page called "Otto von Bismarck" whose text includes:

Otto von Bismarck war eine Person aus Deutschland.

Ich bin nicht Teils des ersten Absatzes

I tried running both getTextExtract and getPageText:

System.out.println(wiki.getTextExtract("Otto_von_Bismarck"));
System.out.println(wiki.getPageText("Otto_von_Bismarck"));

which returns:

Feb. 01, 2023 09:30:39 AM
INFO: [Admin @ <wiki-ip> ]: Getting a text extract for Otto_von_Bismarck
null
Feb. 01, 2023 09:30:39 AM
INFO: [Admin @ <wiki-ip>]: Getting page text of Otto_von_Bismarck
Otto von Bismarck war eine Person aus Deutschland.

Ich bin nicht Teils des ersten Absatzes
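One possible explanation (unverified): getTextExtract goes through prop=extracts, which is provided by the TextExtracts extension, and that extension is not bundled with a bare MediaWiki install. That would explain why getPageText works while the extract is null. Probing the endpoint directly would confirm it; the helper below just assembles the probe URL (class and method names are mine):

```java
// Builds a prop=extracts probe URL; if TextExtracts is missing, the API
// responds with a warning about an unrecognized value for "prop".
public class ExtractProbe {
    public static String url(String endpoint, String title) {
        return endpoint + "?action=query&prop=extracts&exintro=1&titles="
             + title.replace(' ', '_') + "&format=json";
    }
}
```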

Querypage(s) support

Basically, support for DoubleRedirects, Ancientpages, Mostimages, etc. would be very useful. I'm trying to get it working on my own but am having some difficulties. Thanks for your time ^^ (example operation in an ApiSandbox)
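These special pages are exposed through list=querypage, with qppage set to the special page's canonical name. The raw query looks like the URL assembled below (the helper names are mine); the Groovy snippet earlier on this page uses wiki.querySpecialPage, which appears to wrap exactly this:

```java
// Builds a list=querypage query for a QueryPage-backed special page such as
// DoubleRedirects, Ancientpages, or Mostimages.
public class QueryPageUrl {
    public static String url(String endpoint, String page, int limit) {
        return endpoint + "?action=query&list=querypage&qppage=" + page
             + "&qplimit=" + limit + "&format=json";
    }
}
```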

URL to wiki domain assumes format which is not general enough

In the Wiki class you provide a String called 'domain' as input to the constructor. It is used to construct a URL in the following format: https://%s/w/api.php

Although this is correct for Wikipedia, it is not how it works for Wikia wikis (like the one I use). There the format is https://%s/api.php, i.e. without the /w path segment.

My 'work-around' is to use the following constructor:
wiki = new Wiki(null, null, HttpUrl.parse("https://tibia.wikia.com/api.php"), null, null)

But it would be nice if there were a more user-friendly way, or some explanation in the documentation that you have to take care of this if you're using a wiki other than Wikipedia.

error: cannot access Wiki

class file has wrong version 55.0, should be 52.0
Please remove or make sure it appears in the correct subdirectory of the classpath.
