ankicommunity / ankicommunity-api-server
Django-based Anki Sync and API server
License: GNU Affero General Public License v3.0
I managed to sync in various situations, with and without images, but I also ran into a situation where Anki Desktop reports this error:
/djs/sync/applyGraves
POST HTTP/1.1
Content-Length: 684
Content-Type: multipart/form-data; boundary=8c0a09497de60d5e-50bd504f1c7e1b00-830495ea2a49990d-eb6af20e78b0e954
Http-X-Real-Ip: 192.168.32.1
Http-X-Forwarded-For: 192.168.32.1
Http-X-Forwarded-Proto: http
Http-Host: djankiserv.wikidattica.org
Http-Connection: close
Http-Accept: */*
Http-X-Forwarded-Host: djankiserv.wikidattica.org
Http-X-Forwarded-Port: 443
Http-X-Forwarded-Server: c862b9f3bf7f
Http-Accept-Encoding: gzip
<QueryDict: {'c': ['1'], 'k': ['njs0i8hylaltkpbufn7egtvboq22zik3'], 's': ['r`E^Bs={pI'], 'data': [<InMemoryUploadedFile: data ()>]}>
Internal Server Error: /djs/sync/applyGraves
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.8/site-packages/django/core/handlers/base.py", line 179, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/views/generic/base.py", line 70, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/rest_framework/decorators.py", line 50, in handler
return func(*args, **kwargs)
File "./djankiserv/sync/views.py", line 123, in base_applyGraves
col_handler.applyGraves(chunk=data.get("chunk"))
File "./djankiserv/sync/__init__.py", line 53, in applyGraves
self.remove(chunk)
File "./djankiserv/sync/__init__.py", line 176, in remove
self.col.decks.rem(oid, children_too=False)
File "./djankiserv/unki/decks.py", line 220, in rem
self.col.sched.emptyDyn(did)
AttributeError: 'Scheduler' object has no attribute 'emptyDyn'
Not Found: /djs/sync/abort
The same collection synced to a completely new user doesn't have any problems.
Are there any tests I can run to debug what's happening?
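For what it's worth, the missing `emptyDyn` suggests a scheduler-version mismatch: the method exists on Anki's older v1 scheduler but not on the scheduler class actually instantiated here. A minimal sketch of a compatibility shim follows; note that `empty_filtered_deck` as the newer name is an assumption to verify against the installed scheduler, not something confirmed by this traceback:

```python
def empty_dyn_compat(sched, did):
    """Call the 'empty dynamic/filtered deck' method under whichever name exists.

    'empty_filtered_deck' is an *assumed* newer name -- verify against the
    scheduler class you actually have before relying on it.
    """
    for name in ("emptyDyn", "empty_filtered_deck"):
        fn = getattr(sched, name, None)
        if fn is not None:
            return fn(did)
    raise AttributeError(
        "scheduler %r has no emptyDyn-equivalent" % type(sched).__name__
    )
```

Dropping a shim like this into `djankiserv/unki/decks.py` in place of the direct `self.col.sched.emptyDyn(did)` call would at least tell you which method names the failing scheduler exposes.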
After working at a company (well respected in France) that always rebased before merging, I have come to appreciate the clean git history it promotes and the smaller, incremental development cycle (with feature-flag activation if required).
Given the (current and likely long-term) usages and risk profile of this project, it also seems to be a logical way to move forward here. In a large, multi-national company that has releases that will live for years and both point and minor releases on those releases, I can understand wanting to go with git flow. This project seems to merit significantly more agility.
That said, I am also all for obligatory reviews, even for me. This can become a little tricky though, if there aren't at least a couple of people who know how most of the code works.
Hi,
I've read related issues and learned that djankiserv is going to replace anki-sync-server.
How about the plan for the docker version?
I think there are plenty of users waiting for a Docker version of djankiserv so that they can upgrade to the most recently updated version of Anki.
Thanks.
To support async connections, auto-generate OpenAPI documentation (#9), etc.
I'm slowly making my way through the initial deployment of a staging env.
When running:
# helm install -f overrides.yaml djankiserv_test djankiserv/charts/djankiserv/
I get the following error
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "ClusterIssuer" in version "cert-manager.io/v1alpha2"
I confirmed that the cert-manager is installed:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cert-manager cert-manager 1 2021-02-20 17:23:43.794226023 +0000 UTC deployed cert-manager-v0.16.1 v0.16.1
Here's my overrides.yaml (with sensitive information scrubbed):
djankiserv:
  host: djankiserv.mydomain.com
  ingress:
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-staging
    tls:
      secretName: letsencrypt-cert-staging
  clusterissuer:
    staging:
      enabled: true
      email: [email protected]
This would be an improvement to the current JWT token authentication. However, since the current system works, this is not a high priority ticket.
Please +1 this ticket if this is a feature you would like to see implemented.
The code in the settings knows that the server can support both postgresql and mariadb. The same is true with the requirements.txt files.
We can reduce the complexity of the codebase by making the features database-agnostic. The goal is to have a single database (most likely PostgreSQL) as the default, while allowing users to switch to a MariaDB/MySQL database by modifying the config.
This is technically what the codebase is doing, but it treats both databases as first-class citizens, which unnecessarily bloats the codebase.
It might be worth combining the userdb and the maindb so the application only needs a single database by default.
It will probably be useful to support the existing environment variables for backward compatibility. But my gut tells me they won't be used by most of our users, so having a single database configuration should lower the barrier to entry for new users and thus improve adoption of the server.
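As a sketch of what "PostgreSQL by default, MySQL by config" could look like in Django settings, something like the following works; the `DJANKISERV_DB_*` variable names here are hypothetical, invented for illustration, and not the project's current environment variables:

```python
import os


def build_databases(engine=None):
    """Return a Django DATABASES dict for the chosen backend.

    PostgreSQL is the default; MariaDB/MySQL can be selected via config.
    All DJANKISERV_DB_* names are illustrative placeholders.
    """
    engine = engine or os.environ.get("DJANKISERV_DB_ENGINE", "postgresql")
    backends = {
        "postgresql": "django.db.backends.postgresql",
        "mysql": "django.db.backends.mysql",  # also covers MariaDB
    }
    return {
        "default": {
            "ENGINE": backends[engine],
            "NAME": os.environ.get("DJANKISERV_DB_NAME", "djankiserv"),
            "USER": os.environ.get("DJANKISERV_DB_USER", "djankiserv"),
            "PASSWORD": os.environ.get("DJANKISERV_DB_PASSWORD", ""),
            "HOST": os.environ.get("DJANKISERV_DB_HOST", "localhost"),
            "PORT": os.environ.get("DJANKISERV_DB_PORT", ""),
        }
    }


# settings.py would then contain just:
# DATABASES = build_databases()
```

A single code path like this, with the backend chosen at the very edge of configuration, is what would let the rest of the codebase stop special-casing the two databases.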
Anki offers a choice between a couple of spaced repetition algorithms; see: https://faqs.ankiweb.net/what-spaced-repetition-algorithm.html and https://anki.tenderapp.com/kb/anki-ecosystem/the-anki-21-scheduler.
These are both clearly deficient when you have significant extra information about the items the learner is memorising. As an example, say a learner has a list of vocabulary items they are learning in a foreign language. They have learnt the word "chunking", and the next repetition is scheduled for 7 days' time. Now assume the learner has read the word "chunking" in context 50 times in the intervening period and hasn't looked it up on any of those occasions: there is clearly little utility in having it repeated in Anki. While the learner will obviously not spend a great deal of time marking the item "easy", it may be more efficient to increase the interval before the next review. Conversely, suppose the learner has only seen the word 3 times and has looked it up on every occasion. It is then likely useful not only to show the word, but possibly to show it earlier than predicted, and possibly more often.

The algorithm is based only on the information it gets from the tool; if significant extra information about the learner's knowledge of an item is available, that information can be used to tweak the algorithm. The effect tweaking has on memorisation is also somewhat beside the point if not being able to tweak means the learner uses the tool less: the overall benefit is degraded if the learner stops using the tool because it is frustrating.
There are also cases (for example, language learning) where it is useful to have the entire set of vocabulary loaded (so "all words of the language"). If a learner comes across a word they already know, they should be able to tell Anki so: the word shouldn't be shown to them in the normal way, but should instead be considered "known", with some interval multiplier already applied.
While the system will adapt to most of these cases relatively quickly, when the number of items goes into the thousands (as when learning the vocabulary of any language), having very sophisticated control over the showing of cards lets the system carefully manage the cognitive load placed on the learner. It also significantly enriches what can be done with the database: hundreds or thousands of words can be considered 'known' yet still have that knowledge confirmed at a later date through the algorithm.
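The "tweak the interval from external reading data" idea above can be sketched as a small heuristic. Everything here is illustrative: the thresholds and multipliers are arbitrary placeholders, not the algorithm this project implements:

```python
def adjust_interval(base_days, exposures, lookups):
    """Heuristic interval tweak from external reading data (illustrative only).

    base_days: interval the scheduler already proposed
    exposures: times the learner saw the word in context outside Anki
    lookups:   times they had to look it up on those occasions
    """
    if exposures == 0:
        # No external signal: keep the scheduler's interval.
        return base_days
    lookup_rate = lookups / exposures
    if lookup_rate == 0:
        # Seen often, never looked up: push the review further out.
        return base_days * 2
    if lookup_rate >= 0.5:
        # Frequently looked up: review sooner than scheduled.
        return max(1, base_days // 2)
    return base_days
```

With the article's two scenarios: 50 context exposures and no lookups doubles the 7-day interval, while 3 exposures with 3 lookups halves it.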
The review_in parameter is an attempt to control this on the API note creation/modification methods. It defaults to the standard behaviour (i.e., an added note is considered new, and its cards are scheduled as any new cards would be). As this functionality is entirely non-standard, it needs to be properly tested and its consequences fully understood.
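A minimal sketch of the review_in semantics described above, assuming (my assumption, not confirmed by the source) that the parameter means "schedule the first review N days out instead of treating the card as new":

```python
import datetime


def schedule_note(review_in=None, today=None):
    """Sketch of a review_in-style override; the semantics here are assumed.

    review_in=None -> standard behaviour: the note's cards are new, due today.
    review_in=N    -> treat the item as partially known; first review in N days.
    """
    today = today or datetime.date.today()
    if review_in is None:
        return {"status": "new", "due": today}
    return {
        "status": "review",
        "due": today + datetime.timedelta(days=review_in),
    }
```

For example, `schedule_note(30)` would mark a freshly added but already-known word for its first confirmation a month out, rather than queuing it with the new cards.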
I think it'd be beneficial and inviting for new users to have a Dockerfile, ideally with an image hosted on Docker Hub. If you share my opinion, I'd be glad to work on it.
Hello,
I'm currently crafting a Docker image and Docker Compose configuration that is as user-friendly as possible. See the progress here:
I have successfully launched a server with this stack:
djankiserv at kuklinistvan@bec42f3
This is a fork which only contains some pending documentation in a PR, plus one additional reverted commit, bdbffe9, which breaks the requirements in the Docker image. It was added a couple of hours ago :) I know this is not the right solution and I will fix it later; I just wanted to get straight to the point.
PostgreSQL 13.1
NGINX 1.19.5, proxy and static file server, plain http
On the client side:
On Anki Desktop the synchronization seems to work more or less. On AnkiDroid I get a generic error message. See also:
WHY: This will read the Django project and display interactive documentation on how to use the API, minimising the barrier to entry for developing on top of it. It will also allow for an API-first approach.
HOW: This was just the first package that came up when I googled it: https://django-rest-swagger.readthedocs.io/en/latest/. But there are a bunch of packages that might help with this.
WHAT: This is an example, see: https://petstore.swagger.io/.
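For django-rest-swagger specifically, the wiring is roughly the config fragment below. Treat it as an illustrative sketch only: the view title is a placeholder, and the package has since been deprecated in favour of tools like drf-yasg and drf-spectacular, which would need different wiring:

```python
# urls.py sketch -- also requires 'rest_framework_swagger' in INSTALLED_APPS
from django.urls import path
from rest_framework_swagger.views import get_swagger_view

schema_view = get_swagger_view(title="djankiserv API")  # placeholder title

urlpatterns = [
    path("docs/", schema_view),
]
```

Whichever package is chosen, the payoff is the same: the interactive docs are generated from the code, so they can't drift from the implementation.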
Hello!
I have read the documentation but failed to start the server locally. What do I need to configure in settings? Can anyone help with this?
Hello,
I have been using Anki but want to be able to use it on my phone as well as my computer. I am running Anki 2.1.54 on Linux. I really don't want to use an old version, as I am using a number of plugins that don't work properly on old versions. I have not even messed with AnkiDroid yet, as I want to get my desktop synced to a remote server first and then download it all onto the phone. I don't want any of my data on anyone else's server, so I don't want to use AnkiWeb for syncing.
I spun up the anki sync server Docker container and tried to sync, but I get an error:
A network error occurred.
Error details: error decoding response body: missing field `mod` at line 1 column 99
Serverside the logs show a 403 like so:
[2022-12-29 06:34:20,095]:INFO:ankisyncd.http:172.18.0.1 "GET /msync/begin?k=7056d457e1c9f6c57f711165ae99fb9f&v=anki%2C2.1.54+%28b6a7760c%29%2Clin%3Aubuntu%3A22.04 HTTP/1.0" 403 157
This seems rather odd as the login part works correctly, so what's the 403 about?
I assume this is because my Anki is too new. I tried downloading Anki 2.1.19, as that appeared to be a known working version, but that version of Anki won't even open my decks properly, so I can't test syncing with it.
I am kind of figuring this server is a lost cause; however, if it could potentially be made to work, I would love some pointers on the direction to go. I just don't want to sink a ton of time into this versus just doing the flashcards on a computer.
I also notice djankiserv, which says it works with Anki 2.1+. However, the last Docker image update was about 2 years ago, which seems rather suspect. Does this server actually work with something as new as Anki 2.1.54? If so, can I use the image on Docker Hub, or would I need to build my own?
Any information about the current state of things would be great. It's super confusing going between all the repos trying to figure out what is actually current and what works with which versions.
Finally, I did notice there is a built-in Anki sync server in recent versions, but I want to run it on a server, where having the whole GUI is inconvenient, and I really need media syncing, so it's just not what I need at the moment.
Hi, got a little issue,
I've created a new user with an empty collection, synced with the server account (whose collection is also empty), then added a deck with about 13,000 cards and 6,000 media files. When syncing started, the process crashed every 2,000 cards; Anki reports that AnkiWeb encountered a problem.
However, each chunk of 2,000 is uploaded to the server fine; to sync the 6,000 media files, 3 sync sessions were necessary. After that, no problems were detected. The nginx error log contains no report about it, and the media log only says that media syncing failed, without specifying the details.
Also, if I create a user, add the deck, and only then perform the first synchronization with the server, no crash occurs (or it occurs rarely; I have only tested it a few times, uploading the deck and downloading it back from another account).
No problems at all with decks containing fewer than 2,000 cards.
We should design the API to define Resources and HTTP Verbs.
This is a decent read: https://blog.restcase.com/4-maturity-levels-of-rest-api-design/.
Actions:
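As a purely hypothetical illustration of the resource/verb split (none of these paths are djankiserv's actual endpoints), the design exercise amounts to filling in a table like this:

```python
# Hypothetical resource -> allowed-verbs mapping for a REST redesign.
# Paths and verbs are illustrative placeholders, not the project's current API.
API_DESIGN = {
    "/decks": {"GET", "POST"},                # list decks, create a deck
    "/decks/{id}": {"GET", "PUT", "DELETE"},  # read, replace, remove one deck
    "/decks/{id}/cards": {"GET", "POST"},     # cards within a deck
    "/notes/{id}": {"GET", "PATCH", "DELETE"},
}


def allowed(resource, verb):
    """True if the design permits `verb` on `resource`."""
    return verb in API_DESIGN.get(resource, set())
```

The point of the exercise is that each noun gets a stable URL and the verbs carry all the action semantics, instead of RPC-style paths like /sync/applyGraves.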
The repo is here: https://github.com/ankicommunity/anki-desktop-addons.
This is more of an initiative because we've got a lot of addons flying around; we're moving them all to one place.
Hello,
I don't have a MySQL client on my machine and ended up with the following error when trying to install the required dependency (I tried both with pip install and from the sources).
As the app seems to be compatible with both PostgreSQL and MySQL, not having a MySQL client shouldn't prevent using it. Am I right?
Collecting mysqlclient==2.0.1 (from -r requirements.txt (line 7))
  Using cached https://files.pythonhosted.org/packages/a5/e1/e5f2b231c05dc51d9d87fa5066f90d1405345c54b14b0b11a1c859020f21/mysqlclient-2.0.1.tar.gz
  Complete output from command python setup.py egg_info:
    /bin/sh: mysql_config: command not found
    /bin/sh: mariadb_config: command not found
    /bin/sh: mysql_config: command not found
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-install-tu4ee4ry/mysqlclient/setup.py", line 15, in <module>
        metadata, options = get_config()
      File "/tmp/pip-install-tu4ee4ry/mysqlclient/setup_posix.py", line 65, in get_config
        libs = mysql_config("libs")
      File "/tmp/pip-install-tu4ee4ry/mysqlclient/setup_posix.py", line 31, in mysql_config
        raise OSError("{} not found".format(_mysql_config_path))
    OSError: mysql_config not found
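There are two usual ways out, depending on your setup: install the MySQL/MariaDB client headers so mysqlclient can build, or, if you only intend to run against PostgreSQL, filter the mysqlclient pin out of the requirements before installing. A sketch of the second option (the apt package name is for Debian/Ubuntu and will differ on other distros; the demo requirements file and its version pins are illustrative):

```shell
# mysqlclient needs the C client headers to build; on Debian/Ubuntu:
#   sudo apt-get install default-libmysqlclient-dev build-essential
#
# PostgreSQL-only alternative: strip the mysqlclient pin before installing.
# (demo requirements file -- versions are illustrative)
printf 'Django==3.1\nmysqlclient==2.0.1\npsycopg2-binary==2.8.6\n' > requirements-demo.txt
grep -v '^mysqlclient' requirements-demo.txt > requirements-postgres.txt
cat requirements-postgres.txt
# then: pip install -r requirements-postgres.txt
```

This sidesteps the build entirely for pure-PostgreSQL deployments, which ties into the issue above about not treating both databases as mandatory first-class citizens.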