alexandresenpai / enma
Enma is a Python library designed to fetch and download manga and doujinshi data from many sources, including Manganato and NHentai.
License: MIT License
This is an enhancement request to add the capability to filter by the page count of a source more easily. It is only really applicable to sources like nhentai where, as far as I can tell, there is only one "chapter".
Currently, I do something along the lines of the following to filter by a minimum page count (I probably don't need the sum loop, as I believe nhentai only has one chapter per result, but ¯\_(ツ)_/¯):
search_results_with_minimum_pages = []
search_results = enma.search(query)
for result in search_results.results:
    doujin_information = enma.get(result.id, False)
    total_page_count = sum(
        chapter.pages_count
        for chapter in doujin_information.chapters
        if chapter is not None and chapter.pages_count is not None
    )
    if total_page_count >= doujin_page_minimum:
        search_results_with_minimum_pages.append(doujin_information)
File "/usr/local/lib/python3.7/dist-packages/NHentai/nhentai.py", line 19
@Cache(max_age_seconds=3600, max_size=1000, cache_key_position=1, cache_key_name='id').cache
^
SyntaxError: invalid syntax
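This SyntaxError is most likely a Python-version issue rather than a typo in the file: `@Cache(...).cache` is an arbitrary decorator expression, which only became legal in Python 3.9 (PEP 614); on 3.7/3.8 the grammar rejects attribute access on a call result in a decorator line. A sketch of the pre-3.9 workaround (the `Cache` class below is a made-up stand-in, not the library's real implementation):

```python
# Stand-in for the decorator factory; the real one lives in NHentai/utils/cache.py.
class Cache:
    def __init__(self, **kwargs):
        pass

    def cache(self, function):
        def wrapper(*args, **kwargs):
            return function(*args, **kwargs)
        return wrapper

# Pre-3.9 workaround: bind the decorator expression to a plain name first,
# because "@Cache(...).cache" is a SyntaxError before PEP 614 (Python 3.9).
cached = Cache(max_age_seconds=3600, max_size=1000).cache

@cached
def get_doujin(id):
    return id

print(get_doujin(42))  # 42
```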
I'm unsure if I set up my authorization correctly, since enma.get() keeps returning None:
from typing import cast
from enma import CloudFlareConfig, DefaultAvailableSources, Enma, NHentai, Sort
from rich.pretty import pprint

enma = Enma[DefaultAvailableSources]()
config = CloudFlareConfig(
    user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36",
    cf_clearance="6s ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ 5-1.0.1.1-■ ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ 6DNtA",
)
enma.source_manager.set_source(source_name="nhentai")
nh_source = cast(NHentai, enma.source_manager.source)
nh_source.set_config(config=config)
manga = enma.get(identifier="50000")
print(manga)
Am I supposed to include everything between cf_clearance= and csrftoken=? Do I include the semicolon (;) at the end of cf_clearance?
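For what it's worth, a cookie value is the text between the name= and the next semicolon; the semicolon is a separator and is not part of the value. A sketch of extracting it from a copied Cookie header with the standard library (the cookie values below are made up):

```python
from http.cookies import SimpleCookie

# Example Cookie header copied from the browser's dev tools (values are made up).
raw = "csrftoken=abc123; cf_clearance=6sXXXX5-1.0.1.1-YYYY6DNtA; sessionid=zzz"

cookie = SimpleCookie()
cookie.load(raw)

# The parsed value has no 'cf_clearance=' prefix and no trailing ';'.
cf_clearance = cookie["cf_clearance"].value
print(cf_clearance)  # 6sXXXX5-1.0.1.1-YYYY6DNtA
```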
For further support, I have included a censored image of what I see in my browser:
import requests
r = requests.get("https://translate.google.com/translate?sl=vi&tl=en&hl=vi&u={url}&client=webapp")
print(r.status_code)
print(r.content)
The Manga object doesn't contain character or parody variables and NHentai.get() doesn't parse them either. Can these be updated to fetch them?
First, thank you for all your hard work on this; it can't be easy maintaining a repo like this. Second, I know issues like this have already been posted, but I think I have some additional issues/information to add [identifying info removed].
Some quick backstory: I've been scraping nhentai for years using a combination of urllib, requests, and BeautifulSoup. My Python script would gather new links and write them to a txt file, which I would then load once a month into HDoujinDownloader to download and sort. 14 days ago, my script stopped working. After investigating, I found that Cloudflare had been preventing my script from successfully gathering links. While trying to find workaround solutions, I was led here. I tried the code found in Issue #39:
from NHentai import NHentai, CloudFlareSettings
nhentai_async = NHentai(request_settings=CloudFlareSettings(csrftoken="hne[...]", cf_clearance="1qDg[...]"))
print(nhentai_async.get_page(page=1))
And was met with this error:
File "C:[...]\AppData\Local\Programs\Python\Python310\lib\site-packages\NHentai\sync\infra\adapters\request\http\implementations\sync.py", line 37, in handle_error
raise Exception(f'An unexpected error occoured while making the request to the website! status code: {request.status}')
AttributeError: 'Response' object has no attribute 'status'
My suspicion is that more advanced Cloudflare settings have been applied to nhentai and that those settings are preventing successful runs. Is that something you can confirm? Any advice on which direction to head would be most appreciated.
Thanks again for all your hard work.
As mentioned in #55, the first example in the README that uses DefaultAvailableSources does not work with the current iteration of Enma. Please consider updating the example.
When I write a simple line of code, import NHentai, the program crashes due to incorrect import statements in the file NHentai/__init__.py.
If you replace these lines of code from NHentai/__init__.py:
from entities.doujin import Doujin, DoujinThumbnail
from entities.page import HomePage, SearchPage, TagListPage, GroupListPage, CharacterListPage, ArtistListPage
from entities.links import CharacterLink
with this:
from .entities.doujin import Doujin, DoujinThumbnail
from .entities.page import HomePage, SearchPage, TagListPage, GroupListPage, CharacterListPage, ArtistListPage
from .entities.links import CharacterLink
everything will work.
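For context on why the dotted form fixes it: inside an installed package, from entities.doujin import ... is an absolute import resolved against sys.path, so Python looks for a top-level entities module that doesn't exist; from .entities.doujin import ... is resolved relative to the package itself. A minimal, self-contained reproduction (the package and module names here are made up):

```python
import os
import sys
import tempfile

# Build a tiny installed-style package "demo_pkg" with a submodule entities.mod.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "demo_pkg", "entities"))
open(os.path.join(root, "demo_pkg", "entities", "__init__.py"), "w").close()
with open(os.path.join(root, "demo_pkg", "entities", "mod.py"), "w") as f:
    f.write("VALUE = 42\n")
with open(os.path.join(root, "demo_pkg", "__init__.py"), "w") as f:
    # Relative import: resolved against the package itself, so it works no
    # matter where the package is installed. The absolute form
    # "from entities.mod import VALUE" would raise ModuleNotFoundError here,
    # because "entities" is looked up on sys.path, not inside demo_pkg.
    f.write("from .entities.mod import VALUE\n")

sys.path.insert(0, root)
import demo_pkg
print(demo_pkg.VALUE)  # 42
```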
I'd appreciate it if the readme.md could provide more usage examples and their differences. Just single-line examples would be great, and maybe even example responses so we'd know what to expect.
The doujin info should contain the upload timestamp.
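If it helps, the public gallery API appears to expose an upload_date field as Unix epoch seconds (an assumption based on the public API, not confirmed from this repo), which could back such a timestamp:

```python
from datetime import datetime, timezone

# Hypothetical helper: convert the API's upload_date epoch seconds to an
# aware UTC datetime (field name and format are assumptions).
def parse_upload_date(epoch_seconds: int) -> datetime:
    return datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)

print(parse_upload_date(1_600_000_000))  # 2020-09-13 12:26:40+00:00
```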
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/discord/client.py", line 343, in _run_event
await coro(*args, **kwargs)
File "/home/bfbf510a/bots/pixivbot/main.py", line 325, in on_message
await get_ehentai_with_id(id)
File "/home/bfbf510a/bots/pixivbot/main.py", line 213, in get_ehentai_with_id
doujin = nhentai.get_doujin(id = str(id))
File "/usr/local/lib/python3.9/dist-packages/NHentai/utils/cache.py", line 22, in wrapper
new_execution = function(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/NHentai/nhentai.py", line 41, in get_doujin
SOUP = self._fetch(urljoin(self._API_URL, f'gallery/{id}'), is_json=True)
File "/usr/local/lib/python3.9/dist-packages/NHentai/base_wrapper.py", line 32, in _fetch
return PAGE_REQUEST.json() if is_json else BeautifulSoup("", 'html.parser')
File "/usr/lib/python3/dist-packages/requests/models.py", line 897, in json
return complexjson.loads(self.text, **kwargs)
File "/usr/lib/python3/dist-packages/simplejson/__init__.py", line 518, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
File "/usr/lib/python3/dist-packages/simplejson/scanner.py", line 79, in scan_once
return _scan_once(string, idx)
File "/usr/lib/python3/dist-packages/simplejson/scanner.py", line 70, in _scan_once
raise JSONDecodeError(errmsg, string, idx)
simplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Python 3.9.5
File "/home/cakestwix/.local/lib/python3.10/site-packages/NHentai/sync/infra/adapters/request/http/implementations/sync.py", line 37, in handle_error
raise Exception(f'An unexpected error occoured while making the request to the website! status code: {request.status}')
AttributeError: 'Response' object has no attribute 'status'
from NHentai import NHentai, CloudFlareSettings
nhentai_async = NHentai(request_settings=CloudFlareSettings(csrftoken="SECRET", cf_clearance="SECRET"))
print(nhentai_async.get_page(page=1))
ImportError: cannot import name 'DefaultAvailableSources' from 'enma' (C:\Users\USER\AppData\Local\Programs\Python\Python310\lib\site-packages\enma\__init__.py)
I'm on Python 3.10.6 and Windows.
As the title suggests, I'm having an issue where the search method always returns the default page of search results (page 1). I'm attempting to use the search method to jump to the 3rd page of the search results, which is what I assume the page argument is for...
Enma version: 2.2.1
Python version: 3.12
OS: Windows 10 (Though I'm using bash for windows)
Source: This is happening while using the nHentai source. I haven't tested with any other sources.
My code:
from typing import cast
from enma import Enma, DefaultAvailableSources, CloudFlareConfig, NHentai, Sort

enma = Enma[DefaultAvailableSources]()
config = CloudFlareConfig(
    user_agent='<my-user-agent>',
    cf_clearance='<my-cf-clearance>'
)
enma.source_manager.set_source(source_name='nhentai')
nh_source = cast(NHentai, enma.source_manager.source)
nh_source.set_config(config=config)
foo = enma.search('tanline', 3)
print(foo)
print('Output: ' + str(foo.page))
The printed object (sorry, I didn't take the time to make it prettier; it's late):
Second print result:
Output: 1
This will always be the case. No matter what number I supply to the page argument, it is always returned as 1.
That's the only issue, everything else works perfectly fine. I'm able to search, view and download everything. I just need to get this search pagination implemented and I'll be golden ponyboy!
Please let me know if I'm misunderstanding what the page parameter is supposed to be used for, as this could very well be a layer-8 issue. However, if that's the case and this always returns 1, then I don't really see why it's a required value.
Thank you so much for this library and all the hard work put into it!
Are there any plans to support e-hentai sites?
This might not be the most useful API feature to implement, but it would add another way for the API to interact with Nhentai.
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/discord/client.py", line 382, in _run_event
await coro(*args, **kwargs)
File "/usr/src/cables/cabling_bot/events/message_media_expand.py", line 356, in handle_message
result = await self.handler_mapping[fqdn](msg, url)
File "/usr/src/app/events/message_media_expand.py", line 189, in handle_nhentai_gallery
response: Doujin = await self.nhentai.get_doujin(id=gallery_id)
File "/usr/local/lib/python3.10/site-packages/NHentai/utils/cache.py", line 38, in wrapper
new_execution = await function(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/NHentai/nhentai_async.py", line 41, in get_doujin
SOUP = await self._async_fetch(urljoin(self._API_URL, f'gallery/{id}'), is_json=True)
File "/usr/local/lib/python3.10/site-packages/NHentai/base_wrapper.py", line 46, in _async_fetch
return await response.json() if is_json else BeautifulSoup("", 'html.parser')
File "/usr/local/lib/python3.10/site-packages/aiohttp/client_reqrep.py", line 1103, in json
raise ContentTypeError(
aiohttp.client_exceptions.ContentTypeError: 0, message='Attempt to decode JSON with unexpected mimetype: text/html; charset=utf-8', url=URL('https://nhentai.net/api/gallery/404970')
Reproducing the code path in python console, I see that the request is returning a Cloudflare protection page and HTTP 503.
>>> import json
>>> import requests
>>> r = requests.get('https://nhentai.net/api/gallery/404970')
>>> r.json()
Traceback (most recent call last):
File "/home/erin/.cache/pypoetry/virtualenvs/app-417_n4la-py3.10/lib/python3.10/site-packages/requests/models.py", line 910, in json
return complexjson.loads(self.text, **kwargs)
File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/erin/.cache/pypoetry/virtualenvs/app-417_n4la-py3.10/lib/python3.10/site-packages/requests/models.py", line 917, in json
raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: [Errno Expecting value] <!DOCTYPE HTML>
...
<h1><span data-translate="checking_browser">Checking your browser before accessing</span> nhentai.net.</h1>
...
>>> r.status_code
503
I've reached out to nHentai directly, as I don't see any instructions on their site for hitting the API in an authenticated way to prevent this from happening.
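A defensive pattern for this failure mode: check the status and content type before decoding, so a Cloudflare challenge page raises a descriptive error instead of a bare JSONDecodeError. This is a sketch of the idea, not the library's actual code:

```python
import json

def decode_json_or_explain(status_code: int, content_type: str, body: str) -> dict:
    """Decode a JSON body, or raise a clear error for challenge/error pages."""
    if status_code != 200 or "application/json" not in content_type:
        raise RuntimeError(
            f"Expected JSON, got HTTP {status_code} with content type {content_type!r}; "
            "this is likely a Cloudflare challenge page."
        )
    return json.loads(body)

# Simulated Cloudflare challenge response, as seen in the console session above:
try:
    decode_json_or_explain(503, "text/html; charset=utf-8", "<!DOCTYPE HTML>...")
except RuntimeError as error:
    print(error)
```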
Hello,
I had some issues doing a multi-tag search with this module as in
but when I plugged that into your search function, the params would get changed to:
Locally, I added this line to your search function in nhentai.py and it solves the problem:
params = {'query': str(query), 'page': page, 'sort': sort} if sort is not None else {'query': query, 'page': page}
++ params = urllib.parse.urlencode(params, safe=':+ ')  # added line; requires `import urllib.parse`
SOUP = self._fetch(urljoin(self._API_URL, 'galleries/search'), params=params, is_json=True)
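To illustrate what the added line does (the query string below is just an example): by default, urlencode percent-escapes ':' and '+', which mangles nhentai's multi-tag query syntax, while passing safe=':+ ' leaves those characters intact.

```python
import urllib.parse

# Example multi-tag query in nhentai's search syntax.
params = {'query': 'tag:"glasses"+tag:"schoolgirl"', 'page': 1}

default = urllib.parse.urlencode(params)              # ':' and '+' get escaped
patched = urllib.parse.urlencode(params, safe=':+ ')  # ':' and '+' preserved

print(default)  # query=tag%3A%22glasses%22%2Btag%3A%22schoolgirl%22&page=1
print(patched)  # query=tag:%22glasses%22+tag:%22schoolgirl%22&page=1
```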
Thanks,
calibear20
PS: I suck at GitHub.
Just as the title says, I want a way to get the characters that appear in a doujin, as well as its parody tag. Just wanted a little more information about the doujin. :)
The genres (which I presume are representative of tags) for NHentai are always returned as an empty list. This makes it impossible to filter results by tags.
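Both the character/parody request and the empty genres likely point at the same place: the gallery API appears to return every label in a single tags array distinguished by a type field such as 'tag', 'character', 'parody', or 'artist' (an assumption about the public API, not confirmed from this repo). A sketch of grouping them:

```python
from collections import defaultdict

# Hypothetical: group the API's flat 'tags' array by its 'type' field.
def group_tags(tags: list[dict]) -> dict[str, list[str]]:
    grouped: defaultdict[str, list[str]] = defaultdict(list)
    for tag in tags:
        grouped[tag["type"]].append(tag["name"])
    return dict(grouped)

# Example payload shaped like the assumed API response:
sample = [
    {"type": "character", "name": "example character"},
    {"type": "parody", "name": "example parody"},
    {"type": "tag", "name": "full color"},
]
print(group_tags(sample))
```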
Traceback (most recent call last):
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 665, in urlopen
httplib_response = self._make_request(
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 376, in _make_request
self._validate_conn(conn)
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 994, in _validate_conn
conn.connect()
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connection.py", line 386, in connect
self.sock = ssl_wrap_socket(
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\util\ssl_.py", line 370, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\ssl.py", line 1040, in _create
self.do_handshake()
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
OSError: [Errno 0] Error
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\adapters.py", line 439, in send
resp = conn.urlopen(
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 719, in urlopen
retries = retries.increment(
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\util\retry.py", line 400, in increment
raise six.reraise(type(error), error, _stacktrace)
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\packages\six.py", line 734, in reraise
raise value.with_traceback(tb)
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 665, in urlopen
httplib_response = self._make_request(
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 376, in _make_request
self._validate_conn(conn)
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 994, in _validate_conn
conn.connect()
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connection.py", line 386, in connect
self.sock = ssl_wrap_socket(
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\util\ssl_.py", line 370, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\ssl.py", line 1040, in _create
self.do_handshake()
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
urllib3.exceptions.ProtocolError: ('Connection aborted.', OSError(0, 'Error'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:/Users/medjed/Documents/github/kiyobot/db.py", line 4, in <module>
d = nh.get_random()
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\NHentai\__init__.py", line 138, in get_random
doujin_page = requests.get(f'{self._BASE_URL}/random/')
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', OSError(0, 'Error'))
I'm using Windows, python 3.8.2
Searching by author, or viewing all doujinshi by an author, would be a useful function for me.
We can observe that the artist page and the search page are quite alike (maybe the same) on nhentai.
So, I added a function to nhentai.py:
# added by elnino
def search_by_author(self,
                     author: str,
                     page: int) -> SearchResult:
    request_response = self.__make_request(url=f'https://nhentai.net/artist/{author}',
                                           params={'page': page})
    query = 'artist/' + author
    # the code below is the same as in the search() function
    soup = BeautifulSoup(request_response.text, 'html.parser')
    search_results_container = soup.find('div', {'class': 'container'})
    pagination_container = soup.find('section', {'class': 'pagination'})
    last_page_a_tag = pagination_container.find('a', {'class': 'last'}) if pagination_container else None  # type: ignore
    total_pages = int(last_page_a_tag['href'].split('=')[-1]) if last_page_a_tag else 1  # type: ignore
    if not search_results_container:
        return SearchResult(query=query,
                            total_pages=total_pages,
                            page=page,
                            total_results=0,
                            results=[])
    search_results = search_results_container.find_all('div', {'class': 'gallery'})  # type: ignore
    if not search_results:
        return SearchResult(query=query,
                            total_pages=total_pages,
                            page=page,
                            total_results=0,
                            results=[])
    a_tags_with_doujin_id = [gallery.find('a', {'class': 'cover'}) for gallery in search_results]
    thumbs = []
    for a_tag in a_tags_with_doujin_id:
        if a_tag is None:
            continue
        doujin_id = a_tag['href'].split('/')[-2]
        if doujin_id == '':
            continue
        result_cover = a_tag.find('img', {'class': 'lazyload'})
        cover_uri = None
        width = None
        height = None
        if result_cover is not None:
            cover_uri = result_cover['data-src']
            width = result_cover['width']
            height = result_cover['height']
        result_caption = a_tag.find('div', {'class': 'caption'})
        caption = None
        if result_caption is not None:
            caption = result_caption.text
        thumbs.append(Thumb(id=doujin_id,
                            cover=Image(uri=cover_uri or '',
                                        mime=MIME.J,
                                        width=width or 0,
                                        height=height or 0),
                            title=caption or ''))
    return SearchResult(query=query,
                        total_pages=total_pages,
                        page=page,
                        total_results=25 * total_pages if pagination_container else len(thumbs),
                        results=thumbs)
and tested with the following code:
config = CloudFlareConfig(
    user_agent='',
    cf_clearance=''
)
nHentai = NHentai(config)
resultList = nHentai.search_by_author('yd', 1).results
for result in resultList:
    print(result)
and it works.
Similarly, the tag page can also be searched this way. So, I think the process of parsing the HTML can be abstracted into a common function, which can be called by search(), search_by_author() and search_by_tag().
I am not good at Python, and the above function is enough for my use, so I'm filing this as an issue instead of a PR.
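The suggested refactor could look roughly like this: one shared parsing step with thin per-endpoint wrappers. This is a sketch with made-up names, a stubbed parser, and an injected fetcher so it runs offline; the real version would reuse the BeautifulSoup logic shown above.

```python
from typing import Callable

# Stand-in for the shared BeautifulSoup logic from search(); here it just
# counts gallery markers so the sketch is testable without network access.
def _parse_gallery_listing(html: str, query: str, page: int) -> dict:
    return {"query": query, "page": page, "results": html.count("gallery")}

class NHentaiSketch:
    def __init__(self, fetch: Callable[[str], str]):
        self._fetch = fetch  # injected fetcher (hypothetical design choice)

    def search(self, query: str, page: int) -> dict:
        html = self._fetch(f'/search/?q={query}&page={page}')
        return _parse_gallery_listing(html, query, page)

    def search_by_author(self, author: str, page: int) -> dict:
        html = self._fetch(f'/artist/{author}/?page={page}')
        return _parse_gallery_listing(html, f'artist/{author}', page)

    def search_by_tag(self, tag: str, page: int) -> dict:
        html = self._fetch(f'/tag/{tag}/?page={page}')
        return _parse_gallery_listing(html, f'tag/{tag}', page)

# Offline usage with a fake fetcher returning three gallery divs:
fake = NHentaiSketch(fetch=lambda url: "<div class='gallery'></div>" * 3)
print(fake.search_by_author('yd', 1))
```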
I might be missing something and there's already a way to do this, but I would like a way to get the latest doujin (or its ID; that's the part I'm interested in).
Hello
Recently, I found a bug in the get_doujin function: it only returns 0 favorites for popular doujins. I do think it may be a problem with the API itself, but I've used some other API wrappers that still give the right number of favorites.
Thanks for your concern!
Currently you can only get the main title with the .title attribute; there is no way to get the secondary title. It would be a great addition if you added a way to get it.
Hi, I just want to check whether the function for nhentai is working correctly at the moment. I have been getting a 404 error since earlier this week, as shown in the attachment below:
I have also manually visited the nhentai API that Enma uses and am getting a 404 error as well.
I wonder if that has something to do with my settings or any changes on nhentai's API end?
doujin = enma.random()
^^^^^^^^^^^^^
File "/home/reaper/.local/lib/python3.11/site-packages/enma/infra/entrypoints/lib/__init__.py", line 80, in wrapper
return callable(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/reaper/.local/lib/python3.11/site-packages/enma/infra/entrypoints/lib/__init__.py", line 145, in random
response = self.__random_use_case.execute() # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/reaper/.local/lib/python3.11/site-packages/enma/application/use_cases/get_random.py", line 19, in execute
result = self.__manga_repository.random()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/reaper/.local/lib/python3.11/site-packages/enma/infra/adapters/repositories/nhentai.py", line 312, in random
id = cast(Tag, soup.find('h3', id='gallery_id')).text.replace('#', '')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'text'
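The last frame shows why this crashes: soup.find('h3', id='gallery_id') returned None (plausibly because the response was a Cloudflare or error page rather than the gallery), and typing.cast performs no runtime check, so the None flows straight into .text. A minimal illustration of that cast behavior:

```python
from typing import cast

# typing.cast is a no-op at runtime: it never validates its argument, so
# casting None to str passes silently and the attribute access fails later.
value = cast(str, None)

try:
    value.upper()
except AttributeError as error:
    print(error)  # 'NoneType' object has no attribute 'upper'
```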
Hi, I hope you are doing well in this pandemic situation.
I tried using your nhentai API and got the error message below when trying to import the package with from NHentai import NHentai.
This error persists when I'm trying to install it in Google Colab (Python 3.6.9) or my local virtual environment (Python 3.8.0).
Thanks in advance.
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-3-1dc4a5a8e803> in <module>()
----> 1 from NHentai import NHentai
/usr/local/lib/python3.6/dist-packages/NHentai/__init__.py in <module>()
6 import requests
7
----> 8 from entities.doujin import Doujin, DoujinThumbnail
9 from entities.page import HomePage, SearchPage, TagListPage, GroupListPage, CharacterListPage, ArtistListPage
10 from entities.links import CharacterLink
ModuleNotFoundError: No module named 'entities'
I don't know if my phone is just bad or what, but most of the time the images don't load, or they load and then disappear after a while; it also happens that everything loads normally and then, when you scroll down, the images disappear.
Today, when I opened the app to test it, everything froze. I don't know whether the phone ran out of RAM or what, but that's it.
The phone is a Samsung J320M (2016), quite old, with an outdated version of Android.
Manga class defaulting to current datetime
enma.source_manager.set_source(source_name="nhentai")
nh_source = cast(NHentai, enma.source_manager.source)
nh_source.set_config(config=config)
doujin = enma.get(171558)
print(doujin.updated_at)
from NHentai import NHentai, CloudFlareSettings
nhentai_async = NHentai(request_settings=CloudFlareSettings(csrftoken="hne[...]", cf_clearance="1qDg[...]"))
print(nhentai_async.get_page(page=1))
I'm being locked out of the API even with the right Cloudflare cookies. Error message: "NHentai.core.handler.ApiError: You are not allowed to access this resource"
It only returns URLs like 'https://i.nhentai.net/galleries/{id}/{page}.jpg', but the file extension can also be .png, making the list incorrect in that case.
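For context, the gallery API appears to report each page's image type as a one-letter code ('j', 'p', 'g' for jpg/png/gif — an assumption about the public API), so the correct extension could be derived per page rather than assumed to be .jpg. A sketch:

```python
# Hypothetical mapping from the API's per-page type codes to file extensions.
EXTENSIONS = {"j": "jpg", "p": "png", "g": "gif"}

def page_url(gallery_id: str, page_number: int, type_code: str) -> str:
    # Fall back to jpg for unknown codes rather than failing outright.
    ext = EXTENSIONS.get(type_code, "jpg")
    return f"https://i.nhentai.net/galleries/{gallery_id}/{page_number}.{ext}"

print(page_url("177013", 1, "p"))  # https://i.nhentai.net/galleries/177013/1.png
```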
I don't know what happened.
Logs:
2022-05-06T00:36:50.902540+00:00 app[worker.1]: File "/app/helper_funcs/nhentai.py", line 1, in
2022-05-06T00:36:50.902540+00:00 app[worker.1]: from NHentai import NHentaiAsync
2022-05-06T00:36:50.902541+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.8/site-packages/NHentai/__init__.py", line 4, in
2022-05-06T00:36:50.902541+00:00 app[worker.1]: from . import nhentai
2022-05-06T00:36:50.902541+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.8/site-packages/NHentai/nhentai.py", line 19
2022-05-06T00:36:50.902542+00:00 app[worker.1]: @Cache(max_age_seconds=3600, max_size=1000, cache_key_position=1, cache_key_name='id').cache
2022-05-06T00:36:50.902542+00:00 app[worker.1]: ^
2022-05-06T00:36:50.902542+00:00 app[worker.1]: SyntaxError: invalid syntax