
enma's Introduction

Aehooo!
Welcome to my GitHub profile!

I currently work as a Software Engineer on the Google Cloud Platform team at Hvar Consulting.

In my free time I like to draw, play games and, above all, religiously build useless code and/or systems related in some way to anime/manga, from real-time applications to scrapers. hahahaha

How to find me?

Honestly, it's pretty easy to find me around the internet, whether on art or coding networks; on any of them you can easily find me by searching for AlexandreSenpai. Still, I'll list some direct links for you.

Random facts about me

this section is only here because GitHub suggested it

  • 🔭 Currently working on: code to automatically translate and typeset manga.
  • 🌱 Currently learning: microservices architecture.
  • 👯 Looking to collaborate on: any project, seriously; if there's room to contribute, I'm in.
  • 💬 Ask me about: Python. Sometimes, even if I don't know the answer, it's something new for me to learn.
  • 📺 Favorite game genre: rhythm. OSU IS SO GOOD.
  • 😄 Pronouns: he/him
  • ⚡ Fun fact: I'm terrible at games of chance, even though I enjoy them, and thank God slot machines are banned in Brazil.

enma's People

Contributors

alexandresenpai, el-nino2020, ilexisthemadcat, kiranajij, murilo-bracero, nadiefiind, nokutokamomiji, sdvcrx


enma's Issues

Difference Between `get()` and `search()`?

I'd appreciate it if the readme.md could provide more usage examples and explain their differences. Even single-line examples would be great, and maybe example responses too, so we'd know what to expect.

Enma Import Error

ImportError: cannot import name 'DefaultAvailableSources' from 'enma' (C:\Users\USER\AppData\Local\Programs\Python\Python310\lib\site-packages\enma\__init__.py)

I'm on Python 3.10.6 and Windows.

Upload date

The doujin info should contain the upload timestamp.

AttributeError: 'Response' object has no attribute 'status'

First, thank you for all your hard work on this; it can't be easy maintaining a repo like this. Second, I know issues like this have already been posted, but I think I have some additional issues/information to add [identifying info removed].

Some quick backstory: I've been scraping nhentai for years using a combination of urllib, requests, and BeautifulSoup. My Python script would gather new links and write them to a txt file, which I would then load once a month into HDoujinDownloader to download and sort. 14 days ago, my script stopped working. After investigating, I found that Cloudflare had been preventing my script from successfully gathering links. While looking for workarounds, I was led here. I tried the code found in Issue #39:

from NHentai import NHentai, CloudFlareSettings
nhentai_async = NHentai(request_settings=CloudFlareSettings(csrftoken="hne[...]", cf_clearance="1qDg[...]"))
print(nhentai_async.get_page(page=1))

And was met with this error:

File "C:[...]\AppData\Local\Programs\Python\Python310\lib\site-packages\NHentai\sync\infra\adapters\request\http\implementations\sync.py", line 37, in handle_error
raise Exception(f'An unexpected error occoured while making the request to the website! status code: {request.status}')
AttributeError: 'Response' object has no attribute 'status'

My suspicion is that more advanced Cloudflare settings have been applied to nhentai and that those settings are preventing successful runs. Is that something you can confirm? Any advice on which direction to head would be most appreciated.

Thanks again for all your hard work.

JSON Decode Error

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/discord/client.py", line 343, in _run_event
    await coro(*args, **kwargs)
  File "/home/bfbf510a/bots/pixivbot/main.py", line 325, in on_message
    await get_ehentai_with_id(id)
  File "/home/bfbf510a/bots/pixivbot/main.py", line 213, in get_ehentai_with_id
    doujin = nhentai.get_doujin(id = str(id))
  File "/usr/local/lib/python3.9/dist-packages/NHentai/utils/cache.py", line 22, in wrapper
    new_execution = function(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/NHentai/nhentai.py", line 41, in get_doujin
    SOUP = self._fetch(urljoin(self._API_URL, f'gallery/{id}'), is_json=True)
  File "/usr/local/lib/python3.9/dist-packages/NHentai/base_wrapper.py", line 32, in _fetch
    return PAGE_REQUEST.json() if is_json else BeautifulSoup("", 'html.parser')
  File "/usr/lib/python3/dist-packages/requests/models.py", line 897, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/lib/python3/dist-packages/simplejson/__init__.py", line 518, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 370, in decode
    obj, end = self.raw_decode(s)
  File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 400, in raw_decode
    return self.scan_once(s, idx=_w(s, idx).end())
  File "/usr/lib/python3/dist-packages/simplejson/scanner.py", line 79, in scan_once
    return _scan_once(string, idx)
  File "/usr/lib/python3/dist-packages/simplejson/scanner.py", line 70, in _scan_once
    raise JSONDecodeError(errmsg, string, idx)
simplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Python 3.9.5

Cloudflare JS preventing API requests

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/discord/client.py", line 382, in _run_event
    await coro(*args, **kwargs)
  File "/usr/src/cables/cabling_bot/events/message_media_expand.py", line 356, in handle_message
    result = await self.handler_mapping[fqdn](msg, url)
  File "/usr/src/app/events/message_media_expand.py", line 189, in handle_nhentai_gallery
    response: Doujin = await self.nhentai.get_doujin(id=gallery_id)
  File "/usr/local/lib/python3.10/site-packages/NHentai/utils/cache.py", line 38, in wrapper
    new_execution = await function(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/NHentai/nhentai_async.py", line 41, in get_doujin
    SOUP = await self._async_fetch(urljoin(self._API_URL, f'gallery/{id}'), is_json=True)
  File "/usr/local/lib/python3.10/site-packages/NHentai/base_wrapper.py", line 46, in _async_fetch
    return await response.json() if is_json else BeautifulSoup("", 'html.parser')
  File "/usr/local/lib/python3.10/site-packages/aiohttp/client_reqrep.py", line 1103, in json
    raise ContentTypeError(
aiohttp.client_exceptions.ContentTypeError: 0, message='Attempt to decode JSON with unexpected mimetype: text/html; charset=utf-8', url=URL('https://nhentai.net/api/gallery/404970')

Reproducing the code path in python console, I see that the request is returning a Cloudflare protection page and HTTP 503.

>>> import json
>>> import requests
>>> r = requests.get('https://nhentai.net/api/gallery/404970')
>>> r.json()
Traceback (most recent call last):
  File "/home/erin/.cache/pypoetry/virtualenvs/app-417_n4la-py3.10/lib/python3.10/site-packages/requests/models.py", line 910, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/erin/.cache/pypoetry/virtualenvs/app-417_n4la-py3.10/lib/python3.10/site-packages/requests/models.py", line 917, in json
    raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: [Errno Expecting value] <!DOCTYPE HTML>
...
    <h1><span data-translate="checking_browser">Checking your browser before accessing</span> nhentai.net.</h1>
...
>>> r.status_code
503

I've reached out to nHentai directly, as I don't see any instructions on their site for hitting the API in an authenticated way to prevent this from happening.
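A pattern that surfaces in several of these reports is a raw `.json()` call blowing up when Cloudflare serves an HTML challenge page instead of the API response. A small defensive wrapper (hypothetical; `safe_json` and `NotJSONError` are illustrative names, not part of the library) makes the failure mode explicit instead of raising a confusing `JSONDecodeError` deep inside the call stack:

```python
import json


class NotJSONError(Exception):
    """Raised when a response that should be JSON is something else."""


def safe_json(response):
    """Decode a response body as JSON, failing loudly on challenge pages.

    Works with any response-like object exposing .status_code, .headers
    and .text (e.g. a requests.Response).
    """
    content_type = response.headers.get("Content-Type", "")
    if response.status_code != 200 or "application/json" not in content_type:
        # A Cloudflare challenge typically arrives as HTTP 503 with
        # Content-Type text/html, so both checks catch it.
        raise NotJSONError(
            f"Expected JSON, got HTTP {response.status_code} ({content_type!r})"
        )
    return json.loads(response.text)
```

With this in place, the Cloudflare 503 in the traceback above would raise `NotJSONError` with the status code and mimetype in the message, which is much easier to act on.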

nhentai's request ends up with 404 status code

Hi, I just want to check whether the nhentai function is working correctly at the moment. I have been getting a 404 error since earlier this week, as shown in the attachment.
I have also manually visited the nhentai API endpoint that enma uses and got a 404 error as well.
I wonder if this has something to do with my settings or with changes on nhentai's API end?

Suggestion: searching by author

Searching by author, or viewing all doujinshi by an author, is a useful function to me.

We can observe that the artist page and the search page are quite alike (maybe the same) on nhentai.

So, I added a function to nhentai.py:

# add by elnino
    def search_by_author(self,
                         author: str,
                         page: int,
                         ) -> SearchResult:
        request_response = self.__make_request(url=f'https://nhentai.net/artist/{author}',
                                               params={'page': page})
        query = 'artist/' + author
        # below code is the same as the search() function
        soup = BeautifulSoup(request_response.text, 'html.parser')

        search_results_container = soup.find('div', {'class': 'container'})
        pagination_container = soup.find('section', {'class': 'pagination'})

        last_page_a_tag = pagination_container.find('a',
                                                    {'class': 'last'}) if pagination_container else None  # type: ignore
        total_pages = int(last_page_a_tag['href'].split('=')[-1]) if last_page_a_tag else 1  # type: ignore

        if not search_results_container:
            return SearchResult(query=query,
                                total_pages=total_pages,
                                page=page,
                                total_results=0,
                                results=[])

        search_results = search_results_container.find_all('div', {'class': 'gallery'})  # type: ignore

        if not search_results:
            return SearchResult(query=query,
                                total_pages=total_pages,
                                page=page,
                                total_results=0,
                                results=[])

        a_tags_with_doujin_id = [gallery.find('a', {'class': 'cover'}) for gallery in search_results]

        thumbs = []

        for a_tag in a_tags_with_doujin_id:
            if a_tag is None: continue

            doujin_id = a_tag['href'].split('/')[-2]

            if doujin_id == '': continue

            result_cover = a_tag.find('img', {'class': 'lazyload'})
            cover_uri = None
            width = None
            height = None

            if result_cover is not None:
                cover_uri = result_cover['data-src']
                width = result_cover['width']
                height = result_cover['height']

            result_caption = a_tag.find('div', {'class': 'caption'})

            caption = None
            if result_caption is not None:
                caption = result_caption.text

            thumbs.append(Thumb(id=doujin_id,
                                cover=Image(uri=cover_uri or '',
                                            mime=MIME.J,
                                            width=width or 0,
                                            height=height or 0),
                                title=caption or ''))

        return SearchResult(query=query,
                            total_pages=total_pages,
                            page=page,
                            total_results=25 * total_pages if pagination_container else len(thumbs),
                            results=thumbs)

and tested with the following code:

config = CloudFlareConfig(
    user_agent='',
    cf_clearance=''
)

nHentai = NHentai(config)
resultList = nHentai.search_by_author('yd', 1).results
for result in resultList:
    print(result)

and it works.

Similarly, the tag page can also be searched this way.

So I think the HTML-parsing logic could be abstracted into a common function, called by search(), search_by_author() and search_by_tag().

I am not good at Python, and the above function is enough for my use, so I'm filing this as an issue instead of a PR.
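The suggested refactor can be sketched in miniature. The function below (a hypothetical helper, not the library's actual API; the name `parse_gallery_ids` is illustrative) isolates the one piece of parsing all three search variants share, following the same BeautifulSoup selectors used in the code above:

```python
from bs4 import BeautifulSoup


def parse_gallery_ids(html: str) -> list[str]:
    """Return the doujin ids found in an nhentai-style listing page.

    search(), search_by_author() and search_by_tag() could each fetch
    their own URL and delegate the response body to this one parser.
    """
    soup = BeautifulSoup(html, "html.parser")
    container = soup.find("div", {"class": "container"})
    if container is None:
        return []
    ids = []
    for gallery in container.find_all("div", {"class": "gallery"}):
        a_tag = gallery.find("a", {"class": "cover"})
        if a_tag is None:
            continue
        # hrefs look like /g/<id>/, so the id is the second-to-last segment
        doujin_id = a_tag["href"].split("/")[-2]
        if doujin_id:
            ids.append(doujin_id)
    return ids
```

The full version would also extract covers and captions and build `Thumb`/`SearchResult` objects, but the shape is the same: one parser, three thin callers.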

Connection aborted, OSError

Traceback (most recent call last):
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 665, in urlopen
    httplib_response = self._make_request(
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 376, in _make_request
    self._validate_conn(conn)
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 994, in _validate_conn
    conn.connect()
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connection.py", line 386, in connect
    self.sock = ssl_wrap_socket(
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\util\ssl_.py", line 370, in ssl_wrap_socket
    return context.wrap_socket(sock, server_hostname=server_hostname)
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\ssl.py", line 1040, in _create
    self.do_handshake()
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
OSError: [Errno 0] Error

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\adapters.py", line 439, in send
    resp = conn.urlopen(
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 719, in urlopen
    retries = retries.increment(
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\util\retry.py", line 400, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\packages\six.py", line 734, in reraise
    raise value.with_traceback(tb)
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 665, in urlopen
    httplib_response = self._make_request(
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 376, in _make_request
    self._validate_conn(conn)
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 994, in _validate_conn
    conn.connect()
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connection.py", line 386, in connect
    self.sock = ssl_wrap_socket(
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\util\ssl_.py", line 370, in ssl_wrap_socket
    return context.wrap_socket(sock, server_hostname=server_hostname)
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\ssl.py", line 1040, in _create
    self.do_handshake()
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
urllib3.exceptions.ProtocolError: ('Connection aborted.', OSError(0, 'Error'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:/Users/medjed/Documents/github/kiyobot/db.py", line 4, in <module>
    d = nh.get_random()
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\NHentai\__init__.py", line 138, in get_random
    doujin_page = requests.get(f'{self._BASE_URL}/random/')
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\api.py", line 76, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\sessions.py", line 530, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\sessions.py", line 643, in send
    r = adapter.send(request, **kwargs)
  File "C:\Users\medjed\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\adapters.py", line 498, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', OSError(0, 'Error'))

I'm on Windows, Python 3.8.2.

ModuleNotFoundError: No module named 'entities'

Hi, I hope you are doing well in this pandemic situation.
I tried using your nhentai API and got this error message when importing the package with from NHentai import NHentai.
The error persists whether I install it in Google Colab (Python 3.6.9) or in my local virtual environment (Python 3.8.0).
Thanks in advance.

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-3-1dc4a5a8e803> in <module>()
----> 1 from NHentai import NHentai

/usr/local/lib/python3.6/dist-packages/NHentai/__init__.py in <module>()
      6 import requests
      7 
----> 8 from entities.doujin import Doujin, DoujinThumbnail
      9 from entities.page import HomePage, SearchPage, TagListPage, GroupListPage, CharacterListPage, ArtistListPage
     10 from entities.links import CharacterLink

ModuleNotFoundError: No module named 'entities'

Crashes and bugs

I don't know if my phone is just bad or what, but most of the time the images don't load, or they load and then disappear after a while; sometimes everything loads normally and the images vanish when you scroll down.
Today, when I opened the app to test it, everything froze. I don't know if the phone ran out of RAM or what, but that's it.
The phone is a Samsung J320M (2016), quite old, with an outdated Android version.

Problem with cookies. Status code 503

File "/home/cakestwix/.local/lib/python3.10/site-packages/NHentai/sync/infra/adapters/request/http/implementations/sync.py", line 37, in handle_error
    raise Exception(f'An unexpected error occoured while making the request to the website! status code: {request.status}')
AttributeError: 'Response' object has no attribute 'status'

The code that triggers it:

from NHentai import NHentai, CloudFlareSettings
nhentai_async = NHentai(request_settings=CloudFlareSettings(csrftoken="SECRET", cf_clearance="SECRET"))
print(nhentai_async.get_page(page=1))

SyntaxError at NHentai/nhentai.py", line 19

IDK what happened.

Logs:
2022-05-06T00:36:50.902540+00:00 app[worker.1]: File "/app/helper_funcs/nhentai.py", line 1, in
2022-05-06T00:36:50.902540+00:00 app[worker.1]: from NHentai import NHentaiAsync
2022-05-06T00:36:50.902541+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.8/site-packages/NHentai/__init__.py", line 4, in
2022-05-06T00:36:50.902541+00:00 app[worker.1]: from . import nhentai
2022-05-06T00:36:50.902541+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.8/site-packages/NHentai/nhentai.py", line 19
2022-05-06T00:36:50.902542+00:00 app[worker.1]: @Cache(max_age_seconds=3600, max_size=1000, cache_key_position=1, cache_key_name='id').cache
2022-05-06T00:36:50.902542+00:00 app[worker.1]: ^
2022-05-06T00:36:50.902542+00:00 app[worker.1]: SyntaxError: invalid syntax

Issue with cf_clearance

I'm unsure if I set up my authorization correctly, since enma.get() keeps returning None:

from typing import cast

from enma import CloudFlareConfig, DefaultAvailableSources, Enma, NHentai, Sort
from rich.pretty import pprint

enma = Enma[DefaultAvailableSources]()
config = CloudFlareConfig(user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36",
                          cf_clearance="6s ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ 5-1.0.1.1-■ ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ 6DNtA")
enma.source_manager.set_source(source_name="nhentai")
nh_source = cast(NHentai, enma.source_manager.source)
nh_source.set_config(config=config)

manga = enma.get(identifier="50000")
print(manga)

Am I supposed to include everything between cf_clearance= and csrftoken= ?
Do I include the ; semicolon at the end of cf_clearance?

For further support, I have attached a censored screenshot of what I see in my browser.

You are not allowed to access this resource

from NHentai import NHentai, CloudFlareSettings
nhentai_async = NHentai(request_settings=CloudFlareSettings(csrftoken="hne[...]", cf_clearance="1qDg[...]"))
print(nhentai_async.get_page(page=1))

I'm being locked out of the API even with the right CF cookies. Error message: NHentai.core.handler.ApiError: You are not allowed to access this resource

Can't import this module

When I write a simple line of code, import NHentai, the program crashes due to incorrect import statements in NHentai/__init__.py.
If you replace these lines of NHentai/__init__.py:

from entities.doujin import Doujin, DoujinThumbnail
from entities.page import HomePage, SearchPage, TagListPage, GroupListPage, CharacterListPage, ArtistListPage
from entities.links import CharacterLink

with this:

from .entities.doujin import Doujin, DoujinThumbnail
from .entities.page import HomePage, SearchPage, TagListPage, GroupListPage, CharacterListPage, ArtistListPage
from .entities.links import CharacterLink

everything will work.

Search method always returns page 1 no matter what page is supplied.

As the title suggests, I'm having an issue where the search method always returns the default page of search results, 1. I'm attempting to use the search method to jump to the 3rd page of the search results, which is what I assume the page argument is for...

Enma version: 2.2.1
Python version: 3.12
OS: Windows 10 (Though I'm using bash for windows)
Source: This is happening while using the nHentai source. I haven't tested with any other sources.

My code:

from typing import cast
from enma import Enma, DefaultAvailableSources, CloudFlareConfig, NHentai, Sort

enma = Enma[DefaultAvailableSources]()
config = CloudFlareConfig(
    user_agent='<my-user-agent>',
    cf_clearance='<my-cf-clearance>'
)

enma.source_manager.set_source(source_name='nhentai')
nh_source = cast(NHentai, enma.source_manager.source)
nh_source.set_config(config=config)

foo = enma.search('tanline', 3)
print(foo)
print('Output: ' + str(foo.page))

The printed object is shown in the attached screenshot (sorry, I didn't take the time to make it prettier; it's late).

Second print result:
Output: 1

This will always be the case. No matter what number I supply to the page argument, it is always returned as 1.

That's the only issue, everything else works perfectly fine. I'm able to search, view and download everything. I just need to get this search pagination implemented and I'll be golden ponyboy!

Please let me know if I'm misunderstanding what the page parameter is supposed to be used for, as this could very well be a layer 8 issue. However, if that's the case and this always returns 1 then I don't really see why it's a required value.

Thank you so much for this library and all the hard work put into it!

Error while import

File "/usr/local/lib/python3.7/dist-packages/NHentai/nhentai.py", line 19
@Cache(max_age_seconds=3600, max_size=1000, cache_key_position=1, cache_key_name='id').cache
^
SyntaxError: invalid syntax

Tags feature is not working

The genres field (which I presume represents tags) for NHentai always returns an empty list. This makes it impossible to filter results by tags.

URL Params for Search get encoded

Hello,

I had some issues doing a multi-tag search with this module, as in:

  • Tag:Boner Tag:Big Language:English

but when I plugged that into your search function, the params would get changed to:

  • Tag%3ABoner%20Tag%3ABig%20Language%3AEnglish

and it would break the query.

Locally, I added one line to your search function in nhentai.py and it solves the problem (the ++ line is the addition):

    params = {'query': str(query), 'page': page, 'sort': sort} if sort is not None else {'query': query, 'page': page}
++  params = urllib.parse.urlencode(params, safe=':+ ')
    SOUP = self._fetch(urljoin(self._API_URL, 'galleries/search'), params=params, is_json=True)

thanks,

calibear20

PS: I suck at GitHub.

https://stackoverflow.com/questions/23496750/how-to-prevent-python-requests-from-percent-encoding-my-urls/23497912
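The encoding behavior described above can be reproduced with the standard library alone: `urllib.parse.urlencode` percent-encodes `:` by default, and passing a `safe` string keeps those characters literal (spaces still become `+`, which servers decode back to spaces):

```python
from urllib.parse import urlencode

# The multi-tag query from the report above.
params = {"query": "Tag:Boner Tag:Big Language:English", "page": 1}

# Default: ':' is percent-encoded as %3A, which reportedly breaks the search.
default_encoded = urlencode(params)

# With safe=':' the colons survive intact; spaces become '+'.
safe_encoded = urlencode(params, safe=":")

print(default_encoded)  # query=Tag%3ABoner+Tag%3ABig+Language%3AEnglish&page=1
print(safe_encoded)     # query=Tag:Boner+Tag:Big+Language:English&page=1
```

This also explains why the fix has to build the query string itself: when you hand `requests` a dict, it applies its own encoding, but a pre-encoded string is passed through as-is.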

AttributeError: 'NoneType' object has no attribute 'text'

    doujin = enma.random()
             ^^^^^^^^^^^^^
  File "/home/reaper/.local/lib/python3.11/site-packages/enma/infra/entrypoints/lib/__init__.py", line 80, in wrapper
    return callable(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/reaper/.local/lib/python3.11/site-packages/enma/infra/entrypoints/lib/__init__.py", line 145, in random
    response = self.__random_use_case.execute() # type: ignore
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/reaper/.local/lib/python3.11/site-packages/enma/application/use_cases/get_random.py", line 19, in execute
    result = self.__manga_repository.random()
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/reaper/.local/lib/python3.11/site-packages/enma/infra/adapters/repositories/nhentai.py", line 312, in random
    id = cast(Tag, soup.find('h3', id='gallery_id')).text.replace('#', '')
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'text'
