Comments (6)

icaca commented on June 8, 2024

After several days of experiments, my guess is that Playwright cannot pass the human verification challenge. Thank you very much for your help.

icaca commented on June 8, 2024

After some debugging, I found that Scrapy did not send an Accept header, which caused the body of the HTTP 202 response to be empty. After adding the header, the JavaScript is returned correctly, but Playwright does not wait for the page to complete the verification.
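
For reference, a minimal sketch of those two points, assuming the scrapy-playwright handler is already enabled in settings.py; the spider name, selector, and timeout used for the wait are assumptions, not verified against this site:

import scrapy
from scrapy_playwright.page import PageMethod


class WaitSketchSpider(scrapy.Spider):  # hypothetical spider, for illustration only
    name = "wait_sketch"

    def start_requests(self):
        yield scrapy.Request(
            "https://cn.classic.warcraftlogs.com/character/id/67849152?mode=detailed&zone=1020",
            headers={
                # send an explicit Accept header, as described above
                "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
            },
            meta={
                "playwright": True,
                "playwright_page_methods": [
                    # wait until the CSRF meta tag exists in the DOM; this assumes
                    # the challenge eventually navigates to the real page
                    PageMethod(
                        "wait_for_selector",
                        "meta[name=csrf-token]",
                        state="attached",
                        timeout=30_000,
                    ),
                ],
            },
            callback=self.parse,
        )

    def parse(self, response):
        self.logger.info("final status: %s", response.status)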

elacuesta commented on June 8, 2024

This report is not actionable, please include a minimal, reproducible example.

icaca commented on June 8, 2024

from scrapy.spiders import Spider
import re
import scrapy
from urllib.parse import urlencode


class PlaywrightSpider(Spider):
    name = "test01"
    custom_settings = {
        "PLAYWRIGHT_BROWSER_TYPE": "chromium",
        "PLAYWRIGHT_LAUNCH_OPTIONS": {
            "headless": False,
            "timeout": 20 * 1000,  # 20 seconds
        }
    }
    allowed_domains = [
        "cn.classic.warcraftlogs.com", "classic.warcraftlogs.com"
    ]

    players = [67849152]
    char_url = "https://cn.classic.warcraftlogs.com/character/id/{0}?mode=detailed&zone=1020#metric=dps"
    char_detail_url = "https://cn.classic.warcraftlogs.com/character/rankings-raids/{id}/default/1002/3/5000/5000/Any/rankings/0/0?dpstype=rdps&class=Any&signature={sign}"

    start_urls = [
        "https://cn.classic.warcraftlogs.com/character/id/67849152?mode=detailed&zone=1020#metric=dps"
    ]  # not used: start_requests() below builds the requests explicitly

    def start_requests(self):
        for player in self.players:
            eas_url = self.char_url.format(player)

            yield scrapy.Request(
                eas_url,
                meta={
                    "playwright": True,
                    "playwright_include_page": False,
                    "playwright_context_kwargs": {
                        "java_script_enabled": True,
                        "ignore_https_errors": True,
                    },
                    'id': player,
                    'referer': eas_url,
                },
                headers={
                    'Referer': eas_url,
                    'User-Agent': (
                        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                        '(KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36'
                    ),
                    'Accept': (
                        'text/html,application/xhtml+xml,application/xml;q=0.9,'
                        'image/avif,image/webp,image/apng,*/*;q=0.8,'
                        'application/signed-exchange;v=b3;q=0.7'
                    ),
                },
                callback=self.parse_player,
                errback=self.errback_close_page,
            )

    def parse_pass(self, response):
        pass

    async def parse_player(self, response, **kwargs):

        # page = response.meta["playwright_page"]
        # title = await page.title()

        # await page.close()
        # await page.context.close()
        # print("11111111111111", response.text)

        if response.status == 202:
            print("challenge")
            return

        meta = response.meta

        regex = r"rankings-raids.*signature=' \+ '(.*)'"
        res = re.search(regex, response.text)

        sign = None
        if res:
            sign = res.group(1)
        else:
            self.logger.info("ref=%s res=%s resp:%s", meta["referer"], res,
                             response.text)
            return

        meta["id"] = "67849152"

        # print('_token={}'.format(response.css("meta[name=csrf-token]::attr(content)").get()))
        self.logger.info(
            "_token=%s",
            response.css("meta[name=csrf-token]::attr(content)").get())

        yield response.follow(
            self.char_detail_url.format(id=meta["id"], sign=sign),
            method='POST',
            body=urlencode({
                '_token': response.css("meta[name=csrf-token]::attr(content)").get(),
            }),
            headers={
                'Referer': response.url,  # the page the signature came from
                'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
            },
            meta=response.meta,
            # dont_filter=True,
            # priority=0,
            callback=self.parse_detail,
        )

    def parse(self, response, **kwargs):
        # 'response' contains the page as seen by the browser
        return {"url": response.url}

    def parse_detail(self, response):
        # print(response.text)
        pass

    async def errback_close_page(self, failure):
        print(failure)
        # await page.close()

icaca commented on June 8, 2024

I want to debug the JS requests that follow the 202 response, including their headers and bodies, to find out why the page cannot pass the human verification.
Because my spider only receives the response once, in parse_player, those intermediate JS requests never reach my code. How can I debug them? I originally wanted to capture the traffic with mitmproxy, but while Chrome's requests can be captured, Scrapy seems to hit some SSL errors and cannot capture everything:


Client TLS handshake failed. The client does not trust the proxy's certificate for 936453fdc45b.507bb30a.us-west-2.token.awswaf.com (OpenSSL Error([('SSL routines', '', 'ssl/tls alert certificate unknown')]))



[scrapy-playwright] INFO: Browser chromium launched
[scrapy-playwright] DEBUG: Browser context started: 'default' (persistent=False, remote=False)
[scrapy-playwright] DEBUG: [Context=default] New page created, page count is 1 (1 for all contexts)
[scrapy-playwright] DEBUG: [Context=default] Request: <GET https://cn.classic.warcraftlogs.com/character/id/67849152?mode=detailed&zone=1020> (resource type: document)
[scrapy-playwright] DEBUG: [Context=default] Response: <202 https://cn.classic.warcraftlogs.com/character/id/67849152?mode=detailed&zone=1020>
[scrapy-playwright] DEBUG: [Context=default] Request: <GET https://936453fdc45b.507bb30a.us-west-2.token.awswaf.com/936453fdc45b/8c8c9a139a90/01574e66d2ee/challenge.js> (resource type: script, referrer: https://cn.classic.warcraftlogs.com/)
[scrapy-playwright] DEBUG: [Context=default] Response: <200 https://936453fdc45b.507bb30a.us-west-2.token.awswaf.com/936453fdc45b/8c8c9a139a90/01574e66d2ee/challenge.js>
[scrapy.core.engine] DEBUG: Crawled (202) <GET https://cn.classic.warcraftlogs.com/character/id/67849152?mode=detailed&zone=1020#metric=dps> (referer: https://cn.classic.warcraftlogs.com/character/id/67849152?mode=detailed&zone=1020#metric=dps) ['playwright']
challenge
[scrapy.core.engine] INFO: Closing spider (finished)
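
For the TLS error above, one thing I could try is routing the Playwright-launched Chromium through mitmproxy itself and letting the browser context ignore certificate errors, so mitmproxy sees the browser's traffic. A sketch only, assuming mitmproxy listens on 127.0.0.1:8080 and a hypothetical spider name; not verified against this site:

import scrapy


class ProxySketchSpider(scrapy.Spider):  # hypothetical spider, for illustration only
    name = "proxy_sketch"
    custom_settings = {
        "PLAYWRIGHT_BROWSER_TYPE": "chromium",
        "PLAYWRIGHT_LAUNCH_OPTIONS": {
            "headless": False,
            # route the browser's own traffic through mitmproxy
            "proxy": {"server": "http://127.0.0.1:8080"},
        },
    }

    def start_requests(self):
        yield scrapy.Request(
            "https://cn.classic.warcraftlogs.com/character/id/67849152?mode=detailed&zone=1020",
            meta={
                "playwright": True,
                # let the Playwright context accept mitmproxy's certificate
                "playwright_context_kwargs": {"ignore_https_errors": True},
            },
        )

    def parse(self, response):
        pass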


elacuesta commented on June 8, 2024

If I understand correctly what you're trying to do, you could use playwright_page_event_handlers to handle the Playwright responses with the response event.
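
For example, something along these lines, adapted from the example in the scrapy-playwright README (the spider name and the logging body are only illustrative):

import scrapy
from playwright.async_api import Response as PlaywrightResponse


class EventsSketchSpider(scrapy.Spider):  # hypothetical spider, for illustration only
    name = "events_sketch"

    def start_requests(self):
        yield scrapy.Request(
            "https://cn.classic.warcraftlogs.com/character/id/67849152?mode=detailed&zone=1020",
            meta={
                "playwright": True,
                # attach a handler to the Playwright "response" page event
                "playwright_page_event_handlers": {
                    "response": "handle_response",
                },
            },
        )

    async def handle_response(self, response: PlaywrightResponse) -> None:
        # fires for every response the browser receives, including challenge.js
        # and the awswaf.com token requests that never reach a Scrapy callback
        headers = await response.all_headers()
        self.logger.info("browser response: %s %s %s", response.status, response.url, headers)

    def parse(self, response):
        pass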
