
Comments (3)

michaelvsinko commented on May 18, 2024

Thanks for the answer.
But I still don't understand why this is happening, because some pages are crawled fine.

It works the same with or without dont_filter=True. I also tried reusing the Playwright page, but I still get the same error.

I turned off headless mode and watched the process; it looks the same whether or not the error occurs. My real crawler includes a PageCoroutine that waits for the elements on the page to load by waiting for a special class on the loading-bar element. The page loads fine and the coroutine waits for it. The error occurs after the target class appears, so the target page exists and so do its elements.
I need a working solution right now, so I patched the _download_request_with_page function, but this is obviously not a proper fix:

...
headers = None
status = 200
if response:  # response may be None; fall back to default status/headers
    headers = Headers(response.headers)
    headers.pop("Content-Encoding", None)
    status = response.status
respcls = responsetypes.from_args(headers=headers, url=page.url, body=body)

return respcls(
    url=page.url,
    status=status,
    headers=headers,
    body=body,
    request=request,
    flags=["playwright"],
)
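The guard in the patch above can be sketched independently of Scrapy. This is an illustrative standalone version (the function name is made up, and plain dicts stand in for Scrapy's Headers class), assuming the only special case is page.goto() having returned None:

```python
def response_metadata(response):
    """Return (headers, status), falling back to defaults when Playwright's
    page.goto() returned None (e.g. same-URL navigation with a new hash)."""
    if response is None:
        return None, 200
    headers = dict(response.headers)
    # The browser already decoded the body, so the encoding header is stale.
    headers.pop("Content-Encoding", None)
    return headers, response.status
```

With a response object present, the headers are copied and cleaned; with None, the caller still gets usable defaults instead of an AttributeError.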

from scrapy-playwright.

michaelvsinko commented on May 18, 2024

I think the problem is with scrapy-fake-useragent, because it works fine without it, but the author has not commented on issues since December.


elacuesta commented on May 18, 2024

Hi, thanks for the report. From a quick look, I think it has to do with the following from the upstream docs for Page.goto:

page.goto either throws an error or returns a main resource response. The only exceptions are navigation to about:blank or navigation to the same URL with a different hash, which would succeed and return null.

(Note that it says null instead of None: the docs are translated from the original JS version.)
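The "same URL with a different hash" condition the docs describe can be sketched with the standard library (this helper is illustrative, not part of Playwright or scrapy-playwright):

```python
from urllib.parse import urldefrag

def same_url_different_hash(current_url, target_url):
    """True when the two URLs differ only in the #fragment, i.e. the case
    where page.goto() performs no real navigation and returns None."""
    same_base = urldefrag(current_url).url == urldefrag(target_url).url
    return same_base and current_url != target_url
```

In the reproduction below, http://example.org#1 and http://example.org#2 satisfy exactly this condition, which is why the second goto() yields no response object.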

I'm able to reproduce consistently with the following:

import scrapy
from scrapy.crawler import CrawlerProcess


class HeadersSpider(scrapy.Spider):
    name = "headers"

    def start_requests(self):
        yield scrapy.Request(
            url="http://example.org#1",
            meta={"playwright": True, "playwright_include_page": True},
        )

    def parse(self, response):
        return scrapy.Request(
            url="http://example.org#2",
            meta={"playwright": True, "playwright_page": response.meta["playwright_page"]},
            dont_filter=True,
        )


if __name__ == "__main__":
    process = CrawlerProcess(
        settings={
            "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor",
            "DOWNLOAD_HANDLERS": {
                "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
            },
        }
    )
    process.crawl(HeadersSpider)
    process.start()
$ python examples/headers.py 
(...)
2021-04-10 20:00:57 [scrapy.core.scraper] ERROR: Error downloading <GET http://example.org#2>
Traceback (most recent call last):
  File "/Users/eus/zyte/scrapy-playwright/venv-scrapy-playwright/lib/python3.8/site-packages/twisted/internet/defer.py", line 1416, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "/Users/eus/zyte/scrapy-playwright/venv-scrapy-playwright/lib/python3.8/site-packages/twisted/python/failure.py", line 512, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/Users/eus/zyte/scrapy-playwright/venv-scrapy-playwright/lib/python3.8/site-packages/scrapy/core/downloader/middleware.py", line 45, in process_request
    return (yield download_func(request=request, spider=spider))
  File "/Users/eus/zyte/scrapy-playwright/venv-scrapy-playwright/lib/python3.8/site-packages/twisted/internet/defer.py", line 824, in adapt
    extracted = result.result()
  File "/Users/eus/zyte/scrapy-playwright/scrapy_playwright/handler.py", line 140, in _download_request
    result = await self._download_request_with_page(request, spider, page)
  File "/Users/eus/zyte/scrapy-playwright/scrapy_playwright/handler.py", line 180, in _download_request_with_page
    headers = Headers(response.headers)
AttributeError: 'NoneType' object has no attribute 'headers'
(...)

I'm not entirely sure why the error occurs in your case though, since you don't seem to be setting dont_filter=True, using a custom dupefilter, or reusing a Playwright page.

from scrapy-playwright.
