Comments (4)

GoogleCodeExporter commented on August 23, 2024
OK, I tried your scenario, but it works for me.

You are right that it skips both of those pages, as they are bigger than 4 MB, but it does continue...

Please try crawling with the logger configured to show DEBUG logs. Do you see anything additional?
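If the project is logging through log4j (which the timestamp pattern in the logs below suggests, though your backend may differ), a one-line properties fragment like this raises the crawler4j package logger to DEBUG; adjust accordingly for logback or another backend:

```properties
# Hedged example: enable DEBUG output for all crawler4j classes
# (log4j 1.x properties syntax; package name taken from the logs below)
log4j.logger.edu.uci.ics.crawler4j=DEBUG
```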


Here are my logs:
19:24:47 INFO  [main] - [CrawlController]- Crawler 1 started
19:24:47 INFO  [main] - [CrawlController]- Crawler 2 started
19:24:47 INFO  [main] - [CrawlController]- Crawler 3 started
19:24:49 INFO  [Crawler 1] - [WebCrawler]- URL: http://www.ics.uci.edu/~yil8/public_data/PyLOH/?C=S%3BO%3DA
19:24:49 INFO  [Crawler 2] - [WebCrawler]- URL: http://www.ics.uci.edu/~yil8/public_data/
19:24:50 DEBUG [Crawler 2] - [WebCrawler]- Skipping: http://www.ics.uci.edu/icons/unknown.gif as it contains binary content which you configured not to crawl
19:24:50 WARN  [Crawler 2] - [WebCrawler]- Skipping a URL: http://www.ics.uci.edu/~yil8/public_data/PyLOH/tumor.bam.bai which was bigger ( 4523128 ) than max allowed size
19:24:50 INFO  [Crawler 3] - [WebCrawler]- URL: http://www.ics.uci.edu/~yil8/public_data/PyLOH/
19:24:51 WARN  [Crawler 2] - [WebCrawler]- Skipping a URL: http://www.ics.uci.edu/~yil8/public_data/PyLOH/normal.bam.bai which was bigger ( 4534848 ) than max allowed size
19:24:51 INFO  [Crawler 3] - [WebCrawler]- URL: http://www.ics.uci.edu/~yil8/public_data/?C=D%3BO%3DA
19:24:51 INFO  [Crawler 3] - [WebCrawler]- URL: http://www.ics.uci.edu/~yil8/public_data/?C=N%3BO%3DD
19:24:51 INFO  [Crawler 2] - [WebCrawler]- URL: http://www.ics.uci.edu/~yil8/public_data/PyLOH/?C=N%3BO%3DD
19:24:52 DEBUG [Crawler 2] - [WebCrawler]- Skipping: http://www.ics.uci.edu/icons/back.gif as it contains binary content which you configured not to crawl
19:24:52 INFO  [Crawler 3] - [WebCrawler]- URL: http://www.ics.uci.edu/~yil8/



And it continues on and on...

Original comment by [email protected] on 29 Jan 2015 at 5:27

  • Changed state: Accepted

from crawler4j.

GoogleCodeExporter commented on August 23, 2024
Ah, I was running 1 crawler instance, not 3.
Also, I was using:

@Override
public WebURL handleUrlBeforeProcess(WebURL curURL) {
    System.out.println("handling " + curURL.getURL());
    return curURL;
}

2015-01-28 23:56:14,063 INFO  [main] - [edu.uci.ics.crawler4j.crawler.CrawlController] - Crawler 1 started
2015-01-28 23:56:14,516 INFO  [Crawler 1] - [edu.uci.ics.crawler4j.crawler.WebCrawler] - 1 URL: http://www.ics.uci.edu/~yil8/public_data/PyLOH/?C=S%3BO%3DA
2015-01-28 23:56:14,626 INFO  [Crawler 1] - [edu.uci.ics.crawler4j.crawler.WebCrawler] - 2 URL: http://www.ics.uci.edu/~yil8/public_data/
2015-01-28 23:56:14,896 WARN  [Crawler 1] - [edu.uci.ics.crawler4j.fetcher.PageFetcher] - Failed: Page Size (4523128) exceeded max-download-size (1048576), at URL: http://www.ics.uci.edu/~yil8/public_data/PyLOH/tumor.bam.bai
2015-01-28 23:56:14,896 WARN  [Crawler 1] - [edu.uci.ics.crawler4j.crawler.WebCrawler] - Skipping a page which was bigger than max allowed size: http://www.ics.uci.edu/~yil8/public_data/PyLOH/tumor.bam.bai
2015-01-28 23:56:15,302 WARN  [Crawler 1] - [edu.uci.ics.crawler4j.fetcher.PageFetcher] - Failed: Page Size (4534848) exceeded max-download-size (1048576), at URL: http://www.ics.uci.edu/~yil8/public_data/PyLOH/normal.bam.bai
2015-01-28 23:56:15,302 WARN  [Crawler 1] - [edu.uci.ics.crawler4j.crawler.WebCrawler] - Skipping a page which was bigger than max allowed size: http://www.ics.uci.edu/~yil8/public_data/PyLOH/normal.bam.bai

(Obviously bad technique: I didn't use the logger in the pre-process hook.) From what I recall, the last line was: "handling http://www.ics.uci.edu/~yil8/public_data/PyLOH/normal.bam" (or tumor.bam). They are huge files; it was hanging for at least 45 minutes until I stopped it.

(I am circumventing .bam files for now while I am crawling, but I'll see if I can get a better log once my current crawl is done.)

Original comment by [email protected] on 29 Jan 2015 at 7:04

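The PageFetcher warnings in the second log show where the two runs diverged: crawler4j's max-download-size defaulted to 1,048,576 bytes (1 MiB) in this run, while the first run used a larger limit. A minimal sketch of the comparison the warnings describe (this mirrors the log messages, not the library's own code; the byte counts are taken from the logs above):

```java
public class MaxSizeCheck {
    // crawler4j's default max-download-size, matching the 1048576 in the log.
    static final long DEFAULT_MAX_DOWNLOAD_SIZE = 1_048_576L;

    // Returns true when a page of the given size would be skipped.
    static boolean wouldSkip(long pageSize, long maxDownloadSize) {
        return pageSize > maxDownloadSize;
    }

    public static void main(String[] args) {
        // tumor.bam.bai (4523128 bytes) and normal.bam.bai (4534848 bytes)
        // both exceed the 1 MiB default, so both are skipped.
        System.out.println(wouldSkip(4_523_128L, DEFAULT_MAX_DOWNLOAD_SIZE)); // true
        System.out.println(wouldSkip(4_534_848L, DEFAULT_MAX_DOWNLOAD_SIZE)); // true
    }
}
```

To crawl such files intentionally, the limit can be raised on the crawl configuration before starting the controller, e.g. `config.setMaxDownloadSize(10 * 1024 * 1024)`.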

GoogleCodeExporter commented on August 23, 2024
I found the problem: it got caught in my trap-avoidance algorithm.
Some links were pointing back to the same page under different URLs.
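For reference, the looping links visible in the logs are Apache directory-index sort links (?C=N;O=D and friends), which all render the same listing under different URLs. One way to defuse that kind of trap is to canonicalize URLs before deduplication; a hedged sketch (canonical is a hypothetical helper for illustration, not a crawler4j API):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class TrapFilter {
    // Treat Apache directory-index sort links (?C=...;O=...) as duplicates
    // of the bare listing by dropping the query string before deduplication.
    static String canonical(String url) {
        try {
            URI u = new URI(url);
            String query = u.getQuery();
            if (query != null && query.startsWith("C=")) {
                // Rebuild the URI without query or fragment.
                return new URI(u.getScheme(), u.getAuthority(), u.getPath(),
                        null, null).toString();
            }
            return url;
        } catch (URISyntaxException e) {
            return url; // leave unparseable URLs untouched
        }
    }

    public static void main(String[] args) {
        System.out.println(canonical(
                "http://www.ics.uci.edu/~yil8/public_data/PyLOH/?C=S%3BO%3DA"));
        // -> http://www.ics.uci.edu/~yil8/public_data/PyLOH/
    }
}
```

Feeding the canonical form into the visited-set (or into shouldVisit) collapses all four sort variants of each listing into one page.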

Original comment by [email protected] on 29 Jan 2015 at 8:30


GoogleCodeExporter commented on August 23, 2024
OK.

Thank you for the report, though.

Original comment by [email protected] on 2 Feb 2015 at 10:56

  • Changed state: Invalid

