Comments (4)
OK, I tried your scenario, but it works for me.
You are right that it skips both of those pages, as they are bigger than 4 MB,
but it does continue...
Please try crawling with the logger configured to show DEBUG logs. Do you see
anything additional?
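(For readers following along: crawler4j logs through slf4j, so DEBUG output depends on the backend configuration. A minimal log4j.properties sketch, assuming an slf4j-to-log4j binding; the pattern roughly mimics the log lines quoted below:)

```properties
# Console appender; pattern approximates "19:24:50 DEBUG [Crawler 2] - [WebCrawler]- ..."
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{HH:mm:ss} %p [%t] - [%c{1}]- %m%n

# Scope DEBUG to crawler4j only, so the "Skipping:" messages show up
log4j.logger.edu.uci.ics.crawler4j=DEBUG
```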
Here are my logs:
19:24:47 INFO [main] - [CrawlController]- Crawler 1 started
19:24:47 INFO [main] - [CrawlController]- Crawler 2 started
19:24:47 INFO [main] - [CrawlController]- Crawler 3 started
19:24:49 INFO [Crawler 1] - [WebCrawler]- URL:
http://www.ics.uci.edu/~yil8/public_data/PyLOH/?C=S%3BO%3DA
19:24:49 INFO [Crawler 2] - [WebCrawler]- URL:
http://www.ics.uci.edu/~yil8/public_data/
19:24:50 DEBUG [Crawler 2] - [WebCrawler]- Skipping:
http://www.ics.uci.edu/icons/unknown.gif as it contains binary content which
you configured not to crawl
19:24:50 WARN [Crawler 2] - [WebCrawler]- Skipping a URL:
http://www.ics.uci.edu/~yil8/public_data/PyLOH/tumor.bam.bai which was bigger (
4523128 ) than max allowed size
19:24:50 INFO [Crawler 3] - [WebCrawler]- URL:
http://www.ics.uci.edu/~yil8/public_data/PyLOH/
19:24:51 WARN [Crawler 2] - [WebCrawler]- Skipping a URL:
http://www.ics.uci.edu/~yil8/public_data/PyLOH/normal.bam.bai which was bigger
( 4534848 ) than max allowed size
19:24:51 INFO [Crawler 3] - [WebCrawler]- URL:
http://www.ics.uci.edu/~yil8/public_data/?C=D%3BO%3DA
19:24:51 INFO [Crawler 3] - [WebCrawler]- URL:
http://www.ics.uci.edu/~yil8/public_data/?C=N%3BO%3DD
19:24:51 INFO [Crawler 2] - [WebCrawler]- URL:
http://www.ics.uci.edu/~yil8/public_data/PyLOH/?C=N%3BO%3DD
19:24:52 DEBUG [Crawler 2] - [WebCrawler]- Skipping:
http://www.ics.uci.edu/icons/back.gif as it contains binary content which you
configured not to crawl
19:24:52 INFO [Crawler 3] - [WebCrawler]- URL: http://www.ics.uci.edu/~yil8/
And it continues on and on...
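(Side note: the "max allowed size" in these logs is CrawlConfig's maxDownloadSize, which defaults to 1048576 bytes. A hedged sketch of raising it, assuming the standard crawler4j 4.x API; the storage path is an example:)

```java
CrawlConfig config = new CrawlConfig();
config.setCrawlStorageFolder("/tmp/crawl");      // example path, adjust as needed
config.setMaxDownloadSize(8 * 1024 * 1024);      // default is 1048576 (1 MB); raise to 8 MB
config.setIncludeBinaryContentInCrawling(false); // binary content stays skipped, as in the logs
```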
Original comment by [email protected]
on 29 Jan 2015 at 5:27
- Changed state: Accepted
from crawler4j.
Ah, I was running one crawler instance, not three.
Also, I was using:
@Override
public WebURL handleUrlBeforeProcess(WebURL curURL) {
    System.out.println("handling " + curURL.getURL());
    return curURL;
}
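(A variant of that hook that logs through slf4j instead of System.out, so its output interleaves with the crawler's own log lines. This is a sketch, not from the original thread; the logger field name is an illustration:)

```java
// Sketch: same hook, routed through an slf4j Logger so it shows up in the crawler log
private static final org.slf4j.Logger log =
        org.slf4j.LoggerFactory.getLogger(MyCrawler.class); // MyCrawler is a placeholder name

@Override
public WebURL handleUrlBeforeProcess(WebURL curURL) {
    log.debug("handling {}", curURL.getURL());
    return curURL;
}
```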
2015-01-28 23:56:14,063 INFO [main] -
[edu.uci.ics.crawler4j.crawler.CrawlController] - Crawler 1 started
2015-01-28 23:56:14,516 INFO [Crawler 1] -
[edu.uci.ics.crawler4j.crawler.WebCrawler] - 1 URL:
http://www.ics.uci.edu/~yil8/public_data/PyLOH/?C=S%3BO%3DA
2015-01-28 23:56:14,626 INFO [Crawler 1] -
[edu.uci.ics.crawler4j.crawler.WebCrawler] - 2 URL:
http://www.ics.uci.edu/~yil8/public_data/
2015-01-28 23:56:14,896 WARN [Crawler 1] -
[edu.uci.ics.crawler4j.fetcher.PageFetcher] - Failed: Page Size (4523128)
exceeded max-download-size (1048576), at URL:
http://www.ics.uci.edu/~yil8/public_data/PyLOH/tumor.bam.bai
2015-01-28 23:56:14,896 WARN [Crawler 1] -
[edu.uci.ics.crawler4j.crawler.WebCrawler] - Skipping a page which was bigger
than max allowed size:
http://www.ics.uci.edu/~yil8/public_data/PyLOH/tumor.bam.bai
2015-01-28 23:56:15,302 WARN [Crawler 1] -
[edu.uci.ics.crawler4j.fetcher.PageFetcher] - Failed: Page Size (4534848)
exceeded max-download-size (1048576), at URL:
http://www.ics.uci.edu/~yil8/public_data/PyLOH/normal.bam.bai
2015-01-28 23:56:15,302 WARN [Crawler 1] -
[edu.uci.ics.crawler4j.crawler.WebCrawler] - Skipping a page which was bigger
than max allowed size:
http://www.ics.uci.edu/~yil8/public_data/PyLOH/normal.bam.bai
(Obviously bad technique: I didn't use the logger for handleUrlBeforeProcess.)
From what I recall, the last line was "handling
http://www.ics.uci.edu/~yil8/public_data/PyLOH/normal.bam" or tumor.bam.
They are huge files; it was hanging for at least 45 minutes until I stopped it.
(I am circumventing .bam files for now while I am crawling, but I'll see if I
can get a better log once my current crawl is done.)
Original comment by [email protected]
on 29 Jan 2015 at 7:04
I found the problem: it got caught in my trap-avoidance algorithm.
Some links were pointing back to the same page under different URLs.
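(For anyone hitting the same trap: the ?C=...%3BO%3D... links in the logs above are Apache directory-listing sort links that all render the same page. A hedged sketch of one way to canonicalize such URLs before deduplication; the UrlCanonicalizer class is hypothetical and not part of crawler4j:)

```java
import java.net.URI;
import java.net.URISyntaxException;

public class UrlCanonicalizer {

    // Drop Apache mod_autoindex sort queries (?C=N;O=D etc.) so that all
    // sort variants of one directory listing collapse to a single URL.
    public static String canonicalize(String url) throws URISyntaxException {
        URI u = new URI(url);
        String q = u.getQuery(); // decoded, e.g. "C=S;O=A" for "?C=S%3BO%3DA"
        if (q != null && q.matches("C=.;O=.")) {
            q = null; // treat the sorted listing as the plain listing
        }
        return new URI(u.getScheme(), u.getAuthority(), u.getPath(), q,
                u.getFragment()).toString();
    }

    public static void main(String[] args) throws URISyntaxException {
        String a = canonicalize("http://www.ics.uci.edu/~yil8/public_data/PyLOH/?C=S%3BO%3DA");
        String b = canonicalize("http://www.ics.uci.edu/~yil8/public_data/PyLOH/?C=N%3BO%3DD");
        String c = canonicalize("http://www.ics.uci.edu/~yil8/public_data/PyLOH/");
        // All three sort variants collapse to the plain listing URL
        System.out.println(a.equals(b) && b.equals(c));
    }
}
```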
Original comment by [email protected]
on 29 Jan 2015 at 8:30
OK.
Thank you for the report, though.
Original comment by [email protected]
on 2 Feb 2015 at 10:56
- Changed state: Invalid