Journal of Information Science and Engineering, Vol. 31 No. 3, pp. 839-859 (May 2015)


A Full-Coverage Two-Level URL Duplication Checking Method for a High-Speed Parallel Web Crawler*


ARJUMAND YOUNUS, KYU-YOUNG WHANG, HYUK-YOON KWON AND YEON-MI YEO
Department of Computer Science
Korea Advanced Institute of Science and Technology
Daejeon 305-701, Korea
E-mail: {arjumand; kywhang; hykwon; ymyeo}@mozart.kaist.ac.kr

For efficient large-scale Web crawlers, URL duplication checking is an important technique since it is a significant bottleneck. In this paper, we propose a new URL duplication checking technique for a parallel Web crawler, which we call full-coverage two-level URL duplication checking (full-coverage-2L-UDC). Full-coverage-2L-UDC provides efficient URL duplication checking while ensuring maximum coverage. First, we propose two-level URL duplication checking (2L-UDC). It provides efficiency in URL duplication checking by communicating at the Web site level rather than at the Web page level. Second, we present a solution for the so-called coverage problem, which is directly related to the recall of the search engine. This is the first solution to the coverage problem in the centralized parallel architecture. Third, we propose an architecture, FC2L-UDCbot, for a centralized parallel crawler using full-coverage-2L-UDC. We build a seven-agent FC2L-UDCbot for extensive experiments. We show that the crawling speed of FC2L-UDCbot is approximately proportional to the number of agents (i.e., the seven-agent FC2L-UDCbot is 6.9 times faster than a single-machine crawler). Full-coverage-2L-UDC allows FC2L-UDCbot to scale with the number of agents since it effectively deals with the overheads incurred in a parallel environment. Through an in-depth analysis, we construct a cost model for estimating the crawling speed of a scaled-up crawler. Using the model, we show that FC2L-UDCbot can crawl Google-scale Web pages within several days using dozens of agents.

Keywords: centralized parallel crawler, two-level URL duplication checking, coverage problem, high-speed parallel Web crawler, cost model

Received December 11, 2013; revised June 11, 2014; accepted August 25, 2014.
Communicated by Vincent S. Tseng.
* This work was supported by the National Research Foundation of Korea (NRF) grant funded by Korean Government (MSIP) (No. 2012R1A2A1A05026326).