Sampling the National Deep Web

Bibliographic Details
Published in: Database and Expert Systems Applications, pp. 331-340
Main Author: Shestakov, Denis
Format: Book Chapter
Language: English
Published: Berlin, Heidelberg: Springer Berlin Heidelberg, 2011
Series: Lecture Notes in Computer Science
ISBN: 9783642230875; 3642230873
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-642-23088-2_24

Summary: A huge portion of today's Web consists of web pages filled with information from myriad online databases. This part of the Web, known as the deep Web, is to date relatively unexplored, and even major characteristics such as the number of searchable databases on the Web or their subject distribution are still disputed. In this paper, we revisit the problem of deep Web characterization: how to estimate the total number of online databases on the Web? We propose the Host-IP clustering sampling method to address the drawbacks of existing approaches to deep Web characterization and report our findings based on a survey of the Russian Web. The obtained estimates, together with the proposed sampling technique, could be useful for further studies that handle data in the deep Web.
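The Host-IP clustering sampling idea named in the summary can be illustrated with a minimal sketch: hostnames are grouped by their resolved IP address (so virtually hosted sites sharing one server fall into the same cluster), a random sample of IP clusters is probed for search interfaces, and the count is extrapolated to the whole population. The Python sketch below is only an illustration under these assumptions; the helper has_search_interface and the expansion estimator are hypothetical stand-ins, not the paper's actual detection or estimation procedure.

```python
import random
from collections import defaultdict

def estimate_deep_web_sites(host_to_ip, sample_size, has_search_interface):
    """Cluster-sampling estimate of the number of deep web sites.

    host_to_ip: dict mapping hostname -> resolved IP address
    sample_size: number of IP clusters to probe
    has_search_interface: callable(host) -> bool; the site-probing
        step is assumed here, not taken from the paper
    """
    # Group hostnames by IP: with virtual hosting, many sites can
    # share a single IP, which plain host sampling would misweight.
    clusters = defaultdict(list)
    for host, ip in host_to_ip.items():
        clusters[ip].append(host)

    # Draw a uniform random sample of IP clusters.
    ips = list(clusters)
    sampled = random.sample(ips, min(sample_size, len(ips)))

    # Probe every host in each sampled cluster for a searchable database.
    found = sum(
        1
        for ip in sampled
        for host in clusters[ip]
        if has_search_interface(host)
    )

    # Expand the sampled count to the full IP population
    # (a simple cluster-sampling expansion estimator).
    return found * len(ips) / len(sampled)
```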