To do statistics over the Web, we need samples. Thus, I want to know how to pick a Web page at random, without making much effort. If you are Google or Microsoft, it is easy. But what about the rest of us? And what if I want to pick users at random on Facebook?
In effect, I want to sample a virtually infinite data set: I consider the Web to be infinite because I cannot enumerate all elements in it. Given finite resources against an infinite data set, can I sample fairly?
Thankfully, the objects in this set are linked and indexed. Hence, I came up with two sampling strategies:
- Using the hyperlinks: Start with any Web page. Recursively follow hyperlinks until you have collected many Web pages, then pick a page at random from this set. (Mathematically, I am sampling nodes in an infinite graph; see the sketch after this list.)
- Using the index: Pick a set of common words and enter them into a search engine. Pick a page at random from the result set.
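Here is a minimal Python sketch of the first strategy (the seed URL and the number of pages to collect are arbitrary choices, link extraction with a regular expression is crude, and the resulting sample is biased toward well-linked pages):

```python
import random
import re
import urllib.request
from urllib.parse import urljoin

HREF_RE = re.compile(r'href="(http[^"]+)"', re.IGNORECASE)

def fetch_links(url, timeout=5):
    """Download a page and return the absolute http(s) links it contains."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            html = resp.read(200_000).decode("utf-8", errors="ignore")
    except Exception:
        return []
    return [urljoin(url, href) for href in HREF_RE.findall(html)]

def random_walk_sample(seed_url, n_pages=200):
    """Follow hyperlinks until n_pages pages are collected, then pick one at random."""
    visited, frontier = set(), [seed_url]
    while frontier and len(visited) < n_pages:
        url = frontier.pop(random.randrange(len(frontier)))
        if url in visited:
            continue
        visited.add(url)
        frontier.extend(fetch_links(url))
    return random.choice(list(visited))

if __name__ == "__main__":
    print(random_walk_sample("https://en.wikipedia.org/wiki/Main_Page"))
```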
Are there any other strategies? What can I say (if anything) about the biases of my sampling?
Further reading: Benoit, D. and Slauenwhite, D. and Schofield, N. and Trudel, A., World’s First Class C Web Census: The First Step in a Complete Census of the Web, Journal of Networks 2 (2), 2007.
Update: An anonymous reader points me to Ziv Bar-Yossef and Maxim Gurevich, Random sampling from a search engine’s index, JACM 55 (5), 2008.
Update 2: Chris Brew points me to Henzinger et al., On Near-Uniform URL Sampling, WWW 2000.
Update 3: Regarding Facebook, I found Gjoka et al., Unbiased Sampling of Facebook, 2009.
I think one idea might be to search a search engine for a single letter instead of a list of words, and then choose a random URL from a random page of the entire result set.
That might avoid the issue of the search engine assigning a “relevancy” score, as well as the bias toward sites containing specific words.
Also see this paper: http://www9.org/w9cdrom/88/88.html which includes methods for balancing out the tendency of random walks to reach well-connected pages more often than poorly-connected ones.
Generate a random bit.ly URL. The short code usually consists of 4 to 6 letters and digits.
for instance, this page is: http://bit.ly/KSZ44
but it’s probably biased toward geek topics..
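Something like this rough Python sketch could mechanize it (it assumes bit.ly answers unused codes with an error and redirects valid ones; the 4-6 character lengths come from the comment above, and it may take many tries before a code resolves):

```python
import random
import string
import urllib.request

ALPHABET = string.ascii_letters + string.digits  # bit.ly codes are alphanumeric

def random_bitly_url():
    """Build a candidate bit.ly URL with a random 4-6 character code."""
    length = random.choice([4, 5, 6])
    code = "".join(random.choice(ALPHABET) for _ in range(length))
    return "http://bit.ly/" + code

def resolves(url, timeout=5):
    """Return the destination URL if the short link resolves, else None."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.geturl()
    except Exception:
        return None

if __name__ == "__main__":
    while True:
        candidate = random_bitly_url()
        target = resolves(candidate)
        if target:
            print(candidate, "->", target)
            break
```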
For Facebook users, it is quite easy to pick one at random:
1. Pick a random number $RANDOMNUMBER.
2. Check whether there is somebody at http://www.facebook.com/home.php#/profile.php?id=$RANDOMNUMBER
3. If yes: done. If not, go back to step 1.
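A rough Python sketch of these steps (the upper bound MAX_ID is a guess, the fragment after home.php# is never sent to the server so the sketch hits profile.php directly, and Facebook may redirect unauthenticated requests to a login page, so the existence check is only a placeholder):

```python
import random
import urllib.error
import urllib.request

MAX_ID = 10**9  # guess at the size of the user-id space; adjust as needed

def profile_exists(user_id, timeout=5):
    """Heuristic check: does profile.php?id=... answer without an HTTP error?"""
    url = "http://www.facebook.com/profile.php?id=%d" % user_id
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        urllib.request.urlopen(req, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return False  # 404 or similar suggests no such profile
    except Exception:
        return False

def random_facebook_user():
    """Steps 1-3 from the comment: draw ids until one points at somebody."""
    while True:
        candidate = random.randint(1, MAX_ID)
        if profile_exists(candidate):
            return candidate
```

If the guessed MAX_ID is too small, some users can never be sampled; if it is too large, the loop wastes tries, but any accepted id is still drawn uniformly from the existing ids below the bound.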
David's suggestion is not truly random, as you will only find pages that have been bit.ly'd, which is probably mainly true for relatively new sites.
Rather than picking common words, I have done this by picking one to three random words and then using the “lucky” button (via a direct URL call) on Google. This *appears* to give many more unusual pages better visibility – and it is amusing to see how certain random combinations end up redirecting to particular pages.
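For what it is worth, a small Python sketch of that trick (it assumes a Unix word list at /usr/share/dict/words, and relies on the btnI parameter, which has historically triggered Google's lucky redirect; Google may interpose a confirmation page or block scripted queries):

```python
import random
import urllib.parse
import urllib.request

# Assumption: a local word list; any list of words would do.
WORDS = open("/usr/share/dict/words").read().split()

def lucky_query_url():
    """Build an 'I'm Feeling Lucky' style query URL from 1 to 3 random words."""
    query = " ".join(random.sample(WORDS, random.randint(1, 3)))
    params = urllib.parse.urlencode({"q": query, "btnI": "1"})
    return query, "http://www.google.com/search?" + params

def lucky_page(timeout=5):
    """Follow the redirect and report where the query lands."""
    query, url = lucky_query_url()
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return query, resp.geturl()

if __name__ == "__main__":
    print(lucky_page())
```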
If you make the assumption that what Google / Bing / any other search engine indexes is representative of the web, then you can use MCMC sampling methods to produce a random sample of documents from a search engine. See http://portal.acm.org/citation.cfm?id=1411509.1411514
I think that a random walk can be a bad idea if implemented naively. If memory serves, the web has, at a high level, three types of pages:
1) pages with only out links
2) pages with only in links
3) pages with both
This concept can be extended from individual pages to entire subgraphs. Once you get into some of these subgraphs you can never get out, which captures your random walker. However, without a very large index, it would be difficult to identify these parts of the graph.
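One common fix is to give the walker a restart (teleport) probability, as in the sketch below (the seed list and the 0.15 restart probability, borrowed from the usual PageRank damping, are arbitrary choices, and link extraction is again a crude regex):

```python
import random
import re
import urllib.request
from urllib.parse import urljoin

HREF_RE = re.compile(r'href="(http[^"]+)"', re.IGNORECASE)
SEEDS = ["https://en.wikipedia.org/wiki/Main_Page", "https://news.ycombinator.com/"]

def out_links(url, timeout=5):
    """Return the outgoing http(s) links of a page (empty list on failure)."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            html = resp.read(200_000).decode("utf-8", errors="ignore")
    except Exception:
        return []
    return [urljoin(url, h) for h in HREF_RE.findall(html)]

def walk_with_restarts(steps=100, restart_prob=0.15):
    """Random walk that teleports back to a seed, so it cannot stay trapped in a sink."""
    current = random.choice(SEEDS)
    for _ in range(steps):
        links = out_links(current)
        # Restart on dead ends (no out-links) or with a fixed probability.
        if not links or random.random() < restart_prob:
            current = random.choice(SEEDS)
        else:
            current = random.choice(links)
    return current
```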
Another alternative is to buy a copy of the web. See ClueWeb09 for 1B web pages for less than $1K
The idea of using the Metropolis-Hastings algorithm came to my mind.
Say your web graph is your Markov chain graph, and you want to sample randomly. One idea is to start with an initial seed and run the Metropolis-Hastings algorithm with a burn-in period; it will give you a state sampled at random, distributed around your seed.
A similar idea can be used to target a uniform distribution 🙂
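A rough sketch of what that might look like for a uniform target (on the directed Web this is only a heuristic, since a proper Metropolis-Hastings correction assumes a reversible proposal such as a walk on an undirected graph; the burn-in length and seed are arbitrary):

```python
import random
import re
import urllib.request
from urllib.parse import urljoin

HREF_RE = re.compile(r'href="(http[^"]+)"', re.IGNORECASE)

def out_links(url, timeout=5):
    """Return the outgoing http(s) links of a page (empty list on failure)."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            html = resp.read(200_000).decode("utf-8", errors="ignore")
    except Exception:
        return []
    return [urljoin(url, h) for h in HREF_RE.findall(html)]

def metropolis_hastings_page(seed, burn_in=500):
    """Random walk with Metropolis-Hastings acceptance min(1, deg(u)/deg(v));
    for a uniform target this down-weights moves into well-connected pages."""
    current, current_links = seed, out_links(seed)
    for _ in range(burn_in):
        if not current_links:  # dead end: restart at the seed
            current, current_links = seed, out_links(seed)
            continue
        proposal = random.choice(current_links)
        proposal_links = out_links(proposal)
        deg_u, deg_v = len(current_links), max(len(proposal_links), 1)
        if random.random() < min(1.0, deg_u / deg_v):
            current, current_links = proposal, proposal_links
    return current

if __name__ == "__main__":
    print(metropolis_hastings_page("https://en.wikipedia.org/wiki/Main_Page"))
```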
Hi,
You should check this Yahoo! service: http://random.yahoo.com/bin/ryl
As far as I know, it produces random pages taken from their index. I used it in the past and, although it certainly shows a bias towards .com pages, it is pretty convenient.
Best, Dani
The Yahoo! generator is pretty cool. It seems to only show top-level pages, though.
Given that many web pages are actually generated through parameters (see http://www.epinions.com/Refrigerators for just one example), the number of different pages a site can offer rises with the number of parameter combinations. How does one define “uniform sampling” in that context?
A few good-looking hits stem from this query: http://www.google.com/search?hl=en&q=how+to+sample+uniform+from+the+web&aq=f&oq=&aqi=