The Netflix competition is a $1 million contest to build the best possible movie recommender system. It has already contributed tremendously to science by providing the largest freely available collaborative filtering data set (about 2 GB): it is at least an order of magnitude larger than any comparable data set. It has also generated many valuable research papers. Among the interesting contributions is a paper showing that the anonymized data might not be so anonymous after all.
However, Greg wonders whether the game itself will produce anything of lasting value:
Participants may be overfitting to the strict letter of this contest. Netflix may find that the winning algorithm actually is quite poor at the task at hand — recommending movies to Netflix customers — because it is overoptimized to this particular contest data and the particular success metric of this contest.
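For context, the success metric Greg alludes to is root mean squared error (RMSE) between predicted and actual ratings on held-out data. A minimal sketch, with made-up numbers:

```python
# Root mean squared error, the Netflix contest metric.
# The ratings below are illustrative, not contest data.
import math

def rmse(predicted, actual):
    """RMSE between two equal-length sequences of ratings."""
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
    )

print(rmse([3.5, 4.0, 2.0], [4, 4, 1]))
```

Overfitting, in this setting, means driving this one number down on the contest data without actually recommending better movies.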
Because I have written collaborative filtering papers in the past (on multidimensionality and rules, on the Slope One scheme, and on the data normalization problem), people were quick to ask whether I would participate. The issue was quickly settled: the rules of the game forbid people from Quebec from participating. Privately, though, I expressed concerns that the game would be more about tuning and tweaking than about gaining new insights into the science of collaborative filtering. I never voiced these concerns publicly for fear that they might be misinterpreted.
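For readers unfamiliar with Slope One: it predicts a user's rating of an item from the average rating deviations between item pairs. A minimal sketch of the weighted variant, with an illustrative toy data set (the variable names and data are mine, not from any paper):

```python
# Weighted Slope One: predict a user's rating of `target` from the
# average pairwise deviation dev(target, i) over users who rated both
# target and i, weighted by how many users support each pair.
from collections import defaultdict

def slope_one_predict(ratings, user, target):
    """ratings: dict user -> dict item -> rating."""
    dev_sum = defaultdict(float)   # sum of (r_target - r_i) per item i
    support = defaultdict(int)     # number of users rating both
    for r in ratings.values():
        if target in r:
            for i, v in r.items():
                if i != target:
                    dev_sum[i] += r[target] - v
                    support[i] += 1
    num = den = 0.0
    for i, v in ratings[user].items():
        if support[i]:
            num += (v + dev_sum[i] / support[i]) * support[i]
            den += support[i]
    return num / den if den else None

ratings = {
    "alice": {"A": 5, "B": 3, "C": 2},
    "bob":   {"A": 3, "B": 4},
    "carol": {"B": 2, "C": 5},
}
print(slope_one_predict(ratings, "bob", "C"))
```

Part of the scheme's appeal is that it is this simple: the deviations can be precomputed and updated incrementally, which is what made it practical rather than accuracy-optimal.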
I do not think that the next step in collaborative filtering is to squeeze out more accuracy according to some metric. That game got old circa 2000. I would rather see people come up with drastically new problems and insights.
Disclaimer. If you are working on the Netflix game, please continue. I do not deny that it is an interesting engineering challenge.