Scientists, businessmen, and even spies are supposed to analyze data collaboratively. Do they?
If you are a scientist, you are familiar with the following type of research collaboration: a lowly student collects the data, crunches the numbers, and plots the results. Other collaborators, such as the professor, merely comment on the tables and plots. Similarly, the CEO sees the pie chart while the assistant crunches the numbers. That is vertical collaboration: you clean the basement and I will clean the main floor.
Yet, reliable data analysis requires horizontal collaboration. Indeed, there are downsides to task specialization:
- By never looking at the data, senior scientists and managers rely on experience and hearsay. Their incoming bandwidth is dramatically reduced. Nature is the best coauthor. Consider how the best American spies were fooled prior to 9/11, even though all the data needed to catch the terrorists was available. Bandwidth is a requirement to be smart.
- When a single person crunches the numbers, hard-to-detect errors creep in. The problem is serious: Ioannidis showed that most research findings are wrong.
- With nobody reviewing the source data, the sole data analyst is more likely to cheat. Why rerun these tests properly when you can just randomly dismiss part of the data? People are lazy: given no incentive, we take the easy way out.
The common justification for task specialization is that senior researchers and managers do not have the time. Yet, thirty years ago, researchers and managers did not type their own letters. Improve the tools, and you reduce task specialization.
With Sylvie Noël, I decided to have a closer look. My preliminary conclusions are as follows:
- There are adequate tools to support rich collaboration over data analysis. Collaboratories have been around for a long time. We have the technology! Yet, we may need a disruption: inexpensive, accessible, and convenient tools. The current migration toward Web-based applications might help.
- Given a chance, everyone will pitch in. To make our case, we collected user data from sites such as IBM Many Eyes and StatCrunch. We then ran an Ochoa-Duval analysis. We found that the network of users within Web-based data analysis tools is comparable to that of other Web 2.0 sites.
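To give a flavor of what such a network analysis involves, here is a minimal sketch in Python (the usernames and datasets are invented, and this is not the actual Ochoa-Duval methodology): link two users whenever they worked on the same shared dataset, then count each user's collaborators, the kind of degree statistic that site-to-site comparisons rely on.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical sample: which users commented on which shared datasets
activity = {
    "crime-stats": ["alice", "bob", "carol"],
    "census-2006": ["alice", "dave"],
    "flu-trends": ["bob", "carol", "dave"],
}

# Link two users whenever they worked on the same dataset
edges = defaultdict(set)
for users in activity.values():
    for u, v in combinations(sorted(users), 2):
        edges[u].add(v)
        edges[v].add(u)

# Degree: how many distinct collaborators each user has
degrees = {user: len(neighbors) for user, neighbors in edges.items()}
print(degrees)
```

Real studies go further (degree distributions, clustering coefficients), but even this tiny graph shows how collaboration structure can be extracted from ordinary site activity.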
As a database researcher, I think that further progress lies with loosely coupled data (no big tables! no centralized tools!) and flexible visualization tools (stop the pie charts! go with tag clouds!). I am currently looking for new research directions on this problem. Any ideas?
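For the curious, the core of a tag cloud is trivial to sketch. Here is a minimal illustration in Python (the word counts are invented): scale each word's font size linearly with its frequency, so frequent words dominate visually without hiding the rest.

```python
from collections import Counter

# Hypothetical word counts from a dataset's description fields
words = "data analysis data tools data web analysis".split()
counts = Counter(words)

lo, hi = min(counts.values()), max(counts.values())

def font_size(count, smallest=10, largest=24):
    """Map a word count linearly onto a font-size range (in points)."""
    if hi == lo:
        return largest
    return smallest + (count - lo) * (largest - smallest) / (hi - lo)

cloud = {word: round(font_size(c)) for word, c in counts.items()}
print(cloud)
```

The point is not the arithmetic but the flexibility: unlike a pie chart, the cloud degrades gracefully as the number of items grows.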