Is Wikipedia anti-intellectual?

Sanger recently posted a provocative piece where he argues that geeks suffer from anti-intellectualism. His stance is that democratic sites such as  Wikipedia (which he co-founded) are founded on anti-intellectualism. He sums up this techno anti-intellectualism using five beliefs:

  1. Experts do not deserve any special role in declaring what is known.
  2. Books are an outmoded medium because they involve a single person speaking from authority.
  3. The classics, being books, are also outmoded.
  4. The digitization of information means that we don’t have to memorize nearly as much.
  5. You don’t have to go to college, which is overpriced and so reserved to the elite anyway.

My take:

  1. In the Google era, we do not need formal experts as much as we used to. Back in the day, if you wanted to learn about combinatorics, you took a class in college. In fact, you probably had to take a class to even know what combinatorics was! The other alternative was to read the papers and the books on the topic, which were only accessible from a college library. These days, you can get in touch with hundreds of passionate fans of combinatorics on Math Overflow, where you can ask and answer questions, and even build a reputation. You can read, for free, the Electronic Journal of Combinatorics. The same is true of just about every topic.
  2. The dominance of the long form (e.g., books) was a by-product of our technology. If you are going to print and distribute a piece of work, it needs to have a certain volume for the operation to be financially viable. If you sell a 300-page philosophy book for $50 and make a profit, you cannot easily sell a 3-page philosophical document for $0.50 and still make a profit, because you have fixed fees and because few people can be bothered to drive to a bookstore to buy 3 pages. Moreover, books need to be self-contained: you cannot use hyperlinks to refer the reader to background knowledge. That is not to say that long documents are a thing of the past (e.g., the Harry Potter novels), but electronic media are more flexible.
  3. I conjecture that the classics have never been so popular. I constantly refer back to the classics through Project Gutenberg or ebooksgratuits.com. I constantly read about bloggers who cite the classics. I talk with a lot of people who reread classics on their kindle or iPad.
  4. Memorization is shallow learning: we learn by applying ideas. Anyone can memorize Newton’s three axioms. Denis G. Rancourt famously showed that his fourth-year physics students did not understand these three axioms. Memorization gives you the illusion of knowledge. It is a dangerous illusion.
  5. You can succeed without college, and a college degree is not success. It used to be that a college degree, any college degree, meant that you were a success. Anyone who holds on to this belief is in for a rude awakening.

Further reading: Fear of Illegibility by Rader is another take on Sanger’s essay.

Why I still program

People expect that, as you grow older, you give up practical jobs such as programming for more noble tasks such as managing a team and acquiring funding. This is especially true in academia, where “real professors” delegate the details, keeping only the “big picture stuff”. In other words, organizations are geared toward vertical collaboration: a hierarchical structure where people on top supervise other (cheaper) employees. In research, this means that the senior scientists have the ideas which junior scientists implement. Over time, the senior scientists may become unable to do what the junior scientists do, but they will become experts at acquiring funding. This model can scale up: the senior scientist can direct middle-level scientists who then supervise the younger scientists, and so on. Jorge Cham referred to this model as the Profzi scheme because it works best when funding is abundant and ever increasing.

The counterpart is horizontal collaboration. In this model, the senior scientists do everything, from having the big idea to executing it.  They prefer to automate or avoid busy work when possible. Collaboration is mostly used to get a different point of view and complementary expertise. This model still works when funding is scarce, but it fails to scale up with respect to the number of people involved: horizontal collaboration is necessarily intimate.

The type of work that each model supports best differs. I conjecture that vertical collaboration favors long-term plans and predictable results. I believe horizontal collaboration favors serendipity and “wild” ideas.

As a sign that I favor horizontal collaboration, I still program even though I am old. This is unusual. It is so unusual as to raise eyebrows. Some programming takes time, a lot of time. I can spend two or three months a year programming. Presumably, my time is too valuable to be spent on a lowly task like programming that can be best done by people earning a fraction of my income. So why do I still program?

Maybe my best advocate would be the master himself, Donald Knuth:

People who discover the power and beauty of high-level, abstract ideas often make the mistake of believing that concrete ideas at lower levels are relatively worthless and might as well be forgotten. (…) on the contrary, the best computer scientists are thoroughly grounded in basic concepts of how computers actually work, and indeed that the essence of computer science is an ability to understand many levels of abstraction simultaneously.

But I also have my own arguments:

  • I want my work to be significant, to have impact. Yet even widely cited research papers are rarely read. Very few research papers have significant impact. However, it is comparatively easier to do work that matters with software. For example, a team from Facebook recently integrated one of my compressed bitmap index libraries in Apache Hive, the Hadoop-based framework for data warehousing. I am willing to bet good money that nobody at Facebook read the original paper for which I wrote this software.
  • Time and time again, implementing my ideas has forced me to understand them better. A common scenario is that something that sounded reasonable on paper suddenly feels unwieldy when you must implement it. I also often discover bugs in my mathematical arguments through implementation. Could I outsource this work to others? Maybe. But the process would not be as fruitful.
  • You do get better at programming over time. I have been building up my expertise for decades. It is enjoyable to start from scratch and solve a difficult problem in days when you know that others would take weeks or months to do the same.

If my arguments are reasonable, and if even Donald Knuth is on my side, why does it still surprise people when I admit to being a programmer-scientist? I believe that the rejection of programming as a lower activity can be explained by Veblen’s Theory of the Leisure Class. In effect, we do not seek utility but prestige. There is no prestige in tool-making, cooking or farming. To maximize your prestige, you must rise up to the leisure class: your work must not be immediately useful. There is more prestige in being a CEO or a politician than in being a nurse or a cook. Scientists who supervise things from afar have more prestige. Programming is akin to tool-making, thus people from the leisure class won’t touch it. People will call themselves engineers or analysts or developers, but rarely “programmers” because it is too utilitarian.

Warning: Not everyone should be programming. It is a time-consuming activity. Because I program so much, there are many other exciting things I cannot do.


Further reading: Brooks, The Design of Design, 2010.


Automation will make you obsolete, no matter who you are


I was part of the first generation of kids to receive computers as gifts. I was also part of the first generation of professionals to adopt computer-assisted telework: I can work from my bedroom just as efficiently as from my campus office. I routinely organize and attend meetings while I am at random locations. This weekend, my 7-year-old son repaired our vacuum-cleaning robot by taking it apart on the kitchen floor: the contacts with the battery were dirty. Meanwhile, I was at the kitchen table building a solar-powered robot.

Computers can already be superior to human beings on most specialized tasks:

  • Researchers have recently found that a computer persona could be more socially engaging than a bona fide human being. This person you are chatting with on Facebook or Twitter, are you sure it is a human being? Maybe you are dealing with a robot, and that is why this person is so responsive and systematically friendly. I suspect that most software is asocial simply because we did not bother to implement sociability.
  • Computers can beat any human being at chess and checkers. In fact, you could play 1 million games of checkers against a computer, and we know you would never win, not once. How do computers beat you? Not through logic alone. They rely on an extensive database: that is, they have experience, more experience than any human being. Computers show creativity and good judgment when playing these games.
  • A tool like Google Mail sorts my mail automatically for me, and archives it nicely. This used to require a human being making judgement calls about what mail was junk, what mail was high priority, and so on. Yet it has been nicely automated.

Alas our technology is critically limited: we are unable to give computers general intelligence. Does it matter as far as automation is concerned? I believe that general intelligence is overrated in the workplace.

For example, can computers without general intelligence replace managers and accountants? Consider Walmart. We often think of Walmart as a discount store, but it is also the direct result of the largest and most ambitious business automation project ever. Walmart is not killing its competitors just by offering poor wages: it is killing them because it has automated much of the supply and accounting management.

Could computers replace teachers? Khan Academy shows that the lecture component has already been replaced. What about grading? In the software industry, we already use automated testing on a large scale: to determine whether candidates can program in Java, they are asked to fill out an online questionnaire. Employers rely on these tests more than on college grades. The only reason college professors still grade Calculus and programming assignments by hand is that they lack the incentive to automate the task. But have no fear: for-profit colleges are already hard at work automating everything. Would students prefer to have a “personal touch”? I don’t think so: I believe students would rather have quick and detailed automated feedback than wait for a tired professor to scribble a few notes in the margin of their assignment. (And let us be honest: most marking is done by underpaid teaching assistants who don’t care that much.)
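
To make this concrete, here is a minimal sketch of test-based automated grading in Python; the function names and test cases are invented for illustration, and a real grader would also sandbox and time-limit the submitted code.

    # Toy auto-grader: run a submitted function against test cases and report a score.
    # (Names and test data are hypothetical.)
    def grade(student_fn, test_cases):
        passed = sum(1 for args, expected in test_cases if student_fn(*args) == expected)
        return f"{passed}/{len(test_cases)} tests passed"

    def student_sort(lst):  # a submitted answer
        return sorted(lst)

    tests = [(([3, 1, 2],), [1, 2, 3]),
             (([],), []),
             ((["b", "a"],), ["a", "b"])]
    print(grade(student_sort, tests))  # 3/3 tests passed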

In fact, most jobs require little general intelligence:

  • Jobs are highly specialized. You can sum up 80% of what most people do with 4 or 5 different specific tasks. In most organizations, it is a major faux pas to ask the wrong person: there is a one-to-one matching between people and tasks.
  • Jobs don’t require you to understand much of what is going on. You only need to fake some understanding of the context, the same way a spam filter fakes an understanding of your emails. Do you think that the salesman at the appliance store knows why some dishwashers have a shredder and some don’t, and why it matters? Do you think that professors know what the job market is like for their graduates?

Nevertheless, some believe their job cannot be automated. Most of them are wrong.

For example… Surely, we won’t replace politicians by robots? We may not replace them, but they will become obsolete anyhow. I believe that computers enable a different form of government altogether, one where we have little need for politicians. In most of the western world, we use representative democracy, with local politicians being elected and sent to a central government, where they form the ruling class. Yet with an entire population having Internet access, we don’t need politicians to represent the people: people can speak for themselves. Most politicians are already more or less powerless since nobody really believes they represent their people. You think that government without professional politicians would be chaos? I am sure there are people who think that without newspapers, individuals cannot be informed.

Whether you are a lawyer, a medical doctor, a professor or a politician, you already are obsolete. We are just waiting for someone to write the software that will replace you. Your replacement won’t pass the Turing test, but nobody will care.

Further reading: The future is already here – it’s just not very evenly distributed, Jobless recovery, the Luddite fallacy and the 4-hour workweek and If robots, machines, and self-service replaced most of the work currently done by humans, what would humans do?

Credit: Special thanks to Seb Paquet,  Phil Jones and Stefan King for online discussions.

The perils of filter-then-publish

Why do I prefer the publish-then-filter system, which dominates social media such as blogs, to the traditional filter-then-publish system used by scientific journals? Because the conventional peer review system (filter-then-publish) has disastrous consequences:

  1. In the conventional peer review system, you seek to please the reviewers who in turn try to please the editor who in turn is trying to guess what the readers want. It should not be a surprise that the papers are optimized for peer review, not for the reader. While you will eventually get your work published, you may have to drastically alter it to make it pass peer review. A common theme is that you will need to make it look more complicated. In a paper I published a few years ago, I had to use R*-trees, not because I needed them, but because other authors had done so. When I privately asked them why they had used R*-trees, the answer was “it was the only way to get our paper in a major conference”. So my work has been made more complicated for the sole purpose of impressing the reviewers: “look, I know about R*-trees too!” Several times, during the course of peer review, I was asked to remove material which was judged to be “textbook material”: didactic material is frowned upon in many circles (hint: it is not fancy enough).  Be warned: if you find an easy way to prove a result, and it ends up looking trivial in retrospect, your work may become unpublishable. You will need to invent complex related problems to pass peer review. It explains why several  important results appear as remarks in long and complicated papers. Either purposefully, or by habit, people will write in a way to make their paper pass peer review even if it makes the work inaccessible. Do you think research papers have to be boring? If so, you have been brainwashed.
  2. The conventional system is legible: you can count and measure a scientist’s production. The incentive is to produce more of what the elite wants. In a publish-then-filter system nobody cares about quantity: only the impact matters. And impact can mean different things to different people. It allows for more diversity in how people produce and consume science. Thus, if you think it would be better if we stopped counting research papers, then you should reject conventional peer review and favor the publish-then-filter system.
  3. The difference between filter-then-publish and publish-then-filter is analogous to the difference between Soviet central planning and a free market. You either let a select few decide, or you let the market decide. You can either trust that the people will be smart enough, or you can delegate the selection to a few trusted experts.
  4. The conventional peer review system pretends to delegate the assessment of scientists to review boards. Instead of reading each other, we trust brands. The net result is that people hire and promote each other without reading the work. Thus, the conventional system kills any incentive to build a coherent and interesting body of work: you are just a machine that produces research papers as commodities. You know how you succeed in science these days? Take a few ideas, then try every small variation of these ideas and make a research paper out of each one of them. Each paper will look good and be written quickly, but your body of work will be highly redundant. Instead of working toward deep contributions, we encourage people to repeat themselves more and more and collect many shallow contributions. We sacrifice scholarship for vanity.

Further reading: Become independent of peer review and Three myths about scientific peer review.

Source: This post was inspired by a comment made by Sylvain Hallé.

You cannot refuse to publish our paper because…

I feel strongly that the conventional peer review process needs to evolve to a publish-then-filter model. That is, I do not believe that a few select individuals should decide what is worth publishing.

But openly facing others, and their criticism, requires a little bit of intelligence and backbone. These qualities are necessary for healthy science. They are even more important in a publish-then-filter model: you are exposing your unfiltered work to the world.

For their own good,  I would like to exclude from science those who cannot pass an elementary test of maturity. For example: can you tell what is wrong with the following submission letter? (Hint: if you cannot, forget science, it is not for you.)


Dear Editor,

It is with pleasure that we are submitting our article for immediate publication in your journal. Unfortunately, you cannot refuse to publish our paper because:

  • We have been working for three years on this paper. It is as ready as it will ever be.
  • We had to fund the work of the students. Real money was spent on this paper.
  • We, the authors, have unanimously agreed that this paper is ready for immediate publication. Who are you to disagree?
  • We have followed the outline of the other articles in your journal. Our paper looks just like the other papers.

Sincerely yours,

The authors

Time-saving versus work-inducing software

At a glance, office software like Word, PowerPoint or Excel is a great time saver. Who would want to go back to the era before word processors?

Unfortunately, I believe that this same software bears part of the blame for our long working hours:

  • Word processors entice people to create too many documents. Microsoft Word is the king of corporate busy work. Wherever I have worked, people got busy crafting all sorts of useless internal reports or plans. And, of course, reports must be properly formatted with a title page and an index, just in case someone might print them. And updating old documents can be messy: it almost invariably involves formatting bugs. I am tired of having to check that the font is the same throughout the document. Why can’t machines format documents automatically in a consistent manner? Of course, they can, and they have been doing it since the seventies (hint: DocBook, LaTeX, web content management systems like blogs).
  • Spreadsheet software is great for prototyping ideas. If I have half an hour to do an analysis, it is hard to beat Excel. There is a catch however: it is difficult to reuse old spreadsheets with new data. Thus, in most organizations, there is a multiplication of spreadsheets. And spreadsheets tend to grow to include many pages, all poorly documented and fragile. Code reuse is possible, but difficult in Excel. Yet there are perfectly good frameworks for data processing such as R. They are orders of magnitude more powerful and less work intensive.
  • PowerPoint is responsible for 90% of bad business presentations. Have you noticed that Bill Gates frequently gives talks without PowerPoint? In fact, he has become a much better speaker since he stopped using so many silly slides. But what is worse is that people spend a lot of time on these slides instead of preparing good talks. And remember: not giving a talk is often the best option.

Microsoft is not the sole company to blame. In universities, most assignments and exams are still marked by hand whereas we have had the technology to automate 90% of the marking for years!

Happily, I find that some software really does save labor:

  • Most web content management systems let the author write and publish efficiently. Maintaining this blog is cost-effective: with only a few hours of work every week, I can reach thousands. I spend almost no time on repetitive tasks.
  • Scripting has gotten a lot better in the last 20 years, and it is very useful. I get a lot of my data processing done in Python (see the short example after this list). My only regret is that so few people learn scripting languages.
  • Obviously, Wikipedia is amazing at saving time.
  • With Doodle, scheduling meetings is an order of magnitude faster than with Microsoft Outlook.
  • Cell phones are work-inducing, obviously. However, I conjecture that tablet-based computing is time-saving. People write shorter comments and emails. They tend to start fewer documents. Users of an iPad will spend more time reading than writing. Isn’t it about time that we take some time off to read instead of producing more than others can consume?
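
To give an idea of the kind of scripting I have in mind, here is a short Python sketch that tallies the values in one column of a CSV file; the file name and column are hypothetical.

    import csv
    from collections import Counter

    # Count how often each grade appears in a (hypothetical) grades.csv file.
    with open("grades.csv", newline="") as f:
        counts = Counter(row["grade"] for row in csv.DictReader(f))

    for grade, n in counts.most_common():
        print(grade, n)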

What is the underlying thread? Time-saving software tends to be produced by less civilized people.  Software written by large corporations will probably be work-inducing.

Further reading: Of Lisp Macros and Washing Machines (via Hosh Hsiao) and Conway’s law (via John D. Cook)

Scaling MongoDB

I have been spending much time thinking about a future where document-oriented databases are the default. Though they have their problems, I think that they are far better suited for what most people want to do than relational databases.

MongoDB is one of the best document-oriented database systems around: it is mature, scalable, open source and commercially supported. You can set it up to run on Amazon’s cloud in minutes.
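
As a taste of the document model, here is a minimal sketch using the pymongo driver; it assumes a MongoDB server running locally and uses a made-up blog.posts collection.

    from pymongo import MongoClient

    # Assumes a local MongoDB server; database and collection names are invented.
    client = MongoClient("localhost", 27017)
    posts = client.blog.posts

    # Documents are schema-free, JSON-like objects: each one can carry its own fields.
    posts.insert_one({"title": "Scaling MongoDB", "tags": ["nosql", "sharding"], "pages": 60})
    print(posts.find_one({"tags": "sharding"})["title"])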

A long time ago, the good people at O’Reilly sent me a copy of Scaling MongoDB by Kristina Chodorow, one of the developers of MongoDB. Kristina has a total of four books at O’Reilly.  This one is short, but the writing is nearly perfect. The book looks beautiful too. And no, not all O’Reilly books are that good.

If you are new to MongoDB, you should get a more complete book like MongoDB: The Definitive Guide. Scaling MongoDB is about the specific but important issues raised by scalability. Having a separate book makes sense. Indeed, learning the basics of MongoDB is not hard, but figuring out sharding is a lot harder and deserves its own book.

Improve your impact with abundance-based design

People design all the time: new cars, new software, new houses. All design is guided by constraints (cost, time, materials, space) and by objectives (elegance, quality). Constraints are limitations: you only have so much money, so many days… whereas objectives are measures that you seek to either maximize or minimize. In practice, either the constraints or the objectives may dominate. You are either worried about limited resources, or you seek to maximize the quality of your result.

Our ancestors were probably often forced into scarcity-based design. When your very survival is in question, you build whatever shelter you can in the hours you have left before nightfall. We are probably wired for good scarcity-based design as it is a survival trait.

Any monkey can live in scarcity. However, abundance-based design is crucial if you want to maximize your impact.

Facebook engineers do abundance-based design. They are mainly worried about improving Facebook and pursuing objectives such as usability, but much less worried about time or disk space. Similarly, when I build a model sailboat, costs and time are nearly irrelevant: I mostly care that my boat is pretty and that it handles well. As a researcher, most of my research papers are the result of abundance-based design. It does not matter how long I work on the research projects, as long as the result has impact. Similarly, my blog is the result of abundance-based design. Nobody is forcing me to write on a regular schedule. And I have no set limit on the time I spend on my blog.

Many people choose to simulate scarcity-based design, maybe because it comes with an adrenaline rush. In fact, the adrenaline rush is a good indication that you are in scarcity mode. You will often hear scarcity-based designers say that they are running out of time, money or space. They may spend much time planning or worrying about costs and deadlines. There are many examples of artificial scarcity-based design:

  • One of the great fallacies of software engineering is that what matters in the software industry is how long it takes and how much it costs. But anyone who has been in the software industry long enough knows that the real problem is that most software is bad. Some of it is atrocious. For example, Apple iTunes is a disgrace.  I don’t care whether the iTunes team finished on time and within budget. Their software is crap. They failed as far as I am concerned.
  • Nobody cares how long it took you to write your novel or research paper. Yet people sign deals with publishers with fixed deadlines, and others choose to publish in conferences with fixed deadlines. They create external pressure, on purpose.

Frankly, if you are a designer such as an artist, a fiction writer, a scientist or a scholar, you should have a feeling of urgency, not worry. A single strategy may suffice to put you in abundance mode:

  • Reduce the quantity. Apple is well known for having few products. Despite having billions of dollars, they focus on few projects. And their new products often have fewer features than the competition’s. By focusing your attention, you ensure abundance. Don’t start more projects than you can execute with ease.

Further reading: Publishing for Impact by John Regehr and The merits of chasing many rabbits at the same time by Alain Désilets.

Is science more art or industry?

In my previous post, I argued that people who pursue double-blind peer review have an idealized “LEGO block” view of scientific research. Research papers are “pure” units of knowledge and who wrote them is irrelevant.

Let us take this LEGO block view to its ultimate conclusion.

If science is pure industry, producing standardized elements called research papers, why should papers be signed as if they were pieces of art? The signature is obviously irrelevant. Nobody cares who made a given LEGO block. Thus, I propose we omit names from research papers. It should not change anything, and it will be fairer.

Indeed, why not have anonymous papers all the way? Journals could publish articles without ever telling us who they are from. We would not know, for example, which papers were written by Einstein or Turing. How is that relevant? How does it help us to appreciate a given paper to know it was written by Turing?

What would we do for conferences? Because papers are standard units, people could attend conferences and be assigned a paper, any paper, to present. Presenting your own work is a bit too egotistical anyhow.

Of course, for recruiting or promotion purposes, we would need to be able to map research papers to individuals. But, because papers are standard units, all you care about is the number of papers and related statistics. Thus, an academic c.v. would not list research papers, but instead provide a key that could be used to retrieve productivity statistics.

Of course, this is not, even at a first approximation, how science works. Science is more art than industry. That is why we put our names on research papers. It does matter that it was Turing who wrote a given paper. It helps us understand a paper better to know its author, its date and its context. When I receive a paper to review, I try to see how the authors work, what their biases are.

Research papers present a view of the world. But like Plato’s cave, this view is fundamentally incomplete. If a paper reports the results of experiments the authors conducted, the paper is not these experiments: it is only a view on these experiments. It is necessarily a biased view. Do you know what the biases of these particular authors are?

Let us be candid here. When reviewing research papers, there is no such thing as objectivity. Some papers are interesting to the reviewer, some aren’t. What makes it interesting has to do with whether the world view presented is compatible with the reviewer’s world view. And because different individuals have (or should have) different world views, it does matter who wrote the paper even if we omit names. It helps me to find your paper interesting if I can put myself in your shoes, get to know who you are. An anonymous paper is far more likely to be boring to me, because it is hard to have empathy for the authors.

Some days, we all wish it did not matter who we are. Can’t people just look at our work on its own? You can get your wish by becoming a bureaucrat or a factory worker. Science is for people who want to see their name in print, people who want to build their reputation and cater to their inflated ego. In short, good science is interesting.

The case against double-blind peer review

Many scientific journals use double-blind peer review. That is, the authors submit their work in a way that cannot be traced back to them. Meanwhile, the authors do not know who the reviewers are. In this way, the reviewers are free to speak their mind. It feels fair because the reviewers cannot be influenced (in theory) by the declared affiliation of the authors or their relative fame.

How well does it work in practice? You would expect double-blind reviewing to favor people from outside academia. Yet Blank (1991) reported that the opposite is true: authors from outside academia have a lower acceptance rate under double-blind peer review. Moreover, Blank indicates that double-blind peer review is overall harsher. This is not a surprise: It is easier to pull the trigger when the enemy wears a mask.

Meanwhile, there is at best a slight increase in the quality of the papers due to double-blind peer review (De Vries et al., 2009), everything else being equal. However, not everything is equal under double-blind peer review. What is the subtext? That somehow, the research paper is a standalone artifact, an anonymous, standardized piece of LEGO. That it should not be viewed as part of a stream of papers produced by an author. It sends a signal that an original research program is a bad idea. Researchers should be interchangeable. And to assess them, we might as well count the number of their papers since these papers are standard artifacts anyhow.

But that is counter-productive! Research papers are often only interesting when put in a greater context. It is only when you align a series of papers, often from the same authors, that you start seeing a story develop. Or not. Sometimes you only realize how poor someone’s work is by collecting their papers and noticing that nothing much is happening: just more of the same.

Researchers must make verifiable statements, but they should also try to be original and interesting. They should also be going somewhere. Research papers are not collections of facts: they represent a particular (hopefully correct) point of view. A researcher’s point of view should evolve, and how it does is interesting. Yet it is a lot easier to understand a point of view when you are allowed to know openly who the authors are.

Are there cliques and biases in science? Absolutely. But the best way to limit the biases is transparency, not more secrecy. Let the world know who rejected which paper and for what reasons.

Source: This blog post came about through an online exchange with Philippe Beaudoin.

References:

  • Blank, R.M., The effects of double-blind versus single-blind reviewing: Experimental evidence from the American Economic Review, The American Economic Review 81 (5), 1991.
  • De Vries, D.R. and Marschall, E.A. and Stein, R.A., Exploring the Peer Review Process: What is it, Does it Work, and Can it Be Improved? Fisheries 34 (6), 2009.

Update: Mark Wilson has another argument against double-blind peer review. What if you pick up good ideas from double-blind papers that are later rejected and remain unpublished? How do you acknowledge the contribution of the authors of the unpublished work?

Update 2: Patrick Lam points out that the programming languages community now uses a variant of double-blind review for some conferences (like PLDI or POPL, the top PL conferences) where the authors are asked to submit blinded papers, but the identities are revealed to the reviewers after they submit their first-draft reviews.

Further reading: I have a more comprehensive argument in a later blog post.

Ten things Computer Science tells us about bureaucrats

Originally, the term computer applied to human beings. These days, it is increasingly difficult to reliably distinguish machines from human beings: we require ever more challenging CAPTCHAs.

Machines are getting so good that I now prefer dealing with computers rather than with bureaucrats. I much prefer to pay my taxes electronically, for example. Bureaucrats are rarely updated, and they tend to require constant attention, like aging servers.

In any case, a bureaucracy is certainly an information processing “machine”. If each bureaucrat is a computer, then the bureaucracy is a computer network. What does Computer Science tell us about bureaucrats?

  1. Bureaucracies are subject to the halting problem. That is, when facing a new problem, it is impossible to know whether the bureaucracy will ever find a solution. Have you ever wondered when the meeting would end? It may never end.
  2. Brewer’s theorem tells us that you cannot have consistency, availability and partition tolerance in a bureaucracy. For example, accounting departments freeze everything once a year. This unavailability is required to achieve yearly consistency.
  3. Parallel computing is hard. You may think that splitting the work between ten bureaucrats would make it go ten times faster, but you are lucky if it goes faster at all.
  4. One of the cheapest ways to improve the speed of a bureaucracy is caching (see the sketch after this list). Keep track of what worked in the past. Keep your old forms and modify them instead of starting from scratch.
  5. Pipelining is another great trick to improve performance. Instead of having bureaucrats finish the entire processing before they pass on the result, have them pass on their completed work as they finish it. If you have a long chain of bureaucrats, you can drastically speed up the processing.
  6. Code refactoring often fails to improve efficiency. Correspondingly, shuffling a bureaucracy is just for show: it often fails to improve productivity.
  7. Bureaucratic processes spend 80% of their time with 20% of the bureaucrats. Optimize them out.
  8. Know your data structures: a good organizational chart should be a balanced tree.
  9. When an exception occurs, it goes up the ranks until a manager can handle it. If the CEO cannot handle it, then the whole organization will crash.
  10. The computational complexity is often determined by looking at the loops. That is where your code will spend most of its time. In a bureaucracy, most of the work is repetitive.
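
To make the caching point (item 4) concrete in code, here is a minimal Python sketch using functools.lru_cache; the “bureaucratic” request handler is made up, and the slow step runs only once per distinct request.

    import time
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def process_request(form_type):
        """Pretend bureaucratic processing: slow the first time, instant once cached."""
        time.sleep(1)  # the committee deliberates
        return f"approved template for {form_type}"

    process_request("travel claim")  # takes about a second
    process_request("travel claim")  # served from the cache, essentially free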

Update: Neal Lathia commented that neither bureaucrats nor computers understand humor.

Update: “This is a fairly well-known model, and no it isn’t computer science that is at the root of what you are noticing. It is early operations research. Taylorism in fact. There was a conscious effort in the 20s and 30s to bring Taylorist style assembly line/operations research thinking into white collar work, starting with organizing pools of typists, secretaries and other office workers the same way banks of machine tools were organized into flow shops and assembly lines. The exact same Taylorist time-and-motion study tools were applied (in fact, in the 30s this was so popular that women’s magazines carried articles about time-and-motion in the kitchen. Example: puzzles like “what’s the fastest way to toast 3 slices of bread on a pan that can hold 2 and toast 1 side at a time?”) Computer science itself was initially strongly influenced by shopfloor OR… that’s where metaphors like queues come from after all.” (Venkatesh Rao)

The Open Java API for OLAP is growing up!

Software is typically built using two types of programming languages. On the one hand, we have query languages (e.g., XQuery, SQL or MDX). On the other, we have the regular programming languages (C/C++, Java, Python, Ruby). A lot of effort is spent on the mismatch between these two programming styles. It remains a sore point in many projects.

Microsoft has been trying especially hard to resolve this mismatch. Their LINQ component allows you to use relational or XML data sources directly in your favorite language (e.g., C#).

Oracle has its own solution, the Oracle Java API. It allows you to query OLAP databases directly from Java, without SQL or MDX.

Unfortunately, these solutions are vendor-specific. With the rise of Open Source Business Intelligence, we seek open solutions which are shared and co-developed.

That is what the Open Java API for OLAP (olap4j) is. Anyone can build an OLAP engine and offer support for olap4j. You then get MDX support for free. Best of all, an application written against olap4j should work with any olap4j-compliant OLAP server, and that includes SQL Server Analysis Services and SAP Business Information Warehouse. And if you add Mondrian, you can get olap4j compliance out of any common relational database management system such as MySQL.

Led by the Linus Torvalds of OLAP (Julian Hyde), olap4j has finally reached version 1.0. A leading-edge feature I find interesting is that olap4j supports notifications, which should enable real-time OLAP applications. Whenever something changes at the database level, the OLAP server can notify its clients, effectively pushing a notification. One obvious application is in the financial industry, where data must be quickly updated.

Further reading: The press release for olap4j 1.0. Julian Hyde’s blog post on this topic. Luc Boudreau’s blog post. See also some of my older blog posts: JOLAP is dead, OLAP4J lives? (2008) and JOLAP versus the Oracle Java API (2006).

How information technology is really built

One of my favorite stories is how Greg Linden invented the famous Amazon recommender system after being forbidden to do so. The story is fantastic because what Greg did is contrary to everything textbooks say about good design. You just do not bypass the chain of command! How can you meet your budget and deadline?

In college, we often tell students a story about how software and systems are built. We gather requirements, we design the system, we get a budget, and then we run the project, eventually finishing within budget and while respecting the agreed upon time frame.

This tale makes a lot of sense to people who build bridges, apparently. It is not like they can afford to build three different bridge prototypes and then ask people to choose which one they prefer, after checking that all of them are structurally sound.

But software systems are different.

Consider Facebook. Everyone knows Facebook. It is a robust system. It serves 600 million users with only 2000 employees. Surely, they must be extremely careful. Maybe they are, but they do not build Facebook the way we might build bridges.

Facebook relies on distributed MySQL. But don’t expect third normal form. No joins anywhere in sight (Agarwal, 2008). No schema either: MySQL is used as a key-value store, in what is a total perversion of a relational database. Oh! And engineers are given direct access to the data: no DBA to protect the data from the evil and careless developers.
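
As an illustration of the key-value-over-relational pattern (not Facebook’s actual schema), here is a minimal Python sketch with sqlite3 standing in for MySQL: a single two-column table, no joins, and the application interpreting the values.

    import json
    import sqlite3

    # One two-column table used as a key-value store: no joins, no fixed schema.
    # (sqlite3 stands in for MySQL; the keys and fields are invented.)
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE objects (k TEXT PRIMARY KEY, v TEXT)")

    profile = {"name": "Alice", "friends": [42, 97], "city": "Montreal"}
    db.execute("INSERT INTO objects (k, v) VALUES (?, ?)", ("user:1", json.dumps(profile)))

    row = db.execute("SELECT v FROM objects WHERE k = ?", ("user:1",)).fetchone()
    print(json.loads(row[0])["friends"])  # the application, not the database, interprets the value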

Because they don’t appear to like formal conceptual methodologies, I expect you won’t find any entity-relationship (ER) diagrams at Facebook. But then, maybe you will find them in large Fortune 100 companies? After all, that is what people like myself have been teaching for years! Yet no ER diagrams were found in ten Fortune 100 companies (Brodie & Liu, 2010). And it is not because large companies have simple problems. The average Fortune 100 company has ten thousand information systems, of which 90% are relational. A typical relational database has between 100 and 200 tables with dozens of attributes per table.

In a very real way, we have entered a post-methodological era as far as the design of information systems is concerned (Avison and Fitzgerald, 2003). The emergence of the web has coincided with the death of the dominant methods based on analytic thought and led to the emergence of sensemaking as a primary paradigm.

This is no mere coincidence. At least, two factors have precipitated the fall of the methodologies designed in the seventies:

  • The rise of the sophisticated user. These days, the average user of an information system knows just as much about how to use the system as the employees of the information technology department. The gap between the experts and the users has narrowed. Oh! The gap is only apparent: few users even understand how the web works. But they know (or think they do) what it can do and how it can work. Yet we continue to see users as mere faceless objects for whom the systems are designed (Iivari, 2010). The result? 93% of accounts are never used in enterprise business intelligence systems (Meredith and O’Donnell, 2010). Users now expect to participate in the design of their tools. For example, Twitter is famous for its hashtags, which are used to mine trends, and which are the primary source of semantic metadata on Twitter. Yet did you know that they were invented by a random user, Chris Messina, in a modest tweet back in 2007? It is only after users started adopting hashtags that Twitter, the company, adopted them. Hence, Twitter is really a system which is co-designed by the users and the developers. If your design methodology cannot take this into account, it might be obsolete. Recognizing this, Facebook is not content to test new software in the abstract, using unit tests. In fact, code is tested during deployment for user reactions. If people react badly to an upgrade, the upgrade is pulled back. In some real way, engineers must please users, not merely satisfy formal requirements representing what someone thought the users might want.
  • The exploding number of computers. According to Gartner, Google had 1 million servers in 2007. Using cloud computing, any company (or any individual) can run software on thousands of servers worldwide without breaking the bank. Yet Brewer’s theorem says that, in practice, you cannot have both consistency and availability (Gilbert and Lynch, 2002). Can your design methodology deal with inconsistent data? Yet that is what many NoSQL database systems (such as Cassandra or MongoDB) offer. Maybe you think that you will just stick with strong consistency. JPMorgan tried it, and they ended up freezing $132 million and losing thousands of loan applications during a service outage (Monash, 2010). Most likely, you cannot afford to have strong consistency throughout without sacrificing availability. As they say, it is mathematically impossible. Brewer’s theorem is only the tip of the iceberg though: what works for one mainframe does not work for thousands of computers, any more than a human being is a mere collection of cells. There is a qualitative difference in how systems with thousands (or millions) of computers must be designed compared with a mainframe system. Problems like data integration are just not on your radar when you have a single database. We have moved from unicellular computers to information ecosystems. If your design methodology was conceived for mainframe computers, it is probably obsolete in 2011.

Building great systems is more art than science right now. The painter must create to understand: the true experts build systems, not diagrams. You learn all the time or you die trying. You innovate without permission or you become obsolete.

Credit: The mistakes and problems are mine, but I stole many good ideas from Antonio Badia.


You can assess trends by the status of the participants

I conjecture that, everything else being equal, the level of your education is inversely correlated with innovation.

  • At first, a new idea appears interesting, but it carries no prestige. And there are few financial incentives. Think homebrew computers before Apple. Or blogging in 2003. The people who first join are sociopaths (as per the Gervais principle) who often lack formal education. They recognize what this new idea might do. Yet they are unconcerned by their place in society. They may not even have a resume.
  • Once an idea picks up steam, incentives become more apparent. The community then expands to include people who are slightly more conformist. You will start to see more college degrees. Companies are built. Jobs are created. Think blogging in 2005, or Apple releasing its first personal computer.
  • After some time, it has become obvious that the idea is solid. Think Apple a few months after the Apple II was launched. Or blogging in 2010. People who greatly value prestige finally join up. You start to see prestigious degrees. There are now established practices and some level of expected conformity.

If my conjecture is at least partly true, then you can assess trends by the status of the participants. When you see new trends, such as homebrew 3D printers or open source electronics, how many MIT degrees do you see? Conversely, by the time the prestigious degrees are flocking in, maybe the real innovation is elsewhere?

Fun fact: In 2007, Morgan Stanley, Lehman Brothers, JP Morgan, and Goldman Sachs were among the top 10 employers of MIT graduates. (Source)

Acknowledgement: I was inspired to write this post by P. Bannister.

Social Web or Tempo Web?

Back in 2004, Tim O’Reilly observed that the Web had changed, and coined the term Web 2.0. This new Web is made of several layers which enable the Social Web. Wikipedia and Facebook are defining examples of the Social Web.

This sudden discovery of the Social Web feels wrong to me. In the early nineties, I was an active user of Bulletin Board Systems (BBS). While they were not the Web, or even part of the Internet, BBSes were clearly social media. You know the multi-user games people play on Facebook? We had that back in 1990. The graphics were poorer, obviously, but it was all about meeting people.

The barrier to entry keeps getting lower, to the point where even grandfathers are now on Facebook. But the Web has hardly been limited to an elite. Even BBSes were quite democratic: retired teachers would chat with young hackers all the time. It is the extremely low cost of computers and their ubiquity which makes the Social Web so widespread.

A much more interesting change has received less notice: the tempo of the Web is changing. Geocities made it easy for anyone to create a home page. But updating your home page was a slow process. In effect, our mental model of the Web was that of a library, and Web sites were books that could be updated from time to time. Eventually, we gave up on this model and decided to view the Web as a data stream. This realization changed everything.

The pace used to range from static web pages to flaming on posting boards. We have now expanded our temporal range. We can now communicate with high frequency in short bursts. Twitter is one extreme: it is akin to techno music. Facebook is somewhat slower, and more elaborate, maybe  like rock. Posting research articles is no music at all: it is akin to the rhythm of the Earth around the Sun. These tools don’t just differ on the frequency of the updates, but also on their volume, and on the length of the pauses.

Maybe we should try to understand the Web by analogy with music. How does the Web sound to you, today?

Further reading: See my blog post Is Map-Reduce obsolete? Also, be sure to check Venkatesh Rao’s blog. Rao has a new book which I will review in the future.

Know the biases of your operating system

Douglas Rushkoff wrote in Life Inc. that our society is nothing more than an operating system upon which we (as software) live:

The landscape on which we are living - the operating system on which we are now running our social software - was invented by people, sold to us as a better way of life, supported by myths, and ultimately allowed to develop into a self-sustaining reality.

In turn, operating systems are designed and maintained by engineers who make choices and have biases. He makes us realize that corporations, these virtual beings which live forever and are granted full privileges (including free speech), are not natural but are bona fide inventions. He also stresses that central currency, that is, the concept that the state must have a monopoly on the currency, is also an invention: why is it illegal to switch to an alternative currency in most countries?

We fail to see these things, or rather, we take them for granted because they are our operating system. Someone used to Microsoft Windows takes for granted that a desktop computer must behave like Microsoft Windows: they cannot suffer MacOS or Linux, at least initially, because it feels instinctively wrong. Anyone, like myself, who uses non-Microsoft operating systems in a predominantly Microsoft organization is constantly exposing hidden assumptions. “No, my document was not written using Microsoft Word.”

Science has an operating system as well. One of its building blocks is traditional peer review: you submit a research paper to an editor who picks a few respected colleagues who, in turn, advise him on whether your work is valid or not. By convention, any work which did not undergo this process is suspect. In Three myths about peer review, Michael Nielsen reminded us that traditional peer review is not a long tradition, and is not how correctness is assessed in science. Grigori Perelman, by choosing to forgo traditional peer review while publishing some of the most important mathematical work of our generation, could not have made Nielsen’s point stronger. Similarly, we believe that serious academics must publish books through a reputable publisher: self-publishing a book would be a sure sign that you are a crank. Years ago, scholars who had blogs were clowns (though this has changed). We also value greatly the training of new Ph.D. students, even when there is no evidence that the job market needs more doctorates. We value greatly large research grants, even when they take away great researchers from what they like best (doing research) and turn them into what they hate doing (managing research). But nobody is willing to question this system because the alternative is unthinkable. “You mean that I could use something beside Microsoft Windows?”

In my previous post, I challenged public education. Some people even went so far as to admit that my post felt wrong. I suspect that this feeling is not unlike the feeling one gets when switching from Windows to Linux. “Where is Internet Explorer?”

Several people cannot imagine that you can become smart without a formal education which includes at least a high school diploma. It is not that the counter-examples are missing (there are plenty: Bobby Fischer, Walt Disney, James Bach and Richard Branson). It is simply hard to imagine that you could do away with brick-and-mortar schools and still have scholarship and intelligence. Similarly, we cannot imagine a world without corporations or without central currency, or science without formal peer review.

Challenging preconceived notions is difficult because your feelings will betray you. Radically new ideas feel wrong. The cure is to try to remember what it felt like when you were first exposed to these ideas. On this note, Andre Vellino pointed me to Disciplined Minds, a book so controversial that it got its author fired! It reminded me of my feelings as a student about exams, grades and teachers:

  • Exams and grades appear neutral: on the face of it, they are merit-based challenges. In fact, they are really tests of conformity. To get good grades, you must organize much of your life around what others expect you to do. I cannot think of a good reason why most people would care about the integral of x³ cos(x). Why do we require such technical knowledge of so many people? The reason is simple: if you can set aside all other interests to learn calculus just because you are told to do so, then you are good at learning what you are told to learn. If you refuse to hand in an assignment because you think it is stupid, you will be punished. It does not matter if you use the free time to be even more productive on some valid scholarship.
  • Teachers appear unpolitical at a first glance. They teach commonly accepted facts to students. However, teachers are political because they never challenge the curriculum, and when they do, they are frequently fired. As a kid, I refused to learn my multiplication tables. I was repeatedly chastised for failing to memorize them: instead, I would design algorithms to quickly deduce the correct answer  without rote memorization. This was called cheating, and my teachers would wait for the small pause and then interrupt me: “you are cheating again, you have to memorize”. I still have not memorized my multiplication tables. Why is it that no teacher ever opposed the requirement that we memorize multiplication tables? Because their job involves teaching obedience.

So, the same way corporations and central currencies are not neutral, public education is not neutral. Kids are naturally curious. If you leave them alone, they will learn eagerly. Alas, they will also refuse to learn what you are telling them to learn. This is precisely what schools are meant to break.

Public education historically helped class mobility. Publicly funded scholars have also greatly contributed to our advancement. However, as the world is changing through increased automation and globalization, we may need to drastically shift gears. Stephen Downes answered my previous post with a pointer to his essay Five key questions. In this essay, Downes offers a foundational principle for a renewed public education:

It represents a change of outlook from one where education is an essential service that must be provided to all persons, to one where the role of the public provider is overwhelmingly one of support and recognition for an individual’s own educational attainment. It represents an end to a centrally-defined determination of how an education can be obtained, to one that offers choices, resources and assessment.

Downes challenges conformity as a core value for education. Quite the opposite: he calls into question the idea that education should be “managed”. I believe he would agree that one of the great tragedies of public education is the centrally mandated curriculum. This was ideal preparation for the slow-moving corporations of the sixties and seventies. In 2011, why punish a kid who decides to spend five years building a robot?

Go ask your kid to name a planet. If the answer is Jupiter, Mars or Earth, be worried. In the twenty-first century, we need kids who answer Eris or Makemake.

Further reading: Brian Martin, Review of Jeff Schmidt’s Disciplined Minds: A Critical Look at Salaried Professionals and the Soul-Battering System that Shapes their Lives, Radical Teacher, No. 62, 2001, pp. 40-43.

Governments should stop funding higher education?

Everyone knows that publicly funded education is good. Right? Wait! Why?

 

  • “Schools have substantial non-financial benefits.” This argument assumes that people who forgo schooling are uneducated. It is weaker in the Wikipedia era. Kids are naturally curious, and they now have access to unlimited and inexpensive information. And this same argument could be used to justify free Internet for all, which would be considerably cheaper than free schools.
  • “If it cost hundreds of thousands of dollars to complete a Ph.D., nobody would do it.” Assuming that there is any demand at all for Ph.D.s, people with a Ph.D. would receive much higher salaries if fewer people had them. These higher salaries would entice more students to complete a Ph.D.
  • “Poor people are unable to get an education without government funding.” Students who can borrow the money ought to be willing to do so if the expected return on their education far exceeds the interest rate charged by the bank or a private investor (see the back-of-the-envelope sketch after this list). True: some students who show little promise, and have few resources, would be unable to get a college degree. Is that fair? Is it fair that only the most promising engineers get a job with Google? Is it fair that my wife is more beautiful than yours? Is it fair that kids go hungry in the richest country in the world?
  • “To make our corporations more competitive.” By funding schools, governments entice more students to study, which artificially boosts the supply of graduates. In turn, this lowers the salaries of these same graduates. Corporations benefit from these lower wages while contributing only a small fraction of the cost. Therefore, public schools amount to subsidizing corporations. Any country that stopped funding higher education, without a corresponding immigration policy, would see the wages of college graduates rise. This would be unfavorable to corporations which rely on college degrees to select employees. But corporations do not have to hire college graduates. Most corporate jobs are a form of paper pushing. They could easily replace college degrees with less expensive certifications.
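
As a back-of-the-envelope illustration of the borrowing argument above (all numbers below are invented for the sake of the example):

```python
# Back-of-the-envelope sketch: does borrowing for a degree make sense
# without a subsidy? All numbers are invented for illustration.

loan = 40_000       # hypothetical cost of the degree, borrowed up front
rate = 0.06         # hypothetical annual interest rate on the loan
premium = 8_000     # hypothetical extra salary per year due to the degree

annual_interest = loan * rate   # 2,400 dollars per year
print(f"salary premium per year: ${premium:,}")
print(f"interest per year:       ${annual_interest:,.0f}")

# As long as the yearly premium comfortably exceeds the yearly interest,
# a rational student can finance the degree privately; the case for a
# subsidy has to rest on something else.
```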

So, why should the public fund schools?

Further reading: How and Why Government, Universities, and Industry Create Domestic Labor Shortages of Scientists and High-Tech Workers, A World Without Public School, What If Public Schools Were Abolished? and Self-interest and public funding of education.

Disclosure: I work for a public university. My kids attend a public school.

Breaking news: HTML+CSS is Turing complete

A programming language is Turing complete if it is equivalent to a Turing machine. In practice, this means that any algorithm can be implemented in it. Most programming languages are Turing complete, including SQL, ECMAScript, Java, C, and so on.

HTML5 + CSS3 is also Turing complete because it can be used to program a Rule 110 automaton.
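
For readers who have never met it, here is a minimal sketch, in Python, of the Rule 110 update rule itself; the HTML+CSS demonstration encodes this same rule, though the details of that encoding are beyond this note:

```python
# Rule 110: the next state of a cell depends only on its left neighbour,
# itself, and its right neighbour. The rule number 110 (binary 01101110)
# is itself the lookup table for the eight possible neighbourhoods.

def rule110_step(cells):
    n = len(cells)
    return [
        (110 >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell (on a circular row) and watch the
# characteristic triangles appear.
row = [0] * 31
row[-1] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = rule110_step(row)
```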

Source: Thanks to Jakob Voß for the pointer.

Jobless recovery, the Luddite fallacy and the 4-hour workweek

The Luddite fallacy is the belief that innovation destroys jobs. It is considered a fallacy because increased productivity causes prices to fall, which boosts demand, which in turn creates new jobs. There might be, locally or temporarily, economic growth without job creation, but economists believe that job growth always follows. In effect, calling it a fallacy amounts to the belief that technological innovation is always good economically.

This is a central dogma among economists. Why? Because jobless growth would destroy mass-market economies. Indeed, jobs are the primary means by which money flows back from corporations to consumers. Without jobs, consumers leave the market, effectively destroying it. Should the Luddite fallacy fail, that is, if labour-saving technologies did increase unemployment, we would be pushed into some form of communism where governments would need to either artificially create jobs or provide direct financial support to the population.

And the Luddite fallacy does not need to fail entirely for problems to arise. Imagine that, as we innovate through better machine learning and robotics, job creation falls slightly short of job destruction: for every 100 jobs destroyed, only 98 new jobs are created, a net loss of roughly 2% of all jobs every year. Such a loss compounds: after a decade, nearly a fifth of all jobs would be gone (see the toy calculation below). We would be left with a sizeable fraction of the population that has left the job market entirely, and another fraction that is either unemployed or underemployed.
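
Here is the toy calculation, assuming the net 2% loss applies to the entire stock of jobs every year:

```python
# Toy calculation: assume the total stock of jobs shrinks by a net 2% a year.
jobs = 100.0
for year in range(10):
    jobs *= 0.98
print(f"after 10 years, {jobs:.1f}% of the jobs remain")  # about 81.7%
```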

As bank tellers, cashiers, secretaries, and middle managers are being replaced by software and robotics, the conventional wisdom is that new jobs are created. Maybe people become robotics technicians or solar panel experts. However, how many futuristic jobs are really created? Many of the jobs created in new companies are the same jobs that disappeared: administrative assistant, accountant, manager, engineer, and so on. Certainly, Facebook is a futuristic service. But while it serves hundreds of millions of users, it only has 2000 employees.

In his latest TED talk, Bill Gates hinted that the future looks bleak for governments. Health costs are rising faster than government revenues, even if you assume that the economy will do well. Thus, it is entirely possible that we will need to drastically reduce the cost of healthcare and education. Automation might be the only path forward which would not sacrifice our quality of life. Will we be closing schools and sending the kids to Khan Academy? We may have to automate much of the medical testing and diagnosis. Robotics might be needed to help support an aging population. All these great innovations might be necessary, and they may also lead to further job destruction (or the equivalent lack of job creation).

To make things worse, many of the recent gains from technology are nonmonetary. They fall outside the realm of economics. For example, while journalists have been losing their jobs, I have had access to better content than ever, through blogs and e-books, mostly for free or for little money. Recently, the giant free porn site RedTube has been killing much of the online porn industry: it offers unlimited porn for free.

Should the Luddite fallacy fail, there is an alternative to communism: we could enter the age of the 4-hour workweek. We have been investing our productivity gains into more production, which has led to more jobs. But what if the cycle breaks at some point in the future? The solution might be to redefine “work”. Many bureaucrats only create work for themselves and others. Most of accounting, for example, could be automated. Researchers famously publish papers only for the sake of publishing papers. Instead, we should call on accountants or researchers for a few hours, here and there, to handle the interesting and difficult problems. Employers, instead of buying 40 hours of someone’s time, would buy the option of calling on the individual in times of need. We would all work far fewer hours, we would preserve market economies, and we would be happier. To make this possible, we need a deep cultural change. How do you feel about people who work only a few hours a week, or a month? My impression is that we look down on anyone who works less than 30 hours a week. Presumably, we inherited this prejudice from the industrial age. We need to let go.

Further reading: The Lights in the Tunnel by Martin Ford.

Innovation and model boundaries

When designing an information system, a piece of software or a law, experts work from a model. This model must have boundaries. When these boundaries are violated, life may become difficult:

  • After over two years of lockout, the journalists of the largest Montreal newspaper decided to bow to their bosses and accept whatever they could. What happened? In Quebec, we have a strong anti-scab law. In theory, therefore, you cannot publish a newspaper without your newsroom. Except, of course, that the law assumes work happens in the office. In practice, anyone can work for a newspaper from home, sending documents by email. The boundary of the work model in this case is the (physical) office. Entrepreneurs are smart enough to realize that they do not need conventional journalists working in a newsroom. Alas, consumers are now realizing that they do not need the newspapers in the first place. We have broken models all the way up.
  • Email was designed to work within a small and non-commercial network. It was never meant for the world we live in. And I attribute much of the rise of social networks (e.g., Facebook) to the need to recreate these small non-commercial networks. The irony, of course, is that people who are investing in Facebook right now tend to forget that the boundaries of the Facebook model have much to do with avoiding spam.
  • Entity-relationship models were designed in an era when corporations had few databases, which were relatively small and simple. Moreover, you could isolate the various databases: there was no need for your databases to ever be compatible with the databases of another company. Today, many corporations have databases with hundreds of attributes, all designed by different experts. The semantics can vary widely within the corporation. Worse, you must often reconcile the semantics across several companies inside supply chains (see the toy example after this list). The net result is that an entity-relationship diagram representing what is actually going on would never fit in anyone’s head. We have done much work on schema evolution and data integration, but we fall short of solving these problems without pain.
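
Here is the toy example, with invented schemas, of why reconciling semantics across companies is more than renaming columns:

```python
# Toy example with invented schemas: the "same" customer is modeled
# differently by two companies in a supply chain, and the mapping between
# the two embeds semantic decisions, not mere renaming.

supplier_customer = {"cust_name": "Jane Q. Public", "country": "CA", "net_terms": 30}

COUNTRY_NAMES = {"CA": "Canada", "US": "United States"}

def to_retailer_schema(rec):
    """Map the supplier's customer record onto the retailer's schema."""
    parts = rec["cust_name"].split()
    return {
        "first": parts[0],
        "last": parts[-1],                       # the middle initial is silently lost
        "nation": COUNTRY_NAMES[rec["country"]], # code vs. full name: a semantic choice
        "payment_due_days": rec["net_terms"],
    }

print(to_retailer_schema(supplier_customer))
# {'first': 'Jane', 'last': 'Public', 'nation': 'Canada', 'payment_due_days': 30}
```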

The boundaries of our models are constantly being breached. These breaches are like broken water pipes: they represent vast opportunities. A new CEO who wants to turn a company around should look for breached boundaries in the company’s model. Competitors should look for these broken pipes as a way to identify soft targets.

I believe this calls for an entirely new field of expertise. Some people should become experts at recognizing broken models.

Here is my favorite broken model: e-commerce is pathetic and inefficient because it is based on the pretense that we need stores. Just as we do not need newspapers (only news), we do not need stores (only products). In the nineties, people were talking about software agents that would go out and shop for us. Why do I have to do so much manual labor to find the best product and the best price? Sometimes, it almost looks like sabotage. Consider this scenario. You go see your doctor. Instead of handing you a prescription that you must present at some drugstore, the doctor just tells you: “the medicine will be delivered to your house tomorrow morning, and it will be automatically debited from your insurer’s account”. See? No store. Ironically, the future of e-commerce might be to do away with stores entirely. Oh! And it won’t be invented by people running e-commerce stores.