Is software a neutral agent?

We face an embarrassing amount of information, but when we feel overwhelmed, as Clay Shirky said, “It’s not information overload. It’s filter failure.” Unavoidably, we rely heavily on recommender systems as filters. Email clients increasingly help you differentiate important emails from routine ones, and they regularly hide from your sight whatever qualifies as junk. Netflix and YouTube work hard so that you are mostly presented with content you want to watch.

Unsurprisingly, YouTube, Facebook, Netflix, Amazon and most other big Internet players have invested heavily in their recommender systems. Though it is a vast field with many possible techniques, one key ingredient is collaborative filtering, a term coined in 1992 by David Goldberg (now at eBay, but then at Xerox PARC). It became widely known, in part, through Greg Linden’s work at Amazon on item-to-item collaborative filtering (“people who liked this book also liked these other books”). The general principle underlying collaborative filtering is that if people who are like you like something, then you are likely to like it too. We should not be misled into thinking that recommender systems are sets of rules entered by experts. They are instances of machine learning: the software learns to predict us by watching us.
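
To make the idea concrete, here is a minimal item-to-item sketch in C (a toy illustration of mine, with made-up data, not Amazon’s actual algorithm): two items are considered similar when the same users liked them, and we recommend to a user the item most similar to what he already liked.

/* Toy item-to-item collaborative filtering: items are similar when the
   same users liked them; recommend the unseen item most similar to the
   items the target user liked. Illustration only, with made-up data. */
#include <math.h>
#include <stdio.h>

#define USERS 4
#define ITEMS 5

/* liked[u][i] = 1 if user u liked item i */
int liked[USERS][ITEMS] = {
    {1, 1, 0, 0, 1},
    {1, 1, 0, 1, 0},
    {0, 1, 1, 0, 0},
    {1, 0, 0, 1, 1},
};

/* cosine similarity between two item columns */
double item_similarity(int a, int b) {
  double dot = 0, na = 0, nb = 0;
  for (int u = 0; u < USERS; u++) {
    dot += liked[u][a] * liked[u][b];
    na += liked[u][a];
    nb += liked[u][b];
  }
  return (na && nb) ? dot / (sqrt(na) * sqrt(nb)) : 0;
}

int main(void) {
  int user = 0; /* recommend something new to user 0 */
  int best = -1;
  double best_score = -1;
  for (int i = 0; i < ITEMS; i++) {
    if (liked[user][i]) continue; /* skip items already liked */
    double score = 0;
    for (int j = 0; j < ITEMS; j++) /* sum similarity to the liked items */
      if (liked[user][j]) score += item_similarity(i, j);
    if (score > best_score) { best_score = score; best = i; }
  }
  printf("recommend item %d to user %d (score %.2f)\n", best, user, best_score);
  return 0;
}

Real systems handle millions of items and precompute the similarities, but the principle is the same: no expert rules, just patterns in what users did.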

But this also means that these filters, these algorithms, are in part a reflection of what we are and how we act. And these algorithms know us better than we may think. That is true even if you share nothing about yourself. For example, Jernigan and Mistree showed in 2009 that, based solely on the profiles of the people who declared themselves to be your friends, an algorithm can determine your sexual orientation. Using the minute traces that you unavoidably leave online, software can determine your sexual orientation, ethnicity, religious and political views, your age, and your gender. There is an entire data-science industry dedicated to tracking what we buy and what we watch… Whether they do it directly or not, intentionally or not, the recommender systems at YouTube, Facebook, Netflix and Amazon take your personal and private attributes into account when selecting content for you.

We should not be surprised that we are tracked so easily. The overwhelming majority of the Internet players are effectively marketing agents, paid to provide you with relevant content. It is their core business to track you.

Polls are also a reflection of our opinions, yet it has long been known that they influence the vote, even when pollsters are as impartial as they can be. Recommender systems are likewise not neutral: they affect our behavior. For example, some researchers have observed that recommender systems tend to favor blockbusters over the long tail. This can be true even as, at the individual level, the system makes you discover new content… seemingly increasing your reach… while leaving the small content producers out in the cold.

Some algorithms might be judged unfair or “biased”. For example, it has been shown that if you self-identify as a woman, you might see fewer online ads for high-paying jobs than if you are a man. This could be explained, perhaps, by men being more likely than women to click on ads for higher-paying jobs. If the algorithm seeks to maximize content that it believes is interesting to you based on your recorded behavior, then there is no need to imagine a nefarious ad agency or employer.
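
To see how this can happen without any discriminatory rule, here is a toy simulation of mine (the click rates are made up): a selector that only maximizes clicks learns per-group click rates for two ads and then shows each group whichever ad it estimates to be more clickable. With these numbers, a small behavioral gap typically turns into a large gap in who ever sees the high-paying-job ad.

/* Toy click-maximizing ad selector with no notion of fairness.
   Two ads, two user groups; group A clicks the high-paying-job ad
   slightly more often than group B. The selector learns per-group
   click rates and mostly shows each group its "best" ad. */
#include <stdio.h>
#include <stdlib.h>

#define ADS 2    /* 0 = high-paying-job ad, 1 = generic ad */
#define GROUPS 2 /* 0 = group A, 1 = group B */

/* made-up true click-through rates */
double true_ctr[GROUPS][ADS] = {{0.060, 0.050},   /* group A */
                                {0.050, 0.055}};  /* group B */

int main(void) {
  srand(42);
  long shows[GROUPS][ADS] = {{0}}, clicks[GROUPS][ADS] = {{0}};
  for (long t = 0; t < 1000000; t++) {
    int g = rand() % GROUPS; /* a user from a random group arrives */
    int ad;
    if (rand() % 100 < 5) { /* 5% of the time, explore at random */
      ad = rand() % ADS;
    } else {                /* otherwise, pick the ad with the best estimated rate */
      double est0 = shows[g][0] ? (double)clicks[g][0] / shows[g][0] : 0.5;
      double est1 = shows[g][1] ? (double)clicks[g][1] / shows[g][1] : 0.5;
      ad = (est0 >= est1) ? 0 : 1;
    }
    shows[g][ad]++;
    if ((double)rand() / RAND_MAX < true_ctr[g][ad]) clicks[g][ad]++;
  }
  for (int g = 0; g < GROUPS; g++)
    printf("group %c saw the high-paying-job ad %.0f%% of the time\n",
           'A' + g, 100.0 * shows[g][0] / (shows[g][0] + shows[g][1]));
  return 0;
}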

In any case, we have to accept software as an active agent that helps shape our views and our consumption rather than a mere passive tool. And that has to be true even when the programmers are as impartial as they can be. Once we set aside the view of software as an impartial object, we can no longer remain oblivious to its effect on our behavior. At the same time, it may become increasingly difficult to tweak this software, even for its authors, as it grows in sophistication.

How do you check how the algorithms work? The software code is massive, ever-changing, on remote servers, and very sophisticated. For example, the YouTube recommender system relies on deep learning, the same technique that allowed Google to defeat the world champion at Go. It is a complex collection of weights that loosely mimics our own brain. Even the best engineers might struggle to verify that the algorithm behaves as it should in all cases. And government agencies simply cannot read the code as if it were a recipe, assuming that they can even legally access it. But can governments at least measure the results, or enable the providers to give verifiable measures? Of course, if governments have complete access to our data, they can, but is that what we want?

The Canadian government has tried to regulate what kind of personal data companies can store and how they can store it (PIPEDA). In a globalized world, such laws are hard to enforce, but even if they could be enforced, would they be effective? Recall that from minute traces, software can tell more about you than you might think… and, ultimately, people do want to receive personalized services. We do want Netflix to know which movies we really like.

Evidently, we cannot monitor Netflix the same way we monitor a TV station. We can study the news coverage that newspapers and TV shows provide, but what can we say about how Facebook paints the world for us?

We must realize that even if there is no conspiracy to change our views and behavior, software, even brutally boring statistics-based software, is having this effect. And the effect is going to get ever stronger and harder to comprehend.

Further reading:

  • Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing Technologies, 2015(1), 92-112.
  • Goldberg, D., Nichols, D., Oki, B. M., & Terry, D. (1992). Using collaborative filtering to weave an information tapestry. Communications of the ACM, 35(12), 61-70.
  • Fleder, D., & Hosanagar, K. (2009). Blockbuster culture’s next rise or fall: The impact of recommender systems on sales diversity. Management Science, 55(5), 697-712.
  • Jernigan, C., & Mistree, B. F. (2009). Gaydar: Facebook friendships expose sexual orientation. First Monday, 14(10).
  • Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110(15), 5802-5805.
  • Linden, G., Smith, B., & York, J. (2003). Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing, 7(1), 76-80.
  • Statt, N. (2016). YouTube redesigns its mobile apps with improved recommendations using ‘deep neural networks’. April 26, 2016.
  • Tutt, A. (2016). An FDA for Algorithms. March 15, 2016.

We know a lot less than we think, especially about the future.

The inventors of the airplane, the Wright brothers, had little formal education (3 and 4 years of high school respectively). They were not engineers. They were not scientists. They ran a bicycle repair shop.

At the time of their invention, there was quite a bit of doubt as to whether airplanes were possible. It is hard to imagine how people could doubt the possibility of an airplane, but many did slightly over a century ago.

Lord Kelvin famously said that “heavier-than-air flying machines are impossible” back in 1895.

But that is not all. The American government had funded an illustrious physics professor, Samuel Langley, with the equivalent of millions of dollars in today’s currency so that he would build an airplane. The man had written the textbook on aeronautics of his time.

Langley failed miserably. This led the New York Times to publish this prediction:

flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanicians in from one million to ten million years

It is likely at this point that many experts would have agreed with the New York Times. Flying was just not possible. We had given large sums to the best and smartest people. They could not make a dent in the problem. We had the greatest scientists in the world stating openly that flying was flat out impossible. Not just improbable, but impossible.

Yet only a few days later, with no government grant, no prestigious degree, no credentials whatsoever, the Wright brothers flew a heavier-than-air machine. That was 1903.

In the First World War, which started in 1914, barely a decade later, both sides used war planes.

The story is worse than I make it sound because even after the Wright brothers did fly… it took years for the Americans to notice. That is, people did not immediately recognize the significance of what the Wright brothers demonstrated.

You might think we are smarter now and that such silliness would not happen again.

Here is what Steve Ballmer, Microsoft’s CEO, said about the iPhone when it came out:

it [the iPhone] doesn’t appeal to business customers because it doesn’t have a keyboard, which makes it not a very good email machine. Right now we’re selling millions and millions and millions of phones a year, Apple is selling zero phones a year.

That was 2007. Today Apple sells about 60 million iPhones per quarter. How many phones does Microsoft sell? How many Microsoft phones have you seen lately?

To be fair, it is true that most new ideas fail. We get a new cure for Alzheimer’s every week. The fact that we get a new one every week is a pretty good indication that it is all hype. But the real lesson is not that we cannot break through hard problems. The true lesson is that we know a lot less than we think, especially about the future.

Pessimism is the easy way out. Asked about any new idea, I can simply say that it is junk. And I will be right 99% of the time. We obsess about not being wrong when, in fact, if you are not regularly wrong, you are simply not trying hard enough. What matters is that you are somehow able to see the important things as they are happening. Pessimists tend to miss everything but the catastrophes.

How will you die? Cancer, Alzheimer’s, Stroke?

Before the 1950s, many of us suffered from poliomyelitis and too many ended up crippled. Then we developed a vaccine and all but eradicated the disease. Before the Second World War, many people, even the richest, could die of a simple foot infection. Then we mass-produced antibiotics and got rid of the problem.

I have stated that it is basically a matter of time before we get the diseases of old age (cancer, stroke, dementia…) under control. It is impossible to tell when it will happen. Could be a couple of decades, could be 45 years, could be a century or a bit more. As a precaution, you should never trust anyone who says he can predict the future more than a couple of years in advance. However, progress that is not impossible in principle tends to reliably happen, on its own schedule.

Whenever we do get the diseases of aging under control, we will end up with drastically extended healthspans. Simply put, most of us end up sick or dead because of the diseases of old age. Without these diseases, we would remain healthy for much longer.

It comes down to the difference between having airplanes and not having them. Having electricity or not having it. Having the Internet or not having it. These are drastic differences.

Stating that the diseases of aging will come under control at some point in our future should not be controversial. And you would hope that people would see this as a positive outcome.

Not so.

The prospect that we may finally defeat aging is either rejected as being too improbable, or, more commonly, is rejected as being undesirable. Nick Bostrom even wrote a fable to illustrate how people commonly react.

The “improbable” part can always be argued. Anything that has never been done can always turn out to be much harder to achieve than we think. However, some progress is evident. Jimmy Carter, a 91-year-old man, was recently “cured” of a brain tumor. Not long ago, such feats were unthinkable. So it becomes increasingly difficult to argue that a few decades of research cannot result in substantial medical progress.

So we must accept, at least in principle, that the diseases of aging may “soon” be brought under control, where by “soon” I mean “this century”. This would unavoidably extend human life.

Recently, one of my readers had this very typical reaction:

As for extending human life, I’m not for it.

If you tend to agree with my reader, please think it through.

Aging does not, by itself, kill us. What kills us are the diseases that it brings, such as stroke, dementia and cancer. So if you are opposed to people living healthier, longer lives, then you are effectively in favor of some of these diseases. I, for one, would rather we get rid of stroke, cancer and dementia. I do not want to see these diseases in my family.

Medical research is a tiny fraction of our total spending. Medical spending is overwhelmingly directed toward palliative care. To put it bluntly, we spend billions, even trillions, caring for people who are soon going to die of Alzheimer’s or cancer. And this is quite aside from the terrible loss of productivity and experience caused by these diseases.

If we could get rid of these diseases, we would be enormously richer… we would spend much less on medical care and have people who are a lot more productive. The costs of aging are truly enormous, and they are rising right now. Keeping people healthy is a lot cheaper than keeping sick people from dying.

Moreover, increased lifespans in modern human beings are inexorably linked with lower fertility and smaller populations. Lifespans are short in Africa and long in Europe… yet it is Africa that is going to suffer from overpopulation.

As people grow more confident that they will live long lives, they have fewer children and they have them later. Long-lived individuals tend to contribute more and, relatively speaking, to require less support.

If you are in favor of short human lifespans through aging, then you must be opposed to medical research on the diseases of aging such as dementia, stroke, and cancer. You should, in fact, oppose anything but palliative care since curing dementia or cancer is akin to extending lifespan. You should also welcome news that members of your family suffer from cancer, Parkinson’s and Alzheimer’s. They will soon leave their place and stop selfishly using our resources. Their diseases should be cause for celebration.

Of course, few people celebrate when they learn that they suffer from Alzheimer’s. Yet this disease is all too natural. Death is natural. So are infectious diseases. We could reject antibiotics because dying of an infection is “natural”. Of course, we do not.

Others object that defeating the diseases of aging (cancer, Alzheimer’s, stroke…) means that we would become immortal, and that this is clearly troubling and maybe unsustainable. But this fear is unfounded. Short of rebuilding our bodies with nanotechnology, the best we could probably do is make it so that people of all chronological ages have the mortality rate they had when they were thirty. That is a very ambitious goal that I doubt we have any chance of reaching in this century. And yet, people in their thirties die all the time. They simply do not tend to die of aging.

Yet others fall prey to the Tithonus error and believe that if we somehow get the diseases of aging under control, we will remain alive while growing increasingly frail and vulnerable. But, of course, being vulnerable is the gateway to the diseases of old age. You cannot control the diseases of aging without making sure that people remain relatively strong.

Others fear that only the few will be able to afford medicine to keep the diseases of old age at bay… It is sensible to ask whether some people could have earlier access to technology, but from an ethical point of view, one should start with the observation that the poorest among us are the hardest hit by the diseases of aging. Bill Gates won’t be left alone to suffer in a dirty room with minimal care. Healthy poor people are immensely richer than sick “poor” people. Like vaccines, therapies to control the diseases of old age are likely to be viewed as public goods. Once more: controlling the diseases of old age will make us massively richer.

I am sure that, initially, some people expressed concerns regarding the use of antibiotics. When the Internet came of age, many people wrote long essays against it. Who would want to read newspapers on a screen? Who needs this expensive network? Now that we are starting to think about getting the diseases of aging under control, people object. But let me assure you that when it comes down to it, if there are cures for the diseases of aging, and you are old and sick, you will almost certainly accept the cure no matter what you are saying now. And the world will be better for it.

Please, let us just say no to dementia, stroke and cancer. They are monsters.

Further reading: Nick Bostrom, The Fable of the Dragon-Tyrant, Journal of Medical Ethics, 2005.

The powerful hacker culture

In my post the hacker culture is winning, I observed that the subculture developed in the software industry is infecting the wider world. One such visible culture shift is the concept of “version update”. In the industrial era, companies would design a phone, produce it and ship it. There might be a new type of phone the following year, but whatever you bought is what you got. In some sense, both politically and economically, the industrial era was inspired by the military model. “You have your orders!”

Yet, recently, a car company, Tesla, released an update so that all its existing cars acquired new functions (self-driving on highways). You simply could not even have imagined such an update in the industrial era.

It is an example of what I called innovation without permission, a feature of the hacker culture. It is an expression of the core defining hacker characteristics: playfulness and irreverence. Hackers will install Linux on the latest PlayStation even if Sony forbids it and tries to make it impossible. Why would any team invest months of work in such a futile project?

What is unique to hackers is that displays of expertise have surpassed mere functionality and taken on a life of their own. Though my colleagues in the “Arts” often roll their eyes when I point it out, hackers are the true subversive artists of the post-industrial era.

The hacker culture has proven its strength. We got Chelsea Manning’s and Julian Assange’s WikiLeaks, the somewhat scary underground work of Anonymous, Edward Snowden’s leaks, the Panama Papers and so forth. Aaron Swartz scared the establishment so much that they sought to put him behind bars for decades merely because he downloaded academic articles.

You might object that many of these high-profile cases ended with the hackers being exiled or taken down… but I think it is fair to say that people like Aaron Swartz won the culture war. As a whole, more people, not fewer, are siding with the hackers. Regarding the Panama Papers, there were some feeble attempts to depict the leak as a privacy violation, but that argument no longer carries weight. TV shows increasingly depict hackers as powerful (and often righteous) people (e.g., House of Cards, The Good Wife, and Homeland).

Who do you think is gaining ground?

What makes the hacker culture strong?

  • Hackers control the tools. Google, Microsoft and Apple have powerful CEOs, but they need top-notch hackers to keep the smartphones running. Our entire culture is shaped by how these hackers think through our tools.

    The government might be building up fantastic cyberweapons, but what the Snowden incident proved is that this may only give more power to the hackers. You know who has access to all your emails? Software hackers.

    Our tools have come to reflect the hacker culture. They are more and more playful and irreverent. We now have CEOs posting on Twitter using 140 characters. No “sincerely yours”, no corporate logo.

  • Hackers are rich with time and resources. Most companies need hackers, but they can’t really tell what the best ones are up to. How do you think we ended up with Linux running most of our Internet infrastructure? It is not the result of central planning or a set of business decisions. It happened because hackers were toying with Linux while the boss wasn’t looking. When you have employees stacking crates, it is easy for an industrial-age boss to direct them. How do you direct extremely smart people who are typing on keyboards?

    Apparently, Linus Torvalds works in his bathrobe at home. He spends a lot of time swearing at other people on message boards. He can afford all of that because it is impossible to tell Linus what to do.

I don’t think it is a mere coincidence that powerful people are embracing the hacker culture. I could kid and point out that the true hackers may not represent many people, and they may not formally hold much wealth, but they metaphorically control the voting machines and hold all the incriminating pictures. But really, I think that smart people realize that the hacker culture might also be exactly what we need to prosper in the post-industrial era. The military approach is too crude. We don’t need more factories. We don’t need more tanks. But we sure can use smarter software. And that is ultimately where the hackers get their power: they put results into your hands.

No more leaks with sanitize flags in gcc and clang

If you are programming in C and C++, you are probably wasting at least some of your time hunting down memory problems. Maybe you allocated memory and forgot to free it later.

A whole industry of tools has been built to help us trace and solve these problems. On Linux and macOS, the state of the art has been Valgrind. Build your code as usual, then run it under Valgrind, and memory problems should be identified.
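
For instance, assuming your program compiles to an executable called myprogram (a stand-in name), a typical check looks like this:

valgrind --leak-check=full ./myprogram

Valgrind then reports leaked blocks and invalid memory accesses, along with stack traces.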

Tools are nice but a separate check breaks your workflow. If you are using recent versions of the GCC and clang compilers, there is a better option: sanitize flags.

Suppose you have the following C program:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char** argv)
{
   char * buffer = malloc(1024);
   sprintf(buffer, "%d", argc);
   printf("%s",buffer);
}

Save this file as s.c. The program simply prints out the number of command-line arguments (argc, a count that includes the program name itself). Notice the call to malloc that allocates a kilobyte of memory. There is no accompanying call to free, so the kilobyte of memory is “lost” and only recovered when the program ends.

Let us compile the program with the appropriate sanitize flags (-fsanitize=address -fno-omit-frame-pointer):

gcc -ggdb -o s s.c -fsanitize=address -fno-omit-frame-pointer

When you run the program, you get the following:

$ ./s

=================================================================
==3911==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 1024 byte(s) in 1 object(s) allocated from:
    #0 0x7f55516b644a in malloc (/usr/lib/x86_64-linux-gnu/libasan.so.2+0x9444a)
    #1 0x40084e in main /home/dlemire/tmp/s.c:6
    #2 0x7f555127eec4 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21ec4)

SUMMARY: AddressSanitizer: 1024 byte(s) leaked in 1 allocation(s).

Notice how it narrows down to the line of code where the memory leak came from?

It gets even nicer: the return value of the command will be non-zero, meaning that if this code were run as part of software testing, you could automagically flag the code as being buggy.
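
For instance, a test script could rely on something along these lines (a sketch; the exact non-zero exit code depends on the sanitizer’s settings):

./s > /dev/null || echo "sanitizer reported a problem"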

While you are at it, you can add other sanitize flags such as -fsanitize=undefined to your code. The undefined sanitizer will warn you if you are relying on undefined behavior as per the C or C++ specifications.
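
As a quick illustration (a toy example of mine), the following program overflows a signed integer, which is undefined behavior in C:

#include <limits.h>
#include <stdio.h>

int main() {
  int x = INT_MAX;
  x = x + 1; /* signed integer overflow: undefined behavior */
  printf("%d\n", x);
  return 0;
}

Compile it with -fsanitize=undefined and run it: the sanitizer reports the overflow at runtime, pointing at the offending file and line.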

These flags represent significant steps forward for people programming in C or C++ with gcc or clang. They make it a lot more likely that your code will be reliable.

Really, if you are using gcc or clang and you are not using these flags, you are not being serious.

How close are AI systems to human-level intelligence? The Allen AI challenge.

With respect to artificial intelligence, some people are squarely in the “optimist” camp, believing that we are “nearly there” as far as producing human-level intelligence is concerned. Microsoft co-founder Paul Allen has been somewhat more prudent:

While we have learned a great deal about how to build individual AI systems that do seemingly intelligent things, our systems have always remained brittle—their performance boundaries are rigidly set by their internal assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical answers outside of their specific focus areas.

So Allen does not believe that we will see human-level artificial intelligence in this century. But he nevertheless generously created a foundation aiming to develop such human-level intelligence, the Allen Institute for Artificial Intelligence (AI2).

The Institute is led by Oren Etzioni, who obviously shares some of Allen’s “pessimistic” views. Etzioni has made it clear that he feels that the recent breakthroughs of Google’s DeepMind (i.e., beating the best human beings at Go) should not be exaggerated. Etzioni gave as an example the fact that their research-paper search engine (Semantic Scholar) can differentiate between significant citations and less significant ones. DeepMind’s engine works by looking at many, many examples and learning from them because they are clearly and objectively classified (we know who wins and who loses a given game of Go). But there is no win/lose label on the content of research papers. Human beings, in contrast, become intelligent in an unsupervised manner, often working from few examples and few objective labels.

To try to assess how far off we are from human-level intelligence, the Allen Institute launched a game where people had to design an artificial intelligence capable of passing 8th-grade science tests. They gave generous prizes to the best three teams. The questions touch on various scientific domains:

  • How many chromosomes does the human body cell contain?
  • How could city administrators encourage energy conservation?
  • What do earthquakes tell scientists about the history of the planet?
  • Describe a relationship between the distance from Earth and a characteristic of a star.

So how far are we from human-level intelligence? The Institute published the results in a short paper.

Interestingly, all three top scores were very close (within 1%). The first prize went to Chaim Linhart who scored 59%. My congratulations to him!

How good is 59%? That’s the glass half-full, glass half-empty problem. Possibly, the researchers from the Allen Institute do not think it qualifies as human-level intelligence. I do not think that they set a threshold ahead of time. They don’t tell us how many human beings can’t manage to get even 59%. But I think that they now set the threshold at 80%. Is this because that’s what human-level intelligence represents?

All three winners expressed that it was clear that applying a deeper, semantic level of reasoning with scientific knowledge to the questions and answers would be the key to achieving scores of 80% and beyond, and to demonstrating what might be considered true artificial intelligence.

It is also unclear whether 59% represents the best an AI could do right now. We only know that the participants in the game organized by the Institute could not do better at this point. What score can the researchers from the Allen Institute get on their own game? I could not find this information.

What is interesting, however, is that, for the most part, the teams threw lots of data into a search engine and used information retrieval techniques combined with basic machine learning algorithms to solve the problem. If you are keeping track, this is reminiscent of how DeepMind managed to beat the best human player at Go: use good indexes over lots of data coupled with unsurprising machine learning algorithms. Researchers from the Allen Institute appear to think that this outlines our current limitations:

In the end, each of the winning models found the most benefit in information retrieval based methods. This is indicative of the state of AI technology in this area of research; we can’t ace an 8th grade science exam because we do not currently have AI systems capable of going beyond the surface text to a deeper understanding of the meaning underlying each question, and then successfully using reasoning to find the appropriate answer.
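
To give a flavor of what an “information retrieval based” method can mean, here is a deliberately naive sketch of mine (not any contestant’s system, and the tiny “corpus” is made up): score each candidate answer by counting how many corpus sentences mention both a keyword from the question and the answer, then pick the best-scoring answer.

/* Toy multiple-choice answerer: pick the candidate answer that most
   often co-occurs with a question keyword in a tiny text corpus.
   Real systems use large indexes and proper relevance scoring. */
#include <stdio.h>
#include <string.h>

int main(void) {
  const char *corpus[] = {
      "a human body cell contains 46 chromosomes",
      "earthquakes reveal the structure and history of the planet",
      "stars that are farther from earth appear dimmer",
      "cities encourage energy conservation with efficiency programs"};
  const char *keyword = "chromosomes";           /* taken from the question */
  const char *candidates[] = {"23", "46", "64"}; /* candidate answers */
  int best = 0, best_score = -1;
  for (int i = 0; i < 3; i++) {
    int score = 0;
    for (int j = 0; j < 4; j++) /* count sentences mentioning keyword and answer */
      if (strstr(corpus[j], keyword) && strstr(corpus[j], candidates[i]))
        score++;
    if (score > best_score) { best_score = score; best = i; }
  }
  printf("best answer: %s (score %d)\n", candidates[best], best_score);
  return 0;
}

Such a program matches surface text; it does not understand the question, which is precisely the limitation the researchers describe.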

(The researchers from the Allen Institute invite us to go play with their own artificial intelligence, called Aristo. So they do have a system capable of taking 8th-grade science tests. Where are the scores?)

So, how close are we to human-level artificial intelligence? My problem with this question is that it assumes we have an objective metric. When you try to land human beings on the Moon, there is an objective way to assess your results. By their own admission, the Allen Institute researchers tell us that computers can probably already pass Alan Turing’s test, but they (rightfully) dismiss the Turing test as flawed. Reasonably enough they propose passing 8th-grade science tests as a new metric. It does not seem far-fetched to me at all that people could, soon, build software that can ace 8th-grade science tests. Certainly, there is no need to wait until the end of this century. But what if I build an artificial intelligence that can ace these tests, would they then say that I have cracked human-level artificial intelligence? I suspect that they would not.

And then there is a slightly embarrassing fact: we can already achieve super-human intelligence. Go back to 1975, but bring the Google search engine with you. Put it in a box with flashy lights. Most people in 1975 would agree that the box holds a very advanced artificial intelligence. There would be no doubt.

Moreover, unlike human intelligence, Google’s intelligence is beyond our biology. There are billions of human brains… it makes no practical sense to limit computers to what brains can do when it is obviously more profitable to build machines that can do what brains cannot do. We do not ask for cars that walk like we do or for planes that fly like birds… why would we want computers that think like we do?

Given our limited knowledge, the whole question of assessing how close we are to human-level intelligence looks dangerously close to a philosophical question… and I mean this in a pejorative sense. I think that many “optimists” looking at the 59% score would say that we are very close to human-level intelligence. Others would say that the winners only got 59% by using a massive database. But we should have learned one thing: science, not philosophy, is the engine of progress and prosperity. Until we can make it precise, asking whether we can achieve human-level intelligence with software is an endlessly debatable question, akin to asking how many angels fit in a spoon.

Still, I think we should celebrate the work done by the Allen Institute. Not because we care necessarily about mimicking human-level intelligence, but because software that can pass science tests is likely to serve as an inspiration for software that can read our biology textbooks, look at experimental data, and maybe help us find cures for cancer or Alzheimer’s. The great thing about an objective competition, like passing 8th-grade science tests, is that it cuts through the fog. There is no need for marketing material and press releases. You get the questions and your software answers them. It does well or it does not.

And what about the future? It looks bright:

In 2016, AI2 plans to launch a new, $1 million challenge, inviting the wider world to take the next big steps in AI (…)

Narrative illusions

Our brain contains lots of neurons and can do great things. I can read, write and speak fluently in two languages made up of tens of thousands of words. Millions of human beings can do that, and much more. But our brains also have clear limits. For example, despite the fact that I have advanced degrees in Mathematics, I still take a small pause when I need to compute the tip (15%) at a restaurant, especially if I have had wine or beer. And I sometimes get the result wrong. My computer could do a billion such computations per second. I could do no more than a handful.

There is overwhelming evidence that our brains do not see the world as it is; rather, they only build a limited model of it. This is clear when you look at optical illusions. When trying to understand history, technology or your own organization, you often resort to narratives. We build stories. “Steve Jobs became CEO, then he made Apple great again through his charisma.” “Barack Obama took power, rebooted the economy and gave the Americans hope again.”

And then you get “narrative illusions”. Was Apple turned around single-handedly by Steve Jobs? Those who thought so predicted that his death would mean the end of Apple. Yet Apple has continued to grow just as well without Steve Jobs. If you sold your stock when Steve Jobs died, you lost money.

Narratives are hacks that help us cope with the world. Nobody can truly understand how Tesla’s semi-autonomous cars came along. It is complicated. There is capitalism. There is technological progress… There are thousands of engineers working throughout the world in a distributed manner… How can we comprehend how it all came together? So our puny monkey brains just conclude that this one guy (Elon Musk) did it. He came in, said “I want smart electric cars”, and silly engineers went out and built them. End of story.

We often sum up Google by saying “two kids invented a magical algorithm called PageRank and became billionaires”. That’s not even 1% of the story, but we can understand this narrative and it suffices.

We should not, we cannot reject narratives. It is how our brain has to deal with the world. We should, however, be aware that these stories are not the same as reality. Elon Musk does not build next-generation rockets while he programs autonomous cars. Bill Gates did not write Windows 95. Steve Jobs did not invent the tablet, the mouse, the modern computer or even the smartphone. Barack Obama did not bring peace and prosperity to the world.

We hear a lot about how computers cannot think like we do or see like we do. But, one day, our descendants might outgrow most of our narrative illusions. They might look back at our puny attempts to understand the world and think “how could they survive?”.

Credit: This blog post was inspired by a Facebook exchange with Seb Paquet and Philippe Beaudoin.

Being shallow is rational

Pundits often lament that people have become shallow. They no longer sit down to read books cover to cover. Instead of writing thoughtful two-page emails, they write a single line. Sometimes they do not even write it themselves, as they often delegate the task to an artificial intelligence like Google’s Inbox.

In The myth of the unavoidable specialization, I argued that far from heading toward a world where everyone is a narrow specialist, we are headed toward a world of hypergeneralists. And what hypergeneralists do is to surf on the surface as fast as they possibly can. Hypergeneralists do not spend 3 weeks reading one book. They skim 3 books a day.

I believe that hypergeneralists are onto something.

Our world is characterized by three attributes:

  • It is fast changing. Entire new fields are created and destroyed every few years. The economy turns around every decade. When you spend three months studying deeply one topic, you have to consider that it could have greatly diminished value in a few years.

    At this point, people typically suggest that you can invest in topics that do not change. However, it is a lot harder than you might think to predict what won’t change. In the 1990s, there was no sign that newspapers would become obsolete. In fact, newspaper owners had every reason to think that their golden years were ahead of them. Only ten years ago, programmers would have considered that investing in Microsoft expertise was a safe bet. Those who did so missed the whole mobile revolution and they mostly missed the cloud revolution as well. They still can write beautiful desktop apps, but nobody cares, not even Microsoft.

  • It is vast. For the last few centuries, it has been impossible for any human being to read everything that is being written. But it has now gotten to the point where, no matter how narrowly you define a domain, you cannot possibly hope to keep up even if you do nothing all day but read your peers.

  • It is varied and interconnected. Maybe you think that studying ancient Greece or group theory will make you immune to obsolescence. But people who study ancient Greece are using virtual reality these days to walk the streets of Athens. And I bet that there are people applying the emerging field of deep learning to group theory.

You should always be ready to learn a new programming language, a new concept borrowed from sociology and a new statistical test. Your mind needs to remain agile.

Computers can always provide you with the details. If I need, this morning, to learn everything there is to learn about a specific form of cancer or about a new programming language, it is easy. It is easy as long as my mind is prepared for it.

What is a prepared mind?

  • You need to be agile. You should be able to go from thinking like a physicist to thinking like a programmer within a few hours. Sometimes you need to cover many different roles quickly, sometimes you need to go deep into one specific role. This requires you to have received training in a variety of fields.
  • Your memory is an index, not an encyclopedia. What is important is not how quickly you can remember facts out of thin air, but how quickly you can look things up. To look things up, you need to know that they even exist. Your mind must be aware of many things, but it does not need to store the details.
  • You must be able to process information quickly using constantly tuned filters. Our brains are not great at thinking deeply and quickly. It is one or the other. But you can use a good set of heuristics to guide you. In effect, you must develop good filters. Your filters need to be constantly adjusted, as you risk being blind to important new facts… but you cannot live effectively without good filters.
  • You need to constantly expand your mind with people and tools. No matter who you are, your naked brain is not smart enough. Trying to make it alone, without great tools, is like trying to get around on foot. You can be the greatest athlete on the planet; you still benefit from having access to a car and to planes. If you are not connected to super smart people, you also cannot win. The smart crowd knows more than you do.

And this sums up the hypergeneralist: agile thinking, memory like an index, finely tuned filters and mind expansion using people and tools.

Let us forget about the old man living in a monastery, reading thick books in isolation. He is the scholar of the past, not the bright mind of the future.

Could virtual-reality make us smarter?

When the web initially took off, there were major concerns that it was “dumbing us down”. There are similar concerns with e-books making us dumber. I am quite sure that when we first started to use the written word, there were related concerns: “not having to remember all of your thoughts will rot your brain”.

I think that these concerns are unwarranted. It is possible that kids today have a harder time doing long division on paper than previous generations. But even if that is the case, it should not concern us. If it so happens that our kids grow up to live in a future where computers are less ubiquitous than today, then the zombie apocalypse will be upon us and you should be training for survival, not for long divisions.

My kids spend more time playing video games than on their homework. A lot more time. I could stop them short and train them in mathematics instead. They could then impress their teachers. But I expect that they are probably learning more useful skills through their gaming than I could impart through my lessons. Moreover, I suspect that video games are good for our brains. There is still little science on the subject. I would not waste my time playing “brain games”, for example. But you can do a lot worse for your brain than playing Mario Kart.

The next revolution in gaming (and computing) might be virtual reality. I have already pre-ordered a PlayStation VR bundle.

You just know that people will voice concerns about what virtual reality does to our kids. These fears have started already. People talk about how isolated people will become and so forth.

My hypothesis is that virtual-reality games will be found to significantly enhance cognition. A recent study by Alloway and Alloway (2015) found that proprioception enhances our working memory. Virtual reality is all about proprioception, so I bet that virtual-reality users will find their working memory improving.

But that’s only the beginning.

When I was younger, I assumed that “training your brain” would involve doing mathematics. People who prefer to play sports would be, naturally, dumber. I’ll give you a hint as to what path I followed: my wife says I look like a little nerd.

I think that the evidence is now overwhelming that I was wrong. People who play sports end up with healthier brains. People who play sudoku all day long end up very good at it, but they don’t get smarter. I bet however that if you were to play the virtual-reality equivalent of sudoku, you would get smarter.

So here is a free start-up idea: virtual-reality “brain games”. Build video games designed to improve various cognitive abilities. Team up with college professors who can independently check whether there is any meat to your claims.

Speaking for myself, I hope to find time to help push this agenda forward…

Setting up a “robust” Minecraft server on a Raspberry Pi

My kids are gamers, and they love Minecraft. Minecraft sells its client software, but the server software is freely available. Since it is written in Java, it can run easily on Linux. Meanwhile, you can order neat little Raspberry Pi Linux computers for less than $50. So, putting two and two together, you can build cheaply a little box (not much bigger than my hand) that can be used as a permanent, low-power, perfectly silent game server. And you can expose your kids to servers, Linux and so forth.

There are many guides to setting up a Minecraft server on a Raspberry Pi, but the information is all over the place, and often obsolete. So I thought I would contribute my own technical guide. It took me a couple of long evenings to set things up, but if you follow my instructions, you can probably get it done in a couple of hours, once you have assembled all the material.

  • You need to buy a Raspberry Pi. I recommend getting either a Raspberry Pi 2 or a Raspberry Pi 3. I tried long and hard to get a stable and fast server running on a first-generation Raspberry Pi, but it was not good.
    • You need a power cord to go with it.
    • Moreover, you need a micro SD card. I recommend getting, at least, an 8GB card. Given how cheap cards are, you might as well get a larger card.
    • I recommend getting a nice plastic box to enclose your Raspberry Pi, just so that it is prettier.
    • You might also need an ethernet cable if you do not have one already. If you are going to use the Raspberry Pi, it is best to connect it directly to your router: wifi is slower, more troublesome and less scalable.
    • An HDMI cable, an HDMI-compatible monitor or TV, a USB keyboard and a USB mouse are also required at first.
  • Then you need to put the latest version of the Linux distribution for the Raspberry Pi, Raspbian, on the SD card. If you have an old version of the operating system, do not bother trying to upgrade it: starting from a fresh version is best. Simply follow the instructions from the Raspberry Pi website. Downloading the image files takes forever.
  • At first, you will need a monitor or a TV (with an HDMI connection), a keyboard and a mouse. Connect your Raspberry Pi to your router through the ethernet cable. Put the SD card in the Raspberry Pi. Plug in the monitor, the keyboard, and the mouse. Plug the power in and it should start. It will launch in a graphical mode with mouse support and everything you expect from a modern operating system: we will soon get rid of this unnecessary luxury. If, as happened to me, the card won’t stay plugged in, just use a rubber band. Hopefully, you have Internet access right away.
  • Go to the terminal. I recommend installing a couple of extra packages: sudo apt-get install netatalk screen. Then type sudo raspi-config. This command starts a little configuration tool. First, tell it to expand the file system so that it uses all the SD card. For safety, I recommend changing the default password (the basic account is called pi with password raspberry). You want to tell the Raspberry Pi to boot in the shell: Console Autologin Text console, automatically logged in as 'pi' user. In Internationalisation Options, you want to configure the time and locale. You may want to set the overclocking to the maximum setting. You want to assign the minimum amount of memory to the GPU (16 is enough) from Advanced Options. Make sure that the ssh server is on. Reboot the Raspberry Pi.
  • From your PC or Mac on the same network, you need to connect by ssh to pi@raspberrypi.local.
    On a Mac, just go to Terminal and type ssh pi@raspberrypi.local.
    If you are using Windows, you can access your Raspberry Pi via ssh by using Putty. You should now be in the bash shell. Once this works, you can unplug the Raspberry Pi from the monitor, the keyboard and the mouse. Your server is now “headless”.
  • Create a directory where you will install the Minecraft files: mkdir minecraft && cd minecraft.
  • Download the build file for Spigot (your chosen Minecraft software):

    wget https://hub.spigotmc.org/jenkins/job/BuildTools/lastSuccessfulBuild/artifact/target/BuildTools.jar
    
  • Build the server: java -jar BuildTools.jar. This will take forever. Go drink coffee.
  • Once this is done, start the server for the first time: java -jar -Xms512M -Xmx1008M spigot-1.9.jar nogui. This will create a file called eula.txt. You need to edit it with the command nano eula.txt. Make sure it reads eula=true.
  • Start the server a second time: java -jar -Xms512M -Xmx1008M spigot-1.9.jar nogui. It will take forever again. Go drink more coffee. Once the server shows its command prompt, it should be operational. Have a Minecraft player connect to raspberrypi.local. Once you have verified that everything works, type stop.
  • We are going to create a convenient script to start the server. Type nano minecraft.sh and write the following:
    #!/bin/bash
    cd /home/pi/minecraft
    screen -S minecraft -d -m java -jar -Xms512M -Xmx1008M spigot-1.9.jar nogui
    

    Make the script executable: chmod +x minecraft.sh.

  • To make the server more stable, type nano spigot.yml. Set view-distance: 5 and restart-script: /home/pi/minecraft/minecraft.sh.
  • Optionally, you may want to type nano server.properties and modify the greeting message given by the motd variable.
  • We want the server to start automatically when the Raspberry Pi reboots, so type sudo nano /etc/rc.local and enter su -l pi -c /home/pi/minecraft/minecraft.sh right before the exit command (see the excerpt after this list).
  • Start the server again using the script: ./minecraft.sh. It will return you to the shell. To access the console of the server, type screen -r minecraft; to return to the shell, type ctrl-a then d. At any point, you can now disconnect your ssh session: the server keeps running inside screen.
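
For reference, after the edit described above, the end of /etc/rc.local might look like this (the exit 0 line is already there on Raspbian):

su -l pi -c /home/pi/minecraft/minecraft.sh
exit 0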

And voilà! The result is a “robust” and low-cost Minecraft server. In my experience, the server will still crash on occasion, but it will be stable enough to be fun to use. Moreover, when it does crash, it will restart automatically.

In the hope of improving performance, I use a plugin called LaggRemover. However, I do not know whether it actually helps stability and performance. Adding plugins is easy: just drop the corresponding jar file in the plugins directory under the minecraft directory and reboot the server (type stop and relaunch minecraft.sh). (You can retrieve jar files from the Internet using the wget or curl commands in a shell.)

Next, you can make the server available on the Internet using a service like dyn.com, and some work on your router to redirect the Minecraft port (25565) from your router to the Raspberry Pi. It is not very difficult to do but it requires you to know a few things about how to configure your router. You should also be aware of the security implications.

Is there any point to all of this? Probably not. Minecraft servers like Spigot are memory hungry and the Raspberry Pi has little memory. However, the project has stretched my imagination and made me think of new possibilities. I used to recycle old PCs as home servers to provide backups and caching for various projects. I got tired of having old, noisy and bulky PCs in my home… but I could literally stack a cluster of Raspberry Pi computers in a shoe box. The fact that there is no fan is really a blessing.