Beyond the PC toward virtual and augmented reality

PC sales have entered a slow decline. Today, you can work on your Microsoft Office documents no matter where you are and no matter what computing device you have.

Predicting the short-term future is easy: the next smartphones will be more powerful, and laptops will look more and more like tablets.

For a living, I write software… this used to require powerful PCs with large monitors. Today I get most of my work done on an ultrathin laptop with a relatively weak processor. It has more pixels than my dual-monitor setup from 15 years ago, and plenty of computing power to spare.

But using a laptop in 2016 feels antiquated. No matter how good my monitor gets, it is still a dumb flat piece of plastic. You can add a touch interface, but that’s at best a minor benefit. Still… programming on a tablet is not really possible.

So what is the future?

The PC is dying. Even Microsoft has accepted that. Tablets and smartphones are nice, but they can’t fully replace PCs.

What will?

I think that virtual reality (VR) and, ultimately, augmented reality (AR) is the answer. Imagine walking into your office, putting on your glasses and “seeing” not only the silly windows that your monitor currently shows you, but a whole interactive interface that surrounds you.

Want to stand up and stretch your legs? Your work can follow you around.

Think I am crazy? Virtual Desktop is a working prototype of this idea that allows you to operate Windows using a VR system.

I don’t think I will be able to trade my laptop for a VR system this year, or even the year after that. There are significant hurdles to overcome.

  • Operating systems and applications are designed for PCs or mobile devices. Without VR-aware applications and operating systems, we cannot hope for productive work.
  • The current VR hardware may be well suited to games, but probably not to office work. We need goggles that we can wear for hours, and we need the equivalent of a usable keyboard. The screen resolution of current VR systems might be too low for productive work.

How long before we can overcome these obstacles? I cannot tell, but I also cannot imagine how we could possibly continue indefinitely with laptops. As PC sales continue to fall, there is a strong incentive for entrepreneurs to offer us the next big thing. Billions of dollars hang in the balance.

Imagining the future trumps intelligence…

Whenever I meet young people, I always stress how their future will be very different from the present.

Anyone who lived through the first Great War (1914-1918) would have thought that the Second World War, if it were to happen, would be quite similar in nature. But nothing of the sort happened. The first Great War saw soldiers stuck in place in dirty holes for years… the Second World War saw soldiers literally running forward (or away) on the battlefield.

Importantly, the strategies that worked well in 1916 were totally obsolete by 1940. The difference between 1916 and 1940 is obvious to anyone who studies history. It should be equally obvious that the difference between 2016 and 2040 is going to be much larger.

This means that whatever strategies you have today are going to be called into question in radical ways over the next 25 years. If you plan on doing the same things for the next 25 years, you are planning to be a fool.

The Nazis were not smarter than the rest of Europe, but, as far as warfare went, they were able to out-innovate their competitors in a radical manner. The Nazis were able to invade entire countries in hours… not months or weeks, hours…

Progress is at least an order of magnitude faster in 2016 than it was in 1916… So the difference between 2016 and 2040 is probably going to feel more like the difference between 1916 and 1990.

I am never worried about my kids lacking intelligence, but I am often concerned when I see that they can’t imagine the future being different… If you are unable to imagine the future, how are you going to contribute toward inventing it?

Default random-number generators are slow

Most languages, like Java and Go, come with standard pseudo-random-number generators. Java uses a simple linear congruential generator. Starting with a seed value, it generates a new value with the recurrence formula:

seed = (seed * 0x5DEECE66DL + 0xBL) & ((1L << 48) - 1);

The seed variable thus modified time and time again can be considered “random”. For many practical purposes, it is good enough.

It should also be quite fast. A multiplication has a latency of only 3 cycles on recent Intel processors, and that is the most expensive operation. Java should be able to generate a new 32-bit random integer every 10 cycles or so on a recent PC.

And that is the kind of speed you get with a straightforward implementation:

private long seed; // 48-bit internal state

int next(int bits) {
    // same linear congruential recurrence as above
    seed = (seed * 0x5DEECE66DL + 0xBL) & ((1L << 48) - 1);
    return (int) (seed >>> (48 - bits));
}

Sadly, you should not trust this analysis. Java’s Random.nextInt is several times slower at generating random numbers than you would expect, as the next table shows.

Function               | Timing (ns) on a Skylake processor
Random.nextInt         | 10.4
my next function above | 2.7

That’s nearly a four-fold difference in performance between my implementation and the standard Java one!

Languages like Go do not fare any better. Even the venerable rand function from the C standard library is several times slower than you would expect.

Why? Because the standard Java API provides you with a thread-safe random-number generator. If you use the Java random-number generator in a multithreaded context, it is safe: you will get “good” random numbers. Go and other languages do the same thing.

It is unclear to me why this is needed. You can easily cope with multithreading by giving each thread its own generator, with its own seed, as sketched below.
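
Here is a minimal sketch of that idea using plain java.util.Random; the class name is mine, and seeding from the thread id is just for illustration (a real application would want better-quality seeds):

import java.util.Random;

public class PerThreadRandom {
    // One independent generator per thread: no locks, no contention.
    private static final ThreadLocal<Random> RNG =
        ThreadLocal.withInitial(() -> new Random(Thread.currentThread().getId()));

    public static int nextInt() {
        return RNG.get().nextInt();
    }
}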

Evidently, language designers feel that random-number generators should be particularly idiot-proof. Why have the random-number generators received this particular type of attention?

For users who want less overhead, the Java API provides a class in the concurrent package called ThreadLocalRandom that is nearly as fast as my naive function, as the next table shows.

Function          | Timing (ns) on a Skylake processor
ThreadLocalRandom | 3.2
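
For what it is worth, here is a minimal usage sketch (the class name is mine; ThreadLocalRandom.current() is the standard entry point):

import java.util.concurrent.ThreadLocalRandom;

public class Demo {
    public static void main(String[] args) {
        // current() returns the generator owned by the calling thread,
        // so there is no locking on the hot path.
        int r = ThreadLocalRandom.current().nextInt();
        System.out.println(r);
    }
}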

In any case, if you need to write fast software that depends on random numbers (such as a simulation), you probably want to pick your own random-number generator.
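
As an illustration, here is a minimal sketch of one reasonable choice, SplitMix64. It is not part of the Java standard library: the class is mine, though the constants below are the standard published ones. It is not thread-safe, which is exactly the point: you own the state.

public class SplitMix64 {
    private long state; // the entire generator state: one 64-bit word

    public SplitMix64(long seed) {
        this.state = seed;
    }

    public long nextLong() {
        // Advance the state by a fixed odd constant, then mix the bits.
        long z = (state += 0x9E3779B97F4A7C15L);
        z = (z ^ (z >>> 30)) * 0xBF58476D1CE4E5B9L;
        z = (z ^ (z >>> 27)) * 0x94D049BB133111EBL;
        return z ^ (z >>> 31);
    }
}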

Reference: As usual, my benchmarking software is available online.

Credit: I am grateful to Viktor Szathmary for pointing out the ThreadLocalRandom class.

More of Vinge’s “predictions” for 2025…

In my previous post, I reviewed some of the predictions made in the famous science-fiction book Rainbows End. The book was written in 2006 by Vernor Vinge and set in 2025.

The book alludes to a massive book digitization effort under way. When the book was written, Google had initiated its own book digitization effort. It is impossible to know exactly how far along Google is in its project, but in 2013 it reported having digitized about a quarter of all books ever published. Google plans to have digitized most books ever published by 2020. This makes Vernor Vinge a pessimist: it seems absolutely certain that by 2025, most books will be available electronically. Sadly, most books won’t be available for free, but that has more to do with copyright law than technology.

It is tempting to anticipate social outcry at some advances. Vinge imagined that the digitization of books would be fiercely resisted… but nothing of the sort happened. Google faced lawsuits… but no mob in the streets. That is not to say that advances are never met with angry mobs. Genetic engineering is fiercely resisted, especially in Europe. Again, though, what gets people out into the streets is hard to predict.

What is interesting to me is that this massive book digitalization effort has not had a big impact. Even if we had free access to all of the world’s literature, I doubt most people would notice. Mostly, people do not care very much about old books. Wikipedia is a much bigger deal.

And this makes sense in a fast evolving civilization… It is not that we do not care about, say, history… it is just that most of us do not have time to dig in old history books… what we are looking for are a few experts who have this time and can report back to us with relevant information.

Will artificial intelligence eventually be able to dig into all these old books and report back to us in a coherent manner? Maybe. For now, much hope has been invested in digital humanities… and I do not think it taught us much about Shakespeare.

Rainbows End depicts delivery drones. Autonomous flying drones able to make deliveries must have sounded really far-fetched in 2006. The main difficulty today is regulatory: we have the technology to build autonomous drones that take a package and deliver it hundreds of meters away to a drop zone. Battery power is an issue, but drones can play relay games. Amazon’s plan for delivery drones is right on target to give us this technology by 2025. Barring short-sighted governments, it seems entirely certain that some areas of the USA will have routine drone deliveries by 2025.

Though Vernor Vinge is often depicted as an extremely optimistic technologist, many of his fantastic visions from the turn of the century are arriving right on schedule. We are also reminded of how long ten years can be in technology… Autonomous drones went from science fiction and advanced research projects to being available for sale online in ten years. Not everything follows this path… some “easy problems” turn out to be much harder than we anticipated. But surprisingly often, hard problems are solved faster than expected.

Consciousness and free will are illusions: you are just a robot

As a computer scientist, it is natural for me to view the brain as a computer. And though computers have different abilities, they are also very much all equivalent at a fundamental level. You have machines that can read and execute instructions. Some machines can run faster, others can hold more instructions… but these are details. Your computer may use transistors, DNA methylation, an actual mechanical tape or neurons, but that’s just a matter of implementation.

Unlike digital computers, our brains evolved to support software functions that we do not fully understand yet.

I have kids. They are my kids. I look outside. There is snow. I can type at this computer. I know who “I” am.

I don’t understand “how” my brain does all that. I could not reproduce it in a digital computer I would build. I am, nevertheless, convinced that there is nothing magical going on. There is no need for action at a distance or mysterious quantum effects. That we do not yet understand something does not mean that it has to be particularly complicated or that it requires techniques that are far above ours. In 1900, nobody could build a plane. In 1915, planes were used for critical missions in the first Great War.

The Cray supercomputer, the iPhone and blood

When I was in high school, the most powerful computer money could buy was the Cray 2. The thing would require a room all by itself. I don’t know how much it sold for, but an old MIT article says that it cost $140 per central-processing unit hour. In other words, if you had to ask how much it cost, you could not afford it.

It provided 1.9 GFLOPS peak performance. It had 1 GB of memory.

In a recent iPhone, the GPU alone has more number crunching power than the Cray 2 had and just as much memory.

We can play a little thought experiment. What if I had gone to the Cray engineers and told them that, one day, I would be able to fit their entire computer in my pocket? What would they have said?

And what if, today, I were to tell you that in 40 years, we will be able to fit all the computational power of your phone into nanobots that can live in your blood stream?

Revisiting “Holy Fire” (Bruce Sterling, 1996)

Bruce Sterling is a famous scifi novelist. One of his most celebrated novels was written 20 years ago: Holy Fire. It is a near-future novel, set in the late 21st century: Sterling set it about a century in the future from the time he wrote it.

Near-future novels provide a set of “predictions”. Of course, nobody expects scifi novels to “come true”. However, novelists try to paint worlds that are credible: that is what makes the necessary suspension of disbelief possible.

Near-future novels often fail to pass the test of time because the future is never quite what we imagined. So how well did Sterling do in his 1996 novel Holy Fire? Not bad at all as far as technology is concerned. I give Sterling an A+ on his ability to project himself in the future in a way that remains relevant 20 years later.

Let us recap briefly what technology was like in 1996. The Human Genome Project was underway, but it was still a Moon shot. There was no such thing as a tablet or smartphone. Sony had introduced the PlayStation. It was an innovative console because it used compact disks. Its processor ran at a fantastic 33 MHz (today’s PS4 runs at 1600 MHz). Though the web existed, blogging was unheard of. There was no search engine indexing a good fraction of the web, and the web was much smaller. Some human beings were still better than computers at chess. Artificial intelligence was just an academic idea; we did not even have spam filters. Anybody who predicted that 20 years later we would have self-driving cars would surely have been ridiculed.

Given this context, Sterling did not miss much.

Misses:

  • Though the novel is set in 2096, it seems that nothing like the smartphone has been invented. For example, people still have cameras. You can travel to a foreign city, get lost, and fail to find major tourist sites. So GPS and Internet maps are not ubiquitous.
  • The novel still has television and television shows. Television is already an old-people medium in 2016. It won’t survive to 2096.
  • There does not seem to be ubiquitous Internet access. This makes sense considering that the author missed the smartphone revolution.

Probable misses:

  • Though people wear smart glasses, wearable technology is bulky and inconvenient. People systematically have to go to a clinic to get health checks. It seems a lot more likely that in the near future, say 20 years, we will be able to monitor people’s health at home using unobtrusive devices.

Possibly correct:

  • The novel alludes to a Moon colony. At the time the novel was written, Earth did not have an international space station. If we have been able to sustain a space station, we ought to be able to sustain a Moon colony in the coming decades.
  • In the novel, the main protagonist is in Europe while she only speaks English. Instantaneous translation devices allow her to communicate, somewhat, with people who only speak their native language. Though it does not work as well as it should, this technology is already broadly available in 2016.
  • The novel depicts post-canine dogs that have extended longevity (40 years) and the ability to speak (through a speaker). We have developed working brain-computer interfaces (e.g., the cochlear implant). People and monkeys are able to control robotic arms using their brains. Could we ever make it so that a dog can speak? I have no clue, but 2096 is a long time away.
  • In the novel, corrective eyewear is seemingly a thing of the past, at least in young people. When the novel was written, Lasik was not yet approved in the US. Today, many people choose to have interventions to avoid having to wear glasses. It is unclear whether technology will ever get good enough so that we can do away entirely with corrective eyewear, but it seems that it might.
  • The main protagonist undergoes a rejuvenation therapy of “neotelomeric extension” to extend her telomeres. Telomeres are pieces of information-free DNA at the ends of our chromosomes that some believe act as a “genetic clock”: they grow ever shorter as you age. Resetting these telomeres could fool the cells into believing that they are young again and lead to robust rejuvenation. In 2015, one human being (Liz Parrish) underwent gene therapy with the goal of lengthening her telomeres. Harvard geneticist George Church has stated that his lab will be able to reset such a clock within 5 years. It is hard to tell at this point whether resetting the genetic clock will do any good or whether it is even possible, but it could.

Surely correct:

  • In the novel, one can take 3D pictures. That is, you can scan an individual and print a copy using a 3D printer. I think we have been able to do just that for some time.
  • Virtual and augmented reality are widely available in the novel, with many people wearing augmented reality glasses. Given how fast this field is advancing, we will probably have the technology depicted in the novel within 5 years, not 90 years.
  • Cars are self-driving. We shall get the kind of cars depicted in the novel in less than 10 years. In many ways, they are already here.

Conclusion: On the whole, it seems that Sterling was pessimistic regarding technology. His characters still debate the benefits of self-driving cars and they still watch television. He missed the ubiquitous availability of smartphones that make maps and cameras obsolete. His main protagonist appears impressed by 3D printing in 2096… whereas anyone can buy a 3D printer in 2016. But this is nitpicking: the novel still seems credible, even 20 years later. This is quite an accomplishment.

Credit: Thanks to Peter Turney for suggesting this novel to me.

Pac-Man running at 1 million frames per second

In What does technology want?, Kevin Kelly argued that technology is on an evolutionary path. In some real sense, technology is alive and growing. It seeks to improve itself at an ever faster rate. And it seems that technology currently loves software.

The difference between a recent processor and a processor from 10 years ago is ~10x. Yet it is not like Google is the same as it was, except 10x faster. Without better software, our video games would be like old-school Pac-Man, except that they would run at 1 million frames per second. If that sounds ridiculous, that’s because it is: software has improved a thousandfold.

I don’t think we are going to see an evil artificial intelligence trying to enslave us in the coming decades. However, we are going to get seriously insane software. It is not going to look like what we have today, only faster. It is probably unimaginable right now.

My most popular posts in 2015 (part II)

Techno-optimism

For several years now, I have grown more optimistic about the power of human innovation. Despite the barrage of bad news, the fact is that we are richer and healthier than we have ever been. Yes, I might not be rich or healthy compared to the luckiest among us… but on the whole, humanity has been doing well.

In Going beyond our limitations, I reflected on the “coming” end of Moore’s law. Our computers are using less power and they have chips that are ever smaller… but it is seemingly more and more difficult to improve matters. I argue that some of the pessimism is unwarranted: on the whole, we are still making nice progress… But it is true that, at the margin, we are facing challenges. I think we need to take them head on because I want robots in my blood hunting down diseases and chips in my brain helping me think more clearly.

In Could big data and wearables help the fight against diseases?, I argue that information technology could massively accelerate medical research.

In What a technology geek sees when traveling abroad, I reflect on how technology has evolved, by using a recent trip I made as a vantage point. In Amazing technologies from the year 2015…, I reflect on the technological progress we made in 2015.

This year, I read Rainbows End, a famous novel by Vernor Vinge set in 2025. The novel makes some precise predictions about 2025, one of them being that we shall cure Alzheimer’s by then (at least for some individuals). Interestingly, Hillary Clinton has announced a plan to do just that, should she be elected president of the USA. In Revisiting Vernor Vinge’s “predictions” for 2025, I looked at the novel as a set of predictions, trying to sort them into what is possible and what is less likely.

Aging

I like to stress how blind I can be to the obvious. I have a lot of education, probably too much education… but I am routinely wrong in very important ways. For example, I took biology in college and I got excellent grades. I attended really good schools. I studied hard. I have also been a gardener for most of my life. I have also kept green plants in my home for decades. Yet, until I reached my forties, I assumed that plants took most of their mass from the soil they were planted in. How could I ever think that? It is obviously deeply wrong. (In case you are also confused, plants are mostly made out of the carbon they extract from the CO2 in the air.) I just kept on assuming, even though I had all the facts at my disposal, and presumably all the required neurons, to know better.

And up till 2015, I assumed that aging was both unavoidable and irreversible. I guess I assumed that evolution had tuned bodies for an optimal lifespan and that whatever we got was the best we could get. After all, you buy a car and it lasts more or less ten years. You buy a computer and it lasts more or less five years. It makes sense, intuitively, that all biological organisms would have an expiry date based on wear and tear.

Yet this makes no sense. For example, I keep annual plants in my basement, making them perennial by pruning their flowers just before they bloom. If evolution drove living things to live as long as possible, these annual plants would be perennial. Yet we know that evolution did the opposite: we believe that plants were originally perennial and became annual more recently in their evolution. In Canada, we have the Pacific salmon that dies horribly after procreation, while the similar Atlantic salmon can reproduce many times. There is a worm, Strongyloides ratti, that can live 400 days in its asexual form but only 5 days in its sexual form. So the very same worm, with the same DNA, can “choose” to live a hundred times longer, or not. Many animal species like whales, some fishes (sturgeon and rougheye rockfish), some turtles, and lobsters do not age like we do… and they sometimes even age in reverse… meaning that their risk of death decreases with time while their fertility might go up.

So, clearly, we age because the body does not do everything it should to keep us in good shape. There are some forms of damage that your body cannot repair, but it could do a whole lot more. There is some kind of clock ticking… it is either the case that your body “wants” you to age on a schedule (like annual plants), or else your genes are simply ill-suited for longevity (because evolution does not care about what happens to the old). Whatever the case might be, aging is mostly a genetic disease we all suffer from.

It is all nice, but what does it matter to us, human beings? Ten years ago at Stanford, Irina Conboy and her collaborators showed that the blood of a young mouse could effectively “rejuvenate” an old mouse. This was a significant breakthrough, but it got little traction… until recently. How does it work? Conboy knew that when we do organ transplantation, it is the age of the recipient that matters. If you put a young lung into an old body, it behaves like an old lung. And vice versa: an old heart in a young body will act young. We now know that tissues respond to signals: if told to be young, cells behave as if they were young. We have been able to take cells from centenarians and reset them so that they look like young cells. So can you tell your body that you are young again? Drinking young blood won’t work, of course… instead, we want to identify the signals and tweak them accordingly. Recently, the Conboy lab at UC Berkeley showed that oxytocin (a freely available hormone) could rejuvenate old muscles. There are ongoing clinical trials focusing on myostatin inhibition to allow old people to have normal muscle mass and strength. The race is currently on to identify these signals and find ways to modulate them: old signals should be silenced and young signals should be amplified. There are many ways to silence or amplify a signal, but because we do not have the cipher book, it is tricky business.

Harvard geneticist George Church has other angles of attack, and he claims that “in just five or six years he will be able to reverse the aging process in human beings.” Church has been studying the genes of centenarians and he wants to identify protective alleles that we could then all receive through genetic engineering. Moreover, he has plans to up-regulate (and possibly down-regulate) certain genes. Indeed, as we age, many genes that were silent are activated, and a few that were active are down-regulated. This gene regulation is part of what we call “epigenetics”: though your genes might be set for life, which genes are expressed at any given time is a dynamic and reversible process. So cells know how to be old or young, and this seems to depend a lot on which genes are expressed. The process is also fully reversible as far as we can tell. Will George Church cure aging by 2020?

As it turns out, there are many other rejuvenation therapies in the works.

As you get older, your immune system starts to fail and even turns against you. Part of this process is that you effectively lose your thymus (around age 40): it becomes atrophied. The thymus is the organ in charge of “training” your immune cells. With it gone, your immune system gradually becomes incompetent. There are many ways to restore the thymus. There is an ongoing clinical trial making controlled use of hormones to regrow it. Gene therapies could also work, as well as various transplantation approaches. Setting the thymus aside, it is more and more common that we create immune cells in a laboratory and inject them. This could be used to boost the immune system of the very old.

Stem cell therapies are fast growing. We are able, in principle, to take some of your skin cells, turn them into stem cells and then inject them back into your body so that they go on to act as new stem cells to help repair your joints or your brain. There is an endless stream of therapies in clinical trials right now. Not everything works as expected… one particular problem is that stem cells signal and respond to signalling… this means that how a given stem cell will behave in a given environment is complicated. Just randomly dumping stem cells in your body is likely ineffective… but scientists are getting more sophisticated.

Your body produces a lot of garbage as you grow old. The garbage accumulates. In particular, amyloids clog your brain and your heart and eventually kill you. Also, some of your cells reach the end of their life, but instead of just self-destructing (apoptosis), they sit around and emit toxic signals. We have already had clinical trials to clear some of this garbage… the results have not been overly positive. But what is more encouraging is that we have developed the technology… should we ever need it.

In any case, for me, 2015 was the year I realized that we are aging because we lack the technology to stay young. I have described aging as a software bug. We have the software of life, we can tweak it, and we think we can “upgrade” the software so that aging is no longer a feature. We don’t know how long this might take… could be centuries, could be decades… but I think we will get there as a species.

In 2015, the first clinical trial for an “anti-aging” pill was approved (metformin). This pill would, at best, slow down aging a little bit… but the trial is important as a matter of principle: the American government has agreed that we could test an anti-aging therapy.

I have written about how astronauts mysteriously age in space. As far as I can tell, this remains mostly unexplained. Radiation, gravity, aliens?

Scholarship

In Identifying influential citations, I reported on the launch of Semantic Scholar, an online tool to search for academic references that innovates by distinguishing meaningful citations from de rigueur ones.

I also reacted to the proposal by some ill-advised researchers to ban LaTeX in favor of Microsoft Word for the production of research articles.

Math education

I think that math education is still far from satisfactory. In On rote memorization and antiquated skills and Other useless school trivia: the quadratic formula, I attempted to document how we spend a lot of effort teaching useless mathematical trivia to millions of kids, for no good reason that I can see. (I have a PhD in Mathematics and I have published research papers in Mathematics.)

My most popular posts in 2015… (part I)

Programming

If you want the world to get progressively better, you have to do your part. Programmers can’t wait passively for hardware to get better. We need to do our part.

In particular, we need to better exploit our CPUs if we are to keep accelerating our software. Early in 2015, I wrote a blog post explaining how to accelerate intersections using SIMD instructions. I also wrote about how to accelerate hashing using the new instructions for carryless multiplications, with a family of hash functions called CLHash, showing how it could beat the fastest legacy techniques such as Google’s CityHash. I have since shown that CLHash is twice as fast on the latest Intel processors (Skylake).

Some might point out that using fancy CPU instructions is hardly practical in all instances. But I have also been able to show that we could massively accelerate common data structures in JavaScript with a little bit of engineering.

Economics

In Secular stagnation: we are trimming down, I have argued that progress in a post-industrial society cannot be measured by counting “physical goods”. In a post-industrial society, we get richer by increasing our knowledge, not by acquiring “more stuff”. This makes measuring wealth difficult. Economists, meanwhile, keep on insisting that they can compare our standards of living across time without difficulty. Many people insist that if young people tend to own smartphones and not cars, that’s because they can’t afford cars and must be content with smartphones. My vision of the future is one where most people own far fewer physical goods. I hope to spend time in 2016 better expressing this vision.

Innovation

Though programmers make up a small minority of the workforce (less than 1 in 20 employees at most places), their culture is having a deep impact on corporate cultures. I have argued that the “hacker culture” is winning. We get shorter cycles, more frequent updates, more flexibility.

In Theory lags practice, I have argued that we have to be willing to act before we have things figured out. We learned to read and write before we had formal grammars. Watt built his steam engine before we had thermodynamics. The Wright brothers made an airplane without working out the science.

In Hackers vs. Academics: who is responsible for progress?, I have argued that if you want progress, you need people who are not specialists in self-promotion, but rather people who love to solve hard practical problems. In this sense, if we are ever going to cure Alzheimer’s or reach Mars, it is going to be thanks to hackers, not academics.