Oracle’s Java is a fast language… sometimes just as fast as C++. In Java, we commonly use polymorphism through interfaces, inheritance, or wrapper classes to make our software more flexible. Unfortunately, when polymorphism sits on a hot path with many function calls, Java’s performance can suffer. Part of the problem is that Java is reluctant to fully inline code, even when it would be entirely safe to do so.

Consider the case where we want to abstract out integer arrays with an interface:

public interface Array {
    public int get(int i);
    public void set(int i, int x);
    public int size();
}

Why would you want to do that? Maybe because your data can be in a database, on a network, on disk or in some other data structure. You want to write your code once, and not have to worry about how the array is implemented.

It is not difficult to produce a class that is effectively equivalent to a standard Java array, except that it implements this interface:

public final class NaiveArray implements Array {
    protected int[] array;
    
    public NaiveArray(int cap) {
        array = new int[cap];
    }
    
    public int get(int i) {
        return array[i];
    }
    
    public void set(int i, int x) {
        array[i] = x;  
    }
    
    public int size() {
        return array.length;
    }
}

At least in theory, this NaiveArray class should not cause any performance problem: the class is final, and all its methods are short.

Unfortunately, on a simple benchmark, you should expect NaiveArray to be over 5 times slower than a standard array when used as an Array instance, as in this example:

public int compute() {
   for(int k = 0; k < array.size(); ++k) 
      array.set(k,k);
   int sum = 0;
   for(int k = 0; k < array.size(); ++k) 
      sum += array.get(k);
   return sum;
}

You can alleviate the problem somewhat by declaring the variable with the concrete type NaiveArray instead of the interface (avoiding polymorphic dispatch). Unfortunately, the result is still more than 3 times slower, and you have just lost the benefit of polymorphism.

So how do you force Java to inline function calls?

A viable workaround is to inline the functions by hand. You can use the instanceof keyword to provide optimized implementations for specific classes, falling back on a (slower) generic implementation otherwise. For example, with the following code, NaiveArray does become just as fast as a standard array:

public int compute() {
     if(array instanceof NaiveArray) {
        int[] back = ((NaiveArray) array).array;
        for(int k = 0; k < back.length; ++k) 
           back[k] = k;
        int sum = 0;
        for(int k = 0; k < back.length; ++k) 
           sum += back[k];
        return sum;
     }
     //...
}

Of course, I also introduce a maintenance problem, as the same algorithm must now be implemented more than once… but when performance matters, this is an acceptable trade-off.
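To make the pattern concrete, here is a self-contained sketch; the wrapper class Sum is my own invention, and the interface and NaiveArray are restated so that the example compiles on its own:

```java
// Array and NaiveArray restated so this sketch compiles on its own.
interface Array {
    int get(int i);
    void set(int i, int x);
    int size();
}

final class NaiveArray implements Array {
    int[] array;
    NaiveArray(int cap) { array = new int[cap]; }
    public int get(int i) { return array[i]; }
    public void set(int i, int x) { array[i] = x; }
    public int size() { return array.length; }
}

// Illustrative wrapper (name is mine): fast path for NaiveArray,
// generic fallback for any other Array implementation.
class Sum {
    Array array;
    Sum(Array array) { this.array = array; }

    public int compute() {
        if (array instanceof NaiveArray) {
            // Hand-inlined fast path: work directly on the backing int[].
            int[] back = ((NaiveArray) array).array;
            for (int k = 0; k < back.length; ++k)
                back[k] = k;
            int sum = 0;
            for (int k = 0; k < back.length; ++k)
                sum += back[k];
            return sum;
        }
        // Generic (slower) path: the same algorithm through the interface.
        for (int k = 0; k < array.size(); ++k)
            array.set(k, k);
        int sum = 0;
        for (int k = 0; k < array.size(); ++k)
            sum += array.get(k);
        return sum;
    }
}
```

Filling the array with 0 through size-1 and summing yields size*(size-1)/2, so both paths return the same result; only the fast path touches the backing int[] directly.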

As usual, my benchmarking code is available online.

To summarize:

  • Java fails to fully inline frequent function calls even when it could and should. This can become a serious performance problem.
  • Declaring classes as final does not seem to alleviate the problem.
  • A viable workaround for expensive functions is to optimize the polymorphic code by hand, inlining the function calls yourself. Using the instanceof keyword, you can write code for specific classes and, thus, preserve the flexibility of polymorphism.


I have always been interested in what makes us smart. So I read Amanda Ripley’s The Smartest Kids in the World in almost a single sitting. She is a good writer.

The core message of the book is simple and powerful. Entire countries can change how their kids rank in international academic competitions within a few short years. For example, Poland (a relatively poor country) ranked 25th in mathematics in 2000, but 13th in 2009. Finland was the strongest democratic country in 2006, but Japan and South Korea had surpassed it by 2012. Meanwhile, the United States consistently does poorly despite outspending everyone on a per-student basis. Some Asian countries (e.g., Singapore and selected parts of China) put everyone else to shame, but she dismisses them as being too far removed from democracy to serve as models.

She covers three education superpowers: Finland, South Korea and Poland. Besides the fact that they are all democracies, these three countries have very little in common, except that their 15-year-olds fare well on academic tests. It is not clear whether there is any lesson to be learned from these countries.

To make things worse, her choice of countries is somewhat arbitrary. For example, Canada (my home country) also does very well, but she somehow decided that comparing Canada with the United States would not make a good story. Maybe Canada is not exotic enough by itself, but she could have considered the French province of Quebec. It is one of the poorest places in North America, but in the 2012 PISA test, Quebec scored 536 in mathematics, which is as good as Japan and better than Finland (519), Poland (518), and a lot better than the United States (482).

The book is very critical of the Asian mindset. Kids in South Korea are drilled insanely hard, starting their school day at 8am and often ending it at 11pm. Finland comes across as a much nicer place in the book. Even Poland appears pleasant compared to South Korea.

To be fair, if half the things she writes about how kids are drilled in South Korea are true, I would never send my boys to school there.

Implicit in the book is the belief that the United States will pay a price in the new economy for its weak schools. American kids spend too much time playing football, and not enough time studying mathematics. Or so the book seems to imply.

This seems a bit simplistic.

My impression is that in some cultures, like South Korea, much of your life depends on how well you are doing at 15. So, unsurprisingly, 15-year-old kids do well academically. In the United States, people will easily forgive a poor high school record. You can compensate later on. So maybe American teenagers spend more time playing video games than doing calculus: who could blame them?

What is a lot more important for a country is how well your best middle-aged workers do. The bulk of your companies are run by 40-something or 50-something managers and engineers. Only a select few do important work in their 20s. In my experience, much of what you know by the time you are 40 was learned on the job.

So I am not willing to predict bad times ahead for the United States based solely on the academic aptitude of their kids. I think we should be a lot more worried about the high unemployment rates among young people in Europe. Sure, French kids may earn lots of degrees… but if you do not have 10 years of solid work experience by the time you are 40, you are probably not contributing to your country as much as you could.

It is important to measure things. I am really happy that my kids are going to go to school in Quebec, a mathematics superpower at least as far as teenagers are concerned. But there may be trade-offs. For example, by drilling kids very hard, very early, you may drain their natural love of learning. This can lead to employees who will not be learning on their own, for the sake of it. Or you may discourage entrepreneurship.

As a general rule, we should proceed with care and avoid hubris because we may not know nearly as much as we think about producing smart kids.

We learned recently of the suicide of Stefan Grimm, a successful professor at the prestigious Imperial College London. Professor Grimm regularly published highly cited articles in the best journals. He was highly productive. Unfortunately, some of his colleagues felt that he did not secure sufficiently large research grants. So he was to be fired.

It is not that he did not try. He was told that he was the most aggressive grant seeker in his school. He worked himself to death. I am willing to bet that he was failing my week-end freedom test. But he still failed to secure large and prestigious grants because others were luckier, harder working, or even smarter.

It is not remarkable that he felt a lot of pressure at work. It is not remarkable that he was fired despite being smart and hard-working. These things happen all the time. What is fascinating is the contrast between how most people view an academic job and Grimm’s reality.

Other academics (starting maybe with Richard Hall) described academia as an ‘anxiety machine’:

Throw together a crowd of smart, driven individuals who’ve been rewarded throughout their entire lives for being ranked well, for being top of the class, and through a mixture of threat and reward you can coerce self-harming behaviour out of them to the extent that you can run a knowledge economy on the fumes of their freely given labour.

(…)

I know plenty of professors and star researchers who eat, sleep and breathe research, and can’t understand why their junior colleagues (try to) insist on playing with their children on a Sunday afternoon or going home at 6. ‘You can’t do a PhD and have a social life’, my predecessor told me.

It is simply not very hard to find overly anxious professors. I know many who are remarkably smart and who have done brilliant work… but they remain convinced that they are something of a failure.

Successful academics have been trained to compete, and compete hard… and even when you put them in what might appear like cushy conditions, they still work 7 days a week to outdo others… and then, when they are told by colleagues that it is not yet enough… they take such nonsensical comments at face value… because it is hard to ignore what you fear most…

And it is all seen as a good thing… without harsh competition, how are you going to get the best out of people?

Did you just silently agree with my last sentence? It is entirely bogus. There is no trace of evidence that you can get the best out of people at high-level tasks through pressure and competition. The opposite is true. Worried people get dumber. They may be faster at carrying rocks… but they do not get smarter.

Stressing out academics, students, engineers or any modern-day worker… makes them less effective. If we had any sense, we would minimize competition to optimize our performance.

The problem is not that Grimm was fired despite his stellar performance, the problem is that he was schooled to believe that his worth was lowered to zero because others gave him a failing grade…

Source: Thanks to P. Beaudoin for the pointer.

Back in the 1970s, researchers astutely convinced governments that we could build intelligent systems out of reasoning engines. Pure logic would rule the day. These researchers received billions for their neat ideas. Nothing much came out of it.

Of course, what we now call artificial intelligence works. Google is maybe the most tangible proof. Ask any question and Google will answer. But what lies behind Google is not a giant reasoning engine crunching facts. Rather it uses a mix of different techniques including machine learning and heuristics. It is messy and not founded on pure logic.

Collecting, curating and interpreting billions of predicates is a fundamentally intractable problem. So our AI researchers failed to solve real problems, time and time again. And their funding was cut, severely. We had what has become known as the AI winter.

But bad ideas will not die so easily. The AI researchers who started out in the 70s trained PhD students with their generous funding. These PhD students have grown up to become professors themselves, and the wheel keeps on turning. As long as there are academics who find these ideas appealing, there will be grants and professorships. The ivory tower will not let reality intrude.

Berners-Lee, inspired by these misguided researchers, proposes the Web as a way to revive the AI dream. For reasons having nothing to do with the original intent, the Web takes root and becomes ubiquitous.

Using his newfound fame, Berners-Lee goes back to his original dream and promotes the Semantic Web. The web should embrace the original vision from the 70s: collect and maintain lots of facts and use a query engine to create intelligence.

Ten years go by and hardly anything (outside academic conferences) comes out of this Semantic Web. Tens of thousands of research papers get written. Hardly anyone ever uses the technology produced, and those who do achieve few useful results. Not to worry: Berners-Lee rebrands whatever remains as Linked Data.

And the wheel keeps on turning…

So when I am asked about the possibilities offered by Linked Data, I feel a great sorrow.

A lot of what I do is… quite frankly… not very good. To put it differently, almost everything I do fails to meet my quality standards. So I am constantly fighting to do better.

It is not perfectionism. Perfectionism is the refusal to do anything unless it meets your high standards. Put another way, perfectionism is synonymous with a lack of humility. It is a belief that flawed results are beneath you.

Let me be blunt: perfectionists are self-centered pretentious pricks.
