Physics works with fundamental properties such as mass, speed, acceleration, energy, and so on. Quantum mechanics has a well-known trade-off between position and momentum: you can know where I am, or how fast I am going, but not both at the same time.

Algorithms (and their implementations) also have fundamental properties. **Running time** and **memory usage** are the obvious ones. In practice, there is often a trade-off between memory usage and running time: you can have low memory usage, or a short running time, but not both. Michael Mitzenmacher reminded me this morning of another: **correctness**. On some difficult problems, you can get low memory usage and a short running time if you accept an approximate solution.
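To make the time-memory trade-off concrete, here is a minimal Python sketch (my own illustration, not from the post): the naive recursion uses almost no memory but exponential time, while the memoized version buys a fast running time with a linear-size cache.

```python
from functools import lru_cache

def fib_slow(n):
    # Recomputes every subproblem: tiny memory footprint, exponential time.
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    # Caches every subresult: linear memory, linear time.
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)
```

Both compute the same function; the only difference is where they sit on the time-memory curve.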

I believe there are other fundamental properties like **latency**. Consider problems where the volume of the solution and of the input is large: statistics, image processing, finding some subgraph or sublist, text compression, and so on. In such instances, the solution comes out as a stream. You can measure the delay between the input and the output. For example, a program that compresses text by first scanning the whole text might have high latency, even if the overall running time is not large. Similarly, we can give the illusion that a Web browser is faster by beginning to render the Web page sooner, even if the overall running time of the rendering is the same. As another example, I once wrote a paper on computing the running maximum/minimum of an array where latency was an issue.
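As a small illustration of latency as distinct from running time (a hypothetical Python sketch, not the algorithm from my paper), both versions below do the same total work on the running maximum, but the streaming one emits each output as soon as the corresponding input arrives:

```python
def batch_running_max(values):
    # Batch version: reads the entire input first, then emits all outputs.
    # Same total work, but the first output is delayed (high latency).
    data = list(values)
    out, best = [], float("-inf")
    for v in data:
        best = max(best, v)
        out.append(best)
    return out

def streaming_running_max(values):
    # Streaming version: each output is yielded as soon as its input
    # arrives, so the delay between input and output stays small.
    best = float("-inf")
    for v in values:
        best = max(best, v)
        yield best
```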

It would be interesting to come up with a listing of all the fundamental properties of computing.

@Geoff Absolutely. And it might be hard to define latency as well. But the mere fact that it is hard to define these quantities should not prevent us from thinking about them as being fundamental concepts (if not properties).

@Geoff

Your question is entirely valid. However, the concept of the “running time” of an algorithm is also somewhat fuzzy. It has become the subject of an entire subfield of Computer Science called Complexity Theory.

I think that it always comes down to (somewhat arbitrary) models and representations. We know that these models are useful because we implement algorithms in software and realize that the complexity of the algorithm is, indeed, closely related to the performance of the software. However, the match is not perfect.

Maybe I exaggerate when I try to compare “running time” or “latency” with mass or momentum. I do not know yet.

If correctness is a fundamental property then you should probably define it. I suspect this is harder than it sounds. Does it mean it passes a set of tests? Does it have a specific mathematical relation that it must satisfy? Do the results make the customer happy? These are not synonymous, nor are they mutually exclusive.

Clearly there are degrees of correctness, but it is a more relativistic measure than resource usage.

@Daniel Certainly not and I didn’t mean to suggest otherwise.

I do wonder, however, whether a property that turns out to be very difficult to define should even be considered a fundamental property. Or perhaps it means that what the property pertains to — in this case algorithms — is ill-defined.

Interesting questions to ponder.

@Guy Excellent point. Yes, definitely.

I have to say – the three measures that you’re identifying sound a great deal like the traditional engineering triangle: good, fast, and/or cheap. You can make it as good as you want (correctness) but only at the expense of either speed or cost (memory/cycles/etc).

@Geoff: A standard way to give “correctness” a mathematically pure definition is via an approximation ratio: I can guarantee an answer within a factor of 2 of the optimal. In the case of randomized algorithms, another standard approach is a probabilistic statement: I get the correct answer with high probability (or with probability at least x, for whatever x is convenient).
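The textbook 2-approximation for vertex cover makes such a guarantee concrete (my own sketch, not part of the original comment): take both endpoints of any uncovered edge, and the resulting cover is at most twice the optimal size.

```python
def vertex_cover_2approx(edges):
    # Greedy maximal matching: for every edge not yet covered, take both
    # endpoints. Any optimal cover must include at least one endpoint of
    # each matched edge, so this cover is at most twice the optimal size.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover
```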

So, I disagree with your assertion that correctness needs to be a “more relativistic measure” than running time. There are standard ways of examining it in the theoretical literature; one can certainly imagine many variations (and people do), but this is also true for measures like running time.

I’ve found that students are initially startled by the idea that correctness can be just another performance measure, like memory usage and computation time. It’s a challenging and important philosophical idea, well worth introducing in this form to undergraduates.

Peter Denning runs a “Great Principles of computing” project. http://cs.gmu.edu/cne/pjd/GP/GP-site/welcome.html

If you want to deal with parallel algorithms, you might also consider work and cost in addition to time, where cost = time * nb processors and work = total number of operations performed over all processors.

In the sequential world, the relationship between these three is straightforward (since nb processors = 1):

time = work = cost

However, this is not so in the parallel world, where you only have:

time <= work <= cost

And you also have possible trade-offs: using fewer processors with a different algorithm may reduce the cost but increase the time — or not, in which case the parallel algorithm might be considered “optimal” relative to the sequential version (if it has the same cost as the sequential algorithm with an improved time).
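A toy cost model (my own sketch, assuming an idealized tree reduction with no communication overhead) illustrates the inequality: summing n numbers on p processors takes roughly n/p local additions plus log2(p) combining steps, while the total work stays at n - 1 additions.

```python
import math

def parallel_sum_metrics(n, p):
    # Idealized reduction of n numbers on p processors: each processor
    # adds its share sequentially, then the partial sums merge in a tree.
    time = math.ceil(n / p) - 1 + math.ceil(math.log2(p))
    work = n - 1          # total additions performed over all processors
    cost = time * p       # time * nb processors
    return time, work, cost
```

With p = 1 the three quantities coincide; with p > 1 you get the strict chain time <= work <= cost.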

For these fundamental properties, isn’t it important to draw a distinction between “computing” and “human computing”?

Speed vs. memory — that seems like a computer-only fundamental property.

But page rendering / latency seems more like a human perception issue. Same with correctness, to a certain extent.

@Zeno

There is a relation between latency and running time, but they are not the same thing.

Consider the rendering of HTML pages. Some browsers begin the rendering faster than others, at the cost of a larger overall running time.

Interesting post & discussion.

However, I don’t think that latency is as fundamental as running time, memory usage, and correctness.

If latency is the delay between the input and the output, then it is also something that we should file under “running time”.

I think correctness is most interesting when considered as an orthogonal dimension to other properties. Then latency is d(correctness) / d(running time), that is, the rate at which correctness changes as a function of time. For a traditional algorithm this would be undefined (you jump from 0 to 1 instantaneously), but for a loading webpage it might be linear. Similarly, approximate algorithms often give a guaranteed correctness relative to the amount of memory available. At some quantity of memory and time the algorithm might be exact, but it can degrade gracefully as memory declines.
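As a sketch of correctness improving continuously with running time (my own toy example, not from the comment), an anytime Monte Carlo estimate of pi yields a running answer whose expected accuracy grows with every sample:

```python
import random

def pi_anytime(samples, seed=42):
    # Anytime estimator: yields a running estimate of pi after every
    # sample, so accuracy improves gradually as running time grows.
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    inside = 0
    for i in range(1, samples + 1):
        x, y = rng.random(), rng.random()
        inside += (x * x + y * y) <= 1.0
        yield 4.0 * inside / i
```

Stopping early gives a rougher answer rather than no answer at all, which is exactly the 0-to-1 jump being smoothed out.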

Thanks for prompting this question, it’s interesting to think about modelling algorithm performance as a multi-dimensional shape.

Shouldn’t there be a distinction between “algorithm level properties” and “algorithm implementation level properties”? Those are hardly the same.