Computer Science can be taken as a natural science: the study of how the universe processes information. If it is a natural science, does it build on Physics? Or does Physics build on Computer Science?

The answer is obvious (to me): Without algorithms there would be no Physics! Physics is built on the fundamental assumption that we can model the world using algorithms. **Computer Science is the most fundamental natural science.**

Does Computer Science make fundamental (and possibly falsifiable) predictions about the universe? Of course, it does! Take the strong Church-Turing thesis: the assumption that the universe is a Turing machine. It has deep implications:

- There is no problem solvable by a human brain that cannot be solved by a machine. In particular, creativity and intuition are computable. Philosophically, we have no soul (not any more than a PC).
- We all live within a discrete computer simulation. Physics is digital. Continuous functions (such as *f*(*x*) = sin(*x*)) are approximations to the discrete functions governing nature, and not the reverse. We all live in the Matrix.
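To make the sin(*x*) example concrete: every value of sin a computer ever produces already comes from a finite, discrete procedure. A minimal sketch (a truncated Taylor series; the 20-term cutoff is an arbitrary illustrative choice):

```python
import math

def sin_approx(x, terms=20):
    """Approximate sin(x) by a truncated Taylor series.

    Every value comes from finitely many discrete arithmetic steps;
    the 'continuous' function is never actually evaluated.
    """
    x = math.fmod(x, 2 * math.pi)  # range reduction for fast convergence
    total, term = 0.0, x
    for n in range(terms):
        total += term
        # next term of the alternating series x - x^3/3! + x^5/5! - ...
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

print(sin_approx(1.0))  # agrees with math.sin(1.0) to machine precision
```

Only finitely many multiplications and additions ever happen; the continuum is the idealization.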

The predictions are falsifiable:

- Come up with a task, such as writing poetry, that a brain can do, but not a digital computer.
- Show that we cannot reproduce a physical process using a digital computer.

Yet, these predictions remain unfalsified to this day.

To put it bluntly: The most fundamental of all natural laws is that we live in a Turing machine.

**Further reading**: The World is Digital (blog post by Dick Lipton)

Interestingly enough (and few people remember this), “computer science” was initially taught as Mathematics. It then diverged into what we know today. But initially, it had nothing to do with programming, algorithms, data structures and all that good stuff.

Reading this provocative mini-essay reminds me of the physics textbooks where the link between an equation and the one that followed was not explained by the author, who would simply write “It is easy to show that…” or “It is left to the reader to show that Eq. 9 is equivalent to Eq. 8”. Many a sleepless night ensued…

So “Physics is built upon the fundamental assumption [I would rather say “postulate”] that we can model the world using algorithms” entails “Computer Science is the most fundamental natural science”. How so?

After all, algorithms have been around for a couple thousand years, but not Computer Science (unless we live in The Matrix, of course).

What I can understand more easily is that both Physics and Computer Science share the above-mentioned fundamental assumption. But what does it say about one being “more fundamental” than the other?

As to the falsification argument, I can just say that to be worth considering, a theory challenging accepted beliefs (because this is what a theory is) doesn’t only have to be falsifiable (a criterion that has been challenged anyway), but it should:

– explain all the phenomena that current theories explain, and

– explain some phenomena that current theories are unable to explain, or make a prediction different from what one obtains with current theories

I don’t see that in the suggested predictions, which I would not qualify as predictions anyway : “Creativity and intuition are computable”? “We have no soul”? “We live within a discrete computer simulation”?

On a lighter note, when I see the spam-protection device on this blog, I come to think that it will be a very, very long time before there is no brain-solvable problem that a computer cannot solve.

@Marc

The Greeks had algorithms, but they did not have Physics. In this sense, Computer Science (the science of algorithms) came first. Without algorithms, we could not do Physics, but we can have algorithms without knowing about Physics.

As for creativity being computable, apparently, the best checkers players describe the current breed of software as “highly creative”. Unlike human beings who fall prey to regular patterns, the computer is able to examine a wider range of possibilities and “surprise” the human beings consistently.

This is contrary to the common belief that computers are “predictable” and “follow rules blindly”. In fact, it appears that computers can be more creative than we are.

Of course, if someone disagrees with this statement, they can simply challenge Computer Scientists with some task that *only* human beings can do well.

The problem is that the number of such tasks is rapidly diminishing. For just about every type of problem, there is software that can already match human beings.

If we ever get “stuck” where there are fundamental problems that we cannot solve using a digital computer, then we might think that indeed, our brain is different from a PC.

An entire field (AI) is working at trying to prove/disprove that brains are nothing but computers. So far, they have made an increasingly convincing case. Machine Learning is really a beautiful thing.

We can translate documents from French to English, with remarkable accuracy. Computers can systematically beat all checkers players. Computers can already beat 99.99% of human beings at Chess. Computers can drive cars. Computers can recognize individuals in pictures. Computers can write poems.

There is very strong evidence in favor of the fact that human brains are no more powerful than Turing machines.

If that’s not an amazing fact, an amazing theory, then I don’t know what is!

As I said in my comment on Richard Lipton’s post (http://rjlipton.wordpress.com/2009/10/04/the-world-is-digital/#comment-1607), I don’t buy the argument that the world is digital. Sure, I can simulate everything with a digital computer, but that does not mean that the world is fundamentally discrete. I could also deny that space-time is curved and simulate all the effects of General Relativity via a conspiratorial collection of forces and effects, but this has far less explanatory power and makes calculations far more difficult than the usual approach. Similarly, continuity plays a vital explanatory and simplifying role in current physical theories and I don’t think this has been adequately addressed in any paradigm of discrete physics. I’m still waiting for a good account of how relativity is supposed to work in a fundamentally discrete world.

As for the argument that everything is based on algorithms, I think one could make equally strong arguments that everything is based on statistics or logic or whatever your favorite discipline happens to be (the sociologists certainly gave it a shot in recent decades). It is all a matter of perspective and largely a result of the fact that our division of knowledge into disciplines is at least partially a human contingency rather than anything natural.

Sounds like the old question as to whether Math or Physics is more fundamental. Always struck me as kind of a non-question. They are not distinct.

In school I noted the intertwined history of both. Sometimes the Math was in advance of the Physics, and sometimes Physics called into existence new Math.

At the pragmatic level, the disciplines fall into patterns of quite differing usage. A Physics career is (usually) quite different. At the most fundamental level of theory, can Computer Science be in any way distinct from Physics?

This opinion is biased and provocative. CS and Physics are both fundamental and are parallel to each other.

If we really need to pick a winner between them, it is gonna be Physics. Physics is about the discovery of the real world. CS, essentially, is a tool, a representation.

To see the impact of these two fields, google “Nobel Prize Physics” and “Turing Award” and compare the results.

A little tangential, but seeing this discussion, I couldn’t help but think of this xkcd comic on “Fields arranged by purity”:

http://xkcd.com/435/

“Biology is just applied chemistry … [which] is just applied physics” which is just applied math.

I’m with Matt here.

Let’s distinguish between science as field of study, and science as the thing that is studied. You are claiming that the universe is computational, and that therefore (on the thing-studied basis) that the universe is founded on computer science. However, computer science (the field of study) does not investigate the fundamental nature of the universe, but the synthetic nature of artificial computational systems. The field of study that does investigate the fundamental nature of the universe, is physics. Therefore, the hypothesis that the universe is fundamentally computational, is (at least as an empirically-provable hypothesis) the domain of physics.

In general, I think it is a mistake to confuse field-of-study with thing-studied; but even if we do this, a computational universe is an aspect of physics, not of computer science.

It reminds me of a story from the wonderful book “The Computational Beauty of Nature”. (Please forgive me if I misquote it.)

Three professionals: surgeon, gardener and computer scientist were having the same argument – who has the oldest profession.

Surgeon said: God created Eve out of the bone of Adam, surely it is the act of surgery, hence my profession is the oldest.

Gardener: Yes, but before Adam and Eve, he created the Garden of Eden; this is obviously the first instance of gardening.

And the computer scientist: But before the earth was created, there was chaos. Where do you think chaos came from?

Speaking as both a physicist and a computer programmer who’s worked in infinite-dimensional quantum chaos simulations, it is my feeling that either the SCTT is incorrect, or our suspicions about the complexity class of several problems (factoring, for example), are wrong.

The key in the SCTT is “efficient” computation. If factoring (or any other problem belonging to BQP) can be shown to not belong to P, then we’ll have disproven the SCTT. (Right? It’s been a long time since I’ve done complexity :))

In fact, I don’t think we need to go that far, the universe, as far as we can ascertain, already *is* a quantum computer, not a classical one. Building functional quantum computers is just an engineering matter at this point.

But it’s worse than that, because BQP vs BPP is a *probabilistic* assertion, not one about the underlying structure of the universe. Considering that a single electron in free space (under the present quantum formalism) has an infinite-dimensional basis, I suspect that space considerations alone will overwhelm any digital simulation of the universe.

I’m uncomfortable with your claim of falsifiability. It seems like an unjustified way to claim the label of “empirical science”.

> 1. Come up with a task, such as writing poetry, that a brain can do, but not a digital computer.
>
> 2. Show that we cannot reproduce a physical process using a digital computer.

How could one possibly prove that it’s impossible for a computer to do something other than formal problems? And even the Halting problem is decidable *most* of the time.

The following papers seem relevant:

Kevin T Kelly – “Why You’ll Never Know whether Roger Penrose is a Computer”

” The main result is that we can verify (in the limit) that we are computers if and only if we are not computers, which I call an “empirical paradox” ”

Joel Hamkins – “The Halting Problem is decidable on a set of asymptotic probability one”
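Hamkins’ result above has an everyday analogue: a bounded interpreter decides halting whenever the program stops within its step budget, and only the stubborn instances force it to say “unknown”. A toy sketch (the step-generator model is my own simplification, not Hamkins’ construction):

```python
def bounded_halt_check(program, budget=10_000):
    """Run a toy program for at most `budget` steps.

    Returns "halts" if it finishes in time, "unknown" otherwise.
    This is only a partial decision procedure: the genuinely hard
    instances are exactly the ones that exhaust the budget.
    """
    steps = program()  # program is modeled as a generator of steps
    for _ in range(budget):
        try:
            next(steps)
        except StopIteration:
            return "halts"
    return "unknown"

def quick():          # halts after 3 steps
    for _ in range(3):
        yield

def forever():        # never halts
    while True:
        yield

print(bounded_halt_check(quick))    # halts
print(bounded_halt_check(forever))  # unknown
```

Undecidability only bites in the limit; for any fixed budget, “most” programs (in Hamkins’ asymptotic sense) reveal their behavior quickly.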

@Aphyr

I am not aware of any implication that quantum Turing machines are more powerful than regular Turing machines. This may well be true, and if so, it would disprove the strong Church-Turing thesis. If you have a pointer to a proof, I’d like to see it. (I don’t think Shor’s algorithm counts, but I could be wrong.)

> I suspect that space considerations alone will overwhelm any digital simulation of the universe.

Then maybe an experiment could prove this, somehow? Thus disproving the strong Church-Turing thesis?

@Gustavo While my claim to falsifiability might be weak, the strong Church-Turing thesis is definitively falsifiable. Here is a program to do it:

1) Prove that some problem cannot be solved in polynomial time (such as factoring integers).

2) Show that some natural process factors integers in polynomial time.
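For intuition on step 1: the naive classical algorithm for factoring takes roughly √n steps, which is exponential in the number of digits of n. A sketch for illustration only (the best known classical algorithms are sub-exponential, though still believed super-polynomial):

```python
def trial_division(n):
    """Factor n by trial division: about sqrt(n) iterations,
    i.e. exponential in the digit count of n. This scaling is
    why factoring is the canonical candidate for step 1."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # leftover prime factor
    return factors

print(trial_division(91))  # [7, 13]
```

Shor’s algorithm (step 2’s candidate “natural process”, if quantum computers count) does the same job in time polynomial in the digit count.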

@daniel: Comment 15 is exactly what I was proposing. A definitive *proof* that the universe is not a classical Turing Machine could be done by example: find a natural process which cannot be efficiently simulated by said machine.

Part 2 of your program has already been accomplished in theory (Shor’s algorithm), and practice (DAMOP this year was full of working quantum computers). The only question remaining is to prove whether BQP is disjoint with BPP (or perhaps more fundamentally, whether their non-probabilistic equivalent classes are disjoint).

I don’t have too much time right now, but perhaps http://arxiv.org/pdf/quant-ph/0203034 would be an interesting read?

(enticement, from the article)

The Church-Turing thesis stipulates that all the functions that are effectively computable are in fact Turing computable, and vice versa, thus restricting the intuitive and informal notion of computability to the well-defined mechanical operations of Turing machines. This identification is a thesis and not a theorem, not even a conjecture, because it can never be proven right in the mathematical sense. Nevertheless, it can be shown to be wrong if a counter-example can be found in which the computation steps are clearly and acceptably identified. Since it is testable, the thesis does not have the status of an axiom in Mathematics either.

We dispute this thesis by showing in a later Section that there exist computable functions, computable by executing well-defined quantum mechanical procedures in a finite manner, that are not Turing computable.

@Aphyr

The important point is that we agree strong Church-Turing thesis is falsifiable and thus a theory of nature?

Also, a relatively minor quibble: remember that Turing Machines have an infinite tape! Thus, if the universe is a TM, then the finite nature hypothesis is wrong.

The Wikipedia article you link to uses a very different definition of “Strong Church-Turing thesis”.

I like the idea of thinking of machine classes (e.g. Turing Machines) as functions from problems to asymptotic complexity classes… but you never know when these classes might collapse (e.g. if someone proves that P=NP)… Moreover, if P=NP, then quantum computers are equivalent to Turing Machines wrt complexity. So you end up collapsing not only problem classes, but machine classes too.

Step 1 of your program is notoriously hard for problems in NP (otherwise we would know that P!=NP).

SCTT?

stomach to caecum transit time?

stratum corneum turnover time?

subcutaneous tissue thickness?

supracerebellar transtentorial?

So is the BQP really JDIS in the KEWA of the BPP?

Darn acronymania!!!

(Yeah! Strong Church-Turing Thesis, but that really pisses me off.)

The problem I see with the whole argument is reflected in the way Einstein was limited in finding E=MC^2: the math appears simple, and Occam’s razor as a tool supports the logic.

I would look at things differently: can you define physics OR computation without a systemic construct? Since the primary requirement is that there is a non-monobloc state, and here things get weird, since a monobloc would by definition make the Turing machine AND its data a single item/materia/composite.

Can we define a plausible algorithm to describe the physics completely? We may come close, but I doubt we will ever describe the universe completely, as being part of the universe we would have to describe ourselves completely first (this would lead to a review of religions imho).

@Kevembuangga See this rule:

Avoid UA (useless acronyms).

from my “Write good papers” page (http://www.daniel-lemire.com/blog/rules-to-write-a-good-research-paper/).

@Daniel: Sure, I’ll agree it’s a theory of nature if it’s subject to empirical verification. The Wikipedia page and articles I’ve read seem to suggest as much…

Did you take a look at the paper I linked? I haven’t had a chance to read it fully yet, but if you’d like to discuss further, I’d be happy to take this over to e-mail. It’s a really interesting topic that I’d like to learn more about.

@Daniel

I was just thinking: is the Strong Church-Turing Thesis (SCTT) really falsifiable?

Any such falsification must involve a proof that nature solves a certain problem (i.e. via a “physical algorithm”) asymptotically faster than a Turing Machine. To my knowledge, there is no accepted way to prove complexity results empirically: remember you’re trying to draw conclusions about what happens to the computation time (and/or space) as n goes to infinity! To extrapolate to infinity, you need a theory (a.k.a. computational model) to tell you how your algorithm scales (in this case, this will be a physical theory). With a computational model in hand, you are back in CS Theory land, and can compare things asymptotically, but it’s hard to see how you could validate a physical model in the first place.

OTOH, it seems like we’ve accepted that the asymptotic behavior of PCs is modeled by Turing Machines… although we’ve seen in practice that this is an idealization, and PC performance is in fact slightly worse than that of Random-Access Machines.
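The idealization Gustavo mentions is easy to make concrete: an access that a random-access machine performs in one step costs a single-tape machine head travel proportional to the distance. A toy tape that simply counts head movement (my own illustration, not a formal simulation):

```python
class CountingTape:
    """A one-dimensional tape whose read cost is the head travel
    distance -- the reason single-tape machines pay a polynomial
    overhead relative to random-access machines."""

    def __init__(self, cells):
        self.cells = list(cells)
        self.head = 0
        self.moves = 0

    def read(self, i):
        self.moves += abs(i - self.head)  # walk the head over, one cell at a time
        self.head = i
        return self.cells[i]

tape = CountingTape(range(1000))
# A RAM touches these three cells in 3 steps; the tape pays for travel.
for i in (999, 0, 999):
    tape.read(i)
print(tape.moves)  # 2997
```

Real PCs sit between the two models: caches and memory hierarchies make far-away accesses cost more than nearby ones, which is exactly the “slightly worse than Random-Access Machines” behavior mentioned above.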

@Aphyr No time yet to read the paper. Thanks for the link though.

@Gustavo The analysis of the performance of real-world implementations is tricky, but not always so tricky that nothing can be said about their asymptotic running time.

@Aphyr

Tien D. Kieu’s arguments against undecidability sound too good to be true, and they likely are:

Three counterexamples refuting Kieu’s plan for “quantum adiabatic hypercomputation”; and some uncomputable quantum mechanical tasks

From Warren D. Smith’s publications.

> Tien D. Kieu’s arguments against undecidability sound too good to be true and they likely are

Yeah, they are utter garbage (not just wrong, but thoroughly and embarrassingly wrong with no redeeming features). I went to a talk of his once and spent some time arguing with him afterwards. Basically, he starts with Hilbert’s 10th problem, about the solvability of Diophantine equations. This is known not to be solvable by Turing machines. He then writes down a Hamiltonian encoding this problem, so that finding a solution of the original equation is equivalent to finding the ground state of his quantum mechanical system. This is easy. He argues that of course you can find the ground state using a quantum computer, which is therefore more powerful than a Turing machine.

This assertion is provably wrong in all previously studied models of quantum computers, but he claims these models are unnecessarily restrictive and that in reality a quantum computer could do this. He ends up spending a lot of time arguing for this position, but it’s all just silly. The fundamental computational issue here is that the solutions to the Diophantine equation may involve enormous numbers of digits (growing not just quickly but faster than any computable function). Even aside from the ludicrous difficulty of engineering a physical system that can handle this level of precision, you never know when you have searched the solution space thoroughly enough. If you could really cool your system to absolute zero, you would get the true ground state, but you can’t actually reach absolute zero, and you never know whether you’ve come close enough to say there’s definitely no lower-energy state.

So what it comes down to is this. If you can create and manipulate an exact Hamiltonian of a certain form (on an infinite-dimensional Hilbert space), and if you can find a guaranteed way to bring it into its ground state, then you can use this to do things no Turing machine can do. In my experience, no expert is surprised or impressed by this, since it’s an easy deduction from grossly unphysical hypotheses. However, Kieu has a lot invested in this idea and has spent the last 10+ years arguing for it. It’s kind of depressing.

> Take the strong Church-Turing thesis: the assumption that the universe is a Turing machine.

If the universe is infinite (even just in a Turing Machine-ish sense), it isn’t a TM, since the tape won’t be blank anywhere. But if people are finite, then everything they are capable of can be performed with a TM. The Church-Turing thesis should be about the nature of human ability, not the nature of reality (of course, the latter can only be meaningfully conceptualized through the lens of the former).

@Vladimir

> If the universe is infinite (even just in Turing Machine-ish sense), it isn’t a TM, since the tape won’t be blank anywhere.

(1) Where does this come from? I mean, “How do you know this?”

(2) Why is it relevant? And who said that TMs must have a blank tape?

Gustavo: TMs must have only a finite non-blank tape span. I don’t think it’s particularly relevant, but it seems a mistake to interpret C-T as saying that reality is a TM (also, I don’t find even a mention of this interpretation on Wikipedia). It’s enough to say that human ability (or algorithms, effectively computable functions) can be represented by TMs and leave reality alone. For example, you can trivially assume reality to be equipped with an uncomputable string of random data, something not fitting in a TM, and that won’t break C-T.

Oh, my mistake: Wikipedia does mention this position in relation to C-T, with a link to “digital physics”. The positions don’t seem to be inherently linked, but it’s clear that “digital physics” gets its intuitive foundation from C-T.

I’m not sure about computer *science*, but I wrote an article a few years ago in which I tried to draw the methodological parallels between doing science and writing *software*.

http://web.ncf.ca/andre/publications/methodology_of_design.pdf

Poincare quipped about the scientific method “man proposes, nature disposes”. I don’t think it’s very different with software, except that “nature” is replaced by “design specifications”.

Most computer scientists do not study or attempt to better understand nature (“nature” in the sense of how everything works). I think computer science *can* be taken as a natural science, but most computer scientists work on problems that are tools for the natural sciences.

In addition, one can easily argue that “Logic is the most fundamental natural science” using your argument.

Cannot the argument be made that even the most primitive progenitors of our species consciously comprehended basic physics prior to consciously comprehending any type of algorithmic directions? Generally speaking, human consciousness is inescapably inundated with a reality of some sort, a reality that is expressed in some type of terms that define the limits (or lack thereof) of action. Our progenitors likely did not assume anything regarding modeling reality because they likely never had the notion or intent to do so. Rather, they built their understanding on the first kernel of the physical laws that they perceived, and only afterward did they have the “terms” necessary to solve the problems of their existence.

Furthermore, cannot it also be said that an understanding of patterns and algorithms themselves, as they possibly exist in our perceived reality, is an investigation of physics itself, that it necessitates a platform to actually perform its operations? And cannot it also be taken one step further back, i.e., questioning the very validity and ability of our own reason to accurately discern and defend the truth that we may or may not discover through an investigation of our physical reality? In this circumstance, does not philosophy, then, become an even more rudimentary discipline than physics or computer science?

The predictions cited and stated to be based on the C-T thesis rest on assumptions, and hence do not necessarily declare absolute, positive truths about the universe. John Searle’s Chinese room argument counters the idea that conscious understanding and original, free direction are shown in a computerized agent even of supreme sophistication. The argument can be reversed back on itself, however, i.e., on the human mind that claims it as original thought, and thus philosophy, once more, prevents epistemological certainty (even preventing the “prevention” of epistemological certainty, and so on, and so on, ad infinitum). And even if a computerized agent can duplicate any existing physical process, it necessitates a physical platform and physical terms first before it can execute its operations.

These are my thoughts, not necessarily my opinions. I look forward to someone rebutting or bolstering this commentary. To the original author: thank you for this post!

Regarding Church’s thesis and quantum computers:

Suppose some fancy computational device, call it an oracle, can solve the halting problem. Then, you claim: there is a computable function which cannot be computed by a Turing machine, and so the Church thesis is disproved.

However, this is a misuse of the term computable function, and probably arises from confusion about what the Church thesis says. In fact, these issues have been studied in theoretical computer science: one can look up Turing jumps or the arithmetic hierarchy for more.

Further, there are computationally complete systems of total maps. For example, a reflexive object in a Cartesian closed category.

The Church thesis is not a mathematical statement about computer science, but more an insight about the relationship between the system in which one is working, and what that system can express.
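The undecidability result behind this whole exchange is the classic diagonal argument, which fits in a few lines. A sketch assuming a hypothetical `halts` decider (the stub below is mine; provably, no real implementation can exist):

```python
def halts(func, arg):
    """Hypothetical total decider: would return True iff func(arg) halts.
    No such function can exist; this stub only makes the sketch runnable."""
    raise NotImplementedError("provably impossible to implement")

def diagonal(f):
    # If halts(f, f) could be answered, then diagonal(diagonal)
    # would halt if and only if it does not halt: a contradiction,
    # so no Turing machine decides halting.
    if halts(f, f):
        while True:
            pass
    return "halted"
```

Turing jumps formalize what happens if you add such an oracle anyway: the same diagonal trick then produces a strictly harder halting problem one level up.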

@James

Yes, if a physically realizable machine could solve the halting problem, this would disprove the strong Church-Turing thesis. Of course, we would quickly replace it by something else, but it would have profound consequences, I believe.

You keep writing “Church thesis”, so maybe you are not clear on the fact that I address specifically the strong Church-Turing thesis which is as follows:

“The universe is equivalent to a Turing machine.”

@Daniel

Please try to understand that because computer science was known first, it doesn’t necessarily mean it is the most fundamental natural science; it surely means it is more intuitive than physics (physics is more abstract), hence the Greeks learned algorithms before physics. Understanding the basics of nature would be very painful without the knowledge of physics. An algorithm can be for anything, like baking a cake, which is not science; by mathematical induction, your statement is false.