Turing Award recipient tears apart artificial intelligence

Nice article by Peter Naur, Turing Award recipient, in the latest Communications of the ACM (January 2007). He takes on the hypothesis that the human mind is nothing but a Turing machine, and tears it apart:

(…) human thinking basically is a matter of the plasticity of the elements of the nervous system, while computers—Turing machines—have no plastic elements.

Naturally, this is not the first time someone has objected to this hypothesis, which motivates the field of Artificial Intelligence (most famously, Roger Penrose objected with a completely different line of argument).

I’m not sure I understand why we can’t model the nervous system in a Turing machine. My concern would be more to see whether we can do so in roughly O(n) time, where n is the number of neurons, or under some other complexity metric. It would suffice that machine intelligence requires prohibitive (though not necessarily NP-hard) computations for Naur’s point to hold. Even if, on paper, you could simulate a brain in O(n) time, you would still have to demonstrate numerical stability.
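For concreteness, here is a minimal sketch of what such a simulation loop might look like (entirely my own, with illustrative names and parameters, assuming a crude discrete spiking-network model); the point is only that each step costs time proportional to the number of neurons plus the synapses of the neurons that fire, so the bill is driven by connectivity rather than by the neuron count alone:

    import random

    def simulate_step(potentials, synapses, threshold=1.0, decay=0.9):
        # One update step: fire neurons above threshold, decay all
        # potentials, then propagate spikes along outgoing synapses.
        # Cost: O(n) for the scans plus O(1) per synapse touched.
        fired = [i for i, v in enumerate(potentials) if v >= threshold]
        potentials = [v * decay for v in potentials]
        for i in fired:
            potentials[i] = 0.0              # reset after firing
            for j, w in synapses[i]:         # outgoing connections
                potentials[j] += w           # deliver weighted input
        return potentials

    # Toy network: 1000 neurons, about 10 random outgoing synapses each.
    n = 1000
    potentials = [random.uniform(0.0, 1.5) for _ in range(n)]
    synapses = [[(random.randrange(n), random.uniform(0.0, 0.2))
                 for _ in range(10)] for _ in range(n)]
    for _ in range(100):
        potentials = simulate_step(potentials, synapses)

Whether such a discrete, fixed-topology model captures anything of the plasticity Naur insists on is, of course, exactly the point in dispute.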

He takes a shot at Natural Language Processing:

Talking of verbal ‘word senses’ given by ‘sets of linguistic contexts’ is an impossible way of describing human linguistic activity. Choosing between alternative senses of a polysemous word does not arise when people speak. (…) Typically the meaning of a word is ephemeral, entirely a matter of the particular conversation taking place.

The guy is interesting and clearly has a sizeable ego:

I have tried to have these articles published in journals, so far without any success. The present presentation, when published in the Communications of the ACM, will in fact be the first presentation of the Synapse-State Theory of mental life to appear in a journal. So I am clearly at the beginning of that twenty year period that it usually takes to have a scientific breakthrough accepted.

He is clearly a trouble-maker and I’m sure that Communications of the ACM debated whether to publish his article. But it is hard to deny publication rights to a Turing Award recipient. Maybe some will regret that he was awarded the prize in the first place.

I am now expecting a debate to ignite. Naur has made it clear that he expects to be around for another 20 years to defend his point. This should prove interesting. Much is at stake.

Published by Daniel Lemire, a computer science professor at the Université du Québec (TELUQ).

2 thoughts on “Turing Award recipient tears apart artificial intelligence”

  1. My personal guess is that the brain may eventually be modeled. As for language (and other things), I think the main issue is that there is a small piece of architecture that is highly replicated (as in the visual system), but we just haven’t found it yet. (And since we can’t record from human brains with electrodes, it won’t come tomorrow!) But this is just a guess.

  2. Having read his Synapse-State Theory paper, I don’t quite know what to think. He seems to have a point, but I fail to perceive the big difference with today’s commonly accepted model of the animal brain:
    that the differences and plasticity of neural synapses make up the “matter” of thought (if that is precise enough to describe it).

    Perhaps he’d be more accepted by his scientific colleagues if he cared more about other people’s work: he would have to present sound arguments rather than simply dismissing the current accepted research.

    And about the time complexity of modelling mental activity: wouldn’t that be more like Ω(N·S), where N is the total number of neurons and S is the total number of synapses? The reasoning:

    A ‘worst case’ scenario would probably be an activity involving every single neuron (or at least a very large fraction of them). If every neuron fires, you have to update every synapse connected to it in order to compute its new state of conductivity.

    And if we try to estimate the number of synapses relative to the number of neurons, the (probably unrealistic) extreme is every neuron connected to all the remaining neurons, leaving us with (1/2)·n·(n−1) synapses (because the count is essentially the sum of s from s=1 to s=n−1).
    This leaves us with something on the order of Θ(n²) synapses (regardless of whether a synapse is unidirectional or not), so the Ω(N·S) bound becomes Ω(n·n²) = Ω(n³) for mental modelling; see the quick check after this comment.

    I certainly look forward to reading his book in order to see for myself if he’s on to something or has simply given in to his megalomania.
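As a quick sanity check on the counting argument in the second comment (a sketch of my own, not part of the comment), the short script below counts the unordered neuron pairs of a fully connected network directly and compares the result with n(n−1)/2; the last column illustrates how the Ω(N·S) bound then grows like n³:

    from itertools import combinations

    def synapse_count(n):
        # Synapses of a fully connected network of n neurons,
        # i.e. the number of unordered neuron pairs.
        return sum(1 for _ in combinations(range(n), 2))

    for n in (10, 100, 1000):
        assert synapse_count(n) == n * (n - 1) // 2  # closed form
        # With S = Theta(n^2), the Omega(N*S) bound is Omega(n^3).
        print(n, synapse_count(n), n * synapse_count(n))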
