Peter has come up with a list of 21 (important) open problems in the field of Artificial Intelligence. I am not aware of any such list anywhere, so this might be an important contribution. For comparison, Wikipedia has a list of open problems in Computer Science. In the field of databases, the closest thing to a list of open problems would be the Lowell report; however, it falls short of providing true open problems.

I am a bit surprised to see Learning Chess, but not Learning Go, on his list since I have the impression that Deep Blue has pretty much learned to play Chess at a very high level, whereas the same is not true of Go.

Out of Peter’s list, two of the open problems struck a chord with me:

  • Self-reference in software. I am no expert in AI, but it seems to me that the main mystery we are facing today, the deepest of all, is: what is consciousness? Some say that as computers grow larger, more connected, and more powerful, they will acquire consciousness. Maybe. I believe, however, that consciousness has to do with software that can reference and modify itself. Note that we could figure out what consciousness is without building a conscious robot, and we may build a conscious robot without knowing what consciousness is.
  • Approximate queries in databases. As we now have nearly unlimited storage, and as data is created and discarded faster than ever, we need smarter database systems, in the sense that they can provide human beings with exactly what they need, just like a human assistant would, only faster. The key here is probably to use lossy database engines and approximate representations. I like this topic because, while I am not an AI researcher, it is close to my interests. For related work, see our recent paper on OLAP Tag Clouds (to be presented at WEBIST 2008), our work on quasi-monotonic segmentations (to appear in IJCM), and my work on better piecewise linear segmentations (SDM 2007). This last paper is interesting because it was motivated by my frustration at defining what a flat segment is in a time series, a concept human beings can agree upon easily, it seems.
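To make the first bullet concrete, here is a minimal sketch (in Python, my choice for illustration) of software that references itself: a function that reads its own source code. This is mere introspection, of course, not consciousness, and the function name is hypothetical.

```python
import inspect

def describe_self():
    """Return the source code of this very function: a minimal
    form of software self-reference (introspection only)."""
    return inspect.getsource(describe_self)

source = describe_self()
# The function can read, and a program could rewrite, its own definition.
print(source)
```

A program that went one step further and *modified* the source it retrieves before re-executing it would be self-modifying in the sense discussed above.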
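The second bullet's idea of a lossy, approximate query engine can be sketched with uniform sampling: answer an aggregate query by scanning a small random sample instead of the whole table, trading exactness for speed. This is only an illustration under assumed synthetic data; the table and function names are hypothetical.

```python
import random

# Hypothetical table of one million sales amounts (synthetic data).
random.seed(42)
table = [random.uniform(0.0, 100.0) for _ in range(1_000_000)]

def approximate_sum(rows, sample_size=10_000):
    """Estimate SUM(amount) from a uniform random sample:
    sum the sample, then scale up by the sampling ratio."""
    sample = random.sample(rows, sample_size)
    return sum(sample) * len(rows) / sample_size

exact = sum(table)
approx = approximate_sum(table)
relative_error = abs(approx - exact) / exact
print(f"relative error: {relative_error:.3%}")
```

On uniform data like this, a 1% sample typically lands within a fraction of a percent of the exact answer, while touching one hundredth of the rows.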

4 Comments

  1. If you’re interested in self-reference and consciousness, then you will really enjoy Hofstadter’s latest book, “I Am A Strange Loop”, which is all about self-reference and consciousness.

    Comment by Peter Turney — 19/12/2007 @ 11:38

  2. AI has been solved.

    What is that below with the Roman numerals?!

    Comment by Mentifex — 21/12/2007 @ 1:01

  3. Problem 1: Representation in Design
    A fundamental problem for both artificial intelligence and design remains that of
    representation. What is it that a designer knows, and how do we get a computer to know
    it? Even if we are less concerned with what a human designer knows, we are still left with
    the question of what needs to be known in order to design, and how to get a computer to
    know it and use it.

    The early work on representing design knowledge as rules burgeoned into frames and
    semantic nets. More recently approaches based on conceptual schemas, conceptual
    graphs and distributed representations have been attempted. Whilst these approaches all
    add to our ability to represent, there is still a wide gap between what a designer ‘knows’
    when designing and what a computer-based design aid ‘knows’.

    See the following link for more:

    http://pctetalk.com/79-Ten_Problems_In_AI.html

    Comment by vickey — 18/2/2008 @ 8:53

  4. Deep Blue didn’t truly “learn chess”; in fact, it cheated. For every play, it generated several possible moves, which a team of chess experts then chose from. It was more like a team of chess experts playing the champion, nothing more.

    Comment by TeknoRapture — 22/8/2011 @ 8:00
