What are we going to do about ChatGPT?

Generative artificial intelligence, and in particular ChatGPT, has taken the world by storm. Some intellectuals propose a worldwide ban on advanced AI research, enforced by military operations if needed. Many leading researchers and pundits propose a six-month ‘pause’, during which no one would train more advanced models.

The six-month pause is effectively unenforceable. We did lock down nearly the entire world to suppress covid, yet as the partygate scandal showed, high-ranking politicians were happy to sidestep the rules. Do we seriously think that the NSA and other major governments are going to honour a pause?

Covid also showed that what starts as a short pause can be renewed again and again, until it becomes unsustainable. Philosophers and computer scientists have spent the last fifty years doing research on artificial-intelligence safety: do you think we are due for a conceptual breakthrough in the next six months? Rogue, and sometimes helpful, artificial intelligences have been part of mainstream culture for a long, long time: HAL 9000 in 2001: A Space Odyssey (1968), Mike in The Moon is a Harsh Mistress (1966). I grew up watching Lost in Space (1965), where an intelligent robot routinely causes harm. Six months won’t cut it for a new philosophical breakthrough. Shelley’s Frankenstein was published in 1818. A case can be made that the monster is not the danger itself, but rather how human beings respond to it: in some sense, it is Frankenstein himself who causes the tragedy, not his monster.

Going nuclear and banning advanced AI research outright is flat-out unfeasible: it would almost certainly require going to war. You might object that we successfully stopped the proliferation of nuclear weapons… except that it is a bad analogy: what is suggested here is a complete worldwide ban. We never tried to ban nuclear weapons; we only limited them to powerful countries. If all you want is to keep advanced AI out of the hands of countries like North Korea, that can be arranged by cutting them off from the Internet and preventing them from importing computer equipment. But to stop Americans from doing artificial-intelligence research? When we tried to ban gain-of-function research on viruses, it moved to China.

Currently, only large corporations and governments can produce something akin to ChatGPT. However, it seems possible that the cost of training and running generative artificial-intelligence models might fall in the coming years. If it does, we might end up facing a choice between banning computing entirely or accepting powerful new artificial-intelligence software.

This being said, you can stall technological progress. It is even fairly easy. We basically stopped improving nuclear reactors decades ago by adding layers of regulations.

So while a worldwide ban or a pause is unlikely, it is possible to significantly harm artificial-intelligence research in parts of the world. We could require people who operate powerful computers to seek licenses. We could stop funding artificial-intelligence research. Would it work in China? Doubtful, but if we were to stop artificial intelligence in North America and Europe alone, it would harm progress.

Let us be generous: maybe we could stop artificial intelligence. We might freeze it in its tracks. Should we?

We have a case of the one-sided bet fallacy. The one-sided bet fallacy goes as follows: a lottery ticket costs effectively nothing, maybe a dollar, while you can win big (millions of dollars); thus you should buy a lottery ticket. It is a fallacy because the cost of the ticket is not negligible: you only made it zero in your head.
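As a toy illustration with made-up numbers (a $2 ticket, a $100 million jackpot, 1-in-300-million odds — none of these are from any real lottery), the expected-value arithmetic makes the hidden cost visible:

```python
# Toy expected-value calculation for a lottery ticket.
# All figures are illustrative assumptions, not real lottery data.
ticket_cost = 2.00
jackpot = 100_000_000
win_probability = 1 / 300_000_000

expected_winnings = jackpot * win_probability
expected_value = expected_winnings - ticket_cost

print(round(expected_winnings, 2))  # 0.33
print(round(expected_value, 2))     # -1.67
```

The ticket is worth about 33 cents in expectation, so every purchase loses money on average: the “negligible” cost dominates the bet.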

How is it an instance of one-sided bet fallacy? We assume that a pause is free, but that the risk is immense. Thus a pause seems like a good deal.

Human civilization is not stationary; it never was. That was the mistake Malthus made when he predicted that human beings were bound to constantly starve. His logic was mathematical: given more food, people reproduce exponentially until they exhaust the food supply. What his model left out is that productivity changes too.

For all we know, our near or medium term salvation could require more advanced artificial intelligence. Until we know more, we cannot tell whether the pause is more dangerous than continued progress.

In most of the world, we use less land for farming, and yet we produce more than enough food for a growing population. Nearly everywhere but in Africa, South America and South Asia, we have more and more land occupied by forest. In the recent past, worldwide poverty has been going down, except maybe for the covid era.

But shouldn’t we listen to the experts? Firstly, let us remember that an expert is someone who has experience. Nobody has experience with the breakthrough that is current generative artificial intelligence. Nobody knows how it will impact our society. Before we gain some experts, we need experience. Secondly, we should listen to all the knowledgeable people, not just those who are telling scary stories. Yann LeCun is just as much an expert as anyone in artificial intelligence, and he is entirely against the pause.

Am I saying that we should do nothing? Do not fall for the Politician’s syllogism:

  1. We must do something.
  2. This is something.
  3. Therefore, we must do this.

It is another fallacy that you must do something, anything. What you must do when facing the unknown is to follow the OODA loop: observe–orient–decide–act. Notice how action comes last? Notice how the decision comes after you have observed?

In my estimation, we are dealing with a disruptive technology. Historically, disruptive technologies have nearly wiped out entire industries, or entire occupations. Generative artificial intelligence will disrupt education: it is already doing so. But new technologies also bring tremendous benefits. McKinsey tells us that artificial intelligence might amount to 1.2 percent of additional economic growth per year. If so, that is an enormous contribution to society: turning it down is not an option if we want to get people out of poverty. I expect that we will see massive productivity gains in some industries, where a small team becomes able to do what previously required a large team. The transformation is never immediate. It often takes longer than you would expect. But it will be there, soon.

Daniel Lemire, "What are we going to do about ChatGPT?," in Daniel Lemire's blog, April 3, 2023.

Published by Daniel Lemire, a computer science professor at the University of Quebec (TELUQ).

7 thoughts on “What are we going to do about ChatGPT?”

  1. So far ChatGPT does not seem to be working with actual concepts, dealing with meaning, or producing actual information. Instead, it merely produces “information-shaped sentences,” as Neil Gaiman puts it on Twitter.

    Case in point: Consider this comment posted on my friend’s “April” project:


    There the commenter quoted GPT4, which surmised that an observed bug in April was due to it comparing memory addresses instead of doing a deep comparison. Its diagnosis sounds very plausible. Problem is, it’s just not true.

    My friend replies below it that April does not in fact compare memory locations. Apparently GPT4 just saw a whole bunch of discussions out there and assembled output with the most probable statistical correlations. It is merely “generative,” in the sense of a generative grammar that can produce an infinite set of strings in some language.
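    The distinction GPT4 invoked is a real one, even if it misapplied it to April (which is a Common Lisp project; the Python below is only an analogy, not April’s actual code). Identity comparison checks memory addresses, while deep comparison checks contents:

```python
# Identity ('is') compares memory addresses;
# equality ('==') compares contents element by element.
a = [1, [2, 3]]
b = [1, [2, 3]]

print(a is b)  # False: two distinct objects in memory
print(a == b)  # True: structurally equal contents

c = a
print(c is a)  # True: same object, hence same address
```

    A bug of the kind GPT4 described would arise from using the first comparison where the second was intended; the commenter’s point is that April simply does not do that.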

    As Colin Wright puts it on Twitter: “GPT can produce things that are right, but it also produces things that look like they’re probably right, but are absolutely wrong. So someone needs to check it.”

    I initially thought that GPT might be using something like “Conceptual Dependency Networks,” which I learned about at GA Tech in the 80s via Janet Kolodner. We wrote programs which “knew” about common scenarios like dining in a restaurant, and you could tell it a story and then ask questions about what happened in it. At least that was dealing with concepts of meaning. But I don’t think that’s how GPT works.

  2. Before even talking about banning AI and models, we should really understand what it is. A generative model that can generate some coherent-looking sentences does not mean it has “intelligence”. Rather than panicking about how “smart” it has become, we should worry more about whether it is actually quite stupid. The appearance of more powerful models is not scary; blindly trusting them without reasoning is.

    1. A generative model that can generate some coherent-looking sentences does not mean it has “intelligence”.

      You could say that about human beings as well.

    2. I could see GPT being very useful in search engines, so it might do a good job looking up facts about the 1964 Buick Skylark and the 1963 Pontiac Tempest. But I don’t think it (yet) has the conceptual knowledge needed to present a coherent argument like this:


      I think that’s a very different process from generating sentences which sound convincing merely because they are produced by what is in effect a “grammar” derived from observed probabilities during massive neural network training.

      AI can also be very useful in eliminating mental grunt-work, for example in symbolic algebra and calculus. But note there that the concepts are already well established and the outcomes are virtually guaranteed correct.

      I realize that there is a growing shortage of capability in human conceptual thought, but the impulse to eliminate the need for it altogether is very dangerous. People can die as a result.

      I’m not even sure neural networks by themselves can produce competent self-driving cars. Sure they don’t get tired and distracted like human drivers, but (so far) they lack the conceptual capacity of human drivers.

  3. When it comes to A.I. my fears are mostly related to the confusion, violence and abuse it might bring.

    After seeing the “Trump arrested” and the “Fashion pope” pictures, I imagine it won’t be long until people start to harass others by generating pornography and violent material featuring their “victims”.
    Fake news, backed by “generative” evidence, will become a thing. It will be hard to distinguish truth from falsehood. It’s hard even now, but give it 5-10 years.
    Online scams will flourish in creative and generative ways. Imagine being called by your grandkids asking you to send them money because they had an accident.
    Plagiarism in education will become the norm.

    Banning A.I. doesn’t solve the problem. We will just have to deal with new challenges, and as a society to be creative enough to adapt to disruption. It will be hard, and things are moving fast.
