The Godfather of AI doesn’t understand what he’s built
He confuses the machine with the human being
The November 2023 issue of The New Yorker provided an in-depth profile of Geoffrey Hinton, often called “the godfather of AI” for his central role in developing the machine learning that underlies Generative Artificial Intelligence (GenAI). Hinton spent his life participating in the major breakthroughs in machine learning (ML). In 2013, when he was sixty-five years old, he sold his three-person startup to Google for $44 million. He kept working on machine learning, work that led to GenAI systems such as OpenAI’s ChatGPT. In 2023 he left Google. He was worried about the potential of A.I. to do harm, and began giving interviews in which he talked about the “existential threat” the technology might pose to the human species. The more he used ChatGPT, an A.I. system trained on a vast corpus of human writing, the more uneasy he became.
This is the second in a series of reviews of articles about GenAI in which I am seeking to discover the difference between AI and us human beings. That would seem a simple enough task, but the now-common habit of treating AI as if it were human confuses the issue. As in the previous post, I will quote from the article and add my comments. Let’s see what we can find by reflecting on how “the godfather of AI” views the outcome of his labors, Generative AI.
https://www.newyorker.com/magazine/2023/11/20/geoffrey-hinton-profile-ai
First, the author, Joshua Rothman, describes how the human brain works, as a way of understanding the pattern Hinton used in building machine learning.
In your brain, neurons are arranged in networks big and small. With every action, with every thought, the networks change: neurons are included or excluded, and the connections between them strengthen or fade. This process goes on all the time—it’s happening now, as you read these words—and its scale is beyond imagining. You have some eighty billion neurons sharing a hundred trillion connections or more. Your skull contains a galaxy’s worth of constellations, always shifting. New knowledge incorporates itself into your existing networks in the form of subtle adjustments. Sometimes they’re temporary: if you meet a stranger at a party, his name might impress itself only briefly upon the networks in your memory. But they can also last a lifetime, if, say, that stranger becomes your spouse. Because new knowledge merges with old, what you know shapes what you learn. If someone at the party tells you about his trip to Amsterdam, the next day, at a museum, your networks may nudge you a little closer to the Vermeer. In this way, small changes create the possibility for profound transformations. Hinton spent three decades as a computer-science professor at the University of Toronto—a leading figure in an unglamorous subfield known as neural networks, which was inspired by the way neurons are connected in the brain.
Although the computer algorithms in ML are structured in a way that takes the neural networks of the human brain as a pattern to emulate, they are not an artificial human brain; they are something completely different. This idea is expressed well by Jill Lepore in her article The Chit-Chatbot, where she reviews the history of developing machines that talk. She notes that in 1958, at a conference in England on “The Mechanization of Thought Processes,” Marvin Minsky of M.I.T. spoke about heuristic programming. Some of the presenters had argued that creating a machine that could think and talk would have to wait for science to penetrate the intricate workings of the human brain. Minsky argued otherwise, insisting that “even for those whose central interest is unravelling the mysteries of the human brain, it might be well to devote a major share of the effort, at the present time, to the understanding and development of the kind of heuristic considerations that some of us call ‘artificial intelligence.’” As Lepore notes, “you don’t need to imitate human intelligence; you can synthesize it instead—making something quite like it by making something entirely different.” So let’s start there: every time we read that GenAI uses neural networks to produce its outcomes, let’s remember that those networks are math-based machine algorithms creating probabilities. They are not an artificial human brain. They are a machine.
Whenever we learn, our networks of neurons change—but how, exactly? Researchers like Hinton, working with computers, sought to discover “learning algorithms” for neural nets, procedures through which the statistical “weights” of the connections among artificial neurons could change to assimilate new knowledge.
This is a clear explanation of the machine equivalent of the networks of neurons in the human brain: neural nets, algorithmic procedures through which the statistical weights of the connections among artificial neurons are changed to more accurately achieve intended outcomes. This is not thinking; this is calculating.
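To make concrete what “changing statistical weights” looks like, here is a minimal sketch of my own (not Hinton’s code, and far simpler than any real system): a single artificial neuron whose weights are nudged, step by step, to shrink the error between its output and a target. Everything in it is arithmetic.

```python
# A single artificial "neuron": a weighted sum passed through a squashing
# function, with weights adjusted by gradient descent. Illustrative sketch only.
import math

def neuron(weights, bias, inputs):
    # weighted sum of inputs, squashed to a value between 0 and 1
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def train_step(weights, bias, inputs, target, learning_rate=0.5):
    # "Learning" here is arithmetic: nudge each weight in the direction
    # that reduces the squared error between the output and the target.
    output = neuron(weights, bias, inputs)
    grad = (output - target) * output * (1.0 - output)  # derivative of the error
    new_weights = [w - learning_rate * grad * x for w, x in zip(weights, inputs)]
    new_bias = bias - learning_rate * grad
    return new_weights, new_bias

weights, bias = [0.0, 0.0], 0.0
for _ in range(2000):
    weights, bias = train_step(weights, bias, inputs=[1.0, 0.0], target=1.0)
print(round(neuron(weights, bias, [1.0, 0.0]), 2))  # output has been pushed toward 1.0
```

Scale this up to billions of weights arranged in layers and you have the substance of a modern neural net: the same arithmetic, repeated at enormous scale.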
But Hinton has joined many prominent technologists, including Sam Altman, the C.E.O. of OpenAI, in warning that A.I. systems may start to think for themselves, and even seek to take over or eliminate human civilization. It was striking to hear one of A.I.’s most prominent researchers give voice to such an alarming view. “People say, It’s just glorified autocomplete,” he told me, standing in his kitchen. “Now, let’s analyze that. Suppose you want to be really good at predicting the next word. If you want to be really good, you have to understand what’s being said. That’s the only way. So by training something to be really good at predicting the next word, you’re actually forcing it to understand. Yes, it’s ‘autocomplete’—but you didn’t think through what it means to have a really good autocomplete.” Hinton thinks that “large language models,” such as GPT, which powers OpenAI’s chatbots, can comprehend the meanings of words and ideas. Skeptics who say that we overestimate the power of A.I. point out that a great deal separates human minds from neural nets. For one thing, neural nets don’t learn the way we do: we acquire knowledge organically, by having experiences and grasping their relationship to reality and ourselves, while they learn abstractly, by processing huge repositories of information about a world that they don’t really inhabit. But Hinton argues that the intelligence displayed by A.I. systems transcends its artificial origins.
Hinton’s argument that LLMs like ChatGPT can comprehend the meaning of words and ideas is an example that shows he doesn’t understand what he has built. He thinks the intelligence displayed by AI systems transcends its artificial origins. He is projecting his own human intelligence into what he has built, forgetting, or choosing to ignore, that it is an artificial creation, a machine, while he, like all human beings, is something different: an organic being, created through myriad complex forces, living in reality. The machine is not alive; it is a mechanical creation running on mathematical algorithms.
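As a reminder of what “predicting the next word” amounts to mechanically, here is a toy sketch of my own, using simple counting over a made-up twelve-word corpus. A large language model replaces the counting with a neural network trained on a vast corpus, but its output is still a probability distribution over possible next tokens.

```python
# A toy next-word "autocomplete": count which word follows which in a corpus,
# then report the probability of each continuation. Illustrative sketch only.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

follow_counts = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    follow_counts[word][next_word] += 1

def next_word_probabilities(word):
    counts = follow_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# {'cat': 0.5, 'mat': 0.5} -- a calculation over counts, not comprehension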
In 1986, Hinton, Rumelhart, and Williams published a three-page paper in Nature showing how such a system could work in a neural net. They noted that backprop, like the Boltzmann Machine, wasn’t “a plausible model of learning in brains”: unlike a computer, a brain can’t rewind the tape to audit its past performance. But backprop still enabled a brainlike neural specialization. In real brains, neurons are sometimes arranged in structures aimed at solving specific problems: in the visual system, for instance, different “columns” of neurons recognize edges in what we see. Something similar emerges in a backprop network. Higher layers subject lower ones to a kind of evolutionary pressure; as a result, certain layers of a network that’s tasked with deciphering handwriting, for instance, might become tightly focused on identifying lines, curves, or edges. Eventually, the system as a whole can develop “appropriate internal representations.” The network knows, and makes use of its knowledge.
Here again we find Hinton treating the human being as if we were machines, e.g. that backprop wasn’t “a plausible model of learning in brains.” He is using “brains” as if they were a discrete function of the human being that can be investigated separately, yet when we think, our thinking is integrated with our feelings and actions, in living interrelatedness with the living, breathing world around us. Perhaps in Hinton’s treatment of the human being as a machine that can be replicated we find the reason the world is so confused about differentiating GenAI from the human being. If we are both machines, there’s not much of a difference. But if human beings are more than a machine, more than only a physical creation, the question arises: what are we? Who are we? How do we learn and know in ways that are different from machine calculations?
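For readers who want to see what backpropagation itself amounts to, here is a minimal sketch of my own: one backward pass through a tiny two-layer “network” (a single weight per layer, for clarity), with the result checked against a brute-force numerical estimate. It is the chain rule of calculus, applied as bookkeeping over numbers.

```python
# A single backpropagation step through a tiny two-layer network, checked
# against a brute-force numerical gradient. Illustrative sketch only.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w1, w2, x):
    h = sigmoid(w1 * x)          # hidden layer (one unit, for clarity)
    y = sigmoid(w2 * h)          # output layer
    return h, y

def loss(w1, w2, x, target):
    _, y = forward(w1, w2, x)
    return 0.5 * (y - target) ** 2

def backprop(w1, w2, x, target):
    h, y = forward(w1, w2, x)
    d_y = (y - target) * y * (1 - y)       # error signal at the output
    grad_w2 = d_y * h                      # chain rule, output layer
    d_h = d_y * w2 * h * (1 - h)           # error signal pushed back one layer
    grad_w1 = d_h * x                      # chain rule, hidden layer
    return grad_w1, grad_w2

w1, w2, x, target = 0.4, -0.6, 1.0, 1.0
g1, g2 = backprop(w1, w2, x, target)

# numerical check: nudge each weight a tiny amount and watch the loss change
eps = 1e-6
num_g1 = (loss(w1 + eps, w2, x, target) - loss(w1 - eps, w2, x, target)) / (2 * eps)
num_g2 = (loss(w1, w2 + eps, x, target) - loss(w1, w2 - eps, x, target)) / (2 * eps)
print(round(g1, 6), round(num_g1, 6))   # analytic and numerical gradients agree
print(round(g2, 6), round(num_g2, 6))
```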
In the early two-thousands, as multi-layer neural nets equipped with powerful computers began to train on much larger data sets, Hinton, Bengio, and LeCun started talking about the potential of “deep learning.” The work crossed a threshold in 2012, when Hinton, Alex Krizhevsky, and Ilya Sutskever came out with AlexNet, an eight-layer neural network that was eventually able to recognize objects from ImageNet with human-level accuracy.
I protest: recognizing objects with human-level accuracy is not the same as knowing what they are and their importance in how the world works, deciding how we feel about them, and making sense and meaning of how they fit into the overall world of our experience. We are more than a camera.
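It is worth seeing how modest “recognizing an object” is at the level of the machine. The sketch below (my own illustration, with made-up labels and scores, not AlexNet’s actual code) shows the final step of a classifier of this kind: turn raw scores into probabilities and pick the label with the highest one.

```python
# What "recognizing an object" means for a classifier, reduced to its final
# step: the network outputs a score per label, and "recognition" is choosing
# the label with the highest probability. Scores and labels here are made up.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["tabby cat", "golden retriever", "teapot"]
raw_scores = [4.1, 1.2, 0.3]           # hypothetical outputs of the final layer

probabilities = softmax(raw_scores)
best = max(range(len(labels)), key=lambda i: probabilities[i])
print(labels[best], round(probabilities[best], 3))
# "tabby cat" with high probability: a number, not an experience of a cat
```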
Hinton is convinced that there’s a real sense in which neural nets are capable of having feelings. “I think feelings are counterfactual statements about what would have caused an action,” he had told me, earlier that day. “Say that I feel like punching someone on the nose. What I mean is: if I didn’t have social inhibitions—if I didn’t stop myself from doing it—I would punch him on the nose. So when I say ‘I feel angry,’ it’s a kind of abbreviation for saying, ‘I feel like doing an aggressive act.’ Feelings are just a way of talking about inclinations to action.” Hinton said he had seen a “frustrated A.I.” in 1973. A computer had been attached to two TV cameras and a simple robot arm; the system was tasked with assembling some blocks, spread out on a table, into the form of a toy car. “This was hard, particularly in 1973,” he said. “The vision system could recognize the bits if they were all separate, but if you put them in a little pile it couldn’t recognize them. So what did it do? It pulled back a little bit, and went bash!, and spread them over the table. Basically, it couldn’t deal with what was going on, so it changed it, violently. And if a person did that you’d say they were frustrated. The computer couldn’t see the blocks right, so he bashed them.” To have a feeling was to want what you couldn’t have.
With due respect to the brilliance of Hinton’s achievements in machine learning, it shows an almost child-like ignorance to claim that neural nets are capable of having feelings. He bases this on the idea that feelings are just inclinations to action. Yet they are far more complex. Take as an example the climate activists who have thrown tomato soup at famous paintings like the Mona Lisa in order to raise consciousness about climate change. Yes, one can say that here feelings led to actions. Yet one can also stand in front of the Mona Lisa and feel its beauty and its relationship to the nobility of the human being without moving to action. Feelings are complex: they live between thinking and willing, and they are a bridge to understanding between people and the world. In claiming the machine can have feelings, Hinton shows he doesn’t understand what he has built.
How useful—or dangerous—will A.I. turn out to be? No one knows for sure, in part because neural nets are so strange. In the twentieth century, many researchers wanted to build computers that mimicked brains. But, although neural nets like OpenAI’s GPT models are brainlike in that they involve billions of artificial neurons, they’re actually profoundly different from biological brains. Today’s A.I.s are based in the cloud and housed in data centers that use power on an industrial scale. Clueless in some ways and savantlike in others, they reason for millions of users, but only when prompted. They are not alive.
Great point. Neural nets like OpenAI’s GPT models are not alive. They are based in the cloud and housed in huge data centers that use power on an industrial scale.
Intuitions might be understood as thoughts that we can’t explain: our minds generate them for us, unconsciously, by making connections between what we’re encountering in the present and our past experiences. We tend to prize reason over intuition, but Hinton believes that we are more intuitive than we acknowledge. “For years, symbolic-A.I. people said our true nature is, we’re reasoning machines,” he told me. “I think that’s just nonsense. Our true nature is, we’re analogy machines, with a little bit of reasoning built on top, to notice when the analogies are giving us the wrong answers, and correct them.”
Again we see the godfather of AI thinking of humans as machines. We need to separate his amazing contributions to machine learning from his misunderstanding of human beings as machines.
Hinton is often asked if he regrets his work. He doesn’t. When he began his research, he says, no one thought that the technology would succeed; even when it started succeeding, no one thought that it would succeed so quickly. Precisely because he thinks that A.I. is truly intelligent, he expects that it will contribute to many fields. Yet he fears what will happen when, for instance, powerful people abuse it. “You can probably imagine Vladimir Putin creating an autonomous lethal weapon and giving it the goal of killing Ukrainians,” Hinton said. He believes that autonomous weapons should be outlawed—the U.S. military is actively developing them—but warns that even a benign autonomous system could wreak havoc. “If you want a system to be effective, you need to give it the ability to create its own subgoals,” he said. “Now, the problem is, there’s a very general subgoal that helps with almost all goals: get more control. The research question is: how do you prevent them from ever wanting to take control? And nobody knows the answer.”
“If you want a system to be effective, you need to give it the ability to create its own subgoals,” he said. “Now, the problem is, there’s a very general subgoal that helps with almost all goals: get more control. The research question is: how do you prevent them from ever wanting to take control? And nobody knows the answer.” Here Hinton reduces the human being, learning and knowing, into a simplistic underlying formula that we exist only to get more control. This is Hinton’s general subgoal for learning in humans. Certainly this subgoal exists in human beings, yet there are others, for example, to love and give others what they need, even if this means sometimes giving up control. In conclusion: the godfather of AI doesn’t understand what he’s created, because he doesn’t understand the human being: what it means to be human. Until we grasp this, we are in danger of seeing ourselves as only machines, and thereby subjecting ourselves to them as our lord and master. Instead, let us wake up!