We’ve already noted that Douglas Hofstadter, in his seminal Gödel, Escher, Bach: An Eternal Golden Braid, established baselines for both human and artificial intelligence that help us wrap our minds around how the evolution of both might proceed.
In GEB, he offers a list of eight criteria for “intelligence”, and by intelligence, he means intelligent behavior – a human or AI acting in accordance with these criteria may be said to be intelligent.
It’s a thorough and thought-provoking list. But it’s a list derived from intelligence we already know about – our own (and that of some higher animals) – and it presumes that this aggregate of features of human intelligence is the threshold an AI must cross before we can admit it into that category. They are, in Hofstadter’s words, “essential abilities for intelligence.”
Before going any further, here they are.
Intelligence is the ability
to respond to situations very flexibly
to take advantage of fortuitous circumstances
to make sense out of ambiguous or contradictory messages
to recognize the relative importance of different elements of a situation
to find similarities between situations despite differences which may separate them
to draw distinctions between situations despite similarities which may link them
to synthesize new concepts by taking old concepts and putting them together in new ways
to come up with ideas which are novel
It’s a terrific list, isn’t it? Each of these is an aspect of human intelligence that yields behaviors critical to survival, problem-solving, decision-making, and cooperation. The list is sensitive to our sense of the past, present, and future. These abilities enable strategy, planning, and – the big one! – understanding.
Hofstadter asserts that an AI must achieve all eight to merit the designation “intelligent”; what he means is that an AI must hit all eight to achieve human intelligence. And the fact is, all eight together are a tough row to hoe: some have already been achieved, some will happen soon, and some are pretty far off.
We’re left with the reality that we will experience a succession of increasingly sophisticated AIs that are more and more general in nature – able to solve wider ranges of problems, and each distinct in some ways from its peers. And we will build some of them with only those abilities from the list that they need to perform optimally.
Not one AGI, but many, and each human-like to some degree. Various AIs presenting some of Hofstadter’s intelligence components, in diverse combinations.
Let’s go through each of the eight.
to respond to situations very flexibly
This one is problematic in its vagueness. “Flexible” we can work with in general, but if we’re talking about the flexibility of human intelligence, it covers a wide spectrum both in presentation (humans vary greatly in their capacity to adapt their responses) and in context (the range of situations in which humans can be cognitively flexible is considerable; must an AI be flexible in all of them?).
Writing in 1979, Hofstadter was responding to the state of the art in computer software, in which both the software and the hardware were rigid – inflexible by design. In those days, the more rigid the code, the better.
That hasn’t been the case for some time. We already have software in wide commercial deployment that is quite flexible and highly adaptable; the Google search engine, for instance, is powered by algorithms that employ more than a single search strategy and can adapt contextually, as input changes, to offer increasingly refined and granular results.
Human flexibility goes much further, of course; it can encompass scenarios like changing one’s route to a destination to increase the driving time, because one is enjoying a conversation with a traveling companion; or switching from pliers to a wrench because a fixture is stubborn; or swapping out electives in an upcoming college class schedule to accommodate part-time work requirements. These adaptations – and there are, of course, almost infinitely more – demonstrate the vast range of flexibility humans are capable of.
An AGI doesn’t need to accommodate them all, or even a significant fraction of them. We could say that an AGI has achieved sufficient flexibility to be designated intelligent if it can internally shift from one solution algorithm to another, in a quest to solve a difficult problem – or, going one better while preserving our point, spontaneously generate a new algorithm better suited to the problem.
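To make that concrete, here is a minimal sketch in Python of what “shifting from one solution algorithm to another” might look like. The toy problem, the two strategies, and the size cutoff are all invented for illustration; the point is only that the system chooses among approaches rather than following one rigid procedure.

```python
# A minimal sketch of "shifting from one solution algorithm to another."
# The problem, the strategies, and the size cutoff are hypothetical; the
# point is only that the system selects among approaches rather than
# following one rigid procedure.

from itertools import combinations

def greedy_strategy(items, capacity):
    """Fast heuristic: take the best value-per-weight items that still fit."""
    total_weight, total_value = 0, 0
    for weight, value in sorted(items, key=lambda x: -x[1] / x[0]):
        if total_weight + weight <= capacity:
            total_weight += weight
            total_value += value
    return total_value

def exhaustive_strategy(items, capacity):
    """Slow but exact: try every subset (only viable for small inputs)."""
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for w, _ in combo) <= capacity:
                best = max(best, sum(v for _, v in combo))
    return best

def solve(items, capacity):
    """Shift strategies based on the shape of the problem at hand."""
    if len(items) <= 15:                      # small enough to solve exactly
        return exhaustive_strategy(items, capacity)
    return greedy_strategy(items, capacity)   # otherwise, approximate

print(solve([(3, 10), (5, 40), (2, 15)], capacity=6))   # exact answer: 40
```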
We can safely claim that situational flexibility already exists in AIs, and will only get better. Human-like? No, but it’s hard to imagine why it would ever need to be.
to take advantage of fortuitous circumstances
This, too, is vague when it comes to human performance. What do we mean by “take advantage”? What constitutes “fortuitous circumstances”? The human range of answers to both questions is inexhaustible.
We define “fortuitous” as an event or condition that presents itself unexpectedly, whether by chance or outside any design. For both human and AI purposes, we can bundle such events and conditions under the banner “unexpected”.
This criterion goes beyond simply responding to the unexpected; it calls for exploitation of the unexpected. That is certainly an ability humans have, and is unquestionably of great value.
A human being, upon finding a forgotten $50 bill tucked in a pocket in a wallet, might spend that money in the moment to buy a gift for a friend who is feeling down and needs cheering; or someone hopelessly lost in the woods might hear an incidental blast from a semi-truck's horn, far in the distance, and realize a highway lies in that direction.
For an AI, “fortuitous circumstance” would be defined in the context of those inputs from which it receives data (and the range of that data), as well as the arrays of responses it has been built to produce, as a result of its training. It is possible, and often preferred, that the AI be constrained from responding to anything “fortuitous”, or unexpected.
There are now, however, a great many AI applications deploying in the world that will absolutely need to respond to the unexpected. Self-driving vehicles must be able to react instantly to unpredictable conflicts; robot surgeons will have to adapt to unforeseen events on an operating table. Responding to the unexpected is one of the abilities we’re most counting on.
And that’s already part of our design. But Hofstadter is pushing us further: he wants, not just response to the unexpected, but exploitation of the unexpected.
Let’s say an AI is studying the market, picking stocks, trading autonomously – managing a portfolio. In this scenario, an epidemic could wipe out herds of cattle in the Southwest, radically upsetting the regional economy and sending shock waves through the national beef market. Detecting this unpredictable event, the AI would alter its buying and selling strategy – shifting away from the affected stocks and buying into chicken ranches. That satisfies Hofstadter’s criterion: it’s taking advantage of a fortuitous circumstance.
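A toy sketch of that behavior might look like the following. The tickers, the price feed, and the shock threshold are all invented for illustration; real trading systems are enormously more involved, but the shape of the logic is the same: detect the unexpected event, then reallocate to exploit it.

```python
# A toy sketch of "exploiting the unexpected" in the portfolio scenario above.
# Tickers, prices, and the 15% shock threshold are invented for illustration.

portfolio = {"BEEF_CO": 0.40, "CHICKEN_RANCH": 0.10, "BONDS": 0.50}

def detect_shocks(price_history, threshold=0.15):
    """Flag any holding whose latest price dropped sharply and unexpectedly."""
    shocked = []
    for ticker, prices in price_history.items():
        if len(prices) >= 2 and (prices[-2] - prices[-1]) / prices[-2] > threshold:
            shocked.append(ticker)
    return shocked

def rebalance(portfolio, shocked, beneficiary="CHICKEN_RANCH"):
    """Shift weight out of the affected holdings and into a substitute."""
    for ticker in shocked:
        portfolio[beneficiary] += portfolio[ticker]
        portfolio[ticker] = 0.0
    return portfolio

history = {"BEEF_CO": [100, 72], "CHICKEN_RANCH": [50, 51], "BONDS": [98, 98]}
print(rebalance(portfolio, detect_shocks(history)))
# -> {'BEEF_CO': 0.0, 'CHICKEN_RANCH': 0.5, 'BONDS': 0.5}
```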
And financial AIs are already doing that. Tick that box off, and look for this ability to grow and expand as AI proceeds.
to make sense out of ambiguous or contradictory messages
This one is more clear-cut, even though (again) the range of human response is vast. Resolving ambiguity in communications is even more interesting because we would have to expand it beyond human communication to include “ambiguous or contradictory signals” and “ambiguous or contradictory data”.
The human beings who are good at this become good at it in one very emphatic way: through experience. To sense ambiguity or contradiction in the first place is a consequence of experience, and to work through it is likewise something a person learns through repeated exposure – to act on a perceived interpretation, then learn whether the action was the correct response or not.
This is classical learning, and it’s what modern AIs do.
The caveat is that an AI can only achieve this capacity for resolving ambiguity or contradiction in the same way humans do – through experience. Thus, if an AI is going to need this ability, it will have to be trained for it, through repeated exposure to data that is ambiguous or contradictory.
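A bare-bones illustration of that training loop: the learner acts on an interpretation, gets feedback, and over repeated exposures settles on the reading experience has most often confirmed. The “bank” example and the class here are invented; this is a sketch of the principle, not of any particular system.

```python
# A bare-bones sketch of resolving ambiguity through repeated exposure: act on
# an interpretation, get feedback, and let experience accumulate. The "bank"
# example and everything else here is invented for illustration.

from collections import defaultdict

class AmbiguityResolver:
    def __init__(self):
        # counts[message][interpretation] -> times that reading proved correct
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, message, interpretation, was_correct):
        """One round of experience: an interpretation was confirmed or not."""
        if was_correct:
            self.counts[message][interpretation] += 1

    def interpret(self, message):
        """Prefer whichever interpretation experience has confirmed most often."""
        readings = self.counts[message]
        return max(readings, key=readings.get) if readings else None

resolver = AmbiguityResolver()
resolver.observe("bank", "riverbank", was_correct=True)
resolver.observe("bank", "financial institution", was_correct=True)
resolver.observe("bank", "financial institution", was_correct=True)
print(resolver.interpret("bank"))   # -> financial institution
```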
Even so – we're already there.
to recognize the relative importance of different elements of a situation
The problem with this one is that it’s not just vague; it would be, in the real world, subjective in a vast range of situations.
How is “relative importance” measured? That is an entirely contextual question, depending on the scenario, and the answer can vary among the humans facing it – five people in the same situation might score the “relative importance” of its elements differently.
What we will need in AI, much more often than not, is consistency in making the “relative importance” call when necessary. And, much more often than not, that relative importance will be objective. If I’m a self-driving car, for instance, and I sense a human pedestrian and a dog nearby, I assign higher priority to the pedestrian than to the dog in my deliberations. That is not a subjective assignment.
We’re left with a simple training requirement, then, to imbue AIs with this human intelligence ability for specific tasks.
What about generalizing? What about assigning “relative importance” across domains?
To some degree, that, too, is a training problem: generalization can fail when context shifts. If I am a robot instructed to get into a car and drive it, I assign the human pedestrian higher importance than the dog; but if I am then assigned to throw a bag over the dog and deliver it to the dog pound, I must assign the dog higher priority than the pedestrian.
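A toy sketch of that idea, with relative importance keyed to the task at hand rather than fixed once and for all; the tasks, elements, and weights are invented for illustration.

```python
# A toy sketch of task-dependent "relative importance," as in the driving vs.
# dog-catching example above. Tasks, elements, and weights are invented.

PRIORITY_TABLES = {
    "drive_car": {"pedestrian": 1.0, "dog": 0.6, "traffic_cone": 0.2},
    "catch_dog": {"dog": 1.0, "pedestrian": 0.5, "traffic_cone": 0.1},
}

def rank_elements(task, observed_elements):
    """Order the elements of a situation by their importance for this task."""
    weights = PRIORITY_TABLES[task]
    return sorted(observed_elements, key=lambda e: weights.get(e, 0.0), reverse=True)

scene = ["traffic_cone", "dog", "pedestrian"]
print(rank_elements("drive_car", scene))   # ['pedestrian', 'dog', 'traffic_cone']
print(rank_elements("catch_dog", scene))   # ['dog', 'pedestrian', 'traffic_cone']
```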
At this point, we’re beyond consideration of this ability itself; we’re into the problem of generalization, which is a bigger one, and which will have to wait.
to find similarities between situations despite differences which may separate them
to draw distinctions between situations despite similarities which may link them
to synthesize new concepts by taking old concepts and putting them together in new ways
These three are of a piece, and to deal with either of the first two is to deal with both; and the third has much in common with analogical thinking, which we discuss elsewhere.
Comparing two situations and perceiving the similarities between the two in detail was a tough problem, back when Hofstadter wrote this one. In the Seventies and Eighties, there was no big data, no data mining – only static features defined in limited data, compared to one another just as statically.
Today, situational elements and contextual features can be learned from large bodies of data, and compared with astonishing granularity – much more precisely, it has to be said, than human beings can manage in many situations. So the problem of establishing ad hoc boundaries and giving them categorical significance (for decision-making purposes) in the moment is already well within the abilities of deep learning AI.
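As a bare-bones illustration of the comparison step, here is roughly how two situations can be compared once their features have been turned into numbers. The feature vectors below are invented; in a real deep learning system they would be learned embeddings.

```python
# A bare-bones sketch of comparing two situations by their features. The
# vectors here are invented; in a deep learning system they would be learned
# embeddings, but the comparison step looks essentially like this.

import math

def cosine_similarity(a, b):
    """How alike two feature vectors are, independent of their magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical features: [is_outdoors, involves_vehicle, time_pressure, crowd_size]
highway_merge   = [1.0, 1.0, 0.8, 0.2]
freeway_offramp = [1.0, 1.0, 0.9, 0.1]
crowded_lobby   = [0.0, 0.0, 0.1, 0.9]

print(round(cosine_similarity(highway_merge, freeway_offramp), 3))  # high: similar situations
print(round(cosine_similarity(highway_merge, crowded_lobby), 3))    # low: different situations
```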
Mapping them, however – applying those ad hoc boundaries and categories in new situations – takes us into analogy territory. This essential human cognitive ability, articulated in Hofstadter’s list only in this entry, and even there only by implication, is thus far not something deep learning can deliver, as there is no extant methodology for achieving it. We understand it in principle, and there are approaches we can take; we just haven’t taken them yet.
But Hofstadter’s wording goes beyond analogy itself. It calls not only for the mapping of knowledge from one domain to another, but for the ability to create something new from something old. Synthesis.
That is the pinnacle of cognitive elegance, as it exists in the realm of abstraction, above the plane of primate survival requirements. Our ability to create new things out of existing things is an evolutionary gift, and we have capitalized on it for our own survival – and the lofty sophistication of it should not go unappreciated.
And that’s before we get to concepts – one of Hofstadter’s favorite things of all time, the very cornerstone of his career. Concepts – cognitive representations, not of actual objects, but of the ideas behind those objects. Frameworks within which we interpret events we observe and experience. A kind of categorization that goes beyond the routine bucketing of things in the world, defining their attributes in ways that give us much greater utility in thinking about them and working with them.
We can only get to concepts after we achieve analogy. Both are well-understood, and Hofstadter himself has been the pacesetter for four decades in modeling them and pushing back the boundaries of their potential implementation in his research.
But we don’t yet have a way to build them into deep learning-based AI. Or, more precisely, we don’t have a single, preferred methodology that we can generally deploy. We may have it soon, we may not.
to come up with ideas which are novel
Again, vague – what is an idea? What do we mean by novel?
To a human being, these are easily defined concepts. We know what it is to have an idea, and we know what it means for that idea to be new – not among the ideas we already harbor, and sometimes even an idea no one has had before.
And, as human beings, we know that the domains in which novel ideas can be found are endless – which boosts the vagueness of this criterion by orders of magnitude.
For the purposes of a general AI, however, we can at least rein in the problem. “Idea” can be one of many things: a solution; a recommendation; a created object (such as generative text, art, or music). All of these can be generated by today’s more advanced AIs.
As for “novel” – that can mean many things, and there is great nuance in it. Do we mean new, as in never before seen? Merely beyond the AI’s own experience? Unprecedented – unlike anything anyone has seen before? Something in between? We’re already into speculative territory here, beyond what AI is doing, at least in the commercial realm.
We can clarify this in the superhero domain. Luke Cage, aka Power Man in the Marvel universe, is really, really strong and indestructible. He was new – and, to many readers, novel – but was he really? There are many other really, really strong superheroes who are indestructible.
Firestorm, DC’s Nuclear Man, however, is a superhero we could call truly novel. Why? Because he’s not one person, but two: he is teenager Ronnie Raymond, merged with scientist Martin Stein. They exist as two separate people most of the time, except when they fuse together to become Firestorm. That’s a truly novel superhero.
As things stand today, in 2023, we have some drawing-board ideas and a few useful and intriguing research models that suggest future courses of action for this kind of AI – but nothing that truly satisfies Hofstadter’s criterion. And the boots-on-the-ground AI that does create – generative AI such as ChatGPT – is derivative and not truly original.
A framing of the problem that might prove useful is this. I wrote an essay not long ago called “Can an AI be CEO of Apple?” In that essay, I pointed out that many CEO functions – the generation of uplifting rhetoric, high-level approvals, evaluation of industry trends, and so on – are indeed within AI’s purview, at least in principle.
But can an AI come up with a new product that the world isn’t asking for? Can it realize that it’s something everybody needs – they just don’t know it yet? Can it look at a cellular flip-phone and see an integrated phone, laptop, and music/video player?
I'm not sure anyone can put forth a framework for such thinking that’s achievable with today’s technology.
To be fair, Hofstadter isn’t setting the bar nearly as high as Steve Jobs when he calls for the ability to come up with novel ideas; he’s thinking more along the lines of creating candles that smell like bubble gum. But this extreme example clarifies just what we’re talking about when we get into the realm of true creativity – where generative AI can’t yet go – and what it means to come up with something truly new and unique.
Hofstadter’s list is truly useful, then, and it frames AI in both pragmatic and speculative terms that give us much to work with. But, as noted above, when we check off boxes on this list – leaving out some abilities and mixing and matching the rest – we get not one AI but many:
We get AGI that’s flexible and ambiguity-sensitive, but not analogical;
We get AGI that’s flexible and opportunistic, and even analogical, but doesn’t prioritize;
We get AGI that can assess new scenarios, break them down and select the relevant points of interest, but only in singular domains.
Not one AGI, but many. None fully human-like, but human enough.
This isn’t a weakness; on the contrary, it speaks to the sheer range of the intelligence that’s about to emerge into the world. Hofstadter’s list isn’t all-or-nothing; it’s a menu, and it’s a pretty tantalizing one.