Scott Robinson

The Black Box Ding


Much is being written about how human-like generative AI seems, and just as much is being written about how that’s just an illusion; generative AI isn’t human-like at all. Both are true, in a way; it certainly does seem human-like. And it certainly isn’t really, under the hood.


And then there’s usually some mention of AGI (artificial general intelligence) as the actual next step toward human-like intelligence.


Yes, AGI is the next step; but describing AGI as “human-like” is terribly misleading.


The distinction between AGI and the AI that powers today’s generative apps is that word general, which essentially means that the AI of tomorrow will be able to handle a wide range of tasks and problems, rather than narrowly focusing on one (as most of today’s high-performing AIs do). This means the AI is able to generalize solutions from a single domain into other, often unrelated domains.


Human beings call this analogy – noting significant conceptual similarities between two things, and attaining proficiency with the new thing based on familiarity with the old thing.

That’s a very human cognitive skill. Cognitive scientist Douglas Hofstadter considers it the engine of intelligence, and it’s easy to see why: analogical thinking is incredibly efficient, making it far faster and easier for humans (and some higher animals) to solve problems. Its survival value is obvious.


So, when we say that AGI is “human-like”, this is the aspect of it that truly is. AGI will be intelligence that is analogical in nature, able to solve ranges of problems comparable to what humans can do.


Adam D’Angelo, CEO of Quora, defines it this way:


“I define AGI as the ability to do anything a human can do while working at a computer with Internet access,” excluding the physical dexterity and attributes required for manual tasks (robotics).


But that doesn’t mean AGI will be really human-like – not by a long shot.


Generative AI, alongside most of the truly impressive and useful AI in service today, is based on a technology called deep learning. That’s a kind of processing that loosely mimics the human brain by means of an artificial neural network: large arrays of simulated neurons wired together in layers, much as biological neurons are wired together in the brain. Exposure to massive amounts of data enables these networks to discover otherwise obscure patterns within that data – patterns that can be used to create algorithms of uncanny accuracy and granularity. If you’ve toyed with ChatGPT or one of the art-generating AIs, you have a sense of just how precise and detailed these results can be.

That exposure to data is called training, and the quality of the results the AI’s algorithms can produce is a reflection of the quality of that training.
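
To make the idea of training a little more concrete, here is a minimal, hypothetical sketch in Python (using NumPy, and nothing like the scale or architecture of a real generative model): a tiny network is shown examples, measures how wrong its guesses are, and nudges its connections to be a little less wrong, over and over.

```python
# Toy illustration of "training": a tiny network adjusts its weights to
# reduce error on example data. Everything here is invented for the sketch.
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs x and the outputs we want the network to learn (y = x^2).
x = rng.uniform(-1.0, 1.0, size=(200, 1))
y = x ** 2

# A small network: 1 input -> 16 hidden units (tanh) -> 1 output.
W1 = rng.normal(0, 0.5, size=(1, 16))
b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1))
b2 = np.zeros(1)

lr = 0.1
for step in range(2000):
    # Forward pass: the network's current guess.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2

    # The error ("loss") between the guess and the training targets.
    loss = np.mean((pred - y) ** 2)

    # Backward pass: nudge every weight in the direction that reduces the error.
    grad_pred = 2 * (pred - y) / len(x)
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T * (1 - h ** 2)
    grad_W1 = x.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

    if step % 500 == 0:
        print(f"step {step}: loss {loss:.4f}")

# After training, the network should approximate y = x^2 on inputs like the ones it saw.
print(np.tanh(np.array([[0.5]]) @ W1 + b1) @ W2 + b2)  # should land near 0.25
```

Real systems differ in almost every detail – scale, architecture, objectives – but that loop of guess, score, adjust is the part the word “training” refers to.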


AGI – the next step – will not be based on this concept. General intelligence, after all, isn’t about sifting through oceans of data. While human beings do that over time – learning to read, to play music, etc. – our problem-solving capacity actually reflects the opposite: we can often solve new problems with a minimum of new information, because our experience with problems we’ve already solved in the past informs our solutions to new ones.


AGI, then, is likely to come from new methods of combining narrow, single-specialty networks trained via deep learning into new models: AI that searches out the learning patterns displayed by the deep-learning networks themselves, then trains the new models to seek out and surface the similarities between them. To discover analogies, in other words. In those common patterns, the AGI will have paths to new solutions based not on new data from the big, wide world, but on other networks that have already lived in it, solving big problems.
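
None of this exists yet, so the following is pure illustration rather than anyone’s actual design. One crude, well-established way to “surface similarities” between two already-trained networks is to compare how they internally organize the same inputs; every name and number in this Python sketch is invented for the example.

```python
# Hypothetical illustration: compare the internal representations of two
# separately trained models by checking whether they "see" the same inputs
# in structurally similar ways (a simple representational similarity analysis).
import numpy as np

def representation_similarity(acts_a: np.ndarray, acts_b: np.ndarray) -> float:
    """acts_a, acts_b: (n_inputs, n_features) activation matrices from two models.

    The feature dimensions can differ. Each model gets an input-by-input
    cosine-similarity matrix; we then correlate the two matrices.
    """
    def rsm(acts):
        acts = acts - acts.mean(axis=0)          # center each feature
        norms = np.linalg.norm(acts, axis=1, keepdims=True)
        unit = acts / np.maximum(norms, 1e-12)   # unit-length rows
        return unit @ unit.T                     # input-vs-input cosine similarity

    a, b = rsm(acts_a), rsm(acts_b)
    mask = ~np.eye(len(a), dtype=bool)           # compare off-diagonal entries only
    return float(np.corrcoef(a[mask], b[mask])[0, 1])

# Fake activations standing in for two narrow, separately trained models.
rng = np.random.default_rng(1)
shared = rng.normal(size=(100, 8))                    # structure both models capture
acts_vision = shared @ rng.normal(size=(8, 64))       # "vision" model features
acts_language = shared @ rng.normal(size=(8, 32)) + 0.1 * rng.normal(size=(100, 32))

print(representation_similarity(acts_vision, acts_language))          # should be high
print(representation_similarity(acts_vision, rng.normal(size=(100, 32))))  # near zero
```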


We don’t have that yet, and it’s not all we’d need to develop true AGI – but we’re working on it, and it will be here soon.


The thing is...


When it arrives, we won’t quite know how it works.


Deep learning, the technique we’re already using to create narrow, extremely high-performing AI solutions, is a black box – meaning we often don’t know exactly what’s going on inside the neural network to generate the algorithm. Millions of simulated neurons with billions of connections between them, all constantly shifting values in dynamic patterns as they zero in on a solution, are almost impossible to track. We don’t know what those dynamic patterns are. So we don’t know how the AI came up with its solution.


It’s the same with the human brain. We know, in general, how it works, but have no way of discovering the precise operations of neurons and synapses and axons that happen in solving specific problems. We can only know which areas in the brain are switching on and off as it does its thing.


The result of these neural processes in a human brain, when solving a problem, is a little dopamine surge that happens when it’s completed. We observe, we think about what we’re seeing, we come up with a solution, and ding! We get a happy little dopamine hit, a message in our brain that says, “That’s right!”


This happens in human brains regardless of the type of problem being solved. If you’re working on a jigsaw puzzle and realize you’ve found the piece you’re looking for, you get that ding! If, on the other hand, you’re working on a problem that’s completely abstract – solving a Rubik’s Cube or Sudoku, for instance – you'll get the same ding! “That’s right!”


When we train a neural network via deep learning in pursuit of an algorithm targeting a specific problem, we set the ding ourselves: the network knows the desired output we’re after, and that’s what it goes for. When the finished AI application interprets its input, it responds with a ding based purely on the training ding.
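
In conventional supervised training, that ding is literally a number we choose how to compute: a loss function that scores the network’s output against target answers we supplied. A toy Python sketch (the labels and probabilities are made up):

```python
# The "ding" in supervised deep learning is defined by us up front:
# a loss function measuring distance from the answers *we* declared correct.
import numpy as np

def cross_entropy(predicted_probs: np.ndarray, target_label: int) -> float:
    """Smaller value = closer to the answer we told the network is 'right'."""
    return -float(np.log(predicted_probs[target_label] + 1e-12))

# The network's current guess for an image: 70% "cat", 20% "dog", 10% "bird".
probs = np.array([0.7, 0.2, 0.1])

# We, the trainers, decided label 0 ("cat") is the right answer.
print(cross_entropy(probs, target_label=0))  # low loss: the network gets its "ding"
print(cross_entropy(probs, target_label=2))  # high loss: no ding, weights get adjusted
```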


But in an AGI, it’s the AGI itself that locates the ding – the match point in the patterns it’s discovering in the AIs it’s working from. It’s setting its own ding; it’s creating its own sense of “rightness”. Put another way, with many of these new AGIs, we won’t know what’s triggering their sense of “That’s right!”


The implications of this aren’t at all trivial. When AI of this level of sophistication – “human-like” – arrives, we will be turning over bigger and bigger tasks and operations and (most concerning) decisions to it, on the pretext that, having become “human-like”, it bases what it does on what we do – just better and faster.


But the truth is much more troubling: AGI won’t be human-like intelligence at all. It will be intelligence of a completely new kind, unlike us, and unlike anything else.


And we won’t have a true understanding of how it’s doing what it’s doing, what exactly is happening in there, or why it is really coming up with the answers it does. We’ll just know it works.


On the one hand, in thinking in ways unlike our own, it will come up with stuff we never would; on the other, it may be capable of intelligence we can’t even conceive of, let alone understand.


Today, with generative AI, we are similarly in the dark, albeit to a lesser degree. We don’t know precisely how it works; we just know that it does – well enough to be insanely profitable.


Will we really be okay with that tomorrow?
