Scott Robinson

AI, Generally Speaking

Updated: May 18, 2023



Much of the disquiet over AI’s rapid advances stems from the growing fear that it will very soon catch up with us, and then blow right past us. A few minutes with ChatGPT is enough to stir that fear in many, and the endless hype spilling out across social media only amplifies it.


It’s also disquieting that so many of tech’s biggest names, as well as some of the AI field’s leading experts, are part of the growing chorus expressing those fears: Elon Musk. Bill Gates. Geoffrey Hinton. They are quick to point out that the AI that’s out there today, based on a technology called deep learning, can do us no real harm; it’s the AI right around the corner that’s the real danger. But that danger, they maintain, will arrive sooner rather than later.


They’re talking about artificial general intelligence – AGI – and the danger is certainly real.

Today’s seemingly miraculous new AI technologies – generative AI chatbots, for instance – are based on deep learning. This methodology loosely mimics the operations of the human brain, building its algorithms on a foundation of artificial neural networks – systems that learn the way human beings learn: by detecting patterns in data that aren’t apparent to us and constructing models that reproduce them. This works brilliantly, and it’s not just for shiny chatty toys; it’s the foundation of our AI-powered vision systems (facial recognition in particular), our progress in self-driving vehicles, and many other new applications.
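
To make that concrete, here’s a minimal sketch of the idea – a toy network, nothing like a production system, with every size and number chosen purely for illustration. Handed nothing but example inputs and outputs, it finds the pattern (here, XOR) on its own:

```python
# Toy version of the deep-learning idea: a tiny neural network with one
# hidden layer learns the XOR pattern from examples alone, with no rules
# written by hand. Purely illustrative; real systems are vastly larger.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: inputs and the XOR outputs we want it to discover.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of randomly initialized artificial "neurons".
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10_000):
    # Forward pass: push the inputs through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to shrink the prediction error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]: pattern learned
```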


This technology tends to be applied to narrow, single-task applications, where it excels. Given a single dedicated mission, deep learning can deliver a solution that blows the doors off human performance. But while that might be enough to make us very nervous, it’s not the thing that gives even the experts a sense of dread. It’s that coming-up-next thing: AGI.

Human beings can learn to do specific tasks extremely well, but the specialness of our intelligence transcends that ability. Spiders, after all, spin webs extremely well; birds navigate cross-continent extremely well. What makes us special is not our task-specific competence, but our general intelligence: our ability to take a solution we’ve concocted for one problem and apply it to a completely different one, and get a good result.


Deep learning exploits a key component of general intelligence, to be sure: pattern-finding. Until AI came along, human beings were the most gifted pattern-finders in the universe. It is central to the survival of all animal species, and our facility with it is central to our domination of the food chain.


But there are other key components to general intelligence:


  • Analogical thinking. The core of our intelligence, per cognitive scientist Douglas Hofstadter, is analogy – our ability to observe that this is like that, both in our observations of the world and in our abstract concepts. When we create an analogy, we transfer knowledge and understanding we’ve acquired about one thing and apply it to another. Without this incredible ability, we would not be running the planet.

  • Social learning. We don’t just acquire knowledge and understanding from our own memories and experiences; we also acquire them from the testimony of others, when they relate experiences of their own that we have not shared. I do not need to learn first-hand that the fire will burn me; I take my mom’s word for it. This, too, is an incredible ability, and one that only a handful of other creatures have mastered.

  • An internal model of the world. As we live our lives, our accumulated experiences and our memories of them assemble within our minds a model of reality. It’s different for each of us, though our internal models will certainly have much in common (animals, too, have this experience, as the lowly mouse taught us in the laboratory). It is by referencing this internal world model that we are able to identify analogies and act on them.


There are others, but with these three, AGI is on its way. When we have AI that can create analogies, drawing from a consolidated inner map of the world built not only out of its own experience but also out of the experience of other AIs (and humans), we’ll have AGI.


The thing is... we already have most of these parts.


We are already able to combine AI models into larger models; we even have an array of well-established methodologies to choose from. Internal world-building in an AI is already possible, and is actively being worked on.
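
Ensembling is one of those methodologies: several independently trained models are merged into a single composite that typically outperforms any one member. Here’s a minimal sketch using scikit-learn; the dataset and the particular models are illustrative choices only:

```python
# Combining models into a larger model via ensembling: three different
# "narrow" models vote, and the composite averages their judgments.
# A minimal illustrative sketch, not any particular production system.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in data, purely for demonstration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three independently trained members, each with its own strengths...
members = [
    ("logreg", LogisticRegression(max_iter=1000)),
    ("forest", RandomForestClassifier(random_state=0)),
    ("knn", KNeighborsClassifier()),
]

# ...merged into a single composite that averages their predicted probabilities.
ensemble = VotingClassifier(estimators=members, voting="soft")
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))
```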


AIs can already learn from other AIs; we have developed neural networks whose task it is to train other neural networks. They will get better and better.
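
Knowledge distillation is one established form of this: a small “student” network learns not from raw labeled data but from the judgments of a larger “teacher.” Below is a minimal PyTorch sketch of the mechanism – the network sizes, stand-in random data, and hyperparameters are all assumptions for illustration (and the teacher here is an untrained stand-in; in practice it would be a finished model):

```python
# Knowledge distillation: one neural network trains another. The student
# is optimized to reproduce the teacher's output distribution rather than
# hard labels. Sizes, data, and hyperparameters are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 10))

teacher.eval()  # stand-in for an already-trained model; it only gives answers
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's outputs so more signal transfers

for step in range(1000):
    x = torch.randn(64, 20)  # stand-in for real training inputs
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / T, dim=-1)

    # The student is trained to match the teacher's judgments.
    student_logp = F.log_softmax(student(x) / T, dim=-1)
    loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean")

    opt.zero_grad()
    loss.backward()
    opt.step()
```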


All we lack, in the list above, is the ability for an AI to draw from its inner world map and identify analogies. And that is – wait for it! – a pattern-matching task, at which AIs already excel. It’s a matter of creating deep-learning AI systems whose task it is to study the internal workings of other AIs and identify patterns, perfecting that task until it can be incorporated into a consolidated model.
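
The classic demonstration of machine-made analogy is word embeddings, where “a is to b as c is to ?” reduces to simple arithmetic over a learned map of concepts – the famous king − man + woman ≈ queen effect. The toy vectors below are hand-made so the relationship holds; a real system would learn such a map from data:

```python
# Analogy as pattern-matching over an internal map: each concept is a point
# in a shared space, and "a is to b as c is to ?" is solved by vector
# arithmetic. The vectors here are contrived to show the mechanism.
import numpy as np

# A toy "inner world map"; real embeddings are learned, not hand-written.
vocab = {
    "man":   np.array([1.0, 0.0, 0.0]),
    "woman": np.array([1.0, 1.0, 0.0]),
    "king":  np.array([1.0, 0.0, 1.0]),
    "queen": np.array([1.0, 1.0, 1.0]),
}

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' by nearest neighbor to b - a + c."""
    target = vocab[b] - vocab[a] + vocab[c]
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return min(candidates, key=lambda w: np.linalg.norm(candidates[w] - target))

print(analogy("man", "woman", "king"))  # -> "queen"
```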


Put another way, AGI is not a century away, as some predict. Geoffrey Hinton, mentioned above – one of the pioneers of deep learning, and formerly Google’s top AI researcher – gives it 20 years at most. Some of his peers are saying 10.


But of this, we can be certain: just as deep-learning AIs have already left us in the dust at the narrow tasks they’re given, so will each of these other pieces be better and faster than their human equivalents. When they are finally integrated into a single AI – an AGI – it will already be a better general intelligence than the human brain.


The first true AGI won’t emerge into the world as a monkey-like thing that is cute and cuddly but not quite as smart as we are, setting the stage for the v2. The v2 won’t be the equivalent of a human two-year-old, promising but not all that capable; and the successor to that version won’t be equal-but-not-better. There won’t be a slow parade of incremental improvements, giving us time to wrap our minds around AGI and prepare for it.


When it arrives, it will already be far beyond us. It will be born as superintelligence, every facet of its operation already performing far beyond human norms.


That’s what the experts are afraid of. That’s what we need to be thinking about and preparing for, right now, not 20 years from now.


“We’ll be completely caught off guard,” Hofstadter himself cautioned. “We’ll think nothing is happening and all of a sudden, before we know it, computers will be smarter than us.”
