Scott Robinson

Grow from Go

Updated: May 27, 2023


Above, Melanie Mitchell has reminded us that the abstract games of chess and Go – two arenas in which AI has put human beings firmly and emphatically in their place – are, despite their abstraction, perfect intellectual training grounds for young minds. Parents, she points out, go out of their way to encourage their kids to learn chess and Go, in order to learn strategy, planning, logical thinking – winning.


And, above, we’ve also considered that analogical thinking – transferring knowledge and solutions from one domain to another, to solve problems and make decisions more efficiently – is a hallmark of human intelligence, one that will set today’s narrowly focused AIs apart from tomorrow’s general AIs.


Now, let’s consider both at the same time.


We’ve speculated that analogical processing might build on deep learning by mining the processing patterns of the models themselves – the patterns they exhibit as they mine the data – and using those patterns as templates for deep learning models in other domains.


Consider, then, using a deep learning AI to capture from a chess AI not the gameplay itself, nor the rules it has internalized, but the patterns of discovery in its learning about obstacles, threats, traps – meta-patterns of the AI’s patterns. Not what it learned, but how it learned. Embedded in those meta-patterns are not the AI’s strategies and planning, but the concepts of strategy and planning.
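To make that concrete: below is a minimal sketch, in PyTorch, of what "capturing how it learned" could look like. Everything in it is an assumption chosen for illustration: the toy network, the random stand-in data, and the use of per-layer gradient norms as the recorded meta-pattern. It is a sketch of the idea, not an established recipe.

```python
# Speculative sketch: record a source model's learning dynamics.
# The toy network, random stand-in data, and the use of per-layer
# gradient norms as the "meta-pattern" are illustrative assumptions.
import torch
import torch.nn as nn

source_model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.Adam(source_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

meta_patterns = []  # one row of per-layer gradient norms per training step

for step in range(1000):
    x = torch.randn(32, 64)  # stand-in for encoded chess positions
    y = torch.randn(32, 1)   # stand-in for position evaluations
    loss = loss_fn(source_model(x), y)
    optimizer.zero_grad()
    loss.backward()
    # Keep the shape of each update (how learning proceeded), not the weights.
    meta_patterns.append(torch.tensor(
        [p.grad.norm().item() for p in source_model.parameters()]))
    optimizer.step()

trajectory = torch.stack(meta_patterns)  # shape: (steps, parameter tensors)
```

What survives this run is `trajectory`, a record of how learning effort moved through the network over time – not a record of what the network ended up knowing.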


Now consider building a new AI from the model that has already trained on the meta-patterns of the first, using those meta-patterns to govern the goal-seeking feedback processes in its own learning, with the goal of analogizing strategy and planning, per the style of the original model, into the new one. If that new model is, say, a stock portfolio manager, it will strategize and plan its buying and selling in the manner of a chess champion (or Go champion, or champion of whatever game the original AI had mastered).
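Continuing the sketch above, one hypothetical way for the meta-patterns to "govern the goal-seeking feedback" is to reshape the new model's gradient updates so that their per-layer emphasis follows the source trajectory. The `trajectory` tensor, the toy portfolio network, and the gradient-rescaling rule below are all assumptions made for illustration.

```python
# Speculative sketch: bias a new model's training with the source's
# recorded trajectory. Assumes `trajectory` from the previous snippet
# and a target network with the same number of parameter tensors (four).
import torch
import torch.nn as nn

target_model = nn.Sequential(nn.Linear(20, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(target_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(1000):
    x = torch.randn(32, 20)  # stand-in for market features
    y = torch.randn(32, 1)   # stand-in for portfolio returns
    loss = loss_fn(target_model(x), y)
    opt.zero_grad()
    loss.backward()
    # The source's per-layer emphasis at the corresponding step...
    profile = trajectory[min(step, trajectory.shape[0] - 1)]
    profile = profile / profile.sum()
    # ...is imposed on the target's gradients: the total update magnitude
    # is preserved, but its distribution across layers follows the source.
    grads = [p.grad for p in target_model.parameters()]
    total = sum(g.norm() for g in grads)
    for g, share in zip(grads, profile):
        g.mul_(share * total / (g.norm() + 1e-8))
    opt.step()
```

Whether this particular rule captures anything like "strategy and planning" is the open question; the point is only that the meta-patterns become a first-class input to the second training run.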


It’s fair to ask why such a step would be useful, when a straightforward training of the new model, without the meta-patterns gleaned from the old, would still yield a successful, functional, efficient model. Why go to the trouble?


The answer is that differences in training networks yield differences in results. Changing the number of neuron layers in a training network, for instance, will produce a somewhat different model. Similarly, pre-biasing a network as described above will bring about a different result – not necessarily better or worse, but different in character.
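A toy demonstration of that sensitivity, with arbitrary data and hyperparameters chosen purely for illustration: two networks trained identically on identical data, differing only in depth, settle into measurably different models.

```python
# Minimal illustration: same data, same training loop, different depth,
# different resulting model. All specifics here are arbitrary choices.
import torch
import torch.nn as nn

def make_net(hidden_layers: int) -> nn.Sequential:
    layers, width = [nn.Linear(10, 32), nn.ReLU()], 32
    for _ in range(hidden_layers):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)

torch.manual_seed(0)
x, y = torch.randn(256, 10), torch.randn(256, 1)

for depth in (1, 4):
    torch.manual_seed(0)  # identical conditions apart from depth
    net = make_net(depth)
    opt = torch.optim.SGD(net.parameters(), lr=0.05)
    for _ in range(500):
        loss = nn.functional.mse_loss(net(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"{depth} hidden layers -> final loss {loss.item():.4f}")
```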


We can imagine, for instance, training up our starter model on the chess games of Bobby Fischer, the grandmaster who was one of the most aggressive players in history (it was once said of him: “At the board he radiates danger, and even the strongest opponents tend to freeze, like rabbits when they smell a panther. Even his weaknesses are dangerous.”). That model is going to pass along a different analogical influence than a generic one might, in shaping the behavior of the subsequent AI. It won't just operate effectively; it will operate ruthlessly.


We can imagine training a starter model on a game other than chess or Go – still a game that has winning as its goal, but with a different competitive style. Say, Monopoly.


In Monopoly, it is not enough to win; one must decimate one’s opponents. The goal is not just victory, but utter dominance. Winning is not achieved just by the satisfaction of some arbitrary criterion like the trapping of the opposing king, but by the eradication of all opponents’ resources. Strategy and planning take on new analogical definitions in that scenario.

Now imagine turning that AI loose on the stock market.


Let’s not stop there. Let’s imagine a starter AI trained to goal-seek, and to do so competitively but benignly. What would happen, for instance, if the starter model were trained on Catan, a game in which the object is to build settlements, and winning is achieved through good resource management, efficient planning, and negotiation with other players?


That’s a scenario with the same general conceptual foundation – strategy and planning – but those concepts now rest on win-win exchanges with others, on conservation, on coping with scarcity. Not Bobby Fischer thinking at all.


We can go further still if we use as our model the modern role-playing game, where a group of players works toward a common goal – cooperating, compensating for one another’s weaknesses, deriving strategy and planning through consensus. Imagine a starter model trained up on those conceptualizations, then applied to the analogical stock market AI – which would then possess goal-seeking biases that include the formation of alliances with others to increase its resources and optimize its wins, through ensemble strategies and planning.


Far-fetched? Not at all. All of this is certainly conceptual, but these are direct next steps from the speculative developments in AI we’ve surveyed so far. And it’s a certainty that these scenarios are, if nothing else, a reflection of the staggering variety we will see emerging as analogical models begin to appear and AGIs make their way into the mix.
