CAMBRIDGE — Current progress in machine intelligence is newsworthy, but it’s often talked about out of context. MIT’s Josh Tenenbaum described it this way: Advances in deep learning are powering machines to accurately recognize patterns, but human intelligence is not just about pattern recognition, it’s about modeling the world.
By that, Tenenbaum, a professor of cognitive science and computation, was referring to abilities that humans possess such as understanding what they see, imagining what they haven’t seen, problem-solving, planning, and building new models as they learn.
That’s why, in the interest of advancing AI even further, Tenenbaum is turning to the best source of information on how humans build models of the world: children.
“Imagine if we could build a machine that grows into intelligence the way a person does — that starts like a baby and learns like a child,” he said during his presentation at EmTech 2018, an emerging technology conference hosted by MIT Technology Review.
Tenenbaum called the project a “moonshot,” one of several that researchers at MIT are exploring as part of the university’s new MIT Quest for Intelligence initiative to advance the understanding of human and machine intelligence. The “learning moonshot” is a collaborative effort by MIT colleagues, including AI experts, as well as those in early childhood development and neuroscience. The hope is to use how children learn as a blueprint to build a machine intelligence that’s truly capable of learning, Tenenbaum said.
The “quest,” as it’s aptly labeled, won’t be easy, partly because researchers don’t have a firm understanding of how learning happens, according to Tenenbaum. In the 1950s, Alan Turing, father of the Turing test for analyzing machine intelligence, presumed a child’s brain was simpler than an adult’s — akin to a new notebook full of blank pages.
“We’ve now learned that Turing was brilliant, but he got this one wrong,” Tenenbaum said. “And many AI researchers have gotten this wrong.”
Child’s play is serious business
Instead, research such as that done by Tenenbaum’s colleagues suggests that newborns are already programmed to see and understand the world in terms of people, places and things — not just patterns and pixels. It also suggests that children aren’t passive learners, but are actively experimenting with, interacting with and exploring the world around them.
“Just like science is playing around in the lab, children’s play is serious business,” he said. “And children’s play may be what makes human beings the smartest learners in the known universe.”
Tenenbaum described his job as identifying insights like these and translating them into engineering terms. Take common sense, for example. Kids are capable of stacking cups or blocks without a single physics lesson. They can observe an action they’ve never seen before and yet understand the desired outcome and how to help achieve that outcome.
In an effort to codify common sense, Tenenbaum and his team are working with new kinds of AI programming languages that leverage the pattern recognition advances of neural networks, as well as concepts that don’t fit neatly into them. One example is probabilistic inference, which enables machines to use prior events to predict future events.
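To make the idea concrete, here is a minimal sketch (a hypothetical example, not Tenenbaum’s actual system) of probabilistic inference at its simplest: Bayes’ rule revising a belief as new events are observed.

```python
# Minimal sketch (hypothetical example): Bayesian updating, the core of
# probabilistic inference -- prior beliefs are revised as events arrive.

def posterior(prior, likelihoods, observation):
    """Update beliefs over hypotheses after observing one event."""
    unnormalized = {h: prior[h] * likelihoods[h][observation] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two hypotheses about a coin: fair, or biased toward heads.
prior = {"fair": 0.5, "biased": 0.5}
likelihoods = {"fair":   {"H": 0.5, "T": 0.5},
               "biased": {"H": 0.9, "T": 0.1}}

belief = prior
for flip in ["H", "H", "H"]:        # three heads in a row
    belief = posterior(belief, likelihoods, flip)

print(belief)  # belief in "biased" has grown well past 0.5
```

Each observed flip shifts probability toward the hypothesis that best explains it — the same mechanism, scaled up, lets a machine use what has happened to predict what will happen.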
Game engines open window into learning
Tenenbaum’s team is also using game engines, which simulate a player’s experience in real time in a virtual world. Their core components include the graphics engine, which renders 2D and 3D images, and the physics engine, which transposes the laws of physics from the real world to the virtual one. “We think they provide first approximations to the kinds of basic commonsense knowledge representation that are built into even the youngest children’s brains,” he said.
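A toy illustration (not any real engine) shows what a physics engine does at heart: a step function that advances an object’s state by the laws of motion, letting a simulator predict, for instance, where a dropped object will end up.

```python
# Toy sketch (not any real engine): the heart of a physics engine is a
# step function that advances object state frame by frame.

G = -9.81  # gravitational acceleration, m/s^2

def step(state, dt=1 / 60):
    """Advance a falling object by one frame (symplectic Euler)."""
    height, velocity = state
    velocity = velocity + G * dt
    height = max(0.0, height + velocity * dt)  # floor at height 0
    return (height, velocity)

state = (1.0, 0.0)          # dropped from 1 m, at rest
for _ in range(60):         # simulate one second at 60 frames/second
    state = step(state)
print(state[0])             # the object has reached the floor
```

This kind of forward simulation is exactly the “intuitive physics” capability — predicting that stacked cups will stay up or a dropped block will fall — that researchers suspect even infants run on an approximation of.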
He said the game engines, coupled with probabilistic programming, capture data that helps researchers understand what a baby knows at 10 months or a year old, but the question remains: How does a baby learn how to build engines like these?
“Evolution might have given us something kind of like game engine programs, but then learning for a baby is learning to program the game engine to capture the program of their life,” he said. “That means learning algorithms have to be programming algorithms — a program that learns programs.”
Tenenbaum called this “the hard problem of learning.” To solve it, he’s focused on the easier problem of how people acquire simple visual concepts such as learning a character from a new alphabet without needing to see it a thousand times. Using Bayesian program learning, a machine learning method, researchers have been able to program machines to see an output, such as a character, and deduce how the output was created from one example.
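The following is a drastically simplified sketch of that idea (hypothetical, far simpler than the real Bayesian program learning method): candidate “programs” generate characters, and inference asks which program — weighted by a prior favoring simpler programs — best explains a single observed drawing.

```python
# Toy sketch of the Bayesian-program-learning idea (hypothetical and far
# simpler than the real method): candidate "programs" draw characters as
# point sets, and inference picks the program that best explains one
# observed example.

import math

PROGRAMS = {
    "vertical_bar":   [(0, y) for y in range(5)],
    "horizontal_bar": [(x, 0) for x in range(5)],
    "cross": [(0, y) for y in range(5)] + [(x, 2) for x in range(5)],
}

def prior(name):
    # Simpler programs (fewer points to draw) are a priori more likely.
    return math.exp(-len(PROGRAMS[name]))

def likelihood(observed, name):
    # Points the program renders are likely; unexplained points are noise.
    rendered = set(PROGRAMS[name])
    p = 1.0
    for point in observed:
        p *= 0.9 if point in rendered else 0.01
    return p

def infer(observed):
    scores = {n: prior(n) * likelihood(observed, n) for n in PROGRAMS}
    return max(scores, key=scores.get)

# One example of a never-before-seen drawing: a vertical stroke.
example = [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4)]
print(infer(example))
```

From a single example, the sketch deduces the generating program rather than matching raw pixels — the one-shot flavor Tenenbaum describes, in which seeing an output once is enough to infer how it was made.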
It’s an admittedly small step in the larger landscape of machine intelligence, Tenenbaum said. “But we also know from history that even small steps toward this goal can be big.”