Artificial Intelligence scores high at arcade; Google program beats gamers at 'Space Invaders'

Computers already have bested human champions in "Jeopardy!" and chess, but artificial intelligence has now gone on to master an entirely new level: "Space Invaders."

Google scientists have cooked up software that can do better than humans on dozens of Atari video games from the 1980s, such as "Video Pinball," "Boxing" and "Breakout." But computers don't seem to have a ghost of a chance at "Ms. Pac-Man."

The aim is not to make video games a spectator sport, turning couch potatoes who play games into couch potatoes who watch computers play games. The real accomplishment: computers that can teach themselves to succeed at tasks, learning from scratch by trial and error, just as humans do.

The computer program, called Deep Q-network, started with little in the way of instructions, but in time it did better than humans in 29 of 49 games, and in some cases, such as "Video Pinball," it did 26 times better, according to a new study released Wednesday by the journal Nature. It's the first time an artificial intelligence program has bridged different types of learning systems, said study author Demis Hassabis of Google DeepMind in London.
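
For readers who want a concrete picture of that trial-and-error learning, here is a minimal sketch of the underlying textbook technique, known as Q-learning, on a toy problem. It is an illustration only, not DeepMind's code: the six-cell corridor task and all the constants are assumptions chosen for brevity.

```python
# A minimal, illustrative sketch of trial-and-error Q-learning on a toy
# problem -- NOT DeepMind's code. The agent walks a 1-D corridor of 6 cells,
# earns a reward of 1 only at the rightmost cell, and learns which move
# (left or right) is best in each cell purely from experience.
import random

N_STATES = 6          # cells 0..5; the reward sits in cell 5
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

# Q[state][action] starts at zero: the agent knows nothing at first.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit the best known move.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Standard Q-learning update: nudge the estimate toward
        # reward + discounted value of the best follow-up move.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# After training, "right" should score higher than "left" in every cell.
for s in range(N_STATES - 1):
    print(s, "left=%.2f right=%.2f" % (Q[s][0], Q[s][1]))
```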

Deep Q "can learn and adapt to unexpected things," Hassabis said in a news conference. "These types of systems are more human-like in the way they learn."

In the submarine game "Seaquest," Deep Q came up with a strategy that the scientists had never considered.

"It's definitely fun to see computers discover things that you didn't figure out yourself," said study co-author Volodymyr Mnih, also of Google.

Sebastian Thrun, director of the Artificial Intelligence Laboratory at Stanford University, who wasn't part of the research, said in an email: "This is very impressive. Most people don't understand how far (artificial intelligence) has come. And this is just the beginning."

Nothing about Deep Q is customized to Atari or to a specific game. The idea is to create a "general learning system" that can figure tasks out by trial and error and eventually move on to stuff even humans have difficulty with, Hassabis said. This program, he said, "is the first rung of the ladder."

Carnegie Mellon University computer science professor Emma Brunskill, who also wasn't part of the study, said that ability to learn despite the lack of customization "brings us closer to having general purpose agents equipped to work well at learning a large range of tasks, instead of just chess or just 'Jeopardy!'"

To go from pixels on a screen to making decisions on what to do next, without even a hint of pre-programmed guidance, "is really exciting," Brunskill said. "We do that as people."
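
For the curious, here is a toy sketch of that "pixels in, decisions out" idea. The real Deep Q-network uses a deep convolutional neural network trained on game frames; this hypothetical example substitutes a single linear layer so it fits in a few lines, and every size and name in it is an illustrative assumption rather than DeepMind's implementation.

```python
# Toy sketch of the "pixels in, action values out" idea -- the real Deep
# Q-network uses a convolutional neural network. Here, a single linear layer
# scores each of 4 joystick actions from a flattened 84x84 grayscale frame.
import numpy as np

rng = np.random.default_rng(0)
FRAME_PIXELS = 84 * 84
N_ACTIONS = 4

# Weight matrix: one row of scores per action, adjusted from reward signals.
W = rng.normal(scale=0.01, size=(N_ACTIONS, FRAME_PIXELS))

def action_values(frame):
    """Map a raw pixel frame to a score for each possible action."""
    return W @ frame.ravel()

def q_update(frame, action, reward, next_frame, lr=0.001, gamma=0.99):
    """One trial-and-error step: move the chosen action's score toward
    reward + discounted best score of the next frame (the Q-learning target)."""
    target = reward + gamma * action_values(next_frame).max()
    error = target - action_values(frame)[action]
    W[action] += lr * error * frame.ravel()

# Random arrays stand in for actual Atari screens in this sketch.
frame, next_frame = rng.random((84, 84)), rng.random((84, 84))
print("scores before:", action_values(frame))
q_update(frame, action=2, reward=1.0, next_frame=next_frame)
print("scores after: ", action_values(frame))
```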

The idea is that when the system gets scaled up, it might work like asking a phone to plan a complete trip to Europe and book all the flights and hotels on its own, "and it sorts it all out as if you have a personal assistant," Hassabis said.

But by some ways of thinking, Deep Q isn't even as smart as a toddler: it can't transfer learned experiences from one situation to another, and it doesn't grasp abstract concepts, Hassabis said.

Deep Q had trouble with "Ms. Pac-Man" and "Montezuma's Revenge" because those games require more planning ahead, he said.

Next, the scientists will try the system on more complex games from the 1990s and beyond, perhaps a strategy game like "Civilization," in which players build an entire empire to see if it can stand the test of time.

Deep Q isn't showing what Hassabis would call creativity, he said: "I would call it figuring out something that already existed in the world."

Creativity would be if the program created its own computer game, Hassabis said. Artificial intelligence isn't there, he said.

At least not yet.

___

Online:

Nature: http://www.nature.com/nature

___

Seth Borenstein can be followed at http://twitter.com/borenbears