
One of the key challenges of deep reinforcement learning models — the kind of AI systems that have mastered Go, StarCraft 2, and other games — is their inability to generalize their capabilities beyond their training domain.

But scientists at AI research lab DeepMind claim to have taken the “first steps to train an agent capable of playing many different games without needing human interaction data”.

The new system, according to DeepMind’s AI researchers, is an “important step toward creating more general agents with the flexibility to adapt rapidly within constantly changing environments.”

The goal of DeepMind’s new project was to create “an artificial agent whose behavior generalizes beyond the set of games it was trained on.”

One of the main advantages of XLand, the 3D virtual environment DeepMind built for this project, is its ability to use programmatic rules to automatically generate a vast array of environments and challenges for training AI agents.

The researchers created “billions of tasks in XLand, across varied games, worlds, and players.”
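To make the idea of programmatic task generation more concrete, here is a minimal, hypothetical Python sketch. The names (sample_world, sample_goal, sample_task) and the specific sampling rules are illustrative assumptions rather than DeepMind's actual code; they simply show how tasks could be composed from randomly sampled worlds, goals, and player counts.

```python
import random

# Hypothetical illustration only: these rules and names are assumptions,
# not DeepMind's implementation. The point is that tasks are composed
# programmatically from a (world, game, players) combination.

COLORS = ["black", "purple", "yellow"]
SHAPES = ["cube", "sphere", "pyramid"]
RELATIONS = ["near", "on", "see"]

def sample_world(rng):
    """Randomly pick a terrain size and a handful of movable objects."""
    return {
        "size": rng.choice([10, 15, 20]),
        "objects": [(rng.choice(COLORS), rng.choice(SHAPES))
                    for _ in range(rng.randint(3, 6))],
    }

def sample_goal(rng, world):
    """A goal here is a simple relation between the player and an object,
    e.g. ('near', 'player', ('yellow', 'sphere'))."""
    return (rng.choice(RELATIONS), "player", rng.choice(world["objects"]))

def sample_task(rng, num_players=2):
    """Compose a task from a sampled world, one goal per player, and the player count."""
    world = sample_world(rng)
    return {
        "world": world,
        "players": num_players,
        "goals": [sample_goal(rng, world) for _ in range(num_players)],
    }

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(3):
        print(sample_task(rng))
```

Because every element is sampled by rules rather than hand-authored, a generator like this can in principle enumerate an enormous space of distinct tasks, which is the property the researchers exploit to produce training curricula at scale.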

The games range from very simple goals, such as finding objects, to more complex settings in which the AI agents must weigh the benefits and tradeoffs of different rewards.
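As a purely illustrative example of such a reward tradeoff (not DeepMind's training objective), the toy calculation below compares a small, easy reward against a larger but riskier one using a discounted expected value; all numbers, probabilities, and option names are assumptions.

```python
# Toy illustration of weighing reward tradeoffs; the figures are made up.

def expected_return(reward, success_prob, steps, discount=0.99):
    """Discounted expected value of pursuing a single option."""
    return (discount ** steps) * success_prob * reward

options = {
    "grab nearby sphere": expected_return(reward=1.0, success_prob=0.95, steps=5),
    "build ramp to distant cube": expected_return(reward=5.0, success_prob=0.40, steps=40),
}

best = max(options, key=options.get)
print(options, "->", best)
```

In the more complex XLand settings, an agent faces decisions of roughly this shape, except that it must learn the values and probabilities from experience rather than being handed them.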

If neural networks can develop high-level concepts such as using objects to build ramps or to cause occlusions, that capability could have a great impact on fields such as robotics and self-driving cars, where deep learning is currently struggling.
