DeepMind Researchers Suggest Reinforcement Learning Is All We Need For AGI

Matt Swayne
All you need is love. For robots, they may need rewards, according to researchers.

Artificial General Intelligence — AGI — is generally thought to be a far-off state in which machines can learn on their own. The conventional view is that reaching it will eventually require a mix of AI models and approaches. DeepMind scientists, however, believe that reinforcement learning alone may be enough.

In a paper in the journal Artificial Intelligence, available online now, the researchers report that maximizing rewards could lead to general intelligence.

They write that intelligence and its associated abilities are promoted through the maximization of reward.

“Accordingly, reward is enough to drive behavior that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalisation and imitation,” the researchers write.

Reinforcement learning formalises the problem of goal-seeking intelligence, according to the researchers.

“The general problem may be instantiated with a wide and realistic range of goals and worlds — and hence a wide range of forms of intelligence — corresponding to different reward signals to maximize in different environments,” the researchers write.

They disagree that intelligence would require specialized problem formulations for each ability.

The key to creating machines that learn, they argue, is trial and error guided by rewards.

“We suggest that agents that learn through trial and error experience to maximize reward could learn behavior that exhibits most if not all of these abilities, and therefore that powerful reinforcement learning agents could constitute a solution to artificial general intelligence,” the researchers report.
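The trial-and-error learning the researchers describe is the core of reinforcement learning. A minimal sketch of the idea — assuming a toy five-state corridor environment that is not from the paper — is tabular Q-learning, where an agent improves its value estimates purely from experienced rewards:

```python
import random

# Toy setup (an assumption for illustration): a 5-state corridor where the
# agent starts at state 0 and earns reward 1.0 only upon reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                  # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated value of taking each action in each state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: move, clip to the corridor, reward at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

random.seed(0)
for _ in range(500):                # episodes of trial and error
    state = 0
    while True:
        if random.random() < EPSILON:       # occasionally explore
            action = random.choice(ACTIONS)
        else:                               # otherwise exploit estimates
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Q-learning update: nudge toward reward + discounted future value.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
        if done:
            break

# The greedy policy learned from reward alone: move right in every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)  # → {0: 1, 1: 1, 2: 1, 3: 1}
```

The agent is never told how to reach the goal; the behavior emerges solely from maximizing reward, which is the mechanism the researchers conjecture could scale, in rich enough environments, to far more sophisticated abilities.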

How efficiently reward-driven learning works is another question.

“We do not offer any theoretical guarantee on the sample efficiency of reinforcement learning agents,” the researchers write. “Indeed, the rate at and degree to which abilities emerge will depend upon the specific environment, learning algorithm, and inductive biases; furthermore one may construct artificial environments in which learning will fail. Instead, we conjecture that powerful reinforcement learning agents, when placed in complex environments, will in practice give rise to sophisticated expressions of intelligence.”

If that conjecture holds, the researchers conclude, this approach could give rise to AGI.

The team reports: “If this conjecture is true, it provides a direct pathway towards understanding and constructing an artificial general intelligence.”

The research team includes David Silver, Satinder Singh, Doina Precup and Richard S. Sutton, all of DeepMind.
