DeepMind isn’t the only player in the realm of Atari-savvy AI. A team of researchers from Uber AI has developed a family of algorithms called Go-Explore, which reportedly achieves “superhuman” scores across all Atari 2600 games, including titles that have historically stumped AI. The key to Go-Explore’s success is that it remembers promising game states and returns to them before exploring further — a strategy that has improved scores by “orders of magnitude” in several games.
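The remember-and-return idea can be sketched in a few lines. This is a hedged toy illustration, not the authors’ implementation: the environment, the `cell_of` discretization, and all names here are hypothetical stand-ins for the archive of game states Go-Explore maintains.

```python
import random

def cell_of(state):
    """Map a state to a coarse 'cell' so similar states share one archive entry."""
    return state // 3  # toy discretization (assumption, not the paper's cells)

class ToyEnv:
    """A 1-D walk: start at 0, occasionally slip backward; 'solved' at state 30."""
    def restore(self, state):          # Go-Explore returns to saved states
        self.state = state             # (e.g. via emulator save-states)
    def step(self, action):
        self.state = max(0, self.state + (1 if action == 1 else -1))
        if random.random() < 0.1:      # stochastic slip
            self.state = max(0, self.state - 1)
        return self.state, self.state >= 30

def go_explore(iterations=5000, seed=0):
    random.seed(seed)
    env = ToyEnv()
    archive = {cell_of(0): 0}          # cell -> best known state reaching it
    for _ in range(iterations):
        # 1. Select a cell from the archive (uniformly here; the real
        #    algorithm weights rarely-visited cells more heavily).
        state = archive[random.choice(list(archive))]
        env.restore(state)             # 2. Return to it ("go")
        for _ in range(10):            # 3. Explore from it ("explore")
            state, done = env.step(random.choice([0, 1]))
            c = cell_of(state)
            if c not in archive or state > archive[c]:
                archive[c] = state     # 4. Remember new or better cells
            if done:
                return archive, True
    return archive, False

archive, solved = go_explore()
```

Because the archive guarantees the agent never forgets a frontier it has already reached, exploration effort compounds instead of being wasted rediscovering old ground.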
Go-Explore made history as the first AI to clear every level of Montezuma's Revenge, and it achieved a "near-perfect" score in Pitfall — two games notorious for stymieing reinforcement learning systems. DeepMind’s Agent57 has reached similar milestones, but, as team member Jeff Clune notes, it relies on entirely different methods. This diversity of approaches gives developers multiple pathways to the same challenges.
The primary aim of this research goes beyond mastering a vintage console game. The same techniques have shown promise in practical settings, such as enabling a simulated robot to pick up and manipulate objects. Go-Explore's creators intend to make the technology more robust, since the competencies acquired on Atari games could significantly improve navigation for robots and self-driving vehicles.
In summary, Go-Explore can now solve all previously unsolved Atari games, handles stochasticity during training through goal-conditioned policies, intelligently reuses skills during exploration, and tackles complex robotic tasks in simulation.
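The goal-conditioned part of that summary deserves a concrete illustration. Replaying a fixed action sequence breaks when the environment is stochastic; a goal-conditioned policy instead takes the target as an input and learns to reach it from wherever it is. The sketch below uses tabular Q-learning on a toy stochastic chain as a stand-in for the neural policy in policy-based Go-Explore — every name and parameter here is an assumption for illustration only.

```python
import random
from collections import defaultdict

N = 8  # states 0..N-1 on a chain

def step(state, action):
    """Stochastic chain: the intended move (+1 or -1) succeeds 80% of the time."""
    move = action if random.random() < 0.8 else -action
    return min(N - 1, max(0, state + move))

def train(episodes=4000, alpha=0.5, gamma=0.95, eps=0.2, seed=1):
    """Learn Q(state, goal, action): the goal is part of the policy's input."""
    random.seed(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        goal, state = random.randrange(N), random.randrange(N)
        for _ in range(3 * N):
            if state == goal:
                break
            if random.random() < eps:                      # explore
                action = random.choice([-1, 1])
            else:                                          # act greedily
                action = max([-1, 1], key=lambda a: Q[(state, goal, a)])
            nxt = step(state, action)
            reward = 1.0 if nxt == goal else 0.0
            bootstrap = 0.0 if nxt == goal else gamma * max(
                Q[(nxt, goal, -1)], Q[(nxt, goal, 1)])
            Q[(state, goal, action)] += alpha * (
                reward + bootstrap - Q[(state, goal, action)])
            state = nxt
    return Q

def reach(Q, state, goal, max_steps=50):
    """Follow the greedy goal-conditioned policy; True if the goal is reached."""
    for _ in range(max_steps):
        if state == goal:
            return True
        action = max([-1, 1], key=lambda a: Q[(state, goal, a)])
        state = step(state, action)
    return state == goal

Q = train()
```

Because the policy conditions on the goal rather than memorizing one trajectory, the same learned table reaches any target cell despite the 20% chance of each move going the wrong way — the robustness property the summary attributes to goal-conditioned training.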