Can artificial intelligence evolve and become more sophisticated in a competitive environment, much like life on Earth did through natural selection? Researchers at OpenAI have been exploring this question through extensive experiments, including a recent study that involved AI agents competing in nearly 500 million rounds of hide-and-seek.
In these experiments, the AI agents at first simply ran around the environment without any coordinated strategy. After about 25 million games, however, the hiders learned to use boxes as barricades, blocking exits and sealing themselves inside rooms. They also began to collaborate, passing boxes to one another to build their defenses faster.
The seekers adapted as well: after roughly 75 million games, they discovered how to reach hiders inside their makeshift forts by dragging ramps up against the walls and climbing over. Around 85 million games in, the hiders countered by taking the ramps into their forts before sealing themselves off, leaving the seekers without their tools.
As OpenAI researcher Bowen Baker noted, "Once one team learns a new strategy, it creates pressure for the other team to adapt. This mirrors the evolutionary competition seen in nature." The agents' development did not stop there: they went on to exploit glitches in the simulated physics, pushing ramps through walls, a sign of just how inventive their strategies had become.
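To make Baker's point concrete, here is a deliberately simplified, hypothetical sketch of the kind of competitive feedback loop he describes. It is not OpenAI's code: the actual study trained neural-network policies with large-scale reinforcement learning in a 3-D physics world, whereas this toy reduces hide-and-seek to a choice among a few rooms and a basic multiplicative-weights update. The room count, learning rate, and update rule are illustrative assumptions; the only point is that each side's reward is the other's loss, so whatever strategy one side settles on immediately becomes a training signal for the other.

```python
import random

N_ROOMS = 4          # toy stand-in for hiding spots; the real environment is a 3-D physics world
LEARNING_RATE = 0.1  # illustrative value, not taken from the study
ROUNDS = 10_000

def sample(weights):
    """Pick an index with probability proportional to its weight."""
    total = sum(weights)
    r = random.uniform(0, total)
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if r <= cumulative:
            return i
    return len(weights) - 1

# Each side keeps one weight per room; a higher weight means that room is chosen more often.
hider_w = [1.0] * N_ROOMS
seeker_w = [1.0] * N_ROOMS

for _ in range(ROUNDS):
    hide = sample(hider_w)
    seek = sample(seeker_w)

    # Zero-sum outcome: the seeker wins only by searching the room the hider picked.
    seeker_reward = 1.0 if seek == hide else -1.0
    hider_reward = -seeker_reward

    # Reinforce the action each side just took in proportion to how well it worked.
    # As soon as one side concentrates on a strategy, the other side's updates begin countering it.
    hider_w[hide] *= 1.0 + LEARNING_RATE * hider_reward
    seeker_w[seek] *= 1.0 + LEARNING_RATE * seeker_reward

    # Renormalise so the weights stay bounded and strictly positive.
    total = sum(hider_w)
    hider_w = [max(w / total, 1e-6) for w in hider_w]
    total = sum(seeker_w)
    seeker_w = [max(w / total, 1e-6) for w in seeker_w]

print("hider room preferences :", [round(w, 2) for w in hider_w])
print("seeker room preferences:", [round(w, 2) for w in seeker_w])
```

Run long enough, neither side can lock in a fixed habit: whenever the seeker starts favoring one room, the hider's updates push it elsewhere, a miniature version of the escalation that played out over hundreds of millions of games in the OpenAI experiments.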
Baker emphasized that these findings suggest artificial intelligence has the potential to solve complex problems in ways humans might not even conceive of. "Perhaps they'll even tackle challenges that we have yet to understand," he said.
The work suggests that competitive pressure alone can push AI agents toward increasingly sophisticated behavior, and it points toward future applications in which such systems discover solutions their designers never anticipated.