Google is joining other leaders such as Facebook and Stanford in establishing institutions dedicated to promoting ethical AI. The company has formed an Advanced Technology External Advisory Council (ATEAC) to guide the "responsible development and use" of AI in its products. This council will address key ethical issues, including facial recognition and fair machine learning algorithms.
ATEAC comprises a diverse group of advisors with expertise spanning multiple fields. Members include academics specializing in technical aspects of AI, such as computational mathematics, alongside experts in drones, ethics, privacy, and public policy. The council also brings an international perspective, with participants based in locations including Hong Kong and South Africa.
The first ATEAC meeting is scheduled for April, with three additional meetings planned throughout 2019. Beyond using the insights from these discussions to inform its development processes, Google commits to transparency by publishing summaries of the council's discussions and encouraging members to share relevant findings with their own organizations. The ultimate goal is to improve ethical practices across the entire tech industry, not just within Google.
This initiative follows Google's commitment to ethical AI amid the backlash over its involvement in the U.S. military's Project Maven drone program. By engaging ATEAC, Google aims to foster critical discussions that will guide its decision-making and help it avoid repeating past controversies.