In a recent Financial Times editorial, Sundar Pichai, CEO of Google and Alphabet, emphasizes the urgent need for regulation of artificial intelligence: "...there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to." He reflects on his journey in technology, illustrating how innovation can lead to unforeseen negative outcomes. Pichai argues that while AI presents powerful opportunities, we must carefully weigh its potential harms against societal benefits.
His call for a balanced approach raises the question of how much regulation he actually envisions. Pichai neither dismisses the White House's preference for a light-touch regulatory framework nor criticizes the EU's more stringent proposals. Instead, he advocates international alignment on AI regulation.
Pichai suggests that Alphabet's internal policies may offer a useful model. He asserts that the company's rules aim to mitigate bias and prioritize safety and privacy, though how effective these measures are in practice is open to debate. He also declares that Alphabet will not use AI to enable mass surveillance or to violate human rights. Yet even though Google, unlike some of its competitors, has declined to sell facial recognition software, concerns persist about the human rights implications of its operations.
One crucial point Pichai makes is that "principles that remain on paper are meaningless." While there is broad consensus on the necessity of AI regulation, it is equally important to establish a regulatory body with real enforcement power. Without meaningful consequences for companies that break the rules or misuse AI technologies, any regulation risks being toothless.