An Australian Senate select committee inquiry has accused tech giants Amazon, Google, and Meta of "exploiting" the nation's cultural, data, and creative assets to fuel the development of their AI technologies. The inquiry criticized the companies for being vague and evasive about how they use Australian data, conduct it said had fuelled widespread public concern about the transparency and ethics of their practices.
Labor Senator Tony Sheldon, who chaired the inquiry, sharply criticized the firms for dodging direct questions during the hearings, likening their conduct to that of "pirates" who exploit Australian data for profit without offering fair compensation. Sheldon noted that the companies refused to disclose whether data from services such as Alexa, Kindle, and Audible, or from other user-facing platforms, was being used to train their AI systems.
The committee's report proposes that general-purpose AI models—like OpenAI's GPT, Meta's Llama, and Google's Gemini—should be automatically classified as "high-risk" and subjected to strict transparency and accountability standards. Sheldon emphasized the need for new, dedicated AI legislation in Australia that would regulate big tech companies and protect citizens' rights, stressing that the interests of Silicon Valley should not take precedence over the well-being of Australians.
Meta acknowledged that it had been collecting data from Facebook and Instagram users since 2007 to develop AI models, but could not clarify how users had consented to such data usage for technologies that did not even exist at the time. The company was also unclear about how it uses data from WhatsApp and Messenger.
The inquiry's report further highlighted the risks AI poses to creative professionals, warning that their livelihoods are under immediate threat due to AI's ability to replicate and modify their work. One key recommendation was the creation of payment systems to compensate creators whose original content forms the basis for AI-generated works. The report also advocated for standalone AI legislation to cover all "high-risk" models, especially those affecting workers' rights.
However, Coalition senators Linda Reynolds and James McGrath argued that AI's biggest threats lie in cybersecurity, national security, and the preservation of democratic institutions. They cautioned against overregulation that could stifle AI's potential benefits for job creation and productivity.
The Greens, meanwhile, expressed dissatisfaction with the final report, criticizing it for lacking a comprehensive strategy to align Australia's AI regulations with those of other leading jurisdictions, such as the UK, the European Union, and California.