AI is here, transforming our world and reigniting the spirit of Silicon Valley. While significant changes unfold in California, a parallel evolution is taking place in Washington, D.C.: the leading players in the AI industry are adopting a public policy strategy that is as surprising as the technology itself.
Today’s top AI companies are strategically engaging with policymakers from the start. They brief members of Congress and their staff to deepen understanding of the technology, show a willingness to testify before committees both publicly and privately, organize multi-stakeholder forums, and sign cooperative agreements with the White House.
Having been involved in various public policy efforts that bridge technology and the public sector, I’ve witnessed firsthand the challenges of achieving consensus among private sector entities, especially when liaising with the government.
Some skeptics argue that the AI industry’s outreach is merely a façade. Companies are aware that Congress often moves at a sluggish pace. They understand that establishing a new regulatory agency—including funding, staffing, and equipping it for enforcement—could take years. For context, social media companies have remained largely unregulated even decades after they first gained prominence.
Regardless of their underlying intentions, the swift convergence of major AI players on broad safety principles and regulatory frameworks reflects a serious acknowledgment of both the potential risks and opportunities that AI presents.
Never before has a technology so rapidly prompted the private sector to seek proactive government oversight. While we should applaud current efforts, it’s what follows that will truly matter.
AI executives and their public policy teams appear to have learned from the backlash faced by industries like social media and ride-sharing. In the past, Silicon Valley often ignored Congress, and at times even ridiculed it. When called to testify, industry leaders displayed a disdain that soured their reputations with policymakers and the public alike.
In stark contrast, the AI sector is currently demonstrating a more collaborative approach. CEOs are addressing Congress and answering even the most basic inquiries with apparent respect. They convey their messages clearly, neither overstating benefits nor downplaying risks. These leaders are presenting themselves as responsible and genuine.
As the dialogue shifts from initial engagement to the complex task of developing a regulatory framework, the strategies of AI companies will be put to the test.
To maintain momentum, AI industry leaders must continue to engage constructively. Goodwill and trust are hard to build and easy to lose. Here are several concrete steps for the industry to consider:
1. Increase Transparency: Develop innovative methods to inform stakeholders about key aspects of current AI models—including their design, deployment, and future safety measures. Additionally, share new research and identify potential risks promptly.
2. Agree and Commit: Companies should not enter into joint agreements they cannot uphold. Clear commitments are vital; vague terms may lead to reputational damage if companies fail to meet expectations.
3. Broaden Member Inclusion: By reaching out beyond key committee members, AI companies can foster more robust relationships throughout Capitol Hill. Group briefings followed by individual meetings will enhance engagement with lawmakers and advocacy groups raising concerns about AI.
4. Establish a Congressional Support Network: Offer dedicated support to congressional staff on technical matters. This will equip members to better address constituents' concerns and enhance trust.
5. Engage State Governments: Implement a proactive strategy to engage state governments. Since states can create complex regulations affecting AI companies, early engagement is crucial to mitigate future compliance risks.
6. Incorporate Policymakers in Red Team Exercises: Include legislators in red teaming to demonstrate both the technical process and substantive solutions. Engaging lawmakers in this way can foster cooperation and reduce finger-pointing when problems emerge.
7. Clarify Regulatory Opposition: When lobbying against specific provisions, be transparent about the reasons for resistance instead of making broad statements in favor of regulation. This honesty will help avoid perceptions of insincerity.
8. Implement Safety Bounty Programs: Beyond specialized hackathons, consider creating safety-focused bounty programs to reward individuals for identifying vulnerabilities. Given the rapid pace of AI development, it’s crucial to minimize the gap between identifying risks and implementing fixes.
Time will tell whether this innovative approach to public policy will last or prove to be fleeting. Ultimately, companies must navigate their own policy paths. There is no universal solution, and those who believe they have done enough may soon find themselves surprised by the challenges ahead.