With the federal government lagging on artificial intelligence (AI) regulation, cities across the nation are taking decisive action. New York City recently launched a comprehensive AI strategy aimed at evaluating AI tools, identifying their risks, building city staff's proficiency with the technology, and encouraging the responsible integration of AI systems into government operations. The initiative is part of a growing movement to establish local rules that better manage AI applications and foster transparency.
“How this plays out could have meaningful implications for AI policy both federally and in other municipalities,” remarked Sarah Shugars, an assistant professor of communication at Rutgers University. “Can governments simply pledge to use AI responsibly, or will citizens insist on tangible actions to support those claims?”
**Establishing Local Regulations for AI**
In its detailed 51-page AI strategy, New York City outlines a series of steps to deepen its understanding of AI and integrate the technology responsibly into government operations. The first phase centers on the formation of an "AI Steering Committee" that brings together stakeholders from various city agencies. The strategy identifies nearly 40 specific initiatives, 29 of which are set to begin or conclude within the next year. The city also pledges to publish an annual report on AI progress, keeping the public informed about the strategy's implementation.
“While artificial intelligence offers a once-in-a-generation opportunity to enhance services for New Yorkers, we must be vigilant about the inherent risks these technologies pose,” stated Mayor Eric Adams. “I am proud to introduce a plan that will strike a critical balance in the global conversation about AI — one that empowers city agencies to harness technologies that can improve lives while safeguarding against potential dangers.”
In fact, AI regulations are already taking shape in New York City. As of this year, a new law restricts how employers may use AI-powered tools in hiring and promotion decisions. The legislation covers "automated employment decision tools," which leverage AI, statistical methods, or data analysis to make employment-related decisions.
**A Nationwide Movement Toward AI Regulation**
New York City's initiative is part of a larger nationwide trend toward regulating AI. While no comprehensive federal AI law exists yet, the White House introduced a Blueprint for an AI Bill of Rights last year, focused on protecting consumers in the design and use of automated systems. The Biden administration is also preparing an executive order on AI that takes stock of existing legal authority over the technology and identifies where further legislation is needed. Meanwhile, the European Union is nearing the final stages of passing its sweeping AI Act.
In the U.S., a wave of state legislation has emerged: ten states added AI-related rules to their privacy laws for 2023, and more are considering similar action. Connecticut is working on an AI Bill of Rights, while Delaware's Personal Data Privacy Act gives consumers the right to opt out of certain automated decision-making. Washington, D.C., is weighing measures to keep algorithms from making discriminatory decisions, and New Jersey state senator Doug Steinhardt has proposed updating identity theft laws to address risks from AI and deepfake technology, including the rise of voice-cloning scams.
At the start of the year, California prepared to pursue AI legislation focused on combating algorithmic bias and strengthening privacy protections for residents, though substantial progress may be delayed until at least 2024. Lawmakers had floated proposals that would let individuals opt out of AI systems, reflecting growing concerns about the technology's impact on privacy and safety.
As these discussions unfold at the national level, some worry that mishandling AI now could carry lasting consequences. "Cities are becoming testing grounds for emerging AI technologies, and the federal government is using these pilot programs to decide on regulatory approaches," noted David Dunmoyer, campaign director of Better Tech for Tomorrow at the Texas Public Policy Foundation.
While some advocate for regulation, others caution that overly stringent local rules could stifle innovation. “It feels as though regulation is inevitable,” stated Raj Kaur Khaira, chief operating officer at AutoGenAI. “However, it's critical to understand what exactly will be regulated. Will it be the technology itself, or just its applications? Any regulation in this domain must be proportionate to the risk being addressed. As we have seen in other sectors, regulators and lawmakers do not always grasp the complexities of the technologies they seek to regulate.”
The evolving landscape of AI regulation underscores the need for a measured approach, one that balances innovation with responsible governance and ensures AI technologies develop in ways that are beneficial and safe for all communities.