A recent survey indicates that U.S. business leaders are increasingly advocating for comprehensive AI regulation and governance, driven by rising concerns over data privacy, security risks, and the ethical application of artificial intelligence technologies.
Conducted by The Harris Poll for Collibra, a data intelligence company, the study offers insight into how organizations are navigating the challenges of AI adoption and regulation.
The survey, which gathered responses from 307 U.S. adults in director-level positions or higher, found that 84% of data, privacy, and AI decision-makers support updating U.S. copyright laws to safeguard against AI misuse, reflecting a growing disconnect between rapid technological advances and existing legal frameworks.
“AI has fundamentally changed the relationship between technology vendors and creators,” said Felix Van de Maele, co-founder and CEO of Collibra. “The rapid deployment of generative AI tools has necessitated a reevaluation of ‘fair use’ and a retroactive application of centuries-old U.S. copyright law to modern technologies.”
Van de Maele highlighted the importance of fairness in this evolving landscape. “Content creators deserve enhanced transparency, protection, and compensation. Data serves as the backbone of AI, requiring high-quality, trusted sources—like copyrighted material—to produce reliable outputs. It's only fair that creators receive the compensation and protection they rightfully deserve.”
The push for updated copyright laws coincides with a surge of high-profile lawsuits accusing AI companies of copyright infringement, spotlighting the complex questions raised by AI's use of copyrighted material for training.
Additionally, the survey showed strong support for compensating individuals whose data is used to train AI models: 81% of respondents endorsed the idea of Big Tech companies providing such compensation, signaling a shift in how personal data is valued in the AI era.
“All content creators, regardless of size, should be compensated and protected when their data is utilized,” Van de Maele stated. “As we transition to valuing data talent more, the distinction between content creators and 'data citizens'—individuals responsible for utilizing data in their roles—will increasingly blur.”
The survey also revealed a preference for federal and state-level AI regulation over international oversight, mirroring the current U.S. regulatory landscape, where states such as Colorado have begun crafting their own AI rules in the absence of comprehensive federal guidelines.
“States like Colorado have set a precedent for comprehensive AI regulations, though some may argue it was premature. Nonetheless, it demonstrates what must be done to protect companies and citizens,” Van de Maele noted. “Without clear federal guidelines, companies will seek guidance from state officials.”
Interestingly, the survey revealed a notable divide between large and small companies in their support for government AI regulation: larger firms (1,000+ employees) were more inclined to back such regulation than smaller businesses (1-99 employees).
Van de Maele attributes this discrepancy to differences in resources and in how companies weigh risk against return on investment (ROI). “Smaller companies often approach new technologies with skepticism and caution. There’s a common perception that AI is designed specifically for Big Tech, requiring substantial investment and potentially disrupting established operational models.”
Respondents expressed high confidence in their own companies' AI initiatives, but a trust gap emerged around government and Big Tech, a challenge for policymakers and industry leaders as they shape the future of AI regulation.
Privacy and security concerns topped the list of perceived threats to effective AI regulation in the U.S., cited by 64% of respondents. In response, companies like Collibra are developing solutions to strengthen AI governance.
“Without proper AI governance, the likelihood of privacy issues and security risks increases,” Van de Maele explained. Collibra has introduced Collibra AI Governance, a solution designed to foster collaboration across teams, align AI projects with legal and privacy standards, reduce data risks, and optimize performance and ROI.
As the pace of AI development accelerates, the survey found that 75% of respondents believe their companies prioritize AI training and upskilling, pointing to a significant shift underway in the job market.
Looking ahead, Van de Maele outlined key priorities for AI governance in the U.S.: leveraging data as a vital asset, establishing a trusted framework, preparing for the emergence of data talent, and emphasizing responsible access ahead of responsible AI.
“Governance must extend beyond IT; data governance should focus on data quality as well as quantity,” he emphasized.
The findings underscore the urgent need for comprehensive governance strategies as AI continues to reshape industries and challenge existing regulatory frameworks. While businesses are eager to embrace AI technologies, they are acutely aware of the associated risks and want clear guidelines from policymakers on responsible development and deployment.
The coming years are likely to witness intense discussions among government, industry, and civil society as they strive to create a regulatory environment that fosters innovation while safeguarding individual rights and promoting ethical AI practices. Companies of all sizes will need to remain informed and adaptable, emphasizing robust data governance and AI ethics to navigate future challenges and opportunities.