AI has dramatically transformed the technology landscape, especially following the public release of ChatGPT. Nevertheless, the rapid advancement of AI raises significant concerns. Leading institutions, such as the AI research lab Anthropic, warn of its potentially destructive capabilities, particularly as competition among systems like ChatGPT intensifies. Key issues include the loss of millions of jobs, data privacy violations, and the proliferation of misinformation, all of which have caught the attention of global stakeholders, particularly governments.
In the United States, Congress has intensified its focus on AI regulation, introducing bills aimed at enhancing transparency and establishing a risk-based regulatory framework, among other measures.
In October, the Biden-Harris administration issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order outlines guidelines across multiple areas, including cybersecurity, privacy, bias, civil rights, algorithmic discrimination, education, workers' rights, and research. Additionally, as part of the G7, the administration recently joined in introducing a voluntary code of conduct for AI developers.
Similarly, the European Union is making strides with its proposed AI legislation, the EU AI Act. This regulation targets high-risk AI tools that may violate individual rights, particularly in sectors such as aviation. The EU AI Act emphasizes essential controls for high-risk AI, including robustness, privacy, safety, and transparency. Systems deemed to pose an unacceptable risk will be prohibited from the market.
Although discussions continue regarding the government's role in AI regulation, effective governance is also beneficial for businesses. Striking a balance between innovation and regulation can protect organizations from unnecessary risks while providing them with a competitive edge.
The Role of Business in AI Governance
Businesses have a responsibility to mitigate the risks associated with AI technologies. Generative AI's reliance on vast amounts of data raises important privacy issues. Without proper governance, consumer trust and loyalty may decline as customers worry about how their sensitive information is used.
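As one illustration of such a governance control, some organizations screen text for personally identifiable information before it is logged, stored, or sent to a generative model. The Python sketch below is a deliberately simplified, hypothetical example; the regex patterns and the redact helper are assumptions for illustration, not a production-grade PII filter (real deployments rely on dedicated detection tooling and human review).

```python
import re

# Hypothetical, simplified PII patterns for illustration only.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with typed placeholders before the text
    is logged, stored, or sent to a generative model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or 555-867-5309."))
# -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```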
Moreover, companies must be aware of potential liabilities linked to generative AI. If AI-generated materials resemble existing works, businesses could face copyright infringement claims, exposing them to legal and financial repercussions.
It's crucial to recognize that AI outputs can replicate societal biases, embedding them in decision-making systems that shape resource allocation and media visibility. Effective governance entails creating robust processes to minimize bias risk. This involves engaging affected parties in reviewing model parameters and data, fostering a diverse workforce, and refining datasets so that outputs are more equitable.
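One way to make "reviewing parameters and data" concrete is to measure how a model's favorable outcomes are distributed across groups. The sketch below computes a demographic parity gap, a standard fairness metric; the sample data, group labels, and any acceptable threshold are illustrative assumptions, not recommended values.

```python
from collections import defaultdict

def demographic_parity(decisions):
    """decisions: list of (group, approved) pairs. Returns the gap
    between the highest and lowest approval rates across groups,
    plus the per-group rates themselves."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative loan-approval decisions (hypothetical data).
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap, rates = demographic_parity(sample)
print(rates)                      # per-group approval rates
print(f"parity gap: {gap:.2f}")   # flag for human review above a chosen threshold
```

A large gap does not prove discrimination on its own, but it gives a reviewing team a concrete, trackable signal to investigate.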
Going forward, establishing strong governance is essential to protect individual rights while promoting the advancement of transformative AI technologies.
A Framework for Regulatory Practices
Implementing due diligence can mitigate risks, but establishing a strong regulatory framework is equally important. Companies should focus on the following key aspects:
Identify and Address Known Risks
While opinions vary on the most pressing threats posed by unchecked AI, there is broad consensus on several concerns: job displacement, privacy violations, data protection, social inequality, and intellectual property disputes. Businesses should assess which of these risks are relevant to their operations. By agreeing internally on those risks, organizations can create guidelines to address and manage them proactively.
For instance, my company, Wipro, has developed a four-pillar framework aimed at fostering a responsible AI-driven future, focusing on individual, social, technical, and environmental considerations. This framework serves as one approach for companies to establish robust guidelines for their interactions with AI systems.
Enhance Governance Practices
Organizations that leverage AI must prioritize governance to ensure accountability and transparency throughout the AI lifecycle. Implementing a governance structure helps document how models are trained, reducing risks from model unreliability, introduced bias, and drift in the relationships between variables over time.
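A lightweight way to document the training process, loosely inspired by the "model cards" idea, is to store a structured metadata record alongside each model artifact. The Python sketch below uses a hypothetical schema chosen for illustration, not a mandated standard; the field names and example values are assumptions.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import date

@dataclass
class ModelRecord:
    """Minimal training-documentation record; all fields are illustrative."""
    model_name: str
    version: str
    trained_on: str                 # dataset name and version
    training_date: str
    intended_use: str
    known_limitations: list = field(default_factory=list)
    fairness_checks: dict = field(default_factory=dict)

record = ModelRecord(
    model_name="credit-risk-scorer",          # hypothetical model
    version="1.3.0",
    trained_on="loans-2023-q4 (v7)",          # hypothetical dataset
    training_date=str(date.today()),
    intended_use="Ranking applications for human review, not automated denial.",
    known_limitations=["Sparse data for applicants under 21"],
    fairness_checks={"demographic_parity_gap": 0.04},
)

# Persist next to the model artifact so audits can trace how it was trained.
with open("model_record.json", "w") as f:
    json.dump(asdict(record), f, indent=2)
```

Keeping such records under version control alongside the model makes later audits, and explanations to regulators or customers, far easier than reconstructing the history after the fact.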
AI systems are inherently sociotechnical, composed of data, algorithms, and human involvement. Therefore, it’s vital to integrate both technological and social considerations into regulatory frameworks. Collaboration among businesses, academia, government, and society is essential to prevent the development of AI solutions by homogeneous groups, which could lead to unforeseen challenges.
Ivana Bartoletti is the global chief privacy officer for Wipro Limited.