Presented by Dynatrace
The hype surrounding AI in the business world is at an all-time high. New co-pilot tools, virtual assistants, and large language models (LLMs) are being developed weekly to assist organizations across various functions. As these technologies proliferate, it’s essential for every organization to establish an internal code of conduct to mitigate risks and govern AI usage.
AI governance extends existing privacy and security protocols, enhancing organizations’ data processing practices. Just as there are policies defining data access and handling, an AI code of conduct clarifies acceptable AI use. By proactively establishing these guidelines, organizations empower their teams to adopt AI tools quickly, ensuring innovation occurs within a safe and secure framework.
How to Craft an AI Code of Conduct
An AI policy needs buy-in from both executives and employees: both groups have strong incentives to comply with upcoming regulations such as the EU's NIS2 directive, and both benefit from clarity about which tools are permissible for which use cases. Here are four key considerations for IT leaders when implementing an AI code of conduct:
1. Integrate AI into Existing Procurement Processes
An effective AI code of conduct should detail the steps employees must follow before procuring or using new AI tools. Many organizations mistakenly create new procedures specifically for AI, adding unnecessary complexity. Instead, they should subject AI tools to the same rigorous procurement process as any other data-sensitive technology, ensuring it aligns with the organization's privacy and ethical standards. To streamline decision-making, centralize requests within a cross-departmental governance board that includes representatives from engineering, business, security, and legal teams. This board should act as a facilitator, evaluating individual use cases and weighing costs against benefits.
2. Ban Free Tools Without Clear Privacy Guidelines
Because teams often gravitate toward freely available AI tools, organizations face risks from personal-use licenses, which typically come with lenient privacy terms. Employees may not realize that these free tools can give vendors access to company data. To mitigate this risk, the AI code of conduct should categorically prohibit free tools in any business context: employees should use only approved, commercially licensed solutions that guarantee full privacy protections.
3. Adopt a Security-First Approach to Vendors
Organizations must stay aware of how their technology vendors use AI within their products. The AI code of conduct should mandate ongoing review of vendor agreements to ensure that security remains a priority as vendors add AI features. For instance, AI-powered summarization in video conferencing software can aid productivity, but it may expose sensitive data if the vendor uses meeting content to train its models without consent. Organizations should treat data security as a core element of their AI code of conduct, even if that means restricting certain AI capabilities.
4. Educate Employees on Effective AI Use
Many employees have a rudimentary understanding of AI but lack the knowledge to leverage it effectively. For example, misconceptions about tools like ChatGPT often lead employees to believe these systems comprehend context the way humans do, which can result in unreliable outputs. An AI code of conduct should set clear expectations about what AI tools can and cannot do and the skills required to use them well. Organizations can support staff with training programs that keep pace with the technology as it evolves.
Creating Responsible AI Practices
Organizations do not operate in isolation; thus, it’s essential to hold external partners accountable for their AI practices. Companies should establish a public-facing trust center that outlines their AI policies, fostering transparency about AI tool implementation and usage. By making these proactive AI policies publicly available, organizations can learn from one another, refining their own codes of conduct.
Ultimately, this creates a virtuous cycle of AI governance in which companies collaborate to ensure transformative AI technologies are used responsibly and successfully, to the collective benefit of all.
Alois Reitbauer is the Chief Technology Strategist at Dynatrace.