Google Cloud to Address Legal Risks for Clients Using Generative AI

Google Cloud is taking a significant step to reassure its clients by pledging to cover legal claims arising from copyright infringement related to its AI tools. This commitment is particularly relevant for businesses that use AI models trained on copyrighted content. The protection notably covers two scenarios: when a corporate client is sued because the data Google used to train its AI models allegedly infringes copyright, and when output the client generates with Google’s AI tools inadvertently infringes on someone’s copyright.

As noted in a recent blog post by Google Cloud Legal Vice President Neal Suggs and Chief Information Security Officer Phil Venables, this protection is not entirely new; Google has historically offered safeguards concerning training data. However, feedback from clients indicated a desire for clearer, more explicit assurances regarding their legal safety when using these powerful tools.

The protections span a variety of services. Duet AI, which assists with tasks such as drafting emails and adding images to presentations, is included within Google Workspace tools like Google Docs, Gmail, Slides, and Meet. The coverage also extends to Vertex AI, Google’s platform for building and managing machine learning projects with MLOps practices. Covered Vertex AI features include search, conversation, the text and multimodal embedding APIs, visual captioning and Visual Q&A, and the Codey APIs for code generation.

It’s important to note that this coverage does not apply if clients deliberately infringe copyright. The approach aligns with recent announcements from industry giants such as Microsoft and Adobe. Last month, Microsoft committed to assuming legal responsibility when corporate clients using its AI Copilot products face copyright lawsuits. Likewise, Adobe, with its generative AI tool Firefly, has expanded its existing protections for stock images to cover generated images as well.

While these legal assurances are significant, companies integrating generative AI face additional challenges beyond copyright concerns. Issues such as AI hallucinations—where models produce fictitious information or present falsehoods as accurate—along with biases, cybersecurity vulnerabilities, high operational costs, the complexity of model architectures, and a shortage of qualified AI personnel remain prevalent obstacles in the generative AI landscape. These factors necessitate a comprehensive strategy for organizations looking to leverage AI technologies effectively and responsibly.
