This Week in AI: Why AI Ethics Continues to Be Overlooked

Keeping pace with the rapidly evolving AI landscape is no small feat. Since an AI isn't ready to do it for you just yet, here’s a concise overview of the latest developments in machine learning, along with key research and experiments that we didn’t delve into previously.

This week in AI, the news cycle has finally (thankfully!) started to calm down as we approach the holiday season. However, that's not to imply a lack of interesting stories to report on, which is both a blessing and a curse for a sleep-deprived journalist.

One headline from the Associated Press particularly caught my attention: “AI image generators are being trained on explicit photos of children.” This alarming report reveals that LAION, a dataset that well-known AI image generators such as Stable Diffusion and Imagen rely on, contains links to thousands of suspected child sexual abuse images. The Stanford Internet Observatory, working with anti-abuse organizations, identified the illegal content and reported the links to law enforcement.

In response, LAION, a nonprofit organization, has temporarily taken down its training datasets and committed to removing the flagged material before republishing them. The incident highlights the urgent need for ethical considerations in the development of generative AI products, especially as market competition intensifies.

With the rise of no-code AI model development tools, it’s easier than ever to train generative AI on virtually any dataset. While this trend benefits both startups and established tech companies looking to launch models quickly, it raises ethical concerns as the pressure mounts to prioritize speed over responsible practices.

Navigating ethics is undeniably challenging. For example, sifting through the thousands of inappropriate images in LAION, as illustrated this week, is a significant undertaking that won’t be resolved overnight. Ideally, developing AI ethically should involve collaboration with all stakeholders, particularly organizations that represent marginalized groups often impacted by AI systems.

The industry has seen its share of AI deployment decisions influenced more by shareholder interests than ethical considerations. A case in point is Bing Chat (now Microsoft Copilot), which, upon launch, controversially compared a journalist to Hitler and made disparaging comments about their appearance. As of October, both ChatGPT and Google’s Bard have been criticized for providing outdated and potentially racist medical advice. Furthermore, recent iterations of OpenAI’s image generator, DALL-E, have demonstrated signs of Anglocentrism.

It’s evident that real harms are being inflicted in the race for AI dominance, a race often driven by Wall Street’s expectations. However, the EU’s recently agreed AI regulations, which impose penalties for noncompliance with specific AI guidelines, may provide a glimmer of hope. Still, a considerable journey lies ahead.

Here are additional notable AI stories from the past few days:

- Predictions for AI in 2024: Devin outlines predictions for AI's influence on the U.S. primary elections and what lies ahead for OpenAI, among other insights.

- Against Pseudanthropy: Devin argues for a prohibition on AI mimicking human behavior.

- Microsoft Copilot Expands to Music Creation: Thanks to an integration with the GenAI music app Suno, Microsoft Copilot can now compose songs.

- Facial Recognition Banned at Rite Aid: The Federal Trade Commission has prohibited Rite Aid from using facial recognition technology for five years due to its reckless deployment, which harmed customer privacy.

- EU Boosts AI Startups: The EU is enhancing its initiative to support local AI startups by providing access to processing power on the bloc’s supercomputers.

- OpenAI Strengthens Internal Safety Measures: To tackle harmful AI risks, OpenAI is expanding its safety protocols, forming a "safety advisory group" to oversee technical teams, with the board now granted veto power over executive decisions.

- Q&A with Ken Goldberg: For his Actuator newsletter, Brian interviews Ken Goldberg, a professor at UC Berkeley, discussing trends in humanoid robotics and the robotics industry.

- CIOs Take a Conservative Approach to GenAI: Ron notes that while CIOs face pressure to deliver engaging user experiences akin to ChatGPT, many are proceeding cautiously in adopting generative AI for enterprises.

- News Publishers Sue Google Over AI: A class-action lawsuit from several news publishers accuses Google of anticompetitive practices involving AI technologies such as the Search Generative Experience and the Bard chatbot.

- OpenAI Partners with Axel Springer: OpenAI has struck a deal with Axel Springer, the Berlin-based publisher behind Business Insider and Politico, to train its generative AI models on the publisher's content and keep ChatGPT updated with recent articles.

- Google Expands Gemini Integration: Google has integrated its Gemini models into a wider array of products and services, enhancing its AI development platform and tools for creating AI-driven experiences.

In recent research, one highlight is the life2vec study from Denmark, which uses data points from an individual's life to estimate personality traits and life expectancy. The researchers don't claim the model is infallible, but it demonstrates how our experiences can potentially be modeled using machine learning techniques.

Additionally, researchers at CMU have developed Coscientist, an LLM-driven assistant for researchers that can autonomously handle various lab tasks in specific chemistry domains, automating parts of routine laboratory work.

Google's AI research team also made strides with FunSearch, which is designed to facilitate mathematical discoveries through an innovative use of AI models: one model generates candidate solutions while a paired evaluator checks them, a setup that limits hallucinations and helps apply the results across various fields.
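For readers curious about the mechanics, here's a minimal toy sketch of that generate-and-evaluate pattern. This is not DeepMind's code; the generator and scoring function below are hypothetical placeholders, and the point is simply that only candidates the evaluator verifies as improvements are kept.

```python
import random

# A minimal, hypothetical sketch of a generate-and-evaluate loop in the spirit
# of FunSearch. In the real system the generator is an LLM proposing programs
# and the evaluator is a deterministic scorer; here both are toy stand-ins.

def generate_candidate(best_so_far: float) -> float:
    """Placeholder for the proposal step: perturb the current best guess."""
    return best_so_far + random.uniform(-1.0, 1.0)

def evaluate(candidate: float) -> float:
    """Placeholder scoring function: higher is better, with a peak at x = 3."""
    return -(candidate - 3.0) ** 2

def search(iterations: int = 1000) -> tuple[float, float]:
    best = 0.0
    best_score = evaluate(best)
    for _ in range(iterations):
        candidate = generate_candidate(best)
        score = evaluate(candidate)  # the evaluator, not the generator, is the arbiter
        if score > best_score:       # only verifiably better candidates survive,
            best, best_score = candidate, score  # which is what curbs hallucinations
    return best, best_score

if __name__ == "__main__":
    print(search())
```

The design choice worth noticing is the separation of roles: the generative model is free to be creative, while a deterministic check decides what counts as progress.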

Moreover, StyleDrop, a new generative imagery tool, allows users to specify styles by providing reference images, resulting in highly tailored outputs.

Lastly, in the realm of video, Google has launched VideoPoet, leveraging LLMs for various video-related tasks, from turning text into video to enhancing visual storytelling—though coherence over time remains a notable challenge.

Despite these advancements, researchers from Stanford caution against the potential for AI to perpetuate outdated medical stereotypes, especially when applied within the health sector. Vigilance is key to ensuring that AI technologies serve to elevate rather than undermine equitable healthcare practices.

Allow me to leave you with a creative snippet produced by Bard, complete with a shooting script and prompts—watch out, Pixar!
