Facebook commenced the second day of F8 2019, its annual developers conference, with a keynote focused on the technologies designed to combat abuse on its platform. As previously outlined, artificial intelligence (AI) is pivotal in maintaining the safety of Facebook's apps and services. The company reports that AI now proactively removes over 99% of spam, fake accounts, and terrorist propaganda. However, proactive detection rates remain far lower for hate speech (51.6%) and harassment (14.9%).
A significant area where Facebook aims to enhance its technology is inclusivity. This means developing AI systems that function fairly across diverse demographics, regardless of skin color or physical characteristics. To achieve a more inclusive AI, Facebook is concentrating on three primary aspects: user studies, algorithm development, and system validation.
Lade Obamehinti, who leads the technical strategy for Facebook's augmented reality and virtual reality team, shared her personal experience on stage. While using a pre-production version of the Portal smart camera, she noticed a flaw: during a video call, the camera zoomed in on her white male colleague instead of her. As a Nigerian-American, Obamehinti investigated this issue and identified gaps in representation within the Portal's AI, which could lead to subpar experiences for users.
Because machines, unlike humans, cannot inherently distinguish different skin tones, Facebook is testing how its systems perform across skin tones under varying lighting conditions. This data helps improve the AI that powers its services, including augmented reality applications and the Portal camera.
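Facebook has not published the details of this evaluation pipeline, but the underlying idea of testing across skin tones and lighting conditions can be illustrated with a minimal sketch: bucket detection results by skin-tone group and lighting condition, then compare accuracy across buckets. All field names and data here are hypothetical, chosen for illustration only.

```python
from collections import defaultdict

def accuracy_by_group(results):
    """Compute detection accuracy per (skin_tone, lighting) bucket.

    `results` is a list of dicts with hypothetical fields:
    "skin_tone" (e.g. a Fitzpatrick-scale bucket), "lighting",
    and "correct" (whether the detector behaved as intended).
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for r in results:
        key = (r["skin_tone"], r["lighting"])
        totals[key] += 1
        hits[key] += int(r["correct"])
    return {key: hits[key] / totals[key] for key in totals}

# Hypothetical evaluation records, not real Facebook data.
sample = [
    {"skin_tone": "V",  "lighting": "dim",    "correct": False},
    {"skin_tone": "V",  "lighting": "dim",    "correct": True},
    {"skin_tone": "II", "lighting": "dim",    "correct": True},
    {"skin_tone": "II", "lighting": "bright", "correct": True},
]
rates = accuracy_by_group(sample)
# A large accuracy gap between buckets signals a representation
# problem of the kind Obamehinti describes with the Portal camera.
```

In a real audit, the buckets would feed back into data collection: groups with low accuracy indicate where the training data underrepresents users.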
Facebook’s ultimate goal is to ensure that AI-driven devices work equitably for all users and to build a robust framework for eliminating bias from its systems, so that it can deliver the best personalized experiences. “We have to truly understand our diverse product community and address critical user issues when working with AI,” Obamehinti emphasized. “Inclusive means not excluding anyone.”