How Francine Bennett Leverages Data Science to Enhance AI Ethics and Responsibility

Francine Bennett is a founding board member of the Ada Lovelace Institute and currently serves as its interim director. Previously, she worked in the biotech sector, using AI to discover treatments for rare diseases. She also co-founded a data science consultancy and is a founding trustee of DataKind UK, which supports British charities with data science expertise.

How did you first get involved in AI? What drew you to this field?

I began my journey in pure mathematics, initially unimpressed by applied work. While I enjoyed exploring computers, I viewed applied mathematics as mere calculation without intellectual depth. My interest in AI and machine learning developed later when I recognized the vast potential of abundant data in solving diverse problems. This realization opened my eyes to the fascinating opportunities within AI and machine learning that I had previously overlooked.

What achievement in the AI domain are you most proud of?

I take pride in efforts that, while not technically complex, make a real difference in people's lives. For instance, using machine learning to identify hidden patterns in patient safety reports can help healthcare professionals improve future patient outcomes. I'm also proud to advocate for placing people and society at the forefront of discussions, as I did at this year's U.K. AI Safety Summit. My experience with both the technology and its real-world implications allows me to speak with authority on these matters.

How do you handle the challenges of a male-dominated tech and AI industry?

I focus on collaborating with individuals and organizations that prioritize skills and talent over gender. I strive to foster inclusive teams, as this environment encourages everyone to reach their full potential. Given AI’s broad implications, particularly for marginalized communities, it’s crucial that diverse perspectives are included in its development and oversight.

What advice would you offer to women looking to break into AI?

Enjoy the journey! AI is a captivating and constantly evolving field filled with intellectually stimulating challenges. There are countless significant applications waiting to be discovered. Don't stress about mastering every technical detail; it's essential to start with what intrigues you and build from there.

What challenges does the AI field face as it continues to evolve?

Currently, there is no unified vision of AI's role in society. Technological development races ahead, often with significant social and environmental consequences, while understanding of the potential risks and unintended effects lags behind. The group of people steering this technology's development remains narrow, but we now have a unique opportunity to define our aspirations for AI and shape its future responsibly. Reflecting on previous technological shifts can inform how we manage AI's evolution and address its challenges effectively.

What should AI users keep in mind?

AI users need to understand the capabilities of the tools they are working with and clearly express their needs from these technologies. It’s easy to view AI as an enigma, but it’s fundamentally a toolkit. I want users to feel empowered to control how they utilize these tools. However, responsibility also lies with governments and industry to create an environment enabling users to feel confident in their use of AI.

What is the best approach to building AI responsibly?

At the Ada Lovelace Institute, we frequently address this critical question. While there are many aspects to consider, two key points stand out. First, we must be willing to halt development when necessary. Often, AI projects gain momentum, and the team adds “guardrails” post-development, rather than pausing to consider the ethical implications. Second, understanding the diverse experiences of all stakeholders is essential. By genuinely engaging with a broad range of perspectives, we increase our chances of creating responsible AI that addresses real issues without exacerbating existing inequalities.

For instance, the Ada Lovelace Institute collaborated with the NHS to create an algorithmic impact assessment, ensuring developers evaluate the societal impact of AI systems before accessing healthcare data. This process encourages the inclusion of lived experiences from affected communities.

How can investors promote responsible AI development?

Investors can drive change by asking pivotal questions regarding their investments and potential outcomes. They should inquire about what responsible, high-functioning AI looks like and where issues might arise. Understanding the broader societal implications is crucial, as is having a plan for adjustments if needed. While there’s no universal approach to responsible AI investment, simply posing these questions can influence companies to prioritize ethical practices in their AI initiatives.
