Google Launches MedLM: A New Suite of Generative AI Models for Healthcare Innovation

Google sees a significant opportunity to leverage generative AI models to enhance healthcare workflows and assist medical professionals in their tasks. Today, the tech giant unveiled MedLM, a collection of AI models specifically fine-tuned for the healthcare sector. Built on Med-PaLM 2, a model that consistently excels in medical exam evaluations, MedLM is accessible to Google Cloud clients in the U.S. (with limited availability in select other regions) who have been granted access via Vertex AI, Google’s comprehensive AI development platform.

Currently, two versions of MedLM are available: a larger model tailored for “complex tasks” and a smaller, adaptable version optimized for “scaling across various tasks.” According to Yossi Matias, Google’s VP of engineering and research, “Our collaboration with different organizations has shown that the best model for a specific task can vary greatly depending on the situation. For instance, summarizing conversations may be best suited to one model, while searching through medication options might be more effective with another.”

One early adopter of MedLM, HCA Healthcare, has been experimenting with the models in emergency departments to assist physicians in drafting patient notes. Another example is BenchSci, which has incorporated MedLM into its “evidence engine” to identify, classify, and rank emerging biomarkers.

“We are closely collaborating with practitioners, researchers, and health organizations daily,” Matias emphasizes.

Google, Microsoft, and Amazon are competing fiercely for a slice of the healthcare AI market, which some analysts project will be worth tens of billions of dollars by 2032. Amazon recently rolled out AWS HealthScribe, which uses generative AI to transcribe and analyze patient-doctor conversations, while Microsoft is piloting a range of AI-powered healthcare products, including medical assistant tools driven by large language models.

However, there are valid concerns surrounding AI technology in healthcare, particularly regarding its efficacy and reliability. Historical instances highlight this skepticism; for example, Babylon Health, which partnered with the U.K.’s National Health Service, faced scrutiny for claims about its diagnostic capabilities. Similarly, IBM was compelled to sell its Watson Health division at a loss due to technical shortcomings that undermined client confidence.

Although some may argue that generative models like MedLM are more advanced, research indicates that their accuracy in answering medical questions, even basic ones, leaves much to be desired. A study co-authored by ophthalmologists found alarming inaccuracies when ChatGPT and Google's Bard were asked about eye conditions. Other research has shown ChatGPT generating flawed cancer treatment plans, and both ChatGPT and Bard repeating erroneous medical claims about kidney, lung, and skin health.

In October, the World Health Organization (WHO) publicly expressed concerns about the risks of generative AI in healthcare, highlighting issues such as the production of harmful misinformation, the disclosure of sensitive health data, and the spread of disinformation about health conditions. The WHO warned that the accelerated adoption of untested generative AI systems could endanger patients and erode trust in healthcare technologies: “While we support the responsible deployment of generative AI to assist healthcare professionals and patients, there is a palpable need for cautious implementation,” the organization stated.

Google has reiterated its commitment to exercising prudence in the rollout of its generative AI healthcare tools. “Our focus remains on empowering professionals to utilize this technology responsibly,” Matias concluded. “We are dedicated not only to advancing healthcare but also to ensuring these advancements benefit everyone.”
