Informa's Vishal Nigam Shares Insights on Large Multimodal Models

Vishal Nigam, the global data and science manager at Informa, explores the shift from large language models (LLMs) to large multimodal models (LMMs). Unlike LLMs, these models are not limited to text: they can also interpret images and, in the near future, video. That advancement could even allow robots to learn through onboard vision systems, dramatically expanding their operational capabilities.

Hallucinations, instances where AI generates incorrect or misleading information, remain a challenge, but Nigam notes that their frequency is steadily declining. This matters most when AI models are integrated with hardware, where erroneous outputs could lead to adverse real-world outcomes.

The implications of LMMs extend beyond technical capability to business practice. For instance, 52% of U.S. social media users already follow a virtual influencer. This trend shows how organizations can use similar technologies to achieve hyper-personalization in their marketing, tailoring content and experiences to individual user preferences more effectively than before.

As digital interaction evolves, integrating LMMs across sectors promises not only to enhance user engagement but also to set new standards in personalized communication. Embracing these innovations could pave the way for smarter decision-making, improved customer experiences, and ultimately greater business success.
