If you expected the AI craze to subside in 2024, prepare for a surprise. The advancements in hardware and software are unlocking broad applications of generative AI (gen AI), indicating that 2023 merely scratched the surface of what's possible.
This year, known as the Year of the Dragon in the Chinese Zodiac, will witness a strategic integration of generative AI across all sectors. With risks evaluated and strategies in place, businesses are ready to incorporate gen AI as a central component of their operations. CEOs and business leaders now see the potential and necessity of generative AI, actively embedding these technologies into their processes.
The evolving landscape positions gen AI not just as an option but as a key driver of innovation, efficiency, and competitiveness. This fundamental shift marks 2024 as the year generative AI moves from an emerging trend to an essential business practice.
Volume and Variety
A significant aspect of this transformation is the growing recognition of how generative AI expands both the volume and variety of applications, ideas, and content. The sheer quantity of AI-generated content is staggering: since 2022, AI users have created over 15 billion images, a volume that took human photographers roughly 150 years to produce. This output will change how historians analyze the internet after 2023, much as atmospheric nuclear testing left a permanent marker in radiocarbon dating that separates everything made before the bomb from everything made after it.
For enterprises, this expansion raises the bar across every field, marking a critical juncture where failing to engage with the technology means not only missed opportunities but also a competitive disadvantage.
The Jagged Frontier
In 2023, we learned that generative AI enhances both industry standards and employee capabilities. A YouGov survey revealed that 90% of workers believe AI boosts productivity. One in four respondents uses AI daily, while 73% engage with it at least weekly.
Moreover, studies indicate that properly trained employees using generative AI complete 12% more tasks, work 25% faster, and produce work of roughly 40% higher quality, with the largest gains among lower-skilled workers. Nevertheless, when tasks fall outside AI's capabilities, employees relying on it are 19% less likely to produce correct solutions.
This coexistence of strengths and weaknesses has led to what researchers call the "jagged frontier" of AI capabilities. On one side of the frontier, AI performs, with remarkable precision, tasks once thought impossible for machines; on the other, it struggles where human intuition and adaptability are crucial, in areas defined by nuance and complex decision-making.
Cheaper AI
As enterprises learn to navigate this jagged frontier, we can expect generative AI projects to become mainstream. Contributing to this trend is the falling cost of training foundation large language models (LLMs), thanks to advances in silicon optimization that roughly halve costs every two years.
With worldwide demand surging amid global shortages, the AI chip market is on track to become more affordable in 2024 as alternatives to industry leaders like Nvidia emerge. New fine-tuning methods such as Self-Play Fine-Tuning (SPIN) also promise to strengthen weak LLMs without additional human-annotated data, instead having the model generate its own synthetic training data.
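To illustrate the self-play idea, here is a schematic of the SPIN loop reduced to its structure; every function below is an illustrative stub rather than a real training API. In each round, the current model generates synthetic responses to the prompts in the original fine-tuning set, and the next model is trained (with a DPO-style pairwise objective in the actual method) to prefer the human responses over its own generations.

```python
# Schematic of a Self-Play Fine-Tuning (SPIN) loop, reduced to its structure.
# All functions are illustrative stubs, not a real training API.

def generate_responses(model, prompts):
    """Stub: sample one synthetic response per prompt from the current model."""
    return [model(p) for p in prompts]

def prefer_human(model, prompts, human_responses, synthetic_responses):
    """Stub: return a model updated to score the human responses above the
    model's own synthetic ones (the self-play 'game')."""
    return model

def spin(model, sft_dataset, num_iterations=3):
    prompts = [ex["prompt"] for ex in sft_dataset]
    human = [ex["response"] for ex in sft_dataset]
    for _ in range(num_iterations):
        synthetic = generate_responses(model, prompts)        # the model plays its own opponent
        model = prefer_human(model, prompts, human, synthetic)
    return model
```

The point of the loop is that the only human-annotated data ever required is the original supervised set; each round's "negative" examples come free from the model itself.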
Enter the ‘Modelverse’
This reduction in costs is paving the way for more companies to develop and deploy their own LLMs, igniting a wave of innovative applications in the coming years. In 2024 we will also see a shift from primarily cloud-based models to locally executed AI, driven by hardware advances like Apple Silicon and the untapped potential of everyday mobile devices.
Small language models (SLMs) will grow in popularity within medium and large enterprises, meeting more specific, niche needs. Unlike LLMs, which handle vast datasets, SLMs focus on domain-specific data sourced internally, ensuring relevance and privacy.
A Shift to Large Vision Models (LVMs)
As 2024 unfolds, attention will shift from LLMs to large vision models (LVMs), particularly those tailored to specific domains that enhance visual data processing. While LLMs trained on internet text adapt well to proprietary documents, LVMs trained mainly on generic internet images struggle with specialized visual content used in fields like manufacturing and life sciences.
Research shows that adapting an LVM to a specific domain using around 100,000 unlabeled images can significantly reduce the amount of labeled data required while improving performance. These focused models excel at tasks like defect detection and object localization, surpassing generic LVMs in domain-specific applications.
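As a concrete illustration, here is a minimal sketch, assuming a PyTorch/torchvision setup; it is not the pipeline from the research cited above. It adapts a vision backbone to a niche domain with a self-supervised rotation-prediction task on unlabeled domain images, then fine-tunes a small head on a handful of labeled examples. The backbone choice, the rotation pretext task, and the random tensors standing in for image batches are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(weights=None)
backbone.fc = nn.Identity()                       # use the CNN purely as a 512-d feature extractor

# Stage 1: self-supervised pretext task on *unlabeled* domain images
# (predict which of four rotations was applied, so no annotations are needed).
rot_head = nn.Linear(512, 4)
opt = torch.optim.Adam(list(backbone.parameters()) + list(rot_head.parameters()), lr=1e-4)
for _ in range(10):                               # placeholder loop over unlabeled batches
    imgs = torch.randn(8, 3, 224, 224)            # stand-in for a batch of domain images
    k = torch.randint(0, 4, (8,))                 # rotation label: 0/90/180/270 degrees
    rotated = torch.stack([torch.rot90(im, int(r), dims=(1, 2)) for im, r in zip(imgs, k)])
    loss = nn.functional.cross_entropy(rot_head(backbone(rotated)), k)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: fine-tune a small task head (e.g. defect vs. no defect) on a few labeled images.
clf_head = nn.Linear(512, 2)
opt = torch.optim.Adam(list(backbone.parameters()) + list(clf_head.parameters()), lr=1e-5)
for _ in range(5):
    imgs, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))   # labeled stand-ins
    loss = nn.functional.cross_entropy(clf_head(backbone(imgs)), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```

In practice the pretext task would likely be something stronger, such as masked-image modeling or contrastive learning, but the two-stage shape is the same: spend the plentiful unlabeled domain images first, then ask very little of the expensive labeled ones.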
In parallel, we will see businesses adopt large graphical models (LGMs), which are adept at processing the kind of tabular data found in spreadsheets. Their ability to analyze time-series data offers fresh insight into sequential business records and, with it, a clearer picture of enterprise operations.
Ethical Considerations
These advancements bring with them the need for stringent ethical oversight. Past experience with general-purpose technologies like smartphones and social media has shown the need for regulatory frameworks that head off negative societal impacts. While generative AI offers immense benefits, its evolution must be guided to avoid missteps that, at this scale, could become widespread problems.
One major ethical dilemma surrounding generative AI is copyright. As these technologies evolve, they raise urgent questions about intellectual property rights concerning AI-generated content that relies on existing human-created works for training. The challenge lies in whether and how this content should be subject to copyright laws.
The tension between AI and copyright is significant because traditional copyright law aims to prevent the unlawful use of others' intellectual property: drawing inspiration is permissible, replication is not. Unlike a person, who can only ever consume a limited amount of material, AI can analyze vast volumes of it, blurring the line between inspiration and replication.
We are poised to see landmark cases like NYT vs. OpenAI shape the copyright debate and influence how the media adapts to a new AI-driven landscape in 2024.
Deepfakes and Political Implications
On the geopolitical front, 2024 will be dominated by how AI intersects with critical elections worldwide. Countries that are home to over half of the global population head to the polls this year, with elections scheduled in major nations including the U.S., India, and South Africa.
Disinformation campaigns have already emerged, as seen in Bangladesh, where pro-government influencers used low-cost AI tools to push false narratives. One deepfake, later removed, depicted an opposition figure retracting support for Palestinian solidarity, a damaging narrative in a predominantly Muslim country.
The threat posed by AI-generated imagery is not hypothetical; studies show that minor alterations created to mislead AI can similarly affect human perception. This finding highlights the need for further research on the impact of adversarial images on both humans and AI systems.
Calls for watermarking and content credentials to distinguish authentic content from synthetic material are growing, but challenges remain: how reliable detection can be, how such tools might themselves be misused, and how to keep the line between real and manipulated media from eroding.
With public trust at an all-time low, 2024 stands to blend significant electoral events with transformative AI technology. This year will undoubtedly showcase the profound impact and applications of AI in politics and beyond. Brace yourself for what lies ahead.
Elliot Leavy is the founder of ACQUAINTED, Europe’s first generative AI consultancy.