In October, Box introduced a new pricing strategy for its generative AI features, moving away from a flat-rate model to a distinctive consumption-based approach. Each user receives 20 credits per month, which can be applied to a variety of AI tasks; each task costs one credit, so a user can trigger up to 20 AI events from their own allowance. Companies also get a shared pool of 2,000 additional credits, and if demand exceeds that, they have to talk to a salesperson about purchasing more.
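To make the mechanics concrete, here is a minimal sketch of how such a credit-metered model could be tracked. The class and its logic are hypothetical; only the 20-credit monthly allowance, the one-credit-per-event cost, and the 2,000-credit shared pool come from the figures Box announced.

```python
class AICreditLedger:
    """Illustrative accounting for a Box-style, credit-metered AI pricing model (hypothetical)."""

    USER_MONTHLY_CREDITS = 20    # per-user monthly allowance, per Box's announcement
    SHARED_POOL_CREDITS = 2_000  # additional shared pool across the company
    CREDITS_PER_EVENT = 1        # each AI task consumes one credit

    def __init__(self, num_users: int) -> None:
        self.user_balances = {u: self.USER_MONTHLY_CREDITS for u in range(num_users)}
        self.shared_pool = self.SHARED_POOL_CREDITS

    def record_event(self, user: int) -> str:
        """Charge one AI event against the user's allowance first, then the shared pool."""
        if self.user_balances[user] >= self.CREDITS_PER_EVENT:
            self.user_balances[user] -= self.CREDITS_PER_EVENT
            return "charged to user allowance"
        if self.shared_pool >= self.CREDITS_PER_EVENT:
            self.shared_pool -= self.CREDITS_PER_EVENT
            return "charged to shared pool"
        return "credits exhausted: contact sales for more"


ledger = AICreditLedger(num_users=50)
for _ in range(25):                     # one heavy user triggers 25 AI events
    outcome = ledger.record_event(user=0)
print(outcome, ledger.shared_pool)      # the last 5 events spill into the shared pool
```

Even in this toy version, the shape of the model is visible: per-user allowances cap the common case, while the shared pool absorbs heavy users before a sales conversation becomes necessary.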
Box CEO Aaron Levie explained that this pricing model allows the company to charge based on actual usage, recognizing that some users will rely on AI features more than others. It also accounts for the cost of calling the OpenAI API, which provides the large language model underlying Box's AI features.
In contrast, Microsoft has opted for a more conventional pricing strategy, which it announced in July. The tech giant plans to charge $30 per user, per month for its Copilot features, in addition to the standard Office 365 subscription fees, which vary among customers.
As the year progressed, it became clear that cloud software firms would incorporate generative AI capabilities. At the Web Summit in November, a panel discussion featuring Christine Spang, co-founder and CTO of Nylas, and Manny Medina, CEO of sales enablement platform Outreach, shed light on the challenges SaaS companies face in implementing these innovative features.
Spang acknowledged that while there has been a lot of buzz surrounding generative AI, its true value lies in how effectively software companies can integrate it into their products. “I wouldn’t say we’re at a point where hype meets reality completely, but there’s undeniable value, particularly when this technology is connected to other systems and applications to create impactful use cases,” she stated.
Finding the right balance between satisfying customer demand for new features and establishing a pricing structure that ensures profitability is crucial. Medina pointed out, “Those of us bundling generative AI features must constantly reassess our costs with our large language model provider, which can escalate quickly. Until we create distinct experiences worth paying for, we will face challenges.”
It’s notable that AI model creators like OpenAI are already announcing price reductions as they develop more efficient models or lower prices on older products alongside new feature launches. In June, for instance, OpenAI introduced enhancements that boost processing power, offering better value while also reducing the cost of previous versions for developers who may not need all the latest advancements.
Spang indicated that her company already employs a consumption model based on the number of connected email or calendar applications, which she plans to extend as generative AI features are introduced. “We already see variations in message volume among users, so it's vital to adopt a pricing model that resonates, allowing us to pinpoint a suitable price point for the median,” she explained.
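Her point about pricing to the median can be illustrated with a back-of-the-envelope calculation. The volumes, target price, and names below are purely hypothetical; they only show how anchoring a per-unit price on the median account's usage plays out across lighter and heavier users.

```python
import statistics

# Hypothetical monthly message volumes for five connected accounts.
monthly_messages = [120, 450, 800, 1_500, 9_000]

# Anchor the per-message price so the median account lands on a target monthly bill.
target_bill_for_median_account = 20.00               # dollars, illustrative
median_volume = statistics.median(monthly_messages)  # 800 in this sample
price_per_message = target_bill_for_median_account / median_volume

for volume in monthly_messages:
    print(f"{volume:>6} messages -> ${volume * price_per_message:,.2f}/month")
```

Light users pay little, the median account hits the target, and the heaviest account's bill scales with its outsized usage, which is exactly the behavior a consumption model is meant to capture.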
However, Medina noted that applying a consumption model in applications is more challenging than for an API provider like Nylas. “For application providers, I’m not sure if a consumption model is feasible. It works differently when you offer building blocks, like Nylas does,” he remarked.
There's also uncertainty about whether organizations will accept a flat fee, such as Microsoft's $30 per user, per month charge for Copilot on top of Office 365, unless they perceive genuine value in the added expense. “The decision hinges on either a significant cost reduction to make it accessible or discovering a viable monetization strategy,” Medina added.
Furthermore, compliance costs related to AI implementation present another pressing concern for companies. “If governments, like in the U.S., enforce regulations requiring an AI ingredient list, that demands a level of transparency we won't get from OpenAI, making the situation even tougher,” he indicated.
CIOs managing tech budgets are meticulously evaluating these technologies and pondering whether the additional expenses will lead to enhanced employee productivity. Sharon Mandell, CIO of Juniper Networks, expressed her intent to assess the return on investment (ROI) for these features. “In 2024, we will be scrutinizing the generative AI narrative. If these tools deliver the promised benefits, the ROI could be substantial and may even allow us to eliminate other tools,” she explained. Consequently, she and fellow CIOs are conducting cautious pilot programs, seeking ways to gauge whether genuine productivity gains justify the increased investment.
Regardless of the challenges, vendors will keep experimenting with pricing models while their customers run pilots and proofs of concept. Both parties stand to gain from these innovations, yet the true benefits will remain unclear until the tools are more widely deployed. For now, the cautious approach CIOs are taking to generative AI in the enterprise reflects that both its value and its price are still being worked out.