Microsoft's Azure AI Search is making strides in affordability for developers building generative AI applications. While pricing remains unchanged, a notable increase in vector and storage capacity lets the service deliver more capacity for each dollar spent. Additionally, the cloud-based service now integrates directly with products from its partner OpenAI.
Previously known as Azure Cognitive Search, Azure AI Search empowers companies to provide precise and hyper-personalized responses in their AI solutions.
Microsoft's latest update lets developers scale their applications to a "multi-billion vector index" in a single search, without sacrificing speed or performance. The update delivers an 11-fold increase in vector index size, a sixfold boost in total storage, and a twofold improvement in indexing and query throughput.
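For readers unfamiliar with what a vector index does, the core idea can be sketched in a few lines of plain Python. This is a toy brute-force search over hand-made embeddings, purely illustrative; a production service like Azure AI Search uses approximate nearest-neighbor structures to reach billion-vector scale:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def vector_search(index, query_vector, k=2):
    """Return the k documents whose embeddings are closest to the query."""
    scored = [(cosine_similarity(vec, query_vector), doc)
              for doc, vec in index.items()]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:k]]

# Toy index: document id -> embedding (real embeddings have hundreds of dims).
index = {
    "pricing-faq":   [0.9, 0.1, 0.0],
    "setup-guide":   [0.1, 0.8, 0.2],
    "release-notes": [0.0, 0.2, 0.9],
}

# A query embedding close to "pricing-faq" retrieves that document first.
print(vector_search(index, [0.85, 0.15, 0.05], k=1))  # → ['pricing-faq']
```

Scanning every vector like this is linear in index size; the headline improvement here is precisely that the managed service can now hold and query far more such vectors per tier.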
These enhancements are available for customers subscribed to Azure AI Search's basic and standard tiers across several countries, including the U.S., U.K., United Arab Emirates, Switzerland, Sweden, Poland, Norway, Korea, Japan, Italy, India, France, Canada, Brazil, and Australia.
Microsoft also provides greater flexibility for enterprise customers using Azure AI Search by integrating with external large language models (LLMs). The company's retrieval augmented generation (RAG) support now works with OpenAI's ChatGPT, GPTs, and Assistants API, positioning Azure AI Search as the retrieval engine behind queries and file uploads for those products.
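The RAG pattern mentioned above is simple at its core: retrieve relevant passages from a search index, then put them in front of the LLM alongside the user's question. A minimal sketch, with an illustrative `build_rag_prompt` helper that is not part of any Azure or OpenAI API:

```python
def build_rag_prompt(question, retrieved_docs):
    """Assemble a grounded prompt: retrieved passages first, then the question."""
    context = "\n".join(f"- {doc}" for doc in retrieved_docs)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

# In practice these passages would come from a vector/keyword query
# against a search index such as Azure AI Search.
docs = [
    "Azure AI Search basic tier now includes more vector quota.",
    "Vector index size increased 11-fold in supported regions.",
]
prompt = build_rag_prompt("How much did vector index size grow?", docs)
print(prompt)
```

The resulting prompt would then be sent as the user or system message in a chat-completion call; grounding answers in retrieved text is what lets these applications cite fresh, company-specific data instead of relying on the model's training set.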
This is a significant achievement for Microsoft, especially given that ChatGPT attracts 100 million weekly users and over two million developers build on OpenAI's API. That audience represents a tremendous opportunity for Azure AI Search.
Today's announcement adds to a series of updates Microsoft has shipped over the course of the generative AI boom. Over time, Azure AI Search has gained enhancements in speech, search, language, and security, as well as support for Private Endpoints and Managed Identities. Microsoft also remains committed to AI safety and reliability, exemplified by its recent launch of new tools to protect LLMs.