Microsoft Unveils Phi-2: A Compact Language Model That Surpasses Llama 2 and Mistral 7B in Performance

The momentum of generative AI developments is accelerating as we approach the end of 2023, defying the typical slow-down associated with the winter holiday season.

Today, Microsoft Research, the tech giant's research division, unveiled its Phi-2 small language model (SLM). This text-to-text AI is compact enough to run on laptops and mobile devices, as highlighted in a post on X.

Phi-2, with 2.7 billion parameters (the weights connecting its artificial neurons), delivers performance on par with larger models such as Meta's Llama 2-7B and Mistral 7B, both of which have roughly 7 billion parameters. Notably, Phi-2 also outperforms Google's new Gemini Nano 2, even though that model has roughly half a billion more parameters. Microsoft further reports that Phi-2 generates responses with less toxicity and bias than Llama 2.
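Because the model is small enough to fit on consumer hardware, trying it locally takes only a few lines of Python. The sketch below is illustrative, not Microsoft's official quickstart: it assumes the checkpoint is published on Hugging Face under the "microsoft/phi-2" repository ID and that a recent version of the transformers library is installed.

```python
# Minimal sketch: loading and prompting Phi-2 with Hugging Face transformers.
# The "microsoft/phi-2" repo ID and generation settings are assumptions for
# illustration; check the model card for the authoritative usage instructions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # assumed Hugging Face repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",        # half precision if the hardware supports it
    trust_remote_code=True,    # older transformers releases need the custom model code
)

prompt = "Explain why a ball thrown upward slows down as it rises."
inputs = tokenizer(prompt, return_tensors="pt")

# At half precision, 2.7B parameters occupy roughly 5-6 GB of memory,
# which is why inference is feasible on a laptop GPU or (slowly) on CPU.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that this is research-only tinkering: as discussed below, the model's license currently rules out commercial deployment.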

In a pointed comparison, Microsoft referenced Google's controversial demo video, in which the upcoming Gemini Ultra model purportedly solved complex physics problems and corrected a student's mistakes. Despite being far smaller than Gemini Ultra, Phi-2 answered the same physics prompts correctly and identified the errors in the student's work.

However, there is a significant limitation: Phi-2 is currently available only for research purposes under a custom Microsoft Research License, which explicitly prohibits commercial use. Businesses hoping to build products on the model will therefore have to wait.
