Microsoft Launches Smallest AI Model, Phi-3-mini


Microsoft this week launched a lightweight artificial intelligence model, offering a more cost-effective option for Azure customers

Microsoft has launched a lightweight artificial intelligence (AI) model designed to appeal to a wider customer base with more limited resources.

A blog post by Misha Bilenko, corporate VP at Microsoft GenAI, introduced Phi-3, which Bilenko said is “a family of open AI models developed by Microsoft.”

Bilenko noted that Phi-3 models “are the most capable and cost-effective small language models (SLMs) available, outperforming models of the same size and next size up across a variety of language, reasoning, coding, and math benchmarks.”

AI In Your Pocket


At the moment, large language models (LLMs) are the form of AI service most people are familiar with, handling complex tasks.

But the size of LLMs means they can require significant computing resources to operate.

Microsoft has therefore developed a series of small language models (SLMs) that offer many of the same capabilities found in LLMs, but are smaller in size and are trained on smaller amounts of data.

Microsoft is to release three SLMs, with the first being Phi-3-mini. Microsoft claims Phi-3-mini has 3.8 billion parameters and performs better than models twice its size.
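A back-of-the-envelope calculation shows why a 3.8-billion-parameter model is attractive for resource-limited customers. The sketch below uses common rules of thumb (2 bytes per parameter at 16-bit precision, half a byte at 4-bit quantisation) and counts only the weights, ignoring activations, KV cache and runtime overhead; the figures are illustrative, not Microsoft's own numbers.

```python
# Rough estimate of the memory needed just to hold model weights.
# Assumes 2 bytes/param for fp16 and 0.5 bytes/param for 4-bit
# quantisation -- common rules of thumb, not measured figures.
PARAMS = 3.8e9  # Phi-3-mini parameter count as stated by Microsoft

def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params * bytes_per_param / 1e9

fp16_gb = weight_memory_gb(PARAMS, 2.0)  # 16-bit floats
int4_gb = weight_memory_gb(PARAMS, 0.5)  # 4-bit quantised

print(f"fp16 weights: ~{fp16_gb:.1f} GB")
print(f"int4 weights: ~{int4_gb:.1f} GB")
```

At roughly 7.6 GB in fp16 and under 2 GB when quantised to 4 bits, the weights can plausibly fit in the memory of a laptop or phone, which is the point of an SLM; a frontier LLM with hundreds of billions of parameters cannot.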

In the coming weeks, additional models will be added to the Phi-3 family.

According to Bilenko, Phi-3-small and Phi-3-medium will be available in the Azure AI model catalogue and other model gardens shortly.

However, as of Tuesday this week, Phi-3-mini is available on Microsoft Azure AI Studio, the machine learning model platform Hugging Face, and the Ollama framework.

The SLM will also be available via Nvidia’s software tool, Nvidia Inference Microservices (NIM), and has been optimised for Nvidia’s graphics processing units (GPUs).

SLM performance

According to Microsoft, Phi-3-mini is available in two context-length variants, 4K and 128K tokens. It is the first model in its class to support a context window of up to 128K tokens, with little impact on quality.
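To give a feel for what those context windows mean in practice, the sketch below converts them to approximate text sizes. It assumes the "4K" and "128K" labels denote 4,096 and 131,072 tokens respectively, and uses the common rule of thumb of roughly 4 characters (about 0.75 words) per English token; actual ratios vary by tokenizer and text.

```python
# Illustrative sizing of the two context windows.
# ~4 chars / ~0.75 words per token is a rule of thumb, not exact.
CHARS_PER_TOKEN = 4
WORDS_PER_TOKEN = 0.75

def context_capacity(tokens: int) -> tuple[int, int]:
    """Return (approx. characters, approx. words) for a context window."""
    return tokens * CHARS_PER_TOKEN, int(tokens * WORDS_PER_TOKEN)

for window in (4_096, 131_072):  # the "4K" and "128K" variants
    chars, words = context_capacity(window)
    print(f"{window:>7} tokens = ~{chars:,} chars, ~{words:,} words")
```

On these assumptions, the 128K variant can take in on the order of 100,000 words, roughly a full-length book, in a single prompt.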

“Phi-3 models significantly outperform language models of the same and larger sizes on key benchmarks,” wrote Bilenko. “Phi-3-mini does better than models twice its size, and Phi-3-small and Phi-3-medium outperform much larger models, including GPT-3.5T.”

Bilenko also noted that Phi-3 models were developed in accordance with the Microsoft Responsible AI Standard, based on six principles: accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness.

Bilenko also wrote that thanks to their smaller size, Phi-3 models can be used in compute-limited inference environments. Phi-3-mini, in particular, can be used on-device, especially when further optimized with ONNX Runtime for cross-platform availability.

The release of Microsoft’s Phi-3-mini comes after the software giant last week invested $1.5 billion in UAE-based AI firm G42.

Microsoft has already invested a rumoured $13 billion in San Francisco-based OpenAI, and has also partnered with French startup Mistral AI to make its models available through the Azure cloud computing platform.