
Microsoft on Friday released its Phi-4 artificial intelligence (AI) model. The company's latest small language model (SLM) joins its open-source Phi family of foundational models. It arrives eight months after the release of Phi-3 and four months after the introduction of the Phi-3.5 series. The tech giant claims that the SLM is better at solving complex reasoning-based queries in areas such as mathematics, and says it also excels at conventional language processing.

Microsoft’s Phi-4 AI Model to Be Available via Hugging Face

So far, every Phi series has launched with a mini variant; this time, however, no mini model accompanies Phi-4. Microsoft, in a blog post, highlighted that Phi-4 is currently available on Azure AI Foundry under a Microsoft Research Licence Agreement (MSRLA). The company plans to make it available on Hugging Face next week as well, as sketched below.
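
Once the weights land on Hugging Face, loading the model should follow the usual Transformers pattern. The snippet below is a minimal sketch under stated assumptions: the repository id "microsoft/phi-4" is inferred from Microsoft's naming of earlier Phi releases and should be checked against the official model card, and the maths prompt is purely illustrative.

```python
# Hypothetical sketch: loading Phi-4 via Hugging Face Transformers.
# The repo id "microsoft/phi-4" is an assumption; confirm it on the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-4"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # place layers on available GPU/CPU automatically
)

# Illustrative reasoning-style prompt, in line with Phi-4's maths focus.
prompt = "If 3x + 7 = 22, what is x? Explain your reasoning step by step."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```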

The company also shared benchmark scores from its internal testing. Based on these, the new AI model significantly improves on the capabilities of the previous-generation model. The tech giant claimed that Phi-4 outperforms Gemini Pro 1.5, a much larger model, on math competition problem benchmarks. It also released detailed benchmark results in a technical report published on arXiv, the online preprint server.

On safety, Microsoft stated that Azure AI Foundry comes with a set of capabilities to help organisations measure, mitigate, and manage AI risks across the development lifecycle for traditional machine learning and generative AI applications. Additionally, enterprise users can apply Azure AI Content Safety features, such as prompt shields and groundedness detection, as content filters.


Developers can also add these safety capabilities to their applications via a single application programming interface (API). The platform can monitor applications for quality and safety, adversarial prompt attacks, and data integrity, and provide developers with real-time alerts. This will be available to those Phi users who access the model via Azure.
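
As a rough illustration of what calling such a safety API can look like, the sketch below uses the Azure AI Content Safety Python SDK (azure-ai-contentsafety) to screen a piece of model output. The endpoint and key are placeholders, and whether Phi-4 responses in Azure AI Foundry are routed through this exact client is an assumption; the article only says the safety features are exposed via a single API.

```python
# Hypothetical sketch: screening text with Azure AI Content Safety.
# Endpoint and key values are placeholders; resource names are assumptions.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential("<your-content-safety-key>"),      # placeholder
)

# Analyse a candidate model response for harmful-content categories.
result = client.analyze_text(AnalyzeTextOptions(text="Model output to screen..."))

# Each entry reports a harm category (e.g. Hate, Violence) and a severity score.
for category in result.categories_analysis:
    print(category.category, category.severity)
```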

Notably, smaller language models are often post-trained on synthetic data, allowing them to quickly gain more knowledge and higher efficiency. However, such post-training gains are not always consistent in real-world use cases.

