Everywhere Inference updates: new AI models and enhanced product documentation

This month, we’re rolling out new features and updates to enhance AI model accessibility, performance, and cost-efficiency for Everywhere Inference. From new model options to updated product documentation, here’s what’s new in February.

Expanding the model library

We’ve added several powerful models to Gcore Everywhere Inference, providing more options for AI inference and fine-tuning. These include three distilled variants of DeepSeek R1, a state-of-the-art open-weight reasoning model, optimized for a range of NLP tasks.

DeepSeek’s recent rise represents a major shift in AI accessibility and enterprise adoption. Learn more about DeepSeek’s rise and what it means for businesses in our dedicated blog. Or, explore what DeepSeek’s popularity means for Europe.

The following new models are available now in our model library:

  • QVQ-72B-Preview: An experimental 72-billion-parameter model from the Qwen team, designed for advanced visual reasoning and multimodal understanding.
  • DeepSeek-R1-Distill-Qwen-14B: A distilled version of DeepSeek R1, providing a balance between efficiency and performance for language processing tasks.
  • DeepSeek-R1-Distill-Qwen-32B: A more robust distilled model designed for enterprise-scale AI applications requiring high accuracy and inference speed.
  • DeepSeek-R1-Distill-Llama-70B: A version of DeepSeek R1 distilled into Llama 70B, offering significant efficiency gains while maintaining strong performance on complex NLP tasks.
  • Phi-3.5-MoE-instruct: A high-quality, reasoning-focused mixture-of-experts model with multilingual support and a 128K context length.
  • Phi-4: A 14-billion-parameter language model excelling in mathematics and advanced language processing.
  • Mistral-Small-24B-Instruct-2501: A 24-billion-parameter model optimized for low-latency AI tasks, performing competitively with larger models.

These additions give developers more flexibility in selecting the right models for their use cases, whether they require large-scale reasoning, multimodal capabilities, or optimized inference efficiency. The Gcore model library offers numerous popular models available at the click of a button, but you can also bring your own custom model just as easily.
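Once a model is deployed, you interact with it over a standard HTTP endpoint. The snippet below is a minimal sketch of querying a deployment, assuming it exposes an OpenAI-compatible chat completions API; the base URL, API key, and model identifier shown are placeholders, so substitute the values from your own deployment and check the docs for exact details.

```python
# Minimal sketch: querying a deployed model through an OpenAI-compatible
# chat completions endpoint. The base URL, API key, and model name below
# are placeholders -- substitute the values from your own deployment.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-inference-endpoint.gcore.dev/v1",  # placeholder URL
    api_key="YOUR_API_KEY",  # placeholder key
)

response = client.chat.completions.create(
    model="DeepSeek-R1-Distill-Qwen-14B",  # any model from the library
    messages=[
        {
            "role": "user",
            "content": "Summarize the benefits of model distillation in two sentences.",
        },
    ],
    max_tokens=256,
    temperature=0.7,
)

print(response.choices[0].message.content)
```

In a sketch like this, switching between models such as DeepSeek-R1-Distill-Qwen-14B and Mistral-Small-24B-Instruct-2501 is just a change to the model parameter, which makes comparing options for your workload straightforward.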

Everywhere Inference product documentation

To help you get the most out of Gcore Everywhere Inference, we’ve expanded our product documentation. Whether you’re deploying AI models, fine-tuning performance, or scaling inference workloads, our docs provide in-depth guidance, API references, and best practices for seamless AI deployment.
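As a rough illustration of what programmatic deployment can look like, here is a hypothetical sketch using Python’s requests library. The endpoint path, payload fields, and authorization scheme below are illustrative assumptions, not the actual API; the Everywhere Inference API reference in our documentation defines the real request shape.

```python
# Hypothetical sketch of deploying a model programmatically. The endpoint
# path, payload fields, and auth scheme are illustrative assumptions only --
# the real request shape is defined in the Everywhere Inference API reference.
import requests

API_BASE = "https://api.gcore.com"  # placeholder base URL
headers = {"Authorization": "APIKey YOUR_API_KEY"}  # placeholder auth scheme

payload = {
    "name": "my-deepseek-deployment",          # hypothetical field
    "model": "DeepSeek-R1-Distill-Qwen-14B",   # hypothetical field
    "flavor": "gpu-small",                     # hypothetical field
}

resp = requests.post(
    f"{API_BASE}/inference/v1/deployments",  # hypothetical endpoint path
    headers=headers,
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```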

Choose Gcore for intuitive, powerful AI deployment

With these updates, Gcore Everywhere Inference continues to provide the latest and best in AI inference. If you need speed, efficiency, and flexibility, get in touch. We’d love to explore how we can support and enhance your AI workloads.

Get a complimentary AI consultation
