
Mili Leitner Cohen

Content Marketing Lead, AI Products

Mili leads content marketing for the Gcore AI product team. She helps define how AI products are positioned, launched, and communicated globally. With more than a decade of experience in content and growth strategy, she makes complex innovation understandable and engaging for real audiences.

Gcore Everywhere AI evolves to full-lifecycle management with Slurm, Jupyter, and token-based inference integrations

AI adoption has a fragmentation problem. Organizations routinely stitch together separate tools for development, training, and serving, each with its own infrastructure, access controls, and operational overhead. The result is a patchwork…

March 23, 2026 3 min read
From CDN to AI powerhouse: Gcore at 12

Three partnership announcements. One week. Twelve years in the making. Last week, as Gcore turned 12, we launched a new feature in partnership with NVIDIA and announced that Microsoft selected Gcore to join its elite group of global CDN partners…

March 5, 2026 4 min read
New AI inference models on Application Catalog: translation, agents, and flagship reasoning

We’ve expanded our AI inference Application Catalog with three new state-of-the-art models, covering massively multilingual translation, efficient agentic workflows, and high-end reasoning. All models are live today via Everywhere Inference…

December 7, 2025 2 min read
New AI inference models available now on Gcore

We’ve expanded our Application Catalog with a new set of high-performance models across embeddings, text-to-speech, multimodal LLMs, and safety. All models are live today via Everywhere Inference and Everywhere AI, and are ready to deploy…

November 17, 2025 2 min read
Introducing Gcore Everywhere AI: 3-click AI training and inference for any environment

For enterprises, telcos, and CSPs, AI adoption sounds promising…until you start measuring impact. Most projects stall or even fail before ROI starts to appear. ML engineers lose momentum setting up clusters. Infrastructure teams battle to…

November 3, 2025 3 min read

Related articles

Gcore Radar Q3–Q4 2025: three insights into an accelerating threat landscape

Cyberattacks are not just growing—they're accelerating at an alarming pace. The second half of 2025 marked a dramatic escalation in both the frequency and scale of DDoS attacks, with record-breaking volumes and increasingly sophisticated…

Four e-commerce takeaways from the Berlin Expo

E-commerce is moving fast, and the conversations at E-commerce Berlin Expo reflected just how much has changed in the last few years. Now that the event is over, here are four things I took away from speaking with key players across the industry…

Introducing faster, lower-cost LLM inference with NVIDIA Dynamo

Imagine if you could click a button and suddenly your GPUs increase their throughput by 6x. Or reduce latency by 2x. Or route inference requests seamlessly across different GPU types. That's the experience we're bringing to our inference customers…

Achieving 3-Second Latency in Streaming: A Deep Dive into LL-HLS and LL-DASH Optimization with CDN

HLS/DASH streaming via CDN with ~3-second glass-to-glass latency. LL-HLS and LL-DASH are well-documented standards, but delivering them reliably at scale is far from trivial. The challenge is not in understanding the protocols; it is in engineering…