How We Taught Our CDN to Recognize Kittens

Imagine a content delivery network (CDN) that not only swiftly caches and manages data but also interprets it intelligently. That's FastEdge, our groundbreaking edge-computing solution. FastEdge represents a leap in our technological prowess, harnessing our global network more efficiently than ever before. With this new solution, we were able to perform image recognition on our CDN, beginning with the fundamental pillar of the modern internet: kittens. Read on to discover how and why we taught our CDN to recognize kittens with FastEdge.

FastEdge: The Edge of Innovation

FastEdge is a low-latency, high-performance serverless platform for building responsive and personalized applications on a WebAssembly (Wasm) runtime. FastEdge resides directly on our edge servers, and this integration allows you to distribute custom code globally across more than 160 edge locations. The result is a near-instantaneous response to user inputs and exceptional application responsiveness, qualities that are highly valued when building modern distributed apps.

In developing FastEdge, we focused primarily on web optimization and custom network logic applications. However, the growing importance of AI inspired us to explore a new use case. Driven by our own curiosity, we decided to run AI on FastEdge, a significant milestone in the search for innovative computing solutions.

Why Run AI on the Edge?

Imagine you’re using an app like ChatGPT. Initially, its AI model gets fed a vast quantity of data: it learns to recognize patterns, understand language, or make predictions based on this data. This phase is called model training; it requires substantial computational resources and is typically done on high-performance servers. A trained AI model can then be deployed on other computer systems.

When you take a trained model and ask it a new question, it quickly gives you an answer. This process is called AI inference. The faster inference happens, the better your experience. And the closer that inference servers are located to end users, the faster inference happens.

With FastEdge operating on our CDN's 160+ points of presence, we realized that it already has much of the infrastructure required for AI inference in place: a flexible and versatile network with global-scale load balancing built upon a modern software stack.

Challenges

As we ventured into integrating AI capabilities at the edge using FastEdge, we uncovered a fundamental challenge: while Wasm shines in traditional web tasks due to its speed, compactness, and security, it isn't optimized for the demands of running AI models.

  • Wasm excels in environments where tasks are lightweight and execution times are short. AI inference, in contrast, often demands extensive computational power and typically relies on GPUs for faster processing. CPUs alone (the hardware on which FastEdge runs) often struggle to meet these demands.
  • Wasm typically imposes tighter memory limits than dedicated AI solutions. AI models can easily exceed these limits, creating performance issues if an inappropriate model is chosen.

These AI-specific constraints surpass what Wasm was designed for and what a typical CPU-equipped server can handle. So, running AI on FastEdge requires additional solutions to bridge the gap between Wasm's strengths and the unique requirements of AI deployment at the edge. It also demands lightweight AI models specifically designed for resource-constrained environments.

Solutions

Solution 1: Adding the Missing Component

To bridge the optimization gap, we turned to OpenVINO, an Intel toolkit that optimizes and deploys pre-trained AI models for efficient inference. OpenVINO is particularly valuable for its compatibility with a wide range of AI model frameworks and its capacity to efficiently utilize resources, including facilitating the use of CPU resources in scenarios where GPU acceleration is not feasible.

But simply optimizing models wasn't enough. We also needed a way to deploy them seamlessly across different hardware configurations. Enter WASI-nn (the WebAssembly System Interface for neural networks), an emerging standard specifically designed for connecting AI models with diverse hardware backends. It acts as a bridge, allowing optimized OpenVINO models to run on edge devices regardless of their specific CPU, GPU, or specialized accelerator setup.

[Image: FastEdge infrastructure for AI inference, showing how Wasm interacts with OpenVINO via WASI-nn]

This relationship between OpenVINO's optimization power and WASI-nn's platform-agnostic interface paved the way for high-performance AI inference on the edge, transforming FastEdge's web-oriented environment into an almost-AI-ready platform. Just one more step was required.
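
To make the bridge more concrete, here is a minimal sketch of what the Wasm side of this setup can look like, using the low-level bindings of the open-source wasi-nn Rust crate with its OpenVINO backend. The function name, tensor shape, output length, and error handling are illustrative assumptions, not FastEdge's actual SDK surface.

```rust
// Minimal sketch: running an OpenVINO model via WASI-nn from inside a Wasm
// module, using the low-level bindings of the `wasi-nn` crate. The tensor
// shape and output length are placeholders that depend on the chosen model.
fn infer(model_xml: &[u8], model_weights: &[u8], input: &[f32]) -> Vec<f32> {
    unsafe {
        // Reinterpret the f32 input tensor as raw bytes for the WASI-nn ABI.
        let input_bytes =
            std::slice::from_raw_parts(input.as_ptr() as *const u8, input.len() * 4);

        // Load the OpenVINO IR (topology + weights) and target the CPU,
        // the hardware FastEdge runs on.
        let graph = wasi_nn::load(
            &[model_xml, model_weights],
            wasi_nn::GRAPH_ENCODING_OPENVINO,
            wasi_nn::EXECUTION_TARGET_CPU,
        )
        .unwrap();
        let ctx = wasi_nn::init_execution_context(graph).unwrap();

        // Describe the input tensor; a 224x224 RGB image classifier, for
        // example, expects a 1x3x224x224 f32 tensor in NCHW layout.
        let tensor = wasi_nn::Tensor {
            dimensions: &[1, 3, 224, 224],
            r#type: wasi_nn::TENSOR_TYPE_F32,
            data: input_bytes,
        };
        wasi_nn::set_input(ctx, 0, tensor).unwrap();

        // Run inference on the host and read back one score per class.
        wasi_nn::compute(ctx).unwrap();
        let mut output = vec![0f32; 1001];
        wasi_nn::get_output(
            ctx,
            0,
            output.as_mut_ptr() as *mut u8,
            (output.len() * std::mem::size_of::<f32>()) as u32,
        )
        .unwrap();
        output
    }
}
```

The important detail is that the Wasm module only ever talks to the WASI-nn interface; how the OpenVINO graph is actually executed on the underlying CPU is entirely the host's concern.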

Solution 2: Finding the Right AI Model

Our experiment demanded a lean and mean AI model. We looked for a model that's lightweight from a complexity point of view, small from a storage perspective, and, of course, pre-trained to identify kittens.

Enter MobileNet v2, a champion model from Hugging Face with a proven track record in image recognition, and small enough not to slow down our edge servers.
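
Part of what makes MobileNet v2 edge-friendly is its small, fixed-size input. As a rough illustration (not our production code, and assuming the widely used image crate), here is one way a Wasm module might turn an uploaded picture into the 1x3x224x224 f32 tensor the model expects; the exact normalization depends on how the model was exported to OpenVINO.

```rust
// Sketch: preprocessing an uploaded image into the 1x3x224x224 f32 tensor
// (NCHW layout) that MobileNet v2 expects. Uses the `image` crate; the
// normalization step is a simplification and depends on the model export.
use image::imageops::FilterType;

fn preprocess(raw: &[u8]) -> Result<Vec<f32>, image::ImageError> {
    let img = image::load_from_memory(raw)?
        .resize_exact(224, 224, FilterType::Triangle)
        .to_rgb8();

    // Fill the tensor channel by channel: all red values, then green, then blue.
    let mut tensor = vec![0f32; 3 * 224 * 224];
    for (x, y, pixel) in img.enumerate_pixels() {
        for c in 0..3 {
            // Scale pixel values to [0, 1]; some exports additionally expect
            // ImageNet mean/std normalization on top of this.
            tensor[c * 224 * 224 + (y as usize) * 224 + x as usize] =
                pixel[c] as f32 / 255.0;
        }
    }
    Ok(tensor)
}
```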

Putting It All Together

With our model and backend in place, it was time to move from theory to practice. We were ready to see how AI on FastEdge performed on the ultimate test case.

Our DevOps team meticulously compiled and deployed the pre-trained MobileNet v2 model files into our production environment. To do this, the model file had to be specified as an environment variable in the FastEdge application's configuration. (Looking ahead, we aim to streamline this process, allowing users to select their model directly as a resource within the application setup.)

Kitten Recognition in Action

Having deployed everything necessary at our edge servers, we tested how AI performs on FastEdge. See the results for yourself below, or drop your favorite internet-celebrity cats into our FastEdge demo page.

[Image: Several feline images recognized by FastEdge. FastEdge recognizes kittens, fast, using the Gcore CDN.]

FastEdge demonstrates remarkable proficiency in kitten recognition. Despite being constrained to CPU processing, the system efficiently analyzes an image and yields results in decent time. The primary factors influencing the processing speed are the image size, which mainly affects the upload duration, and the user's location.

The MobileNet v2 model maintains a reasonable degree of accuracy in detecting felines (as opposed to confusing them with other animals). However, for some reason, this AI model exhibits a peculiar tendency to classify red cats as Chihuahuas or Pomeranians.
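
For the curious, turning raw scores into labels like "tabby" or "Pomeranian" is the simplest part of the pipeline: take the highest-scoring classes and look them up in an ImageNet label list. A generic sketch, assuming the label list is loaded elsewhere in the application:

```rust
// Sketch: map raw class scores to the five most likely labels. `labels` is
// assumed to be an ImageNet class list shipped with the application.
fn top5<'a>(scores: &[f32], labels: &'a [String]) -> Vec<(&'a str, f32)> {
    let mut indexed: Vec<(usize, f32)> = scores.iter().copied().enumerate().collect();
    // Sort descending by score.
    indexed.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    indexed
        .into_iter()
        .take(5)
        .map(|(i, score)| (labels[i].as_str(), score))
        .collect()
}
```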

[Image: A red kitten in a cardboard box, which the model also recognized as a hamster, broccoli, and a Pomeranian]

Despite hardware constraints, this fun experiment has shown that CPU-only configurations can perform AI inference tasks effectively, even though GPUs are the most popular and common choice for inference.

While we may explore GPU possibilities to improve performance in the future, our current priority for FastEdge is CPU-based processing. This is great news for FastEdge users, who are able to use the platform for a wide range of low-latency applicationsā€”even those beyond our initial intentions!

Beyond Kittens

Our journey with FastEdge, illustrated through kitten recognition, barely scratches the surface of its full potential. FastEdge paired with OpenVINO is versatile enough to be applied to a wide range of applications beyond just recognizing cats. It can identify various entities, objects, and patterns, showcasing its capability far beyond the realm of feline imagery:

[Image: Some photos of Corgis classified as Pomeranians. Gcore dog parents are keen to see what AI makes of their pets, with Corgi owners occasionally disappointed by the results…]

Of course, not every AI model is tailored to kittens; other use cases include word prediction and speech-to-text generation. As AI models evolve and diversify, so too will FastEdge's capabilities, paving the way for innovative applications across multiple sectors.

Conclusion: Playground for Innovation

We always envisioned FastEdge as our playground for innovation: a space where its users can build the future of technology. This vision encouraged us to combine edge computing with AI capabilities to create solutions previously thought to be unattainable. Kitten recognition began as a fun experiment, but we hope it will inspire FastEdge users to push technological boundaries and create powerful, intuitive, and user-focused applications.

FastEdge is a flexible, powerful, and versatile platform. It underscores our commitment to keeping pace with technological advancements and leading the charge. Learn more about FastEdge in our announcement, in our Product Documentation, and on the product page.
