Inference AI is a specialized form of artificial intelligence that applies trained models to new data to make real-time decisions or predictions. When offered “as a service,” inference AI is cloud based, giving businesses real-time AI decision-making capabilities without the need for in-house AI hardware and expertise. Outsourcing inference workloads to cloud services saves businesses the costs of building and maintaining on-premises infrastructure while letting them benefit from the latest advancements in AI technology. Let’s delve into the complexities of deploying an inference AI model, explore the journey from model training to deployment, and discover Gcore’s offerings.
AI Model Training and Inference
In the world of AI, there are two key operations: training and inference. Regular AI encompasses both of these tasks, learning from data and then making predictions or decisions based on that data. By contrast, inference AI focuses solely on the inference phase. After a model has been trained on a dataset, inference AI takes over to apply this model to new data to make immediate decisions or predictions.
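The split can be illustrated with a toy one-parameter linear model. This is a minimal sketch for intuition only, not how production systems are built: real workloads use frameworks such as PyTorch or TensorFlow, and the model here is invented for illustration.

```python
# Toy illustration of the training/inference split.

def train(xs, ys):
    """Training phase: learn weight w for y ≈ w * x by least squares.
    Runs once, offline, on historical data."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def infer(w, x):
    """Inference phase: apply the already-trained weight to new data.
    Runs repeatedly, in real time, on fresh inputs."""
    return w * x

# Training happens ahead of time.
w = train([1, 2, 3, 4], [2, 4, 6, 8])   # learns w = 2.0

# Inference AI handles only this second phase.
print(infer(w, 10))  # → 20.0
```

An inference service ships only the second function (plus the learned weights); the expensive training phase stays out of the serving path.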
This specialization makes inference AI invaluable in time-sensitive applications, such as autonomous vehicles and real-time fraud detection, where quick and accurate decisions are crucial. For self-driving cars, this service can swiftly analyze sensor data to make immediate driving decisions, minimizing latency and increasing safety. In real-time fraud detection, inference AI can instantaneously evaluate transactional data against historical patterns to flag or block suspicious activities.
The Need for Streamlined AI Production Management
Managing AI production involves navigating a complex matrix of interconnected decisions and adjustments. From data center location to financial budgeting, each decision carries a ripple effect. In our experience at Gcore, we see that this field is still defining its rules; the road from model training to deployment is more of a maze than a straight path. In this section, we’ll review the key components that every AI production manager must carefully consider to optimize performance and efficiency.
Location and latency should be your first consideration in AI production. Choose the wrong data center location, and you’re setting yourself up for latency issues that can seriously degrade user experience. For example, if you’re operating in the EU but your data center is in the United States, the transatlantic data travel times can create noticeable delays—a non-starter for inference AI.
Resource management demands real-time adaptability. Elements like CPUs, memory and specialized hardware — GPUs or TPUs — require constant tuning based on up-to-the-minute performance metrics. As you switch from development to full-scale production, dynamic resource management becomes not a luxury but a necessity, operating on a 24/7 cycle.
Financial planning is tightly linked to operational efficiency. Accurate budget forecasts are crucial for long-term sustainability, particularly given the volatility of computational demands in response to user activity.
Unlike the more mature landscape of software development, AI production management lacks a standardized playbook. This means you need to rely on bespoke expertise and be prepared for a higher error rate. It’s a field propelled by rapid innovation, and trial and error. In this sense, the sector is still in its adolescent phase — reckless, exciting and still figuring out its standards.
How to Deploy an Inference AI Model
Now that we understand the key components of AI production management, let’s walk through a step-by-step guide for deploying an AI inference model, focusing on the integration of various tools and resources. The aim is to build an environment that ensures swift, efficient deployment and scaling. Here are some tools that will be essential for success:
- Docker: An industry standard for containerization, aiding in the smooth deployment of your model.
- Whisper: OpenAI’s open source speech-to-text model, which serves as the foundation of our service.
- Simple Server Framework (SSF): This Graphcore tool facilitates the building and packaging (containerizing) of applications for serving.
- Harbor: An open source registry for storing artifacts such as Docker images, instrumental in our setup. Use the official docs to get set up.
Here’s what the pipeline looks like:
- Model: For this guide, we use a pre-trained model from Hugging Face. Training the model is outside the scope of this article.
- Environment: We have a designated cluster for model building. All commands will be executed via SSH.
Step 1: Set Up a Virtual Environment
Create and activate a virtual environment:
virtualenv .venv --prompt whisper
source .venv/bin/activate
Step 2: Install Required Packages
pip install https://github.com/graphcore/simple-server-framework/archive/refs/tags/v1.0.0.tar.gz
Install additional plugins for Docker:
wget https://github.com/docker/buildx/releases/download/v0.11.2/buildx-v0.11.2.linux-amd64
mkdir -p ~/.docker/cli-plugins
mv buildx-v0.11.2.linux-amd64 ~/.docker/cli-plugins/docker-buildx
chmod u+x ~/.docker/cli-plugins/docker-buildx
Step 3: Clone the Codebase
Clone the Gcore repository that contains all the necessary files:
git clone https://github.com/G-Core/ai-code-examples.git
Change the branch:
cd ai-code-examples && git checkout whisper-lux-small-ssf
Two key files here are `ssf_config.yaml` and `whisper_ssf_app.py`.
`ssf_config.yaml` is crucial for configuring the package that you’ll build. It contains fields specifying the name of the model, license and dependencies. It also outlines the inputs and outputs, detailing the endpoints and types of fields. For instance, for the Whisper model, the input is a temporary file (TempFile) and the output is a string (String). This information sets the framework for how your model will interact with users.
Example for Whisper:
endpoints:

- id: asr
  version: 1
  desc: Simple application interface for Whisper
  custom: ~

  inputs:

  - id: file
    type: TempFile
    desc: Audio description text prompt

  outputs:

  - id: result
    type: String
    desc: Transcription of the text
SSF provides support for various data types. Detailed information can be found in its documentation.
`whisper_ssf_app.py` acts as a wrapper around your Whisper model, making it compatible with the Simple Server Framework (SSF). The script contains several essential methods:
- `build`: This is where the model’s computational graph is constructed. It must run on a host with an IPU.
- `startup`: Manages preliminary tasks before the model can begin serving user requests.
- `request`: This is the heart of the system, responsible for processing user requests.
- `shutdown`: Ensures graceful termination of the model, like completing ongoing requests.
- `is_healthy`: Exposes a health check, allowing the model to run both as a standalone Docker container and as part of larger, more complex systems like Kubernetes.
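The lifecycle above can be sketched schematically in plain Python. This is an illustration of how the five methods fit together, not the real SSF interface: the actual base class, method signatures, and graph-compilation calls in `whisper_ssf_app.py` differ, and the class below is invented for this sketch.

```python
# Schematic sketch of the SSF-style wrapper lifecycle (no real SSF
# imports; names and signatures are simplified for illustration).

class WhisperApp:
    def __init__(self):
        self.graph = None      # compiled computational graph
        self.running = False   # serving state

    def build(self, sample_input):
        # Construct the computational graph. On IPUs this is the
        # expensive step, so it is driven by predefined sample data
        # (e.g. bootstrap.mp3) instead of the first real user request.
        self.graph = f"compiled-graph({sample_input})"

    def startup(self):
        # Preliminary tasks before the model can serve user requests.
        self.running = True

    def request(self, audio):
        # The heart of the system: process one user request.
        return f"transcription of {audio}"

    def is_healthy(self):
        # Health probe consumed by Docker or Kubernetes liveness checks.
        return self.running and self.graph is not None

    def shutdown(self):
        # Graceful termination, e.g. finishing in-flight requests.
        self.running = False

app = WhisperApp()
app.build("bootstrap.mp3")   # warm up before accepting traffic
app.startup()
print(app.request("user.mp3"))
```

The key design point the sketch preserves: `build` runs once with known data before `startup`, so no real user ever pays the graph-compilation cost.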
Within the build method, the function `compile_or_load_model_exe` is invoked. This is pivotal when constructing a model’s computational graph on IPUs. Here’s the catch: Creating this graph requires an initial user request as input. While you could use the first real user request for this, keep in mind that graph-building could consume 1 to 2 minutes, possibly more. Given today’s user expectations for speed, this delay could be a deal-breaker. To navigate this, the build method is designed to accept our predefined data as the first request for constructing the graph. In this setup, we use `bootstrap.mp3` to mimic that inaugural request.
Step 4: Build and Publish the Container
Build and publish the container, specifying your own Docker registry address and credentials:
gc-ssf --config ssf_config.yaml build package publish \
  --package-tag harbortest.cloud.gcorelabs.com/whisper/mkhl \
  --docker-username gitlab \
  --docker-password XXXXXXXXXX \
  --container-server harbortest.cloud.gcorelabs.com
The resulting container holds all necessary components: the model, a FastAPI wrapper, and the bootstrap.mp3 for initial warmup. It will be pushed to the Harbor registry.
Step 5: Deploy to Edge Node
For deployment on the edge node, the following command is used:
gc-ssf --stdout-log-level DEBUG deploy \
  --config ssf_config.yaml \
  --deploy-platform Gcore \
  --port 8100 \
  --deploy-gcore-target-address ai-inference-cluster-1 \
  --deploy-gcore-target-username ubuntu \
  --docker-username gitlab \
  --docker-password XXXXXXXXXXX \
  --package-tag harbortest.cloud.gcorelabs.com/whisper/mkhl:latest \
  --deploy-package \
  --container-server harbortest.cloud.gcorelabs.com
`gc-ssf deploy` uses SSH to run commands on the target host, so you’ll need to set up SSH key-based access between the nodes.
By following this pipeline, you establish a robust framework for deploying your AI models, ensuring they are not just efficient but also easily scalable and maintainable.
Inferring a More Intelligent Future
Inference AI’s growing role isn’t limited to tech giants; it’s vital for any organization aiming for agility and competitiveness. Investing in this technology aligns a business with a scalable, evolving solution to the data deluge. Inference AI as a service is poised to become an indispensable business tool because it abstracts away AI’s technical complexities, offering a scalable, streamlined way to sift through mountains of data and extract meaningful, actionable insights.
How Gcore Uses Inference AI
Despite the surge in AI adoption, we recognize there’s still a gap in the market for specialized, out-of-the-box AI clusters that combine power with ease of deployment. Gcore’s infrastructure is engineered to deliver the low-latency services businesses need to go global faster. This solves one of the most significant challenges in the machine learning landscape: the transition from model development to scalable deployment. We use Graphcore’s Simple Server Framework to create an environment that’s capable not only of running machine learning models, but also of improving them continuously through inference AI.
Inference AI as a service can transform the way businesses operate, allowing them to make real-time decisions and predictions based on trained models. This cloud-based AI service streamlines the process of managing AI production, optimizing performance, and efficiently deploying AI models. It’s a tool with exciting prospects for any organization aiming to enhance its agility and competitiveness.
Gcore’s powerful, easy-to-deploy AI clusters provide the low latency and high performance required for effective inference AI as a service. With the use of Graphcore’s Simple Server Framework, Gcore creates an environment capable of running machine learning models and improving them continuously through inference AI. For a deeper understanding of how Gcore is shaping the AI ecosystem, explore our AI infrastructure documentation.