To ensure that only authenticated clients can access your AI models, deploy an inference instance with authorization enabled.
Step 1. Enable authorization
When deploying an AI model, set the auth_enabled option to true. An API Key will then be automatically generated and linked to the deployment.
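As a sketch, assuming the deployment is created through a JSON REST call (the endpoint path, payload field names, and token variable here are illustrative placeholders, not the authoritative Gcore API schema), enabling authorization might look like:

```python
import requests

API_TOKEN = "your-gcore-api-token"  # placeholder account token

# Hypothetical payload shape -- check the Gcore API reference for the
# exact deployment schema.
payload = {
    "name": "my-llm-deployment",
    "auth_enabled": True,  # an API Key is generated and linked on creation
}

# Build the request without sending it, so the sketch stays self-contained.
req = requests.Request(
    "POST",
    "https://api.gcore.com/inference/deployments",  # illustrative URL
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
).prepare()

print(req.method, req.url)
```

Sending it with `requests.Session().send(req)` would create the deployment; the request is only prepared here so the example runs without a real account token.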
Step 2. Retrieve the API key
Once the deployment is created with authentication enabled, you can retrieve the API Key via the designated API endpoint.
Info: The API Key is only available through this endpoint. Store it securely.
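A minimal sketch of the retrieval call, assuming a GET request against the deployment resource (the path suffix and identifiers below are placeholders; the real designated endpoint is in the Gcore API reference):

```python
import requests

API_TOKEN = "your-gcore-api-token"    # placeholder account token
DEPLOYMENT_ID = "your-deployment-id"  # placeholder

# Illustrative path -- substitute the documented API-key endpoint.
req = requests.Request(
    "GET",
    f"https://api.gcore.com/inference/deployments/{DEPLOYMENT_ID}/apikey",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
).prepare()

# Sending this request returns the API Key. Since the key is only
# retrievable here, store the response value in a secrets manager
# rather than in source code.
print(req.method, req.url)
```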
Step 3. Use the API key for authorization
Once you have retrieved the API Key, include it in your API requests using the X-API-Key header.
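At the raw-HTTP level this is just one extra header. The sketch below prepares such a request (the deployment URL, model name, and OpenAI-style chat body are illustrative assumptions; only the X-API-Key header itself comes from this guide):

```python
import requests

API_KEY = "your-deployment-api-key"  # placeholder; retrieved in Step 2

req = requests.Request(
    "POST",
    "https://example-deployment.gcore.dev/v1/chat/completions",  # illustrative
    headers={"X-API-Key": API_KEY},  # required for authorized deployments
    json={
        "model": "my-model",  # placeholder model name
        "messages": [{"role": "user", "content": "Hello"}],
    },
).prepare()

print(req.headers["X-API-Key"])
```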
Example using OpenAI python client library
Here’s an example demonstrating how to use the API Key for authorization.
Info: For Gcore deployments with authorization enabled, the X-API-Key header is mandatory in all API requests.