Overview
IAM
- Account
- Users
- API Tokens
CDN
- IP addresses list
- CDN service
- Origins
- CDN resources
- CDN activity logs
- Log forwarding
- Log viewer
- Logs uploader
- Tools
- Rules
- Rule templates
- Purge history
- SSL certificates
- CA certificates
- CDN Statistics
- Advanced analytics
- Origin shielding
- Let's Encrypt certificates
Managed DNS
- Analyze
- Locations
- Lookup
- Metrics
- Pickers
- DNS Clients
- Zones
- DNSSEC
- RRsets
Cloud
- Bare Metal
- Container as a Service
- Cost Reports
- DDoS Protection
- File Shares
- Firewalls
- Floating IPs
- Function as a Service
- GPU Cloud
- IP Ranges
- Images
- Inference Instances
- POST Deprecated. Preview inference deployment price
- GET Deprecated. List API Keys
- POST Deprecated. Create API Key
- GET Deprecated. Get API Key
- DEL Deprecated. Delete API Key
- PATCH Deprecated. Update API Key
- GET Deprecated. Get Capacity for regions
- GET Deprecated. List Inference Instances
- POST Deprecated. Create Inference Instance
- GET Deprecated. Get Inference Instance
- PUT Deprecated. Update Inference Instance
- DEL Deprecated. Delete Inference Instance
- GET Deprecated. Get Inference Instance Logs
- GET Deprecated. Get Inference Instance Logs by Region
- POST Deprecated. Start Inference Instance
- POST Deprecated. Stop Inference Instance
- GET Deprecated. List inference instance flavors
- GET Deprecated. Get inference instance flavor Details
- GET List ML Model Catalog (deprecated)
- GET Get ML Model Catalog Details
- GET Deprecated. List Registries
- POST Deprecated. Create Registry
- GET Deprecated. Get Registry
- DEL Deprecated. Delete Registry
- PATCH Deprecated. Update Registry
- POST Preview inference deployment price
- GET Get inference capacity by region
- GET List inference flavors
- GET Get inference flavor
- GET List models from catalog
- GET Get model from catalog
- GET List inference deployments
- POST Create inference deployment
- GET Get inference deployment
- DEL Delete inference deployment
- PATCH Update inference deployment
- GET Get inference deployment API key
- GET Get inference deployment logs
- POST Start inference deployment
- POST Stop inference deployment
- GET List inference registry credentials
- POST Create inference registry credential
- GET Get inference registry credential
- PUT Update inference registry credential
- DEL Delete inference registry credential
- GET List Secrets for Inference
- POST Create Inference Secret
- GET Get Inference Secret
- PUT Update Inference Secret
- DEL Delete Inference Secret
- Instances
- Load Balancers
- Logging
- Managed Kubernetes
- Managed PostgreSQL
- Networks
- Placement Groups
- Projects
- Quotas
- Regions
- Registry
- Reservations
- Reserved IPs
- Routers
- SSH Keys
- Secrets
- Service Endpoints
- Snapshot Schedules
- Snapshots
- Tasks
- User Actions
- User Role Assignments
- Volumes
Security
- Event Logs
- BGP announces
- Security Templates
- Profiles
FastEdge
- Apps
- Binaries
- FastEdge Clients
- FastEdge Secrets
- Stats
- FastEdge Templates
WAAP
- WAAP Service
- Domains
- Policies
- Analytics
- Custom Page Sets
- Custom Rules
- Firewall Rules
- Advanced Rules
- Tags
- Network Organizations
- API Discovery
- IP Spotlight
- Security Insights
Web Security
- Billing Statistics
- Resources
Video Streaming
- Streams
- Overlays
- Broadcasts
- Restreams
- Videos
- Subtitles
- Directories
- Playlists
- QualitySets
- Players
- AI
- Streaming Statistics
Object Storage
- Notifications
- Key
- Location
- Storage
- Storage Statistics
Inference Instances
Get inference deployment
GET /cloud/v3/inference/{project_id}/deployments/{deployment_name}
import os

from gcore import Gcore

client = Gcore(
    api_key=os.environ.get("GCORE_API_KEY"),  # This is the default and can be omitted
)
inference = client.cloud.inference.deployments.get(
    deployment_name="my-instance",
    project_id=1,
)
print(inference.project_id)
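
If the Python SDK is not available, the same endpoint can be called directly over HTTP. The sketch below is illustrative only: the https://api.gcore.com base URL and the "APIKey" Authorization scheme are assumptions, not stated on this page.

import os

import requests

# Illustrative direct call to the endpoint shown above.
# The base URL and the "APIKey" Authorization scheme are assumptions.
BASE_URL = "https://api.gcore.com"
project_id = 1
deployment_name = "my-instance"

response = requests.get(
    f"{BASE_URL}/cloud/v3/inference/{project_id}/deployments/{deployment_name}",
    headers={"Authorization": f"APIKey {os.environ['GCORE_API_KEY']}"},
    timeout=30,
)
response.raise_for_status()
deployment = response.json()
print(deployment["status"], deployment["address"])

The JSON below is the example 200 response body for this endpoint.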
{
  "address": "https://example.com",
  "auth_enabled": false,
  "command": [
    "nginx",
    "-g",
    "daemon off;"
  ],
  "containers": [
    {
      "deploy_status": {
        "ready": 1,
        "total": 3
      },
      "region_id": 1,
      "scale": {
        "cooldown_period": 60,
        "max": 3,
        "min": 1,
        "triggers": {
          "cpu": {
            "threshold": 80
          },
          "memory": {
            "threshold": 70
          }
        }
      }
    }
  ],
  "created_at": "2023-08-22T11:21:00Z",
  "credentials_name": "dockerhub",
  "description": "My first instance",
  "envs": {
    "DEBUG_MODE": "False",
    "KEY": "12345"
  },
  "flavor_name": "inference-16vcpu-232gib-1xh100-80gb",
  "image": "nginx:latest",
  "ingress_opts": {
    "disable_response_buffering": true
  },
  "listening_port": 8080,
  "logging": {
    "destination_region_id": 1,
    "enabled": true,
    "retention_policy": {
      "period": 45
    },
    "topic_name": "my-log-name"
  },
  "name": "my-instance",
  "probes": {
    "liveness_probe": {
      "enabled": true,
      "probe": {
        "exec": {
          "command": [
            "ls",
            "-l"
          ]
        },
        "failure_threshold": 3,
        "http_get": {
          "headers": {
            "Authorization": "Bearer token 123"
          },
          "host": "127.0.0.1",
          "path": "/healthz",
          "port": 80,
          "schema": "HTTP"
        },
        "initial_delay_seconds": 0,
        "period_seconds": 5,
        "success_threshold": 1,
        "tcp_socket": {
          "port": 80
        },
        "timeout_seconds": 1
      }
    },
    "readiness_probe": {
      "enabled": true,
      "probe": {
        "exec": {
          "command": [
            "ls",
            "-l"
          ]
        },
        "failure_threshold": 3,
        "http_get": {
          "headers": {
            "Authorization": "Bearer token 123"
          },
          "host": "127.0.0.1",
          "path": "/healthz",
          "port": 80,
          "schema": "HTTP"
        },
        "initial_delay_seconds": 0,
        "period_seconds": 5,
        "success_threshold": 1,
        "tcp_socket": {
          "port": 80
        },
        "timeout_seconds": 1
      }
    },
    "startup_probe": {
      "enabled": true,
      "probe": {
        "exec": {
          "command": [
            "ls",
            "-l"
          ]
        },
        "failure_threshold": 3,
        "http_get": {
          "headers": {
            "Authorization": "Bearer token 123"
          },
          "host": "127.0.0.1",
          "path": "/healthz",
          "port": 80,
          "schema": "HTTP"
        },
        "initial_delay_seconds": 0,
        "period_seconds": 5,
        "success_threshold": 1,
        "tcp_socket": {
          "port": 80
        },
        "timeout_seconds": 1
      }
    }
  },
  "project_id": 1,
  "status": "ACTIVE",
  "timeout": 120
}
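
As a small, non-authoritative sketch of how a client might read this body, the following function summarizes per-region readiness and autoscaling settings. It treats the parsed body as a plain dict and uses only field names that appear in the example above.

# Sketch: summarize a parsed response body (a plain dict) per region.
# Field names are taken from the example response above.
def summarize(deployment: dict) -> None:
    print(f"{deployment['name']}: {deployment['status']} at {deployment['address']}")
    for container in deployment.get("containers", []):
        ready = container["deploy_status"]
        scale = container["scale"]
        print(
            f"  region {container['region_id']}: "
            f"{ready['ready']}/{ready['total']} replicas ready, "
            f"scale {scale['min']}-{scale['max']} "
            f"(cooldown {scale['cooldown_period']}s)"
        )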
Authorizations
API key for authentication.

Path Parameters
project_id
Project ID
Examples: 1

deployment_name
Inference instance name.
Required string length: 4 - 30
Examples: "my-instance"

Response
200 - application/json
OK
The response is of type object.
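
Because the response carries a status field, a common pattern is to poll this endpoint after creating or starting a deployment until it reports ACTIVE. Below is a minimal sketch built on the SDK call shown above; the polling interval, the time budget, and the assumption that the deployment eventually reaches ACTIVE are choices of this example, not guarantees from this page.

import os
import time

from gcore import Gcore

client = Gcore(api_key=os.environ.get("GCORE_API_KEY"))

def wait_until_active(project_id: int, deployment_name: str, timeout_s: float = 600.0) -> None:
    # Poll "Get inference deployment" until the deployment reports ACTIVE.
    # The 10 s interval and 600 s budget are arbitrary example values.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        deployment = client.cloud.inference.deployments.get(
            deployment_name=deployment_name,
            project_id=project_id,
        )
        if deployment.status == "ACTIVE":
            return
        time.sleep(10)
    raise TimeoutError(f"{deployment_name} did not become ACTIVE within {timeout_s:.0f}s")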