GET /cloud/v3/inference/models/{model_id}

Get model from catalog
curl --request GET \
  --url https://api.gcore.com/cloud/v3/inference/models/{model_id} \
  --header 'Authorization: apikey <api-key>'
{
  "category": "Text Classification",
  "default_flavor_name": "inference-16vcpu-232gib-1xh100-80gb",
  "description": "My first model",
  "developer": "Stability AI",
  "documentation_page": "/docs",
  "eula_url": "https://example.com/eula",
  "example_curl_request": "curl -X POST http://localhost:8080/predict -d '{\"data\": \"sample\"}'",
  "has_eula": true,
  "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
  "image_registry_id": "123e4567-e89b-12d3-a456-426614174999",
  "image_url": "registry.hub.docker.com/my_model:latest",
  "inference_backend": "torch",
  "inference_frontend": "gradio",
  "model_id": "mistralai/Pixtral-12B-2409",
  "name": "model1",
  "openai_compatibility": "full",
  "port": 8080,
  "version": "v0.1"
}
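The request above can also be sketched in Python. This is a minimal sketch using only the standard library; the `get_model` helper is illustrative and not part of any official SDK, and the snippet below only parses a trimmed copy of the example response rather than calling the live API:

```python
import json
import urllib.request

API_URL = "https://api.gcore.com/cloud/v3/inference/models/{model_id}"

def get_model(api_key: str, model_id: str) -> dict:
    """Fetch one catalog model. The Authorization header must be
    the word 'apikey', a single space, then the token."""
    req = urllib.request.Request(
        API_URL.format(model_id=model_id),
        headers={"Authorization": f"apikey {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Parsing a trimmed copy of the example response body shown above:
example = json.loads("""{
  "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
  "model_id": "mistralai/Pixtral-12B-2409",
  "has_eula": true,
  "port": 8080
}""")
```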

Authorizations

Authorization
string
header
required

API key for authentication. Make sure to include the word apikey, followed by a single space and then your token. Example: apikey 1234$abcdef
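In code, the header format described above can be built like this (a sketch; the `auth_header` helper name is illustrative, and the token is the sample value from the example):

```python
def auth_header(token: str) -> dict:
    # 'apikey', one space, then the token, as described above
    return {"Authorization": f"apikey {token}"}

print(auth_header("1234$abcdef")["Authorization"])  # apikey 1234$abcdef
```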

Path Parameters

model_id
string
required

Model ID

Response

200 - application/json

OK

category
string | null
required

Category of the model.

Examples:

"Text Classification"

default_flavor_name
string | null
required

Default flavor for the model.

Examples:

"inference-16vcpu-232gib-1xh100-80gb"

description
string
required

Description of the model.

Examples:

"My first model"

developer
string | null
required

Developer of the model.

Examples:

"Stability AI"

documentation_page
string | null
required

Path to the documentation page.

Examples:

"/docs"

eula_url
string | null
required

URL to the EULA text.

Examples:

"https://example.com/eula"

example_curl_request
string | null
required

Example curl request to the model.

Examples:

"curl -X POST http://localhost:8080/predict -d '{\"data\": \"sample\"}'"

has_eula
boolean
required

Whether the model has an EULA.

Examples:

true

id
string<uuid>
required

Model ID.

Examples:

"3fa85f64-5717-4562-b3fc-2c963f66afa6"

image_registry_id
string | null
required

ID of the image registry that holds the model image.

Examples:

"123e4567-e89b-12d3-a456-426614174999"

image_url
string
required

Image URL of the model.

Examples:

"registry.hub.docker.com/my_model:latest"

inference_backend
string | null
required

The underlying inference engine.

Examples:

"torch"

"tensorflow"

inference_frontend
string | null
required

The model frontend type.

Examples:

"gradio"

"vllm"

"triton"

model_id
string | null
required

Model name used when performing inference calls.

Examples:

"mistralai/Pixtral-12B-2409"

name
string
required

Name of the model.

Examples:

"model1"

openai_compatibility
string | null
required

OpenAI compatibility level.

Examples:

"full"

"partial"

"none"
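A client may want to branch on this field when choosing how to talk to the model. A hedged sketch (the helper name is illustrative, and treating "partial" as usable via an OpenAI-style client is an assumption, not something this reference guarantees):

```python
def supports_openai_api(model: dict) -> bool:
    # Assumption: "full" and "partial" can be reached via an OpenAI-style
    # client; "none" (or a null value) requires the model's native API.
    return model.get("openai_compatibility") in ("full", "partial")

print(supports_openai_api({"openai_compatibility": "full"}))  # True
print(supports_openai_api({"openai_compatibility": "none"}))  # False
```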

port
integer
required

Port on which the model runs.

Examples:

8080

version
string | null
required

Version of the model.

Examples:

"v0.1"