Gclaw uses the Kimi-K2.5 model for inference, running on Gcore H200 GPUs.

Documentation Index
Fetch the complete documentation index at: https://gcore.com/docs/llms.txt
Use this file to discover all available pages before exploring further.
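As a sketch of how a client might consume the index: llms.txt files are conventionally plain markdown whose entries are `[title](url)` links, so discovering pages amounts to extracting those links. The parser and sample content below are illustrative assumptions, not part of any Gcore SDK; the real index lives at the URL above.

```python
import re

# llms.txt entries are markdown links: "[title](url)".
# parse_index extracts (title, url) pairs so a client can list pages.
LINK_RE = re.compile(r"\[([^\]]+)\]\((https?://[^)]+)\)")

def parse_index(text: str) -> list[tuple[str, str]]:
    return LINK_RE.findall(text)

# Hypothetical sample in the llms.txt format; fetch the real file from
# https://gcore.com/docs/llms.txt instead.
sample = """# Gcore Docs
- [Edge AI](https://gcore.com/docs/edge-ai): inference at the edge
- [CDN](https://gcore.com/docs/cdn): content delivery
"""

for title, url in parse_index(sample):
    print(title, "->", url)
```

In practice you would fetch the file over HTTPS first and then parse the response body the same way.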
Model limits
| Limit | Value |
|---|---|
| Context window | 200,000 tokens |
| Maximum output | 32,000 tokens |
| Reasoning | Enabled |
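The limits above interact: prompt and output tokens share the 200,000-token context window, while output is separately capped at 32,000 tokens. The helper below is an illustrative sketch (not a Gcore or Kimi API) of choosing an output budget under both constraints.

```python
# Limits from the table above; shared between prompt and output.
CONTEXT_WINDOW = 200_000  # total tokens (prompt + output)
MAX_OUTPUT = 32_000       # per-request output cap

def max_output_for(prompt_tokens: int) -> int:
    """Largest output budget that fits both the output cap and the
    remaining context window after the prompt."""
    remaining = CONTEXT_WINDOW - prompt_tokens
    return max(0, min(MAX_OUTPUT, remaining))

print(max_output_for(10_000))   # window has room: full 32,000 cap applies
print(max_output_for(190_000))  # only 10,000 tokens of window remain
```

A long prompt near the window limit therefore shrinks the usable output budget below the nominal 32,000-token maximum.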