
Managed Inference I/O Specs

Supported input modes, response formats, and request schemas for CosmicAC inference endpoints.


Input Modes

| Mode | Description |
| --- | --- |
| HTTP REST | `/v1/chat/completions` (OpenAI-compatible) |
| HRPC | Hyperswarm RPC protocol |

Authentication

| Mode | Description |
| --- | --- |
| Bearer Token | `Authorization: Bearer <token>` |
| API Key Header | `X-API-Key: <token>` |
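Either scheme can be attached as a plain HTTP header; only one is needed per request. A minimal sketch (the token value is a placeholder, not a real credential):

```python
# Build the two supported authentication headers.
# TOKEN is a placeholder value for illustration only.
TOKEN = "my-secret-token"

bearer_headers = {"Authorization": f"Bearer {TOKEN}"}
api_key_headers = {"X-API-Key": TOKEN}

# Pick one scheme and merge it with the usual JSON content type
# before issuing the request.
headers = {"Content-Type": "application/json", **bearer_headers}
```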

Request Modes

| Mode | Description |
| --- | --- |
| Streaming | Server-Sent Events (SSE) support |
| Non-streaming | Standard JSON request/response |

Response Modes

| Mode | Description |
| --- | --- |
| JSON Response | Standard completion with usage metadata |
| Streaming SSE | Real-time event stream with `data:` chunks |
| Usage Tracking | Token consumption (input/output/total) |
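A streaming client consumes the SSE body line by line, decoding each `data:` chunk until the terminal sentinel. A minimal parsing sketch; the chunk payload fields below follow the OpenAI-compatible convention (`data: <json>` deltas terminated by `data: [DONE]`) and are illustrative, not captured from a live endpoint:

```python
import json

def parse_sse_chunks(raw_stream: str):
    """Collect decoded `data:` events from an SSE response body."""
    events = []
    for line in raw_stream.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank separators and keep-alive comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        events.append(json.loads(payload))
    return events

# Illustrative stream body (assumed delta format):
raw = (
    'data: {"choices": [{"delta": {"content": "Hel"}}]}\n'
    '\n'
    'data: {"choices": [{"delta": {"content": "lo"}}]}\n'
    '\n'
    'data: [DONE]\n'
)
chunks = parse_sse_chunks(raw)
text = "".join(c["choices"][0]["delta"]["content"] for c in chunks)
```

With `include_usage` enabled, the final chunk before `[DONE]` would also carry the token-usage totals.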

Request Schema

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `model` | string | Yes | Model identifier |
| `messages` | array | Yes | An array of message objects |
| `stream` | boolean | No | Enable streaming response |
| `stream_options` | object | No | Streaming configuration |

Message Object

| Field | Type | Description |
| --- | --- | --- |
| `role` | string | Message role |
| `content` | string | Message content |

Stream Options

| Field | Type | Description |
| --- | --- | --- |
| `include_usage` | boolean | Include token usage in stream |

Object Structure

```json
{
  "model": "string (required)",
  "messages": [
    { "role": "string", "content": "string" }
  ],
  "stream": "boolean",
  "stream_options": {
    "include_usage": true
  }
}
```
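Assembling a concrete request body from this schema is straightforward. A sketch, assuming the OpenAI-compatible message roles; the model name is a placeholder for a model identifier your deployment actually serves:

```python
import json

# Build a request body matching the schema above.
# "example-model" is a placeholder model identifier.
body = {
    "model": "example-model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": True,
    "stream_options": {"include_usage": True},
}

# `stream` and `stream_options` are optional; omit both for a plain
# JSON response. Serialize the body before sending it over HTTP.
payload = json.dumps(body)
```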
