GPT-OSS Chat

Powered by OpenAI's GPT models via the EigenAI endpoint. Engage in intelligent conversations.


Parameters

- Temperature: higher values make output more random
- Max tokens: maximum length of the response
- Reasoning effort: controls how much effort the model spends on reasoning steps

🔌 API Access

Integrate GPT-OSS chat into your applications using our REST API.

🔑 API Keys

Every request must include an active API key, passed as a Bearer token in the Authorization header. Manage your API keys →
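A common pattern is to keep the key in an environment variable rather than hard-coding it. A minimal sketch, assuming the variable name EIGENAI_API_KEY (the name is illustrative, not mandated by the API):

```python
import os

# EIGENAI_API_KEY is an assumed variable name; the placeholder fallback
# keeps the snippet runnable without a real key.
API_KEY = os.environ.get("EIGENAI_API_KEY", "YOUR_API_KEY")

# Every request must carry the key as a Bearer token.
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```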

POST /api/v1/generate
POST /api/v1/chat/completions

Chat with GPT-OSS

Send conversation messages to GPT-OSS.

Cost: 0.10 credits / 1M input tokens • 0.50 credits / 1M output tokens
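To make the pricing concrete, a small sketch that estimates the credit cost of one call from its token counts, using the rates quoted above (the example token counts are made up):

```python
INPUT_RATE = 0.10 / 1_000_000   # credits per input token
OUTPUT_RATE = 0.50 / 1_000_000  # credits per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated credit cost of one request at the quoted rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 1,200-token prompt producing an 800-token reply.
cost = estimate_cost(1200, 800)
print(f"{cost:.6f} credits")  # 0.000520
```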

Request (cURL)

curl -X POST https://app.eigenai.com/api/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-oss",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    "temperature": 0.7,
    "reasoning_effort": "medium",
    "max_tokens": 2048
  }'

Request (Python)

import json
import requests

url = "https://app.eigenai.com/api/v1/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}
payload = {
    "model": "gpt-oss",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    "temperature": 0.7,
    "reasoning_effort": "medium",
    "max_tokens": 2048
}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
print(json.dumps(response.json(), indent=2))
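The example above prints the raw JSON. Assuming the endpoint follows the OpenAI-compatible chat-completions response shape (a `choices` list whose first entry holds the assistant message, plus a `usage` block), the reply text can be pulled out like this. The `data` dict below is a stand-in for `response.json()`, not a real API response:

```python
# Stand-in for response.json(); the real call returns a structure like
# this IF the endpoint is OpenAI-compatible (an assumption, not
# confirmed by the docs above).
data = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Quantum computing uses qubits..."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 25, "completion_tokens": 180, "total_tokens": 205},
}

reply = data["choices"][0]["message"]["content"]
usage = data.get("usage", {})
print(reply)
print(f"tokens: {usage.get('prompt_tokens')} in / {usage.get('completion_tokens')} out")
```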

Request (JavaScript/Node.js)

// On Node 18+ you can drop this import and use the built-in global fetch.
import fetch from 'node-fetch';

const response = await fetch('https://app.eigenai.com/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'gpt-oss',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Explain quantum computing in simple terms.' }
    ],
    temperature: 0.7,
    reasoning_effort: 'medium',
    max_tokens: 2048
  })
});

if (!response.ok) {
  throw new Error(`Request failed: ${response.status} ${await response.text()}`);
}

const result = await response.json();
console.log(result);

Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Must be gpt-oss |
| messages | array | Yes | Conversation history as {role, content} objects |
| temperature | number | Optional | Sampling temperature (0–2); higher values make output more random |
| max_tokens | number | Optional | Maximum number of tokens in the response |
| reasoning_effort | string | Optional | Reasoning depth: low, medium, or high (default medium) |
| stream | boolean | Optional | Set to true for SSE streaming responses (default false) |
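When stream is true, the endpoint returns server-sent events. Assuming it uses the OpenAI-style `data: {json}` / `data: [DONE]` framing with per-chunk `delta` objects (an assumption; confirm against an actual response), the stream can be consumed like this:

```python
import json

def parse_sse_chunks(lines):
    """Yield content deltas from OpenAI-style SSE lines.

    Assumes the `data: {json}` / `data: [DONE]` framing used by
    OpenAI-compatible endpoints; adjust if this API differs.
    """
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank separator lines and keep-alives
        data = line[len("data: "):]
        if data == "[DONE]":  # assumed end-of-stream sentinel
            return
        chunk = json.loads(data)
        yield chunk["choices"][0].get("delta", {}).get("content", "")

# Wiring it to requests (untested network sketch):
#   with requests.post(url, headers=headers,
#                      json={**payload, "stream": True}, stream=True) as resp:
#       resp.raise_for_status()
#       for delta in parse_sse_chunks(resp.iter_lines(decode_unicode=True)):
#           print(delta, end="", flush=True)

# Simulated stream for illustration:
fake_stream = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": " world"}}]}',
    "data: [DONE]",
]
print("".join(parse_sse_chunks(fake_stream)))  # prints "Hello world"
```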