POST /chat/completions

Chat Completions

Creates a model response for the given chat conversation. Supports streaming, function calling, and vision capabilities.

OpenAI Compatible

This endpoint is fully compatible with OpenAI's chat completions API. Simply change the base URL and use your Selam API key.

Request

Request Body

```json
{
  "model": "selam-plus",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 150,
  "stream": false
}
```

Parameters

model (string, required)

ID of the model to use. See the models page for available options.

messages (array, required)

A list of messages comprising the conversation so far. Each message has a role (system, user, or assistant) and content.

temperature (number, optional)

Sampling temperature between 0 and 2. Higher values make output more random. Default is 1.

max_tokens (integer, optional)

The maximum number of tokens to generate in the completion.

stream (boolean, optional)

If set to true, partial message deltas will be sent as server-sent events. Default is false.
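When stream is true, the response body is a stream of server-sent events: each event is a `data:` line carrying a JSON chunk with a partial delta, terminated by a `data: [DONE]` sentinel. A minimal sketch of reassembling the message from those lines (the sample chunks below are illustrative, not captured from the API):

```python
import json

def parse_sse_chunks(lines):
    """Extract the delta text from each `data:` line of an SSE stream.

    Skips non-data lines, stops at the `[DONE]` sentinel, and joins the
    partial `content` fields into the full assistant message.
    """
    parts = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

# Illustrative SSE lines in the shape OpenAI-compatible APIs emit:
sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": "!"}}]}',
    'data: [DONE]',
]
print(parse_sse_chunks(sample))  # Hello!
```

In practice the OpenAI SDK handles this parsing for you; rolling your own is only needed when consuming the raw HTTP stream.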

Response

```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "selam-plus",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
```
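The fields you typically care about are the assistant message, the finish reason, and the token usage. Given the response above as a parsed JSON object, extraction looks like this:

```python
# The example response from above, as a parsed JSON object.
response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1677652288,
    "model": "selam-plus",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Hello! How can I assist you today?",
            },
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21},
}

reply = response["choices"][0]["message"]["content"]
# "stop" means the model finished naturally; "length" means it hit max_tokens.
finished_normally = response["choices"][0]["finish_reason"] == "stop"
total = response["usage"]["total_tokens"]

print(reply)              # Hello! How can I assist you today?
print(finished_normally)  # True
print(total)              # 21
```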

Examples

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.selamgpt.com/v1"
)

response = client.chat.completions.create(
    model="selam-plus",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
```
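The same call can be streamed by passing stream=True, in which case the SDK yields chunk objects whose delta carries partial content. A sketch, assuming the OpenAI Python SDK (running it against the API requires the openai package and a valid Selam API key):

```python
def print_stream(stream):
    """Echo streamed chat-completion chunks as they arrive; return the full text."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta
        if delta.content:  # role-only or empty deltas carry no text
            parts.append(delta.content)
            print(delta.content, end="", flush=True)
    print()
    return "".join(parts)

if __name__ == "__main__":
    # Requires the `openai` package and a valid Selam API key.
    from openai import OpenAI

    client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.selamgpt.com/v1")
    stream = client.chat.completions.create(
        model="selam-plus",
        messages=[{"role": "user", "content": "Hello!"}],
        stream=True,
    )
    print_stream(stream)
```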
