Completions

CompletionsResource

Methods

create() -> Completion

post /v5/completions

Parameters
model: str

The model, specified as model_vendor/model, for example openai/gpt-4o.

prompt: Union[str, List[str]]

The prompt(s) to generate completions for, encoded as a string or a list of strings.

PromptUnionMember0 = str
PromptUnionMember1 = List[str]
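
Both prompt forms call the same endpoint; a list batches several prompts in one request. A minimal sketch, assuming client is an already-constructed SDK client exposing this resource (its class and import path are not shown in this reference):

# Single prompt as a plain string.
single = client.completions.create(
    model="openai/gpt-4o",
    prompt="Write a haiku about the sea.",
)

# Batched prompts as a list of strings; the API typically returns a set of
# choices for each prompt in the list.
batched = client.completions.create(
    model="openai/gpt-4o",
    prompt=[
        "Translate 'hello' into French.",
        "Translate 'hello' into German.",
    ],
)
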
best_of: Optional[int]

Generates best_of completions server-side and returns the best one. When used together with n, best_of must be greater than n.

echo: Optional[bool]

Echo back the prompt in addition to the completion.

frequency_penalty: Optional[float]

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text.

logit_bias: Optional[Dict[str, int]]

Modify the likelihood of specified tokens appearing in the completion. Maps token IDs (as string keys) to bias values from -100 to 100.
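
The dictionary keys are token IDs encoded as strings, not raw text. A sketch of building a bias map with the tiktoken package; the tokenizer name and the client object are assumptions, not part of this reference:

import tiktoken

# Assumption: the openai/gpt-4o route uses the "o200k_base" tokenizer.
enc = tiktoken.get_encoding("o200k_base")

# Strongly discourage the token(s) for " blue" wherever they would appear.
token_ids = enc.encode(" blue")
bias = {str(tid): -100 for tid in token_ids}

completion = client.completions.create(
    model="openai/gpt-4o",
    prompt="The sky is",
    logit_bias=bias,  # token IDs as string keys, bias values from -100 to 100
)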

logprobs: Optional[int]

Include log probabilities of the most likely tokens. Maximum value is 5.
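
Combined with echo, logprobs lets you inspect per-token probabilities for both the prompt and the completion. A sketch; the response field names below are assumptions, since the Completion type is not detailed in this reference:

completion = client.completions.create(
    model="openai/gpt-4o",
    prompt="Once upon a time",
    max_tokens=16,
    echo=True,    # include the prompt tokens in the output
    logprobs=3,   # log probabilities for the top 3 tokens (maximum 5)
)

# Assumed response shape: each choice carries tokens and their log probabilities.
choice = completion.choices[0]
for token, logprob in zip(choice.logprobs.tokens, choice.logprobs.token_logprobs):
    print(token, logprob)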

max_tokens: Optional[int]

The maximum number of tokens that can be generated in the completion.

n: Optional[int]

How many completions to generate for each prompt.
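
A sketch of requesting several candidates at once, with best_of larger than n as required above; the choices and text fields are assumptions about the response shape:

completion = client.completions.create(
    model="openai/gpt-4o",
    prompt="Suggest a name for a coffee shop:",
    n=3,          # return three completions
    best_of=5,    # sample five server-side, return the best three
    max_tokens=12,
)

for choice in completion.choices:
    print(choice.text)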

presence_penalty: Optional[float]

Number between -2.0 and 2.0. Positive values penalize new tokens based on their presence in the text so far.

seed: Optional[int]

If specified, attempts to generate deterministic samples. Determinism is not guaranteed.

stop: Optional[Union[str, List[str]]]

Up to 4 sequences where the API will stop generating further tokens.

StopUnionMember0 = str
StopUnionMember1 = List[str]
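
stop accepts either a single sequence or a list of up to four; a brief sketch using the assumed client from the earlier examples:

# Stop at the first newline.
qa = client.completions.create(
    model="openai/gpt-4o",
    prompt="Q: What is the capital of France?\nA:",
    stop="\n",
)

# Stop at whichever of several sequences appears first (at most four).
listing = client.completions.create(
    model="openai/gpt-4o",
    prompt="List three fruits:\n1.",
    stop=["\n\n", "4."],
)
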
stream: Optional[Literal[False]]

Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events.

Allowed value: False
stream_options: Optional[Dict[str, object]]

Options for streaming response. Only set this when stream is True.

suffix: Optional[str]

The suffix that comes after a completion of inserted text. Only supported for gpt-3.5-turbo-instruct.

temperature: Optional[float]

Sampling temperature between 0 and 2. Higher values make output more random, lower more focused.

top_p: Optional[float]

An alternative to sampling with temperature: only the tokens comprising the top_p probability mass are considered. Range 0 to 1.
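
A sketch of the sampling controls together; it is common practice to adjust either temperature or top_p, not both. The client object remains an assumption:

completion = client.completions.create(
    model="openai/gpt-4o",
    prompt="Write a tagline for a bakery.",
    temperature=0.2,  # low temperature: more focused, less random output
    # top_p=0.9,      # nucleus-sampling alternative; usually set one or the other
    seed=42,          # best-effort determinism across repeated calls
    max_tokens=20,
)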

user: Optional[str]

A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse.

Returns

Completion

Request example
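
A minimal sketch; client is assumed to be an already-constructed SDK client, since its class and import path are not given in this reference:

completion = client.completions.create(
    model="openai/gpt-4o",
    prompt="Say hello in three languages.",
    max_tokens=32,
    temperature=0.7,
    user="user-1234",  # optional end-user identifier
)
print(completion)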

Domain types

class Completion: ...