

OpenAI
info

For detailed information about using models with APIpie, check out our Models Overview and Completions Guide.

Description

OpenAI's suite of language models represents the cutting edge of artificial intelligence technology. These models, including the renowned GPT-4, GPT-3.5, and o1 series, deliver exceptional performance across a wide range of tasks and are available through various providers integrated with APIpie's routing system.

The models come in several specialized variants:

  • Chat Models: Standard chat models optimized for dialogue and instruction-following
  • ChatX Models: Enhanced versions with additional capabilities like function calling and structured output
  • O1 Models: Reasoning-focused next-generation models (o1, o1-preview, o1-mini) built for complex problem-solving and reliable outputs
  • Vision Models: Capable of understanding and analyzing images alongside text
  • Audio Models: Specialized for audio processing and transcription tasks
  • Voice Models: Text-to-speech models with various voice options
  • Code Models: Optimized for programming and technical documentation tasks

The O1 series represents OpenAI's latest advancement in language models, offering:

  • Improved reasoning and problem-solving capabilities with up to 200K token context
  • Enhanced consistency in outputs through specialized variants (preview, mini)
  • Better handling of complex instructions with optimized response generation
  • Superior performance in specialized tasks with provider-specific optimizations
  • Flexible deployment options across multiple providers

Key Features

  • Extended Context Windows: Models support context lengths from 4K up to 200K tokens, enabling processing of extensive documents and conversations.
  • Multi-Provider Availability: Accessible through OpenAI, OpenRouter, EdenAI, and DeepInfra.
  • Advanced Capabilities:
    • Function calling for structured tool use
    • JSON mode for reliable structured output
    • Parallel function calling for efficiency
    • System message control
    • Reproducible outputs with seeds
    • Temperature and top_p controls
    • O1 optimizations for enhanced performance
  • Multimodal Processing: Support for text, images, and audio in a single conversation
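
As a sketch, these capabilities map onto fields of an OpenAI-compatible chat completions payload roughly as follows. The values are illustrative, and `get_weather` is a hypothetical tool defined only for this example:

```python
import json

# Illustrative chat completions payload exercising the capabilities
# listed above: function calling (tools), JSON mode (response_format),
# reproducible outputs (seed), and sampling controls (temperature, top_p).
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a weather assistant."},
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
    # Function calling: declare tools the model may choose to invoke.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Get current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    # JSON mode: request structured output.
    "response_format": {"type": "json_object"},
    # Reproducible outputs and sampling controls.
    "seed": 42,
    "temperature": 0.2,
    "top_p": 0.9,
}

print(json.dumps(payload, indent=2))
```

Which parameters a given model honors varies by model and provider, so check the Completions Guide before relying on any of them.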

Model List

The model list updates dynamically; see the Models Route for the current list of models.

info

For information about model performance and benchmarks, see OpenAI's Model Overview.

| Model Name | Max Tokens | Response Tokens | Providers | Type/Subtype |
|---|---|---|---|---|
| gpt-4-turbo-preview | 128,000 | 4,096 | OpenAI | LLM/ChatX |
| gpt-4-turbo | 128,000 | 4,096 | OpenAI, EdenAI | LLM/ChatX |
| gpt-4-turbo-2024-04-09 | 4,096 | 4,096 | OpenAI, EdenAI | LLM/Chat |
| gpt-4-1106-preview | 4,096 | 4,096 | OpenAI, EdenAI | LLM/Chat |
| gpt-4 | 8,191 | 4,096 | OpenAI, OpenRouter, EdenAI | LLM/Chat |
| gpt-4-0613 | 8,192 | 8,192 | OpenAI | LLM/ChatX |
| gpt-4-0314 | 8,191 | 4,096 | EdenAI | LLM/Chat |
| gpt-4-32k-0314 | 32,767 | 4,096 | EdenAI | LLM/Chat |
| gpt-4o | 128,000 | 16,384 | OpenAI, OpenRouter, EdenAI | LLM/Chat,ChatX,Code |
| gpt-4o-mini | 128,000 | 16,384 | OpenAI, EdenAI | LLM/Chat,ChatX |
| gpt-4o-mini-2024-07-18 | 16,384 | 16,384 | OpenAI | LLM/Chat,ChatX |
| gpt-4o-2024-05-13 | 4,096 | 4,096 | OpenAI, EdenAI | LLM/Chat,ChatX |
| gpt-4o-2024-08-06 | 16,384 | 16,384 | OpenAI | LLM/Chat,ChatX |
| gpt-4o-2024-11-20 | 16,384 | 16,384 | OpenAI, OpenRouter | LLM/Chat,ChatX |
| o1-preview | 128,000 | 32,768 | OpenAI | LLM/ChatX |
| o1-mini | 128,000 | 32,768 | EdenAI | LLM/Chat |
| o1 | 200,000 | 100,000 | OpenRouter | LLM |
| gpt-4-vision-preview | 128,000 | 4,096 | OpenAI, OpenRouter | Vision/Multimodal |
| gpt-4-1106-vision-preview | 128,000 | 4,096 | OpenAI | Vision/ChatX |
| gpt-4o-audio-preview | 16,384 | 16,384 | OpenAI | LLM |
| gpt-4o-mini-audio-preview | 128,000 | 16,384 | OpenAI | LLM |
| gpt-4o-audio-preview-2024-12-17 | 16,384 | 16,384 | OpenAI | LLM |
| gpt-4o-mini-audio-preview-2024-12-17 | 128,000 | 16,384 | OpenAI | LLM |
| gpt-3.5-turbo | 16,384 | 4,096 | OpenAI, EdenAI | LLM/Chat |
| gpt-3.5-turbo-16k | 16,385 | 4,096 | EdenAI | LLM/Chat |
| gpt-3.5-turbo-0125 | 16,385 | 4,096 | OpenAI, EdenAI | LLM/Chat |
| gpt-3.5-turbo-1106 | 16,385 | 4,096 | OpenAI, EdenAI | LLM/Chat |
| gpt-3.5-turbo-instruct | 4,095 | 4,096 | OpenRouter | LLM/Chat |
| tts-1-hd shimmer | - | - | OpenAI | Voice/TTS |
| tts-1-hd alloy | - | - | OpenAI | Voice/TTS |
| tts-1-hd echo | - | - | OpenAI | Voice/TTS |
| tts-1-hd fable | - | - | OpenAI | Voice/TTS |
| tts-1-hd onyx | - | - | OpenAI | Voice/TTS |
| tts-1-hd nova | - | - | OpenAI | Voice/TTS |
| tts-1-1106 alloy | - | - | OpenAI | Voice/TTS |
| tts-1-1106 echo | - | - | OpenAI | Voice/TTS |
| tts-1-1106 fable | - | - | OpenAI | Voice/TTS |
| tts-1-1106 onyx | - | - | OpenAI | Voice/TTS |
| tts-1-1106 nova | - | - | OpenAI | Voice/TTS |
| clip-vit-large-patch14-336 | - | - | DeepInfra | Vision/Classification |
| clip-vit-large-patch14 | - | - | DeepInfra | Vision/Classification |
| clip-vit-base-patch32 | - | - | DeepInfra | Vision/Classification |
| whisper-large | - | - | DeepInfra | Voice/ASR |
| whisper-medium | - | - | DeepInfra | Voice/ASR |
| whisper-medium.en | - | - | DeepInfra | Voice/ASR |
| whisper-timestamped-medium | - | - | DeepInfra | Voice/ASR |
| whisper-timestamped-medium.en | - | - | DeepInfra | Voice/ASR |
| whisper-base | - | - | DeepInfra | Voice/ASR |
| whisper-base.en | - | - | DeepInfra | Voice/ASR |
| whisper-small | - | - | DeepInfra | Voice/ASR |
| whisper-small.en | - | - | DeepInfra | Voice/ASR |
| whisper-tiny | - | - | DeepInfra | Voice/ASR |
| whisper-tiny.en | - | - | DeepInfra | Voice/ASR |
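
Since this table changes over time, the live catalog can be fetched from the Models Route and filtered client-side. The sketch below assumes a GET endpoint at `/v1/models` that returns a `data` array of entries with `id` and `provider` fields; verify the exact path and schema against the Models Route documentation:

```python
import json
import urllib.request

def openai_models(models: list) -> list:
    """Filter a models listing down to OpenAI-provided entries.

    Assumes each entry carries `id` and `provider` fields; the real
    schema may differ, so check the Models Route docs.
    """
    return [m["id"] for m in models if m.get("provider") == "openai"]

# Fetching the live list (endpoint path is an assumption, requires network):
# with urllib.request.urlopen("https://apipie.ai/v1/models") as resp:
#     listing = json.load(resp)["data"]

# Offline illustration with mock entries:
listing = [
    {"id": "gpt-4o", "provider": "openai"},
    {"id": "whisper-base", "provider": "deepinfra"},
]
print(openai_models(listing))  # → ['gpt-4o']
```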

Example API Call

Below is an example of how to use the Chat Completions API with OpenAI's GPT-4 model:

curl -L -X POST 'https://apipie.ai/v1/chat/completions' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer <YOUR_API_KEY>' \
  --data-raw '{
    "provider": "openai",
    "model": "gpt-4-turbo-preview",
    "max_tokens": 150,
    "messages": [
      {
        "role": "user",
        "content": "What are the key differences between renewable and non-renewable energy sources?"
      }
    ]
  }'
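
The same request can be issued from Python using only the standard library. The `build_request` helper below is a sketch written for this page, mirroring the endpoint, headers, and payload of the curl call above:

```python
import json
import urllib.request

API_URL = "https://apipie.ai/v1/chat/completions"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build the same POST request as the curl example above."""
    payload = {
        "provider": "openai",
        "model": "gpt-4-turbo-preview",
        "max_tokens": 150,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Sending (requires a valid key and network access):
# with urllib.request.urlopen(build_request("<YOUR_API_KEY>", "Hello")) as resp:
#     print(json.load(resp))
```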

Response Example

The expected response structure from an OpenAI model:

{
  "id": "chatcmpl-abc123example456",
  "object": "chat.completion",
  "created": 1709234567,
  "provider": "openai",
  "model": "gpt-4-turbo-preview",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Renewable and non-renewable energy sources differ in several key ways:\n\n1. Replenishment:\n- Renewable: Naturally replenished within a human lifetime (solar, wind, hydro)\n- Non-renewable: Take millions of years to form (fossil fuels)\n\n2. Environmental Impact:\n- Renewable: Generally lower emissions and environmental impact\n- Non-renewable: Higher carbon emissions and environmental degradation\n\n3. Availability:\n- Renewable: Unlimited but dependent on natural conditions\n- Non-renewable: Limited and depleting resources"
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 110,
    "total_tokens": 130,
    "prompt_characters": 82,
    "response_characters": 380,
    "cost": 0.003500,
    "latency_ms": 2800
  },
  "system_fingerprint": "fp_abc123def456"
}
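
The fields of interest can be pulled out of such a response with a few lines of Python; the snippet below embeds an abbreviated copy of the response shown above:

```python
import json

# Abbreviated sample in the response shape shown above.
raw = """
{
  "id": "chatcmpl-abc123example456",
  "object": "chat.completion",
  "model": "gpt-4-turbo-preview",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant",
                 "content": "Renewable and non-renewable energy sources differ..."},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 20, "completion_tokens": 110,
            "total_tokens": 130, "cost": 0.0035, "latency_ms": 2800}
}
"""

response = json.loads(raw)
answer = response["choices"][0]["message"]["content"]
finish_reason = response["choices"][0]["finish_reason"]
cost = response["usage"]["cost"]

print(f"finish_reason={finish_reason} cost=${cost:.4f}")
print(answer)
```

Note that `usage` includes APIpie-specific accounting fields such as `cost` and `latency_ms` alongside the standard token counts.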

API Highlights

  • Provider: Specify the provider or leave blank for automatic selection.
  • Model: Choose from OpenAI's range of models based on your needs. See Models Guide.
  • Max Tokens: Set the maximum response length (varies by model).
  • Messages: Structure your conversations with role-based messages. See message formatting.

Applications and Integrations

  • Conversational AI: Power chatbots and virtual assistants with GPT-4 and GPT-3.5. Try with LibreChat.
  • High-Performance Tasks: Utilize O1 models for applications requiring superior reliability and consistent outputs.
  • Vision Tasks: Process and analyze images with GPT-4 Vision models.
  • Audio Processing: Handle audio tasks with specialized audio preview models.
  • Extended Context: Process long documents with models supporting up to 200K tokens. See our Models Guide.
  • Code Generation: Leverage models for programming tasks and technical documentation.

Ethical Considerations

OpenAI models are powerful tools that require responsible use. Users should implement appropriate safeguards and consider potential biases. For guidance, refer to OpenAI's Usage Policies and Safety Best Practices.


Licensing

OpenAI's models are available under commercial terms through their API Service Terms. For detailed licensing information and usage guidelines, consult the OpenAI Platform Documentation and respective hosting providers.

tip

Try out OpenAI models in APIpie's various supported integrations.