OpenAI Agents Integration Guide


This guide will walk you through integrating the OpenAI Agents SDK with APIpie, enabling you to build powerful multi-agent workflows that leverage a wide range of language models.
What is OpenAI Agents SDK?
OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows. It provides a set of tools and components for:
- Agent Orchestration: Create and manage multiple specialized agents
- Agent Handoffs: Seamlessly transfer control between agents
- Guardrails: Configure safety checks for input and output validation
- Tracing: Built-in tracking and debugging of agent runs
- Tool Integration: Enable agents to use functions and external APIs
- Provider Agnosticism: Works with OpenAI and 100+ other LLMs
By connecting OpenAI Agents SDK to APIpie, you gain access to a wide range of powerful language models while leveraging the SDK's sophisticated agent orchestration capabilities.
Integration Steps
1. Create an APIpie Account
- Register here: APIpie Registration
- Complete the sign-up process.
2. Add Credit
- Add Credit: APIpie Subscription
- Add credits to your account to enable API access.
3. Generate an API Key
- API Key Management: APIpie API Keys
- Create a new API key for use with OpenAI Agents.
4. Install OpenAI Agents SDK
```shell
pip install openai-agents
```

For voice support (optional):

```shell
pip install 'openai-agents[voice]'
```
5. Configure OpenAI Agents SDK for APIpie
Set up the environment variables for the APIpie integration:
```python
import os

# Set APIpie as the provider
os.environ["OPENAI_API_KEY"] = "your-apipie-api-key"
os.environ["OPENAI_BASE_URL"] = "https://apipie.ai/v1"

# Optional: configure a default model
os.environ["OPENAI_MODEL"] = "gpt-4o"  # Use any model available on APIpie
```
Key Features
- Multi-Agent Orchestration: Create specialized agents and coordinate their interactions
- Seamless Handoffs: Transfer control between agents based on needs and expertise
- Configurable Guardrails: Set safety checks for inputs and outputs
- Comprehensive Tracing: Debug and optimize agent workflows
- Tool Integration: Equip agents with functions to interact with external systems
- Model Flexibility: Works with any LLM accessible via OpenAI-compatible API
Example Workflows
| Application Type | What OpenAI Agents Helps You Build |
|---|---|
| Multi-Specialist Systems | Workflows with specialized agents for different domains |
| Human-in-the-Loop Applications | Systems with human review and intervention points |
| Complex Reasoning Chains | Applications requiring multi-step reasoning and planning |
| Enterprise Workflows | Business processes with multiple specialized steps |
| Safety-Critical Applications | Systems with built-in guardrails and validation |
Using OpenAI Agents SDK with APIpie
Basic Single Agent Example
```python
import os

from agents import Agent, Runner

# Configure APIpie
os.environ["OPENAI_API_KEY"] = "your-apipie-api-key"
os.environ["OPENAI_BASE_URL"] = "https://apipie.ai/v1"

# Create a simple agent
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant specializing in Python programming.",
    model="gpt-4o",  # Use any model available on APIpie
)

# Run the agent
result = Runner.run_sync(
    agent,
    "Explain how to use list comprehensions in Python with some examples.",
)
print(result.final_output)
```
Multi-Agent System with Handoffs
```python
import os
import asyncio

from agents import Agent, Runner

# Configure APIpie
os.environ["OPENAI_API_KEY"] = "your-apipie-api-key"
os.environ["OPENAI_BASE_URL"] = "https://apipie.ai/v1"

# Create specialized agents
python_agent = Agent(
    name="Python Expert",
    instructions="You are an expert Python programmer. Provide detailed, technically accurate information about Python programming.",
    model="gpt-4o",
)

javascript_agent = Agent(
    name="JavaScript Expert",
    instructions="You are an expert JavaScript programmer. Provide detailed, technically accurate information about JavaScript programming.",
    model="gpt-4o",
)

# Create a triage agent that can hand off to specialists
triage_agent = Agent(
    name="Programming Triage",
    instructions="Determine if the user is asking about Python or JavaScript and hand off to the appropriate expert agent.",
    model="gpt-4o-mini",  # Use a lighter model for triage
    handoffs=[python_agent, javascript_agent],
)

async def main():
    # Run the agent system
    result = await Runner.run(
        triage_agent,
        "What's the difference between list comprehensions in Python and array methods in JavaScript?",
    )
    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())
```
Using Function Tools
```python
import os
import asyncio

from agents import Agent, Runner, function_tool

# Configure APIpie
os.environ["OPENAI_API_KEY"] = "your-apipie-api-key"
os.environ["OPENAI_BASE_URL"] = "https://apipie.ai/v1"

# Define a function tool for weather information
@function_tool
def get_weather(city: str, country: str = "US") -> str:
    """Get the current weather for a city.

    Args:
        city: The name of the city
        country: The country code (default: US)

    Returns:
        Current weather information
    """
    # In a real implementation, you would call a weather API
    return f"The weather in {city}, {country} is currently sunny and 72°F."

# Create an agent with the weather tool
weather_agent = Agent(
    name="Weather Assistant",
    instructions="You help users get weather information for different locations.",
    model="gpt-4o-mini",
    tools=[get_weather],
)

async def main():
    result = await Runner.run(
        weather_agent,
        "What's the weather like in Tokyo, Japan?",
    )
    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())
```
Adding Guardrails
In the OpenAI Agents SDK, guardrails are decorated functions that return a `GuardrailFunctionOutput`; when a guardrail's tripwire fires, the run raises a tripwire exception you can catch:

```python
import os
import asyncio

from agents import (
    Agent,
    GuardrailFunctionOutput,
    InputGuardrailTripwireTriggered,
    OutputGuardrailTripwireTriggered,
    Runner,
    input_guardrail,
    output_guardrail,
)

# Configure APIpie
os.environ["OPENAI_API_KEY"] = "your-apipie-api-key"
os.environ["OPENAI_BASE_URL"] = "https://apipie.ai/v1"

# Input guardrail: reject prompts containing prohibited words
@input_guardrail
async def profanity_guardrail(ctx, agent, input_data) -> GuardrailFunctionOutput:
    profanity_list = ["bad_word1", "bad_word2"]  # Define your list of prohibited words
    text = str(input_data).lower()
    flagged = [word for word in profanity_list if word in text]
    return GuardrailFunctionOutput(
        output_info={"flagged": flagged},
        tripwire_triggered=bool(flagged),
    )

# Output guardrail: flag overconfident generalizations
@output_guardrail
async def factual_guardrail(ctx, agent, output) -> GuardrailFunctionOutput:
    text = str(output).lower()
    overgeneralized = "definitely" in text and "always" in text
    return GuardrailFunctionOutput(
        output_info={"overgeneralized": overgeneralized},
        tripwire_triggered=overgeneralized,
    )

# Create an agent with guardrails
agent = Agent(
    name="Guarded Assistant",
    instructions="You provide helpful information about science topics.",
    model="gpt-4o",
    input_guardrails=[profanity_guardrail],
    output_guardrails=[factual_guardrail],
)

async def main():
    try:
        result = await Runner.run(agent, "Tell me about the solar system.")
        print(result.final_output)
    except (InputGuardrailTripwireTriggered, OutputGuardrailTripwireTriggered) as e:
        print(f"Guardrail triggered: {e}")

if __name__ == "__main__":
    asyncio.run(main())
```
Using Structured Output Types
```python
import os
import asyncio
from typing import List

from pydantic import BaseModel

from agents import Agent, Runner

# Configure APIpie
os.environ["OPENAI_API_KEY"] = "your-apipie-api-key"
os.environ["OPENAI_BASE_URL"] = "https://apipie.ai/v1"

# Define a structured output type
class MovieRecommendation(BaseModel):
    title: str
    year: int
    director: str
    genre: str
    description: str

class MovieRecommendations(BaseModel):
    recommendations: List[MovieRecommendation]
    reasoning: str

# Create an agent with structured output
movie_agent = Agent(
    name="Movie Recommender",
    instructions="You recommend movies based on user preferences.",
    model="gpt-4o",
    output_type=MovieRecommendations,
)

async def main():
    result = await Runner.run(
        movie_agent,
        "Recommend three sci-fi movies similar to Interstellar.",
    )
    # Access structured data
    for i, movie in enumerate(result.final_output.recommendations, 1):
        print(f"Recommendation {i}:")
        print(f"  Title: {movie.title}")
        print(f"  Year: {movie.year}")
        print(f"  Director: {movie.director}")
        print(f"  Genre: {movie.genre}")
        print(f"  Description: {movie.description}")
    print(f"\nReasoning: {result.final_output.reasoning}")

if __name__ == "__main__":
    asyncio.run(main())
```
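Because the output type is a plain Pydantic model, you can validate agent-shaped payloads independently of any agent run, which is handy for unit-testing code that consumes the structured output. The sample payload below is made up for illustration:

```python
from typing import List

from pydantic import BaseModel

class MovieRecommendation(BaseModel):
    title: str
    year: int
    director: str
    genre: str
    description: str

class MovieRecommendations(BaseModel):
    recommendations: List[MovieRecommendation]
    reasoning: str

# Validate a payload shaped like what the agent would return
payload = {
    "recommendations": [
        {
            "title": "Arrival",
            "year": 2016,
            "director": "Denis Villeneuve",
            "genre": "Sci-Fi",
            "description": "A linguist decodes an alien language.",
        }
    ],
    "reasoning": "Cerebral sci-fi in the spirit of Interstellar.",
}
parsed = MovieRecommendations.model_validate(payload)
print(parsed.recommendations[0].title)
```

If the payload is malformed (a missing field, a non-integer year), `model_validate` raises a `ValidationError`, so schema drift surfaces immediately rather than deep inside your application code.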
Troubleshooting & FAQ
- How do I configure specific models for different agents?
  Set the `model` parameter when creating each agent: `Agent(name="Agent", model="gpt-4o")`.

- Can I use custom prompts with OpenAI Agents?
  Yes, use the `instructions` parameter to define your agent's behavior. For more complex prompting, use multiple agents with specialized instructions.

- How do I debug agent behavior?
  OpenAI Agents SDK includes built-in tracing. You can also use external tracing processors such as Logfire, AgentOps, Braintrust, Scorecard, or Keywords AI.

- What if my handoff logic doesn't work as expected?
  Review your triage agent's instructions and ensure they clearly explain when to hand off to which agent. You can also implement explicit routing logic with function tools.

- Are there limits to how many agents I can chain together?
  While there's no strict limit, each handoff adds latency. The `max_turns` parameter can prevent infinite loops.

- How do I handle authentication for function tools?
  Store API keys securely and load them from environment variables. Your function tool should handle authentication internally.
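As a concrete sketch of that last pattern, a tool can read its credential from the environment at call time, so the key is never hard-coded or exposed to the model. Everything here (the variable name, the endpoint, the helper itself) is hypothetical:

```python
import os

def get_stock_quote(symbol: str) -> str:
    """Hypothetical function tool: fetch a quote from an external API.

    The credential is read from an environment variable at call time,
    so it is never hard-coded or visible to the model.
    """
    api_key = os.environ.get("QUOTE_API_KEY")  # hypothetical variable name
    if not api_key:
        return "Error: QUOTE_API_KEY is not configured."
    # A real tool would call the external API here, sending the key
    # in a header such as {"Authorization": f"Bearer {api_key}"}.
    return f"Fetched quote for {symbol} using the configured key."

# Decorate with @function_tool (as in the earlier examples) to expose
# this function to an agent.
print(get_stock_quote("ACME"))
```

Returning a readable error string when the key is missing lets the agent report the problem to the user instead of crashing the run.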
For more information, see the OpenAI Agents SDK documentation or the GitHub repository.
Support
If you encounter any issues during the integration process, please reach out on APIpie Discord for assistance.