# AG2 (AutoGen) Integration Guide

This guide will walk you through integrating AG2 (formerly AutoGen) with APIpie, enabling you to build powerful multi-agent AI applications with access to a wide range of language models.

## What is AG2?

AG2 (formerly AutoGen) is an open-source framework for building and orchestrating AI agents. It provides a flexible and powerful system for:
- Multi-agent conversations where agents can communicate and collaborate
- Human-in-the-loop workflows for oversight and feedback
- Tool use to extend agents with custom functionality
- Code execution for solving complex problems
- Customizable agent behaviors through system messages and specialized configurations
By connecting AG2 with APIpie, you gain access to a wide range of powerful models and features while leveraging AG2's robust agent orchestration capabilities.

## Integration Steps

### 1. Create an APIpie Account

- Register here: APIpie Registration
- Complete the sign-up process.

### 2. Add Credit

- Add credit here: APIpie Subscription
- Add credits to your account to enable API access.

### 3. Generate an API Key

- API key management: APIpie API Keys
- Create a new API key for use with AG2.

### 4. Install AG2

Install AG2 (AutoGen) using pip:

```bash
pip install ag2[openai]

# Or use the alias if you prefer
pip install autogen[openai]
```

### 5. Configure AG2 for APIpie

Create a configuration that points to APIpie's API:

```python
import os

from autogen import LLMConfig

# Option 1: Using environment variables
os.environ["OPENAI_API_KEY"] = "your-apipie-api-key"
os.environ["OPENAI_API_BASE"] = "https://apipie.ai/v1"

# Option 2: Using a configuration list (recommended)
config_list = [
    {
        "model": "gpt-4o-mini",  # You can use any model available on APIpie
        "api_key": "your-apipie-api-key",
        "base_url": "https://apipie.ai/v1",
    }
]

# Create an LLMConfig object
llm_config = LLMConfig.from_config_dict(config_list[0])
```

You can also save your configuration to a JSON file for better organization:

```python
import json

from autogen import LLMConfig

# Create a configuration list
config = [
    {
        "model": "gpt-4o-mini",
        "api_key": "your-apipie-api-key",
        "base_url": "https://apipie.ai/v1",
    }
]

# Save to a file (make sure to add it to .gitignore)
with open("oai_config.json", "w") as f:
    json.dump(config, f)

# Load from file
llm_config = LLMConfig.from_json(path="oai_config.json")
```
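
If you'd rather keep the key out of the JSON file entirely, one option is to store only non-secret fields on disk and inject the key from an environment variable at load time. A minimal sketch (the `APIPIE_API_KEY` variable name is just an example, not something AG2 requires):

```python
import json
import os
import tempfile

# Write a config file that contains no secrets
config = [
    {
        "model": "gpt-4o-mini",
        "base_url": "https://apipie.ai/v1",
    }
]

path = os.path.join(tempfile.mkdtemp(), "oai_config.json")
with open(path, "w") as f:
    json.dump(config, f)

# At load time, read the file and inject the key from the environment
os.environ.setdefault("APIPIE_API_KEY", "your-apipie-api-key")
with open(path) as f:
    loaded = json.load(f)
for entry in loaded:
    entry["api_key"] = os.environ["APIPIE_API_KEY"]
```

The resulting `loaded` list has the same shape as the config list above, so it can be handed to AG2 the same way, while the file on disk stays safe to commit.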

## Key Features

- Multi-Agent Systems: Create collaborative agent networks that communicate to solve complex tasks
- Customizable Agents: Define specialized agents with unique roles, knowledge, and capabilities
- Tool Integration: Extend agents with custom functions and external APIs
- Code Generation & Execution: Generate and run code to solve technical problems
- Human-in-the-Loop: Include human feedback and oversight at key decision points

## Example Workflows

| Application Type | What AG2 Helps You Build |
| --- | --- |
| Conversational Agents | AI assistants that can engage in natural dialogue |
| Problem-Solving Teams | Groups of specialized agents that collaborate on complex tasks |
| Coding Assistants | Agents that can write, debug, and execute code |
| Research Aids | Agents that can gather, analyze, and synthesize information |
| Decision Support Systems | AI workflows that help humans make informed decisions |

## Using AG2 with APIpie

### Basic Multi-Agent Setup

```python
import os

from autogen import AssistantAgent, LLMConfig, UserProxyAgent

# Configure APIpie
os.environ["OPENAI_API_KEY"] = "your-apipie-api-key"
os.environ["OPENAI_API_BASE"] = "https://apipie.ai/v1"

# Or load from configuration
llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")

# Create an assistant that uses APIpie
with llm_config:
    assistant = AssistantAgent(
        name="assistant",
        system_message="You are a helpful AI assistant specializing in data analysis.",
    )

# Create a user proxy agent that can execute code
user_proxy = UserProxyAgent(
    name="user_proxy",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# Start a conversation
user_proxy.initiate_chat(
    assistant,
    message="Analyze the following data and create a visualization: [1, 5, 3, 8, 2, 7, 4]",
)
```

### Using Multiple Specialized Agents

```python
from autogen import (
    AssistantAgent,
    GroupChat,
    GroupChatManager,
    LLMConfig,
    UserProxyAgent,
)

# Configure APIpie
llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")

with llm_config:
    # Create specialized agents
    planner = AssistantAgent(
        name="planner",
        system_message="You break down complex tasks into manageable steps. Be concise and clear.",
    )
    researcher = AssistantAgent(
        name="researcher",
        system_message="You find and provide information needed to complete tasks. Focus on reliable sources.",
    )
    coder = AssistantAgent(
        name="coder",
        system_message="You write code to solve problems. Explain your code clearly.",
    )
    critic = AssistantAgent(
        name="critic",
        system_message="You review solutions and suggest improvements. Be constructive.",
    )

# User proxy for human input and code execution
user_proxy = UserProxyAgent(
    name="user_proxy",
    code_execution_config={"work_dir": "coding", "use_docker": False},
    is_termination_msg=lambda msg: "TASK COMPLETE" in msg.get("content", ""),
)

# Create a group chat
groupchat = GroupChat(
    agents=[user_proxy, planner, researcher, coder, critic],
    messages=[],
    max_round=15,
)

# Create a manager to orchestrate the conversation
manager = GroupChatManager(
    groupchat=groupchat,
    llm_config=llm_config,
)

# Start the conversation
user_proxy.initiate_chat(
    manager,
    message="Create a Python script that pulls weather data for New York City and displays it as a chart.",
)
```
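
The `is_termination_msg` argument shown above is just a plain predicate over the incoming message dict, so it can be sanity-checked on its own before being wired into an agent. A quick standalone sketch of that pattern:

```python
# The same termination predicate used in the UserProxyAgent above
def is_termination_msg(msg: dict) -> bool:
    """Return True when the message content contains the sentinel phrase."""
    return "TASK COMPLETE" in msg.get("content", "")

print(is_termination_msg({"content": "All steps done. TASK COMPLETE"}))  # True
print(is_termination_msg({"content": "Still iterating on step 2"}))      # False
print(is_termination_msg({}))  # False: a missing content field never terminates
```

Using `msg.get("content", "")` rather than `msg["content"]` matters here, since tool-call messages may arrive without a content field.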

### Implementing Tool Usage

```python
from typing import Any, Dict, List

from autogen import AssistantAgent, LLMConfig, UserProxyAgent, register_function

# Configure APIpie
llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")

# Define a custom tool
def search_products(query: str, max_results: int = 5) -> List[Dict[str, Any]]:
    """
    Search for products matching the query.

    Args:
        query: The search query string.
        max_results: Maximum number of results to return.

    Returns:
        A list of product dictionaries with name, price, and rating.
    """
    # In a real application, this would call your API.
    # This is just a mock example.
    mock_data = [
        {"name": "Smartphone X", "price": 899.99, "rating": 4.5},
        {"name": "Wireless Earbuds", "price": 149.99, "rating": 4.3},
        {"name": "Laptop Pro", "price": 1299.99, "rating": 4.7},
        {"name": "Smart Watch", "price": 249.99, "rating": 4.2},
        {"name": "Bluetooth Speaker", "price": 79.99, "rating": 4.4},
    ]
    filtered = [p for p in mock_data if query.lower() in p["name"].lower()]
    return filtered[:max_results]

# Create an assistant with tool use capability
with llm_config:
    assistant = AssistantAgent(
        name="shopping_assistant",
        system_message="You help users find products. Use the search_products tool when needed.",
    )

# Create a user proxy
user_proxy = UserProxyAgent(
    name="user",
    human_input_mode="TERMINATE",
    code_execution_config={"work_dir": "shopping", "use_docker": False},
)

# Register the custom tool
register_function(
    search_products,
    caller=assistant,
    executor=user_proxy,
    description="Search for products matching a query string",
)

# Start the conversation
user_proxy.initiate_chat(
    assistant,
    message="I'm looking for wireless audio devices. Can you help me find some options?",
)
```

## Troubleshooting & FAQ

- **Which models are supported?**
  Any model available via APIpie's OpenAI-compatible endpoint.
- **How do I handle environment variables securely?**
  Store your API key in environment variables or in a configuration file that's added to your .gitignore. Never commit API keys to repositories.
- **Can I mix different LLMs within the same agent system?**
  Yes, you can create different LLMConfig objects for different agents, allowing them to use different models based on their specific needs.
- **What if I need to handle requests with large context windows?**
  Choose models on APIpie that support larger context windows, such as gpt-4-turbo or claude-3-opus.
- **How can I optimize costs when using AG2 with APIpie?**
  Use smaller, cheaper models for simple tasks and reserve more powerful models for complex reasoning. APIpie's routing capabilities can help optimize this automatically.
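
As a sketch of the mixed-model answer above (the model names and the `make_config` helper are illustrative, not part of AG2), you can build a separate config entry per role and back each agent with its own LLMConfig:

```python
# Illustrative per-role config entries for APIpie; each dict can back
# its own LLMConfig so different agents use different models.
APIPIE_BASE_URL = "https://apipie.ai/v1"
API_KEY = "your-apipie-api-key"

def make_config(model: str) -> dict:
    """Build an OpenAI-style config entry pointing at APIpie."""
    return {"model": model, "api_key": API_KEY, "base_url": APIPIE_BASE_URL}

# A cheaper model for routine coordination, a stronger one for review
planner_config = make_config("gpt-4o-mini")
critic_config = make_config("gpt-4o")
```

This keeps the routing decision explicit in your own code: the planner stays cheap, while only the critic pays for the larger model.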
For more information, see the AG2 documentation or the GitHub repository.

## Support

If you encounter any issues during the integration process, please reach out on APIpie Discord for assistance.