Google Agent Development Kit (ADK) Integration Guide


This guide will walk you through integrating Google's Agent Development Kit (ADK) with APIpie, enabling you to build sophisticated AI agents with flexible tooling and deployment options.
What is Google Agent Development Kit?
Google Agent Development Kit (ADK) is an open-source, code-first toolkit for building, evaluating, and deploying AI agents. It provides:
- Rich Tool Ecosystem: Use pre-built tools, custom functions, or OpenAPI specs, or integrate existing third-party tools
- Code-First Development: Define agent logic, tools, and orchestration directly in Python
- Modular Multi-Agent Systems: Design scalable applications with specialized agents in flexible hierarchies
- Deployment Options: Deploy agents on Cloud Run or Vertex AI Agent Engine
- Model Agnosticism: Works with Gemini models by default, but also supports other LLMs
By connecting ADK to APIpie, you gain access to a wide range of powerful language models while leveraging ADK's sophisticated agent development capabilities.
Integration Steps
1. Create an APIpie Account
- Register here: APIpie Registration
- Complete the sign-up process.
2. Add Credit
- Add Credit: APIpie Subscription
- Add credits to your account to enable API access.
3. Generate an API Key
- API Key Management: APIpie API Keys
- Create a new API key for use with Google ADK.
4. Install Google Agent Development Kit
```bash
pip install google-adk
```
For additional capabilities, the package publishes optional extras (availability varies by release; check the google-adk listing on PyPI for the extras your version supports):
```bash
# For tracing and monitoring
pip install google-adk[tracing]
# For development UI
pip install google-adk[web]
# For voice support
pip install google-adk[voice]
```
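The configuration examples below read your key from an APIPIE_API_KEY environment variable. A common way to manage this locally, as the FAQ below also suggests, is a .env file loaded with python-dotenv (pip install python-dotenv); the file name and variable name here are just a convention:
```python
# .env (kept out of version control):
# APIPIE_API_KEY=your-apipie-key
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current working directory
assert os.environ.get("APIPIE_API_KEY"), "APIPIE_API_KEY is not set"
```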
5. Configure ADK for APIpie
ADK works with Google's Gemini models by default, but it is model-agnostic: through its LiteLLM integration it can talk to any OpenAI-compatible endpoint, including APIpie. Install the wrapper's dependency with pip install litellm, then point a LiteLlm model at APIpie's base URL and pass it to your agent:
```python
import os
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm

# Configure APIpie as the model provider via LiteLLM
apipie_llm = LiteLlm(
    model="openai/gpt-4o",  # use any model available on APIpie; the openai/ prefix marks the endpoint as OpenAI-compatible
    api_key=os.environ.get("APIPIE_API_KEY"),
    api_base="https://apipie.ai/v1",
)

# Create an agent that uses the APIpie-backed model
root_agent = Agent(
    name="apipie_agent",
    model=apipie_llm,
    description="Agent that uses APIpie as the LLM provider",
    instruction="You are a helpful assistant that provides accurate and concise information.",
)
```
Key Features
- Tool Integration: Easily add custom tools, functions, and APIs to your agents
- Code-First Approach: Build agents directly in Python with full software engineering practices
- Multi-Agent Architecture: Create specialized agents that work together on complex tasks
- Streaming Support: Real-time streaming for text, voice, and video interactions
- Flexible Deployment: Run agents locally, in containers, or on cloud platforms
- Provider Flexibility: Works with many LLM providers through integrations such as LiteLLM
Example Workflows
| Application Type | What Google ADK Helps You Build |
|---|---|
| Customer Support Systems | Agents that can access knowledge bases and respond to queries |
| Research Assistants | Multi-tool agents that search, summarize, and analyze data |
| Voice/Video Applications | Interactive applications using streaming for real-time responses |
| Enterprise Workflows | Complex business processes with specialized agent teams |
| Data Analysis Agents | Systems that process, visualize, and interpret structured data |
Using Google ADK with APIpie
Basic Agent with Multiple Tools
```python
import os
import datetime
from zoneinfo import ZoneInfo
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm

# Configure APIpie as the model provider
apipie_llm = LiteLlm(
    model="openai/gpt-4o",
    api_key=os.environ.get("APIPIE_API_KEY"),
    api_base="https://apipie.ai/v1",
)

def get_weather(city: str) -> dict:
    """Retrieves the current weather report for a specified city.

    Args:
        city (str): The name of the city for which to retrieve the weather report.

    Returns:
        dict: status and result or error msg.
    """
    if city.lower() == "new york":
        return {
            "status": "success",
            "report": (
                "The weather in New York is sunny with a temperature of 25 degrees"
                " Celsius (77 degrees Fahrenheit)."
            ),
        }
    else:
        return {
            "status": "error",
            "error_message": f"Weather information for '{city}' is not available.",
        }

def get_current_time(city: str) -> dict:
    """Returns the current time in a specified city.

    Args:
        city (str): The name of the city for which to retrieve the current time.

    Returns:
        dict: status and result or error msg.
    """
    if city.lower() == "new york":
        tz_identifier = "America/New_York"
    else:
        return {
            "status": "error",
            "error_message": f"Sorry, I don't have timezone information for {city}.",
        }
    tz = ZoneInfo(tz_identifier)
    now = datetime.datetime.now(tz)
    report = f'The current time in {city} is {now.strftime("%Y-%m-%d %H:%M:%S %Z%z")}'
    return {"status": "success", "report": report}

# Create an agent with multiple tools; ADK builds the tool schemas
# from the function signatures and docstrings.
root_agent = Agent(
    name="weather_time_agent",
    model=apipie_llm,
    description="Agent to answer questions about the time and weather in a city.",
    instruction="You are a helpful agent who can answer user questions about the time and weather in a city.",
    tools=[get_weather, get_current_time],
)
```
Using Built-in Google Search Tool
```python
import os
from google.adk.agents import Agent
from google.adk.tools import google_search
from google.adk.models.lite_llm import LiteLlm

# Configure APIpie as the model provider
apipie_llm = LiteLlm(
    model="openai/gpt-4o",
    api_key=os.environ.get("APIPIE_API_KEY"),
    api_base="https://apipie.ai/v1",
)

# Create a search agent.
# Note: google_search is an ADK built-in tool designed around Gemini models,
# so support may be limited when routing requests through other providers.
search_agent = Agent(
    name="search_agent",
    model=apipie_llm,
    description="Agent to answer questions using Google Search.",
    instruction="You are an expert researcher. You always stick to the facts and cite your sources.",
    tools=[google_search],
)
```
Multi-Agent System with Specialized Agents
```python
import os
from google.adk.agents import Agent
from google.adk.agents.sequential_agent import SequentialAgent
from google.adk.models.lite_llm import LiteLlm

# Configure APIpie as the model provider
apipie_llm = LiteLlm(
    model="openai/gpt-4o",
    api_key=os.environ.get("APIPIE_API_KEY"),
    api_base="https://apipie.ai/v1",
)

# Create specialized agents
research_agent = Agent(
    name="research_agent",
    model=apipie_llm,
    description="Agent to conduct research on topics.",
    instruction="You are a research specialist who finds accurate information.",
)

analysis_agent = Agent(
    name="analysis_agent",
    model=apipie_llm,
    description="Agent to analyze research findings.",
    instruction="You are an analysis expert who synthesizes information into insights.",
)

presentation_agent = Agent(
    name="presentation_agent",
    model=apipie_llm,
    description="Agent to present analysis in a clear format.",
    instruction="You are a communication expert who presents complex information clearly.",
)

# Chain the specialized agents. SequentialAgent is a workflow agent: it runs its
# sub-agents in order and does not take a model or instruction of its own.
agent_workflow = SequentialAgent(
    name="research_workflow",
    description="Research workflow that researches, analyzes, and presents information.",
    sub_agents=[research_agent, analysis_agent, presentation_agent],
)
```
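To pass results from one step to the next, each sub-agent can write its final response into session state with output_key, and later agents can reference that state in their instructions. A sketch of that pattern, reusing apipie_llm from the example above (the state key names here are illustrative):
```python
from google.adk.agents import Agent
from google.adk.agents.sequential_agent import SequentialAgent

# Assumes apipie_llm is configured as in the example above.
# Each step stores its final response in session state under output_key,
# and the next step's instruction references it with {key} templating.
research_agent = Agent(
    name="research_agent",
    model=apipie_llm,
    instruction="Research the user's topic and list the key facts you find.",
    output_key="research_findings",  # saved into session state
)

analysis_agent = Agent(
    name="analysis_agent",
    model=apipie_llm,
    instruction="Analyze these research findings and extract the main insights:\n{research_findings}",
    output_key="analysis",
)

presentation_agent = Agent(
    name="presentation_agent",
    model=apipie_llm,
    instruction="Present this analysis clearly for a non-expert reader:\n{analysis}",
)

agent_workflow = SequentialAgent(
    name="research_workflow",
    sub_agents=[research_agent, analysis_agent, presentation_agent],
)
```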
Streaming Support for Voice and Video
```python
import os
from google.adk.agents import Agent, LiveRequestQueue
from google.adk.runners import Runner
from google.adk.agents.run_config import RunConfig
from google.adk.sessions.in_memory_session_service import InMemorySessionService
from google.adk.models.lite_llm import LiteLlm

# Configure APIpie as the model provider
apipie_llm = LiteLlm(
    model="openai/gpt-4o",  # use a model that supports streaming
    api_key=os.environ.get("APIPIE_API_KEY"),
    api_base="https://apipie.ai/v1",
)

# Create a streaming-capable agent.
# Note: low-latency bidirectional voice/video streaming is built around Gemini Live
# models; with other providers, expect text streaming rather than full voice/video.
streaming_agent = Agent(
    name="streaming_agent",
    model=apipie_llm,
    description="Agent that supports real-time streaming for voice and video.",
    instruction="You respond naturally and helpfully to voice and video inputs.",
)

# Set up session and runner for streaming
# (exact session/runner signatures vary across ADK releases; in recent versions
# create_session is async and must be awaited)
session_service = InMemorySessionService()
session = session_service.create_session(
    app_name="Streaming Demo",
    user_id="user_123",
    session_id="session_456",
)

# Create a runner
runner = Runner(
    app_name="Streaming Demo",
    agent=streaming_agent,
    session_service=session_service,
)

# Configure for text and audio responses
run_config = RunConfig(response_modalities=["TEXT", "AUDIO"])

# Create a live request queue for two-way communication
live_request_queue = LiveRequestQueue()

# Start a streaming session
live_events = runner.run_live(
    session=session,
    live_request_queue=live_request_queue,
    run_config=run_config,
)

# In an async context, you would process live_events and live_request_queue
# to handle the streaming communication (see the sketch below).
```
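One way to drive that loop, sketched under the assumption of a recent ADK version (LiveRequestQueue.send_content and the event fields shown here may differ slightly across releases):
```python
import asyncio
from google.genai import types

async def drive_stream() -> None:
    # Send one user turn into the live queue.
    live_request_queue.send_content(
        types.Content(role="user", parts=[types.Part(text="Hello! Can you hear me?")])
    )
    # Consume streamed events as the agent responds.
    async for event in live_events:
        if event.content and event.content.parts:
            for part in event.content.parts:
                if part.text:
                    print(part.text, end="", flush=True)
        if getattr(event, "turn_complete", False):
            break
    live_request_queue.close()

asyncio.run(drive_stream())
```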
Running Your Agents
ADK provides multiple ways to interact with your agents.
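The commands below discover your agent from its package layout. In the ADK quickstart convention, the agent lives in a package that exposes a root_agent; the directory and file names here follow that convention, so adjust them to your project:
```
path/to/your/project/
    your_agent_module/
        __init__.py    # from . import agent
        agent.py       # defines root_agent (e.g., the APIpie-backed agent above)
        .env           # APIPIE_API_KEY=...
```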
Using the Development UI
```bash
# Navigate to your agent's parent directory
cd path/to/your/project
# Launch the dev UI
adk web
```
This will start a web interface where you can interact with your agent.
Using the Terminal
```bash
# Navigate to your agent's parent directory
cd path/to/your/project
# Run the agent in terminal mode
adk run your_agent_module
```
As an API Server
```bash
# Navigate to your agent's parent directory
cd path/to/your/project
# Start the API server
adk api_server your_agent_module
```
This will start an API server that you can integrate with web applications or other services.
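You can exercise the API server over HTTP. The snippet below is a sketch against the /run endpoint exposed by recent ADK versions; endpoint paths and payload fields can change between releases, so verify against the ADK API server docs for your version. your_agent_module and the user/session IDs are placeholders:
```python
import requests

BASE = "http://localhost:8000"
APP, USER, SESSION = "your_agent_module", "u_123", "s_123"

# Create (or reuse) a session for this app/user.
requests.post(f"{BASE}/apps/{APP}/users/{USER}/sessions/{SESSION}", json={})

# Send a message to the agent and print the returned events.
resp = requests.post(
    f"{BASE}/run",
    json={
        "app_name": APP,
        "user_id": USER,
        "session_id": SESSION,
        "new_message": {"role": "user", "parts": [{"text": "What's the weather in New York?"}]},
    },
)
print(resp.json())
```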
Troubleshooting & FAQ
- Can I use APIpie models with Google ADK? Yes. ADK is designed to be model-agnostic; its LiteLLM integration can connect to any OpenAI-compatible API, including APIpie.
- How do I handle environment variables securely? Store your API keys in a .env file and load them with python-dotenv. Never commit API keys to your repositories.
- What if I need to use multiple models in the same agent system? You can create different LiteLlm instances and assign them to different agents based on each agent's requirements (see the sketch after this list).
- How do I debug agent behavior? Use the adk web development UI to inspect agent interactions, tool calls, and responses. Enable tracing for more detailed insights.
- Can I deploy my agent to a production environment? Yes. ADK supports containerization for deployment on platforms like Cloud Run or Vertex AI Agent Engine.
- Are there any limitations when using non-Google LLMs? Some ADK features (such as built-in tools and live voice/video streaming) are optimized for Gemini models, but core functionality works with any supported LLM. Check the compatibility notes for specific features.
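As a concrete illustration of the multi-model answer above, you can wrap two different APIpie models and assign them to different agents. The model names below are placeholders; substitute whatever is available on APIpie:
```python
import os
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm

APIPIE_BASE = "https://apipie.ai/v1"
API_KEY = os.environ.get("APIPIE_API_KEY")

# A lighter model for routine lookups and a stronger one for synthesis (illustrative split).
fast_llm = LiteLlm(model="openai/gpt-4o-mini", api_key=API_KEY, api_base=APIPIE_BASE)
strong_llm = LiteLlm(model="openai/gpt-4o", api_key=API_KEY, api_base=APIPIE_BASE)

lookup_agent = Agent(
    name="lookup_agent",
    model=fast_llm,
    instruction="Answer quick factual lookups concisely.",
)
writer_agent = Agent(
    name="writer_agent",
    model=strong_llm,
    instruction="Write polished, well-structured long-form answers.",
)
```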
For more information, see the Google ADK documentation or the GitHub repository.
Support
If you encounter any issues during the integration process, please reach out on APIpie Discord for assistance.