LangGraph Integration Guide

This guide will walk you through integrating LangGraph with APIpie, enabling you to build sophisticated, stateful, multi-actor applications that leverage a wide range of language models.
What is LangGraph?
LangGraph is a powerful orchestration framework for building controllable agents and multi-agent workflows. It provides:
- Stateful Workflows: Create complex, multi-step agent workflows with persistent state
- Multi-Agent Systems: Design systems where multiple agents collaborate to solve tasks
- Human-in-the-Loop: Incorporate human feedback and approval in your agent workflows
- Streaming Support: First-class token-by-token streaming for real-time visibility
- Graph-Based Design: Model agent behaviors as computational graphs for flexibility
- Low-Level Control: Build custom agents without rigid abstractions
- Persistence: Maintain state across execution for reliable long-running workflows
By connecting LangGraph with APIpie, you gain access to a wide range of powerful language models for your agent applications while leveraging LangGraph's sophisticated orchestration capabilities.
Integration Steps
1. Create an APIpie Account
- Register here: APIpie Registration
- Complete the sign-up process.
2. Add Credit
- Add Credit: APIpie Subscription
- Add credits to your account to enable API access.
3. Generate an API Key
- API Key Management: APIpie API Keys
- Create a new API key for use with LangGraph.
4. Install LangGraph and Required Packages
pip install langgraph langchain langchain-openai
You may need additional packages depending on your specific use case:
# For retrieval-augmented agents
pip install langchain-community
# For human feedback
pip install panel
# For tracing and debugging
pip install langsmith
5. Configure LangGraph for APIpie
LangGraph integrates with LangChain's LLM providers, which can be configured to use APIpie:
import os
from langchain_openai import ChatOpenAI
# Configure the LLM to use APIpie
os.environ["OPENAI_API_KEY"] = "your-apipie-api-key"
os.environ["OPENAI_API_BASE"] = "https://apipie.ai/v1"
# Create an instance of the LLM
llm = ChatOpenAI(
    model="gpt-4o-mini",  # Choose any model available on APIpie
    temperature=0.7
)
Key Features
- Graph-Based Orchestration: Build complex agent workflows as computational graphs
- State Management: Maintain and update agent state across interactions
- Multi-Agent Collaboration: Create teams of agents with different roles and capabilities
- Tool Integration: Equip agents with tools to perform actions in the world
- Human Feedback: Incorporate human input and oversight at critical decision points
- First-Class Streaming: Stream agent reasoning and actions in real-time
- Checkpointing: Persist agent state for reliable long-running tasks
Example Workflows
| Application Type | What LangGraph Helps You Build |
| --- | --- |
| Conversational Agents | Multi-turn conversational agents with memory and reasoning |
| ReAct Agents | Agents that reason and act to solve complex tasks |
| Research & Analysis | Multi-agent systems that collaborate on research problems |
| Business Process Automation | Enterprise workflows with human approvals and oversight |
| Supervised Agents | Agent systems with human-in-the-loop supervision |
Using LangGraph with APIpie
Basic ReAct Agent
import os
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
# Configure APIpie credentials
os.environ["OPENAI_API_KEY"] = "your-apipie-api-key"
os.environ["OPENAI_API_BASE"] = "https://apipie.ai/v1"
# Create a tool that the agent can use
def search(query: str) -> str:
    """Search for information on the internet."""
    # Mock implementation - in a real app, you would call a search API
    if "weather" in query.lower():
        return "It's currently 72°F and sunny in San Francisco."
    elif "population" in query.lower():
        return "The population of the United States is approximately 332 million."
    else:
        return "No relevant information found."
# Create a ReAct agent with APIpie as the LLM provider
llm = ChatOpenAI(model="gpt-4o", temperature=0)
agent = create_react_agent(llm, tools=[search])
# Use the agent to answer a question
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in San Francisco?"}]}
)
# The agent returns LangChain message objects, so read .content as an attribute
print(result["messages"][-1].content)
Custom Multi-Step Agent Workflow
import os
from typing import TypedDict, Annotated, List, Dict, Any
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, AIMessage
from langchain_core.prompts import ChatPromptTemplate
from langgraph.graph import StateGraph, END
# Configure APIpie credentials
os.environ["OPENAI_API_KEY"] = "your-apipie-api-key"
os.environ["OPENAI_API_BASE"] = "https://apipie.ai/v1"
# Define the state for our workflow
class AgentState(TypedDict):
    messages: List[Dict[str, Any]]
    summary: str
# Set up the LLM with APIpie
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)
# Define the nodes in our graph
# 1. Agent that processes user questions
def process_query(state: AgentState) -> AgentState:
    """Process the user query and generate a response."""
    # Get the last message
    last_message = state["messages"][-1]
    # Only process user messages
    if last_message["role"] != "user":
        return state
    # Create a prompt
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant. Answer user questions accurately."),
        ("human", "{input}")
    ])
    # Get response from LLM (format_messages preserves the system/human roles)
    response = llm.invoke(prompt.format_messages(input=last_message["content"]))
    # Add the AI message to the state
    new_messages = state["messages"].copy()
    new_messages.append({"role": "assistant", "content": response.content})
    return {"messages": new_messages, "summary": state["summary"]}
# 2. Agent that creates a summary of the conversation
def summarize_conversation(state: AgentState) -> AgentState:
    """Create a summary of the conversation so far."""
    # Extract the conversation
    conversation = "\n".join([f"{m['role']}: {m['content']}" for m in state["messages"]])
    # Create a prompt for summarization
    prompt = ChatPromptTemplate.from_messages([
        ("system", "Summarize the following conversation in a concise paragraph."),
        ("human", "{conversation}")
    ])
    # Get summary from LLM
    summary = llm.invoke(prompt.format_messages(conversation=conversation))
    return {"messages": state["messages"], "summary": summary.content}
# 3. Routing function to decide when to summarize or end
def should_summarize(state: AgentState) -> str:
    """Decide whether to summarize the conversation or continue."""
    # Count message pairs (user + assistant)
    message_pairs = len([m for m in state["messages"] if m["role"] == "assistant"])
    # Summarize after every 3 exchanges
    if message_pairs > 0 and message_pairs % 3 == 0:
        return "summarize"
    else:
        return "continue"
# Create the workflow graph
workflow = StateGraph(AgentState)
# Add nodes
workflow.add_node("process", process_query)
workflow.add_node("summarize", summarize_conversation)
# Set up the edges: after "process", route based on should_summarize
workflow.add_conditional_edges(
    "process",
    should_summarize,
    {
        "summarize": "summarize",
        "continue": END
    }
)
workflow.add_edge("summarize", END)
# Set the entry point
workflow.set_entry_point("process")
# Compile the graph
agent_workflow = workflow.compile()
# Initialize the state
initial_state = {
    "messages": [{"role": "user", "content": "Tell me about artificial intelligence."}],
    "summary": ""
}
# Run the workflow
result = agent_workflow.invoke(initial_state)
print("Final Messages:")
for message in result["messages"]:
    print(f"{message['role']}: {message['content']}\n")
print("Conversation Summary:")
print(result["summary"])
Multi-Agent Team with Human Approval
import os
from typing import TypedDict, List, Dict, Any, Literal, Annotated
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, AIMessage
from langchain_core.prompts import ChatPromptTemplate
from langgraph.graph import StateGraph, END
# Configure APIpie credentials
os.environ["OPENAI_API_KEY"] = "your-apipie-api-key"
os.environ["OPENAI_API_BASE"] = "https://apipie.ai/v1"
# Define our state
class TeamState(TypedDict):
    messages: List[Dict[str, Any]]
    sender: str
    next: str
    task: str
    solution: str
    approval: bool
# Define our tools
def search_web(query: str) -> str:
    """Search the web for information."""
    # Mock implementation
    return f"Results for '{query}': Found relevant information about {query}."
def calculate(expression: str) -> str:
    """Calculate a mathematical expression."""
    # Note: eval is convenient for a demo but unsafe on untrusted input
    try:
        return str(eval(expression))
    except Exception:
        return "Error evaluating expression."
# Collect the tools (available to bind to agents that need them)
tools = [search_web, calculate]
# Create our team of agents using APIpie
model = ChatOpenAI(model="gpt-4o", temperature=0.7)
# Define the system prompts for different roles
researcher_prompt = ChatPromptTemplate.from_messages([
    ("system", """You are a research specialist. Your job is to gather information to help solve problems.
Use the search_web tool to find information related to the task. Be thorough and provide detailed results."""),
    ("human", "{task}")
])
analyst_prompt = ChatPromptTemplate.from_messages([
    ("system", """You are an analyst. Your job is to analyze information and perform calculations when needed.
Use the calculate tool for mathematical operations. Provide detailed analysis based on the information provided."""),
    ("human", "{task}\n\nResearch information: {research}")
])
solution_architect_prompt = ChatPromptTemplate.from_messages([
    ("system", """You are a solution architect. Your job is to create a final solution based on research and analysis.
Create a comprehensive and clear solution that addresses the original task. Be creative but practical."""),
    ("human", "{task}\n\nResearch information: {research}\n\nAnalysis: {analysis}")
])
# Define the agent functions
def researcher(state: TeamState) -> TeamState:
    """Research agent that gathers information."""
    if state["next"] != "researcher":
        return state
    # Get task information
    task = state["task"]
    # Research using the model
    research_response = model.invoke(researcher_prompt.format_messages(task=task))
    research_message = {"role": "assistant", "name": "researcher", "content": research_response.content}
    # Update state
    messages = state["messages"].copy()
    messages.append(research_message)
    return {
        "messages": messages,
        "sender": "researcher",
        "next": "analyst",
        "task": state["task"],
        "solution": state["solution"],
        "approval": state["approval"]
    }
def analyst(state: TeamState) -> TeamState:
    """Analyst agent that analyzes information."""
    if state["next"] != "analyst":
        return state
    # Get research information from the researcher
    research_messages = [m for m in state["messages"] if m.get("name") == "researcher"]
    research = research_messages[-1]["content"] if research_messages else ""
    # Analyze using the model
    analysis_response = model.invoke(analyst_prompt.format_messages(task=state["task"], research=research))
    analysis_message = {"role": "assistant", "name": "analyst", "content": analysis_response.content}
    # Update state
    messages = state["messages"].copy()
    messages.append(analysis_message)
    return {
        "messages": messages,
        "sender": "analyst",
        "next": "solution_architect",
        "task": state["task"],
        "solution": state["solution"],
        "approval": state["approval"]
    }
def solution_architect(state: TeamState) -> TeamState:
    """Solution architect agent that creates the final solution."""
    if state["next"] != "solution_architect":
        return state
    # Get research and analysis information
    research_messages = [m for m in state["messages"] if m.get("name") == "researcher"]
    analysis_messages = [m for m in state["messages"] if m.get("name") == "analyst"]
    research = research_messages[-1]["content"] if research_messages else ""
    analysis = analysis_messages[-1]["content"] if analysis_messages else ""
    # Create solution using the model
    solution_response = model.invoke(solution_architect_prompt.format_messages(
        task=state["task"],
        research=research,
        analysis=analysis
    ))
    solution_message = {"role": "assistant", "name": "solution_architect", "content": solution_response.content}
    # Update state
    messages = state["messages"].copy()
    messages.append(solution_message)
    return {
        "messages": messages,
        "sender": "solution_architect",
        "next": "human_approval",
        "task": state["task"],
        "solution": solution_response.content,
        "approval": state["approval"]
    }
def human_approval(state: TeamState) -> TeamState:
    """Simulate human approval (in a real app, this would wait for human input)."""
    if state["next"] != "human_approval":
        return state
    # Simulate approval (in a real application, this would be a UI for human input)
    # Here we're just automatically approving
    approval_message = {"role": "human", "name": "supervisor", "content": "Solution approved."}
    # Update state
    messages = state["messages"].copy()
    messages.append(approval_message)
    return {
        "messages": messages,
        "sender": "human",
        "next": "end",
        "task": state["task"],
        "solution": state["solution"],
        "approval": True
    }
# Create the graph
team_graph = StateGraph(TeamState)
# Add nodes
team_graph.add_node("researcher", researcher)
team_graph.add_node("analyst", analyst)
team_graph.add_node("solution_architect", solution_architect)
team_graph.add_node("human_approval", human_approval)
# Add edges
team_graph.add_edge("researcher", "analyst")
team_graph.add_edge("analyst", "solution_architect")
team_graph.add_edge("solution_architect", "human_approval")
team_graph.add_edge("human_approval", END)
# Set entry point
team_graph.set_entry_point("researcher")
# Compile the graph
team_workflow = team_graph.compile()
# Initialize state
initial_state = {
    "messages": [],
    "sender": "user",
    "next": "researcher",
    "task": "Research the impact of artificial intelligence on healthcare and propose three innovative applications.",
    "solution": "",
    "approval": False
}
# Run the workflow
result = team_workflow.invoke(initial_state)
# Display the results
print("Task:", result["task"])
print("\nFinal Solution:", result["solution"])
print("\nApproval Status:", "Approved" if result["approval"] else "Pending")
Troubleshooting & FAQ
- How do I debug my LangGraph workflows?
  LangGraph integrates with LangSmith for tracing and debugging. You can also add print statements at each node to see the state evolution.
- Can I use streaming with APIpie and LangGraph?
  Yes, streaming is fully supported. Use the streaming=True parameter when creating your LLM instances.
- How do I handle environment variables securely?
  Store your API keys in environment variables and never expose them in your code or repositories.
- Can I persist agent state between sessions?
  Yes, LangGraph provides checkpointing functionality that allows you to persist and restore state across sessions.
- How do I integrate human feedback?
  LangGraph supports human-in-the-loop workflows through dedicated human intervention nodes and tools like Panel for UI elements.
- What's the difference between LangGraph and other agent frameworks?
  LangGraph provides low-level orchestration primitives with graph-based control flow, making it more flexible for complex agent systems than rigid frameworks.
For more information, see the LangGraph documentation, GitHub repository, or LangChain Academy.
Support
If you encounter any issues during the integration process, please reach out on APIpie Discord for assistance.