LangGraph Node-Level Streamer
Node-level streaming for LangGraph graphs
A Python package that enables node-level streaming in LangGraph graphs. It bypasses LangGraph's event system and yields chunks directly from LLM nodes, giving lower latency than graph-level streaming.
Features
- ✅ Direct chunk streaming - Bypasses LangGraph's event system for immediate access to LLM tokens
- ✅ Tool call support - Automatically handles and reconstructs tool calls from streaming chunks
- ✅ Lower latency - Chunks available immediately without event wrapping overhead
- ✅ Memory efficient - No event metadata objects, reduced memory footprint
- ✅ Production-ready - Battle-tested in real-world applications
Installation
From PyPI (Recommended)
pip install langgraph-node-streamer
From Source
git clone <repository-url>
cd langgraph-node-streamer
pip install -e .
Quick Start
Basic Usage
from langgraph_node_streamer import create_streaming_node
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
from langgraph.graph.message import AnyMessage, add_messages
# Define your state
class GraphState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

# Create your LLM
llm = ChatOpenAI(model="gpt-4", streaming=True, temperature=0)

# Create the streaming node
streaming_node = create_streaming_node(llm)

# Build your graph
graph = StateGraph(GraphState)
graph.add_node("llm", streaming_node)
graph.set_entry_point("llm")  # Entry point is required before compiling
graph.add_edge("llm", END)
app = graph.compile()

# Use the graph with streaming
async def stream_response():
    config = {"configurable": {"thread_id": "1"}}
    inputs = {"messages": [("user", "Hello, how are you?")]}
    async for chunk in app.astream(inputs, config):
        # Process chunks as they arrive
        if "llm" in chunk:
            for message in chunk["llm"]:
                if hasattr(message, 'content') and message.content:
                    print(message.content, end="", flush=True)
Using the Class-Based Interface
from langgraph_node_streamer import StreamingNode
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4", streaming=True)
streaming_node = StreamingNode(llm)
# Use in graph
graph.add_node("llm", streaming_node)
With Tool Calls (Automatic Handling)
This package automatically handles tool calls from streaming chunks. Tool call information may be split across multiple chunks, and this package reconstructs them properly:
from langchain_core.messages import AIMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import ToolNode

@tool
def get_weather(city: str) -> str:
    """Get the weather for a city."""
    return f"Weather in {city}: Sunny, 72°F"

tools = [get_weather]
tool_names = [t.name for t in tools]

# Create LLM with tools
llm = ChatOpenAI(model="gpt-4", streaming=True).bind_tools(tools)

# Create streaming node with tool validation
# Tool calls are automatically reconstructed from chunks
streaming_node = create_streaming_node(llm, tool_names=tool_names)

graph = StateGraph(GraphState)
graph.add_node("llm", streaming_node)
graph.add_node("tools", ToolNode(tools))

# Add routing for tool calls
def route_tool(state):
    messages = state["messages"]
    last_message = messages[-1]
    if isinstance(last_message, AIMessage) and last_message.tool_calls:
        return "tools"
    return END

graph.add_conditional_edges("llm", route_tool, ["tools", END])
graph.add_edge("tools", "llm")
graph.set_entry_point("llm")
app = graph.compile()
Tool Call Features:
- ✅ Automatically collects tool call chunks across multiple streamed chunks
- ✅ Reconstructs complete tool call arguments (may be split across chunks)
- ✅ Validates tool names if the tool_names parameter is provided
- ✅ Creates proper AIMessage objects with tool calls for graph state
Complete Example with Custom State Key
from langgraph_node_streamer import create_streaming_node
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
from langgraph.graph.message import AnyMessage, add_messages
class CustomState(TypedDict):
    conversation: Annotated[list[AnyMessage], add_messages]
    metadata: dict

llm = ChatOpenAI(model="gpt-4", streaming=True)

# Use custom state key
streaming_node = create_streaming_node(
    llm,
    state_key="conversation"  # Use "conversation" instead of "messages"
)

graph = StateGraph(CustomState)
graph.add_node("llm", streaming_node)
graph.set_entry_point("llm")  # Entry point is required before compiling
graph.add_edge("llm", END)
app = graph.compile()
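Consuming the stream is unchanged: streamed chunks are still grouped under the node name, only the graph inputs use the custom key. A minimal sketch (hypothetical prompt, reusing the app built above):

# Inputs use the custom "conversation" key; streamed chunks are still
# keyed by the node name ("llm").
async def stream_custom():
    inputs = {"conversation": [("user", "Hello!")], "metadata": {}}
    async for chunk in app.astream(inputs):
        if "llm" in chunk:
            for message in chunk["llm"]:
                if hasattr(message, "content") and message.content:
                    print(message.content, end="", flush=True)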
How It Works
Standard LangGraph Approach
In standard LangGraph, nodes typically use ainvoke() to get complete responses:
# Standard LangGraph node
async def llm_node(state, config):
    # Waits for COMPLETE response
    response = await llm.ainvoke(state["messages"], config)
    return {"messages": [response]}
To stream, you use astream_events() which captures chunks via LangGraph's callback system:
# Standard streaming approach
async for event in graph.astream_events(inputs, config, version="v1"):
    if event["event"] == "on_chat_model_stream":
        chunk = event["data"]["chunk"]  # Nested in event structure
        content = chunk.content
        if content:
            yield content
How astream_events() works:
- Node calls llm.ainvoke() and waits for the complete response
- LangGraph's callback system captures chunks during ainvoke() execution
- Each chunk is wrapped in event metadata with a structure like: { "event": "on_chat_model_stream", "data": {"chunk": AIMessageChunk(...)}, "name": "llm_node", ... }
- You must filter events: if event["event"] == "on_chat_model_stream"
- Extract chunks from the nested structure: event["data"]["chunk"].content
Our Approach: Direct Node-Level Streaming
Our package uses astream() directly in the node, bypassing ainvoke() and the event system:
# Our streaming node
async def streaming_node(state, config):
    # Streams chunks IMMEDIATELY - no waiting for complete response
    async for chunk in llm.astream(state["messages"], config):
        yield chunk  # Yield immediately as chunks arrive
        # ... collect for final message construction
    yield {"messages": [final_message]}  # Final state update
How astream() works with our node:
- Node uses llm.astream() and yields chunks immediately
- Chunks are yielded directly from the node (no event system)
- LangGraph collects what the node yields into chunk["llm"]
- Direct access: chunk["llm"] contains the yielded items
# Our streaming approach
async for chunk in app.astream(inputs, config):
    if "llm" in chunk:
        for message in chunk["llm"]:
            if hasattr(message, 'content') and message.content:
                yield message.content  # Direct access - no event wrapping!
Architecture Comparison
┌──────────────────────────────────────────────────────────┐
│ GRAPH-LEVEL STREAMING (Standard LangGraph)               │
├──────────────────────────────────────────────────────────┤
│ Node: llm.ainvoke() → waits for complete response        │
│                          ↓                               │
│ Callback System → Event Wrapper → Event Queue            │
│                          ↓                               │
│ Event Filter → Extract from nested structure → Your Code │
│ (overhead)   (latency)   (CPU)   (complexity)            │
└──────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────┐
│ NODE-LEVEL STREAMING (This Package)                      │
├──────────────────────────────────────────────────────────┤
│ Node: llm.astream() → yields chunks immediately          │
│                          ↓                               │
│ Direct Yield → Your Code                                 │
│ (minimal)      (immediate)                               │
└──────────────────────────────────────────────────────────┘
Why This is Faster
1. No Waiting for Complete Response
Standard Approach:
- Node waits for the complete LLM response via ainvoke()
- Chunks are captured via callbacks during execution
- Events are processed after the response completes
Our Approach:
- Node streams chunks immediately via astream()
- Chunks are available as soon as they're generated
- No waiting for the complete response
2. No Event Wrapping Overhead
Standard Approach:
- Each chunk creates an event dictionary with metadata
- Event system processes and categorizes events
- You filter events: if event["event"] == "on_chat_model_stream"
- Extract from the nested structure: event["data"]["chunk"].content
Our Approach:
- Chunks are yielded directly
- No event metadata
- No filtering needed
- Direct access: chunk["llm"]
3. Reduced Processing Overhead
Standard Flow:
LLM → Callback System → Event Wrapper → Event Queue →
Event Filter → Nested Extraction → Your Code
Our Flow:
LLM → Direct Yield → Your Code
Performance Benefits
| Aspect | Graph-Level Streaming | Node-Level Streaming | Benefit |
|---|---|---|---|
| First Chunk Latency | Waits for ainvoke() + event processing | Immediate from astream() | Lower latency |
| Processing Overhead | Event wrapping + filtering + nested access | Direct access | Less overhead |
| Memory per Chunk | Event object + chunk | Chunk only | Less memory |
| Code Complexity | Filter events, extract from nested structure | Direct access | Simpler |
| CPU Usage | Event processing overhead | Minimal | Lower |
Note: Actual performance improvements depend on your specific use case, LLM provider, network latency, and system configuration.
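If you want to verify this in your own environment, you can time how long the first content chunk takes to arrive; a minimal sketch reusing the app, inputs, and config from the Quick Start (not part of this package):

import time

async def time_first_chunk():
    # Measure seconds until the first non-empty content chunk arrives
    start = time.perf_counter()
    async for chunk in app.astream(inputs, config):
        for message in chunk.get("llm", []):
            if getattr(message, "content", None):
                return time.perf_counter() - start
    return None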
Tool Call Handling
This package automatically handles tool calls that may be split across multiple streaming chunks:
How Tool Calls Work
When an LLM makes a tool call, the information may arrive in multiple chunks:
- Chunk 1: {"tool_calls": [{"function": {"name": "get_weather"
- Chunk 2: , "arguments": "{\"city\": 
- Chunk 3: \"San Francisco\"}"}}]}
This package:
- Collects all tool call chunks as they arrive
- Reconstructs complete tool call arguments by concatenating JSON fragments
- Validates tool names if the tool_names parameter is provided
- Creates proper AIMessage objects with tool calls for graph state
Example
from langgraph_node_streamer import create_streaming_node
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4", streaming=True).bind_tools(tools)
tool_names = [t.name for t in tools]
# Tool calls are automatically handled
streaming_node = create_streaming_node(llm, tool_names=tool_names)
# The node will:
# 1. Stream content chunks immediately
# 2. Collect tool call chunks in the background
# 3. Reconstruct complete tool calls
# 4. Yield proper AIMessage with tool_calls for graph state
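For intuition, LangChain's AIMessageChunk objects can be summed to merge streamed content and tool_call_chunks into a complete message; this package's internals may differ, but the idea is roughly:

from typing import Optional
from langchain_core.messages import AIMessageChunk

async def accumulate(llm, messages, config=None) -> Optional[AIMessageChunk]:
    # Summing AIMessageChunk objects merges content and tool_call_chunks,
    # so the final chunk exposes fully reconstructed tool_calls.
    final = None
    async for chunk in llm.astream(messages, config):
        final = chunk if final is None else final + chunk
    return final  # final.tool_calls contains the complete tool calls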
Advanced Usage
Error Handling
async def safe_stream():
    try:
        async for chunk in app.astream(inputs, config):
            if "llm" in chunk:
                for message in chunk["llm"]:
                    if hasattr(message, 'content') and message.content:
                        yield message.content
    except Exception as e:
        print(f"Streaming error: {e}")
        # Handle error appropriately
Integration with Existing Workflows
from langgraph.prebuilt import ToolNode
from langgraph_node_streamer import create_streaming_node
from langchain_core.messages import AIMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END
# Create your graph with streaming node
graph = StateGraph(GraphState)
# Add streaming LLM node with tool support
llm = ChatOpenAI(model="gpt-4", streaming=True).bind_tools(tools)
streaming_node = create_streaming_node(
llm,
tool_names=[t.name for t in tools] # Tool calls automatically handled
)
graph.add_node("llm", streaming_node)
# Add tool node
tool_node = ToolNode(tools)
graph.add_node("tools", tool_node)
# Add conditional routing
def route_tool(state):
    messages = state["messages"]
    last_message = messages[-1]
    if isinstance(last_message, AIMessage) and last_message.tool_calls:
        return "tools"
    return END
graph.add_conditional_edges("llm", route_tool, ["tools", END])
graph.add_edge("tools", "llm")
graph.set_entry_point("llm")
app = graph.compile()
API Reference
create_streaming_node(llm_chain, tool_names=None, state_key="messages")
Creates a streaming node function for LangGraph that yields chunks directly from the LLM.
Parameters:
- llm_chain (Any): The LangChain LLM chain or runnable that supports astream(). Should accept messages and config, and return an async iterator of chunks.
- tool_names (Optional[List[str]]): List of tool names for validation. If provided, tool calls will only be processed if the tool name is in this list. Tool calls are automatically reconstructed from streaming chunks.
- state_key (str): Key in the state dictionary containing messages (default: "messages")
Returns:
- Async generator function compatible with LangGraph nodes. The function yields:
- Individual chunks as they arrive from the LLM
- Final state updates with complete AIMessage objects (including tool calls if present)
Features:
- Streams chunks immediately as they arrive
- Automatically handles tool calls split across multiple chunks
- Reconstructs complete tool call arguments
- Creates proper AIMessage objects for graph state
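Putting the parameters together (llm and get_weather as in the earlier examples):

streaming_node = create_streaming_node(
    llm,                          # any runnable that supports astream()
    tool_names=["get_weather"],   # only accept tool calls with these names
    state_key="messages",         # where messages live in graph state
)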
StreamingNode(llm_chain, tool_names=None, state_key="messages")
Class-based interface for streaming nodes.
Parameters:
- Same as create_streaming_node
Usage:
- Instantiate and use directly as a LangGraph node
Requirements
- Python >= 3.8
- langchain-core >= 0.3.0
- langgraph >= 0.2.0
- typing-extensions >= 4.0.0
Key Advantages Summary
✅ Lower latency - Chunks available immediately without waiting for complete response
✅ Less overhead - No event wrapping, filtering, or nested data access
✅ Less memory - No event metadata objects
✅ Simpler code - Direct chunk access
✅ Tool call support - Automatically handles tool calls split across chunks
✅ Production-ready - Battle-tested in real applications
FAQ
Q: Does this work with all LangGraph features?
A: Yes! This package is fully compatible with LangGraph's state management, checkpoints, tools, and conditional edges. It only changes how chunks are streamed.
Q: How does this differ from standard LangGraph streaming?
A: Standard LangGraph uses ainvoke() in nodes and astream_events() for streaming, which wraps chunks in event metadata. Our approach uses llm.astream() inside the node and the graph's astream() for consumption, bypassing the event system.
Q: What about tool calls?
A: Tool calls are fully supported and automatically handled! The package collects tool call chunks as they arrive, reconstructs complete tool call arguments (which may be split across chunks), and creates proper AIMessage objects with tool calls for graph state.
Q: Will this break my existing code?
A: No, this is a drop-in replacement. Simply replace your LLM node with create_streaming_node(llm) and update your streaming code to use astream() instead of astream_events().
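A rough before/after sketch of the consumer-side change (node name "llm" as in the examples above):

# Before: graph-level streaming via astream_events()
async for event in app.astream_events(inputs, config, version="v1"):
    if event["event"] == "on_chat_model_stream":
        content = event["data"]["chunk"].content
        if content:
            print(content, end="", flush=True)

# After: node-level streaming via astream()
async for chunk in app.astream(inputs, config):
    for message in chunk.get("llm", []):
        if getattr(message, "content", None):
            print(message.content, end="", flush=True)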
Q: Can I use both graph-level and node-level streaming?
A: Yes, you can use node-level streaming for LLM nodes and graph-level streaming (astream_events()) for debugging or monitoring other parts of your graph.
Q: Is this production-ready?
A: Yes! This package is based on production code that has been tested with high-volume streaming applications.
License
MIT License
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Support
For issues, questions, or contributions, please visit the GitHub repository.