Streaming
The SDK provides comprehensive streaming support for real-time response handling.
Basic Streaming
Platform Mode
```typescript
const stream = aw.streamComplete({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a story' }]
});

for await (const chunk of stream) {
  if (chunk.type === 'text_delta') {
    process.stdout.write(chunk.text);
  }
}
```
Direct Provider Mode
```typescript
import { createProvider } from '@agentic-work/sdk/providers';

const provider = createProvider({
  type: 'openai',
  apiKey: process.env.OPENAI_API_KEY
});

const stream = provider.stream({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Explain quantum computing' }]
});

for await (const chunk of stream) {
  // Handle different chunk types
}
```
Stream Chunk Types
Text Delta
Text content from the model:
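A minimal sketch of accumulating text deltas. The `type` and `text` fields match the basic example above; everything else here is illustrative:

```typescript
// Shape used in the basic example: { type: 'text_delta', text: string }

function collectText(chunks: Array<{ type: string; text?: string }>): string {
  let out = '';
  for (const chunk of chunks) {
    if (chunk.type === 'text_delta' && chunk.text) {
      out += chunk.text; // deltas are partial; append them in order
    }
  }
  return out;
}

// Three deltas reassemble into the full response text:
const text = collectText([
  { type: 'text_delta', text: 'Hello' },
  { type: 'text_delta', text: ', ' },
  { type: 'text_delta', text: 'world' },
]);
// text === 'Hello, world'
```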
Tool Call Delta
Partial tool call information (streamed incrementally):
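A sketch of accumulating partial tool calls. The field names (`index`, `name`, `argumentsDelta`) are assumptions, not the SDK's documented shape; check the exported types:

```typescript
// Assumed shape (illustrative):
// { type: 'tool_call_delta', index: number, name?: string, argumentsDelta?: string }
interface ToolCallDelta {
  type: 'tool_call_delta';
  index: number;
  name?: string;
  argumentsDelta?: string;
}

// Group deltas by index; the JSON argument string arrives in pieces.
function accumulateToolCalls(deltas: ToolCallDelta[]) {
  const calls: Record<number, { name: string; args: string }> = {};
  for (const d of deltas) {
    const call = (calls[d.index] ??= { name: '', args: '' });
    if (d.name) call.name = d.name;
    if (d.argumentsDelta) call.args += d.argumentsDelta;
  }
  return calls;
}

const calls = accumulateToolCalls([
  { type: 'tool_call_delta', index: 0, name: 'get_weather' },
  { type: 'tool_call_delta', index: 0, argumentsDelta: '{"city":' },
  { type: 'tool_call_delta', index: 0, argumentsDelta: '"Paris"}' },
]);
// Parse only once the stream finishes and the JSON is complete:
const parsed = JSON.parse(calls[0].args);
// parsed → { city: 'Paris' }
```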
Stream Done
Completion signal:
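A sketch of handling the done chunk. Per the best practices below, token usage is only available here; the `usage` field names are assumptions:

```typescript
// Assumed shape (illustrative):
// { type: 'done', usage?: { inputTokens: number; outputTokens: number } }
type DoneChunk = { type: 'done'; usage?: { inputTokens: number; outputTokens: number } };

function summarizeUsage(chunk: DoneChunk): string {
  // Usage is attached to the done chunk only, not to earlier deltas.
  const usage = chunk.usage;
  return usage
    ? `in=${usage.inputTokens} out=${usage.outputTokens}`
    : 'no usage reported';
}

const summary = summarizeUsage({ type: 'done', usage: { inputTokens: 12, outputTokens: 48 } });
// summary === 'in=12 out=48'
```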
Stream Error
Error during streaming:
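A sketch of surfacing an error chunk as an exception so callers can use ordinary `try`/`catch`. The error chunk's shape is an assumption:

```typescript
// Assumed shape (illustrative):
// { type: 'error', error: { message: string; code?: string } }
type ErrorChunk = { type: 'error'; error: { message: string; code?: string } };

// Convert an error chunk into a thrown Error.
function raiseStreamError(chunk: ErrorChunk): never {
  const code = chunk.error.code ? ` [${chunk.error.code}]` : '';
  throw new Error(`stream error${code}: ${chunk.error.message}`);
}

let caught = '';
try {
  raiseStreamError({ type: 'error', error: { message: 'rate limited', code: '429' } });
} catch (e) {
  caught = (e as Error).message;
}
// caught === 'stream error [429]: rate limited'
```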
Complete Handler Example
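A handler covering every chunk type discussed above. Only `text_delta` appears verbatim elsewhere on this page; the rest of the union is an assumption, so treat this as a sketch and consult the SDK's exported types:

```typescript
type Chunk =
  | { type: 'text_delta'; text: string }
  | { type: 'tool_call_delta'; index: number; name?: string; argumentsDelta?: string }
  | { type: 'done'; usage?: { inputTokens: number; outputTokens: number } }
  | { type: 'error'; error: { message: string } };

async function handleStream(stream: AsyncIterable<Chunk> | Iterable<Chunk>) {
  let text = '';
  let usage: { inputTokens: number; outputTokens: number } | undefined;
  const toolArgs: Record<number, string> = {};

  for await (const chunk of stream) {
    switch (chunk.type) {
      case 'text_delta':
        text += chunk.text;
        break;
      case 'tool_call_delta':
        toolArgs[chunk.index] = (toolArgs[chunk.index] ?? '') + (chunk.argumentsDelta ?? '');
        break;
      case 'done':
        usage = chunk.usage; // token counts only arrive here
        break;
      case 'error':
        throw new Error(chunk.error.message);
    }
  }
  return { text, usage, toolArgs };
}

// Drive the handler with an in-memory stand-in for a real stream
// (for await also accepts plain iterables):
const result = await handleStream([
  { type: 'text_delta', text: 'Hi' },
  { type: 'text_delta', text: ' there' },
  { type: 'done', usage: { inputTokens: 3, outputTokens: 2 } },
]);
// result.text === 'Hi there'
```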
Agent Streaming
Agents stream text output while handling tools internally:
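A sketch of the behavior described above, using a mock in place of the SDK's real agent API (whose name is not shown on this page): tool calls are resolved inside the agent loop, and only text deltas reach the consumer.

```typescript
type TextDelta = { type: 'text_delta'; text: string };

// Hypothetical stand-in for the SDK's agent runner.
async function* runAgentMock(prompt: string): AsyncGenerator<TextDelta> {
  // 1. The model decides to call a tool; the agent resolves it internally...
  const toolResult = await Promise.resolve('22°C'); // e.g. a weather lookup
  // 2. ...then streams only the final text answer to the caller.
  for (const piece of ['It is ', toolResult, ' in Paris.']) {
    yield { type: 'text_delta', text: piece };
  }
}

let answer = '';
for await (const chunk of runAgentMock('What is the weather in Paris?')) {
  if (chunk.type === 'text_delta') answer += chunk.text; // no tool chunks reach us
}
// answer === 'It is 22°C in Paris.'
```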
Streaming with Callbacks
For more granular control over agent execution:
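A sketch of a callback-based surface. The option names (`onText`, `onToolCall`, `onDone`) are illustrative, not the SDK's documented API; the mock driver shows how events would dispatch to hooks:

```typescript
interface StreamCallbacks {
  onText?: (text: string) => void;
  onToolCall?: (name: string, args: unknown) => void;
  onDone?: () => void;
}

type AgentEvent =
  | { type: 'text_delta'; text: string }
  | { type: 'tool_call'; name: string; args: unknown }
  | { type: 'done' };

// Dispatch each stream event to the matching callback, if provided.
async function drive(events: AsyncIterable<AgentEvent> | Iterable<AgentEvent>, cb: StreamCallbacks) {
  for await (const e of events) {
    if (e.type === 'text_delta') cb.onText?.(e.text);
    else if (e.type === 'tool_call') cb.onToolCall?.(e.name, e.args);
    else cb.onDone?.();
  }
}

const log: string[] = [];
await drive(
  [
    { type: 'tool_call', name: 'search', args: { q: 'news' } },
    { type: 'text_delta', text: 'Here you go.' },
    { type: 'done' },
  ],
  {
    onToolCall: (name) => log.push(`tool:${name}`),
    onText: (t) => log.push(`text:${t}`),
    onDone: () => log.push('done'),
  },
);
// log → ['tool:search', 'text:Here you go.', 'done']
```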
Cancellation
Cancel a streaming request:
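The standard pattern is the Web `AbortController`. Whether the SDK accepts a `signal` option is an assumption here; the mock producer below shows the mechanics:

```typescript
// Mock stream that stops producing once the signal is aborted.
async function* mockStream(signal: AbortSignal) {
  for (const word of ['one ', 'two ', 'three ', 'four ']) {
    if (signal.aborted) return;
    yield { type: 'text_delta' as const, text: word };
  }
}

const controller = new AbortController();
let received = '';
for await (const chunk of mockStream(controller.signal)) {
  received += chunk.text;
  if (received.includes('two')) controller.abort(); // e.g. user clicked "stop"
}
// received === 'one two ' — later chunks are never emitted
```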
Rich Stream Events (CLI/UI)
For building interactive UIs:
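A sketch of rendering higher-level lifecycle events in a terminal UI. The event names (`tool_start`, `tool_end`, `text`) are hypothetical, chosen to illustrate the idea of rich events layered above raw chunks:

```typescript
type RichEvent =
  | { type: 'tool_start'; name: string }
  | { type: 'tool_end'; name: string }
  | { type: 'text'; text: string };

// Render each event as a line a CLI could display.
function render(e: RichEvent): string {
  switch (e.type) {
    case 'tool_start': return `running ${e.name}...`;
    case 'tool_end': return `${e.name} finished`;
    case 'text': return e.text;
  }
}

const events: RichEvent[] = [
  { type: 'tool_start', name: 'search' },
  { type: 'tool_end', name: 'search' },
  { type: 'text', text: 'Found 3 results.' },
];
const lines = events.map(render);
// lines → ['running search...', 'search finished', 'Found 3 results.']
```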
Best Practices
- Always handle all chunk types - Don't assume only text will be streamed
- Accumulate tool call arguments - They arrive in pieces
- Check for errors - Handle `error` chunks gracefully
- Use done signal - Don't assume stream ends with text
- Implement cancellation - Allow users to abort long streams
- Track usage - Token counts are only available in the `done` chunk