LANGCHAIN SHIPS STREAMING V2 AND STANDARDIZES AGENTS ON CREATE_AGENT
LangChain changed how streaming works and is standardizing agent creation on create_agent.
The latest langchain-classic 1.0.5 retargets deprecations to create_agent and bumps core versions.
Core now includes content-block-centric streaming (v2) per the same release notes; it changes event shapes and parsing paths, so plan for SSE/WebSocket updates.
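The practical impact: message content that used to arrive as a plain string may now arrive as a list of typed content blocks. A minimal normalizer sketch, assuming blocks are dicts with a `type` key (the block shape here is an assumption, not the documented schema):

```python
from typing import Any, Dict, List

def normalize_content(content: Any) -> List[Dict[str, Any]]:
    """Coerce legacy string content and v2-style block lists
    into one list-of-blocks shape for downstream parsers.

    Assumes v2 blocks are dicts with a "type" key; adjust to
    the actual schema in your langchain-core version.
    """
    if isinstance(content, str):
        # Legacy shape: a bare string becomes a single text block.
        return [{"type": "text", "text": content}]
    if isinstance(content, list):
        blocks = []
        for item in content:
            if isinstance(item, str):
                blocks.append({"type": "text", "text": item})
            elif isinstance(item, dict):
                blocks.append(item)
        return blocks
    return []

# Both shapes now flow through one code path.
legacy = normalize_content("hello")
v2 = normalize_content([{"type": "text", "text": "hello"}])
```

Funneling both shapes through one helper keeps the rest of your parsing code oblivious to which streaming version produced the message.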
langchain-anthropic 1.4.3 also lands a stability fix and aligns with the agent API shift. If you lean on reasoning and tool use, expect higher test-time compute, and budget accordingly.
Streaming v2 changes event structure, which can silently break parsers, UIs, and logging.
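One way to avoid those silent breaks is to accumulate deltas defensively: render the block types you know, and count what you skip instead of crashing. A sketch, assuming deltas are dicts with a `type` key (a hypothetical shape, not the documented event schema):

```python
from collections import Counter

def accumulate_text(deltas, skipped=None):
    """Fold streamed content-block deltas into display text.

    Unknown block types are counted, not raised, so a new
    block type introduced upstream degrades gracefully.
    """
    skipped = skipped if skipped is not None else Counter()
    parts = []
    for delta in deltas:
        kind = delta.get("type")
        if kind == "text":
            parts.append(delta.get("text", ""))
        else:
            skipped[kind] += 1  # surface these in logs/metrics
    return "".join(parts), skipped

text, skipped = accumulate_text([
    {"type": "text", "text": "Hel"},
    {"type": "reasoning", "reasoning": "..."},  # new in v2-style streams
    {"type": "text", "text": "lo"},
])
# text == "Hello"; skipped records one "reasoning" delta
```

The `skipped` counter is exactly the signal you want on a dashboard: a sudden spike means the provider started emitting a block type your UI is throwing away.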
create_agent consolidates agent setup, reducing API churn across providers and easing migrations.
- Run a minimal chain with streaming v2 and verify SSE/WebSocket parsing, backpressure, and partial updates across OpenAI and Anthropic.
- Migrate one production agent to create_agent and measure latency, token usage, and tool-calling behavior under load.
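For the SSE check above, the wire format matters: each event is one or more `data:` lines terminated by a blank line, and multi-line payloads must be split across multiple `data:` lines. A minimal framing helper to exercise your client parser against (pure stdlib, no LangChain dependency):

```python
import json

def sse_frame(payload: dict, event: str = "") -> str:
    """Serialize one server-sent event: an optional "event:"
    line, one "data:" line per payload line, then a blank
    line that terminates the event."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    for line in json.dumps(payload).splitlines():
        lines.append(f"data: {line}")
    return "\n".join(lines) + "\n\n"

frame = sse_frame({"type": "text", "text": "partial"}, event="delta")
```

Feeding frames like this through your client is a cheap way to confirm the parser handles partial updates before pointing it at a live model stream.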
Legacy codebase integration strategies...
1. Search for deprecated agent constructors, enable deprecation warnings in CI, and gate rollouts behind a feature flag.
2. Add telemetry for dropped events and malformed deltas; compare costs with and without reasoning turns.
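For step 1, the CI gate can be as simple as escalating `DeprecationWarning` to an error so legacy constructors fail the build instead of logging quietly. A stdlib sketch, where `legacy_agent` is a hypothetical stand-in for a deprecated constructor:

```python
import warnings

def legacy_agent():
    """Hypothetical stand-in for a deprecated agent constructor."""
    warnings.warn("use create_agent instead", DeprecationWarning, stacklevel=2)
    return object()

def call_under_ci_policy():
    """In CI, treat DeprecationWarning as a hard failure."""
    with warnings.catch_warnings():
        warnings.simplefilter("error", DeprecationWarning)
        try:
            legacy_agent()
            return "passed"
        except DeprecationWarning:
            return "failed: deprecated API in use"

result = call_under_ci_policy()  # → "failed: deprecated API in use"
```

In a pytest suite, the ini option `filterwarnings = error::DeprecationWarning` achieves the same effect without wrapping call sites.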
Fresh architecture paradigms...
1. Adopt create_agent and streaming v2 from day one to keep provider-neutral event handling.
2. Model UIs and storage on content blocks instead of raw text deltas to simplify multi-model support.
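Point 2 in practice: store the block list and derive the display string from it, rather than storing a flattened string and losing tool calls and reasoning. A sketch with assumed block shapes (not the documented schema):

```python
def merge_blocks(blocks):
    """Canonicalize a block list for storage: fuse adjacent
    text blocks, keep non-text blocks (tool calls, reasoning)
    intact so nothing is lost to flattening."""
    merged = []
    for block in blocks:
        if (block.get("type") == "text" and merged
                and merged[-1].get("type") == "text"):
            merged[-1] = {"type": "text",
                          "text": merged[-1]["text"] + block["text"]}
        else:
            merged.append(dict(block))
    return merged

def display_text(blocks):
    """UI view derived from storage, not the other way around."""
    return "".join(b["text"] for b in blocks if b.get("type") == "text")

stored = merge_blocks([
    {"type": "text", "text": "Call "},
    {"type": "text", "text": "done."},
    {"type": "tool_use", "name": "search", "input": {"q": "langchain"}},
])
# stored keeps the tool_use block; display_text(stored) == "Call done."
```

Because the blocks are the source of truth, switching providers or rendering a new block type is a view change, not a storage migration.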