VERTEX-AI
30 days · UTC
Claude Code 2.1.86–2.1.87 tighten reliability, add session-aware header, and smooth long runs
Anthropic shipped Claude Code 2.1.86–2.1.87 with broad reliability fixes and a new session header that simplifies telemetry and ops. The 2.1.86 updat...
Claude Code v2.1.81 adds Channels (phone approvals) and a headless --bare mode
Anthropic shipped Claude Code v2.1.81 with remote approval Channels and a headless --bare mode, plus several reliability fixes. The release adds a ne...
Claude Code Channels lands: push-to-chat agents and a headless --bare mode
Anthropic shipped Claude Code Channels and a headless --bare mode, making Claude a push-driven, scriptable agent with key reliability fixes. Channels...
Agent backends are converging: tools, graphs, and caches you can ship now
Agent backends are converging on tool-centric, graph-aware designs with caching at every layer, ready to ship on Vertex AI or Neo4j. A hands-on guide...
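The "caching at every layer" idea from this guide can be sketched as a tool-result cache keyed by tool name plus canonicalized arguments; the class and function names below are illustrative stand-ins, not APIs from the article, and are meant only to show the memoization pattern an agent backend would wrap around expensive tool calls.

```python
import hashlib
import json

class ToolCache:
    """Illustrative tool-result cache: key = tool name + canonicalized args."""

    def __init__(self):
        self._store = {}
        self.hits = 0

    def _key(self, tool, args):
        # Canonicalize args so {"a": 1, "b": 2} and {"b": 2, "a": 1} share a key.
        payload = json.dumps({"tool": tool, "args": args}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def call(self, tool, args, fn):
        """Return a cached result if present; otherwise run the tool and store it."""
        k = self._key(tool, args)
        if k in self._store:
            self.hits += 1
            return self._store[k]
        result = fn(**args)
        self._store[k] = result
        return result

# Usage: wrap an expensive tool (here a stand-in function) in the cache.
cache = ToolCache()

def lookup_product(sku):
    return {"sku": sku, "price": 42}

first = cache.call("lookup_product", {"sku": "A1"}, lookup_product)
second = cache.call("lookup_product", {"sku": "A1"}, lookup_product)  # served from cache
```

The same shape applies at other layers (embedding lookups, graph queries, prompt prefixes): a deterministic key over the inputs, a store, and a hit counter for observability.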
Claude Code v2.1.80 ships big-repo perf gains, proxy streaming fixes, and new MCP push channels
Anthropic released Claude Code v2.1.80 with large-repo performance improvements, safer proxy streaming, new agent hooks, and visible rate-limit status...
Google opens Gemini on IL5 GenAIMIL for U.S. government; build task-specific agents with Vertex AI
Google Cloud made Gemini available to U.S. military and government users on its IL5 GenAIMIL platform with built-in agent tooling. Per [this report](...
Shopify taps Google Vertex AI Discovery AI for semantic search in enterprise tier
Shopify's enterprise tier now uses Google Cloud's Vertex AI Discovery AI for semantic product search, with early adopters reporting up to 15x more ord...
Claude Sonnet 4.5 vs Gemini 3: structured outputs, grounding, and reliability trade-offs
For production teams choosing between Claude Sonnet 4.5 and Gemini 3, the core trade-off is post-generation schema enforcement versus native, schema-c...
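The post-generation side of that trade-off can be illustrated with a minimal validator that parses model text and checks required keys and types after the fact; the schema and the pretend model output below are stand-ins for illustration (a production pipeline would use a full JSON Schema validator with retry/repair on failure), while the native alternative constrains the schema at decode time instead.

```python
import json

# Stand-in schema, not from the article: required keys mapped to expected types.
SCHEMA = {"name": str, "priority": int}

def enforce(raw, schema):
    """Parse model output and verify required keys/types after generation."""
    data = json.loads(raw)
    for key, typ in schema.items():
        if key not in data or not isinstance(data[key], typ):
            raise ValueError(f"schema violation on {key!r}")
    return data

model_text = '{"name": "ticket-123", "priority": 2}'  # pretend LLM output
record = enforce(model_text, SCHEMA)
```

The design difference is where failures surface: post-generation enforcement catches violations in your code and must decide whether to retry, while schema-constrained decoding prevents most of them from being emitted at all.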
Google debuts Gemini 3.1 Flash Lite: cheaper, faster model with variable reasoning
Google launched Gemini 3.1 Flash Lite, a cheaper and faster developer-focused model with variable reasoning now in preview via the Gemini API and Vert...
Google’s Gemini 3.1 Flash-Lite targets high-volume, low-latency workloads
Google released Gemini 3.1 Flash-Lite, a faster, cheaper model aimed at high-volume developer workloads and signaling a broader shift to lighter LLMs ...
Google ships Gemini 3.1 Pro with big reasoning gains and 1M‑token context
Google released Gemini 3.1 Pro with major reasoning gains, a context window up to 1 million tokens, and broad availability across developer and enterp...
Gemini 2.5 Pro 'Deep Think' and Code Assist GA: Practical wins from I/O 2025
Google I/O 2025 highlighted Gemini 2.5 Pro’s experimental Deep Think mode for stronger reasoning on complex coding/data tasks and made it accessible v...
Evaluate claims about a new budget 'Gemini 3 Flash' model
A recent third-party video claims Google has a new low-cost 'Gemini 3 Flash' model with strong performance and a free tier. There is no official Googl...
Gemini Enterprise update claims — prep your Vertex AI eval
Creator videos claim a new Gemini Enterprise update, but no official Google details are linked. Treat this as a heads-up: prep an evaluation plan in V...
Engineering, not models, is now the bottleneck
A recent video argues that model capability is no longer the main constraint; the gap is in how we design agentic workflows, tool use, and evaluation ...