Entity_Name
Classification
Capability_Summary
Status
Cursor

CURSOR

LLM_CORE

Cursor is an AI-first code editor based on VS Code that embeds chat, code-generation and multi-agent orchestration directly in the IDE. It targets software developers who want integrated AI assistance for writing, refactoring and navigating large codebases.

Operational
Vertex

VERTEX

PLATFORM

Vertex AI is Google Cloud’s integrated machine-learning platform for building, training, and deploying traditional and generative AI models. It offers managed infrastructure, MLOps tooling, and APIs so developers and enterprises can develop and scale ML workflows on Google Cloud.

Operational
Bedrock

BEDROCK

SERVICE

Amazon Bedrock is AWS’s managed generative-AI service that gives developers API access to multiple foundation models and tooling for building, deploying, and governing AI agents and applications. It is aimed at enterprises that want scalable model hosting, agent registries, and integrated security and compliance on the AWS cloud.

Operational
PowerShell

POWERSHELL

PLATFORM

PowerShell is an open-source command-line shell and scripting language created by Microsoft for cross-platform task automation and configuration management. It offers an object-oriented pipeline and rich module ecosystem that developers and IT pros use to manage Windows, macOS, Linux, and cloud resources.

Operational
Google Vertex AI

GOOGLE VERTEX AI

PLATFORM

Vertex AI is Google Cloud’s unified platform for building, deploying, and managing machine-learning and generative-AI models at scale. It targets developers and data scientists who want managed infrastructure, model hosting, and tooling such as model tuning, evaluation, and security features.

Operational
VS Code

VS CODE

PLATFORM

Visual Studio Code is a free, open-source code editor and lightweight IDE created by Microsoft. It is widely used in developer workflows—including the WordPress and agent-native setups referenced in the stories—for coding, debugging, and running extensions.

Operational
Anthropic Claude

ANTHROPIC CLAUDE

LLM_CORE

Claude is a family of large language models and chat assistants developed by Anthropic. It powers coding agents, content generation, and other AI workflows through the Claude.ai web app and API integrations.

Operational
Gemini

GEMINI

PLATFORM

Gemini is Google’s family of large multimodal generative AI models and the associated developer platform for building chat, agent, and RAG applications. It powers products like the Gemini chat app and Gemini CLI and is offered through Google AI APIs for third-party integration.

Operational
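Since the entry above notes that Gemini is offered through Google AI APIs for third-party integration, here is a minimal sketch of the request shape for the `generateContent` REST endpoint. This is illustrative only: the endpoint path and model name follow public Google AI documentation but are assumptions here, and no network call is made.

```python
import json

# Build (but do not send) a Gemini generateContent-style request.
# Endpoint path and model name are assumptions for illustration.
def build_generate_content_request(prompt: str, model: str = "gemini-pro") -> dict:
    return {
        "url": (
            "https://generativelanguage.googleapis.com/v1beta/models/"
            f"{model}:generateContent"
        ),
        # The body nests the prompt as contents -> parts -> text.
        "body": {"contents": [{"role": "user", "parts": [{"text": prompt}]}]},
    }

req = build_generate_content_request("Summarize this diff.")
print(json.dumps(req["body"]))
```

In a real integration the `body` would be POSTed to the URL with an API key; the sketch only shows the payload nesting.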
SWE-Bench Pro

SWE-BENCH PRO

TERM

SWE-Bench Pro is a harder, contamination-resistant successor to SWE-bench that evaluates AI coding agents on real-world software engineering tasks drawn from actively maintained repositories.

Operational
SWE-bench

SWE-BENCH

TERM

SWE-bench is a benchmark that evaluates language models on real GitHub issues: given a repository snapshot and an issue, a model must produce a patch that resolves the issue and passes the project's test suite.

Operational
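The scoring idea behind SWE-bench-style benchmarks can be sketched in a few lines: a task counts as resolved only if every previously failing test passes after the model's patch is applied, and the headline metric is the resolved fraction. All names and data below are hypothetical, for illustration only.

```python
# Illustrative SWE-bench-style scoring (hypothetical task data).
# Each task lists "fail-to-pass" tests that must succeed post-patch.

def score_task(fail_to_pass: list[str], test_results: dict[str, bool]) -> bool:
    """A task is resolved when all its fail-to-pass tests now pass."""
    return all(test_results.get(t, False) for t in fail_to_pass)

def resolved_rate(tasks: list[dict]) -> float:
    """Fraction of tasks resolved: the benchmark's headline metric."""
    resolved = sum(score_task(t["fail_to_pass"], t["test_results"]) for t in tasks)
    return resolved / len(tasks)

tasks = [
    {"fail_to_pass": ["test_a"], "test_results": {"test_a": True}},
    {"fail_to_pass": ["test_b", "test_c"],
     "test_results": {"test_b": True, "test_c": False}},
]
print(resolved_rate(tasks))  # 0.5
```

Real harnesses additionally check that previously passing tests still pass; this sketch keeps only the core resolved/unresolved decision.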
.NET

.NET

PLATFORM

.NET is an open-source, cross-platform developer platform for building and running applications in C#, F#, and other languages. It supplies a common runtime, standard libraries, and tooling that target web, desktop, mobile, cloud, and AI workloads.

Operational
Gemini 3.1 Pro

GEMINI 3.1 PRO

LLM_CORE

Gemini 3.1 Pro is Google’s top-tier large language model variant in the Gemini family, offering very long (≈1 M token) context windows and high reasoning ability for code, agent, and document-heavy workloads. It is exposed to developers through Google AI and Vertex AI endpoints as a paid, high-performance model option.

Operational
Gemini 3

GEMINI 3

LLM_CORE

Gemini 3 is Google’s third-generation Gemini large language model family, available through Google AI and Cloud APIs for reasoning, coding, and long-context document work. It is positioned as a frontier-class model competing with GPT-5, Claude, and Grok in benchmarks like SWE-Bench and RAG document QA.

Operational
Google

GOOGLE

COMPANY

Google is a multinational technology company that develops internet services, cloud platforms, artificial intelligence models and developer tools. In the stories it appears as the creator of Gemini models/CLI and TurboQuant research, and as a key partner in security and agentic AI initiatives.

Operational
SWE-Bench Verified

SWE-BENCH VERIFIED

TERM

SWE-Bench Verified is a human-validated 500-problem subset of SWE-bench whose tasks are confirmed to be solvable and fairly testable; it is the variant most commonly cited when comparing coding agents.

Operational
Qwen

QWEN

LLM_CORE

Qwen is Alibaba Cloud’s open-source family of large language models designed for long-context reasoning, coding and agent workflows. Releases such as Qwen 3.6 Plus and Qwen Coder offer context windows of up to a million tokens and can be deployed in the cloud or run locally by developers.

Operational
GitHub Copilot Pro

GITHUB COPILOT PRO

LLM_CORE

GitHub Copilot Pro is the paid, higher-tier version of GitHub’s AI pair-programming assistant that provides faster models, chat capabilities, and enhanced features for individual developers. It integrates directly into Visual Studio Code, Visual Studio, and other IDEs to suggest code, explain snippets, and boost productivity.

Beta_v2
Copilot CLI

COPILOT CLI

REPO

GitHub Copilot CLI is an open-source command-line interface that brings Copilot’s AI coding and agent features to the terminal. It lets developers invoke chat-style agents, run tasks, and manage Model Context Protocol servers directly from the shell.

Operational
MCP Server

MCP SERVER

TERM

An MCP server is a runtime service that implements the open-source Model Context Protocol (MCP), exposing local tools, data, and APIs so AI agents like ChatGPT or Claude can invoke them. Security, reliability and integration guides treat it as the critical bridge between agent front-ends and real back-end resources.

Operational
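Since the entry above describes an MCP server as the bridge between agent front-ends and back-end tools, here is a minimal protocol-shaped sketch in pure Python. MCP is JSON-RPC 2.0 under the hood, with methods such as `tools/list` and `tools/call`; real servers use the official SDKs, and the tool name and schema here are hypothetical.

```python
import json

# Hypothetical tool registry: name -> description + handler.
TOOLS = {
    "echo": {
        "description": "Return the text it was given.",
        "handler": lambda args: args["text"],
    }
}

def handle(request_json: str) -> str:
    """Dispatch one JSON-RPC request the way an MCP server would."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": n, "description": t["description"]}
                            for n, t in TOOLS.items()]}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = {"content": [{"type": "text",
                               "text": tool["handler"](req["params"]["arguments"])}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

reply = handle(json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                           "params": {"name": "echo",
                                      "arguments": {"text": "hi"}}}))
print(reply)
```

An agent front-end would send the same `tools/call` message over stdio or HTTP; the sketch omits transport, capability negotiation, and schemas to show only the dispatch shape.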
Copilot Pro

COPILOT PRO

SERVICE

GitHub Copilot Pro is the paid individual subscription tier of GitHub Copilot that unlocks higher-end AI models, extra usage limits, and premium features across Copilot Chat, IDE plugins, and the Copilot CLI. It targets professional developers who want more powerful code completion, chat and review capabilities than the free tier provides.

Operational