CODEX COMMAND INJECTION LET ATTACKERS STEAL GITHUB TOKENS; FIXES SHIPPED—TEAMS SHOULD ROTATE AND HARDEN NOW
BeyondTrust disclosed a command injection in OpenAI Codex that could steal GitHub tokens; OpenAI hotfixed it and hardened defenses by late January.
According to a detailed writeup from BeyondTrust Phantom Labs, Codex accepted a GitHub branch name that flowed into shell execution, enabling arbitrary command injection and theft of the GitHub User Access Token used by Codex. The attack path could affect multiple surfaces, including the ChatGPT website, Codex CLI, SDK, and IDE extension, and could scale across shared repos and environments (see BeyondTrust's analysis and timeline).
OpenAI issued a hotfix on December 23, 2025, followed by additional hardening on January 22 and January 30, 2026. Even with vendor fixes, teams integrating AI coding agents with GitHub should assume untrusted inputs, tighten token scopes, sanitize branch names, and isolate agent runtimes to reduce blast radius (BeyondTrust report).
LLM-powered coding tools can expand your attack surface into GitHub, turning small input bugs into organization-wide credential compromise.
Vendor patches help, but your token hygiene, input sanitization, and agent isolation determine real-world blast radius.
- Create repos/branches with malicious names like "$(id)" or "`env`" and verify no CI, agent, or internal tool executes them; enforce server-side branch name policies.
- Audit GitHub token creation/use tied to Codex from Dec 2025–Feb 2026; rotate PATs/OAuth tokens, trim scopes to least privilege, and validate logs for unusual API access.
Legacy codebase integration strategies...
- 01. Inventory where Codex or similar agents touch your GitHub; revoke unused tokens, migrate to GitHub Apps with fine-grained, repo-scoped, short-lived creds.
- 02. Add pre-receive hooks or org policies to reject unsafe branch names; centralize shell escaping and command execution guards in shared libraries.
Fresh architecture paradigms...
- 01. Design agents as untrusted components: strict input validation, parameterized commands, jailed shells, network egress allowlists, and default-deny secrets access.
- 02. Use job-scoped ephemeral tokens (GitHub App installations) and secretless auth patterns; log and trace all agent tool calls for forensics.
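The branch-name policy and parameterized command execution recommended above (server-side rejection of unsafe names in a pre-receive hook, and passing a branch to git as a single argv element rather than through a shell string), as an added Python example that is not part of the BeyondTrust report or of Codex itself. The allowlist regex, the function names and the sample hook input are assumptions constructed for illustration.

```python
import re
from typing import Optional

# Allowlist for branch names: a shell-safe subset of what git itself accepts
# (assumption: this subset fits the team's naming conventions).
_BRANCH_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._/-]{0,254}$")
# Characters a POSIX shell interprets; any of them in a ref name is a red flag.
_SHELL_META = set("$`\"'\\;&|<>(){}[]!#*?~ \t\n\r")


def branch_name_problem(name: str) -> Optional[str]:
    """Return None if the name is safe, otherwise a short reason for rejecting it."""
    if not name:
        return "empty name"
    if any(ch in _SHELL_META for ch in name):
        return "shell metacharacter"
    if ".." in name or "//" in name or "@{" in name or name.endswith((".lock", "/", ".")):
        return "violates git refname rules"
    if not _BRANCH_RE.fullmatch(name):
        return "character outside allowlist"
    return None


def git_checkout_argv(branch: str) -> list:
    """Build an argv list for subprocess.run(argv) with shell=False: the branch is one
    argument, never parsed by a shell; the allowlist also forbids a leading '-'."""
    problem = branch_name_problem(branch)
    if problem:
        raise ValueError(f"rejected branch name {branch!r}: {problem}")
    return ["git", "checkout", branch]


def prereceive_rejections(hook_stdin: str) -> list:
    """Policy for a pre-receive hook: each stdin line is '<old-sha> <new-sha> <refname>'.
    Returns the refnames to refuse (a real hook prints them and exits non-zero)."""
    rejected = []
    for line in hook_stdin.splitlines():
        parts = line.split(" ", 2)
        if len(parts) != 3:
            continue
        refname = parts[2]
        name = refname[len("refs/heads/"):] if refname.startswith("refs/heads/") else refname
        if branch_name_problem(name):
            rejected.append(refname)
    return rejected


# Constructed sample hook input (shortened SHAs), not data from the BeyondTrust writeup.
SAMPLE_HOOK_INPUT = ("0000000 1111111 refs/heads/feature/login-fix\n0000000 2222222 refs/heads/$(id)\n"
                     "0000000 3333333 refs/heads/`env`\n0000000 4444444 refs/heads/-rf")
```

Wiring this into a repository's pre-receive hook refuses the push before a malicious name ever reaches a CI job or an agent's shell; the argv builder covers the consumer side for tools that must check out a caller-supplied branch.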