Breaking News

OpenAI macOS Apps Compromised in Axios Supply Chain Attack

Malicious Axios v1.14.1 compromised OpenAI's macOS app signing pipeline. ChatGPT Desktop, Codex, and Atlas certificates exposed. Second major AI supply chain attack in weeks.

OpenAI · Supply Chain Attack · Axios · macOS Security · Code Signing

OpenAI has revealed that its macOS app signing certificates were exposed when a GitHub Actions workflow downloaded a malicious version of the Axios npm package on March 31, 2026. The compromised workflow had access to certificates used to sign ChatGPT Desktop, Codex, Codex CLI, and Atlas macOS apps.

This is the second major AI supply chain attack in weeks, following the LiteLLM compromise in March.


🔴 WHAT HAPPENED

On March 31, 2026, Axios — a widely used third-party developer library — was compromised as part of a broader software supply chain attack. At that time, a GitHub Actions workflow OpenAI uses in the macOS app-signing process downloaded and executed a malicious version of Axios (version 1.14.1).

This workflow had access to the certificate and notarization material used for signing macOS applications, including:

  • ChatGPT Desktop
  • Codex App
  • Codex CLI
  • Atlas

The certificate is what tells macOS that software comes from the legitimate developer, OpenAI. If an attacker obtained it, they could sign their own code to make it appear as legitimate OpenAI software.
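Users who want to check this themselves can inspect an app's signature with two tools that ship with macOS (the application path below is an example; point it at the actual app bundle on your machine):

```shell
# Show the certificate chain the app was signed with
codesign -dv --verbose=4 /Applications/ChatGPT.app 2>&1 | grep "Authority"

# Ask Gatekeeper whether the app passes the system's assessment policy
spctl --assess --verbose=2 /Applications/ChatGPT.app
```

A revoked or untrusted signature shows up as a failed assessment in the spctl output.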


🛡️ OPENAI’S RESPONSE

OpenAI’s analysis concluded that the signing certificate was likely not exfiltrated by the malicious payload, based on when the payload executed relative to when the certificate was injected into the job, the job’s sequencing, and other mitigating factors.

However, out of an abundance of caution, OpenAI is treating the certificate as compromised and has:

  1. Rotated the macOS code signing certificate — New builds of all relevant macOS products published with the new certificate
  2. Engaged a third-party digital forensics and incident response firm — Full investigation underway
  3. Worked with Apple — Ensured software signed with the previous certificate cannot be newly notarized
  4. Reviewed all notarization events — Confirmed no unexpected software notarization occurred with these keys
  5. Validated published software — No unauthorized modifications found

Effective May 8, 2026, older versions of OpenAI’s macOS desktop apps will no longer receive updates or support, and may not be functional. Minimum versions signed with the new certificate:

  • ChatGPT Desktop: 1.2026.051
  • Codex App: 26.406.40811
  • Codex CLI: 0.119.0
  • Atlas: 1.2026.84.2
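One way to check whether an installed copy meets the minimum is to read the version string from the app bundle (the path below is an example; the CLI reports its version directly):

```shell
# Read the installed version from the app bundle's Info.plist
defaults read /Applications/ChatGPT.app/Contents/Info.plist CFBundleShortVersionString

# Codex CLI prints its own version
codex --version
```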

🐛 THE ROOT CAUSE

The root cause was a misconfiguration in the GitHub Actions workflow. Specifically:

  • The action used a floating tag instead of a specific commit hash
  • No minimumReleaseAge was configured, so newly published packages were eligible for installation immediately

This means the workflow would automatically pull the latest version of Axios regardless of how recently it was published — no waiting period to detect malicious releases, no pinning to a known-good commit.
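As a sketch of what the fix looks like in practice, a GitHub Actions dependency can be pinned to an immutable commit SHA rather than a mutable tag, and package managers such as pnpm (v10.16+) support a minimumReleaseAge setting that delays newly published versions. The action name, SHA, and age below are illustrative placeholders, not OpenAI's actual configuration:

```yaml
# .github/workflows/sign.yml (excerpt)
steps:
  # Pin to a full commit SHA, not a floating tag like @v4
  - uses: example-org/setup-signing@8f4b7f84864484a7bf31766abe9204da3cbe65b3

# pnpm-workspace.yaml (separate file)
# Refuse any package version published less than 3 days ago (value in minutes)
minimumReleaseAge: 4320
```

With both in place, a freshly published malicious release can neither replace the pinned action nor be pulled in as a dependency before the waiting period expires.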


⚠️ WHY 30 DAYS, NOT IMMEDIATE

OpenAI is giving users a 30-day window rather than immediately revoking the certificate. The reasoning:

  • New notarization with the previous certificate is already blocked, so by default macOS security protections would refuse to run any fraudulent app signed with it
  • Immediate revocation would cause macOS to block new downloads and first-time launches of apps signed with the previous certificate, potentially disrupting users
  • OpenAI is monitoring for any indicators of misuse and will accelerate the revocation timeline if malicious activity is detected

🔗 THE PATTERN: AI COMPANIES ARE PRIME TARGETS

This attack follows the LiteLLM supply chain compromise in March 2026, where a malicious version of the LiteLLM Python package exfiltrated API keys from applications using it.

The pattern is clear: AI companies are high-value targets for software supply chain attacks. Their build pipelines have access to signing certificates, API keys, and proprietary model artifacts. A single compromised dependency in a CI/CD workflow can expose the entire chain of trust.

Key vulnerabilities in AI company build pipelines:

  • Floating dependencies — Using tags or latest versions instead of pinned commit hashes
  • No release age gates — Automatically pulling newly published packages without a waiting period
  • Over-privileged workflows — GitHub Actions with access to signing certificates when they don’t need it
  • Single point of failure — One compromised dependency exposing certificates for multiple products
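On the over-privilege point in particular, GitHub Actions supports least-privilege configuration: token permissions can be scoped explicitly at the workflow level, and signing secrets can be confined to the single job that needs them. A minimal sketch (job names, the secret name, and the script path are illustrative):

```yaml
permissions:
  contents: read        # minimum token scope for the whole workflow

jobs:
  build:
    runs-on: macos-latest
    steps:
      - run: make build           # build job never sees signing material

  sign:
    needs: build
    runs-on: macos-latest
    steps:
      - env:
          SIGNING_CERT: ${{ secrets.MACOS_SIGNING_CERT }}  # only this job can read it
        run: ./scripts/sign.sh    # illustrative signing script
```

Under this layout, a compromised dependency in the build job has nothing worth stealing.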

🇳🇿 WHAT THIS MEANS FOR NZ

For New Zealand’s growing AI sector, this incident highlights several risks:

  • Supply chain security is not optional. If OpenAI can get hit through a third-party npm package, any organization using automated build pipelines is vulnerable.
  • Certificate hygiene matters. Organizations distributing signed macOS apps need plans for certificate rotation — it’s not a question of if, but when.
  • Pinned dependencies are non-negotiable. Every package in a build pipeline should be pinned to a specific commit hash with a minimum release age.
  • The AI sector attracts disproportionate attention. Attacking an AI company’s supply chain gives attackers access to signing keys, API credentials, and model artifacts — a uniquely valuable combination.

🔍 THE BOTTOM LINE

OpenAI’s macOS app signing certificates were exposed through a malicious Axios package in their GitHub Actions pipeline. While OpenAI found no evidence of actual misuse, the certificates could theoretically be used to sign malicious software that appears to come from OpenAI.

All macOS users of ChatGPT Desktop, Codex, Codex CLI, and Atlas must update by May 8, 2026. After that date, apps signed with the old certificate may not function.

The root cause was embarrassingly simple — a floating tag instead of a pinned commit hash, and no minimum release age for new packages. These are supply chain hygiene 101, and one of the world’s most valuable AI companies got caught out.

This is the second AI supply chain attack in weeks. After LiteLLM, the message is clear: AI companies are prime targets, and their build pipelines need the same security scrutiny as their models.

For developers everywhere: Pin your dependencies. Set minimum release ages. Don’t let your CI/CD workflows have more privileges than they need. If OpenAI can get burned by a malicious npm package, so can you.


SOURCES

  • OpenAI — Our Response to the Axios Developer Tool Compromise (April 2026)
  • Singularity.Kiwi — LiteLLM Attack: A Wake-Up Call for New Zealand’s AI Community (March 2026)