OpenAI Launches GPT-5.3-Codex-Spark
News Synopsis
OpenAI has introduced a new AI coding model, GPT-5.3-Codex-Spark. The company describes it as its first real-time coding model, capable of writing and refining code almost instantly with ultra-low latency. The model runs on specialized low-latency hardware and is designed specifically for developers. It is currently available in a limited research preview.
OpenAI has taken a major step in the coding domain by launching GPT-5.3-Codex-Spark, a real-time AI model built to accelerate software development workflows. The new model focuses on instant code generation and editing, signaling OpenAI’s growing emphasis on AI-powered development tools in 2026.
Increased Focus on Codex
In 2026, OpenAI elevated the Codex model lineup to a higher strategic priority. The company recently released GPT-5.3-Codex, which launched ahead of its general-purpose counterpart. OpenAI CEO Sam Altman described Codex's growth as extremely rapid, citing a 50 percent increase within a single week.
With the introduction of Codex-Spark, OpenAI has now taken a direct step toward real-time coding capabilities. This move reflects the company’s stronger commitment to AI-driven software development tools.
What Makes GPT-5.3-Codex-Spark Special?
Real-Time Code Generation
The biggest strength of GPT-5.3-Codex-Spark is its ability to generate and edit code in real time. According to the company, the model can almost instantly write code, make edits, and reshape logic structures.
- Processes approximately 1,000 tokens per second
- Designed as a text-only model
- Focused on code-level improvements, interface refinement, and targeted editing
- Offers a 128,000-token context window
This context window enables it to handle standard and moderately complex development tasks efficiently. For more complex tasks, OpenAI recommends using the GPT-5.3-Codex model.
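To put the reported numbers in perspective, a back-of-envelope calculation is a minimal sketch assuming a sustained throughput of roughly 1,000 tokens per second, as stated above; real-world latency will vary:

```python
def est_seconds(tokens: int, tokens_per_second: float = 1000.0) -> float:
    """Back-of-envelope generation time at the reported throughput."""
    return tokens / tokens_per_second

# A typical ~500-token function body would stream out in about half a second,
# while filling the full 128,000-token context would take on the order of
# two minutes end to end.
print(f"{est_seconds(500):.1f} s")
print(f"{est_seconds(128_000):.0f} s")
```

At these speeds, short edit-generate loops feel effectively instantaneous, which is the workflow the model targets.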
Specialized Hardware Partnership
The high speed of Codex-Spark is powered by low-latency hardware. OpenAI recently partnered with Cerebras Systems, and Codex-Spark runs on the company’s Wafer Scale Engine 3 AI accelerator.
Internal benchmark tests suggest:
- Better performance than GPT-5.1-Codex-mini
- Slightly lower performance than GPT-5.3-Codex
- Significant speed advantage due to hardware optimization
Availability and Access
GPT-5.3-Codex-Spark is currently available to ChatGPT Pro subscribers via:
- Codex app
- CLI
- IDE extension
It is offered as a research preview to a limited set of users. API access is also available to select design partners. OpenAI plans to expand access in the coming weeks.
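For design partners with API access, invoking the model would presumably look like any other chat-completion request. The sketch below only assembles the request payload; the model identifier `gpt-5.3-codex-spark` is an assumption based on the product name, since OpenAI has not published the API name:

```python
def build_request(prompt: str) -> dict:
    """Assemble a chat-completion payload for a quick code-editing task.

    The model identifier is a guess derived from the product name;
    the actual API name may differ.
    """
    return {
        "model": "gpt-5.3-codex-spark",  # hypothetical identifier
        "messages": [
            {"role": "system", "content": "You are a fast code-editing assistant."},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_request("Rename the variable `tmp` to `buffer` in this function.")
print(payload["model"])
```

A design partner would pass this payload to the OpenAI SDK's chat-completions call with their API key; for everyone else, the model remains reachable only through the Codex surfaces listed above.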
Key Highlights at a Glance
- First real-time AI coding model from OpenAI
- Processes up to 1,000 tokens per second
- 128,000-token context window
- Powered by Cerebras Wafer Scale Engine 3
- Optimized for developers and software engineering workflows
- Limited research preview access