OpenAI's Leaked Secret: GPT-5.4's Unexpected Arrival
Hold onto your hats, folks! We were still digesting GPT-5.3, but OpenAI appears to have inadvertently revealed its successor: GPT-5.4. And we've got the inside scoop.
On a routine Monday evening, a security-block error message in Codex surfaced a mysterious model name: gpt-5.4-ab-arm1-1020-1p-codexswic-ev3. Yes, it looks like a password, but the key detail is the '5.4'.
Just three weeks ago, GPT-5.3-Codex made its debut as OpenAI's first 'High Cybersecurity Capability' model. And now, its successor is casually popping up in error logs, leaving us wondering what's going on.
This wasn't a one-time glitch. Here's the evidence:
- Two pull requests in OpenAI's public Codex GitHub repo mentioned GPT-5.4: one set a minimum model version of (5, 4), and another added a 'Fast mode' toggle.
- Both pull requests were hastily removed, but not before an OpenAI employee shared a screenshot, only to delete it later.
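That '(5, 4)' minimum-version gate can be pictured as a simple tuple comparison. The sketch below is purely illustrative: the function names and the parsing convention are assumptions, not code from OpenAI's repo.

```python
# Hypothetical sketch of a minimum-model-version gate like the one the
# leaked pull request hinted at. All names here are invented.

MIN_MODEL_VERSION = (5, 4)  # the (major, minor) floor reportedly added


def parse_version(model_name: str) -> tuple[int, int]:
    """Extract (major, minor) from a name like 'gpt-5.4-ab-arm1-...'."""
    version_part = model_name.split("-")[1]  # e.g. '5.4'
    major, minor = version_part.split(".")
    return (int(major), int(minor))


def meets_minimum(model_name: str) -> bool:
    """Tuples compare element-wise, so (5, 3) < (5, 4) < (6, 0)."""
    return parse_version(model_name) >= MIN_MODEL_VERSION


print(meets_minimum("gpt-5.4-ab-arm1-1020-1p-codexswic-ev3"))  # True
print(meets_minimum("gpt-5.3-codex"))  # False
```

Tuple comparison is the usual trick for version floors, since it avoids the string-comparison pitfall where "5.10" sorts before "5.4".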
With five GPT-5 variants in seven months, it's like we're on a rollercoaster ride. At this rate, GPT-5.9 might arrive before your next quarterly goals!
But what does this model name even mean? Let's decode it:
- gpt-5.4: A new addition to the GPT-5 family, indicating a minor version update.
- ab: Likely an A/B testing bucket, meaning some users are part of an experiment.
- arm1: Could be experiment arm 1, consistent with the 'ab' bucket before it, or a reference to the hardware cluster, similar to what Codex CLI users on arm64 Macs have encountered.
- 1020: An internal build ID, like a release bundle.
- 1p: Possibly 'first-party' (a common industry abbreviation), or a 'one-pass' inference system.
- codexswic: A Codex-specific routing profile, with 'swic' being internal jargon.
- ev3: Experiment variant 3, indicating active testing.
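The breakdown above treats the leaked string as hyphen-delimited segments, which we can make concrete with a tiny parser. The segment labels are this article's speculation, not confirmed semantics; only the string itself appeared in the error message.

```python
# Split the leaked identifier into hyphen-delimited segments and attach
# the speculative labels from the breakdown above. Meanings are guesses.

LEAKED_NAME = "gpt-5.4-ab-arm1-1020-1p-codexswic-ev3"

SPECULATED_MEANINGS = [
    "model family and version",
    "A/B testing bucket",
    "hardware cluster",
    "internal build ID",
    "inference system tag",
    "Codex-specific routing profile",
    "experiment variant",
]


def decode(name: str) -> dict[str, str]:
    """Map each segment of the model name to its speculated meaning."""
    family, version, *rest = name.split("-")
    segments = [f"{family}-{version}", *rest]  # rejoin 'gpt' + '5.4'
    return dict(zip(segments, SPECULATED_MEANINGS))


for segment, meaning in decode(LEAKED_NAME).items():
    print(f"{segment:12} -> {meaning}")
```

If the pattern holds, other strings users have seen in Codex errors should decompose the same way, with only the version, build ID, and variant fields changing.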
And here's where it gets controversial. Multiple users have reported similar strings in Codex errors, implying this is the model Codex uses after routing through capacity pools. But is it really a new model, or just an experimental tweak?
I gave it a test drive, and the results were intriguing. Responses seemed more comprehensive, catching details it had previously missed. But was it the model or my excitement playing tricks? I'm cautiously optimistic, but more testing is needed.
Why the jump from 5.3 to 5.4? Our theory: 5.3 might have been a stability update, while 5.4 focuses on performance. The 'Fast mode' mention hints at OpenAI's plans for different latency tiers or inference pipelines.
OpenAI's strategy is evolving. Gone are the days of grand annual releases. Now, we're witnessing rapid minor version deployments, blurring the lines between product launches and continuous DevOps.
The big reveal? Major models are no longer unveiled with fanfare. They quietly evolve, constantly and incrementally. So, if you're eagerly awaiting the next GPT announcement, it might already be in your hands.
Note: This story first appeared in our sister publication, The Neuron. Subscribe to their newsletter for more insights.
What do you think? Is this a genuine leak or a clever marketing strategy? Share your thoughts in the comments below! Are you excited for the future of AI models, or does this rapid evolution spark concerns? Let's discuss!