NVIDIA has distributed Codex—OpenAI's code generation system built on GPT-5.5—to 40,000 engineers and researchers. The deployment, running on NVIDIA's GB200 and GB300 infrastructure, is shifting how the company ships production systems and runs machine learning research.
The gains are material. Dennis Hannusch, a senior software engineer on NVIDIA's coding agents team, reports that Codex with GPT-5.5 "surfaces bugs and gaps in my program that other models weren't able to find." Hannusch has used the tool to evolve an internal platform from an MVP into a production-ready system—work that had proven difficult with earlier models.
One concrete example: NVIDIA teams built an internal podcast recording app (similar to Riverside) in hours using Codex. Given the company's privacy constraints, Hannusch notes, "it would have taken us weeks to procure software." Using its computer-interaction capabilities, the Codex desktop app tested the video and audio recording functionality as it was built. "I didn't have to do anything—it was built and tested completely autonomously," he says.
For research teams, Codex has largely automated the research loop. Shaunak Joshi, an AI researcher at NVIDIA, reports a "10x speed improvement just in terms of running experiments, because it's able to handle the whole end-to-end machine learning research workflow." Joshi's team points Codex at large corpora of papers in areas like reinforcement learning, and GPT-5.5 traces evidence across the corpus, then builds knowledge graphs to visualize how concepts tie together.
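The knowledge-graph step can be pictured as indexing concept-relation-concept triples extracted from papers and then walking the links between concepts. This is a minimal sketch under assumed inputs: the triples, concept names, and helper functions below are illustrative, not NVIDIA's actual pipeline or data.

```python
from collections import defaultdict

# Hypothetical triples a model might extract from an RL paper corpus;
# these concepts and relations are illustrative examples only.
triples = [
    ("Q-learning", "is_a", "value-based RL"),
    ("PPO", "is_a", "policy-gradient RL"),
    ("PPO", "improves_on", "TRPO"),
    ("value-based RL", "part_of", "reinforcement learning"),
    ("policy-gradient RL", "part_of", "reinforcement learning"),
]

def build_graph(triples):
    """Index triples as an adjacency map: concept -> [(relation, concept)]."""
    graph = defaultdict(list)
    for src, rel, dst in triples:
        graph[src].append((rel, dst))
    return graph

def neighbors(graph, concept):
    """Concepts directly linked from `concept`, ignoring relation labels."""
    return [dst for _, dst in graph.get(concept, [])]

graph = build_graph(triples)
print(neighbors(graph, "PPO"))  # -> ['policy-gradient RL', 'TRPO']
```

From an adjacency map like this, a visualization layer can render nodes and labeled edges to show how concepts across the corpus tie together.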
Codex's SSH support means researchers no longer log into remote hosts by hand; Joshi can launch large machine learning workloads directly from his laptop. Another use case emerging at NVIDIA: teams are handing Python codebases to GPT-5.5 for automated translation into Rust, reporting roughly 20x efficiency gains.
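The manual loop that SSH support replaces looks roughly like the sketch below: compose a command, connect to a remote host, and run the workload there. This is a hypothetical illustration; the host name, working directory, and training command are placeholders, not NVIDIA's setup.

```python
import shlex
import subprocess

def launch_remote(host: str, workdir: str, train_cmd: str, dry_run: bool = False):
    """Compose and (optionally) run a training command on a remote host over SSH.

    Illustrative sketch of the manual remote-execution step that an
    agent with SSH support can automate; all arguments are placeholders.
    """
    remote = f"cd {shlex.quote(workdir)} && {train_cmd}"
    argv = ["ssh", host, remote]
    if dry_run:
        # Return the command for inspection without opening a connection.
        return argv
    return subprocess.run(argv, capture_output=True, text=True)

# Usage: inspect the command that would be run, without connecting.
print(launch_remote("gpu-node-01", "~/experiments", "python train.py --epochs 3",
                    dry_run=True))
```

An agent driving this loop can also watch the command's output and react to failures, which is what removes the need to keep a terminal session open on the remote machine.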
Hannusch's closing remark captures the inflection: "Codex has completely changed the threshold for what's worth building."