For the first time in OpenAI’s history, its models are directly available through another major cloud provider: Amazon Web Services. The move, announced on August 5, 2025, marks a major expansion of OpenAI’s ecosystem beyond Microsoft Azure and could reshape enterprise AI deployment worldwide.
Breaking into AWS: What Changed
On August 5, 2025, AWS confirmed it was adding OpenAI’s two new open-weight reasoning models, gpt‑oss‑120b and gpt‑oss‑20b, to its Amazon Bedrock and SageMaker AI platforms—making OpenAI models directly available to AWS customers for the first time. Previously, OpenAI’s models were only accessible through Microsoft Azure or directly via OpenAI. The AWS offering now broadens enterprise access to these state-of-the-art AI tools.
Meet the Models: gpt-oss-120b and gpt-oss-20b
OpenAI’s launch included two open-weight models, the company’s first since GPT‑2. Open weight differs from traditional open source: the trained parameters themselves are published (here under the Apache 2.0 license), enabling fine‑tuning and commercial use, while the training data and code remain private.
- gpt‑oss‑120b is the larger variant, delivering performance rivaling OpenAI’s o4‑mini while fitting on a single 80 GB GPU.
- gpt‑oss‑20b is optimized for consumer-grade hardware, requiring only ~16 GB of memory, and performs similarly to o3‑mini.
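A rough back-of-envelope sketch (not from the announcement) of why those memory figures are plausible: the gpt‑oss models ship with roughly 4‑bit quantized weights, and the helper below simply converts a parameter count into gigabytes at an assumed effective bit rate. The function name and the 4.25 bits/parameter default are illustrative assumptions, not published specifications.

```python
def approx_weight_memory_gb(n_params_billion: float,
                            bits_per_param: float = 4.25) -> float:
    """Approximate memory needed for model weights alone, in GB.

    bits_per_param defaults to 4.25, a rough effective rate for
    4-bit block-quantized formats (4 data bits plus shared
    per-block scale factors). This is an assumption for
    illustration, not an official figure.
    """
    return n_params_billion * 1e9 * bits_per_param / 8 / 1e9

# ~64 GB for 120B parameters (fits on one 80 GB GPU),
# ~11 GB for 20B parameters (fits in ~16 GB of memory).
print(f"gpt-oss-120b: ~{approx_weight_memory_gb(120):.1f} GB of weights")
print(f"gpt-oss-20b:  ~{approx_weight_memory_gb(20):.1f} GB of weights")
```

Note this counts weights only; activations, the KV cache, and runtime overhead add to the real footprint, which is why the 20b model's stated requirement (~16 GB) sits above the raw weight size.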
Benchmarks show gpt‑oss‑120b outperforming DeepSeek‑R1 and comparable open models in tasks such as coding and mathematical reasoning tests—though still slightly trailing OpenAI’s top-tier o‑series models.
AWS Integration: Why It Matters
Amazon’s integration lets customers access these models directly in Bedrock and SageMaker JumpStart, with support for enterprise-grade deployment, fine-tuning, monitoring tools, and security guardrails.
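As a sketch of what that direct access looks like in practice, the snippet below builds a request for Bedrock's Converse API (boto3's `bedrock-runtime` client). The model ID string is an assumption; check the Bedrock model catalog in your region for the exact identifier.

```python
def build_converse_request(prompt: str,
                           model_id: str = "openai.gpt-oss-120b-1:0",
                           max_tokens: int = 512) -> dict:
    """Build keyword arguments for bedrock-runtime's converse() call.

    The default model_id is a guess at the Bedrock identifier for
    gpt-oss-120b; verify it against your region's model catalog.
    """
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.7},
    }

# With AWS credentials configured, the request would be sent like this:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-west-2")
#   resp = client.converse(**build_converse_request("Summarize MoE routing."))
#   print(resp["output"]["message"]["content"][0]["text"])
```

Because Converse is a model-agnostic API, swapping in gpt‑oss‑20b (or any other Bedrock-hosted model) is just a change of `model_id`, which is the practical payoff of AWS's "model choice" pitch.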
AWS CEO Matt Garman called it a “powerhouse combination,” highlighting how OpenAI’s advanced models now pair with AWS’s scale and reliability. By adding these open-weight models, AWS aims to expand its “model choice” strategy while cementing its position as a one-stop shop for AI developers.
Pricing claims are notably aggressive: AWS touts that, in Bedrock, gpt‑oss‑120b achieves up to 3× better price-performance than Google’s Gemini, 5× better than DeepSeek‑R1, and nearly twice the efficiency of OpenAI’s own o4 model.
What It Means for the Industry
This move signals a major shift for both companies:
- For OpenAI, it’s a strategic pivot: releasing models as open-weight assets under Apache 2.0 after years of closed‑source restraint. Leadership cited mounting competition from Chinese and open-source labs, along with a philosophical push to return to OpenAI’s founding mission of democratized AI access.
- For AWS, this is a breakthrough. Until now, AWS had largely offered models from other providers such as Anthropic (Claude), Meta (Llama), DeepSeek, Cohere, and Mistral. OpenAI’s arrival legitimizes Bedrock and SageMaker as platforms for hosting world-class models, and it gives enterprises an alternative to Azure-bound access to OpenAI technology.
Looking Ahead
The OpenAI models are available through Hugging Face, Databricks, Azure, and now AWS: a genuinely cross‑platform release that pairs open‑weight accessibility with enterprise integrations.
We’ll be watching how competitors respond. Meta’s Llama, Google’s Gemma, and DeepSeek’s models are now part of an increasingly crowded, high-stakes arena. AWS’s bet on OpenAI may accelerate enterprise adoption of generative AI while reshaping competitive dynamics in cloud provider alignment.
In Summary
OpenAI’s decision to release gpt‑oss‑120b and gpt‑oss‑20b as open‑weight models—and AWS’s simultaneous integration of those models—marks a pivotal moment in generative AI history. This partnership expands access, unlocks pricing efficiencies, and places OpenAI firmly within AWS’s model ecosystem for the first time. Enterprises now have broader, more flexible avenues for integrating OpenAI’s top-tier reasoning models into their own operations.