ChatGPT is having a partial outage

🗓️ 2025-06-10 17:52


OpenAI experienced a partial outage on Tuesday morning that created some issues for people trying to access ChatGPT, Sora, and the API, the company said on its status page.

The company started investigating the issues late on Monday night, but the partial outage has persisted through Tuesday morning. Around 5:30 a.m. PT on Tuesday, OpenAI says it identified the issue and started working to fix it.

However, at roughly 8 a.m. PT, OpenAI said full recovery across its services may take “another few hours,” meaning people logging on to work on the West Coast of the United States won’t be able to access ChatGPT this morning.

We are observing elevated error rates and latency across ChatGPT and the API. Our engineers have identified the root cause and are working as fast as possible to fix the issue. For updates see our status page: https://t.co/oUGSSyltRU

While ChatGPT outages typically last just a few hours, Tuesday’s partial outage has been notably long-lasting. The company said people may experience “elevated errors and latency” when using ChatGPT.

When TechCrunch tried to access GPT-4o in ChatGPT on Tuesday morning, the chatbot responded with an error reading “Too many concurrent requests.”

Tuesday’s partial outage comes amid a flurry of announcements from OpenAI. At Apple’s WWDC event on Monday, the iPhone maker revealed deeper integrations with OpenAI’s models. Also on Monday, an OpenAI spokesperson confirmed to CNBC that the company has reached $10 billion in annualized recurring revenue. Earlier on Tuesday, OpenAI CEO Sam Altman revealed an 80% price cut for developers trying to access its o3 AI reasoning models in its API.

OpenAI has previously run into issues scaling the usage of its AI models to hundreds of millions of people, but that’s exactly what the company needs to do to meet its grand ambitions. Altman has previously said the company’s “GPUs are melting” to keep up with the demand for ChatGPT, indicating that OpenAI’s computing resources are spread thin these days. It seems that demand for OpenAI’s models is only continuing to rise.

© 2025 TechCrunch Media LLC.
