Cerebras AI inference platform, developer console, and Cerebras cloud services.
[ STATUS AT A GLANCE ]
Current status of individual Cerebras services
When Cerebras has issues, Bifrost automatically routes your requests to a healthy alternative provider. Zero code changes. 99.999% effective uptime.
What Cerebras does, where the data on this page comes from, and recent reliability
[ ABOUT CEREBRAS ]
Cerebras provides Cerebras Inference, Developer console, and Cloud AI services — specialized inference infrastructure where throughput and availability are tightly linked to developer productivity.
This page pulls data from Cerebras's official status page to show current service health, any active incidents, and a history of recent issues — all in one view.
[ DATA SOURCES ]
Cerebras publishes detailed component status, a full incident archive, and scheduled maintenance data through their official status page.
[ RELIABILITY ]
[ COMMON USE CASES ]
Cerebras status is relevant for teams using specialized inference infrastructure where throughput and availability are tightly linked to developer productivity.
Active incidents, scheduled maintenance, and incident history for Cerebras
Mar 18, 2026 — Resolved Mar 18, 2026
Between 10:30 PM and 11:42 PM PST on 03/17, users experienced partial degradation with glm-4.7. We have deployed a fix and the issue is now resolved.
Mar 17, 2026 — Resolved Mar 17, 2026
This incident has been resolved.
The service is currently inaccessible. We are currently working urgently to restore service capabilities. We will provide further updates as we make progress.
Mar 17, 2026 — Resolved Mar 17, 2026
We have deployed a fix and the issue is now resolved.
Between 03/16 02:29 PM PST and 03/17 05:00 AM PST, users experienced service disruption with glm-4.7. We have deployed a fix and the issue is now resolved.
The service is currently inaccessible. We are currently working urgently to restore service capabilities. We will provide further updates as we make progress.
Mar 16, 2026 — Resolved Mar 16, 2026
Between 02:29 PM PT and 02:20 AM PT, users experienced service unavailability with GLM-4.7, caused by a datacenter issue.
Mar 6, 2026 — Resolved Mar 6, 2026
Between 4:53 and 5:38 UTC, the Qwen 3 235B endpoint experienced a partial service disruption due to a transient network issue. The issue has been identified and fixed, and the endpoint is operational.
We identified the issue, applied a fix and are monitoring the endpoint.
Qwen 235B is facing partial service disruption. We are currently working to resume normal service performance. We will provide further updates as we make progress.
Feb 10, 2026 — Resolved Feb 11, 2026
This incident has been resolved.
We've identified a fix and our engineering team is deploying this now.
We are currently investigating an issue with zai-glm-4.7 where users are seeing an elevated number of 503 errors. Engineering is working on a resolution.
Feb 10, 2026 — Resolved Feb 10, 2026
This incident has been resolved.
Our engineering team has resolved the issue.
We've identified a fix and our engineering team is deploying this now.
We are currently investigating an issue with Llama-3.3-70B where users are seeing an elevated number of 503 errors. Engineering is working on a resolution.
Feb 5, 2026 — Resolved Feb 5, 2026
Between 7:30 PM and 11 PM PT, the inference service was partially disrupted by 502 Gateway errors across all endpoints. This issue was caused by an internal system dependency; the fix was rolled out and the service is operational across all endpoints.
We are investigating 502 Gateway errors on Cerebras endpoints. We are currently working to resume normal service performance. We will provide further updates as we make progress.
Feb 4, 2026 — Resolved Feb 5, 2026
Between 06:00 AM PT and 06:00 PM PT, users experienced service disruption with Qwen 3 235B Instruct. We have taken action to address recent changes in traffic patterns and capacity, reducing the disruption. We plan to monitor this endpoint and evaluate restoring rate limits in the next week for Pay Go users.
As part of recent changes in traffic patterns and capacity, we are temporarily turning down rate limits for Pay-go users to help maintain a positive experience across the board. We understand the challenges this may create for you and your users, and we sincerely apologize for the inconvenience. We plan to monitor this endpoint and evaluate restoring rate limits in the next week. Thank you for your continued partnership and understanding, please reach out to our support team for any other questions.
Qwen 235B performance is degraded and temporarily unavailable for some service tiers. We are currently working to resume normal service performance. We will provide further updates as we make progress.
Jan 27, 2026 — Resolved Jan 27, 2026
This incident has been resolved.
Part of the service has been restored, and we are currently working to resume normal service performance.
Qwen 32B is currently inaccessible. We are currently working urgently to restore service capabilities. We will provide further updates as we make progress.
Jan 26, 2026 — Resolved Jan 26, 2026
This incident has been resolved.
Between 20:51 and 21:47 UTC, users experienced service disruption with GLM 4.7. We have deployed a fix and the issue is now resolved.
A fix has been rolled out, and we are actively monitoring the situation.
Part of the service has been restored, and we are currently working to resume normal service performance.
The service is currently inaccessible. We are currently working urgently to restore service capabilities. We will provide further updates as we make progress.
Jan 18, 2026 — Resolved Jan 19, 2026
Between 12:45 PM and 4:00 PM PT, users experienced service unavailability with Llama 3.3 70B, caused by a datacenter issue. We have deployed a fix, the issue is now resolved, and the model endpoint is operational.
The fix has been rolled out and the service has resumed consuming traffic and being monitored.
The issue has been root-caused and a fix is being implemented to bring the service back up.
The service is currently inaccessible. We are currently working urgently to restore service capabilities. We will provide further updates as we make progress.
Jan 17, 2026 — Resolved Jan 17, 2026
Between 4:25 PM and 5:45 PM PT, developers experienced a minor service disruption due to an API key error. The issue has been identified and resolved, and normal service operation is restored.
This issue is caused by an internal system dependency, and we are currently working to restore system performance.
Partial service disruption due to 401 Unauthorized API key errors. We are currently working to resume normal service performance. We will provide further updates as we make progress.
Dec 12, 2025 — Resolved Dec 12, 2025
Between 06:50 AM and 07:45 AM PST, users experienced partial degradation with the llama3.1-8b, llama-3.3-70b, qwen-3-32b, and qwen-3-235b-instruct-2507 models. We have deployed a fix and the issue is now resolved.
We've deployed the fix and affected service is now recovering. We are actively monitoring service performance.
We are continuing to investigate this issue.
+ 2 more updates
Dec 11, 2025 — Resolved Dec 11, 2025
Between 01:30 AM PT and 2:30 PM PT, users experienced partial degradation with Qwen 3 235B Instruct. We have deployed a fix and the issue is now resolved.
Nov 18, 2025 — Resolved Nov 18, 2025
This incident has been resolved.
The platform is accessible now. We are monitoring to ensure stability.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are continuing to investigate this issue.
+ 2 more updates
Nov 14, 2025 — Resolved Nov 14, 2025
Between 06:05 AM and 09:25 AM PST, users experienced partial degradation with Llama-3.3-70B. We have deployed a fix and the issue is now resolved.
Some features may be temporarily unavailable. We are currently working to resume normal service performance. We will provide further updates as we make progress.
We've deployed the fix and Llama-3.3-70B is now recovering. We are actively monitoring service performance.
From 06:05 AM, some features may be temporarily unavailable with Llama-3.3-70B. We are currently working to resume normal service performance. We will provide further updates as we make progress.
Nov 8, 2025 — Resolved Nov 8, 2025
We’ve mitigated the issue impacting ZAI-GLM-4.6, and normal performance has been restored.
We are continuing to investigate the issue with ZAI-GLM-4.6. A fix has been deployed, and we are actively monitoring the situation.
Some features may be temporarily unavailable with ZAI-GLM-4.6. We are currently working to resume normal service performance. We will provide further updates as we make progress.
Nov 8, 2025 — Resolved Nov 8, 2025
Between 1:30 AM and 2:30 AM PST, users experienced service disruption with ZAI-GLM-4.6. We have deployed a fix and the issue is now resolved.
Nov 5, 2025 — Resolved Nov 6, 2025
This incident has been resolved.
We are continuing to investigate this issue.
Some features may be temporarily unavailable. We are currently working to resume normal service performance. We will provide further updates as we make progress.
Check the status indicator at the top of this page — it pulls directly from Cerebras's official status page. If Cerebras is experiencing any issues, you'll see it reflected here.
This page tracks Cerebras Inference, Developer console, and Cloud AI services using data from Cerebras's official status page. You can see current component health, active incidents, and a history of past issues.
We check Cerebras's status page every 60 seconds. How quickly issues show up here depends on how fast Cerebras updates their own official status.
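If you want to run the same kind of check yourself, status pages hosted on Atlassian Statuspage expose a machine-readable JSON summary. Assuming Cerebras's status page follows that convention (the hostname below is a placeholder, not a confirmed Cerebras endpoint), a minimal poller might look like:

```python
# Minimal status poller sketch. Assumes a Statuspage-style API at
# https://<status-host>/api/v2/status.json; the host below is a
# placeholder, not a confirmed Cerebras endpoint.
import json
import urllib.request

STATUS_URL = "https://status.example.com/api/v2/status.json"  # placeholder host

def fetch_status(url: str = STATUS_URL, timeout: float = 5.0) -> str:
    """Return the overall indicator, e.g. 'none', 'minor', 'major', 'critical'."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        payload = json.load(resp)
    return payload["status"]["indicator"]

def is_healthy(indicator: str) -> bool:
    # Statuspage reports 'none' when all components are operational.
    return indicator == "none"
```

Polling this on a 60-second interval (as this page does) keeps you within typical status-page rate limits while still surfacing incidents quickly.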
Cerebras status is relevant for teams using specialized inference infrastructure where throughput and availability are tightly linked to developer productivity.
The most common approach is to set up automatic failover to an alternative provider. Bifrost is an open-source AI gateway that can route requests away from Cerebras when it's experiencing issues, keeping your application running even when a single provider has problems.
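Bifrost handles this routing at the gateway layer, but the underlying idea can be sketched client-side as a simple provider chain. The provider functions below are illustrative placeholders, not a real Bifrost or Cerebras API:

```python
# Client-side failover sketch: try providers in order, moving to the
# next one when a call raises. Provider names and call logic are
# placeholders, not a real Bifrost or Cerebras API.
from typing import Callable, Sequence

class AllProvidersFailed(Exception):
    """Raised when every provider in the chain errors out."""

def call_with_failover(providers: Sequence[Callable[[str], str]], prompt: str) -> str:
    """Try each provider in order; return the first successful response."""
    errors: list[Exception] = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # any failure triggers failover to the next provider
            errors.append(exc)
    raise AllProvidersFailed(errors)

def failing_primary(prompt: str) -> str:
    # Stand-in for a provider returning 5xx errors during an incident.
    raise RuntimeError("503 Service Unavailable")

def healthy_fallback(prompt: str) -> str:
    # Stand-in for a healthy alternative provider.
    return f"fallback answered: {prompt}"

print(call_with_failover([failing_primary, healthy_fallback], "hello"))
```

Doing this at a gateway rather than in application code is what makes the "zero code changes" claim possible: the application keeps calling one endpoint, and the gateway decides which provider actually serves the request.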