Cerebras
[ LIVE STATUS ]

Is Cerebras Down?

Live status for the Cerebras AI inference platform, developer console, and Cerebras cloud services.

All Systems Operational
Live — updated just now

[ STATUS AT A GLANCE ]

Current Status: All Systems Operational
Components: 5 service areas tracked on this page
90-Day Incidents: 13 incidents reported in the last 90 days

System Components

Current status of individual Cerebras services

Llama3.1-8B: Operational (97.778% uptime over the last 90 days)
Qwen-3-235B-Instruct-2507: Operational (95.556% uptime over the last 90 days)
GPT-OSS-120B: Operational (97.778% uptime over the last 90 days)
ZAI-GLM-4.7: Operational (93.333% uptime over the last 90 days)
Developer Console: Operational (98.889% uptime over the last 90 days)
[ AUTOMATIC FAILOVER ]

Cerebras down? Route around it.

When Cerebras has issues, Bifrost automatically routes your requests to a healthy alternative provider. Zero code changes. 99.999% effective uptime.

About Cerebras

What Cerebras does, where the data on this page comes from, and recent reliability

[ ABOUT CEREBRAS ]

About Cerebras

Cerebras provides Cerebras Inference, a Developer console, and Cloud AI services. Its status is relevant for teams running specialized inference infrastructure, where throughput and availability are tightly linked to developer productivity.

This page pulls data from Cerebras's official status page to show current service health, any active incidents, and a history of recent issues — all in one view.

Cerebras Inference · Developer console · Cloud AI services

[ DATA SOURCES ]

Full incident history available

Cerebras publishes detailed component status, a full incident archive, and scheduled maintenance data through their official status page.

  • Data pulled from Cerebras's official status page (status.cerebras.ai)
  • Refreshed every 60 seconds
  • Covers Cerebras Inference, Developer console, and Cloud AI services
  • Includes full incident archive and scheduled maintenance history
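The 60-second refresh described above can be reproduced with a short script. This is a hedged sketch: it assumes status.cerebras.ai is a Statuspage-hosted site exposing the conventional public `/api/v2/status.json` summary; verify the endpoint before relying on it.

```python
import json
import urllib.request

# Assumed endpoint: Statuspage-hosted status sites conventionally publish
# a public JSON summary at /api/v2/status.json. This URL is an assumption.
STATUS_URL = "https://status.cerebras.ai/api/v2/status.json"

def parse_status(payload: dict) -> str:
    """Extract the human-readable indicator from a Statuspage-style summary."""
    return payload.get("status", {}).get("description", "unknown")

def fetch_status(url: str = STATUS_URL, timeout: float = 10.0) -> str:
    # One poll; run this on a 60-second timer to mirror the refresh cadence.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return parse_status(json.load(resp))

if __name__ == "__main__":
    print(fetch_status())
```

If the page is healthy, `parse_status` returns a string such as "All Systems Operational"; a missing or unexpected payload falls back to "unknown" rather than raising.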

[ RELIABILITY ]

Recent reliability

  • 13 incidents reported over the last 90 days.
  • Last reported incident was 5 days ago.
  • All 5 monitored components are currently operational.
  • Most frequently affected: Qwen-3-235B-Instruct-2507, Llama3.1-8B, and GPT-OSS-120B.

[ COMMON USE CASES ]

How teams use Cerebras

Cerebras status is relevant for teams using specialized inference infrastructure where throughput and availability are tightly linked to developer productivity.

High-speed inference
Model experimentation
Developer platform integrations

Incidents & Maintenance

Active incidents, scheduled maintenance, and incident history for Cerebras

Past Incidents

GLM 4.7 Partial Service Disruption

Mar 18, 2026 — Resolved Mar 18, 2026

Resolved (no severity assigned)
Resolved · Mar 18, 2026, 9:40 AM UTC

Between 10:30 PM and 11:42 PM PST on 03/17, users experienced partial degradation with glm-4.7. We have deployed a fix and the issue is now resolved.

Z.ai GLM 4.7 Service Unavailable

Mar 17, 2026 — Resolved Mar 17, 2026

Resolved (major)
ZAI-GLM-4.7
Resolved · Mar 17, 2026, 6:31 PM UTC

This incident has been resolved.

Identified · Mar 17, 2026, 12:21 PM UTC

The service is currently inaccessible. We are working urgently to restore service capabilities and will provide further updates as we make progress.

Z.ai GLM 4.7 Service Unavailable

Mar 17, 2026 — Resolved Mar 17, 2026

Resolved (major)
ZAI-GLM-4.7
Resolved · Mar 17, 2026, 12:11 PM UTC

We have deployed a fix and the issue is now resolved.

Investigating · Mar 17, 2026, 12:10 PM UTC

Between 02:29 PM PST on 03/16 and 05:00 AM PST on 03/17, users experienced service disruption with glm-4.7. We have deployed a fix and the issue is now resolved.

Investigating · Mar 17, 2026, 11:14 AM UTC

The service is currently inaccessible. We are working urgently to restore service capabilities and will provide further updates as we make progress.

Z.ai GLM 4.7 Service Unavailable

Mar 16, 2026 — Resolved Mar 16, 2026

Resolved (no severity assigned)
Resolved · Mar 17, 2026, 10:41 AM UTC

Between 02:29 PM PT and 02:20 AM PT, users experienced service unavailability with GLM-4.7, caused by a datacenter issue.

Partial Service Disruption

Mar 6, 2026 — Resolved Mar 6, 2026

Resolved (minor)
Qwen-3-235B-Instruct-2507
Resolved · Mar 6, 2026, 5:51 AM UTC

Between 4:53 UTC and 5:38 UTC, the Qwen 3 235B endpoint experienced a partial service disruption due to a transient network issue. The issue has been identified and fixed; the endpoint is operational.

Monitoring · Mar 6, 2026, 5:47 AM UTC

We identified the issue, applied a fix, and are monitoring the endpoint.

Investigating · Mar 6, 2026, 5:29 AM UTC

Qwen 235B is facing a partial service disruption. We are working to resume normal service performance and will provide further updates as we make progress.

Partial Service Disruption on zai-glm-4.7

Feb 10, 2026 — Resolved Feb 11, 2026

Resolved (minor)
ZAI-GLM-4.7
Resolved · Feb 11, 2026, 6:14 AM UTC

This incident has been resolved.

Identified · Feb 11, 2026, 12:12 AM UTC

We've identified a fix and our engineering team is deploying it now.

Investigating · Feb 10, 2026, 9:37 PM UTC

We are investigating an issue with zai-glm-4.7 where users are seeing an elevated number of 503 errors. Engineering is working on a resolution.

Partial Service Disruption on Llama-3.3-70B

Feb 10, 2026 — Resolved Feb 10, 2026

Resolved (minor)
Resolved · Feb 10, 2026, 10:47 PM UTC

This incident has been resolved.

Investigating · Feb 10, 2026, 10:47 PM UTC

Our engineering team has resolved the issue.

Investigating · Feb 10, 2026, 9:09 PM UTC

We've identified a fix and our engineering team is deploying it now.

Investigating · Feb 10, 2026, 7:52 PM UTC

We are investigating an issue with Llama-3.3-70B where users are seeing an elevated number of 503 errors. Engineering is working on a resolution.

Partial Service Disruption on Cerebras Endpoints

Feb 5, 2026 — Resolved Feb 5, 2026

Resolved (minor)
Llama3.1-8B, Qwen-3-235B-Instruct-2507, GPT-OSS-120B, ZAI-GLM-4.7, Developer Console
Resolved · Feb 5, 2026, 8:04 AM UTC

Between 7:30 PM PT and 11:00 PM PT, the inference service was partially disrupted by 502 Gateway errors across all endpoints. The issue was caused by an internal system dependency; a fix was rolled out and the service is operational across all endpoints.

Investigating · Feb 5, 2026, 6:42 AM UTC

We are investigating 502 Gateway errors on Cerebras endpoints. We are working to resume normal service performance and will provide further updates as we make progress.

Partial Service Disruption Qwen 235B

Feb 4, 2026 — Resolved Feb 5, 2026

Resolved (major)
Qwen-3-235B-Instruct-2507
Resolved · Feb 5, 2026, 2:07 AM UTC

Between 06:00 AM PT and 06:00 PM PT, users experienced service disruption with Qwen 3 235B Instruct. We have taken action to address recent changes in traffic patterns and capacity, reducing the disruption. We plan to monitor this endpoint and evaluate restoring rate limits for Pay-go users in the next week.

Investigating · Feb 5, 2026, 2:07 AM UTC

Between 06:00 AM PT and 06:00 PM PT, users experienced service disruption with Qwen 3 235B Instruct. We have taken action to address recent changes in traffic patterns and capacity, reducing the disruption. We plan to monitor this endpoint and evaluate restoring rate limits for Pay-go users in the next week.

Monitoring · Feb 4, 2026, 6:53 PM UTC

As part of recent changes in traffic patterns and capacity, we are temporarily lowering rate limits for Pay-go users to help maintain a positive experience across the board. We understand the challenges this may create for you and your users, and we sincerely apologize for the inconvenience. We plan to monitor this endpoint and evaluate restoring rate limits in the next week. Thank you for your continued partnership and understanding; please reach out to our support team with any other questions.

Investigating · Feb 4, 2026, 4:59 PM UTC

Qwen 235B performance is degraded and the endpoint is temporarily unavailable for some service tiers. We are working to resume normal service performance and will provide further updates as we make progress.

Qwen-32B Service Experiencing Degraded Performance

Jan 27, 2026 — Resolved Jan 27, 2026

Resolved (no severity assigned)
Resolved · Jan 27, 2026, 9:05 PM UTC

This incident has been resolved.

Identified · Jan 27, 2026, 5:46 PM UTC

Part of the service has been restored, and we are working to resume normal service performance.

Investigating · Jan 27, 2026, 5:18 PM UTC

Qwen 32B is currently inaccessible. We are working urgently to restore service capabilities and will provide further updates as we make progress.

Z.ai GLM 4.7 Service Unavailable

Jan 26, 2026 — Resolved Jan 26, 2026

Resolved (critical)
ZAI-GLM-4.7
Resolved · Jan 26, 2026, 10:08 PM UTC

This incident has been resolved.

Investigating · Jan 26, 2026, 10:08 PM UTC

Between 20:51 and 21:47 UTC, users experienced service disruption with GLM 4.7. We have deployed a fix and the issue is now resolved.

Monitoring · Jan 26, 2026, 9:47 PM UTC

A fix has been rolled out, and we are actively monitoring the situation.

Investigating · Jan 26, 2026, 9:19 PM UTC

Part of the service has been restored, and we are working to resume normal service performance.

Investigating · Jan 26, 2026, 8:51 PM UTC

The service is currently inaccessible. We are working urgently to restore service capabilities and will provide further updates as we make progress.

Llama 3.3 70B Service Unavailable

Jan 18, 2026 — Resolved Jan 19, 2026

Resolved (major)
Resolved · Jan 19, 2026, 12:36 AM UTC

Between 12:45 PM PT and 4:00 PM PT, users experienced service unavailability with Llama 3.3 70B, caused by a datacenter issue. We have deployed a fix and the issue is now resolved; the model endpoint is operational.

Investigating · Jan 19, 2026, 12:35 AM UTC

Between 12:45 PM PT and 4:00 PM PT, users experienced service unavailability with Llama 3.3 70B, caused by a datacenter issue. We have deployed a fix and the issue is now resolved; the model endpoint is operational.

Monitoring · Jan 19, 2026, 12:03 AM UTC

The fix has been rolled out; the service has resumed consuming traffic and is being monitored.

Identified · Jan 18, 2026, 11:52 PM UTC

The issue has been root-caused and a fix is being implemented to bring the service back up.

Investigating · Jan 18, 2026, 9:40 PM UTC

The service is currently inaccessible. We are working urgently to restore service capabilities and will provide further updates as we make progress.

Partial Service Disruption: API Key Error

Jan 17, 2026 — Resolved Jan 17, 2026

Resolved (minor)
Llama3.1-8B, Qwen-3-235B-Instruct-2507, GPT-OSS-120B, ZAI-GLM-4.7
Resolved · Jan 17, 2026, 2:01 AM UTC

Between 4:25 PM PT and 5:45 PM PT, developers experienced a minor service disruption due to an API key error. The issue has been identified and resolved; normal service operation is restored.

Investigating · Jan 17, 2026, 1:53 AM UTC

This issue is caused by an internal system dependency, and we are working to restore system performance.

Investigating · Jan 17, 2026, 1:47 AM UTC

Partial service disruption due to 401 Unauthorized API key errors. We are working to resume normal service performance and will provide further updates as we make progress.

Partial Degradation of Qwen3 32B, Llama 3.1 8B, Llama 3.3 70B, and Qwen-3-235B-Instruct-2507

Dec 12, 2025 — Resolved Dec 12, 2025

Resolved (minor)
Llama3.1-8B, Qwen-3-235B-Instruct-2507
Resolved · Dec 12, 2025, 6:04 PM UTC

Between 06:50 AM PST and 07:45 AM PST, users experienced partial degradation with the llama3.1-8b, llama-3.3-70b, qwen-3-32b, and qwen-3-235b-instruct-2507 models. We have deployed a fix and the issue is now resolved.

Investigating · Dec 12, 2025, 5:48 PM UTC

Between 06:50 AM PST and 07:45 AM PST, users experienced partial degradation with the llama3.1-8b, llama-3.3-70b, qwen-3-32b, and qwen-3-235b-instruct-2507 models. We have deployed a fix and the issue is now resolved.

Investigating · Dec 12, 2025, 5:47 PM UTC

Between 06:50 AM PST and 07:45 AM PST, users experienced partial degradation with the llama3.1-8b, llama-3.3-70b, qwen-3-32b, and qwen-3-235b-instruct-2507 models. We have deployed a fix and the issue is now resolved.

Monitoring · Dec 12, 2025, 3:58 PM UTC

We've deployed the fix and the affected service is now recovering. We are actively monitoring service performance.

Investigating · Dec 12, 2025, 3:37 PM UTC

We are continuing to investigate this issue.

+ 2 more updates

Partial Degradation of Qwen3 235B Instruct

Dec 11, 2025 — Resolved Dec 11, 2025

Resolved (minor)
Resolved · Dec 12, 2025, 4:20 AM UTC

Between 01:30 AM PT and 2:30 PM PT, users experienced partial degradation with Qwen 3 235B Instruct. We have deployed a fix and the issue is now resolved.

Identified: Caused by External Dependency

Nov 18, 2025 — Resolved Nov 18, 2025

Resolved (critical)
Llama3.1-8B, Qwen-3-235B-Instruct-2507, GPT-OSS-120B, Developer Console
Resolved · Nov 18, 2025, 3:36 PM UTC

This incident has been resolved.

Monitoring · Nov 18, 2025, 2:57 PM UTC

The platform is accessible now. We are monitoring to ensure stability.

Monitoring · Nov 18, 2025, 2:55 PM UTC

A fix has been implemented and we are monitoring the results.

Identified · Nov 18, 2025, 2:41 PM UTC

The issue has been identified and a fix is being implemented.

Investigating · Nov 18, 2025, 2:24 PM UTC

We are continuing to investigate this issue.

+ 2 more updates

Partial Service Disruption

Nov 14, 2025 — Resolved Nov 14, 2025

Resolved (minor)
Resolved · Nov 14, 2025, 6:08 PM UTC

Between 06:05 AM PST and 09:25 AM PST, users experienced partial degradation with Llama-3.3-70B. We have deployed a fix and the issue is now resolved.

Investigating · Nov 14, 2025, 5:11 PM UTC

Some features may be temporarily unavailable. We are working to resume normal service performance and will provide further updates as we make progress.

Monitoring · Nov 14, 2025, 4:34 PM UTC

We've deployed the fix and Llama-3.3-70B is now recovering. We are actively monitoring service performance.

Investigating · Nov 14, 2025, 3:48 PM UTC

From 06:05 AM, some features may be temporarily unavailable with Llama-3.3-70B. We are working to resume normal service performance and will provide further updates as we make progress.

Partial Service Disruption

Nov 8, 2025 — Resolved Nov 8, 2025

Resolved (minor)
Resolved · Nov 8, 2025, 7:40 PM UTC

We've mitigated the issue impacting ZAI-GLM-4.6, and normal performance has been restored.

Monitoring · Nov 8, 2025, 7:08 PM UTC

We are continuing to investigate the issue with ZAI-GLM-4.6. A fix has been deployed, and we are actively monitoring the situation.

Investigating · Nov 8, 2025, 4:10 PM UTC

Some features may be temporarily unavailable with ZAI-GLM-4.6. We are working to resume normal service performance and will provide further updates as we make progress.

Resolved

Nov 8, 2025 — Resolved Nov 8, 2025

Resolved (no severity assigned)
Resolved · Nov 8, 2025, 4:09 PM UTC

Between 1:30 AM PST and 2:30 AM PST, users experienced service disruption with ZAI-GLM-4.6. We have deployed a fix and the issue is now resolved.

Partial Service Disruption

Nov 5, 2025 — Resolved Nov 6, 2025

Resolved (major)
Resolved · Nov 6, 2025, 12:01 AM UTC

This incident has been resolved.

Investigating · Nov 5, 2025, 11:57 PM UTC

We are continuing to investigate this issue.

Investigating · Nov 5, 2025, 11:46 PM UTC

Some features may be temporarily unavailable. We are working to resume normal service performance and will provide further updates as we make progress.

Frequently Asked Questions

Is Cerebras down right now?

Check the status indicator at the top of this page — it pulls directly from Cerebras's official status page. If Cerebras is experiencing any issues, you'll see it reflected here.

What does this Cerebras status page track?

This page tracks Cerebras Inference, Developer console, and Cloud AI services using data from Cerebras's official status page. You can see current component health, active incidents, and a history of past issues.

How often is Cerebras status updated here?

We check Cerebras's status page every 60 seconds. How quickly issues show up here depends on how fast Cerebras updates their own official status.

Why monitor Cerebras status?

Cerebras status is relevant for teams using specialized inference infrastructure where throughput and availability are tightly linked to developer productivity.

What can I do if Cerebras goes down?

The most common approach is to set up automatic failover to an alternative provider. Bifrost is an open-source AI gateway that can route requests away from Cerebras when it's experiencing issues, keeping your application running even when a single provider has problems.
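A gateway like Bifrost applies this pattern at the proxy layer with no application changes, but the core failover idea can be sketched client-side. Everything below is a hypothetical stand-in: the provider callables, the `ProviderError` type, and the responses are illustrative, not a real Cerebras or Bifrost API.

```python
class ProviderError(Exception):
    """Raised by a provider callable when it is down or degraded (hypothetical)."""

def call_with_failover(prompt, providers):
    """Try each provider in order; return the first successful response."""
    last_err = None
    for provider in providers:
        try:
            return provider(prompt)
        except ProviderError as err:
            last_err = err  # this provider failed; fall through to the next
    raise last_err if last_err else RuntimeError("no providers configured")

# Hypothetical stand-ins: a primary returning 503s and a healthy fallback.
def primary(prompt):
    raise ProviderError("503 Service Unavailable")

def fallback(prompt):
    return f"fallback handled: {prompt}"

print(call_with_failover("ping", [primary, fallback]))  # fallback handled: ping
```

The request succeeds as long as any provider in the list is healthy; only when every provider fails does the caller see an error, which is what makes ordered failover raise effective uptime above any single provider's.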