All Systems Operational

About This Site

Welcome to the Thoughtly status page. Here, you can find system metrics and real-time updates on the operational status of our platform, APIs, voice infrastructure, and key services.

Thoughtly's state-of-the-art AI Voice Agent infrastructure is designed for enterprise-grade reliability, ensuring seamless, real-time telephony conversations with minimal latency. Our globally distributed systems are built with redundancy and failover mechanisms and backed by stringent uptime commitments to support mission-critical business operations.
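
For teams that want to watch this page programmatically rather than in a browser, the sketch below polls a machine-readable status summary and prints the overall indicator plus per-component status. The endpoint URL and JSON field names are assumptions for illustration only, not a documented Thoughtly API; check the live status page for the feed it actually exposes.

```python
# Minimal status poller (sketch). The endpoint URL and JSON fields below are
# assumptions for illustration -- consult the live status page for the actual
# machine-readable feed it provides.
import json
import urllib.request

STATUS_URL = "https://status.thoughtly.com/api/v2/summary.json"  # hypothetical endpoint


def fetch_status(url: str = STATUS_URL) -> None:
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    # Overall indicator, e.g. "All Systems Operational"
    print(data.get("status", {}).get("description", "unknown"))
    # Per-component status, e.g. "API (US East): operational"
    for component in data.get("components", []):
        print(f"{component.get('name')}: {component.get('status')}")


if __name__ == "__main__":
    fetch_status()
```

Run on a schedule (for example, from cron or a monitoring job), this kind of poller can feed alerts into an internal channel whenever a component leaves the Operational state.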

Platform: Operational (99.99% uptime, past 90 days)
API (US East): Operational (99.98% uptime, past 90 days)
API (US West): Operational (99.98% uptime, past 90 days)
API (Europe): Operational (99.98% uptime, past 90 days)
API (Asia Pacific): Operational (99.98% uptime, past 90 days)
Dashboard: Operational (100.0% uptime, past 90 days)
CDN: Operational (100.0% uptime, past 90 days)
Automations: Operational (100.0% uptime, past 90 days)
Voice Infrastructure: Operational (100.0% uptime, past 90 days)
Agent: Operational (100.0% uptime, past 90 days)
Carrier Gateway: Operational (100.0% uptime, past 90 days)
STT: Operational (100.0% uptime, past 90 days)
TTS: Operational (100.0% uptime, past 90 days)
LLM: Operational (100.0% uptime, past 90 days)
SIP: Operational (100.0% uptime, past 90 days)
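
As a reference for reading the uptime figures above, the snippet below converts a 90-day uptime percentage into the cumulative downtime it allows; for example, 99.98% over 90 days corresponds to roughly 26 minutes of downtime. This is informational arithmetic only, not a statement of Thoughtly's SLA terms.

```python
# Convert an uptime percentage over a window into an allowed-downtime budget.
# Purely informational arithmetic; not a statement of contractual SLA terms.


def downtime_budget_minutes(uptime_percent: float, window_days: int = 90) -> float:
    window_minutes = window_days * 24 * 60  # 90 days = 129,600 minutes
    return window_minutes * (1 - uptime_percent / 100)


if __name__ == "__main__":
    for pct in (100.0, 99.99, 99.98):
        print(f"{pct}% over 90 days -> {downtime_budget_minutes(pct):.1f} min of downtime")
```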
Status legend: Operational, Degraded Performance, Partial Outage, Major Outage, Maintenance.
System metric: Agent Latency (reported as a time-series chart on the live page).
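
As a rough illustration of how a latency metric like this can be summarized, the sketch below reduces a batch of latency samples to median, 95th-percentile, and maximum values. The sample values and percentile choices are assumptions for illustration, not Thoughtly's published measurement methodology.

```python
# Summarize a batch of latency samples (milliseconds) into common percentiles.
# The sample data and percentile choices are illustrative assumptions, not
# Thoughtly's published measurement methodology.
import statistics


def summarize_latency(samples_ms: list[float]) -> dict[str, float]:
    ordered = sorted(samples_ms)
    cuts = statistics.quantiles(ordered, n=100)  # 99 cut points -> percentiles 1..99
    return {
        "p50_ms": statistics.median(ordered),
        "p95_ms": cuts[94],  # 95th percentile
        "max_ms": ordered[-1],
    }


if __name__ == "__main__":
    demo = [220.0, 240.0, 310.0, 190.0, 205.0, 280.0, 400.0, 230.0]
    print(summarize_latency(demo))
```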
Mar 30, 2025

No incidents reported today.

Mar 29, 2025

No incidents reported.

Mar 28, 2025

No incidents reported.

Mar 27, 2025

No incidents reported.

Mar 26, 2025
Resolved - The issues affecting the API load balancer have been resolved, and all systems are now operational.
Mar 26, 16:19 EDT
Resolved - The API health check issues affecting multiple regions have been resolved. Services are operating normally.
Mar 26, 16:13 EDT
Resolved - Automation Start Delay - Elevated Queue Time
Mar 26, 16:12 EDT
Update - API Health Check Failures in Multiple Regions
Mar 26, 16:08 EDT
Update - API Health Check Degradation
Mar 26, 16:03 EDT
Update - API Health Check Degradation - USA Iowa
Mar 26, 15:58 EDT
Update - API Health Check Failure - APAC and USA Regions
Mar 26, 15:53 EDT
Update - API Health Check - API Asia
Mar 26, 15:48 EDT
Investigating - We are investigating an issue where automation runs are experiencing elevated queue times between launching and running the first step. This may impact automation execution times.
Mar 26, 15:47 EDT
Resolved - Automated tasks are now running as expected with normal queue times. The issue with elevated queue times between launching and running the first step has been resolved.
Mar 26, 15:45 EDT
Resolved - API Health Check Degradation - US East
Mar 26, 15:43 EDT
Update - Automation Start Delay
Mar 26, 15:40 EDT
Investigating - We are currently investigating an issue with API health check failures in the US East region. Users might experience degraded performance when accessing API services.
Mar 26, 15:38 EDT
Resolved - The previously reported issues with the API health check have been resolved. All systems are operational again.
Mar 26, 15:09 EDT
Resolved - The delay in automation start times has been resolved. Automation processes are now operating normally.
Mar 26, 15:08 EDT
Postmortem - Published
Mar 28, 17:37 EDT
Resolved - API Outage
Mar 26, 15:07 EDT
Update - We are continuing to investigate this issue.
Mar 26, 15:04 EDT
Update - API Load Balancer Health Check Degradation
Mar 26, 15:02 EDT
Investigating - We are currently investigating elevated queue times in automation runs, causing a delay between launching and running the first step.
Mar 26, 14:58 EDT
Resolved - Investigation: Elevated Queue Time for Automations Launch
Mar 26, 14:25 EDT
Investigating - We are currently investigating an issue where automation runs are experiencing an elevated queue time between launching and starting the first step. Our team is looking into the root cause and will provide updates as soon as possible.
Mar 26, 14:20 EDT
Resolved - Automation Start Delay
Mar 26, 01:36 EDT
Investigating - Automation runs are experiencing an elevated queue time between launching and running the first step. Our team is actively investigating the cause of this delay.
Mar 26, 01:11 EDT
Mar 25, 2025
Resolved - Automation Start Delay
Mar 25, 21:57 EDT
Investigating - Automation runs are experiencing an elevated queue time between launching and running the first step.
Mar 25, 21:52 EDT
Resolved - Automation Start Delay
Mar 25, 19:15 EDT
Investigating - Automation runs are experiencing an elevated queue time between launching and running the first step.
Mar 25, 19:10 EDT
Resolved - Automation Start Delay
Mar 25, 16:58 EDT
Investigating - Automation runs are experiencing an elevated queue time between launching and running the first step.
Mar 25, 16:53 EDT
Resolved - The database in the us-east4 region has returned to normal CPU utilization levels.
Mar 25, 13:23 EDT
Resolved - The CPU utilization on the database in the US East region has returned to normal levels.
Mar 25, 13:23 EDT
Resolved - The issue with elevated memory usage in our database instance in the US East region has been resolved. All systems are now operational.
Mar 25, 02:45 EDT
Mar 24, 2025
Resolved - Automation Start Delay
Mar 24, 23:05 EDT
Update - Automation Start Delay
Mar 24, 22:50 EDT
Update - Redis Memory Usage Warning
Mar 24, 22:45 EDT
Investigating - We are investigating an issue where Redis memory usage has reached elevated levels in the us-east region. This may impact performance.
Mar 24, 18:44 EDT
Resolved - Investigation: Elevated 500 Errors on API Global Load Balancer
Mar 24, 14:05 EDT
Investigating - The global load balancer for the API is currently experiencing a higher number of 500 errors than normal. Our engineering team is investigating the issue.
Mar 24, 14:00 EDT
Resolved - Replication Lag in US East Region
Mar 24, 12:41 EDT
Investigating - We are investigating elevated replication lag in the database affecting services in the US East region.
Mar 24, 12:36 EDT
Resolved - High CPU Utilization in Asia Database
Mar 24, 10:51 EDT
Investigating - We are currently investigating reports of high CPU utilization in our database systems in the Asia region. This may be impacting service performance.
Mar 24, 10:46 EDT
Resolved - Elevated CPU Utilization on Database (Asia)
Mar 24, 10:51 EDT
Investigating - The database in the Asia region is experiencing elevated CPU utilization. Our team is actively investigating the situation.
Mar 24, 10:46 EDT
Resolved - Automation Start Delay
Mar 24, 01:12 EDT
Investigating - Automation runs are experiencing an elevated queue time between launching and running the first step.
Mar 24, 01:07 EDT
Mar 23, 2025
Resolved - Automation Start Delay
Mar 23, 15:01 EDT
Investigating - Automation runs are experiencing an elevated queue time between launching and running the first step.
Mar 23, 14:56 EDT
Resolved - Automation Start Delay
Mar 23, 14:52 EDT
Investigating - Automation runs are experiencing an elevated queue time between launching and running the first step.
Mar 23, 14:47 EDT
Resolved - Automation Start Delay
Mar 23, 05:22 EDT
Investigating - Automation runs are experiencing an elevated queue time between launching and running the first step.
Mar 23, 04:32 EDT
Resolved - Automation Start Delay - Elevated Queue Time
Mar 23, 04:21 EDT
Investigating - We are currently investigating an issue where automation runs are experiencing an elevated queue time between launching and running the first step. Our team is working to identify the cause and mitigate the impact.
Mar 23, 04:06 EDT
Mar 22, 2025
Resolved - Investigation on Automation Start Delay
Mar 22, 22:51 EDT
Investigating - Automation runs are experiencing an elevated queue time between launching and running the first step.
Mar 22, 22:41 EDT
Resolved - Automation Start Delay
Mar 22, 16:36 EDT
Investigating - We are currently investigating elevated queue times for automation runs. Our team is actively looking into the issue.
Mar 22, 16:31 EDT
Resolved - Automation Start Delay
Mar 22, 08:53 EDT
Investigating - Automation runs are experiencing an elevated queue time between launching and running the first step.
Mar 22, 08:43 EDT
Resolved - Automation Start Delay
Mar 22, 08:32 EDT
Investigating - We are currently investigating an elevated queue time for automation runs, affecting their initiation and causing delays. Our team is actively working to identify and resolve the issue.
Mar 22, 08:27 EDT
Mar 21, 2025
Resolved - The critically high latency on the Deepgram cluster has been resolved. All systems are operational.
Mar 21, 11:44 EDT
Mar 20, 2025
Resolved - Deepgram Critical Latency Levels
Mar 20, 21:32 EDT
Investigating - Deepgram cluster is experiencing critically high latency.
Mar 20, 21:27 EDT
Resolved - High Latency in Deepgram Cluster
Mar 20, 16:44 EDT
Investigating - The Deepgram cluster is experiencing critically high latency, impacting performance.
Mar 20, 16:39 EDT
Resolved - Deepgram Critical Latency
Mar 20, 16:04 EDT
Investigating - Deepgram cluster is experiencing critically high latency.
Mar 20, 15:59 EDT
Resolved - Deepgram Cluster Elevated Latency
Mar 20, 15:49 EDT
Investigating - Deepgram cluster is experiencing critically high latency. The engineering team is investigating the issue.
Mar 20, 15:44 EDT
Resolved - Deepgram Cluster High Latency
Mar 20, 15:19 EDT
Investigating - Deepgram cluster is experiencing critically high latency. Our team is currently investigating the issue.
Mar 20, 15:09 EDT
Resolved - Elevated Latency on Deepgram
Mar 20, 13:39 EDT
Investigating - We are currently investigating reports of critically high latency in the Deepgram processing service, potentially impacting speech-to-text functionality.
Mar 20, 13:34 EDT
Resolved - High Latency in Voice STT Service
Mar 20, 12:26 EDT
Investigating - The voice STT service is experiencing critically high latency. Our team is actively investigating the cause and working to resolve the issue as quickly as possible.
Mar 20, 12:16 EDT
Resolved - Investigating Critical Latency in Deepgram Cluster
Mar 20, 12:14 EDT
Investigating - We are currently investigating reports of critically high latency in the Deepgram cluster, affecting voice processing services. Our team is working to identify the root cause and mitigate the impact.
Mar 20, 12:09 EDT
Resolved - Deepgram Critical Latency Levels
Mar 20, 12:04 EDT
Investigating - Deepgram cluster is experiencing critically high latency. Our engineering team is investigating the issue.
Mar 20, 11:59 EDT
Resolved - Investigating Deepgram Critical Latency Levels
Mar 20, 07:47 EDT
Investigating - Deepgram cluster is experiencing critically high latency, which may impact services relying on it. Our engineering team is currently investigating the issue.
Mar 20, 07:42 EDT
Mar 19, 2025
Resolved - The Deepgram cluster was experiencing critically high latency; the issue has been resolved and the cluster has returned to normal operation.
Mar 19, 18:20 EDT
Resolved - Deepgram Critical Latency Levels
Mar 19, 18:19 EDT
Investigating - Deepgram cluster is experiencing critically high latency.
Mar 19, 18:15 EDT
Mar 18, 2025

No incidents reported.

Mar 17, 2025
Resolved - High Latency in Deepgram Cluster
Mar 17, 17:17 EDT
Investigating - We are currently investigating critically high latency in the Deepgram cluster, which may affect service performance. Our team is working to identify the cause and mitigate the impact.
Mar 17, 17:07 EDT
Resolved - High Latency in Deepgram Service
Mar 17, 17:03 EDT
Investigating - The Deepgram cluster is experiencing critically high latency. Our team is investigating the issue.
Mar 17, 16:56 EDT
Resolved - The latency issue on the Deepgram cluster has been resolved. Services have returned to normal operation.
Mar 17, 16:50 EDT
Resolved - On-Prem Deepgram STT Latency
Mar 17, 16:40 EDT
Update - On-Prem Deepgram STT Latency
Mar 17, 16:40 EDT
Investigating - Deepgram cluster is experiencing critically high latency.
Mar 17, 16:35 EDT
Mar 16, 2025

No incidents reported.