Cloud provider status pages are often slow to update and filtered by internal thresholds or approval delays, so real issues can go unacknowledged for hours, if they are acknowledged at all. They are also often hosted on the very services they track, leaving them vulnerable to the outages they are meant to report. Cloud Looking Glass offers a real-time, independent view of cloud availability using redundant test accounts and globally distributed test agents. Hosted outside any provider, we report what we see, without thresholds or approvals, and manually verify events for accuracy.
We approach cloud observability as a time-series problem, recognizing that public cloud environments are fast-moving and constantly shifting. To capture this, we repeat every test operation every five minutes, currently totaling 5,250 tests across 26 cloud providers. This frequency and breadth let us detect widespread events as well as transient issues that often go unnoticed.
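As an illustration of that cadence, the sketch below repeats a small test plan on a fixed five-minute interval and emits one time-series data point per operation per cycle. The run_test and store_result helpers and the operation names are hypothetical placeholders, not our actual tooling.

```python
# Minimal sketch of a five-minute test cycle; run_test and store_result are
# toy placeholders standing in for real cloud test operations and the
# time-series store, not the production implementation.
import random
import time
from dataclasses import dataclass

INTERVAL_SECONDS = 300  # each test operation repeats on a five-minute cadence


@dataclass
class TestResult:
    provider: str      # e.g. "aws"
    operation: str     # e.g. "zonal-ping" or "object-put" (illustrative names)
    timestamp: float   # epoch seconds when the test ran
    success: bool      # did the operation complete?
    latency_ms: float  # measured round-trip / operation latency


def run_test(provider: str, operation: str) -> TestResult:
    """Placeholder: pretend to run one test operation against one provider."""
    return TestResult(provider, operation, time.time(), True, random.uniform(5, 50))


def store_result(result: TestResult) -> None:
    """Placeholder: append the data point to a time-series store."""
    print(result)


def test_loop(test_plan: list[tuple[str, str]]) -> None:
    """Repeat every (provider, operation) pair on a fixed interval, emitting
    one time-series data point per pair per cycle."""
    while True:
        cycle_start = time.monotonic()
        for provider, operation in test_plan:
            store_result(run_test(provider, operation))
        # Sleep only for the remainder of the interval so the cadence stays fixed.
        elapsed = time.monotonic() - cycle_start
        time.sleep(max(0.0, INTERVAL_SECONDS - elapsed))
```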
This project is in its early stages. Full networking, control plane, and data plane testing is live in AWS us-east-1. We also run networking tests across 25 other providers, with full testing planned for Google Cloud and Azure soon.
[Cloud Looking Glass live test runner: currently running 1,048 tests / minute. Run a real-time zonal, regional, cross-region, cross-cloud or last-mile network test.]
Matrix & Dashboard Quickstarts
A Matrix displays tables in which one axis lists the services tested and the other the test operations, enabling quick uptime and latency/RTT comparisons across providers, services, and operations. You can apply a quartile heatmap to spot performance differences at a glance.
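To show the idea behind the quartile heatmap, here is a minimal sketch that buckets each cell's latency into the quartile it falls in across a column; the sample values, bucket names, and palette are illustrative assumptions, not our rendering code.

```python
# Minimal sketch of quartile-heatmap shading for one Matrix column.
import statistics


def quartile_bucket(value_ms: float, samples: list[float]) -> int:
    """Return 0-3 depending on which quartile of `samples` the value falls in."""
    q1, q2, q3 = statistics.quantiles(samples, n=4)  # 25th, 50th, 75th percentiles
    if value_ms <= q1:
        return 0  # fastest quartile
    if value_ms <= q2:
        return 1
    if value_ms <= q3:
        return 2
    return 3      # slowest quartile


# Example: shade each cell in a latency column by its quartile (toy data).
latencies = {"provider-a": 12.4, "provider-b": 18.9, "provider-c": 35.2, "provider-d": 71.0}
palette = ["green", "yellow-green", "orange", "red"]
for provider, latency in latencies.items():
    shade = palette[quartile_bucket(latency, list(latencies.values()))]
    print(f"{provider}: {latency} ms -> {shade}")
```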
A Dashboard provides a real-time view of an individual cloud test operation. Each dashboard includes uptime and latency/RTT summaries, historical availability and outages, a live time-series graph, and a latency/RTT distribution box plot.
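The summaries behind a dashboard reduce to a handful of statistics over the recorded test results. The sketch below computes the five-number summary that backs a box plot and an uptime percentage from success flags; the sample data and names are illustrative assumptions.

```python
# Minimal sketch of dashboard summary statistics over recorded test results.
import statistics
from dataclasses import dataclass


@dataclass
class BoxPlotSummary:
    minimum: float
    q1: float
    median: float
    q3: float
    maximum: float


def summarize_latency(samples_ms: list[float]) -> BoxPlotSummary:
    """Five-number summary backing the latency/RTT distribution box plot."""
    q1, median, q3 = statistics.quantiles(samples_ms, n=4)
    return BoxPlotSummary(min(samples_ms), q1, median, q3, max(samples_ms))


def uptime_percent(successes: list[bool]) -> float:
    """Share of test runs that completed successfully."""
    return 100.0 * sum(successes) / len(successes)


# Example with synthetic samples.
samples = [11.2, 12.0, 12.4, 13.1, 14.8, 52.0, 12.9, 12.2]
print(summarize_latency(samples))
print(f"uptime: {uptime_percent([True] * 287 + [False]):.2f}%")
```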
Both are generated from our continuous cloud testing — run every five minutes across multiple providers — and can be created on demand or explored from curated examples below.