On September 15, 2025, Starlink went dark—briefly, but globally. Reports spiked to tens of thousands of affected users in the US, and Ukraine's front line felt it immediately because so much of their comms and drone ops ride on Starlink. Service trickled back within roughly 30–60 minutes. Starlink acknowledged a disruption and then removed the notice; no public root cause yet. That's enough of a wake-up call. If your business relies on any single provider for a critical function, this is your rehearsal.
If you need help assessing your exposure and building pragmatic redundancy (not buzzwords, actual failover that works), talk to me: dynamicdisorder.co/contact.
This isn't an indictment of Starlink. It's a reminder: all providers fail. Cloud. Telco. CDN. Satellite. Your plan has to assume that.
What happened (and why it matters to enterprises, not just armies)
Event: a brief, widespread Starlink outage; Ukraine's front line reported impact; service was restored within about an hour. (Reuters)
Context: Starlink has had previous global incidents (e.g., July 24, 2025, ~2.5 hours, tied to a "failure of core internal software services"). Outages happen for many reasons: software, operations, space weather, and plain old "we don't know yet." (AeroTime)
Solar storms are real: the May 2024 "severe" geomagnetic storms degraded GPS/HF comms and triggered official impact reviews. Satellite- and GNSS-dependent systems see measurable effects. Don't rely on "that never happens." It does. (NOAA Space Weather Prediction Center; Wikipedia)
The lesson is simple: assume a critical vendor will be intermittently unavailable, sometimes with minimal detail, sometimes at the worst possible moment.
SLAs won't save your uptime (or your P&L)
Read your SLAs and, more importantly, the master agreements. Most big-cloud/service contracts include force-majeure carve-outs. If a storm (solar or otherwise), a power-grid issue, or Internet carrier problems sit "outside their reasonable control," you're looking at service credits at best—and those credits won't cover your actual losses. AWS and Microsoft both include force-majeure protections in their standard agreements. (Amazon Web Services)
Even when credits apply, credits ≠ compensation. Many SLAs make service credits the sole and exclusive remedy—and you'll still have lost revenue, customer trust, and team hours.
Takeaway: Treat SLAs as documentation, not risk mitigation.
Downtime costs more than it used to
Two recent datapoints that matter for budget owners:
Uptime Institute (2025): 54% of surveyed operators said their most recent significant outage cost more than $100,000 in total; 20% topped $1 million (up year-on-year). Outages are becoming less frequent and less severe, but more expensive. (Data Center Dynamics)
ITIC (2024): 93% of mid-size and large enterprises estimate $300k+ per hour of downtime, and a sizable share reports $1M+/hour. This aligns with what we see in negotiations and post-mortems. (Lenovo)
If your finance team still budgets "a few thousand per hour," update the model. Reality is uglier.
RTO/RPO for connectivity: design for loss of your primary pipe
RTO (how fast we must recover) and RPO (how much data loss we can tolerate) shouldn't be abstract. Tie them to connectivity loss explicitly:
- RTO for WAN loss: Should your core product keep operating for N hours in degraded mode if your primary provider dies? N must be explicit.
- RPO for data flows: If uplinks are down, what's the acceptable lag for telemetry, payments, orders, logs? Minutes? Hours? What gets queued locally and replayed cleanly?
Solar storms, ISP issues, satellite hiccups—doesn't matter which. Your RTO/RPO posture should treat the pipe as a failure domain.
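One way to make those targets non-abstract is to encode them as data rather than prose. A minimal sketch; the flow names and numbers here are hypothetical placeholders for whatever your business actually signs off on:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConnectivityTarget:
    """Explicit recovery objectives for one data flow during WAN loss."""
    flow: str
    rto_minutes: int   # max time to restore the flow after primary-pipe loss
    rpo_minutes: int   # max tolerated data lag while the uplink is down

# Hypothetical targets -- replace with the numbers your business commits to.
TARGETS = [
    ConnectivityTarget("payments", rto_minutes=5, rpo_minutes=0),
    ConnectivityTarget("telemetry", rto_minutes=60, rpo_minutes=30),
    ConnectivityTarget("log-shipping", rto_minutes=240, rpo_minutes=240),
]

def strictest(targets):
    """The flow with the tightest RTO is what drives your failover design."""
    return min(targets, key=lambda t: t.rto_minutes)
```

The point of the exercise isn't the code; it's that every flow gets an explicit N that someone has agreed to pay for.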
Practical architecture moves (that actually reduce downtime)
Multi-path internet, not multi-wish.
Pair fiber (cheap, fast) with 5G, and optionally satellite for rural/edge sites. In Spain, retail-grade fiber can be as low as €27/mo for 600 Mbps; dedicated business circuits start much higher (e.g., €449+/mo for symmetrical dedicated links). Use retail where appropriate; pay for dedicated where justified. (O2)
If satellite is your primary (remote sites), keep terrestrial backup and test automatic failover on the router (not "we'll switch manually").
If Starlink is your backup, sanity-check plan tiers and new policy changes that affect standby/backup usage patterns. (The Verge)
Active-active or active-warm for critical services.
Run traffic across two independent egress providers with BGP or SD-WAN. If that's too heavy, use a dual-WAN edge with health-checked failover and an HA pair of firewalls/routers.
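The health-checked failover logic is simple enough to sketch in a few lines; in practice it lives in router firmware or SD-WAN policy, not a script. The `probe_*` and `switch_to` hooks below are hypothetical stand-ins for your real health checks and routing change:

```python
def failover_egress(probe_primary, probe_backup, switch_to, fail_threshold=3):
    """Sketch of a dual-WAN failover loop.

    probe_* are callables returning True when the link passes a health
    check (e.g. an HTTP GET to a known endpoint); switch_to(name) is the
    hook that actually repoints the default route. Yields the active link
    after each check so the loop can be driven and tested externally.
    """
    failures = 0
    active = "primary"
    while True:
        if probe_primary():
            failures = 0
            if active != "primary":
                active = "primary"
                switch_to("primary")   # fail back once primary is healthy again
        else:
            failures += 1
            if failures >= fail_threshold and active == "primary" and probe_backup():
                active = "backup"
                switch_to("backup")    # health-checked failover, not manual
        yield active
```

The threshold matters: failing over on a single missed probe causes flapping; waiting too long eats into your RTO.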
Local-first fallback.
- POS can cache transactions offline and sync later.
- Field apps switch to store-and-forward.
- Admin consoles expose "degraded" toggles: limited analytics, read-only workflows, queued email.
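The "cache offline and sync later" pattern above is a store-and-forward outbox. A minimal sketch using SQLite for local persistence; class and column names are illustrative, not from any particular POS product:

```python
import json
import sqlite3
import uuid

class OfflineQueue:
    """Store-and-forward sketch: persist transactions locally while the
    uplink is down, then replay them when connectivity returns."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS outbox (id TEXT PRIMARY KEY, payload TEXT)"
        )

    def enqueue(self, payload: dict) -> str:
        # Assign a stable id now, so the server can deduplicate on replay.
        tx_id = str(uuid.uuid4())
        with self.db:
            self.db.execute(
                "INSERT INTO outbox VALUES (?, ?)", (tx_id, json.dumps(payload))
            )
        return tx_id

    def replay(self, send) -> int:
        """send(tx_id, payload) must be idempotent server-side; each row is
        deleted only after a successful send, so a crash mid-replay is safe."""
        sent = 0
        for tx_id, raw in self.db.execute("SELECT id, payload FROM outbox").fetchall():
            send(tx_id, json.loads(raw))
            with self.db:
                self.db.execute("DELETE FROM outbox WHERE id = ?", (tx_id,))
            sent += 1
        return sent
```

The key design choice is assigning the transaction id at capture time, not at send time, so that a retried replay never creates a duplicate sale.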
Data survivability.
- Message queues persist locally; idempotent replays prevent duplicates.
- Edge caches (Cloudflare/NGINX) hold enough static content to keep the app usable while backends reconnect.
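"Idempotent replays prevent duplicates" means the receiving side keeps a record of message ids it has already applied. A toy sketch of that consumer side, with an in-memory seen-set standing in for what would be a database table in production:

```python
class IdempotentConsumer:
    """Sketch of the receiving side of a queue replay: duplicate deliveries
    (same message id) are acknowledged but applied only once."""

    def __init__(self):
        self.seen = set()   # in production: a persistent dedup table
        self.applied = []

    def handle(self, msg_id, payload):
        if msg_id in self.seen:
            return False     # duplicate from a retried replay -- ack, don't apply
        self.seen.add(msg_id)
        self.applied.append(payload)
        return True
```

This is what makes "queue locally, replay later" safe: the sender can retry aggressively because applying twice is harmless.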
Runbooks you've actually tested.
- "Pull fiber, watch site fail over, time to steady-state."
- "Simulate provider-wide DNS breakage."
- "Simulate cloud region egress brownout."
Record real RTO numbers, not estimates.
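Recording real numbers just means timing the drill. A sketch of a stopwatch you start the moment you pull the fiber; `is_healthy` is a hypothetical stand-in for your real probe (a synthetic transaction or HTTP check), and the clock/sleep hooks are injectable so the sketch itself is testable:

```python
import time

def measure_failover_rto(is_healthy, timeout_s=600, interval_s=1.0,
                         clock=time.monotonic, sleep=time.sleep):
    """Start right after pulling the primary link; return seconds until
    is_healthy() reports steady-state again. That number -- not an
    estimate -- goes in the runbook."""
    start = clock()
    while clock() - start < timeout_s:
        if is_healthy():
            return clock() - start   # the measured RTO for this drill
        sleep(interval_s)
    raise TimeoutError("no steady-state within timeout; that is itself a finding")
```

Run it monthly and track the trend: a failover that took 40 seconds last quarter and 4 minutes this quarter is telling you something.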
Contract posture.
Negotiate multi-provider compatibility; don't let a vendor lock you into proprietary last-mile that blocks redundancy.
Track provider incident history (Starlink's July outage had a software cause; September's remains unclear). Bake that into risk registers. (AeroTime)
Quick ROI math your CFO will accept
Let's be conservative.
Redundancy bundle (SMB/site): second fiber (€27/mo), 5G data SIM (€20/mo), dual-WAN router (€500 amortized over 36 months ≈ €14/mo). Total ≈ €61/mo → €732/year. (O2)
If a 1-hour outage costs €10,000 (well below current mid-market medians), you break even if redundancy avoids ~4.4 minutes of downtime a year (732/10,000 = 0.0732 hours).
If your realistic cost is €300,000/hour (ITIC range), you break even with ~9 seconds saved annually. (Lenovo)
The math is boring on purpose. Redundancy pays for itself with a single incident.
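The break-even arithmetic above is simple enough to hand your CFO as a one-liner; the figures plugged in are the ones from this article:

```python
def breakeven_minutes(annual_redundancy_cost_eur, outage_cost_per_hour_eur):
    """Minutes of avoided downtime per year that pay for the redundancy."""
    return annual_redundancy_cost_eur / outage_cost_per_hour_eur * 60

annual_cost = 61 * 12                              # €61/mo bundle -> €732/year
print(breakeven_minutes(annual_cost, 10_000))      # ~4.4 minutes/year
print(breakeven_minutes(annual_cost, 300_000))     # ~0.15 minutes (~9 seconds)
```

Swap in your own downtime cost per hour; for almost any realistic number the break-even point is measured in minutes, not hours.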
If you want me to build your numbers (real revenue, SLAs, penalties, payroll, refunds, churn), ping me: dynamicdisorder.co/contact.
"But my provider has an SLA." Good. Keep it—and assume it won't cover you.
Big clouds and major providers explicitly exclude events "beyond reasonable control," including storms, power, telco failures. That's standard legal language (force majeure). Read the clause; plan accordingly. (Amazon Web Services)
Also, credits ≠ cash. You'll still pay your team, refund users, and miss SLAs to your customers.
A note on sovereignty and single-vendor risk
There's a reason governments are pushing for alternatives and sovereign constellations (e.g., EU's IRIS2), and why allies talk openly about over-reliance on any one private network. Even if you're not an army, the pattern holds for enterprises: don't build a business where one company's status page is your single point of failure. (Reuters)
Implementation checklist (use it this quarter)
- Define RTO/RPO for connectivity loss specifically.
- Add a second path: fiber+5G, fiber+satellite, satellite+5G—whatever gives provider diversity.
- Put in automatic failover and test monthly.
- Add local-first modes in apps; queue + replay.
- Write degraded procedures for support, billing, ops.
- Update incident financials (real downtime cost model).
- Revisit contracts for force-majeure and exclusivity traps. (Amazon Web Services)
FAQ
Did the September 15 outage really hit Ukraine's front line?
Yes. Multiple outlets reported widespread disruption, with Ukrainian commanders saying the entire front line was affected; service returned within an hour. (Reuters; Business Insider)
Was it caused by a solar storm?
Not confirmed publicly. Solar storms do affect satellite/GNSS/HF systems; regulators asked for impact reports after the May 2024 events. Treat space weather as a credible risk, but don't assume it's the cause of this outage until there's a formal RCA. (NOAA Space Weather Prediction Center)
Do SLAs cover this?
Usually not beyond credits, and force majeure (storms, power, carrier faults) limits liability further. Read the service agreement, not just the SLA marketing page. (Amazon Web Services)
Is Starlink a bad choice for backup?
No. It can be excellent—especially in rural or disrupted areas. But treat it like any provider: pair it with an independent path, set health checks, and test failover. Also watch policy changes (e.g., standby/"pause" features) if you rely on "backup-only" usage. (The Verge)
What's a sensible small-site redundancy budget in Spain?
Retail fiber ~€27/mo (600 Mbps), 5G SIM ~€20/mo, dual-WAN edge amortized ~€14/mo ≈ €61/mo total. Dedicated business circuits can run €449+/mo if you need strict SLAs and delivery guarantees. (O2)
Target keyword plan (buyer-intent biased)
Primary keyword to target in this article: business continuity consulting (repeat naturally throughout). Supporting, mid-to-high intent terms to interlink across your site and future posts:
- business continuity consulting services
- disaster recovery consulting (DRaaS advisory)
- SLA review service / SLA penalty analysis
- downtime cost calculator (tool + article)
- RTO vs RPO for SaaS (implementation guide)
- redundant internet for business (fiber + 5G + satellite)
- multi-cloud failover architecture
These map to users who are close to action (budgeting, shortlisting vendors, seeking expert help).
Closing
If a one-hour satellite outage can rattle a war front, it can crater your quarter. The fix isn't theory: second paths, fallback modes, tested runbooks, and contracts that reflect the world you actually operate in. If you want a hands-on business continuity consulting engagement—architecture plus math plus drills—get in touch: dynamicdisorder.co/contact.
Sources: Reuters, Business Insider, Kyiv Independent (event details); Uptime Institute & ITIC (downtime costs); NOAA/SIDC/Wikipedia (space weather impacts); AWS/Microsoft (force-majeure clauses); O2, Direct-Telecom, The Verge (pricing and Starlink policy context).