Charged IT Solutions LLC - Notice history

Remote DDoS Protection - Operational - 100% uptime (Aug 2025 - Oct 2025)

DDoS Protection API - Operational - 100% uptime (Aug 2025 - Oct 2025)

Iron Mountain - AZS-1 - Phoenix, AZ - Operational - 100% uptime (Aug 2025 - Oct 2025)

Equinix - CH3 - Chicago, IL - Operational - 100% uptime (Aug 2025 - Oct 2025)

365 Datacenters - BC1 - Miami, FL - Operational - 100% uptime (Aug 2025 - Oct 2025)

Telehouse - Docklands - London, UK - Operational - 100% uptime (Aug 2025 - Oct 2025)

Notice history

Oct 2025

Re: Network Instability - DDoS Protection
  • Monitoring

    Subsets of customers in major metro regions such as Chicago and Miami have been experiencing short connectivity disruptions over the past few weeks. Specialized DDoS protection providers and infrastructure networks worldwide are currently facing an extraordinary threat: a massive botnet campaign known as Aisuru, the same botnet responsible for the record-breaking 22 Tbps attack against Cloudflare. This botnet is launching attacks that reach multiple terabits per second, creating congestion not only at targeted endpoints but throughout the broader Internet infrastructure, affecting transit providers and peering points that carry traffic for many networks beyond any single target.

    It's important to recognize that attacks of this magnitude affect the entire Internet ecosystem. Protection providers including OVH, GSL, CosmicGuard, NeoProtect, and others have all seen similar challenges. When attackers direct multi-terabit floods toward a mitigation provider's infrastructure, temporary disruptions can occur even with the right mitigation in place. No provider can guarantee complete immunity against attacks of this unprecedented scale.

    Services with strict latency and packet loss requirements, such as real-time gaming, VoIP, and live streaming, are especially vulnerable during these events. Even when our upstream mitigation provider reroutes traffic in response to attacks, the initial attack wave may have already caused timeouts, lag, or connection drops before mitigation takes effect. A simple probe for measuring this kind of impact is sketched at the end of this update.

    We want to assure you that maintaining service stability is our highest priority. Our team is monitoring the situation 24/7, working directly with our mitigation partner to minimize exposure and respond to evolving threats. This represents an industry-wide challenge affecting networks globally, but we remain fully committed to protecting your services and maintaining the best possible stability under these extraordinary circumstances.
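
    For customers who want to quantify this impact on their own services, the following is a minimal probe sketch. The target address, thresholds, and check interval are placeholders rather than values we publish, and it assumes a Linux/BSD-style ping whose summary output reports packet loss and average round-trip time.

    #!/usr/bin/env python3
    """Minimal latency/packet-loss probe (illustrative sketch only).

    The target, thresholds, and interval below are placeholders, not values
    published by Charged IT Solutions.
    """
    import re
    import subprocess
    import time

    TARGET = "203.0.113.1"     # placeholder: replace with your service address
    LOSS_THRESHOLD_PCT = 2.0   # placeholder alert thresholds for real-time traffic
    RTT_THRESHOLD_MS = 80.0
    INTERVAL_SEC = 60

    def probe(target: str, count: int = 20) -> tuple[float, float]:
        """Run ping and return (packet_loss_pct, avg_rtt_ms).

        Assumes summary lines such as '... 0% packet loss' and
        'rtt min/avg/max/mdev = 10.1/12.3/20.4/2.2 ms'.
        """
        out = subprocess.run(
            ["ping", "-c", str(count), "-q", target],
            capture_output=True, text=True, check=False,
        ).stdout
        loss = re.search(r"([\d.]+)% packet loss", out)
        rtt = re.search(r"= [\d.]+/([\d.]+)/", out)
        return (
            float(loss.group(1)) if loss else 100.0,
            float(rtt.group(1)) if rtt else float("inf"),
        )

    if __name__ == "__main__":
        while True:
            loss_pct, avg_rtt = probe(TARGET)
            status = "ALERT" if loss_pct > LOSS_THRESHOLD_PCT or avg_rtt > RTT_THRESHOLD_MS else "OK"
            # Replace the print with your own alerting or logging hook.
            print(f"{status}: loss={loss_pct:.1f}% avg_rtt={avg_rtt:.1f}ms")
            time.sleep(INTERVAL_SEC)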

Sep 2025

Aug 2025

Outage - IRM AZS-1
  • Postmortem

    On August 2nd at 3:19 PM MST, customers experienced a complete loss of connectivity across all services. This occurred during routine BGP configuration changes intended to enable load balancing across new upstream circuits at our Phoenix datacenter location. The outage was caused by an unexpected interaction between our BGP routing configuration and internal OSPF route advertisements, which prevented proper route installation in our core routing tables.

    All customer services requiring connectivity outside our internal network were affected for 55 minutes (3:19 PM - 4:14 PM MST). No customer data was lost, and all services automatically resumed normal operation at 4:14 PM MST.

    To prevent this type of incident from recurring, several proactive measures are being implemented. Wherever practical, future routing changes, even those not expected to cause impact, will be performed during scheduled maintenance windows that coincide with naturally low traffic periods, with mandatory staging and pre-validation procedures. Enhanced monitoring of BGP session states and routing table changes with automated alerting is being deployed, along with a formal change control process for all core routing configuration changes; an illustrative session-state check is sketched after this incident's timeline.

    We sincerely apologize for the service disruption and remain committed to continuously improving our infrastructure. Following the resolution, routing improvements were completed, and our Phoenix location now has greater capacity and resilience for customer services. The preventive measures described above will significantly reduce the likelihood of similar issues in the future. Thank you for your patience during this incident.

  • Resolved
    This incident has been resolved.
  • Identified
    During routine BGP load balancing configuration changes to bring additional capacity online at our Phoenix location, an unexpected routing table convergence issue occurred. Our team is actively working to restore BGP route advertisements and expects full service restoration within an hour. No customer actions are required at this time. Updates will be posted regularly.
  • Investigating
    We are currently investigating this incident.
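
    As a companion to the monitoring improvements described in the postmortem above, the following sketch shows one way BGP session states could be polled and alerted on. It assumes an FRR-based router where vtysh -c "show ip bgp summary json" is available; the address-family key, peer fields, and alert hook are illustrative assumptions, not a description of our production tooling.

    #!/usr/bin/env python3
    """Illustrative BGP session-state check (sketch only).

    Assumes an FRR routing daemon; the JSON layout (an "ipv4Unicast" section
    containing a "peers" map with a "state" field) varies between versions,
    so treat the parsing below as an example rather than a guarantee.
    """
    import json
    import subprocess

    def bgp_peer_states() -> dict[str, str]:
        """Return a mapping of peer address -> BGP session state."""
        raw = subprocess.run(
            ["vtysh", "-c", "show ip bgp summary json"],
            capture_output=True, text=True, check=True,
        ).stdout
        summary = json.loads(raw)
        peers = summary.get("ipv4Unicast", {}).get("peers", {})
        return {addr: info.get("state", "Unknown") for addr, info in peers.items()}

    def main() -> None:
        down = {
            addr: state
            for addr, state in bgp_peer_states().items()
            if state != "Established"
        }
        if down:
            # Replace the prints with a real alerting hook (email, pager, etc.).
            print(f"ALERT: BGP sessions not established: {down}")
        else:
            print("OK: all BGP sessions established")

    if __name__ == "__main__":
        main()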
