Lagless Status - Large Global DDoS Attacks – Incident details

All systems operational

Large Global DDoS Attacks

Resolved
Major outage
Started about 1 month ago · Lasted 18 days

Affected

Data Centers

Operational from 9:18 PM to 8:34 PM, Degraded performance from 8:34 PM to 8:39 PM, Major outage from 8:39 PM to 12:23 AM, Operational from 12:23 AM to 4:57 AM, Major outage from 4:57 AM to 6:12 PM, Operational from 6:12 PM to 9:01 PM, Major outage from 9:01 PM to 9:48 AM

Los Angeles Metro

Same status timeline as Data Centers.

LAX1 - Los Angeles, CA

Same status timeline as Data Centers.

Game Services

Same status timeline as Data Centers.

Internal Services

Same status timeline as Data Centers.
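As a rough worked example (the page lists clock times without dates, so this sketch assumes that an end time earlier than its start time rolls over to the next day), the time spent in each status in the timeline above can be tallied:

```python
from datetime import datetime, timedelta

# Status intervals as listed on the status page (dates are not shown,
# so we ASSUME a rollover to the next day whenever end <= start).
intervals = [
    ("Operational",          "9:18 PM",  "8:34 PM"),
    ("Degraded performance", "8:34 PM",  "8:39 PM"),
    ("Major outage",         "8:39 PM",  "12:23 AM"),
    ("Operational",          "12:23 AM", "4:57 AM"),
    ("Major outage",         "4:57 AM",  "6:12 PM"),
    ("Operational",          "6:12 PM",  "9:01 PM"),
    ("Major outage",         "9:01 PM",  "9:48 AM"),
]

def duration(start: str, end: str) -> timedelta:
    """Length of one interval, assuming a next-day rollover when needed."""
    fmt = "%I:%M %p"
    s, e = datetime.strptime(start, fmt), datetime.strptime(end, fmt)
    if e <= s:                     # assumed rollover to the following day
        e += timedelta(days=1)
    return e - s

# Total time the components spent in "Major outage" across these windows
outage = sum((duration(s, e) for status, s, e in intervals
              if status == "Major outage"), timedelta())
print(outage)  # 1 day, 5:46:00
```

Under that rollover assumption the listed windows alone account for over a day of major outage; the incident header's 18-day span suggests the page compresses many more intervals than it displays.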

Updates
  • Resolved
    This incident has been resolved.
  • Update

    After NeoProtect discontinued their Remote Shield product, we selected Cosmic Global as our new DDoS protection provider for the Los Angeles region. Cosmic informed us that Cloudflare had not yet whitelisted our prefixes for protection, but shortly thereafter sent a message confirming that our tunnel had been provisioned. This appeared to be a standard service delivery notice, and no disclaimer was provided to indicate that activating BGP sessions at that stage could result in issues later.

    Shortly after activation, Cosmic claimed that our prefixes were directly targeted by an attack and chose to deroute our traffic entirely, taking the Los Angeles PoP offline. No prior notice was given, and we were only informed of this action several hours after it occurred. Despite multiple requests, Cosmic was unable to provide verifiable technical evidence showing that our prefixes were the cause of the incident.

    After nearly a full day of waiting for resolution, Cosmic informed us that our only options were to wait indefinitely for Cloudflare’s approval or to terminate the contract. They have since allowed us to exit the agreement. This situation could have been avoided had Cosmic communicated the operational risks and confirmed service readiness before delivery. We view this as a lapse in diligence and transparency.

    We are now working with our upstream partners to restore GSL DDoS protection in Los Angeles and bring the PoP back online as quickly as possible.

    We appreciate everyone’s patience and understanding while we work to re-establish full service stability.

  • Identified

    Cosmic Global has decided to keep our BGP sessions shut until Cloudflare accepts our routes for protection. We are now waiting for updates from Cosmic.

    This leaves the whole PoP unreachable until Cosmic turns us back online.

  • Monitoring

    LAX has been brought back online and is now fully operational. If you face any issues, please reach out to us.

  • Update

    As of today, our previous mitigation provider, NeoProtect, has permanently discontinued their Remote Shield service after their upstream (Datapacket / CDN77) terminated all active BGP sessions and declined to re-enable service. This impacts a large portion of the hosting industry that relied on their DDoS protection network.

    While this decision was outside of our control, our network and leadership teams immediately engaged in failover procedures and transitioned all PoPs onto backup transit and a blend of temporary mitigation partners to restore service continuity.

    We are now actively deploying new, permanent protection infrastructure to ensure long-term stability and resilience across all locations. These changes are being prioritized and rolled out over the coming days.

    To be clear:

    • NeoProtect’s Remote Shield is fully discontinued and no longer part of our stack.

    • All PoPs, except LA, are currently operational under backup protection.

    • New permanent mitigation solutions are being implemented globally.

    We understand how disruptive this situation has been, and we share your frustration. Our immediate focus is ensuring that every region under our network remains stable and secure under our own control moving forward.

    Thank you for your patience and trust as we continue working through this unprecedented problem.

  • Identified
    We are continuing to work on a fix for this incident.
  • Update
    We implemented a fix and are currently monitoring the result.
  • Update

    Earlier today, our DDoS protection provider experienced a catastrophic upstream failure, resulting in their primary transit partner pulling all active BGP sessions globally. This caused widespread outages across their network, impacting not only us but many other hosting providers who rely on the same protection layer.

    Our network team and leadership immediately got on a bridge call to initiate a full failover. All of our PoPs have since been placed under failover transit and a blend of additional mitigation services to restore connectivity and maintain stability. While services are largely stabilizing, you may still notice intermittent availability as global routes continue to converge.

    The root cause was not an attack against Elcro directly, but rather against multiple prefixes announced by our DDoS provider that reached multi-terabit scale. Their upstream, CDN77 / Datapacket, responded by deactivating all BGP sessions to mitigate further risk, effectively taking their network offline.

    While this situation is entirely outside our control, our customers’ reliability and uptime are our top priority. We will be thoroughly re-evaluating our DDoS protection partners over the coming days to ensure this level of disruption cannot occur again.

    We appreciate everyone’s patience while our network and engineering teams work around the clock to maintain full stability and assess long-term solutions.

  • Monitoring

    All PoPs are back to either degraded or fully online status.

  • Update

    SGP1 is online; the only PoP still down is LAX1.

  • Update

    We have been able to get DAL1 online. LAX1 and SGP1 are mostly out of our control to fix, but we are trying to get updates as soon as possible.

  • Update

    We have been able to get ATL1 back online.

  • Update

    We have been able to get COV1 back online.

  • Update

    We have turned up secondary transit in NYC1, and the PoP is returning to online status.

  • Identified

    We are communicating with our upstreams. We do not currently have an estimate for when this issue will be resolved, but we are working as hard as possible to get it resolved.

  • Investigating

    All PoPs are affected by major DDoS attacks on our upstream provider's infrastructure.

  • Update

    We are working on gathering more information.

  • Identified

    We are continuing to work on a fix for this incident. We will update you once we have more information.

  • Monitoring
    We implemented a fix and are currently monitoring the result.
  • Identified
    We are continuing to work on a fix for this incident.
  • Monitoring
    We implemented a fix and are currently monitoring the result.
  • Update

    We are continuing to work on getting Atlanta back online.

  • Update

    We are continuing to work on these issues, implementing routing changes and debugging. We will continue to post updates as more information becomes available.

  • Update

    The locations below are being affected by frequent disconnects and timeouts. We are still working hard to resolve this and will keep updating this status throughout the duration of these events.

  • Identified

    This update relates to our Dallas location.

    In light of the recent issues we have been experiencing, we want to inform you that we will be implementing significant changes to our infrastructure over the coming weeks. These improvements are necessary to address and resolve the persistent DDoS attack issues that have been affecting our service.

    We want to be transparent with you: these infrastructure upgrades may result in additional periods of downtime as we work to strengthen our systems. However, please understand that these temporary disruptions are essential steps toward providing you with a more stable and secure service going forward.

    We are committed to keeping you informed throughout this process. You can expect regular updates regarding the progress of these improvements and any scheduled maintenance windows that may impact your access to our services.

    As a gesture of appreciation for your patience and understanding during this challenging period, we will be providing account credits to every user who has been affected by these service disruptions. Details regarding the credit amounts and distribution timeline will be shared in a separate announcement.

    We understand how frustrating these outages have been, and we genuinely appreciate your continued support while our team works diligently to implement these critical security enhancements. Your patience means everything to us, and we are doing everything in our power to ensure a more reliable experience for all of our users.

    Thank you for standing with us during this time. We will keep you updated every step of the way.

  • Monitoring
    We implemented a fix and are currently monitoring the result.
  • Identified
    We are continuing to work on a fix for this incident.
  • Investigating

    Atlanta has become unresponsive again. We are looking into this.

  • Update

    Atlanta POP should be returning to normal operations.
    We are still working on Dallas and will provide more updates as they become available.

  • Update

    We have identified further issues with our Atlanta PoP caused by the same upstream, and we are working to resolve them.

  • Update

    This has been further identified as an issue with our upstream; further details will be released when we can share them. We apologize for the issues and will continue to work hard to resolve this.

  • Update

    We are performing some network changes to further reduce the impact of this. Panels will not be available for a short amount of time while we make this change.

  • Update

    We are still working hard to resolve this along with our upstream and will provide further updates as they become available.

  • Identified

    Please follow updates from our upstream about these attacks.
    https://status.neoprotect.net/incidents/kdmtx0wk3h1l