On August 15th, 2013, a DDoS attack targeted GitHub.com. The attack was reported by many news organizations; see, for example, “GitHub code repository rocked by ‘very large DDoS’ attack” by Jack Clark of The Register. The GitHub Status Page from that day also provides additional timing for the DDoS attack: at 15:47 UTC the status page reports “a major service outage,” and at 15:50 UTC it states: “We are working to mitigate a large DDoS. We will provide an update once we have more information.”
ThousandEyes was monitoring GitHub through public agents before and during the attack, and we captured the coordinated effort by GitHub, Rackspace and their ISPs to fend off the attack. While multiple ISPs were involved in the effort, for the sake of simplicity we will focus primarily on Level 3 Communications and AboveNet. They used different techniques to counter the attack, and ThousandEyes Deep Path Analysis helped us understand how the attack and defense evolved over time.
The view above shows the end-to-end loss from several of our public agents around the world when testing the GitHub site. At 15:40 UTC the loss was at 16.7% worldwide; five minutes later, at 15:45 UTC, the loss had increased to 100% from all locations. This is typically an indication that something is wrong at the network level. To better understand where the problem was occurring, we need to go to the Path Visualization view.
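The kind of aggregation behind that view can be sketched as follows. This is a hypothetical illustration in Python: the agent names, loss figures and thresholds are made up for the example and are not ThousandEyes data or code.

```python
# Hypothetical sketch of aggregating per-agent end-to-end loss measurements
# into a worldwide figure, as an agent-based monitoring view might do.
# Agent names and loss percentages below are illustrative only.

def worldwide_loss(agent_loss: dict[str, float]) -> float:
    """Average end-to-end loss (percent) across all agents."""
    return sum(agent_loss.values()) / len(agent_loss)

def classify(agent_loss: dict[str, float], outage_threshold: float = 100.0) -> str:
    """Rough classification of the measurement snapshot."""
    if all(loss >= outage_threshold for loss in agent_loss.values()):
        return "total outage"            # loss from every location
    if worldwide_loss(agent_loss) > 10.0:
        return "network-level problem"   # elevated loss; inspect path data
    return "healthy"

# 15:40 UTC: partial loss, varying by location (illustrative numbers)
before = {"London": 0.0, "Tokyo": 33.3, "Dallas": 16.7, "Zurich": 16.7}
# 15:45 UTC: complete loss from every agent
during = {"London": 100.0, "Tokyo": 100.0, "Dallas": 100.0, "Zurich": 100.0}

print(round(worldwide_loss(before), 1), classify(before))  # 16.7 network-level problem
print(round(worldwide_loss(during), 1), classify(during))  # 100.0 total outage
```

The jump from partial, uneven loss to 100% everywhere is exactly the signature that pushes the investigation from the aggregate view down into per-hop path data.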
In August, GitHub was hosted by Rackspace, and from our agents we were able to identify four different upstream ISPs connected to Rackspace carrying GitHub traffic. While we will focus on Level 3 Communications and AboveNet, we also monitored provider links from Qwest (CenturyLink). Their configuration has changed significantly since this event in August.
Case #1: Level 3 Communications
Let’s take a step back and look at the state of the network before the DDoS attack on GitHub, as shown in Figure 2. This is what the Path Visualization looked like earlier in the day on the 15th for traffic using the Level 3 Communications network.
Moving forward to 15:40 UTC, we see nodes with loss, circled in red, in the Path Visualization; specifically, there is loss inside of Level 3 and Rackspace. Red nodes indicate packet loss on the adjacent links facing the destination. Figure 3 below shows, though, that there are still locations without loss (the green nodes on the left).
Fifteen minutes later, at 15:55 UTC, none of the traffic is reaching its destination, as Figure 4 below shows. All traffic from these agents is terminating inside of Level 3 Communications.
Case #2: AboveNet
The data from another provider, AboveNet, tells a similar story. At 15:45 UTC we detect five agents in the Path Visualization routing through AboveNet to reach GitHub. A link inside of AboveNet shows an average delay of 124 ms, indicating some stress on the network; the next three hops (one in AboveNet, two in Rackspace) are experiencing loss, and the traffic is not making it to the destination.
Five minutes later, at 15:50 UTC, Figure 6 below shows that the loss en route to the destination persists, but now the location of the loss is completely different. The traffic is no longer terminating inside of AboveNet or Rackspace; in fact, it is never making it into either of their networks and appears to be terminating at the ingress to AboveNet.
To investigate this further, we can select one or more of these nodes, move to a period before or after the attack, and see where the next hop is located. Figure 7 below shows that two hours before the attack, all of the nodes now dropping packets had next hops in AboveNet. This indicates that some sort of destination-based filtering of GitHub traffic is happening in AboveNet.
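That before-and-after comparison amounts to a simple path diff: find the last hop still responding during the attack, and look up what came next on the baseline path. A minimal sketch, with hop names that are illustrative placeholders rather than actual router identities:

```python
# Hypothetical sketch: locating the filtering point by comparing the
# truncated path seen during the attack against a pre-attack baseline.
# All hop names are made-up placeholders for illustration.

def filtering_point(baseline_path: list[str], attack_path: list[str]) -> str:
    """Name the first baseline hop that traces no longer reach."""
    last_responding = attack_path[-1]
    idx = baseline_path.index(last_responding)
    if idx + 1 < len(baseline_path):
        return f"traffic dropped before reaching {baseline_path[idx + 1]}"
    return "traffic reached the final baseline hop"

# Path observed two hours before the attack (agent -> destination)
baseline = ["agent-isp", "transit-router", "abovenet-edge",
            "abovenet-core", "rackspace-edge", "github"]
# During the attack, traces stop one hop short of AboveNet
during = ["agent-isp", "transit-router"]

print(filtering_point(baseline, during))
# -> traffic dropped before reaching abovenet-edge
```

When every affected node's next baseline hop lands in the same network, that network's ingress is the natural suspect for where the filter was applied.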
It is difficult to say exactly which techniques were used to mitigate the attack without access to internal ACLs or routing tables (or even iBGP feeds), or input from the teams involved in stopping the attack on GitHub. However, from the public statements on the GitHub Status Page and from our own data, it is clear there was a coordinated and cooperative effort to stop the DDoS attack and to place destination-based filters in routers inside GitHub’s providers. These filters are often distributed within an ISP through iBGP using a mechanism such as Remotely Triggered Black Hole (RTBH) filtering. For additional information about this technique, see “Remotely Triggered Black Hole Filtering-Destination Based and Source Based,” Cisco Systems, Inc., 2005 (PDF).
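To make the mechanism concrete, a destination-based RTBH setup looks roughly like the following. This is a generic sketch of the technique described in the Cisco paper, using IOS-style syntax and RFC 5737 placeholder addresses; it is not the configuration any of these providers actually used.

```
! On every edge router: a static route sends the reserved "black hole"
! next-hop address to Null0, so matching traffic is silently discarded.
ip route 192.0.2.1 255.255.255.255 Null0

! On the trigger router: tagged static routes for victim prefixes are
! redistributed into iBGP with the black-hole next hop, so every edge
! router starts dropping traffic to the victim within seconds.
route-map RTBH-TRIGGER permit 10
 match tag 666
 set ip next-hop 192.0.2.1
 set community no-export
!
router bgp 64500
 redistribute static route-map RTBH-TRIGGER
!
! Blackhole a (placeholder) victim address by adding a tagged static route
ip route 203.0.113.10 255.255.255.255 Null0 tag 666
```

Note the trade-off that matters for this story: destination-based RTBH completes the attack against the victim prefix (all traffic to it is dropped), but it confines the damage to that prefix and takes the load off the provider's core and the hosting network.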
Regardless of the specific techniques used by site owners, their hosting companies or their ISPs to combat DDoS attacks, these efforts require some level of coordination and cooperation. ThousandEyes can help in understanding how effective the filters are in mitigating these attacks, how DDoS attacks evolve over time and across the network, and, in some instances, even in identifying the source of the attacks.
- “ISP Security – Real World Techniques” by Gemberling, Morrow, and Greene. (PDF)
- “Remotely Triggered Black Hole Filtering-Destination Based and Source Based,” Cisco Systems, Inc., 2005 (PDF)
- “Remotely-Triggered Black Hole (RTBH) Routing” by Jeremy Stretch, 2009