Spark explains why their network failed | The Jackal

8 Sep 2014


After claiming that it was all the customers' fault because they were looking at pictures of nude celebrities, Spark has today admitted that the outage was instead caused by a DDoS attack launched from somewhere overseas.

Here's their explanation:

Cyber criminals based overseas appear to have been attacking web addresses in Eastern Europe, and were bouncing the traffic off Spark customer connections, in what is known as a distributed denial of service (DDoS) attack.

The DDoS attack was dynamic, predominantly taking the shape of an ‘amplified DNS attack’ which means an extremely high number of connection requests – in the order of thousands per second – were being sent to a number of overseas web addresses with the intention of overwhelming and crashing them. Each of these requests, as it passes through our network, queries our DNS server before it passes on – so our servers were bearing the full brunt of the attack.

While the Spark network did not crash, we did experience extremely high traffic loads hitting our DNS servers which meant many customers had either slow or at times no connectivity (as their requests were timing out). There were multiple attacks, which were dynamic in nature. They began on Friday night, subsided, and then began again early Saturday, continuing over the day. By early Sunday morning traffic levels were back to normal and have remained so since. We did see the nature of the attack evolve over the period, possibly due to the cyber criminals monitoring our response and modifying their attack to circumvent our mitigation measures – in a classic ‘whack a mole’ scenario.
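Spark's description of an 'amplified DNS attack' can be made concrete: the attacker's query is tiny, a resolver's answer can be many times larger, and the source address is spoofed so the answer lands on the victim rather than the sender. Here's a minimal Python sketch of just the query side – the ~3000-byte response size used for the amplification figure is illustrative, not something from Spark's statement:

```python
import struct

def build_dns_query(name: str, qtype: int = 255) -> bytes:
    """Build a minimal DNS query packet (qtype 255 = ANY, which
    tends to produce the largest responses)."""
    header = struct.pack(">HHHHHH",
                         0x1234,  # transaction ID (arbitrary)
                         0x0100,  # flags: standard query, recursion desired
                         1, 0, 0, 0)  # 1 question; no answer/authority/additional
    # Encode the name as length-prefixed labels, e.g. \x07example\x03com\x00
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # qtype, class IN
    return header + question

query = build_dns_query("example.com")
print(f"query size: {len(query)} bytes")

# A large ANY response can run to roughly 3000 bytes (illustrative
# figure), so the attacker multiplies their own bandwidth many times over:
assumed_response_size = 3000
print(f"amplification factor: ~{assumed_response_size / len(query):.0f}x")
```

The point is the asymmetry: a few dozen bytes in, kilobytes out, all of it aimed at whatever address the attacker forged as the source.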

How did they get access through the Spark Network?

Since the attacks began we have had people working 24/7 to identify the root causes, alongside working to get service back to normal. During the attack, we observed that a small number of customer connections were involved in generating the vast majority of the traffic. This was consistent with customers having malware on their devices and the timing coincided with other DNS activity related to malware in other parts of the world.

However, while we’re not ruling out malware as a factor, we have also identified that cyber criminals have been accessing vulnerable customer modems on our network. These modems have been identified as having “open DNS resolver” functionality, which means they can be used to carry out internet requests for anyone on the internet. This makes it easier for cyber criminals to ‘bounce’ an internet request off them (making it appear that the NZ modem was making the request, whereas it actually originates from an overseas source). Most of these modems were not supplied by Spark and tend to be older or lower-end modems.
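The 'open DNS resolver' behaviour Spark describes is straightforward to check for on equipment you own: send a single recursive query to the device's public address and see whether it answers. A sketch of such a probe – the target address and timeout are assumptions, and you should only ever point this at devices you're authorised to test:

```python
import socket
import struct

# Minimal recursive query for example.com, type A (illustrative probe).
DNS_QUERY = (
    struct.pack(">HHHHHH", 0xBEEF, 0x0100, 1, 0, 0, 0)
    + b"\x07example\x03com\x00"
    + struct.pack(">HH", 1, 1)  # type A, class IN
)

def looks_like_answer(data: bytes) -> bool:
    """True if `data` parses as a DNS response to our query with at
    least one answer record (a real DNS header is 12 bytes)."""
    if len(data) < 12:
        return False
    tid, flags, qdcount, ancount = struct.unpack(">HHHH", data[:8])
    return tid == 0xBEEF and bool(flags & 0x8000) and ancount > 0

def is_open_resolver(ip: str, timeout: float = 2.0) -> bool:
    """Send one recursive query to `ip`:53. If the device answers,
    it resolved a name for an arbitrary internet host - i.e. it is
    acting as an open resolver and can be abused for amplification."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(DNS_QUERY, (ip, 53))
        try:
            data, _ = s.recvfrom(4096)
        except socket.timeout:
            return False
    return looks_like_answer(data)
```

A modem that passes this test from the open internet is exactly the kind of 'bounce' point Spark is describing – requests appear to come from the NZ address while actually originating overseas.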

So modems not supplied by Spark, but perhaps supplied by Telecom? Spark has also contradicted themselves here: they previously claimed it was only people using fibre who were affected, and fibre users would likely be on newer modems.

What remains clear is that good end user security remains an important way to combat these attacks. With the proliferation of devices in households, that means both the security within your device and the security of your modem.

What did Spark do?

We have now disconnected those modems from our network and are contacting all the affected customers. We have also taken steps at a network level to mitigate this modem vulnerability. We are now in the process of scanning our entire broadband customer base to identify any other customers who may be using modems with similar vulnerabilities and will be contacting those identified customers in due course to advise them on what they should do.

With respect to malware we continue to strongly encourage our customers to keep their internet device security up to date, conduct regular scans and regularly update the operating software and firmware on their home network. We also continue to advise customers not to click on suspicious links or download files when they are not sure of the contents.

There they go again, blaming the customers. The problem here is that Spark was warned an attack was imminent, yet appeared incapable of reacting swiftly enough to ensure people's data and connectivity weren't compromised.

We have also taken steps at the network level to make it more difficult for cyber criminals to exploit the DNS open resolver modem vulnerability and we’re using the latest technology to strengthen our network monitoring and management capabilities. For security reasons we can’t detail these steps, however this is an ongoing battle to stay one step ahead of cyber criminals who are continually using more and more sophisticated tactics.

Why only Spark?

We can’t say what other networks experienced. However, it’s typical that cyber criminals look for clusters of IP addresses to use in any particular denial of service attack. That makes it more likely that these IP addresses belong to the customers of a single ISP – even more likely with a large ISP like Spark. They do this because it’s then easier for them to monitor the steps the ISP is taking to mitigate the attack and change their tactics accordingly. We definitely saw this happening over the weekend.

There are only a few countries in the world capable of undertaking a DDoS attack like the one that brought down the Spark network last Friday and Saturday. It certainly isn't something any individual hacker group could achieve, apart from perhaps Anonymous. We know it wasn't them, because they usually announce their attacks well in advance of carrying them out.

Countries with the available technical expertise and resources required to undertake such an exercise include the United States, China and Israel. If I were a Spark technician, that's where I would initially focus any search for the source of the attack.