Irony, one of the (few) simple pleasures of sometimes having to wear the risk manager hat.
Following the jump from the Dyn piece is the latest from The Register.
From the blog of Dyn, the managed DNS provider that was attacked on October 21, 2016.
Recent IoT-based Attacks: What Is the Impact On Managed DNS Operators?
October 20, 2016
Everyone from the C suite to K Street has seen the news of the most recent rounds of DDoS attacks against the likes of Krebs, OVH and others. Widespread cries for BCP 38 are renewed, source address validation everywhere (SAVE) is a hot topic, and talks about a solution centered on reputation based peering are bubbling up. But has anything changed for the internet operator community? Or has the social amplification of risk increased awareness of known faults and gaps in internet infrastructure? The focus of this piece is on attack traffic for which BCP 38 / SAVE are not impactful.
In the trenches
Let’s look at this operationally. An attack happens … now what? You have some logs, network usage metrics, and a timeline of alerts from monitoring systems tripped during the attack. As an authoritative DNS provider, the data we have from an attack often isn’t directly actionable without some cooperation from recursive resolver operators or amplification honeypots. BCP 38 / SAVE would remove the need for this step of analysis. These changes are needed because the Internet’s design inherently enables certain kinds of attacks. At the risk of oversimplification, here are some quick characterizations of each kind:
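The BCP 38 / SAVE idea mentioned above boils down to a simple check at the network edge: only forward packets whose source address belongs to a prefix the interface is actually allowed to originate. A minimal sketch of that check, with illustrative prefixes (the function name and the address blocks are assumptions for this example, not anything from BCP 38 itself):

```python
import ipaddress

# Hypothetical customer-facing interface: the prefixes it legitimately
# originates. These documentation-range prefixes are illustrative only.
ALLOWED_PREFIXES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def passes_source_validation(src_ip: str) -> bool:
    """The BCP 38-style check: True only if the packet's source address
    falls inside a prefix this interface is allowed to originate."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ALLOWED_PREFIXES)

# A packet sourced from the customer's own block is forwarded...
print(passes_source_validation("198.51.100.7"))   # True
# ...while a spoofed source claiming someone else's address is dropped.
print(passes_source_validation("192.0.2.55"))     # False
```

A provider enforcing this at every customer edge makes it impossible to emit traffic "on behalf of" a victim, which is exactly the capability the reflection attacks described below depend on.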
Attacks which focus on web service resource exhaustion can be harder to defend against because the attacker is making requests for resources, often in a manner similar to a normal end user. They connect to your web server and request the images, HTML, and other resources needed to render a web page. These attacks carry higher risk for the botnet operator, because connecting to the web server and making resource requests requires a TCP handshake, which exposes the IP address of the compromised device. The large population of vulnerable connected devices and the ease of exploiting them have increased the viability and sustainability of layer 7 HTTP / HTTPS attacks.
More common attacks focus on generating large volumes of data which prevent legitimate data from reaching the targeted endpoint. These attacks don’t require a large botnet; they only require connectivity to a provider which doesn’t perform source address validation. The lack of source address validation allows requests to be issued seemingly on behalf of another system, and the response is then directed at the unsuspecting device or service. When issuing such a volumetric attack the operator has their choice of protocol: DNS, NTP, SSDP, TFTP, even services as benign as TeamSpeak and Valve Source Engine can be used, as their responses are larger than the requests made to them. In these scenarios, finding the reflector or issuer of the larger response feels like a waste of time. ShadowServer, DShield, the Open Resolver Project, and others have made reporting on these sources available for years. So the problem is not accessibility of data, availability of reporting, or awareness of the issue. (If you own and operate IP space, please sign up for ShadowServer reports to make sure you aren’t facilitating these attacks: https://www.shadowserver.org/wiki/pmwiki.php/Services/Reports)
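Why those particular protocols? Each answers a small request with a much larger response, so a spoofed request "costs" the attacker a few bytes while the victim absorbs many more. A toy sketch of that arithmetic; the byte counts are rough, assumed figures for illustration, not measured protocol data:

```python
# Illustrative sketch of bandwidth amplification. The request/response
# sizes below are assumed round numbers, not real measurements.
REFLECTORS = {
    # protocol: (request_bytes, typical_response_bytes)
    "DNS (large ANY answer)": (64, 3000),
    "NTP (monlist)":          (8, 4000),
    "SSDP":                   (90, 2800),
}

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bytes landing on the victim per byte the attacker spends
    on a spoofed request."""
    return response_bytes / request_bytes

for proto, (req, resp) in REFLECTORS.items():
    print(f"{proto}: ~{amplification_factor(req, resp):.0f}x")
```

The point of the sketch: with ratios like these, a modest upstream link and a provider that skips source address validation are all an attacker needs, which is why the post argues that hunting individual reflectors is less useful than the validation itself.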
The goal of an authoritative DNS exhaustion attack is to remove the protection of the recursive caching layer from the authoritative DNS resolvers. To be effective the attacker wants each client request to result in an authoritative lookup, ideally placing enough strain on the authoritative resolver that it stops functioning. To do this the client needs to request records which will not appear in the cache of the recursive layer; if the result isn’t found in the cache, the recursive resolver will need to request that value from the authoritative. This cache-busting technique is used frequently when collecting DNS real user measurement (RUM) data. In the case of RUM requests, the goal is to force authoritative resolution to collect timing and performance telemetry. The Mirai botnet, recently in the news for being identified as a source of the attack on Krebs on Security, has an authoritative exhaustion function in its arsenal. This is implemented in Mirai by prepending a pseudorandom 12-character subdomain to the target domain. This leaves the authoritative DNS provider with a fingerprint: at time X, machine Y requested a domain with a pseudorandom subdomain. With this information, you can go to the owner/operator of machine Y and inform them that at time X you received a request from machine Y for a domain with the specified subdomain. They can then tie that request to the machine which asked them about that domain with the subdomain. At that point they know the client IP of the infected system or the outbound IP of a carrier-grade NAT.
However, as the above description outlines, it requires an in-depth logging history for high-volume systems, with some potential privacy implications....MORE
Yes, their analysis of Distributed Denial of Service attacks using the Internet of Things (the attacks on Krebs) was posted less than 24 hours before they themselves were attacked.
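The fingerprint Dyn describes, a pseudorandom 12-character label prepended to the target domain, lends itself to a crude log filter on the authoritative side. A minimal sketch, assuming a made-up zone name, log format, and label alphabet (all hypothetical; Mirai's actual label generation is not specified here):

```python
import string

TARGET_ZONE = "example.com"   # hypothetical zone under attack
RANDOM_LABEL_LEN = 12         # label length the Dyn post attributes to Mirai
ALNUM_LOWER = set(string.ascii_lowercase + string.digits)  # assumed alphabet

def looks_like_cache_busting(qname: str) -> bool:
    """Crude check: a query for <12-char alphanumeric label>.TARGET_ZONE,
    a name no recursive resolver could plausibly have cached."""
    if not qname.endswith("." + TARGET_ZONE):
        return False
    label = qname[: -(len(TARGET_ZONE) + 1)]
    return len(label) == RANDOM_LABEL_LEN and set(label) <= ALNUM_LOWER

# Hypothetical authoritative query log: (timestamp, client_ip, qname).
log = [
    ("12:00:01", "203.0.113.9",  "x7f2q9a1bc3d.example.com"),
    ("12:00:01", "198.51.100.4", "www.example.com"),
]
# The surviving (time X, machine Y) pairs are what you would take to
# machine Y's operator, per the workflow described above.
suspects = [(t, ip) for t, ip, q in log if looks_like_cache_busting(q)]
print(suspects)
```

This is exactly the step the post flags as costly: it presupposes retaining per-query logs on a very high-volume system, with the privacy implications that entails.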
Here's the Register with more; see also the post immediately below if interested.
Today the web was broken by countless hacked devices – your 60-second guide
IoT gadgets behind tens of millions of IP addresses flooded DNS biz Dyn
... Updated Today, a huge army of hijacked internet-connected devices – from security cameras to home routers – turned on their owners and broke a big chunk of the internet.
Compromised machines, following orders from as-yet unknown masterminds, threw huge amounts of junk traffic at servers operated by US-based Dyn, which provides DNS services for websites large and small.
We're told gadgets behind tens of millions of IP addresses were press-ganged into shattering the internet – a lot of them running the Mirai malware, the source code to which is now public so anyone can wield it against targets.
The result: big names including GitHub, Twitter, Reddit, Netflix, Airbnb and so on were among hundreds of websites rendered inaccessible to millions of people around the world for several hours today.
Dyn tells us it has weathered the storm for now, and services are coming back online. Here's what we know:
- Starting from 1110 UTC, a distributed denial-of-service attack knocked Dyn's DNS nameservers offline. This continued throughout the day in three independent waves as hackers targeted Dyn's data centers one by one, including its US East Coast facility. By 2037 UTC, the situation is said to be under control after mitigations were put in place to block ongoing attacks.
- Dyn is a crucial component in the internet's infrastructure because when you visit a website that uses Dyn's DNS servers, Dyn is supposed to help your browser or app find the right system to connect to. When Dyn goes down, your software can't find the website you want to visit.
- A spokesperson for US Homeland Security said the agency is "investigating all potential causes" of the mega-outage.
- Dyn's chief strategy officer Kyle York told The Register by phone that devices behind tens of millions of IP addresses were attacking his company's data centers.
- A lot of this traffic – but not all – is coming from Internet-of-Things devices compromised by the Mirai botnet malware. This software nasty was used to blast the website of cyber-crime blogger Brian Krebs offline in September, and its source code and blueprints have leaked online. That means anyone can set up their own Mirai botnet and pummel systems with an army of hijacked boxes that flood networks with junk packets, drowning out legit traffic....