Secure64 DNS Products Not Vulnerable to BIND Security Flaw

On July 28, 2015, the Internet Systems Consortium reported a critical security vulnerability in BIND, CVE-2015-5477. This vulnerability, which affects both BIND recursive and authoritative servers, is caused by an error in the handling of TKEY queries, allowing a remote attacker to crash BIND by sending a deliberately constructed query.

This vulnerability is considered critical, as it cannot be prevented through ACLs or configuration options, and affects all versions of BIND 9 (BIND 9.1.0 through 9.10.2). Successful attacks on unpatched BIND servers can result in a loss of DNS service, potentially making an organization’s web, email and other internet-connected servers unreachable.

BIND users are strongly encouraged to patch their servers immediately, as attacks exploiting this vulnerability have already been reported. BIND-based appliances are also vulnerable and should be patched – customers are encouraged to contact their vendors for additional patching information.

Secure64 DNS Cache and DNS Authority products, which are not based on BIND, do not contain this flaw, and do not require a patch to provide protection against these attacks.

More Defenses Against Pseudo Random Subdomain Attacks (PRSD)

Last year we reported a new kind of DNS attack that we called the “Water Torture Attack”. This attack is also known as the Pseudo Random Subdomain Attack (PRSD, although we still like our name better).

In this attack, hackers send queries to open proxies around the world for random, non-existent subdomains of legitimate domains. For example:

alkuwrejghnlokiqhje.example.com.

These queries are forwarded to DNS resolvers at the upstream ISP. Although the attacks are intended to take down the authoritative servers for these legitimate domains, they have the side effect of dramatically increasing the load on the ISP’s DNS resolvers, to the point that they, too, can become overloaded and either slow down or crash.

Here are some steps DNS operators can take, in addition to those we outlined in our previous blog, to protect their resolvers from these attacks:

1. “Prudently provision” their servers with enough RAM and query capacity. The attacks impact the resolver by causing it to run out of critical system resources. With more headroom, the resolver may be able to sustain the higher recursive query loads.

2. Tune their configurations to maximize the number of simultaneous recursive queries allowed.

3. Automatically block IP addresses that generate too many SERVFAIL responses, if possible. This capability is not available in many DNS resolvers, but it is a new feature of our DNS Cache product (a simple sketch of the idea follows this list).
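
To make the idea behind step 3 concrete, here is a minimal sketch of a per-client SERVFAIL rate tracker, assuming the resolver can feed (client IP, response code) events to a script; the window and threshold values are illustrative assumptions, not Secure64 defaults:

  import time
  from collections import defaultdict, deque

  WINDOW_SECONDS = 60    # sliding window length (illustrative)
  SERVFAIL_LIMIT = 100   # max SERVFAILs per client per window (illustrative)

  servfail_times = defaultdict(deque)   # client IP -> recent SERVFAIL timestamps
  blocked = set()                       # client IPs currently blocked

  def record_response(client_ip, rcode):
      # Feed one (client IP, response code) event per answered query.
      if rcode != "SERVFAIL" or client_ip in blocked:
          return
      now = time.time()
      times = servfail_times[client_ip]
      times.append(now)
      # Expire events that have fallen out of the sliding window.
      while times and now - times[0] > WINDOW_SECONDS:
          times.popleft()
      if len(times) > SERVFAIL_LIMIT:
          blocked.add(client_ip)
          print(f"blocking {client_ip}: {len(times)} SERVFAILs in {WINDOW_SECONDS}s")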

Don’t drown in the IPv6 address sea

Our Chief Operating Officer, Joe Gersch, recently authored this blog post on managing large numbers of reverse DNS records on the blog site of our partner 6connect:

http://www.6connect.com/blog/dont-drown-ipv6-address-sea/

Secure64 DNS Cache not vulnerable to recently announced resource exhaustion bugs

Secure64 has confirmed that its DNS Cache product is not vulnerable to the latest BIND vulnerability announced by ISC on December 8, 2014. This BIND bug is categorized as severe and remotely exploitable, and is the 9th such vulnerability in the past 24 months. The announcement describes a flaw in the BIND DNS resolver that can cause it to issue large numbers of queries when resolving names in maliciously constructed zones, leading to resource exhaustion that can be exploited to launch denial of service attacks.

The ISC vulnerability announcement can be found here:

CVE-2014-8500: https://kb.isc.org/article/AA-01216

Unlike some previous CVEs, a patch for ISC BIND is immediately available. Users of BIND-based appliances like Efficient IP, Infoblox and Bluecat should check their vendor’s web site.

Bluecat: A support note is posted at https://www.bluecatnetworks.com/support/security_updates/2014

Secure64’s DNS Cache product is not susceptible to this vulnerability or any of the previously announced BIND vulnerabilities. DNS Cache limits the amount of resources consumed by the resolver under normal and attack conditions, allowing it to remain available and responsive even under resolver-targeted attacks.

DNS is arguably the most critical control point for every online business and IP-based service. Secure64 is a software applications company enabling secure DNS services, built upon the industry’s only genuinely secure platform. Secure64 DNS technology protects over 180 million online users, supports 85% of all internet reverse DNSSEC deployments and is used by leading service providers, enterprises and government organizations.

For more information on Secure64’s DNS capabilities and the latest wave of potent DNS attacks, please request the “Death by One Thousand Paper Cuts” white paper by clicking the Contact Us button on the home page of our website, www.secure64.com, and filling in the contact form.

Latin America Going IPv6-only

IP address assignments around the world are handled by the Regional Internet Registries (RIRs). At the beginning of May, I had the pleasure of attending and speaking at the conference of LACNIC (the Latin American RIR) in Cancun, Mexico. My talk about IPv6 and DNS was very well received, and I think the audience really understood how running the DNS for dual-stack or IPv6-only clients differs from running it in a pure IPv4 environment. There were several other talks, most of them discussing IPv6 and how to handle the depletion of IPv4 addresses.

To manage the depletion of IPv4 addresses, all RIRs have implemented a phased approach to handing out their remaining address space, making it progressively harder to get IPv4 addresses allocated as each phase is entered. Most RIRs have decided that the first of these phases, the depletion phase, is reached when about 16 million addresses (a /8 network) are still available in their pool. LACNIC, however, took a somewhat different approach: it decided to enter a limited depletion phase at 8 million (/9) free addresses and a more restrictive phase at 4 million (/10) free addresses.

At the time of the event, LACNIC had around 14-15 million addresses left in its free pool. Demand for the last blocks has been very high, and on May 20th LACNIC announced that it was down to one /9. This means that LACNIC entered the limited depletion phase, which also triggered the RIRs’ parent organization (IANA) to send some recovered addresses to each RIR (about 2 million addresses apiece).

The next phase for LACNIC was triggered on June 10th, when the free pool reached a /10 (about 4 million addresses). Once this phase is reached, the policy restricts allocations to only 1000 addresses per member every 6 months. 1000 is not a lot of addresses: they can perhaps be used for translators when implementing IPv6, so that backwards compatibility with IPv4 can still be achieved using NAT64/DNS64, but they are not enough for any dual-stack deployment.

The final phase will occur when a /11 (about 2 million addresses) is left. At that point, the remaining IPv4 addresses will be available, in limited numbers, only to new members.
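
For reference, the block sizes quoted above follow directly from the prefix lengths: a /n network contains 2^(32-n) addresses. A short illustration:

  # Addresses in an IPv4 block of a given prefix length: 2 ** (32 - n)
  for prefix in (8, 9, 10, 11):
      print(f"/{prefix}: {2 ** (32 - prefix):,} addresses")

  # /8:  16,777,216 (~16 million, the usual depletion trigger)
  # /9:   8,388,608 (LACNIC's limited depletion phase)
  # /10:  4,194,304 (LACNIC's more restrictive phase)
  # /11:  2,097,152 (LACNIC's final phase)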

We are, in other words, very close to going IPv6-only in Latin America. During the last year this region has used up a lot of IPv4 address space, and it is currently the region with the fewest IPv4 addresses remaining. Latin American service providers should make sure their DNS servers can receive traffic over IPv6 so that their customers can still reach them.

Heartbleed SSL Bug, DNS and the Perils of a Monoculture

The Heartbleed flaw in OpenSSL highlights a critical vulnerability in the structure of the Internet: lack of diversity in critical software and hardware that run everything.

Use of “free” open source software and commodity hardware enables a lot of applications and services to be delivered inexpensively, but it also leaves critical infrastructure open to exploitation by a single attack or bug. No system can be resilient if a single point of failure can take it out. To be resilient, the Internet needs redundancy and genetic diversity in its systems.

Crypto libraries are just one example of this genetic software bottleneck. Another is web servers, which are dominated by Apache and Microsoft with a roughly 70% combined market share.

A third example is the DNS. Over 85% of the DNS software in use today is BIND – which historically has disclosed a new critical vulnerability every few months. It is likely that multiple governments, and perhaps terrorist or organized crime groups, could take it down in several ways.

The DNS knows everything that happens on the Internet – every web lookup, every email, every phone call and text. Nothing in the cyber world works and nothing is secure without secure, reliable DNS. Yet a critical exploit of BIND would effectively take out the Internet. No phone, no navigation, no Internet of anything.

What would you do in such a doomsday scenario? Wait for your bank statement by snail mail. Drop by the bank teller line to make a deposit or withdrawal. Pull out those CDs to listen to music. Fax those documents. Keep the Rand McNally map handy. Got a dictionary or encyclopedia?

Maybe civilization goes on but it sure would be a mess until fixed.

The Heartbleed bug is an inconvenience, but it serves as a wake-up call. If we are serious about reducing our exposure to potential cyber catastrophe, we need to diversify critical infrastructure, starting with the DNS. Secure64 does not use BIND or OpenSSL in our DNS products.

Water Torture: A Slow Drip DNS DDoS Attack

A number of our service provider customers around the world are reporting a new type of DNS DDoS attack that uses the DNS as the attack vector. The service providers themselves do not appear to be the target of this attack. Instead, the attack tries to overwhelm an outside victim’s authoritative DNS servers. Once those servers are taken down, the victim’s domains appear to be inaccessible.

As a side effect, our service provider customers are seeing a spike in DNS traffic, resulting in increased CPU and memory usage. This blog post gives some more details about the attack and suggests what you can do to mitigate its impact.

The Attack

It appears that a fairly large botnet is used to send queries for the victim’s domain. The queries are made up, with a random string of up to 16 letters prepended to the victim’s domain, like:

xyuicosic.www.victimdomain.com

A query for this domain is then sent to the service provider’s DNS server. The DNS server attempts to contact an authoritative nameserver to find the answer. If that authoritative nameserver does not reply (because it is too busy responding to queries from DNS servers all over the world, or perhaps has crashed), the DNS server attempts to contact the next authoritative nameserver, and so on. Modern DNS servers will make multiple attempts to contact each authoritative nameserver before giving up and responding back to the client with a SERVFAIL response.

The infected client will then repeat the same pattern, but this time with another random string prepended, for example:

alkdfasd.www.victimdomain.com

Even though the DNS server was unable to get a response from any of the victimdomain.com authoritative nameservers during the previous query, most DNS servers will still attempt to contact them for this second query.

Now imagine that thousands of bots are each sending a relatively small number of queries for such made-up subdomains. This triggers a large increase in the number of DNS queries sent by the service provider’s DNS servers to the victim’s nameservers.
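
A quick back-of-envelope calculation shows how the numbers multiply; all of the figures below are illustrative assumptions, not measurements:

  # Illustrative figures only; none of these are measured values.
  bots = 5_000            # infected clients behind the service provider
  queries_per_bot = 2     # random-subdomain queries per bot per second
  nameservers = 3         # authoritative nameservers for the victim domain
  retries_per_ns = 3      # attempts before the resolver gives up

  # Every made-up name misses the cache, so each client query can trigger
  # a full round of retries against every unresponsive nameserver.
  outbound = bots * queries_per_bot * nameservers * retries_per_ns
  print(f"{outbound:,} outbound queries per second")   # 90,000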

How to Detect the Attack

While this attack is most likely targeting the authoritative servers for victimdomain.com, it also puts an increased CPU load on the DNS server by forcing it to continually initiate recursive queries, and it consumes large amounts of resolver memory. More importantly, if the internal resolver resources are fully consumed, the resolver may drop inbound queries, including queries from legitimate clients.

If the DNS server’s behavior is being monitored, the symptoms of the attack will also show up as the following (a monitoring sketch follows this list):

  • Increased CPU utilization
  • Increased number of SERVFAIL responses
  • Increased number of outbound queries and retransmissions
  • Increased query latency
  • Increased number of dropped client queries (if the resolver resources are fully consumed)
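
One hedged way to watch for these symptoms is to poll the resolver’s statistics counters periodically. The sketch below assumes a stock Unbound resolver with extended statistics enabled and the unbound-control utility; Secure64’s management tooling and counter names differ, so treat this purely as an illustration:

  import subprocess

  def servfail_ratio():
      # Ask the resolver for its counters without resetting them.
      # (unbound-control "stats_noreset" prints "name=value" lines.)
      out = subprocess.check_output(
          ["unbound-control", "stats_noreset"], text=True)
      stats = dict(line.split("=", 1) for line in out.splitlines() if "=" in line)
      servfails = float(stats.get("num.answer.rcode.SERVFAIL", 0))
      queries = float(stats.get("total.num.queries", 0))
      return servfails / queries if queries else 0.0

  # Alert if more than 10% of answers are SERVFAIL (threshold is arbitrary).
  ratio = servfail_ratio()
  if ratio > 0.10:
      print(f"warning: {ratio:.0%} of recent answers were SERVFAIL")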

One thing all of the victim domains have in common is that they appear to be Chinese sites, perhaps gaming or gambling sites.

How to Block the Attack

Because the query rate from each client IP address is quite low, and because there is no response amplification, it is difficult to determine from packet rates or bandwidth consumption alone which client IP addresses are participating in the attack. And because the names change periodically, it can be time consuming to track and block queries to the domains being used in the attack.

However, here are some specific steps you can take to minimize the impact of the attack:

  • Check your timeout settings. Most resolvers allow you to specify the initial and subsequent timeout intervals. Make sure that these values are not too high (if they are, they will tie up resolver resources longer than necessary before a query fails).
  • Increase the number of outstanding recursive queries if you have sufficient RAM on your server. This will give the resolver more resources to work with.
  • Specify a non-zero TTL for negative responses so that if a client requests the same non-responsive name more than once, the SERVFAIL answer is served from cache. Per RFC 2308, you should be able to specify a TTL of up to 5 minutes (an illustrative configuration sketch follows this list).
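
As an illustration of the capacity and negative-caching items above, here is a hedged sketch using Unbound-style directives; option names vary between resolver implementations, including Secure64’s, and the knob that controls caching of SERVFAILs specifically differs as well, so size and name these to your own environment:

  num-queries-per-thread: 4096   # allow more simultaneous recursive queries
  outgoing-range: 8192           # more sockets for queries to authoritative servers
  cache-max-negative-ttl: 300    # cap cached negative answers at 5 minutes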

Secure64 Defenses

Secure64’s DNS Cache has built-in defenses against such an attack. Under attack conditions, the Secure64 resolver will not consume any CPU or memory resources attempting to reach nameservers that it already knows are non-responsive. This adaptive behavior allows the Secure64 resolver to remain 100% available to legitimate clients under such attack conditions.

Fake FileZilla FTP Client

Recently, a fake version of the popular FileZilla File Transfer Protocol (FTP) client has been made available for download on some sites. This fake version of FileZilla looks and works as expected, but it also harvests login credentials in the background and secretly sends them to a hacker-owned site. This is clearly a concern for any network, and action must be taken to limit the damage. There are two things you should do:

1. Block the stealing of credentials

The domain name that the hacker used to receive credentials is aliserv2013.ru. This domain name currently does not resolve, as its nameservers no longer appear to respond to DNS queries (interestingly enough, they still respond to ping). Additionally, the FTP server also appears to be down, so the worst of the crisis might be over. But to be safe, you should blacklist aliserv2013.ru in your DNS server. If you are using Secure64, add a line like the one below and reload your cache server:

local-zone: "aliserv2013.ru" refuse log

2. Find and clean up your clients

If your DNS server is capable of logging blacklist hits, now is the time to check your logs and see whether any of your clients are using this fake FileZilla client.

By using the log option in the Secure64 example configuration above, you can see which clients are trying to access the aliserv2013.ru site. You can then reach out to them and make sure they remove the malicious FTP client.
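
If your server writes these blacklist hits to a plain-text log, a small script can pull out the affected client IPs. The log path and line format below are assumptions for illustration; adapt the parsing to your server’s actual output:

  import re

  LOG_FILE = "/var/log/dns/cache.log"   # hypothetical path; use your server's log
  DOMAIN = "aliserv2013.ru"

  # Collect the unique client IPs from log lines mentioning the blacklisted domain.
  clients = set()
  with open(LOG_FILE) as log:
      for line in log:
          if DOMAIN in line:
              match = re.search(r"\d{1,3}(?:\.\d{1,3}){3}", line)
              if match:
                  clients.add(match.group())

  for ip in sorted(clients):
      print(ip)   # contact each client and remove the fake FileZilla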

More info can be found here:

http://blog.avast.com/2014/01/27/malformed-filezilla-ftp-client-with-login-stealer/

DNS prefetching in browsers

Some browsers, such as Firefox, implement various types of prefetching. The basic idea is that the browser preloads hyperlinked pages in the background so that once a user clicks on a link, the web page is already ready to be displayed.

Speculative prefetching like this is obviously wasteful from a DNS and network bandwidth perspective, because plenty of links on a page will be prefetched that the user never clicks on. On the other hand, prefetching in the browser gives a better user experience with reduced wait time.

Unfortunately for an ISP, prefetching is outside of its control: it is turned on and off at the browser level. A web site owner can potentially limit prefetching by analyzing its HTML code, making sure its most popular pages don’t carry too many links, and adding special tags to links that shouldn’t be prefetched.

This blog post provides additional details about prefetching and what impact it has on DNS. In Firefox, the settings for prefetching can be changed by typing about:config in the address bar and then scrolling down to the network.dns.disablePrefetch setting. This setting is disabled by default (note the double negative) meaning that DNS prefetching is turned on. Chrome and other browsers have similar settings.

I ran a small experiment just to see if there is a large difference between turning prefetching off and on. Below is a table showing the number of DNS queries generated per page load in this experiment. As you can see, there is quite a substantial difference in the number of queries generated. www.wikipedia.org and www.ebay.com in particular generate a lot of DNS queries with prefetching turned on, because these pages link to hundreds of different subsites in other languages, such as de.wikipedia.org (German), sv.wikipedia.org (Swedish), etc. The google.com page, on the other hand, does not generate any additional queries with prefetch turned on.

site                 without prefetch   with prefetch   increase factor
www.wikipedia.org                   3             155              51.7
www.yahoo.com                      10              12               1.2
www.fox.com                        45              64               1.4
www.cnn.com                        11              66               6.0
www.youtube.com                    12              14               1.2
www.msn.com                        28              85               3.0
www.google.com                      1               1               1.0
www.ebay.com                       31             147               4.7
www.amazon.com                      8              65               8.1
average                          16.6            67.7               4.1

From this experiment using a short list of domains it is clear that browsers with DNS prefetch enabled generate a very substantial number of additional queries to the DNS system.
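
If you want to reproduce the experiment, one approach is to count outgoing DNS queries while loading each page, for example with tshark. The sketch below is illustrative; the interface name and capture duration are assumptions for your environment:

  import subprocess

  # Capture outgoing DNS queries (not responses) for 30 seconds while the
  # test page loads in a browser with prefetch turned on or off.
  output = subprocess.check_output(
      ["tshark", "-i", "eth0",           # adjust to your network interface
       "-f", "udp port 53",              # capture filter: DNS traffic only
       "-Y", "dns.flags.response == 0",  # display filter: queries only
       "-a", "duration:30"],             # stop after 30 seconds
      text=True)
  print(f"{len(output.splitlines())} DNS queries observed")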

Due to the 3-tiered architecture of the DNS, the increased load and additional cost of browser prefetching is borne by the service provider; the web site owner and the end user do not incur any significant extra cost. Companies like Wikipedia and eBay should really look into how their sites are coded. From what I understand, there are simple HTML tags (such as the x-dns-prefetch-control meta tag) you can add to instruct the browser not to do prefetching. To give you an example: every browser in the world with prefetching turned on will prefetch the DNS record for the Icelandic (330,000 native speakers) version of the Wikipedia site. This seems wasteful to me, as it adds unnecessary extra queries to DNS systems at service providers around the world.

Lies, damn lies and DNS performance statistics

To paraphrase Mark Twain (and Benjamin Disraeli if internet search results can be trusted), there are three kinds of DNS lies: lies, damn lies and DNS performance statistics.

Most networking professionals know to have a healthy skepticism about information put out by the marketing departments of networking vendors. And so they should. It is the job of every marketer to put their company and products in the best possible light, and sometimes this means they have to stretch the truth a bit.
