Secure64 SourceT OS not vulnerable to NTP flaws

CERT recently reported two Network Time Protocol (NTP) vulnerabilities (CERT VU#374268, April 7, 2015). The first concerns some versions of NTP Project software that accept packets without authentication digests as if they actually had valid digests attached; the second describes a Denial of Service (DoS) scenario in which an attacker can prevent two peering systems from synchronizing. Neither NTP vulnerability affects Secure64 servers.

In the first case, Secure64's NTP implementation uses no NTP Project code. In this implementation, associations that specify the use of authentication digests require all incoming and outgoing packets to carry digests; any incoming packet without the required authentication information is treated exactly like a packet with an invalid digest, and is dropped.
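
A minimal sketch of that rule follows. This is an illustration only, not Secure64's actual code; the MAC algorithm (HMAC-SHA-256 here) and packet layout are assumptions:

```python
import hmac
import hashlib
from typing import Optional

def accept_packet(payload: bytes, digest: Optional[bytes], key: bytes) -> bool:
    """Accept only packets whose attached digest verifies against our key."""
    if digest is None:
        # A missing digest takes exactly the same path as a bad digest:
        # the packet is dropped.
        return False
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, digest)
```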

In the second case, Secure64 NTP never peers with external servers; instead it forms a consensus among servers using periodic timestamp queries. This approach is not as precise as traditional NTP peering, but it provides timestamp resolution well within the requirements of DNSSEC signing operations. Since no peering is used, there is no peering session to disrupt, and hence no DoS vulnerability. This strategy follows the security best practice of minimizing the code paths that are traversed in order to maximize resistance to network exploits.
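
The post does not describe the consensus algorithm in detail, but the general idea can be sketched as follows, assuming the server periodically collects a clock offset from each configured source and takes the median so that a single wrong or hostile responder cannot skew the result:

```python
import statistics

def consensus_offset(offsets: list[float]) -> float:
    """Combine per-server clock offsets (in seconds) into one consensus value.

    The median tolerates a minority of outliers, so one bad or malicious
    server cannot pull the consensus time away from the majority.
    """
    return statistics.median(offsets)

# Example: offsets measured against five hypothetical servers;
# the wildly wrong 4.7-second value is simply outvoted.
print(consensus_offset([0.012, 0.015, 0.011, 4.700, 0.013]))  # -> 0.013
```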

Latin America Going IPv6-only

IP address assignments around the world are handled by the Regional Internet Registries (RIRs). At the beginning of May, I had the pleasure of attending and speaking at the conference of LACNIC (the Latin American RIR) in Cancun, Mexico. My talk about IPv6 and DNS was very well received, and I think the audience really understood how running DNS to support dual-stack or IPv6-only clients differs from running it in a pure IPv4 environment. There were several other talks, most of them discussing IPv6 and how to handle the depletion of IPv4 addresses.

To manage the depletion of IPv4 addresses, all RIRs have implemented a phased approach to handing out their remaining IPv4 space, with each phase making it progressively harder to get IPv4 addresses allocated. Most RIRs have decided that the first of these phases, the depletion phase, is reached when about 16 million addresses (a /8 network) remain in their free pool. LACNIC, however, took a somewhat different approach: it enters a limited depletion phase at 8 million (/9) free addresses and a more restrictive phase at 4 million (/10).
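
For readers unfamiliar with prefix notation, these counts follow directly from the prefix length: a /n IPv4 block contains 2^(32-n) addresses. A quick check in Python:

```python
# Number of IPv4 addresses in a /n block is 2 ** (32 - n).
for n in (8, 9, 10, 11):
    print(f"/{n}: {2 ** (32 - n):,} addresses")
# /8: 16,777,216
# /9: 8,388,608
# /10: 4,194,304
# /11: 2,097,152
```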

At the time of the event, LACNIC had around 14-15 million addresses left in its free pool. Demand for the last blocks has been very high, and on May 20th LACNIC announced that it was down to a single /9. This means LACNIC entered its limited depletion phase, which also triggered IANA, the organization that coordinates the global address pool, to send some recovered addresses to each RIR (about 2 million addresses each).

The next phase for LACNIC was triggered on June 10th, when the pool reached a /10 (about 4 million addresses). In this phase, policy restricts each member to an allocation of at most 1,000 addresses every 6 months. 1,000 is not a lot of addresses: enough, perhaps, for translators that preserve backward compatibility with IPv4 via NAT64/DNS64 when deploying IPv6 (as sketched below), but not for any dual-stack deployment.
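
To illustrate how NAT64/DNS64 lets IPv6-only clients reach IPv4-only services, here is a sketch of the AAAA-record synthesis a DNS64 resolver performs, using the well-known NAT64 prefix 64:ff9b::/96 from RFC 6052 (the prefix choice and example address are illustrative):

```python
import ipaddress

def synthesize_aaaa(ipv4: str, prefix: str = "64:ff9b::") -> ipaddress.IPv6Address:
    """Embed a 32-bit IPv4 address in the low bits of a NAT64 /96 prefix."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    base = int(ipaddress.IPv6Address(prefix))
    return ipaddress.IPv6Address(base | v4)

# An IPv6-only client asking for an IPv4-only host receives this address,
# and the NAT64 translator forwards its traffic to 192.0.2.1.
print(synthesize_aaaa("192.0.2.1"))  # -> 64:ff9b::c000:201
```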

The final phase occurs when a /11 (about 2 million addresses) is left. At that point, a limited number of IPv4 addresses will be available only to new members.

We are, in other words, very close to going IPv6-only in Latin America. Over the last year this region has consumed a lot of IPv4 address space and currently has the fewest IPv4 addresses remaining of any region. Latin American service providers should make sure their DNS servers can receive traffic over IPv6 so that their customers can still reach them.

Heartbleed SSL Bug, DNS and the Perils of a Monoculture

The Heartbleed flaw in OpenSSL highlights a critical vulnerability in the structure of the Internet: the lack of diversity in the critical software and hardware that run everything.

Use of “free” open source software and commodity hardware enables many applications and services to be delivered inexpensively, but it also leaves critical infrastructure open to exploitation by a single attack or bug. No system can be resilient if a single point of failure can take it out. To be resilient, the Internet needs redundancy and genetic diversity in its systems.

Crypto libraries are just one example of this genetic software bottleneck. Another is web servers, which are dominated by Apache and Microsoft with a roughly 70% combined market share.

A third example is the DNS. Over 85% of the DNS software in use today is BIND, which historically has disclosed a new critical vulnerability every few months. It is likely that multiple governments, and perhaps terrorist or organized-crime groups, could take it down in several ways.

The DNS sees everything that happens on the Internet: every web lookup, every email, every phone call and text. Nothing in the cyber world works, and nothing is secure, without secure, reliable DNS. Yet a critical exploit of BIND would effectively take out the Internet. No phone, no navigation, no Internet of anything.

What would you do in such a doomsday scenario? Wait for your bank statement by snail mail. Drop by the bank teller line to make a deposit or withdrawal. Pull out those CDs to listen to music. Fax those documents. Keep the Rand McNally map handy. Got a dictionary or encyclopedia?

Maybe civilization goes on but it sure would be a mess until fixed.

The Heartbleed bug is an inconvenience, but it serves as a wake-up call. If we are serious about reducing our exposure to potential cyber catastrophe, we need to diversify critical infrastructure, starting with the DNS. Secure64 does not use BIND or OpenSSL in its DNS products.

Developing a Framework to Improve Critical Infrastructure Cybersecurity

Here are thoughts from our CTO, Bill Worley PhD, on properly securing critical infrastructure in our highly connected world. They are particularly applicable given what we have seen in the last year: increased DDoS attacks focused on the DNS, and systems compromised for the theft of intellectual property. Read more

DNSSEC Deployment Lags

Commercial sites have been slow to adopt DNSSEC, even though it is the best solution for preventing exposure to site hijacking. This type of hijacking is possible because of a major flaw in DNS, discovered by security researcher Dan Kaminsky five years ago, that enables cache poisoning. Read more
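
As a quick illustration of the client side (a sketch using the dnspython library, with 8.8.8.8 standing in for a validating resolver), you can check whether your resolver reports DNSSEC validation by looking at the AD (Authenticated Data) flag in its responses:

```python
import dns.flags
import dns.message
import dns.query  # pip install dnspython

# Query for an A record with the DO bit set, asking for DNSSEC records.
query = dns.message.make_query("example.com", "A", want_dnssec=True)
response = dns.query.udp(query, "8.8.8.8", timeout=5)

# The AD flag is set only when the resolver validated the answer via DNSSEC.
print("DNSSEC validated:", bool(response.flags & dns.flags.AD))
```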

A New DNS Vulnerability

A new DNS vulnerability, CVE-2012-5688, was found in BIND yesterday. It is listed as critical.

This adds to the list of major vulnerabilities discovered in BIND. Since February 2011, a new high-severity vulnerability has been found on average every 60 days, a worrisome trend for DNS administrators concerned with the increasing sophistication and level of attacks. None of these vulnerabilities has affected Secure64 DNS servers. Read more

Need More Secure Operating Systems

Kaspersky Lab has announced that it is developing a secure operating system for protecting SCADA (supervisory control and data acquisition) and ICS (industrial control systems). These systems are core to most utility companies and industrial infrastructure, controlling such things as valves and switches. Read more

DDoS Attacks Get Serious

In the last couple of weeks there has been a big jump in DDoS attacks focused on the websites of major US financial institutions. Those reportedly attacked include Wells Fargo, JPMorgan Chase, Bank of America, PNC, and U.S. Bank. A distributed denial-of-service attack, better known as a DDoS… Read more

GoDaddy’s DNS Outage Exposes the Need for DNS Redundancy

The GoDaddy DNS outage had widespread effects. Hacktivists claimed to have caused it, but Interim CEO Scott Wagner said the service outage was due to a series of internal network events that corrupted router data tables.

No matter what the cause, whether it was internal error or external attack, the outage… Read more

Botnets, Route Hijacking, and Other Security Threats

Cyber crime has become big business. In the past, hackers tended to work alone or in small groups, and their impact was usually quite minimal. Sometimes attacks were carried out just for bragging rights rather than monetary gain, and often had no adverse effects on most of the general public. Read more