
February 2013

The New Face of IP Address Scanning

Could attackers change their IP address scanning technique to scan a larger address space with more stealth and identify hosts or services that are vulnerable to attack more efficiently? It's absolutely possible. Let me explain how.

Attackers and penetration testers use various scanning techniques to identify hosts in target networks. In a basic scan, an attacker or tester targets an IP subnet (an address block or range of IP addresses), sends traffic to the addresses within that block or range, and compiles a list (an enumeration) of the hosts that respond.
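
To make the mechanics concrete, here is a minimal Python sketch of such a basic scan (the subnet, probe port, and timeout are illustrative assumptions; real tools such as nmap offer many more probe types):

    # Minimal sketch of a basic scan: probe every address in a block
    # and enumerate the hosts that respond. A TCP connection attempt
    # to port 80 stands in for the probe traffic here.
    import ipaddress
    import socket
    from concurrent.futures import ThreadPoolExecutor

    def probe(addr, port=80, timeout=1.0):
        """Return the address if a TCP connection succeeds, else None."""
        try:
            with socket.create_connection((str(addr), port), timeout=timeout):
                return str(addr)
        except OSError:
            return None

    def enumerate_hosts(cidr):
        """Probe every host address in the block; list the responders."""
        targets = list(ipaddress.ip_network(cidr).hosts())
        with ThreadPoolExecutor(max_workers=64) as pool:
            return [a for a in pool.map(probe, targets) if a]

    print(enumerate_hosts("192.0.2.0/24"))  # TEST-NET-1 documentation range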

[Image: Scanner, by Mike Licht]
Attackers or testers use different kinds of network traffic (ping scans, TCP SYN scans, UDP scans) to meet their scanning needs. For example, a network admin who wants speed or efficiency may choose a ping scan, which is fast but easy to detect and more likely to miss hosts behind firewalls. An attacker may choose a TCP SYN scan, which is slow but more reliable and (when randomized) harder to detect. By issuing application-specific messages, the attacker can also identify services -- Web, mail, VoIP, etc. (see Port scanning). Once this intel is gathered, the attacker can probe the enumerated systems for specific vulnerabilities.
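
As a rough illustration of service identification, the sketch below tries a handful of well-known TCP ports on one host and records any greeting banner. The port-to-service map, target, and timeouts are assumptions for the example; a UDP-based service such as SIP would normally need a UDP probe instead:

    # Illustrative service identification: try well-known ports and
    # record which services appear to be listening, plus any banner.
    import socket

    COMMON_PORTS = {80: "http (Web)", 25: "smtp (mail)", 5060: "sip (VoIP)"}

    def identify_services(host, timeout=1.0):
        found = {}
        for port, service in COMMON_PORTS.items():
            try:
                with socket.create_connection((host, port), timeout=timeout) as s:
                    s.settimeout(timeout)
                    try:
                        banner = s.recv(128).decode("latin-1", "replace").strip()
                    except OSError:
                        banner = ""  # listening, but sent no greeting
                    found[port] = (service, banner)
            except OSError:
                pass  # closed or filtered
        return found

    for port, (service, banner) in identify_services("192.0.2.10").items():
        print(f"{port}/tcp {service}: {banner!r}")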


Firewalls and intrusion detection systems can generally detect network scans by monitoring, for example, how frequently traffic arrives from the same origin addresses, whether that traffic targets the same destination port (http/80, smtp/25) as it tries each address in the monitored IP block, and whether the destination addresses increment monotonically or follow another readily observable pattern. Detection using parameters of this kind is manageable because the monitor only has to track a modest number of possible host addresses in any given network.
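
A toy detector built on exactly these heuristics might look like the following sketch; the window size and thresholds are illustrative, not tuned values from any real product:

    # Flag a source that touches many distinct destinations in a short
    # window, or whose destinations increase monotonically.
    import time
    from collections import defaultdict

    WINDOW = 60.0    # seconds of history to keep per source
    MAX_DESTS = 50   # distinct destinations tolerated per window

    events = defaultdict(list)  # src -> [(timestamp, dst_as_int)]

    def ip_to_int(ip):
        a, b, c, d = (int(x) for x in ip.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d

    def observe(src, dst, now=None):
        now = time.time() if now is None else now
        hist = [e for e in events[src] if now - e[0] <= WINDOW]
        hist.append((now, ip_to_int(dst)))
        events[src] = hist
        dests = [d for _, d in hist]
        if len(set(dests)) > MAX_DESTS:
            return f"ALERT {src}: {len(set(dests))} destinations in {WINDOW:.0f}s"
        if len(dests) >= 10 and dests == sorted(set(dests)):
            return f"ALERT {src}: monotonically increasing destinations"
        return None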

Now back to my original question: how could you change network scanning to operate with more stealth across a larger address space -- say, the entire IPv4 space (the /0 prefix, 0.0.0.0/0)? You might consider tossing the conventional scanning behaviors out the window. For example:

  • Don't monotonically increase addresses from the low-order bytes ("d", and possibly the low-order bits of "c", of the a.b.c.d "dotted quad" representation of IPv4 addresses); one non-monotonic ordering is sketched just after this list. 
  • Don't scan the same space too quickly (over a relatively short time interval). 
  • Don't scan from spoofed IPv4 addresses. 
  • Don't scan too frequently from the same legitimate IPv4 blocks or ASes (autonomous systems). 
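
One illustrative way to satisfy the first bullet (this is not the botnet's method, which is described next) is to walk a full-period linear congruential permutation of the 32-bit address space, visiting every address exactly once in an ordering that looks unpredictable:

    # Full-period LCG over 2**32 (Hull-Dobell conditions: c odd,
    # a - 1 divisible by 4), so every IPv4 address appears once.
    MOD = 2 ** 32
    A, C = 1664525, 1013904223

    def permuted_addresses(seed=0, count=5):
        x = seed
        for _ in range(count):
            x = (A * x + C) % MOD
            yield ".".join(str((x >> s) & 0xFF) for s in (24, 16, 8, 0))

    print(list(permuted_addresses()))  # non-monotonic, non-repeating targets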

These are characteristics that researchers from the Cooperative Association for Internet Data Analysis (CAIDA) and the University of Naples Federico II observed from CAIDA's darknet over a 12-day period, correlated with other data sources, and attributed to a botnet called Sality. In their paper, the researchers describe how the botnet's operators distributed and coordinated a scan for VoIP servers across the entire IPv4 address space from about 3 million bots. The botnet used a scanning strategy "based on reverse-byte sequential increments of target IP addresses," the paper says. Unlike scans that sweep through the lowest-order (host) bits of a given IP network, the botnet operators parceled out chunks of the higher-order (network) bits to different bots, directed each bot to scan only portions of the host bits in its assigned chunks, and collected the addresses of hosts that each bot identified. This sophisticated "orchestration" produces a scanning pattern that achieves broad coverage (with some overlap between bots) yet is unlikely to be detected by current automated methods.
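
Here is a minimal sketch of what reverse-byte ordering looks like (the per-bot division of the high-order bits that the paper describes is omitted): a counter increments sequentially, but its bytes are reversed before being used as a target address, so consecutive probes land in different /8 networks instead of sweeping a single subnet.

    def reverse_byte_target(counter):
        """Map a sequential counter to an IPv4 address, bytes reversed."""
        b = (counter & 0xFFFFFFFF).to_bytes(4, "big")
        return ".".join(str(octet) for octet in reversed(b))

    for n in range(4):
        print(reverse_byte_target(n))
    # 0.0.0.0, 1.0.0.0, 2.0.0.0, 3.0.0.0 -- neighbors in counter space
    # are far apart in address space, defeating monotonic detectors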

This discovery will attract considerable attention because the scanning technique is a radical departure from previously known techniques, and no one really knows how long it has been in use. The discovery is worrisome, but you can best help your partners by putting the threat into context. Share these messages:

  • Network scans may become more diverse and sophisticated in the future. 
  • Current scan detection methods won't detect scans of this kind. 
  • The detection methodology the researchers used required collecting and correlating data from multiple, substantial sources, and it's unclear how or when security systems will be able to detect such scans. 

Temper your message by reminding your IT staff or business partners that network scanning is merely the means by which attackers identify and acquire targets. In other words, worry less about whether you are detecting scans and more about whether the services running on your hosts -- the things any scan would identify -- are hardened against attack.

This is an updated version of an article published in December 2012 at The Champion Community.


Research & Victim Phishing Reports Tell Same Sad Story

[Photo by ~Aphrodite]
The Anti-Phishing Working Group (APWG) has released a study of phishing attacks detected in the first half of 2012 and a second study of reports by phishing victims collected over nearly two years. The Global Phishing Survey 1H2012: Trends and Domain Name Use draws on a large sampling of confirmed phishing URLs. The Web Vulnerabilities Survey September 2012 draws on reports submitted by organizations whose websites were compromised and subsequently used to host phishing attacks.


While the studies use entirely different data sets, they share several findings. In both reports, researchers found that the average phishing uptime (how long a phishing website remains up once it is detected) is less than a day. The Global Phishing Survey also reports a median uptime of five hours and 45 minutes.

While investigators note that both the average and the median are record lows, they also call attention to the same sobering realities of phishing attacks:

"The longer a phishing attack remains active, the more money the victims and target institutions lose."

"The organization that has its website compromised, and the company or brand that is phished, are victims."

The conclusion: early detection, notification, and prompt response by website operators are doubly important.

The Global Phishing Survey's historical data, collected since 2008, provides important context for phishing attack victims: phishers use compromised web servers more often than servers they host themselves under domain names they register directly. The reason, according to the study, is that "reputation services block domains and subdomains quickly and registrars and registries are more responsive to malicious registrations and have increased their fraud controls."

The lesson? Every legitimate website is a potential target, so site operators need to consider and implement some form of web application firewall, server hardening, server-grade host intrusion detection, and network monitoring.

One of the findings in the Web Vulnerabilities Survey that is most difficult to corroborate is how attackers were able to hack into websites. (See Logging: A Vanishing Art Form and Elements of an Effective Logging Game Plan.)
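
In that spirit, even a crude pass over web server access logs can surface the probes that precede a compromise. A hedged sketch (the log path and patterns below are assumptions for illustration, not a definitive signature set):

    # Grep an access log for common exploit-probe patterns.
    import re

    SUSPICIOUS = [
        r"wp-login\.php",    # WordPress brute-force attempts
        r"xmlrpc\.php",      # WordPress XML-RPC abuse
        r"\.\./",            # directory traversal
        r"union\s+select",   # crude SQL-injection indicator
    ]
    pattern = re.compile("|".join(SUSPICIOUS), re.IGNORECASE)

    with open("/var/log/apache2/access.log", errors="replace") as log:
        for line in log:
            if pattern.search(line):
                print(line.rstrip())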

[Image by opk]
Read independently of the Global Phishing Survey, the victim reports don't paint a clear picture. However, if we correlate certain observations from the two studies, we can draw some interesting conclusions.


From the Global Phishing Survey, we see that:

  • Phishers have automated scripts and services that find and exploit large numbers of web servers using known vulnerabilities;
  • There are more exploitable web applications in use, particularly popular packages like WordPress and Joomla.

In the Web Vulnerability Survey findings, we learn that:

  • LAMP (Linux, Apache, MySQL, PHP) is the most popular web operating environment;
  • PHP is used by 78 percent of compromised websites;
  • Joomla or WordPress is used by more than 50 percent of compromised sites for content management.

Taken together, these findings suggest that web applications and site-management software are preferred attack vectors. For organizations, that means security teams should consider the secure deployment strategy mentioned in the Web Vulnerability Survey, which includes web application vulnerability scanning, secure application development, patch currency maintenance, and best practices for secure application configuration (for WordPress sites, see How to Protect Your Wordpress Site from Hackers and Use These Wordpress Plugins to Help Secure Your Site).
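
As one small, hedged example of a patch-currency spot check: many WordPress installs expose their version in a meta "generator" tag, which a script can compare against the release you expect to be running (the URL and version below are placeholders):

    import re
    import urllib.request

    EXPECTED = "3.5.1"  # illustrative; check wordpress.org for the latest

    def wordpress_version(url):
        """Scrape the version WordPress advertises in its generator tag."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read(65536).decode("utf-8", "replace")
        m = re.search(r'<meta name="generator" content="WordPress ([\d.]+)"', html)
        return m.group(1) if m else None

    found = wordpress_version("https://www.example.com/")
    if found and found != EXPECTED:
        print(f"Site reports WordPress {found}; expected {EXPECTED} -- patch!")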

These are only highlights of a large and diverse set of findings and recommendations that you and your partners will find in the APWG reports. The Global Phishing Survey is published twice a year, so it can be particularly valuable as a means to monitor the constantly changing phishing attack surface.

This is an updated version of an article published in November 2012 at The Champion Community.