My friend and respected infosec colleague, Gary Warner, tweeted a link to a Computerworld column, Malware: war without end. I read the article, began the thread displayed to the right, and thought that the thread created a good opportunity to explore and comment in more detail than 140-character exchanges permit.
Certain assertions in the article frankly make me cringe; for example, this claim,
"there are no types of malware for which there are no defenses that we are currently aware of,"
reminds me of a first principle David Clark of MIT taught me to question years ago: does it scale to large populations? While I have no doubt that current security protections (AV, antimalware, firewalls, and IDS are mentioned) can provide adequate defenses at small to modest scale, there is ample evidence, including the statistics John Pescatore offers at the outset of this very article, that current security protections do not scale to large or autonomously operated networks or user communities.
This opening (mis)perception that defenses can scale appears to be corroborated by a subsequent claim:
"We no longer see the kinds of big spreading malware that we saw three or four years ago"
which calls attention to signature detection, behavior monitoring, blacklisting, and whitelisting as weapons that have provided "Wins so far".
In response, let me simply name Conficker, Kelihos, Srizbi, and ZeuS. Infection counts tell a different story. These weapons may be formidable on small to modest-sized and carefully managed networks or in controlled tests but they clearly are not working at Internet scale.
I'll make one more comment and encourage you to read the remainder of the column with a similarly critical eye:
"infections still happen. But even the nature of the infections seems to have reached a state of equilibrium"
An alternative interpretation of the current state is that attackers have found a profitable, low-overhead comfort zone and we haven't been able to force them to change. All the news regarding drops in infection rates and improved zero-day response times is quite encouraging; however, let's not overlook an uncomfortable truth:
Spammers respond to more effective antispam measures by (wait for it) increasing the magnitude of their spam campaigns to assure they profit even when only a tiny fraction of one percent of users falls victim to the attack.
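To make that point concrete, here is a minimal back-of-envelope sketch, not from the article, of why volume is such an effective lever for spammers. All of the figures are hypothetical assumptions chosen only for illustration:

```python
def campaign_profit(messages_sent, victim_rate, revenue_per_victim, cost_per_message):
    """Expected profit of a spam campaign under a simple linear model."""
    revenue = messages_sent * victim_rate * revenue_per_victim
    cost = messages_sent * cost_per_message
    return revenue - cost

# Assumed baseline: 10 million messages, a 0.01% victim rate, $30 per victim,
# and $0.00002 per message to send (rented botnet capacity is nearly free).
baseline = campaign_profit(10_000_000, 0.0001, 30.0, 0.00002)

# Suppose better filtering cuts the victim rate tenfold; the spammer simply
# sends ten times as many messages and lands in roughly the same place,
# because the sending cost remains negligible.
after_filtering = campaign_profit(100_000_000, 0.00001, 30.0, 0.00002)

print(f"baseline profit:         ${baseline:,.2f}")
print(f"after 10x volume bump:   ${after_filtering:,.2f}")
```

Under these assumed numbers the spammer's profit barely moves, which is why a falling per-message success rate alone doesn't force attackers out of their comfort zone.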
I'm inclined to believe that this behavior is more a factor than security vendors choose to acknowledge. Or as Gary Warner stated in the thread,
"if the state of the art is so amazing, why are we losing the malware battle?"
All excellent points, Andre.
Let's look at these carefully:
1) Attackers build and test routinely and often. This rigor is not as widely adopted by infosec/IT.
2) Relying on "install, configure and forget" security technology is the antithesis of agility.
3) Relying on security technology excuses senior management from hiring analytical and insightful security professionals in sufficient numbers. We have some, but not nearly enough of these types of people.
Thanks for your comment!
Posted by: The Security Skeptic | Wednesday, 04 December 2013 at 11:28 AM
That's an easy answer. Professionals are failing to integrate SOTA defenses and decision makers are failing to fund full integration with their PMOs.
Additionally, you must consider that unwanted adversaries build and test their malware systems and subsystems early and often. If you don't have the same agility in your organization, then their R-squared beats yours each time. The "killav" command in Meterpreter comes to mind, but so do the Exchange policy and AirWatch bypasses on iOS, especially when used in combination with a Faraday bag. Unwanted adversaries are analytic and highly insightful professionals, but information security types typically are not.
Posted by: Andre Gironda | Monday, 02 December 2013 at 11:57 AM