
November 2009

Web site vulnerabilities

Brian Prince (eWeek.com) recently published an article that summarized the results of the latest web vulnerabilities report by WhiteHat Security researchers. On the same web page, you'll find links to the August 2008 and March 2008 WhiteHat Security Reports. What can we observe from this series of reports spanning 18 months?

Percent of sites with at least one security issue. In March 2008, 9 of 10 sites had at least one serious vulnerability. By August 2008, the number had declined to 82 percent, and the latest study reports 72 percent. This is an encouraging downward trend, but the fact that nearly 3 of 4 sites still have at least one serious exposure to attack is hardly comforting.

Sites vulnerable to cross-site scripting (XSS). In March 2008, 70 percent of sites were vulnerable to XSS. In August 2008, the same percentage of sites were found vulnerable. The most recent study finds 66 percent of sites vulnerable to XSS, and the team believes the prevalence of XSS is underrepresented in the study. That the downward trend is nearly flat here is disturbing enough, but the most recent study also indicates that the average remediation time (time to fix) for cross-site scripting vulnerabilities is a discouraging 67 days.

Time to remedy. Even more discouraging than the sad statistics I've cited are the reasons time to fix is so long. While I can accept that many organizations do not have the expertise to fully understand the complexity of vulnerabilities and appreciate the risk should these be exploited, I find other reasons in the list that reek of hubris or greed. Among the top "greed" reasons cited in the report you'll find "Feature enhancements are prioritized ahead of security fixes", "Solution conflicts with business use case" and "Compliance does not require it". In the hubris category, we find "Risk of exploitation is accepted". I claim the latter is hubris because few organizations actually assess risk correctly and fully factor in harm to others, especially if direct losses can be offset in some other manner, for example, through nuisance fees collected from customers buying on credit.

Brian Prince comments that the news isn't all that bad, citing that 17 percent of sites have never had a serious vulnerability. I give Brian credit for unbounded optimism and a laudable commitment to balanced reporting, but let's be honest: 17 percent is terrible. All of these statistics are terrible, and they are not improving. They are reversible, but only if the same organizations that today glibly accept the risk of exploitation and prioritize new features and haste to publish choose to reverse their thinking.

The economy is sluggish. No one expects a quick turnaround. This is an opportune time to fund security measures: seriously, who'll notice a larger allocation to security and IT among the already abysmal losses? Seize the moment, commit to reviewing and testing web application code prior to publication, test sites frequently for vulnerabilities, and reduce time to fix.


Avoid malware threats: do what malware analysis would do

Gary Warner has published an intriguing article describing how the UAB Spam Data Mine finds new malware threats delivered by email. Gary provides a step-by-step description of the automation implemented at UAB, using an example (this morning's Social Security version of Zeus). The procedure Gary outlines is very detailed, but not impossible for a human to repeat. In fact, I highly recommend studying it in the context of what any individual can do to reduce his own risk of falling victim to a malware download.

For example, the UAB spam analysis considers the Subject lines and Senders that arrive in 15-minute intervals and builds a picture over time of the prevalence of related themes or senders: in Gary's example, the repeated mention of "Social Security statement" in subject lines serves as a trigger. Related subject lines and senders are counted, and high counts trigger a more careful analysis. Can an individual follow this same sequence of actions and discern spam from safe email? Let's consider a real world example.
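First, though, here is a minimal sketch of that counting step, written in Python. The data layout (timestamp, sender, subject tuples), the threshold, and the normalization are my own assumptions for illustration; they are not UAB's actual pipeline or trigger criteria.

```python
from collections import Counter
from datetime import datetime

# Illustrative threshold -- the real UAB trigger criteria aren't published in the article.
SPIKE_THRESHOLD = 25

def bucket_15min(ts: datetime) -> datetime:
    """Round a timestamp down to the start of its 15-minute interval."""
    return ts.replace(minute=ts.minute - ts.minute % 15, second=0, microsecond=0)

def find_spikes(messages):
    """messages: iterable of (timestamp, sender, subject) tuples.

    Count subjects and senders per 15-minute bucket and return any that
    repeat often enough to deserve a closer look, mirroring the
    "Social Security statement" trigger described above."""
    subject_counts = Counter()
    sender_counts = Counter()
    for ts, sender, subject in messages:
        bucket = bucket_15min(ts)
        subject_counts[(bucket, subject.strip().lower())] += 1
        sender_counts[(bucket, sender.strip().lower())] += 1

    flagged = []
    for (bucket, value), count in subject_counts.items():
        if count >= SPIKE_THRESHOLD:
            flagged.append(("subject", bucket, value, count))
    for (bucket, value), count in sender_counts.items():
        if count >= SPIKE_THRESHOLD:
            flagged.append(("sender", bucket, value, count))
    return flagged
```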

Last night, I received a stream of email from "CVS Pharmacy". The subject lines, however, were "Weekend is near", "Busy or just absent?", and other random sentences that phishers use to attract attention to their phishing email. Looking at the message bodies, I see that they are exactly the same, each containing a banner claiming "Delivery department work across the Globe."
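That pattern, one identical body arriving under many unrelated subject lines, is something even a few lines of code can flag. Here is a hypothetical sketch of my own (not part of Gary's procedure) that hashes message bodies and reports any body that shows up under several different subjects:

```python
import hashlib
from collections import defaultdict

def bodies_with_many_subjects(messages, min_subjects=3):
    """messages: iterable of (sender, subject, body) tuples.

    Group messages by a hash of their body text and report any body that
    arrives under several distinct subject lines -- the "CVS Pharmacy"
    pattern described above."""
    subjects_by_body = defaultdict(set)
    for sender, subject, body in messages:
        digest = hashlib.sha256(body.strip().lower().encode("utf-8")).hexdigest()
        subjects_by_body[digest].add(subject)
    return {digest: subjects
            for digest, subjects in subjects_by_body.items()
            if len(subjects) >= min_subjects}
```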

At this point, the UAB spam analysis would visit hyperlinks in the suspicious message bodies and begin crawling for malicious executables buried amid the web pages.  DO NOT VISIT ANY LINKS. Rather, at this point, I - and you - should be saying, "Warning, Will Robinson! Mark or delete the message as spam".

Note that while humans are not automatons, we tend to respond automatically given repetitive tasks. Phishers and spammers rely on the fact that many email users haven't been programmed to respond in the manner that the UAB spam analysis automation responds. Now that you've seen how relatively simple a "spam aware" response can be, apply it!

Congratulations to Gary for providing a relatively straightforward and painless way for readers to improve their awareness of, and defenses against, spamming and phishing.