Krebs on Security: In-depth security news and investigation


Email-Based Malware Attacks, July 2012

Last month's post examining the top email-based malware attacks received so much attention and provocative feedback that I thought it was worth revisiting. I assembled it because victims of cyberheists rarely discover or disclose how they got infected with the Trojan that helped thieves siphon their money, and I wanted to test conventional wisdom about the source of these attacks.

Top malware attacks and their antivirus detection rates, past 30 days. Source: UAB

While the data from the past month again shows why that wisdom remains conventional, I believe the subject is worth periodically revisiting because it serves as a reminder that these attacks can be stealthier than they appear at first glance.

The threat data draws from daily reports compiled by the computer forensics and security management students at the University of Alabama at Birmingham. The UAB reports track the top email-based threats from each day, and include information about the spoofed brand or lure, the method of delivering the malware, and links to Virustotal.com, which show the number of antivirus products that detected the malware as hostile (virustotal.com scans any submitted file or link using about 40 different antivirus and security tools, and then provides a report showing each tool's opinion).

As the chart I compiled above indicates, attackers switch the lure or spoofed brand quite often, but popular choices include such household names as American Airlines, Ameritrade, Craigslist, Facebook, FedEx, Hewlett-Packard (HP), Kraft, UPS and Xerox. In most of the emails, the senders spoofed the brand name in the "from:" field, and used embedded images stolen from the brands being spoofed.

The one detail most readers will probably focus on in this report is the atrociously low detection rate for these spammed malware samples. On average, antivirus software detected these threats about 22 percent of the time on the first day they were sent and scanned at virustotal.com. If we take the median score, the detection rate falls to just 17 percent. That's actually down from last month's average and median detection rates, 24.47 percent and 19 percent, respectively.

Unlike most of the poisoned missives we examined last month — which depended on recipients clicking a link that took them to a site equipped with an exploit kit designed to invisibly download and run malicious software — a majority of attacks in the past 30 days worked only when the recipient opened a zipped executable file.
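For readers who want to reproduce figures like these, the per-sample numbers come straight from Virustotal.com reports. Below is a minimal sketch of pulling detection counts programmatically and summarizing them the same way (average and median). It assumes the VirusTotal public API v2 of that era, which may since have been superseded; the API key and hash list are placeholders, not real values.

    # A sketch of pulling per-sample detection counts from the VirusTotal
    # public API (v2, the era-appropriate version) and summarizing them the
    # way the article does. API key and hashes are placeholders.
    import requests
    from statistics import mean, median

    API_KEY = "YOUR_VT_API_KEY"            # placeholder, not a real key
    SAMPLE_HASHES = ["<md5-or-sha256>"]    # hashes of the spammed samples

    def detection_rate(file_hash):
        """Percent of engines flagging the sample, or None if VT has no report."""
        resp = requests.get(
            "https://www.virustotal.com/vtapi/v2/file/report",
            params={"apikey": API_KEY, "resource": file_hash},
            timeout=30,
        )
        report = resp.json()
        if report.get("response_code") != 1:   # 1 means VT knows the file
            return None
        return 100.0 * report["positives"] / report["total"]

    rates = [r for r in (detection_rate(h) for h in SAMPLE_HASHES) if r is not None]
    if rates:
        # Both figures are worth reporting: a few well-detected samples can
        # drag the average above the typical (median) case.
        print(f"average detection: {mean(rates):.2f}%")
        print(f"median detection:  {median(rates):.2f}%")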

Bogus image spoofs UPS in an attack on June 25, 2012

I know many readers will probably roll their eyes and mutter that anyone with half a brain knows you don't open executable (.exe) files sent via email. But many versions of Windows hide file extensions by default, and the attackers in these cases frequently change the icon associated with the zipped executable file so that it appears to be a Microsoft Word or PDF document. And, although I did not see this attack in the examples listed above, attackers could use the built-in right-to-left override feature (a Unicode control character that Windows honors when displaying file names) to make a .exe file look like a .doc; a brief sketch of this trick appears at the end of this post. Obviously, a warning that the user is about to run an executable file should pop up if he clicks a .exe file disguised as a Word document, but we all know how effective these warnings are (especially if the person already believes the file is a Word doc).

There was at least one interesting attack detailed above in which the malicious email was booby-trapped with an HTML message that would automatically redirect the recipient's email client to a malicious exploit site if that person was unfortunate enough to have merely opened the missive in a client that had HTML rendering enabled. Many Webmail providers now block rendering of most HTML content by default, but in email client software like Microsoft Outlook or Mozilla Thunderbird it is often enabled by default, or users enable it manually.

A copy of the spreadsheet pictured above is available in Microsoft Excel and PDF formats.

Tags: American Airlines, Ameritrade, Craigslist, cyberheist, Facebook, FedEx, Hewlett-Packard (HP), Kraft, Microsoft Outlook, Microsoft Word, Mozilla Thunderbird, pdf, university of alabama at birmingham, ups, Virustotal.com, windows, Xerox

This entry was posted on Tuesday, July 31st, 2012 at 1:35 am and is filed under A Little Sunshine, Latest Warnings. You can follow any comments to this entry through the RSS 2.0 feed. Both comments and pings are currently closed.
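A brief aside on the right-to-left override trick mentioned above: it comes down to a single Unicode control character, U+202E, which reverses the on-screen rendering of everything after it. A minimal sketch follows, using a hypothetical filename.

    # The right-to-left override (RLO) trick in miniature. U+202E reverses
    # the rendering direction of everything after it, so this hypothetical
    # attachment is *displayed* as "invoiceexe.doc" by RTL-aware UIs (such
    # as Windows Explorer) while the real extension is still .exe.
    RLO = "\u202e"
    real_name = "invoice" + RLO + "cod.exe"

    print(repr(real_name))             # 'invoice\u202ecod.exe' is the true name
    print(real_name.endswith(".exe"))  # True: Windows still runs it as an .exe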

45 comments

1.

BT July 31, 2012 at 2:15 am

I would be interested in seeing what the common user experience is to get infected with one of these. Are infected users using Outlook or browser-based email? Do they get warnings when attempting to execute the unsigned executables? How many, how stern?

EJ July 31, 2012 at 9:46 am

In my limited experience, I'll see these things come to my Yahoo! email account, and the associated Norton scanning that Yahoo provides will blindly proclaim the attachment safe – its AV scanning is so ineffective, taking 3 or more days before it'll recognize these attachments as a threat. I'm wondering if Hotmail account users are having similar experiences. With my Gmail account, I'll never see them appear, even in the Spam folder. With Outlook, you're only as good as your AV product (outside of your own smarts, which a good portion of the computing population is lacking), which, based on the VT stats mentioned in the article, isn't very good in the critical first days of release.

Neej July 31, 2012 at 2:50 pm

You can't really blame AV applications per se; crooks test their creations against the most popular applications specifically so heuristic detection will not pick up the content as malicious – so the malware has to be verified as malicious by researchers, which takes time. Perhaps you can blame the *marketing* of AV products, but then again the average user is going to avoid a product that tells the truth (it won't protect you against new threats for some period) and go for the product that makes the claims we're all so familiar with (the user is protected, full stop). So what can you do? You should never assume a file is clean unless it comes from a safe source, such as a vendor's website. You should certainly never assume a file is clean because multiple AV products don't detect it as malware.

JCitizen July 31, 2012 at 3:55 pm

@EJ; I can attest that Hotmail does an excellent job of blocking the images of all spoof emails I receive (which isn't often). The emails look just like HP or PayPal emails from legitimate sources, but Hotmail always manages to block the executable objects and images from these emails. So generally I already know something is wrong. I did get fooled by just one that got through, a few years ago, when the filters didn't work as well – but LastPass caught the fact that the URL didn't match the site, so of course I couldn't log on to the fake/phishing site. Needless to say, I rarely follow links in any email now, even from trusted sources. It is just as easy to simply navigate directly to the site to follow up on email subjects. When I do follow such email, it is in my honeypot lab, and I'm looking for trouble!

2.

bruce July 31, 2012 at 3:10 am

Does GMail actually block the transmission of all .exe files [which can be a real pain …], or does it only block those .exe files that are actually labeled as an .exe file?

EJ July 31, 2012 at 8:58 am

Just ran a test using explore.exe. Blocked with a file type of .exe, but allowed through with a file type of .exx.

qka July 31, 2012 at 10:47 am

I had similar experiences with an in-house e-mail system a few jobs back. The mail administrators blocked .exe and .zip files, and maybe other types. The need to share .zip files was large, so everyone I worked with learned to share .piz files. A social engineer could do the same, giving the recipient a reasonable excuse why .exx and .piz files need to be renamed before being opened. Or use zip files with passwords, because the information is "confidential", to further confound virus scanning and blocking.
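An aside on the renaming trick discussed in this thread: extension filters look only at the file's name, but file types can be recognized by their leading bytes regardless of what the file is called. Here is a minimal sketch of that kind of content sniffing; the signature table is deliberately tiny, and the example path is hypothetical.

    # Content-based type sniffing: the .exx/.piz renaming trick defeats
    # name-based filters, but not a check of the file's leading bytes.
    # Windows executables begin with b"MZ"; ZIP archives with b"PK\x03\x04".
    MAGIC = {
        b"MZ": "Windows executable (PE)",
        b"PK\x03\x04": "ZIP archive",
    }

    def sniff(path):
        """Identify a file by its leading bytes, ignoring its name entirely."""
        with open(path, "rb") as f:
            head = f.read(4)
        for signature, label in MAGIC.items():
            if head.startswith(signature):
                return label
        return "unknown"

    # sniff("report.exx") would still report "Windows executable (PE)" no
    # matter how harmless the extension looks. (A password-protected zip,
    # as noted above, hides its *contents* from scanners, but the container
    # itself is still identifiable as a ZIP.)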

3.

Jindroush July 31, 2012 at 3:19 am

I have to repeat what I wrote on the previous articles. VT shows more or less only STATIC detections. It usually does not show cloud results, and it does not run all the heuristics, because the metadata about the sample are missing; the emulation may not be deep enough, again because it's missing the metadata; and sandboxing does not run. Why should anybody care about the level of protection of static detections without taking the whole product's protection into account? The wording of your sentences hides this – it basically just bashes AVs, without saying that it's not a _real world scenario_, and I'm not sure that most of your readers are aware of that. Computing accurate, super-exact numbers based on ridiculous assumptions is wrong. See you next month with exactly the same comment.

Uzzi July 31, 2012 at 4:37 am

Okay, you have a point… but most AV (in terms of installations) didn't detect this stuff anyway – according to user reports to abusedesk. @Brian: So your "friends" are back again… Bredo > Grum > Cridex (I like Sophos and Fortinet still calling it Bredo)

Uzzi July 31, 2012 at 4:58 am

Btw: same picture in Europe w/ localized brands and language (DHL…), most times sending several mails per spamtrap each day… oO

BrianKrebs July 31, 2012 at 9:21 am

Good to see you again Jindrich. I meant to ask you this earlier, but the last time we spoke you were arguing that the tests of products paid for by the industry put actual "live" detection rates at upwards of 90 percent. Assuming you still believe that, what, in your view, accounts for the missing 70 percent? Heuristics?

Jindrich kubec July 31, 2012 at 3:34 pm

Hi Brian, yes: the combination of cloud prevalences, source URL blocking, non-executable detections, metadata detections and/or possible behaviour analysis of the running sample all add up to much higher rates – see also my reply to Neej. I really can't speculate how much higher, because this is the work of independent testers – and we are seeing 90%+ detections in such tests. My beef with this reporting of yours is the way it's written. The static detections are simply the _WORST POSSIBLE_ results, not 'usual results', and if the static detections are 20 percent or so, it does NOT mean that 80 percent of users who click the attachment will get infected. They won't.

Neej July 31, 2012 at 3:01 pm

What's the point of this stance you continue to take, exactly? Cloud data is irrelevant and largely a marketing exercise involving a buzzword. Heuristics are also irrelevant unless you can evaluate them against detections that were actually malware and not false positives. The points about metadata I don't understand. Sandboxing is a preventative measure and has nothing to do with detection rates. You're seriously mistaken if you believe that AV provides protection against new threats. "Professionals" don't waste their time spreading malware that is detected by heuristics (again, making your point superfluous and silly); it is tested against a large number of popular AV applications, not against VT.

Jindrich kubec July 31, 2012 at 3:18 pm

Neej, your opinions are the exact reason why I protest. So, point by point: a) Cloud is not irrelevant. If the file has low prevalence, AV may apply much harder heuristics to it. In static testing this is not done on VT, for bandwidth purposes and also so as not to skew the prevalences. Very relevant. b) Heuristics are great – we can fine-tune them in a way which makes AV testing much harder for the bad guys (but that also applies to VT). Very relevant. c) Metadata. If I have "a single executable in a zip, coming from email", I have a strong signal I can feed to the heuristics, making them much more paranoid. When I get the same sample from VT under the filename sample.exe, that metadata is missing. Very relevant. d) Sandboxing/behaviour analysis can check the sample's behaviour dynamically and then kill it, informing the user. Very relevant. The only way to correctly test AVs against such threats is simply to run the attachment on a real machine and then check whether it got through or not. Checking this on VT is nonsense; it only manifests the _WORST POSSIBLE_ results, not the best results.

TJ August 1, 2012 at 3:59 am

I assume you'll attempt to respond in as politically correct a manner as possible, hoping not to offend the good people at VT. But for all intents and purposes, there is absolutely no way to read your critique of VT without coming to the conclusion that, from your perspective, scanning files with VT is an utterly pointless enterprise.

Jindrich kubec August 1, 2012 at 4:11 am

Nope. I think VT is a valuable service, but it must be used with knowledge while interpreting the results. As I wrote somewhere else in this thread – VT does confirm the detection (or FP), but does not always confirm the miss (false negative) – exactly for the reasons I wrote. So if the detection rate on VT is 20%, it DOES NOT mean that AV on users' machines will let through 80% of the infection attempts. That's what I read in Brian's article, and that's what's wrong in Brian's article.

TJ August 1, 2012 at 11:23 pm

If (as you argue) VirusTotal scan detection rates shouldn't be compared to actual AV apps (due to the inherent deficiencies of VT's static file scanners), why should anyone waste their time with a VT scan? This is what I get from your comments: At best, VT can help identify a false positive and "minimally" increase one's confidence that a scanned file isn't actually malicious. At worst, based on the mammoth gulf between Brian's 17% detection rate and your 90% detection rate, VT is a wholly inadequate arbiter of safe vs. unsafe and is most likely only providing its users with a false sense of security.

kurt wismer August 1, 2012 at 11:47 pm

"why should anyone waste their time with a VT scan?" virustotal is a useful *starting* point in investigating a sample. if it tells you it found something, that's knowledge you didn't have before. if it tells you it didn't find anything, that doesn't mean there isn't anything there, nor does it mean that the products it's using wouldn't have found something under other circumstances. fundamentally, however, virustotal is geared towards enlightening you about samples, not enlightening you about anti-virus products. anyone who tries to infer something about AV products from virustotal results is making an egregious error. "At worst, based on the mammoth gulf between Brian's 17% detection rate and your 90% detection rate, VT is wholly inadequate arbiter of safe vs. unsafe and is most likely only providing its users with a false sense of security." virustotal is definitely NOT an adequate arbiter of safe vs. unsafe, and if anyone told you different they were filling your head with lies. this is a classic problem of people with shallow knowledge passing on exaggerated and incorrect information about some security technology to others, and that exaggeration taking on a life of its own.

TJ August 1, 2012 at 11:59 pm

"anyone who tries to infer something about AV products from virustotal results is making an egregious error." Well. I guess that's a shot across Brian's bow. Because that's exactly what Brian has done in this article and the previous article.

kurt wismer August 2, 2012 at 12:26 am

yes, well, i guess it's not the first shot: http://anti-virus-rants.blogspot.com/2011/04/its-not-detection-rate.html i hope he takes it as constructive criticism. it's certainly not meant to be personal – and i'm sure he knows how to reach me if he feels it was.

bubba August 3, 2012 at 11:19 am

"anyone who tries to infer something about AV products from virustotal results is making an egregious error." I think this is an interesting debate, but I have to respectfully disagree with this take, for the simple reason that other AV tests available to the public run on clean images, with up-to-date patches, that aren't running a multiplicity of other products, applications, services, etc., which is rarely the case in practice. There is no accounting for conflicting processes and applications, registry errors, and the like, which leave AV products with diminished capabilities. Here is a typical scenario: a user on a PC that is a couple of years old has a job to do, a deadline to meet, and a family they want to come home to for dinner. Their computer pops an error message, runs slow, crashes, whatever. The last thing they want to do is log a help-desk ticket and patiently wait for someone to get back to them to resolve the issue. They try to troubleshoot the problem and invariably will start tinkering with the AV settings – turning features and functionality off one at a time to try to "fix" their issue, complete their work, and go home. Rarely do they go back and turn those items back on. I know because I literally see it every single day. I'm not saying this happens with every user or even a majority of users, but it does happen on a regular basis. In some cases AV is even the culprit, or at least part of the issue, and then you have administrators disabling or altering functionality as a default setting. If you have responsibility for protecting an enterprise, there is tremendous value in knowing both delta points – worst-case and best-case scenarios – because the reality will be that you have AV deployments at both extremes, with the average landing somewhere in the middle. Anyone who says otherwise, in my humble estimation, is out of touch with what is happening in the trenches. In any case, it's kind of a silly point to quibble over. Until AV can deliver a six-sigma detection rate for zero-day threats in even best-case deployments, the most you can hope for from your AV solution is that it can stem the tide of nasty stuff getting through.

kurt wismer August 3, 2012 at 12:10 pm

@bubba: (had to reply to myself because there was no reply button on your comment – guess this has gone too deep) testing orgs test under ideal circumstances because the metric they are interested in is what level of protection the product is *capable* of providing. measuring what level of protection a product actually provides when misconfigured is a fool's errand because there are too many different ways to misconfigure the products, which consequently have too many different possible outcomes on the products' protective capabilities. furthermore, the configuration used by virustotal does not match any misconfigured environment you're likely to see in a real user's system (virustotal uses command line tools; how many users have you seen do that?). so even if we were going to try to measure protection in a typical misconfigured environment, virustotal would STILL be a bad analog and not give us an accurate measure. i reiterate: the people who make virustotal say you shouldn't use virustotal for testing AV. they've been saying it for at least 5 years now. they, more than anyone here, know what is and is not an appropriate use for their service. why is it so hard for people to accept this?

dschrader August 1, 2012 at 12:48 pm

Jindroush is correct; VirusTotal is a poor measure of the effectiveness of AV products. I used to work at Symantec, so I had access to internal and third-party tests of detection rates. VT runs each file through the static file scanner provided by each AV vendor. However, 5-6 years ago many virus writers started focusing on rapidly mutating malware. So the AV companies, in turn, focused on non-static ways of detecting threats – heuristics, IPS, statistics looking at source (site reputation), frequency and other metrics (file reputation), spam filtering… none of which are kicked off by submitting files to VT. The point is: stop using VirusTotal to compare AV products. It's like comparing the safety of cars by just looking at car size – you really need to look at air bags, braking distance, crumple zones and so on to get the full picture.

BrianKrebs August 2, 2012 at 12:49 am

Who's comparing AV products?

4.

Jay Pfoutz July 31, 2012 at 5:00 am

Too bad even those with a full brain click on the executable links anyway.
