by Philipp Waechter, Senior Security Consultant, help AG
In recent years the number of malware variants has exploded, and so has the number of anti-malware signatures. Nearly every effective piece of malware today encrypts its own payload, comes with some form of randomization or polymorphic mutation, and the most sophisticated samples add rootkit-based obfuscation on top.
Antivirus vendors are responding fast, and the average daily signature download now reaches multiple megabytes. In 2007 the Symantec SEP database covered around 250,000 signatures. Today Symantec analyzes 55,000 new malware variants every day! Does that solve the problem? No.
75% of malware infects fewer than 50 machines worldwide. Chances are that your organization has infected computers it doesn’t know about.
Over time the IT industry has grown comfortable with a concept that is flawed by design: signatures are always one step behind, reacting to malware that has already been reported or detected. Now that variants and mutations are exploding, antivirus vendors can no longer keep up. Detection rates are declining dramatically (in some reports they barely reach 80%, depending on the sample set), and we haven’t even touched the topic of targeted attacks, where the attacker develops malware just for the intended target. You can buy undetected trojans on eBay for $200.
In other words, we are at the brink of collapse, and it is time to realize that signature-based anti-malware is becoming more of a placebo: “You take it because it makes you feel good.”
In theory there are a few options for tackling the problem:
• Code signing: The operating system runs only signed code, signed either by the vendor or by a trustworthy signing authority. Apple does this very successfully with its iOS operating system, where Apple acts as the sole code signing authority. On the Apple platform this has had a vendor “lock-in” effect, but on the other hand it has so far kept malware off the platform. Microsoft also insists on driver code signing for all its 64-bit operating systems, with Microsoft as the only signing authority, although this is done out of stability concerns rather than for malware prevention. The idea is that malicious code either never gets signed in the first place or can have its signature revoked easily.
• Sandboxing: Each application runs in its own container, a “sandbox”: it can do anything within its playground but has no, or very restricted, access to anything outside it. Apple does this on its iOS platform, and Adobe, for example, implemented the approach in the latest Acrobat Reader, where the rendering engine is firewalled off from everything else on the computer in order to block infections via PDF files.
• Runtime analysis: This method analyzes the behavior of applications, exploiting the fact that infections use typical, well-known load points to install themselves into the system; for example, a clean application almost never adds code to other executable files. Such monitoring can be done on the system itself while the application runs (with the risk of false positives), or on dedicated gateway systems. A relatively new approach is a whole “lab in a box” that runs virtual machines with all major platforms, from Windows XP to 7, Office 2003–2010, Reader 6.0–10.0 and so on, where each and every piece of downloaded code is executed against a multitude of unpatched software. If it shows malicious behavior, the download is flagged as insecure and can be submitted for further analysis, indirectly feeding the signature databases of all client computers. The only problem is that such gateway analysis takes time: depending on the download size, a verdict can take around 40 seconds.
• IP address reputation filtering: This approach exploits the fact that there is a limited number of IP addresses and providers on the internet, and that some addresses are well known to be good while some destinations are notoriously bad. It also turns out that most internet traffic goes to good destinations and only a tiny fraction goes to unknown or bad ones, which makes this a practical way of keeping internet traffic clean. It is, however, a preventive measure against network-borne infections and does not protect against other infection vectors, such as opening files from USB drives.
• File reputation via hashing: This is usually a cloud-based approach in which a hash value is generated for each and every file on a computer’s hard disk. It exploits the fact that most files stored on computers are well known and identical to files on millions of other machines. For example, each version of winword.exe has a distinct hash value that can be found on millions of computers worldwide. If you have a winword.exe with a hash value that nobody, or only a few other people, have, that is a clear warning sign that the file has been tampered with. Most antivirus vendors are now building trusted lists of applications, essentially databases of trusted files. Symantec has around 4 billion file hashes stored, and my estimate is that only a few million of those hashes belong to executables. I therefore expect that within a few years we will see AV technology shift from a blacklist to a whitelist approach, similar to code signing.
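The core logic of IP reputation filtering can be sketched in a few lines: classify each destination against known-good and known-bad network lists, and send everything unknown for deeper inspection. This is only an illustration of the idea; the network ranges below are made-up documentation addresses, not real reputation data, and a real product would query continuously updated cloud feeds instead of static lists.

```python
import ipaddress

# Hypothetical reputation feeds (documentation-only address ranges,
# chosen purely for illustration). A real gateway would refresh these
# from a vendor's cloud reputation service.
GOOD_NETWORKS = [ipaddress.ip_network("93.184.216.0/24")]
BAD_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]


def classify(ip: str) -> str:
    """Return a verdict for a destination IP: block, allow, or inspect."""
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in BAD_NETWORKS):
        return "block"       # notoriously bad destination
    if any(addr in net for net in GOOD_NETWORKS):
        return "allow"       # well-known good destination
    return "inspect"         # unknown: the tiny fraction needing analysis
```

Because most traffic hits the “allow” list, only the small remainder of unknown destinations has to pay the cost of deeper analysis.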
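The file reputation approach described above can likewise be sketched: hash each file and look the hash up in a prevalence database, flagging files that almost nobody else has. The database and threshold below are invented for illustration; a real implementation would query a cloud service such as a vendor’s reputation backend rather than a local dictionary.

```python
import hashlib
from pathlib import Path

# Hypothetical prevalence database: SHA-256 hash -> number of machines
# on which the file has been observed. The single entry here is the
# well-known hash of the empty file, used only as a stand-in.
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855": 12_000_000,
}

PREVALENCE_THRESHOLD = 100  # files seen on fewer machines are suspicious


def sha256_of(path: Path) -> str:
    """Hash a file in fixed-size chunks so large binaries fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def reputation(path: Path) -> str:
    """Trusted if millions of machines share this exact file, else suspicious."""
    seen_on = KNOWN_HASHES.get(sha256_of(path), 0)
    return "trusted" if seen_on >= PREVALENCE_THRESHOLD else "suspicious"
```

A tampered winword.exe would hash to a value present on no (or very few) other machines and fall below the threshold, which is exactly the warning sign the bullet above describes.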
Some of help AG’s product offerings:
FireEye Next-Gen Security Gateways http://www.fireeye.com/
Symantec Insight http://www.symantec.com/business/reputation-based-security
Symantec SONAR http://en.wikipedia.org/wiki/SONAR_(Symantec)