

Re: Check this out... Malware-Test Lab (Dec 11, 2006)

Looking at the sparse info about the test, I want to point out a few things:

1) The tested files come from a honeynet system. This means that unless they have done very thorough checks on the collected malware, it is likely to contain thousands of corrupted samples. This will cause issues for AVs that rely heavily on generic detection, checksum-based detection, unpacking and so on. On the other hand, some AVs may benefit from this sloppy approach (Fortinet is pretty likely to flag any corrupted UPX file).

2) The testset does not contain files with their original filenames; instead they have been renamed by the honeynet to extensionless "honeynet_number" file names. This will cause issues with many AVs if you are scanning with their default settings, which usually are extension based (see the sketch after this list).

3) There is no mention of configuration settings; it is, however, likely that they have used default settings for all applications, which would explain some of the bad results. This would mean extreme detection losses especially for NOD32 (no advanced heuristics, iirc) and Avira (deactivated heuristics, deactivated modified/strange-crypter detection, deactivated SPR category). Also, it is not mentioned whether KAV has been using the extended or the standard databases.

4) The various virus databases have different dates (see screenshots).

5) They have used free/trial versions, which may not include support for certain detection categories (i.e. ...).
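To illustrate point 2: under extension-gated defaults, the renamed samples are never even opened. Here is a minimal Python sketch of that behaviour, assuming a made-up extension whitelist and a hypothetical should_scan() helper; no specific product's logic is implied.

    from pathlib import Path

    # Hypothetical whitelist; real products ship their own (much longer) lists.
    SCAN_EXTENSIONS = {".exe", ".dll", ".scr", ".com", ".pif", ".bat", ".cmd"}

    def should_scan(path: Path) -> bool:
        """Default-settings behaviour: only open files with executable extensions."""
        return path.suffix.lower() in SCAN_EXTENSIONS

    for sample in (Path("backdoor.exe"), Path("honeynet_00042")):
        verdict = "scanned" if should_scan(sample) else "skipped, never even opened"
        print(f"{sample}: {verdict}")

Run against the testbed's naming scheme, every single "honeynet_number" file falls into the "skipped" branch, which is how a perfectly capable scanner ends up with a zero score.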

Now you may decide yourself whether there is any value in this test.

You are wrong, because this does not reflect their detection rate in real situations. Many AVs by default use extension-based scanning, because files without executable extensions cannot be executed and as such pose no harm to the user. A testbed without extensions is generally frowned upon; ask IBK, Stefan Kurtzhals or The Inspector if you want to. All respected AV tests, including IBK's, Virus Bulletin, West Coast Labs and ICSA Labs, use original file extensions for that very reason.

And corrupted malware is corrupted: it cannot be run, and depending on the sample and the AV's detection method it CANNOT be detected (or will be detected by sheer luck, depending on where they made a signature). Imagine for example a file with only a remaining UPX-packed PE header: Fortinet will flag it, because it's obviously not a normal UPX file. Most of the other AVs won't make a sound because, simply put, there is nothing malicious in a completely corrupted file. Corrupted samples are the death of a good testbed; ask IBK how much effort he has put into weeding out corrupted samples, I am certain he will agree. An AV is there to detect threats to a user's system, not to detect his aborted browser downloads because only 30% of the file has arrived. Not intentionally, that is; no one does that intentionally.
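The "sheer luck" argument is easy to demonstrate: two byte signatures extracted from the same working sample can match or miss a truncated copy purely because of where in the file they were taken. All names, sizes and offsets below are invented for the example.

    import random

    random.seed(1)
    # 64 KB of pretend malware; the honeynet only received the first 16 KB
    WORKING_SAMPLE = bytes(random.randrange(256) for _ in range(64 * 1024))
    TRUNCATED = WORKING_SAMPLE[:16 * 1024]

    SIG_AV_A = WORKING_SAMPLE[0x400:0x410]    # 16 bytes taken near the file start
    SIG_AV_B = WORKING_SAMPLE[-0x410:-0x400]  # 16 bytes taken near the file end

    def detects(signature: bytes, data: bytes) -> bool:
        # Naive byte-pattern match, standing in for a real signature engine.
        return signature in data

    print("AV A:", detects(SIG_AV_A, TRUNCATED))  # True:  that offset survived the abort
    print("AV B:", detects(SIG_AV_B, TRUNCATED))  # False: that part was never downloaded

Both vendors did their job correctly on the working sample; only the truncated testbed makes one of them look blind.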

Well, to drive my point home even for you:

AV D adds a signature for the unpacked UPX sample, because they can unpack very well and want to detect possible future variants.

The honeynet downloads 1 MB of the file, then the connection/download is interrupted/aborted.

AV A detects the file, although it is corrupted and not a threat to the user.
AV C notices there is a UPX header in the file and that the PE header does not match the file physique -> flags it as generic "UPX-modified evil malware".
AV D can't unpack the file because it is corrupted.

Neither AV A nor AV B has added a corrupted sample; both have added a working sample. By sheer luck, AV A still detects the file while AV B doesn't. AV C will not detect the real threat, but it detects a corrupted, non-threatening sample.
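To make "the PE header is not matching the file physique" concrete, here is a rough sketch of that kind of consistency check, assuming standard PE/COFF header offsets; the UPX section-name heuristic and the verdict strings are invented for illustration and are not any vendor's actual logic.

    import struct

    def flag_truncated_upx(data: bytes) -> str:
        """Flag a UPX-marked PE whose headers describe more file than we actually have."""
        if len(data) < 0x40 or data[:2] != b"MZ":
            return "not a PE file"
        try:
            e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
            if data[e_lfanew:e_lfanew + 4] != b"PE\0\0":
                return "not a PE file"
            num_sections = struct.unpack_from("<H", data, e_lfanew + 6)[0]
            opt_header_size = struct.unpack_from("<H", data, e_lfanew + 20)[0]
            section_table = e_lfanew + 24 + opt_header_size
            expected_end, looks_upx = 0, False
            for i in range(num_sections):
                off = section_table + i * 40          # one 40-byte header per section
                name = data[off:off + 8].rstrip(b"\0")
                raw_size, raw_ptr = struct.unpack_from("<II", data, off + 16)
                expected_end = max(expected_end, raw_ptr + raw_size)
                looks_upx |= name.startswith(b"UPX")  # UPX names its sections UPX0/UPX1
        except struct.error:
            return "headers themselves truncated: suspicious"
        if looks_upx and expected_end > len(data):
            return "generic UPX-modified evil malware (headers exceed file physique)"
        return "no detection"

A 1 MB fragment of a 2 MB UPX-packed sample keeps its headers (they sit at the front of the file) but loses the data those headers promise, which is exactly the mismatch this flags: a detection of the corruption, not of the threat.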
