It's a serious problem, and one that can't really be solved without some sort of forensic
information. Right-click on any application file and open its properties, and you can see how many
bytes it is and who published it. But to really determine whether or not it is 'white' you need much
more information: how the program arrived on the endpoint, who installed it, which software installed
it, and so on, until you have the whole history of what happened and can make the right decision.
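To make that concrete, here is a minimal sketch of the kind of evidence an endpoint agent might start collecting. The function and field names below are illustrative, not any particular product's API, and a real system would also have to record how the file arrived and who installed it.

    import hashlib
    import os
    from datetime import datetime, timezone

    def collect_file_evidence(path):
        """Gather basic provenance data about one executable (illustrative)."""
        info = os.stat(path)
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return {
            "path": path,
            "size_bytes": info.st_size,        # what the Properties dialog shows
            "sha256": digest.hexdigest(),      # a stable identity for the file
            "modified_utc": datetime.fromtimestamp(
                info.st_mtime, timezone.utc).isoformat(),
        }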
Some people say you can automate this decision-making process against a few criteria, for example,
automatically whitelisting any program signed by Microsoft. But, again, we run up against that gap
between theory and reality. The reality is that in an enterprise-level environment, not every program
is signed by its vendor, and not every vendor is consistent about signing. Microsoft Word, for
example, is signed by Microsoft; Microsoft Notepad is not. If you enforced a rule that unsigned
programs can't run, Notepad would never make the whitelist.
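If you want to test that signing criterion yourself, a rough check might shell out to Microsoft's signtool utility from the Windows SDK. This sketch assumes signtool.exe is on the PATH, and, as the Notepad example shows, a failure here does not mean a file is malicious.

    import subprocess

    def has_embedded_signature(path):
        """Return True if signtool reports a valid embedded Authenticode
        signature. Files that are unsigned or only catalog-signed (the
        Notepad example above) will fail this naive check."""
        result = subprocess.run(
            ["signtool", "verify", "/pa", path],  # /pa = default code-signing policy
            capture_output=True,
            text=True,
        )
        return result.returncode == 0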
If automatic criteria are that imperfect, then clearly IT has to put the whitelist status of many
applications on hold while it investigates. Theoretically that's the right thing to do. Practically, it
means a lot of people in the organization who have legitimate work to do on legitimate applications
have to wait until IT finds the time to investigate. These users start flooding IT's voicemail,
demanding the programs they need and generally making IT's life miserable.
So, what do you do with these grey applications? You shouldn't simply allow them to run when you
don't know their status, but blocking all of them is inefficient.
Well, if you can't put them into heaven (automatic allow) or hell (automatic block), then why not try
purgatory? What I mean by that is: allow the grey programs to run, but limit their access to resources
until a decision can be made about their white or black status. The worker can use the application, but
the program can't access the internet, for instance, or reach certain servers in the organization, or
overwrite certain registry keys.
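As a very rough illustration of that purgatory idea, here is how an agent might cut off a grey program's internet access with a Windows Firewall rule. The netsh command is standard, but the function and rule name are my own, and a real product would enforce far finer-grained controls (specific servers, registry keys) than a blanket outbound block.

    import subprocess

    def send_to_purgatory(program_path, rule_name="greylist-no-internet"):
        """Let the program run, but block all of its outbound traffic
        until IT decides whether it is white or black.
        Must be run from an elevated (administrator) prompt."""
        subprocess.run(
            ["netsh", "advfirewall", "firewall", "add", "rule",
             "name=" + rule_name,
             "dir=out",
             "program=" + program_path,
             "action=block"],
            check=True,
        )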
Some people might ask: isn't this what application sandboxing accomplishes? Well, not really. The
idea behind sandboxing is to put an application in a bubble and run it in complete isolation from every
other application and the operating system, so that it cannot damage your computer. The difficulty
with that approach is that the inconvenient factor of reality rears its head again. When applications run
in isolation, things tend to break; the Windows OS is not built for full sandboxing. The simplest
example: an isolated application may not be able to reach a shared DLL, or its writes to a virtualized
registry may cause other problems. I'm not criticizing sandboxing, but I am saying that there is a
difference between what you can do in the lab and what you can do in an enterprise-level working
environment.
It's that disparity between theoretical approaches and real-life operations that makes it necessary to
approach whitelisting with pragmatism. Right now the major problem with whitelisting is that it is very
expensive in terms of human involvement. You can't completely eliminate that human involvement.