Safety means that the family accounting system can’t be breached via an attack vector opened
by a consumer device and have all its critical accounts and passwords sucked out of it; and that
no one can hack into a chemical processing plant and cause a disaster by messing with
embedded controllers or SCADA systems.
Safety means that the Internet of Things (IoT) is a place that can be used for the common good,
and not a danger-zone where one false move could put you in the poorhouse.
With that said, and the facts of the business model brought to light, how do we proceed? The
answer is: slowly, methodically and deliberately. Building a business in the embedded Linux (and
even the Linux server) space is like eating an elephant. You can do it, but only one bite at a
time; bite off more than you can chew and you shouldn't expect to be around for long.
Having a clear view of who's using what in the market is basic product management, and all the
information is available to anyone who is willing to put in the effort and feels a sense of
responsibility.
Vendors of embedded devices should be open to discussions regarding safety, especially if
getting safe [and secure] doesn't mean they have to rewrite their code.
This brings us to another set of points that describe the realm of application security. There are
three generally accepted states in developing secure systems:
1. Find and Fix — This is the "Whack-a-Mole" approach, and not a great place to be. Software
is deployed, and when bugs are reported they are fixed and a patch or new version is released.
In most cases this places the burden of safety on end-users, making them responsible for
obtaining and installing the new software. Naturally, the process can be automated, with the
device or system checking for its own updates and installing them (a sketch of such a
self-update check follows below); however, in certain industries this behavior is not allowed.
Additionally, even when the vendor has the best of intentions, new defects may be introduced as
part of a patch, depending on configuration, user data and a raft of other things that the
vendor has no control over.
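To make the automatic-update idea concrete, here is a minimal, hedged sketch in C of a device
polling for a newer version. It assumes libcurl is available and that the vendor publishes the
latest version string at a plain-text URL; the endpoint, version string and function names are
invented for this example, and a real updater would also verify a cryptographic signature on
anything it downloads before installing it.

/* Hedged sketch of a "Find and Fix" self-update check.
 * Build with: cc update_check.c -lcurl */
#include <curl/curl.h>
#include <stdio.h>
#include <string.h>

#define INSTALLED_VERSION "1.4.2"                        /* baked in at build time */
#define VERSION_URL "https://updates.example.com/latest" /* hypothetical endpoint  */

struct buf { char s[64]; size_t len; };

/* libcurl write callback: accumulate the (short) advertised version string. */
static size_t collect(char *data, size_t size, size_t nmemb, void *userp)
{
    struct buf *b = userp;
    size_t n = size * nmemb;
    size_t room = sizeof b->s - 1 - b->len;
    if (n > room)
        n = room;                /* version strings are short; drop the rest */
    memcpy(b->s + b->len, data, n);
    b->len += n;
    b->s[b->len] = '\0';
    return size * nmemb;         /* report everything as consumed */
}

/* Returns 1 if the advertised version differs from ours, 0 if current, -1 on error. */
int update_available(void)
{
    struct buf latest = { "", 0 };
    CURL *curl = curl_easy_init();
    if (!curl)
        return -1;
    curl_easy_setopt(curl, CURLOPT_URL, VERSION_URL);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &latest);
    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    if (rc != CURLE_OK)
        return -1;               /* fail closed: stay on the current version */
    latest.s[strcspn(latest.s, "\r\n")] = '\0';
    return strcmp(latest.s, INSTALLED_VERSION) != 0;
}

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    int r = update_available();
    if (r == 1)
        printf("update available: fetch and verify a signed image\n");
    else if (r == 0)
        printf("up to date (%s)\n", INSTALLED_VERSION);
    else
        printf("check failed; keeping current version\n");
    curl_global_cleanup();
    return 0;
}

Note that the sketch fails closed: if the check cannot complete, the device simply stays on its
current version rather than acting on bad data.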
2. Secure by Design — This is the opposite end of the spectrum, where software is built to be
safe from the get-go. This includes: a) defining security requirements with both use cases and
abuse cases; b) secure design techniques, such as developing threat models and running
attack-surface reduction exercises; c) inline static analysis that gives developers instant
feedback on the security of the code they're creating (an example of the kind of defect it
flags follows below); and d) the addition of penetration testing to the normal integration and
system testing process. There are many other engineering activities that can be conducted, but
these four are the most germane. The end result of the effort is a safe application, ready to
deploy and take on the attackers.
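To illustrate point (c), the short C fragment below shows the classic defect an inline static
analyzer flags as the developer types, and the bounded rewrite it nudges them toward. The
function and buffer names are invented for the example.

#include <stdio.h>
#include <string.h>

/* What an inline analyzer flags immediately: strcpy() writes past
 * 'name' whenever the input is longer than 15 characters. */
void set_device_name_unsafe(const char *input)
{
    char name[16];
    strcpy(name, input);             /* flagged: potential buffer overflow */
    printf("device: %s\n", name);
}

/* The bounded rewrite: snprintf() truncates instead of overflowing. */
void set_device_name_safe(const char *input)
{
    char name[16];
    snprintf(name, sizeof name, "%s", input);
    printf("device: %s\n", name);
}

int main(void)
{
    /* 24 characters: would overflow the unsafe version; the safe one truncates. */
    set_device_name_safe("AAAAAAAAAAAAAAAAAAAAAAAA");
    return 0;
}

Tools catch this pattern automatically (for instance, clang --analyze, or glibc's
_FORTIFY_SOURCE hardening at build time), which is what makes the feedback "instant" rather
than something discovered in the field.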
3. Protect in Play — This is where applications are wrapped in security blankets such as
application firewalls, anti-malware systems and, yes, whitelisting. This state allows potentially
unsafe systems to be deployed and still be safe. It can be used during the "Whack-a-Mole"
process as temporary protection while things are being fixed, or as the basis of a generally safe
and secure system, which is what whitelisting provides. This is the key point that needs to be
communicated to device vendors: while an effective whitelisted system is not an excuse for
insecure development practices, it does provide a practical layer of protection for devices
already in the field (a minimal sketch of the idea follows below).
! " $ !
! # ! "
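Finally, as a minimal sketch of the whitelisting idea itself (not any particular vendor's
product): the C program below hashes a binary with OpenSSL's SHA-256 and allows it to run only
if the digest appears on a hard-coded allow list. The placeholder digest is the well-known hash
of an empty file; real systems enforce the decision in the kernel (for example via a Linux
Security Module) rather than in user space, and manage the list centrally.

/* Hedged user-space sketch of execution whitelisting: default-deny,
 * a binary runs only if its SHA-256 digest is on the allow list.
 * Build with: cc whitelist.c -lcrypto */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

/* Allow list of SHA-256 digests (hex). The entry below is the digest of
 * an empty file, used purely as a placeholder. */
static const char *allow_list[] = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
};

/* Hash a file and render the digest as lowercase hex (needs a 65-byte buffer). */
static int sha256_hex(const char *path, char out[65])
{
    unsigned char buf[4096], md[SHA256_DIGEST_LENGTH];
    SHA256_CTX ctx;
    size_t n;
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;
    SHA256_Init(&ctx);
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        SHA256_Update(&ctx, buf, n);
    fclose(f);
    SHA256_Final(md, &ctx);
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        sprintf(out + 2 * i, "%02x", md[i]);
    return 0;
}

/* Default-deny: unknown or unreadable binaries never run. */
int is_whitelisted(const char *path)
{
    char hex[65];
    if (sha256_hex(path, hex) != 0)
        return 0;
    for (size_t i = 0; i < sizeof allow_list / sizeof allow_list[0]; i++)
        if (strcmp(hex, allow_list[i]) == 0)
            return 1;
    return 0;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <binary>\n", argv[0]);
        return 2;
    }
    printf("%s: %s\n", argv[1], is_whitelisted(argv[1]) ? "allowed" : "denied");
    return 0;
}

The default-deny stance is the design choice that matters here: anything not explicitly on the
list never executes, which is what lets a "potentially unsafe" system stay safe in play.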