Page 114 - Cyber Defense eMagazine October 2023

Quick expulsion is possible only when cybersecurity professionals keep a constant eye on the system in
            real time, not when organizations rely on tools that produce a look-back report that covers the previous
            day, week, or month.
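The contrast can be sketched in a few lines. The pattern list, function name, and alert format below are hypothetical, chosen only to show streaming detection surfacing a hit the moment it arrives instead of burying it in a look-back report:

```python
import re

# Hypothetical pattern list; a real deployment would use a full
# detection ruleset, not two regex phrases.
SUSPICIOUS = re.compile(r"failed login|privilege escalation", re.IGNORECASE)

def watch(lines):
    """Scan log lines as they arrive and yield an alert immediately,
    rather than batching findings into a daily or weekly report."""
    for lineno, line in enumerate(lines, start=1):
        if SUSPICIOUS.search(line):
            yield f"ALERT line {lineno}: {line.strip()}"
```

Fed from a live source (a tailed log file, a syslog socket), the generator raises each alert as the line appears, which is exactly the real-time posture described above.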

Corporate leaders focused merely on compliance often think only of firewalls and other perimeter defenses. Our profession needs to help them understand that true risk mitigation aims to limit the damage from the intrusions that are essentially unstoppable.




            AI risks and promises

            At the same time, the rapid introduction of tools based in artificial intelligence changes the calculus of
            risk dramatically — but it also promises to bring improvements to the management of that risk.


            No one should underestimate the speed at which AI is arriving. Azure AI, Microsoft’s portfolio of AI tools
            for developers and data scientists, has been the fastest-growing service in the history of Azure.

The greatest challenges AI presents to cybersecurity professionals are likely to involve so-called "autonomous AI": AI that acts on its own, without human instruction.

It doesn't take much imagination to picture an AI tool tasked with protecting a system or solving an IT problem. The AI decides a particular tool is best for the job but sees that the tool isn't available on the machine where the AI is running. So it searches the web, finds a link to the software it needs, installs it, and completes its task.

How do we know that the software the AI is finding and installing, without our knowledge, hasn't put malware on our systems? Questions like that should keep security professionals awake at night.
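One common mitigation for this scenario is to gate any install an autonomous agent requests behind an allowlist and a content hash. The tool name, digest source, and function below are hypothetical, a minimal sketch of the control rather than any specific product's implementation:

```python
import hashlib

# Hypothetical allowlist: tool name -> SHA-256 digest of the approved build.
# In practice the digests would come from a signed manifest maintained by
# the security team; one is computed inline here only to keep the sketch
# self-contained and runnable.
APPROVED_TOOLS = {
    "netprobe": hashlib.sha256(b"pretend approved binary v1.0").hexdigest(),
}

def may_install(tool_name: str, payload: bytes) -> bool:
    """Deny any install unless the tool is on the allowlist AND the
    downloaded bytes match the recorded digest exactly."""
    expected = APPROVED_TOOLS.get(tool_name)
    if expected is None:
        return False  # unknown tool: deny and flag for human review
    return hashlib.sha256(payload).hexdigest() == expected
```

Under this policy the agent can still fetch software, but a tampered download or an unvetted tool is refused before it ever touches the system.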

Sleepless nights will be even more common among cybersecurity professionals in heavily regulated industries such as financial services, where emerging regulations focus extensively on transparency and disclosure. It will be difficult to square these requirements with the black-box aspects of AI. How will
            security professionals assess the security of third-party vendors, especially those whose products are
            handling confidential financial and personal information, if the vendors rely on black-box AI? Keep in mind
            that transparency is an impossible goal when a business operation is entirely opaque.

Given the speed at which AI is sweeping into the marketplace, and given the slow, careful pace customary among regulatory agencies, it's safe to assume that new regulations, or even regulatory guidance, will significantly lag the technology. As a result, organizations will largely be on their own in determining how to meet regulatory requirements while using AI.

            The bad guys, meanwhile, already are using AI tools to enhance their attacks and improve their evasion
            techniques. Beleaguered IT staff members who are expected to address security threats while managing
            the entire enterprise system will be bowled over by the rush of new threats.

            Most importantly, those business executives who already fail to adequately account for cybersecurity
            risks will be in even greater danger as AI supercharges the computing universe.






Copyright © 2023, Cyber Defense Magazine. All rights reserved worldwide.