A terrible assumption is that your system isn't already compromised; it probably is. Another is that you don't already have one or more insiders compromising the system. Sure, go ahead and teach your machine model what "normal" looks like, even though your systems aren't secure at this time.
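To make that concern concrete, here is a minimal sketch (plain Python with NumPy, hypothetical volumes and thresholds, not any particular product): if the history the baseline is fitted on already contains attacker traffic, that traffic is folded into "normal" and never flagged afterwards.

```python
import numpy as np

# Hypothetical example: daily outbound data volume (MB) for one host.
# The "clean" history already includes a quiet exfiltration channel of
# ~200 MB/day, because the system was compromised before baselining began.
rng = np.random.default_rng(0)
legit = rng.normal(loc=500, scale=50, size=90)   # genuine activity
exfil = np.full(90, 200.0)                       # pre-existing compromise
observed = legit + exfil                         # what the model is trained on

# Baseline "normal" as mean + 3 standard deviations of the observed data.
mu, sigma = observed.mean(), observed.std()
upper = mu + 3 * sigma

# A later day carrying the same 200 MB of exfiltration still looks normal,
# because the compromise was baked into the baseline from day one.
today = 520 + 200
print(f"today={today:.0f} MB, threshold={upper:.0f} MB, flagged={today > upper}")
```

The point is not the arithmetic but the ordering: baselining only tells you what the system has been doing, not whether what it has been doing was ever legitimate.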
We can rebuild him, we have the technology: AI will help security teams smack pesky anomalies
With highly targeted cyber attacks the new normal, companies are finding the once-hidden Security Operations Centre (SOC) is the part of their setup they really count on. SOCs have existed in a variety of guises for decades, emerging in recent years as a natural consequence of centralising security monitoring across …
COMMENTS
-
Friday 24th August 2018 13:37 GMT amanfromMars 1
Already Available in Myriad Areas Next to You. Just a Few Clicks Away
I would agree that the terrible assumption is a negative positive spoiler, Jack of Shadows. How about levelling the playing fields a little with some extra info ....
Although many successful security compromises are built from a toolkit of relatively simple techniques and common weaknesses, the chances of new attack patterns combining these with an unknown vulnerability have risen dramatically.
The reality of new attack patterns hugely ACTive in the wild, and now ascribing to EMPower SCADA Systems, has completely destroyed any chance of their not being deployed for cyber control advantage.
Hence, at a stroke, are Realities Changed and Exchanged ....... CoMingled with Other Stellar Sources Tendering to COSMIC Forces. As you will need only to imagine, is that a Truly Advanced IntelAIgent Program with Surreal and Sublime Alien Protection. .... but IT is Shared here for Great and Grand Future Earthly Use ...... Human Terra Phormication.
-
Friday 24th August 2018 13:29 GMT Pascal Monett
"UEBA baselining with machine learning can adjust its worldview of a user's behaviour"
So all a hacker needs to do is ensure that his package can shift the pseudo-AI's worldview bit by bit and then he will be right at home.
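A rough sketch of that slow-drift game against a naive adaptive baseline (hypothetical numbers and update rule, plain Python, not any real UEBA product): every small step stays inside the anomaly threshold, gets accepted, and drags the model's worldview along with it, until a transfer that would have been flagged on day one sails through.

```python
import math

# Naive adaptive baseline: exponentially weighted mean/variance of a user's
# daily upload volume, re-fitted on every observation it accepts as normal.
class AdaptiveBaseline:
    def __init__(self, mean, var, alpha=0.1, k=3.0):
        self.mean, self.var, self.alpha, self.k = mean, var, alpha, k

    def is_anomalous(self, x):
        return abs(x - self.mean) > self.k * math.sqrt(self.var)

    def update(self, x):
        # Fold the accepted point back into the baseline (EWMA-style update).
        d = x - self.mean
        self.mean += self.alpha * d
        self.var = (1 - self.alpha) * (self.var + self.alpha * d * d)

baseline = AdaptiveBaseline(mean=100.0, var=15.0 ** 2)  # ~100 MB/day is "normal"
target = 400.0                                          # flagged if attempted on day one
print("day-one attempt flagged:", baseline.is_anomalous(target))

# The attacker ramps uploads by 10 MB per day; each step stays inside the
# 3-sigma band, gets accepted, and shifts the worldview a little further.
x = 100.0
while x < target:
    x = min(x + 10.0, target)
    if not baseline.is_anomalous(x):
        baseline.update(x)

print("after slow drift, 400 MB flagged:", baseline.is_anomalous(target))
```

The same logic applies whatever the underlying model: if accepted observations are fed straight back into the baseline with no independent sanity check, patience is all the attacker needs.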
Given that there have been Tesla buyers stupid enough to think that their car was self-driving, marketing any technical solution with the notion of AI is a surefire way to ensure a catastrophe. Complacency and habit mean that once this so-called "AI" security is in place, as long as it doesn't squeak, admins will just take care of the daily panics and not worry about whether or not the machine is working right.
Hackers of the future will have a lot of fun with these toys, I think.