Why ML/AI is not the savior of cyber and endpoint security

Artificial Intelligence (AI) and Machine Learning (ML) are considered the next evolution in computer science, as they allow computers to make complex decisions and perform tasks that were, until now, reserved for humans. Their perceived potential is so powerful that films such as The Terminator depict them becoming smarter than their creators, turning against humanity, and eventually leading to our demise.

In recent years we have seen ML/AI-based products and services claiming to be the ultimate savior and the next advancement in cyber security, offering new and improved ways to protect everything from identities to the cloud, including endpoints against malware and other threats. But unlike in the movies, these systems have their faults and are far from what you’d expect them to be…

Big names in the field are struggling

Everybody is doing ML/AI, and not just in cyber security. Yet most (if not all) of these systems suffer from false positives, biased and error-prone results, and other issues that ultimately produce less than desirable outcomes.
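To see why false positives alone can undermine such a system, consider the base-rate problem: when real threats are rare, even a highly accurate detector buries analysts in false alarms. A back-of-the-envelope sketch in Python, with assumed, illustrative rates:

```python
# Base-rate illustration. All numbers here are assumed for
# illustration, not measurements of any real product.
events_per_day = 1_000_000   # files/processes scanned daily
malware_rate = 0.0001        # 1 in 10,000 events is actually malicious
detection_rate = 0.99        # detector catches 99% of real malware
false_positive_rate = 0.01   # and misfires on 1% of benign events

true_alerts = events_per_day * malware_rate * detection_rate
false_alerts = events_per_day * (1 - malware_rate) * false_positive_rate

print(f"real detections/day: {true_alerts:.0f}")   # ~99
print(f"false alarms/day:    {false_alerts:.0f}")  # ~10,000
# Roughly 99% of all alerts are false: the analysts' queue is mostly noise.
```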

Some notable examples: Google “fixed” its racist Photos algorithm by banning gorillas; Microsoft’s Tay chatbot was shut down 16 hours after its release for posting offensive tweets; IBM’s Watson gave unsafe recommendations for cancer treatments; Amazon’s recruiting AI showed bias against women; and many agree that AI won’t fix Facebook’s fake-news problem. The list goes on…

Threat actors (and researchers) counteract defensive ML/AI

Predictions published about two years ago foresaw a turning point for ML/AI: instead of only being used to combat malware, ML/AI would eventually be used by threat actors to counteract the defensive ML/AI. Not long after, such attacks began surfacing, with Cerber crafting a malicious payload that bypassed ML/AI systems, and the trend continues as malware authors tailor their payloads to these systems’ training data and algorithms.
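To make the evasion idea concrete, here is a minimal sketch of how an attacker can game a score-based detector. The linear model, features, weights, and threshold are toy values invented for illustration, not any vendor’s actual detector:

```python
# Minimal evasion sketch against a toy linear malware classifier.
import math

# Toy feature vector: [entropy, packed_sections, benign_api_imports, signed]
WEIGHTS = [2.5, 1.8, -0.9, -1.2]   # assumed "learned" weights
BIAS = -1.0
THRESHOLD = 0.5                    # flag as malware above this score

def score(features):
    z = BIAS + sum(w * f for w, f in zip(WEIGHTS, features))
    return 1 / (1 + math.exp(-z))  # sigmoid -> probability of "malicious"

malware = [0.9, 1.0, 0.1, 0.0]
print(score(malware) > THRESHOLD)  # True: score ~0.95, detected

# The attacker pads the same payload with benign-looking API imports and
# a (stolen or self-signed) certificate -- features the model rewards --
# without changing its malicious behavior.
evasive = [0.9, 1.0, 3.0, 1.0]
print(score(evasive) > THRESHOLD)  # False: score ~0.30, slips through
```

The attacker never touches the malicious logic; moving only the features the model rewards is enough to slip under the threshold.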

And it’s not limited to Windows or desktop machines: research by Georgia Tech indicates the situation is worse on Android devices, where only 2 of the 58 products tested were able to defend against the team’s AVPass evasion tool.

Even experts don’t believe the technology is ready

A study revealed that most security researchers and experts don’t trust these products yet: 70% believe attackers can bypass them (coincidentally, the Cerber ransomware bypass mentioned above was published on the same day). Even worse, 74% say ML/AI security products are flawed, and 87% believe it will take more than three years before they start trusting such systems. Another study indicates that 91% of cyber security professionals are concerned that hackers will use AI in cyber attacks.

John Leyden, The Register’s cyber security journalist, published an opinion piece arguing that we’re decades away from true anti-malware AI, mainly because AI and ML have become popular buzzwords that everybody uses for marketing and appeal. Leyden goes further and suggests that “AI” might just be a rebranding of heuristics, reinforcing his claim.
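His point is easy to illustrate: a classic heuristic scanner that sums hand-tuned points per suspicious trait is structurally the same as a linear ML model whose weights were fitted rather than hand-picked. A toy sketch, with invented traits and point values:

```python
# Toy heuristic scanner: hand-tuned points per suspicious trait.
# Traits and point values are invented for illustration.
def heuristic_score(sample):
    points = 0
    if sample["entropy"] > 7.0:        # high entropy: likely packed/encrypted
        points += 3
    if sample["imports_crypto_api"]:   # common in ransomware
        points += 2
    if sample["signed"]:               # signed binaries get a pass
        points -= 2
    return points

sample = {"entropy": 7.4, "imports_crypto_api": True, "signed": False}
print("malicious" if heuristic_score(sample) >= 4 else "benign")  # malicious
```

Swap the hand-picked points for learned weights and you get an “ML-powered” detector with the same form, and the same blind spots.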

The future of endpoint security and ML/AI

As long as most vendors use ML/AI to detect malware, attackers will keep evolving, and the arms race in which each side builds better, smarter systems to target the other will continue.

Other solutions, like endpoint deception, which are not susceptible to the aforementioned flaws, will become increasingly vital to enterprises seeking to bolster their security while reducing costs, operational burden, and false-positive alerts…
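For readers unfamiliar with the concept, deception plants decoys that no legitimate user or process should ever touch, so any access is a high-fidelity alert, with no model or training data to game. A minimal, simplified sketch; the bait path, content, and polling approach are illustrative, and real deception products plant many decoy types (files, credentials, shares) and hook access far more robustly than polling:

```python
# Simplified decoy-file monitor: plant a bait file, alert on any access.
import os
import time

DECOY = "/tmp/passwords_backup.xlsx"  # hypothetical bait path

def plant_decoy(path):
    with open(path, "w") as f:
        f.write("user,password\n")    # fake bait content
    return os.stat(path).st_atime

def watch(path, baseline_atime, interval=5):
    # Note: atime tracking depends on filesystem mount options
    # (relatime/noatime may delay or suppress updates).
    while True:
        time.sleep(interval)
        atime = os.stat(path).st_atime
        if atime != baseline_atime:
            print(f"ALERT: decoy {path} was accessed -- possible intrusion")
            baseline_atime = atime

baseline = plant_decoy(DECOY)
watch(DECOY, baseline)
```

Because legitimate activity never reaches the decoy, the approach sidesteps both the false-positive flood and the adversarial cat-and-mouse described above.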
