AI: Security’s White Knight or the Next Best Friend of the Cybercriminal?

Will AI save us from the cyber-crimes that make the headlines almost daily or serve to assist hackers? CompTIA’s IT Security Community tackled the question.

Will AI save us from the cyber-crimes that make the headlines almost daily or serve to assist hackers? CompTIA’s IT Security Community decided to tackle this question at ChannelCon 2018 in Washington, D.C., this past summer.

The community brought together experts such as Stephen Cobb of ESET, Stephen Cox of MKACyber and Allan Friedman from the National Telecommunications and Information Administration, with moderator Robert DeMarzo, senior vice president of event content and strategy for The Channel Company, to shed some light on the topic.

Cox took the white knight stance. He jokingly referenced major players in security touting that AI will solve all the world’s cyber-problems, then added that the reality is AI already has use cases in which it is solving problems, as in entity behavior analytics. AI, Cox said, is a tool: a powerful tool, but just a tool. The real white knights are the researchers and vendors who adopt this technology to surmount real-world challenges. As with all technology, security conversations need to start from the problems the client is trying to solve.
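The entity behavior analytics Cox mentioned is, at its core, anomaly detection: build a statistical picture of normal activity, then surface what deviates from it. The sketch below is a minimal illustration of that idea using a made-up feature set (login hour, data transferred, failed logins) and scikit-learn’s IsolationForest; it is not a description of any panelist’s or vendor’s implementation.

```python
# Illustrative sketch of entity behavior analytics: learn what "normal"
# session activity looks like, then flag statistical outliers.
# The feature set and values are invented for this example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated "normal" sessions: business-hours logins, modest data transfer,
# few failed authentication attempts.
normal_sessions = np.column_stack([
    rng.normal(13, 2, 1000),   # login hour (clustered around midday)
    rng.normal(50, 15, 1000),  # MB transferred per session
    rng.poisson(0.2, 1000),    # failed login attempts
])

# A couple of suspicious sessions: off-hours logins, large transfers,
# repeated authentication failures.
suspicious_sessions = np.array([
    [3.0, 900.0, 6],
    [2.5, 750.0, 9],
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# predict() returns 1 for inliers and -1 for outliers; sessions this far
# outside the learned baseline should be flagged as -1.
print(model.predict(suspicious_sessions))
print(model.predict(normal_sessions[:5]))
```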

Cobb countered that good technology can also be problematic or used for nefarious purposes. He cited encryption as an example: used for good, it is a secure means of protecting data in motion and at rest, but the billion-dollar world of ransomware shows the other side of the issue. AI now allows cybercriminals to move faster, gather more data, remain one step removed from targets and hit more victims in a shorter time. DeMarzo and Cobb both feel the cybercriminals may have the edge: they don’t have to deliver a perfect product. The good guys do, or at least have to try.

Friedman started off by quoting Melvin Kranzberg: “Technology is neither good nor bad; nor is it neutral.” The key to any technology, he said, is to understand its broad impact. Automated image detection, for example, is now good enough to flag the stock photos criminals use when pretending to be someone they are not.

Cobb pointed to the DARPA Cyber Grand Challenge, in which AI systems battled each other and there was no clear winner; they all had respectable showings.

AI must be trained, and it is dependent on data; therein lies the catch. If the data is compromised, bad or biased, the AI built on it is flawed, and the training data must be relevant to the problem being solved. Governments and companies alike will need to look at how we address security. Today, security is largely driven by rules, but the appeal of using AI and machine learning in security is that they can surface emergent risks: threats that have not even been conceived of yet. While the future of AI is here, the reality is that humans and data still hold the reins.
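To make that rules-versus-learning distinction concrete, the sketch below contrasts a hard-coded rule with a baseline learned from a user’s own history. The thresholds, features and values are invented for illustration, and the second check demonstrates the panel’s caveat: if the history it learns from is bad or biased, every alert it produces inherits that flaw.

```python
# Minimal contrast between a fixed rule and a learned baseline.
# All values are invented for illustration only.
import statistics

# Rule-based check: a threshold someone wrote down in advance.
FAILED_LOGIN_LIMIT = 5

def rule_based_alert(failed_logins: int) -> bool:
    return failed_logins > FAILED_LOGIN_LIMIT

# Baseline-based check: "normal" is whatever this user's history says it is,
# so behavior no one anticipated can still be flagged. A compromised or
# unrepresentative history skews the baseline, and the alerts with it.
def baseline_alert(history_mb: list[float], todays_mb: float, z_cutoff: float = 3.0) -> bool:
    mean = statistics.mean(history_mb)
    stdev = statistics.pstdev(history_mb) or 1.0  # avoid divide-by-zero
    z_score = (todays_mb - mean) / stdev
    return abs(z_score) > z_cutoff

history = [48.0, 52.0, 50.0, 47.0, 53.0, 49.0]   # daily MB uploaded, past week

print(rule_based_alert(failed_logins=2))          # False: under the fixed limit
print(baseline_alert(history, todays_mb=400.0))   # True: far outside this user's norm
```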

Passionate about cybersecurity? Click here to get involved with CompTIA’s IT Security Community.


