AI-Powered Malware Doesn’t Actually Exist – Infosecurity Magazine

You might have read reports from some cybersecurity “experts” that AI-powered cyber-attacks already exist and that they’re a threat we should be worried about. They paint a picture of futuristic Skynet-esque programs with the ability to intuitively understand their environment and adapt behaviors to outwit both automated cyber-defense measures and humans alike. Luckily, such reports are simply an attempt to market cybersecurity products through fearmongering.

Misconceptions around AI-powered malware are primarily fuelled by ignorance of what AI, or more accurately machine learning, is capable of. Such misconceptions assume that current machine learning techniques can be used to recreate human-level creativity and decision logic. This is the stuff of science fiction.

Let us contemplate, for a moment, how AI-powered malware might be created using current machine learning techniques. Of these, reinforcement learning seems the obvious choice for creating a program (an agent) that can automate steps in a cyber-attack. Reinforcement learning can easily be used to train agents to perform actions (moving or copying files, launching executables, altering registry values, etc.) based on observations (information about the file system, processes, registry entries, etc.) collected from a target system.
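To make that formulation concrete, here is a minimal sketch of what such a training environment might look like. It is purely illustrative: the environment name, action list, state flags and reward values are all invented for this example, and the "system" is just a tuple of flags rather than anything real.

```python
# Hypothetical, heavily simplified environment for illustration only.
# Observations and actions are abstract flags; nothing here touches a real system.
# The reset()/step() interface loosely mirrors the Gym convention.

ACTIONS = ["recon", "escalate_privileges", "move_laterally", "exfiltrate"]

class ToyAttackEnv:
    """Simulates a single target host with one fixed, known weakness."""

    def reset(self):
        # Observation: (knows_layout, has_admin, has_data) as 0/1 flags
        self.state = (0, 0, 0)
        return self.state

    def step(self, action):
        knows, admin, data = self.state
        reward, done = -1, False              # small cost for every action taken
        if action == "recon":
            knows = 1
        elif action == "escalate_privileges" and knows:
            admin = 1
        elif action == "move_laterally" and admin:
            data = 1
        elif action == "exfiltrate" and data:
            reward, done = 100, True          # terminal "success" state
        self.state = (knows, admin, data)
        return self.state, reward, done
```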

The problem formulation is similar to that of creating an agent capable of playing an old-school text adventure. Agents in this scenario would be trained against pre-configured systems containing the vulnerabilities, security holes or misconfigurations typically encountered during red teaming or penetration testing operations. Each agent would be designed to perform one or more steps in a typical cyber-attack chain, such as lateral movement, persistence, reconnaissance, privilege escalation or data exfiltration. The tools required to pull all of this off already exist.
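Training an agent against such an environment would then look much like solving a toy text adventure or grid world. The sketch below reuses the hypothetical ToyAttackEnv and ACTIONS defined above and runs a plain tabular Q-learning loop; in practice, each env.step() would execute commands against a freshly provisioned target rather than updating a few flags.

```python
import random
from collections import defaultdict

# Plain tabular Q-learning over the hypothetical ToyAttackEnv/ACTIONS sketched
# above. In a realistic setting, each env.step() would execute commands on a
# freshly provisioned (virtual) machine rather than flipping a tuple of flags.

def train(env, episodes=5_000, max_steps=50, alpha=0.1, gamma=0.95, epsilon=0.1):
    q = defaultdict(float)                        # maps (state, action) -> value
    for _ in range(episodes):
        state = env.reset()
        for _ in range(max_steps):                # cap episode length
            if random.random() < epsilon:         # explore occasionally
                action = random.choice(ACTIONS)
            else:                                 # otherwise exploit best estimate
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
            if done:
                break
    return q

q_table = train(ToyAttackEnv())
```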

However, for those planning on building their own AI-based attack tools in this manner, know that there are some caveats. Reinforcement learning models typically need to train for millions of steps before they converge on a good policy. In our described scenario, each step would involve running commands on an actual machine (or virtual machine) that would need to be spun up and configured for each episode. This means it would likely take weeks or even months and a lot of computing resources to train an agent, even if the process were parallelized.
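To put rough numbers on that claim, here is a back-of-envelope estimate. Every figure in it (steps to convergence, seconds per step, VM reset time, degree of parallelism) is an assumption chosen for illustration rather than a measurement, but the orders of magnitude show why training would be slow and expensive.

```python
# Back-of-envelope estimate using purely illustrative (assumed) figures, not
# measurements: what training might cost if every environment step ran real
# commands against a virtual machine that is re-provisioned each episode.

steps_needed     = 10_000_000   # rough order of magnitude for RL convergence
seconds_per_step = 5            # run a command, collect the observation
episode_length   = 50           # steps before the target VM is reset
vm_reset_seconds = 180          # snapshot restore / re-configuration per episode
parallel_vms     = 32           # environments training in parallel

episodes   = steps_needed / episode_length
total_secs = steps_needed * seconds_per_step + episodes * vm_reset_seconds
wall_clock_days = total_secs / parallel_vms / 86_400

print(f"~{wall_clock_days:.0f} days of wall-clock time")   # roughly a month
```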
