
AI Isn’t Just a Hype Threat—It’s Already Changing the Attack Surface 

On Mar 13, 2026 by Recast Experts
5 min

CISOs and SysAdmins know this risk calculus well: Breaches are expensive, time-consuming, and disruptive. For most of the last 20 years, there was an invisible buffer, a window of reaction time. An attacker would break in, and defenders still had hours or days to investigate, contain, and respond. 

Today, that buffer is collapsing. 

AI-assisted malware is already in the wild 

In late 2025, Google’s Threat Intelligence Group published research documenting a meaningful shift in attacker behavior. Instead of using AI only for phishing or reconnaissance, adversaries are now integrating AI into malware itself. They’re generating code at runtime, altering behavior mid-attack, and evading detection tools that weren’t designed for moving targets. 

Multiple independent reports have since documented malware that: 

  • Calls AI model APIs to generate or obfuscate code during execution 
  • Creates malicious scripts dynamically instead of relying on static binaries 
  • Modifies its behavior mid-attack to slip past signature-based and heuristic detection 

Mobile threats are following. Researchers uncovered PromptSpy, an Android malware family that uses AI at runtime to adapt execution and maintain persistence. Researchers describe it as the first of its kind.

This isn’t speculation. It’s already being documented by major defenders worldwide. 

Detection alone isn’t the answer—and it never was 

SysAdmins have lived through the evolution of endpoint security: signature-based AV → EDR → XDR → AI-assisted detection. Each layer added capability. Each layer also added alerts, management overhead, and noise. 

The core limitation remains: Detection is reactive. It identifies malicious activity after execution has started. 

When malware can adapt itself mid-attack or generate entirely new functions on demand, detection alone cannot give defenders enough time to act. 

AI doesn’t make malware magically powerful. It makes novelty cheap and execution faster. That means defenders have less time, not more. 

The real lever: Control the workspace, not just the endpoint 

There’s a more durable principle worth examining here, one that’s becoming more relevant as AI reshapes the threat landscape: 

Blast radius is determined by what access exists, not just what detection catches. 

Most organizations control access to applications through Collections and Groups. It’s a blunt instrument. A user is in a group, so they get a set of apps. That model works until it doesn’t, and it often doesn’t when accounts are compromised, when roles shift, or when an attacker is moving laterally. 

Application Workspace is an application delivery solution that approaches this differently. Instead of static group membership, access is governed by what you might call a task perimeter: the layering of Identity, Context, and Role, evaluated in real time. A user doesn’t simply “have access” to an application. They have access when the right combination of conditions is true: who they are, what device they’re on, what network they’re connecting from, and what role they’re operating in at that moment.

That means the workspace a user sees isn’t a fixed list, but rather a dynamic, scoped surface that reflects what they actually need at that moment. 
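To make the task-perimeter idea concrete, here is a minimal sketch in Python. The class names, fields, and conditions are all illustrative assumptions, not an actual Application Workspace API; the point is only that access is a conjunction of identity, context, and role checks evaluated at request time, with no standing entitlement to fall back on.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_managed: bool
    network: str          # e.g. "corp", "vpn", "public"
    active_role: str

@dataclass
class Perimeter:
    allowed_users: set
    require_managed_device: bool
    allowed_networks: set
    allowed_roles: set

    def permits(self, req: Request) -> bool:
        # Every condition must hold at evaluation time; a single
        # failing condition removes the app from the user's workspace.
        return (req.user in self.allowed_users
                and (req.device_managed or not self.require_managed_device)
                and req.network in self.allowed_networks
                and req.active_role in self.allowed_roles)

# Hypothetical perimeter for a finance application.
finance_app = Perimeter(
    allowed_users={"alice"},
    require_managed_device=True,
    allowed_networks={"corp", "vpn"},
    allowed_roles={"finance-analyst"},
)

# Same user, two contexts: only one satisfies the perimeter.
at_desk = Request("alice", device_managed=True, network="corp",
                  active_role="finance-analyst")
at_cafe = Request("alice", device_managed=False, network="public",
                  active_role="finance-analyst")
print(finance_app.permits(at_desk))  # True
print(finance_app.permits(at_cafe))  # False
```

Note the contrast with group membership: a compromised account on an unmanaged device fails the perimeter check even though the identity itself is valid.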

Why this matters more as AI agents enter the environment 

This isn’t just a human-user problem anymore. 

AI agents are increasingly being spun up to carry out tasks on behalf of users and organizations. They’re browsing, executing workflows, and interacting with systems. Traditional identity models, which were broad, static, and slow to update, weren’t designed for this. An agent with over-provisioned access has a large blast radius waiting to be exploited. 

Entitlements based on task perimeters are inherently better suited to this reality. You can define the workspace, including the tools, apps, and access appropriate for a specific task, then enforce it in real time for both the human and the agent working alongside them. Access scales with need. It expires when the need ends.
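The “expires when the need ends” property can be sketched as a time-boxed grant. This is an illustrative assumption, not a real product API: the grant bundles the tools an agent may touch for one task and denies everything once the task window closes.

```python
import time

class TaskGrant:
    """Hypothetical task-scoped grant for a human or AI agent."""

    def __init__(self, principal: str, tools: set, ttl_seconds: float):
        self.principal = principal
        self.tools = frozenset(tools)
        # The grant is valid only for the duration of the task.
        self.expires_at = time.monotonic() + ttl_seconds

    def allows(self, tool: str) -> bool:
        # Deny once the task window has closed, regardless of scope.
        if time.monotonic() >= self.expires_at:
            return False
        return tool in self.tools

# A reporting agent gets exactly what the task needs, for 15 minutes.
grant = TaskGrant("reporting-agent", {"read:crm", "write:report"},
                  ttl_seconds=900)
print(grant.allows("read:crm"))    # True while the grant is live
print(grant.allows("delete:crm"))  # False: never in scope
```

If the agent is compromised mid-task, the blast radius is bounded twice over: only the listed tools, and only until the grant expires.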

That’s a fundamentally different security posture than “Here’s everything this account can reach.” 

What this means in practice 

This isn’t about stopping AI-powered attacks outright. No single control does that. 

It’s about minimizing the surface available to exploit. Then, when detection does catch something, there’s less to contain, the blast radius is smaller, and remediation is faster. 

Static, group-based access expands that surface. Dynamic, context-aware entitlements compress it. 

AI will help attackers operate faster and with less predictable behavior. How much damage they can do when they gain access depends heavily on what permissions are available to begin with. 

That’s a lever most organizations haven’t fully pulled. 
