
Notes From the Road: RSA Conference 2026 Edition

April 1, 2026
By Scott Radcliffe

While at this year’s RSA Conference I overheard a very senior security executive at a well-known security company remark that he “came to RSA expecting a security conference and instead seemed to arrive at an AI conference.” Like many things said in jest, there was more than a little truth buried inside.

Walking through the exhibitor halls, you’re immediately struck by the inclusion of AI in nearly every offering on display—from threat detection to incident response to risk management. It seemed every vendor had either retrofitted their solution with AI or built one from scratch.

It would be easy to dismiss it all as hype, another technology cycle where marketing teams latch onto a buzzword without much substance to offer under the surface. Surely at least a little is snake oil, but to dismiss everything as vaporware would be to miss the dramatic and evolutionary step AI represents for the cybersecurity space.

In the short twelve months since last year’s RSA conference, we’ve witnessed countless AI experiments, implementations and innovations, and even the most experienced security minds in the world are grappling with uncertainty about what’s coming next.

The Great Shift: From “Humans in the Loop” to Autonomous Operations

At last year’s conference, most discussions around AI in security were grounded at some level on keeping “humans in the loop” of the decision-making and execution process. AI could augment, assist and accelerate actions taken by human admins and users, but the final call had to rest with a human who understood context, nuance and consequences.

That narrative has fundamentally shifted in a single year. As Wall Street Journal reporter James Rundell pointed out from his first impression of this year’s conference, the industry has undergone a philosophical change over the course of the last year. Security teams are no longer asking whether AI should act independently—they’re asking how to best, and hopefully safely, architect systems where AI must act independently and, quite often, in real-time.

This isn’t a subtle distinction. It represents a wholesale reimagining of how we defend our networks and systems. The efficiency gains of this headlong leap into AI are real, but so are the risks, and that tension is what keeps many security leaders up at night.

Identity as the New Perimeter

If autonomous AI is the emerging challenge, then identity has become an even more critical battleground. Anyone who’s paid attention to the security space recently is familiar with the popularity and continued growth of identity-based attacks that use known, often re-used credentials like usernames, email addresses, and passwords to gain access to systems. With AI systems now being granted expanding autonomy and access to sensitive data, the question of who, or more accurately what, should be able to access particular systems, networks, or information has taken on even greater urgency.

Early implementations of AI agents have already demonstrated the dangers of unchecked permissions. Give these systems too much access or too broad an ability to act, and they can quickly spiral into trouble. A key message echoed through many of the talks at RSA this year: guardrails aren’t optional, they’re foundational. As organizations deploy AI more widely, the ability to establish firm, granular controls around identity and access will be absolutely critical. In a world of autonomous intelligent agents, identity becomes the ultimate arbiter of what’s possible.

AI’s Dual-Use Dilemma for Security: Offensive Operators Will Have a Huge Head Start

Perhaps the most sobering insight I took away from RSA this year is how far behind defenders will be, and for how long, in the AI race. AI certainly represents an immediate force multiplier for attackers, and it will take a significant amount of time for defenders to catch up. Kevin Mandia, a veteran cybersecurity executive with decades of experience founding some of the industry’s most iconic companies, put some sobering specifics to this sentiment. In his view, AI will provide a clear advantage to offensive operations for the next two years before the defense can accumulate enough data and operational experience to train systems that keep pace.

The advantage goes beyond speed, though that’s certainly part of it. AI enables attackers to operate with precision and personalization previously unattainable at scale. Rather than deploying generic attack tactics across broad targets, AI allows threat actors to generate bespoke attack plans tailored to individual organizations—understanding their specific vulnerabilities, mimicking their communication patterns, and timing operations to maximize success. For defenders, holding the line while playing catch-up will be a daunting but necessary challenge.

The Sovereignty Conversation: A Quiet but Consequential Shift

Away from the AI spotlight, during a fireside chat, Microsoft’s CISO for AI and Technology Data, Igor Tsyganskiy, brought up a fascinating nuance to the data sovereignty trend many cloud providers are facing. As organizations continue to adopt cloud architectures, where data lives—physically and jurisdictionally—has moved from a compliance checkbox to a strategic security consideration.

Different regions, regulatory frameworks and threat landscapes all create scenarios where the location and control of data become material to security architecture. This trend will likely only intensify as companies navigate an increasingly fragmented geopolitical environment. Data sovereignty has been a growing trend for a number of months at this point. The interesting point Tsyganskiy raised at the conference last week, however, was the urgent need for organizations to also consider operational contingencies in their plans to satisfy data sovereignty requirements. A recent airstrike that destroyed Amazon’s data center in Bahrain underscores the point: it doesn’t take a missile to disrupt operations, and organizations should be prepared, as the answer may not be as simple as flipping a switch to another data center in a desired location.

For security and communications leaders, this means the conversation with the business can’t remain purely technical. It has to account for regulatory, geopolitical and strategic business considerations.

The Fundamentals Still Matter (Maybe More Than Ever)

Rob Joyce, the former director of cybersecurity at the NSA, emphasized a reality that can sometimes get lost amid the AI hype: the fundamentals of cybersecurity remain a powerful and largely effective defense. His point is worth emphasizing, especially at a conference filled with vendors pitching the latest solutions the security industry has to offer.

Attackers, Joyce argued, continue to disproportionately target organizations that don’t execute the basics well. Though those attacks will only grow as bad actors begin to use AI as a force multiplier, organizations that prepare by adhering closely to good security fundamentals will be in a much better position to weather the coming storm. This means companies that lag in patching systems, haven’t broadly deployed multi-factor authentication, maintain inadequate logging practices, or generally fail to stay prepared are putting their systems at much greater risk.

I would argue the same applies to communications and marketing teams. Ensuring you’re prepared, properly integrated with the rest of the organization and generally ready to help your organization stay ahead of a threat environment evolving at exponential speed is more important than ever. Furthermore, I’d add that the time has come for marketing and communications teams to do their part and partner with technical teams to ensure the security conversation organizations have with their boards and business leaders isn’t dominated by buzzwords but is instead grounded in ensuring the foundational elements of security are strong enough to build upon.

It’s certainly easy to walk away from RSA 2026 with a sense of dread. But doing so would miss the deeper message embedded throughout the conference.

Yes, AI represents a significant challenge. Yes, attackers have a near-term advantage. Yes, data sovereignty is becoming a more complex puzzle to solve. But it’s a challenge I think we’re all up for if we’re ready.

Scott Radcliffe is FleishmanHillard’s global director of cybersecurity, leading the firm’s Cybersecurity Center of Excellence and advising clients on rising cyber risks. He recently rejoined FH from Apple, where he led cybersecurity communications and previously served as the agency’s senior global data privacy and security expert.