
Article

Notes From the Road: RSA Conference 2026 Edition

April 1, 2026
By Scott Radcliffe

While at this year’s RSA Conference I overheard a very senior security executive at a well-known security company remark that he “came to RSA expecting a security conference and instead seemed to arrive at an AI conference.” Like many things said in jest, there was more than a little truth buried inside.

Walking through the exhibitor halls, you’re immediately struck by the inclusion of AI in nearly every offering on display—from threat detection to incident response to risk management. It seemed every vendor had either retrofitted an existing solution with AI or built a new one from scratch.

It would be easy to dismiss it all as hype, another technology cycle in which marketing teams latch onto a buzzword without much substance to offer under the surface. Surely at least a little of it is snake oil, but to dismiss everything as vaporware would be to miss the dramatic evolutionary step AI represents for the cybersecurity space.

In the short twelve months since last year’s RSA conference, we’ve witnessed countless AI experiments, implementations and innovations, and even the most experienced security minds in the world are grappling with uncertainty about what’s coming next.

The Great Shift: From “Humans in the Loop” to Autonomous Operations

At last year’s conference, most discussions around AI in security were grounded at some level on keeping “humans in the loop” of the decision-making and execution process. AI could augment, assist and accelerate actions taken by human admins and users, but the final call had to rest with a human who understood context, nuance and consequences.

That narrative has fundamentally shifted in a single year. As Wall Street Journal reporter James Rundle pointed out from his first impression of this year’s conference, the industry has undergone a philosophical change over the course of the last year. Security teams are no longer asking whether AI should act independently—they’re asking how best, and hopefully how safely, to architect systems where AI must act independently and, quite often, in real time.

This isn’t a subtle distinction. It represents a wholesale reimagining of how we defend our networks and systems. The efficiency gains of this headlong leap into AI are real, but so are the risks, and that tension is what keeps many security leaders up at night.

Identity as the New Perimeter

If autonomous AI is the emerging challenge, then identity has become an even more critical battleground. Anyone who’s paid attention to the security space recently is familiar with the popularity and continued growth of identity-based attacks that use known, often re-used credentials like usernames, email addresses and passwords to gain access to systems. With AI systems now being granted expanding autonomy and access to sensitive data, the question of who, or more accurately what, should be able to access particular systems, networks or information has taken on even greater urgency.

Early implementations of AI agents have already demonstrated the dangers of unchecked permissions. Give these systems too much access or too broad an ability to act, and they can quickly spiral into trouble. A key message that echoed through many of the talks at RSA this year is that guardrails aren’t optional; they’re foundational. As organizations deploy AI more widely, the ability to establish firm, granular controls around identity and access will be absolutely critical. In a world of autonomous intelligent agents, identity becomes the ultimate arbiter of what’s possible.

AI’s Dual-Use Dilemma for Security: Offensive Operators Will Have a Huge Head Start

Perhaps the most sobering insight I took away from RSA this year is how far behind defenders will be, and for how long, in the AI race. AI certainly represents an immediate force multiplier for attackers, and it will take a significant amount of time for defenders to catch up. Kevin Mandia, a veteran cybersecurity executive with decades of experience founding some of the industry’s most iconic companies, put some sobering specifics to this sentiment. In his view, AI will provide a clear advantage to offensive operations for the next two years before the defense can accumulate enough data and operational experience to train systems that keep pace.

The advantage goes beyond speed, though that’s certainly part of it. AI enables attackers to operate with precision and personalization previously unattainable at scale. Rather than deploying generic attack tactics across broad targets, AI allows threat actors to generate bespoke attack plans tailored to individual organizations—understanding their specific vulnerabilities, mimicking their communication patterns, and timing operations to maximize success. For defenders, holding the line while playing catch-up will be a daunting but necessary challenge.

The Sovereignty Conversation: A Quiet but Consequential Shift

Away from the AI spotlight, during a fireside chat Microsoft’s CISO for AI and Technology Data, Igor Tsyganskiy, raised a fascinating nuance to the data sovereignty trend many cloud providers are facing. As organizations continue to adopt cloud architectures, where data lives—physically and jurisdictionally—has moved from a compliance checkbox to a strategic security consideration.

Different regions, regulatory frameworks and threat landscapes all create scenarios where the location and control of data become material to security architecture, and this trend will likely only intensify as companies navigate an increasingly fragmented geopolitical environment. Data sovereignty has been a growing concern for a number of months at this point. The interesting point Tsyganskiy raised at the conference last week, however, was the urgent need for organizations to build operational contingencies into their plans to satisfy data sovereignty requirements. A recent airstrike that destroyed Amazon’s data center in Bahrain underscores the point: it doesn’t take a missile to disrupt operations, and the answer may not be as simple as flipping a switch to another data center in a desired location.

For security and communications leaders, this means the conversation with the business can’t remain purely technical. It has to account for regulatory, geopolitical and strategic business considerations.

The Fundamentals Still Matter (Maybe More Than Ever)

Rob Joyce, the former director of cybersecurity at the NSA, emphasized a reality that can sometimes get lost amid the AI hype: the fundamentals of cybersecurity remain a powerful and largely effective defense. His point is worth emphasizing, especially at a conference filled with vendors pitching the industry’s latest solutions.

Attackers, Joyce argued, continue to disproportionately target organizations that don’t execute the basics well: companies that lag in patching systems, haven’t broadly deployed multi-factor authentication, maintain inadequate logging practices, or generally fail to stay prepared are putting their systems at much greater risk. Those attacks will only grow as bad actors begin to use AI as a force multiplier, but organizations that adhere closely to good security fundamentals will be in a much better position to weather the coming storm.

I would argue the same applies to communications and marketing teams. Ensuring you’re prepared, properly integrated with the rest of the organization and generally ready to help your organization stay ahead of a threat environment evolving at exponential speed is more important than ever. Furthermore, the time has come for marketing and communications teams to do their part and partner with technical teams, ensuring the security conversation organizations have with their boards and business leaders isn’t dominated by buzzwords but is instead grounded in making the foundational elements of security strong enough to build upon.

It’s certainly easy to walk away from RSA 2026 with a sense of dread. But that would miss the deeper message embedded throughout the conference.

Yes, AI represents a significant challenge. Yes, attackers have a near-term advantage. Yes, data sovereignty is becoming a more complex puzzle to solve. But these are challenges I think we’re all up for, if we prepare.

Scott Radcliffe is FleishmanHillard’s global director of cybersecurity, leading the firm’s Cybersecurity Center of Excellence and advising clients on rising cyber risks. He recently rejoined FH from Apple, where he led cybersecurity communications and previously served as the agency’s senior global data privacy and security expert.

Article

Why Your AI Rollout Is Stalling (And What Actually Moves the Needle)

March 25, 2026
By Zack Kavanaugh

Most organizations are investing heavily in AI but seeing minimal return. The tools are rolling out. The impact isn’t landing. This article examines why adoption is stalling, what employees are really feeling and why a new model for change is essential to close the gap between investment and outcomes. 

There’s a paradox unfolding in organizations right now – and it’s quietly derailing AI initiatives at scale.

Companies are pouring millions – in some cases, billions – into AI infrastructure. Platforms are deploying. Training programs are launching. And yet, most organizations report that their AI efforts aren’t delivering the results they expected.  

In fact, 95% of generative AI pilots fail to reach measurable business impact. Only 1% of organizations consider their deployments truly mature. And across the workforce, a third of employees are actively considering leaving over unclear AI expectations and lack of support. 

The investment is real. The adoption and impact are missing – and the disconnect is striking.

So, what gives? 

What we’ve learned supporting organizations through AI transformation is this: they’re treating it like a technology problem when it’s actually a people problem. And until we acknowledge that difference, adoption – and business impact along with it – will continue to stall. 

The Emotions Nobody’s Talking About 

Walk into most organizations right now, and the conversation sounds logical. “Here’s the business case. Here’s the ROI. Here’s the productivity uplift.” But underneath that rational overlay is something messier – and infinitely more powerful: how people actually feel.

The data points here are endless – and we could marshal dozens more to prove that adoption is stalling. But honestly? They don’t really matter. What does matter is whether you feel your organization is progressing at the rate you know it’s capable of.  

If not, or if you don’t know where to start to answer that question, it may be time to look closely at your adoption strategy.  

The AI Readiness Gap 

This is the mistake we see companies continuing to make: assuming a strong business case is enough to win people over. We’re treating AI adoption like a switch you flip, when it’s actually a continuous, messy, non-linear process that requires people to move through change at different speeds. 

Most organizations are still leaning on traditional change models – the kind that default to logic and expect a single launch moment to do the heavy lifting.  

But AI transformation isn’t a single moment. It’s not a product launch. It’s a fundamental shift in how people think about their work, what they value in their roles and whether they trust the organization to shepherd them through it. 

That gap – between what leaders expect and what employees experience – is the real barrier to adoption. 

A Different Path Forward 

What’s needed is a model designed for how people actually change. Not how we think they should change. How they actually do. 

The good news: that change management model exists. It features three phases, each addressing a different layer of the employee experience, and all three matter equally:

Phase 1: Normalization – The Emotional Layer 

Normalization is about shifting mindsets – listening, building psychological safety and trust, de-weirding tools and making AI part of everyday conversation. Before anyone can adopt anything, they need to feel safe, seen and supported. This means listening before launching.  

It also means leaders modeling vulnerability, not just expertise. And it means identifying trusted voices – both champions and skeptics – and giving them visibility in shaping the journey. When you remove the mystique around AI and make it visible in how people actually talk and work, adoption becomes possible. Listening earns you permission to lead.

Phase 2: Experimentation – The Personal Layer 

Experimentation is about shaping habits – encouraging participation and creating low-risk opportunities to try, learn, fail safely and reflect. Once people feel safe, they’re ready to connect AI to their own work and identity.  

This is where curiosity replaces skepticism. You can help replace skepticism with curiosity when you share stories from peers – not polished case studies, but real moments where someone figured something out or tried something that didn’t work. When people see themselves in the adoption story, they move from “this doesn’t apply to me” to “I see where this helps.” Experiments become personal. Habits begin to form. Failure becomes data, not judgment. 

Phase 3: Integration – The Operational Layer 

Integration is about scaling impact – building and validating use cases, measuring value, embedding AI into workflows and scaling solutions. When adoption becomes embedded in how work actually happens, impact becomes measurable and repeatable.  

Proven experiments turn into templates and workflows. Success stories become standard operating procedures. Recognition systems reward AI fluency. And AI stops feeling like the new initiative and starts feeling like “just how we work.”

The Continuum, Not the Launch 

The shift here is fundamental. Instead of treating adoption as a destination, we’re treating it as a progression.  

Instead of betting everything on a single launch moment, latest tool or new corporate mandate, we’re developing constant feedback loops. Instead of assuming readiness, we’re building it – intentionally, measurably and with employees at the center.  

Whether you’re leading an organization, a department or a team, your people will never move cleanly through one phase alone. They will move at their own pace. Some people will be experimenting while others are just beginning to normalize. And as new information emerges, they will oscillate back and forth – revisiting earlier phases to deepen their foundation before moving forward again. 

The bottom line: AI adoption accelerates only when the environment is ready – when culture, clarity and context catch up to ambition. That’s when change starts to feel real. And when people decide it’s worth leaning in.  

More to come on all this. Stay tuned.  

Article

AI is Reshaping Communications: Inside FleishmanHillard’s Enterprise-Wide Approach

February 19, 2026

In his new Forbes piece, Bernard Marr explores the breakneck pace of AI transformation in the communications landscape with Ephraim Cohen, FleishmanHillard’s global head of data and digital. Cohen reveals that unlike past technological shifts that took decades to prepare for, today’s AI evolution is happening so rapidly that even full-time experts are struggling to keep pace.

Three Key Takeaways:

1. Democratizing AI Across the Organization
Rather than creating an elite “AI team,” Cohen describes empowering every employee with hands-on access to frontier models and training. This bottom-up approach has yielded more powerful, bespoke solutions because they’re built by people who intimately understand client challenges rather than by strictly technical specialists.

2. The Power of Curated Knowledge Libraries
Building digitized libraries of proven case studies and best practices that feed AI agents creates more relevant, accurate outputs than relying on open internet training data. For crisis simulations and campaign work, this approach delivers precision over generic AI-generated content.

3. Keeping Humans in the Driver’s Seat
Cohen emphasizes that human creativity remains paramount. AI works best as a talented assistant—helping test, refine, and optimize human ideas, not replacing them. The result: less “AI slop,” more polished, high-impact work.

Article

FleishmanHillard Wins 2026 Innovation Awards for Data-Driven Strategy

January 12, 2026

FleishmanHillard has won two North America 2026 SABRE Awards: Data-Driven Agency of the Year for “Democratizing Data” and Data Professional of the Year for Ines Schumacher and SAGE Synthetic Audiences.

SAGE Synthetic Audiences, built on Omnicom’s industry-leading data stack and FleishmanHillard’s audience profiling expertise, was officially introduced last spring.

The wins underscore FleishmanHillard’s operational mindset of embedding intelligence and analytics at the center of communications strategy.

The recognition follows last fall’s news of 13 AMEC Measurement and Evaluation Awards, including seven Gold, four Silver and two Bronze across FleishmanHillard TRUE Global Intelligence, Methods+Mastery and Omnicom Public Relations. Those accolades included the Innovation Award for New Measurement Methodologies, Best Use of New Technology in Communications Measurement and Best Use of Measurement for a Single Event or Campaign.

The wins reflect what FleishmanHillard describes as an “integrated intelligence model,” where rigorous analysis and critical thinking are baked into strategy development and execution from the start rather than applying data after the fact. The news follows the rollout of the agency’s counselor-led AI solutions suite FH Fusion last summer.

The SABRE recognition validates the investments made in building proprietary methodologies, scaling analytics capabilities across regions and training advisors agency-wide to lead with insight.

Article

Sponsored Content in the AI Era

December 3, 2025
By Corina Quinn, Andrea Margolin and Amanda Hampton

The landscape may be shifting, but your brand story is still getting heard.

Sponsored content is arguably one of the most powerful tools in the PR toolbox for ensuring audiences hear a brand-related story in an editorial format and context that will both resonate and drive impact. We’d be hard-pressed to think of a campaign that shouldn’t have this high-impact type of storytelling as a centerpiece.

But how does it work in today’s AI-driven environment? 

Sponsored Content in the AI Era

This era of zero-click searches has led to plenty of hand-wringing about what it means for sponsored article performance and whether sponsored content is still a sound strategy for an integrated campaign.

Good news on that front: zero-click hasn’t marked the death of sponsored content. It remains a dynamic, stable way to reach critical audiences with rich storytelling in trusted formats that offer unique support for a brand’s business goals.

New To SponCon?

Sponsored content is a form of paid media partnership in which brands collaborate with media publishers to co-create content that aligns with the editorial standards and audience interests of the publication, while advancing the brand’s communications objectives.

Unlike traditional advertising, sponsored content is integrated within the editorial environment, delivering insights, perspectives or stories that resonate with readers and offer value beyond overt promotion. This approach leverages the credibility and reach of established media outlets, allowing brands to participate authentically in conversations that matter to their target audiences, while retaining a level of control over message integration that earned coverage from those same outlets can’t guarantee.

Why Sponsored Content?

Trusted Influence at Scale
By appearing within reputable editorial environments, sponsored content benefits from the publisher’s authority and audience trust—and taps its editorial expertise in content creation. This enhances message credibility and drives deeper consideration among target audiences.

Strategic Storytelling
Sponsored content enables nuanced, narrative-driven communications that go beyond product features, from big-picture brand values and new consumer offerings to thought leadership and societal impact. It supports reputation, positioning and purpose-led initiatives, and it helps drive consumer results through path-to-purchase and other lower-funnel tactics.

Precision Audience Engagement
Partnerships with publishers provide access to highly engaged audiences with a nuanced set of metrics you don’t get with earned media. This allows for tailored content that meets specific needs, interests, or pain points, improving relevance and engagement metrics—while driving brand business goals.

Integrated Communications Impact
As part of an integrated strategy, sponsored content amplifies earned and owned messaging, bridges gaps in the customer journey, and creates additional touchpoints that reinforce key themes across channels. It’s important to remember that sponsored content goes beyond print or digital advertorial: it includes content that taps social media and other channels, and it leverages video and audio in addition to digital formats.

Measurement and Optimization
With robust analytics that can include time spent, click-through rates and more, sponsored content programs can be assessed for engagement, sentiment, and conversion—providing actionable insights to refine messaging and demonstrate business impact.

Sponsored Content in the Era of AI

There is a lot of discussion about declining site traffic due to changes in how people find information (so-called zero-click search, driven by AI features), and the truth is there remain many unanswered and still-developing questions about how AI summaries treat sponsored content, which we will continue to track. We can also imagine a world where AI enables more targeted distribution opportunities as media platforms respond and evolve.

Here’s what we know to be true today: Sponsored content remains effective because it is distributed and amplified through intentional, multi-channel strategies and is not just reliant on organic search or publisher homepage traffic.

There are several reasons why sponsored content remains effective:
• Unique Traffic Drivers: In general, search and AI summaries aren’t the main traffic drivers for sponsored content articles. Programs lean on newsletters, social media and paid amplification to drive qualified audiences.
• Publishers Are Aware and Adapting: Many publishers moved to paywalls and subscriber-based ecosystems and have grown their cross-platform channel strategies in recent years to reduce their dependency on search, even before AI summaries emerged.
• Quality Over Quantity: Our success metrics go beyond page views and impressions. We emphasize time spent, engagement and lower-funnel actions such as clicks to the client’s site. Even with modest traffic results, highly engaged audiences drive stronger qualitative impact.
• Flexible Strategies: If we do see sponsored article performance start to be impacted, we collaborate with our media partners to evolve our strategies and content mix to make up for the loss in traffic.

Sponsored Content Goes Beyond Articles

In our current sponsored content programs, diversification is a deliberate part of our strategy to ensure we’re future-proofing programs against ecosystem shifts.
• Video, newsletter and social media integrations are important players in the content space and further distribute our reach and performance.
• Publisher-branded social handles, podcasts and more provide additional opportunities to connect with audiences even if article impressions or page views fluctuate.
• Diversifying our programs makes them less susceptible to zero-click trends because distribution is intentional and audience-driven rather than search-driven.

As the digital landscape continues to shift, we will monitor changes, adapt distribution and measurement strategies, and keep clients informed—ensuring that sponsored content continues to deliver value and impact in any environment.


Corina Quinn is a Senior Vice President who co-leads our Media Partnerships Center of Excellence. A longtime award-winning editorial director and content strategist, she spent more than a decade in digital newsrooms at places including Conde Nast Traveler and Travel + Leisure.

Andrea Margolin is a senior communications strategist with more than 20 years of experience driving integrated storytelling, executive thought leadership and digital innovation for enterprise health brands. Based in Washington, DC, Andrea partners with global healthcare leaders to translate complex science into compelling, high-impact narratives that resonate across audiences and platforms.

Amanda Hampton is a Vice President based in Washington D.C. where she leads integrated sponsored content programs informed by audience insight and editorial excellence. She collaborates with clients and top consumer media partners to create high-impact storytelling that drives engagement and strengthens brand performance.

Article

Predictions for the Year Ahead: 3 Shifts in Internal and Change Comms in the Age of AI 

November 20, 2025
By Zack Kavanaugh

AI is reshaping how work gets done – and the field of communications is no exception. 

The fundamentals haven’t changed: people still need clarity, context and connection to make sense of change. What’s changing is how communication is created, delivered and received. 

Here are three shifts we expect to see in the year ahead – and what leaders can do now to prepare. 

Prediction 1: Content Will Keep Scaling. Attention Will Keep Shrinking. 

Stat to watch: 83% of knowledge workers say they are trapped in a communication maze of scattered emails and chats, where vital information often gets lost. 

What it means: Information overload is already a challenge, and AI-generated content is likely to add to the volume. The risk isn’t that employees won’t get enough information – it’s that they’ll disengage or find that messages actively obstruct their ability to focus on meaningful work.  

Communication teams must shift from output to impact, producing fewer, more intentional messages that protect attention and create value. 

How to prepare: Treat communication like scarce real estate. Ask: Is this message necessary? Who really needs it? Run small tests and trials, such as A/B tests, to see what captures attention, then scale your solution based on those insights. In a world of abundant content, relevance is what earns attention, provides value and builds trust. 

Prediction 2: Signals Will Get Louder. Understanding Will Stay Quiet. 

Stat to watch: 57% of employees say their company has a generative AI strategy in place, compared with 89% of executives who say it does. 

What it means: Leaders may assume their messages are landing simply because they were sent or because dashboards show clicks or activity. However, the gap between executives and employees in understanding AI strategy shows how misleading that assumption can be.  

How to prepare: Go beyond superficial metrics. Don’t just track clicks or usage. Use methods like focus groups, pulse surveys and informal conversations to assess true understanding. Ask: Do people understand the strategy? Can they explain what it means for their role? Don’t settle for more data. Seek deeper, actionable insight that drives understanding and adoption.

Prediction 3: Managers Will Carry More of the Message – and More of the Risk. 

Stat to watch: Only 27% of managers are engaged at work, and over half have never received formal training, including in communication and people-leadership skills.

What it means: As AI tools take on more of a team’s drafting and editing responsibilities, managers play an increasingly important role in the final stage of communication – ensuring messages are delivered in a way people understand and trust.  

Employees already look to them first for clarity, but many managers don’t feel equipped for the role. Without support, important messages risk being diluted or distorted – and organizational alignment can weaken. 

How to prepare: Support managers as the human bridge between the organization’s strategy and those who will implement it in their day-to-day responsibilities. Provide hands-on training to build confidence and give them opportunities to practice effective communications.  

In an AI-driven workplace, managers need more than digital tools. They need targeted coaching and ongoing, real-time support to communicate change. 

What AI Can’t Replace 

These shifts point to a future where AI does more of the producing, but people remain responsible for the meaning. 

More than ever, communication won’t just be about what gets said. It will be about what gets understood, internalized and acted on. And as machine-made content becomes more common, the messages employees will trust most are the ones that feel human. 

The role of the Internal Communications function is evolving – not to create more, but to help organizations make sense of more.  

Leaders who plan for this now will be better equipped to earn attention, maintain alignment and guide their people through the changes AI is accelerating. 

Article

Understanding the GLP-1 Consumer: Pairing AI and Consumer Behavior Research to Map Potential Impact on Food, Nutrition and Innovation 

October 29, 2025
By Allison Koch

Obesity medications have created a new type of consumer with unique needs. These consumers are spending their money differently, including spending less on groceries, while still figuring out how to integrate their new diet into their homes and social lives.

Food companies, as well as health professionals and dietitians like me, are seeking to better understand the GLP-1 user and how best to support them, especially as the medications become more affordable and accessible.  

Consumer research is already showing us where there are opportunities to support GLP-1 users. For example: 

GLP-1 users are tech-savvy, diverse and often rely on online communities – underscoring a shift in how Americans get health advice.

Moving beyond the numbers with AI 

But how do we really get behind the statistics and inside the mind of a GLP-1 user?  

We created a synthetic audience—an AI-driven amalgamation of many users based on all of the research we could put into the tool—to explore their thoughts and use them as a springboard for discussion and inspiration. Our proprietary tool unveiled potentially unintended consequences of medication users’ decisions, including how their dietary habits and behaviors could influence how and what their families eat. More broadly, their habits and decisions will shape how product innovation happens and how the food supply chain is impacted.

And our synthetic audience showed us clearly that:  

  1. One size fits none: the most effective engagement – whether clinical or product – starts with understanding and targeting micro-segments.
  2. Rethink education with reach: health care professionals (HCPs) – preferably led by registered dietitians (RDNs) who are experts in connecting the food and healthcare sectors – as well as the broader healthcare and food industries need to embed in GLP-1 users’ ecosystems, as most build health knowledge outside traditional channels (on YouTube, Reddit, TikTok and with peer groups).
  3. Anticipate ripple effects: HCPs (and the industry where appropriate) need to help patients navigate this cascade with empathy, flexibility and real-world solutions beyond just nutrition effects.

What industry leaders are saying 

With these insights in hand, earlier this month I challenged three industry professionals to apply our findings to their work in front of a crowded room at the recent Academy of Nutrition and Dietetics annual Food and Nutrition Conference & Expo (FNCE). Each panelist brought a unique perspective to the table, discussing how they work with and reach GLP-1 medication users as well as key considerations and implications for practice and the broader healthcare, food and beverage community. 

How far does the GLP-1 impact reach? My colleague and Audience Strategy and Data Innovation expert Amanda Patterson said, “The rise in GLP-1 medications is fundamentally reshaping not just how people eat, but what and how much they buy at the grocery store. Beyond the individual, these changes ripple out to families and social circles. Many users say their household food routines (grocery lists, meal prep, holiday or social meals) are being reworked to accommodate their new eating patterns.” 

How should the food industry respond? On the long-term implications if this trend continues, community nutrition dietitian and GLP-1 user Summer Kessel shared, “I’m hopeful we are course correcting from the days of massive portion sizes and novelty products over nutrition. However, I’m a little worried that if people rely too heavily on ‘low-calorie’ processed foods instead of balanced meals, they risk missing out on essential nutrients.”

Can the right nutrition messages get through the marketing hype? Founder of the Better Nutrition Program and RDN Ashley Koff shared, “We can use awareness of GLP-1 medications to introduce the public to weight-health hormones and how they regulate numerous functions in the body known collectively as ‘weight health.’ In doing this, dietitians can expand the reach of GLP-1, GIP beyond medications and help people learn to assess and as indicated, optimize their own hormones – whether they ever use a medication or not.” 

Rethinking food and health communications 

As GLP-1s continue to change daily routines and expectations, helping consumers make the right decisions to stay healthy while remaining present with family and friends at meals and other food-based activities will test how we communicate about food and health.

Combining insights from AI, research and lived experience allows us to reach solutions faster and understand not just what works, but why.  

For more information on these insights and other key learnings from FNCE, contact Allison at [email protected]

Allison Koch, MS, RD, CSSD, LDN, is a vice president in FleishmanHillard’s Chicago office, where she provides nutrition communications counsel for clients. A registered dietitian with more than 20 years of experience, she’s passionate about helping brands connect science and storytelling to inspire healthier choices and stronger consumer trust.

 
Article

Augmented Judgment, Accelerated Execution: AI’s Role in Crisis, Issues and Risk Management

October 14, 2025
By Matt Rose and Alexander Lyall

Everyone’s talking about the promise of artificial intelligence. For crisis, issues and risk managers, that promise isn’t theoretical anymore. It’s already changing the game. The speed, scale and complexity of today’s challenges demand more than human effort alone. We need tools that sharpen judgment, spot risks sooner, simulate outcomes and move faster than we ever could on our own.

At FleishmanHillard, we call this Augmented Judgment, Accelerated Execution. It’s the balance of seasoned, human counsel with the foresight, scale and speed of AI. When used well, AI doesn’t replace human judgment, it strengthens it. AI compresses timelines, expands context, flags risks earlier and gives leaders the clarity they need under pressure.

Here’s how we’re putting this advantage into practice at FleishmanHillard, using trusted frameworks and strong data governance to help clients address crises, issues and risk with confidence.

AI for Early Warning

AI is becoming an essential early warning system. It examines global news, regulatory updates, and social activity to detect emerging topics and weak signals before they escalate. By analyzing conversations across markets, languages, and jurisdictions, it connects patterns that siloed teams might miss, with speed and breadth that today’s lean human teams cannot match.

It can also track how issues are likely to evolve and flag pressure points like upcoming regulations, activist campaigns, or viral moments. In addition, it can be directed to anticipate when separate concerns may converge, adding complexity to timing, messaging, audience response and stakeholder engagement. This kind of foresight helps leaders act early, communicate clearly and get ahead before critical moments hit.

AI for Stakeholder Simulation

Spotting a potential issue is one thing. Understanding how different audiences might respond is the next. Employees may question values. Regulators may focus on compliance. Investors may worry about financial impact. Customers may be concerned about reliability.

AI helps make this analysis possible through FleishmanHillard’s SAGE Synthetic Audiences. These simulations, built on polling data, demographics, and behavioral insights, let teams pressure-test messaging in real time.

AI can also model how a story might spread. Coverage could draw regulatory attention, spark activism, or open the door for competitors. With this foresight, teams can weigh options early, decide how to respond, and plan outreach in the right order.

AI for Story Forecasting

Reporters rarely work in isolation. Their previous stories, tone, and interview style often foreshadow how a new piece might unfold. AI can analyze this public data to forecast likely narratives, giving teams time to scenario-plan and prepare fact-based responses.

In one recent case, the FleishmanHillard team leveraged AI to generate a full-length draft of a potential investigative article based on a reporter’s in-depth inquiry, their past work, and facts they were likely to uncover. The projection closely matched the final story, serving as a clear model for the client and FH counselors to work against and affording weeks to prepare. Together, they aligned messaging, cleared responses and rehearsed scenarios. When the article ran, the team responded with focus and confidence, avoiding both unwanted attention and business disruption.


AI for Crisis Content Management

Crisis response is rarely just one statement. It quickly becomes a growing stack of analytics and materials: standby statements, employee letters, investor scripts, customer updates, government briefings, media talking points, FAQs and social posts. Managing it all can become chaotic, especially with lengthy approval chains.

AI tools like FH Crisis Navigator help bring order. Acting as a virtual program manager, it adapts approved language for different audiences with speed and consistency. Using this tool, a crisis counselor can generate drafts, maintain version control, and keep updates aligned across every document. This reduces drift, speeds up approvals, embeds expert counsel, and keeps teams focused. So, when leadership needs to respond – whether to investors, regulators, customers, or the public – everything is already in place and ready for review.

AI for Scenario-Based Training

Preparation has always been essential to crisis readiness. But traditional tabletop exercises often fall short of real-world complexity. AI-powered platforms like the FleishmanHillard Crisis Simulation Lab raise the bar. Run by experienced facilitators, these simulations evolve in real time based on participant decisions. They introduce realistic challenges like media calls, stakeholder emails and viral posts, all tailored to the organization’s sector and geography.

Simulations can launch in hours instead of weeks, making them useful for both training and real-time strategy support. Structured feedback focuses on fact management, stakeholder engagement, and adaptability – building the muscle memory teams need when reputations are on the line.

AI for Campaign Risk Screening

Crises don’t always come from the outside. Sometimes a product launch, influencer partnership, or purpose-driven campaign can spark backlash, trigger scrutiny, or misfire in a volatile moment.

FH Risk Radar helps teams assess these risks before campaigns go live. It reviews concepts against regulatory guidance, cultural signals, public sentiment, and platform-specific challenges. The system scores ideas across dimensions like reputational exposure, influencer fit, message durability, and cultural sensitivity. Instead of a simple go-or-no-go call, teams get a full risk profile and clear mitigation strategies. This shifts review from a late-stage checkpoint to a strategic advantage.

From Promise to Practice

For communicators, risk leaders, and executives, AI is no longer a future promise. It’s a working tool, a strategic coach, and a force multiplier available to improve outcomes now. It surfaces early warning signs, simulates reactions, forecasts narratives, manages complex content, powers training, and screens campaigns. It delivers sharper, faster options for decision makers when every move counts.

AI’s role in crisis and risk management will only grow more sophisticated. But the message today is simple: the technology is here and can be applied to create immediate value. The leaders who use it will be better prepared to protect reputation in high-stakes moments.

At FleishmanHillard, we’re applying these tools every day to help clients anticipate challenges, navigate uncertainty, and emerge stronger. At the heart of it is Augmented Judgment, Accelerated Execution – the combination of trusted human counsel and the structured speed of AI. Together, they help organizations make better decisions, faster.


Matt Rose – Americas Lead for Crisis, Issues & Risk Management: Matt is an SVP & Senior Partner in New York with more than 30 years’ experience advising organizations on crisis and issues management, risk mitigation, and reputation recovery. He has guided companies through reputational crises, labor issues, regulatory challenges, ESG controversies, and high-profile litigation.
Alex Lyall – Lead, Risk Management, AI & Innovation: Alex is an SVP & Partner in New York with more than 15 years of experience in crisis communications, issues management, preparedness, and risk management, working across industries. As part of the leadership team, Alex will help define best practices, shape go-to-market strategies, and scale solutions, with a focus on AI integration and talent development.
 

FH Guidelines for AI in Crisis, Issues, and Risk Management Applications

At FleishmanHillard, we apply artificial intelligence with purpose, not hype. In crisis, issues, and risk management, that means combining human expertise and experience with proven frameworks, proprietary technology, necessary confidentiality, and responsible guardrails to help organizations respond with speed, confidence, and control.

During a crisis, there is no substitute for seasoned judgment. AI can surface information, suggest language, or model scenarios, but it cannot navigate the nuance of legal implications, stakeholder dynamics, or reputational risk in real time. That takes experienced counselors who have sat in the room, weighed the tradeoffs, and led under pressure. When the stakes are high, experience is not just helpful, it is essential.

That is why each FleishmanHillard application of AI in the Crisis, Issues and Risk Management Practice is anchored in three principles:

  • Experienced crisis counselors remain at the center of each use case, ensuring that technology enhances but never replaces human judgment.
  • Our systems are designed in secure, quality-assured environments that safeguard client information and uphold rigorous ethical standards.
  • AI is embedded within tested frameworks and workflows, allowing teams to move faster without sacrificing accuracy, accountability, or trust.

This disciplined approach ensures AI strengthens decision-making rather than creating new risks. With FleishmanHillard, organizations embrace innovation in crisis, issues, and risk management with confidence, knowing that it never comes at the expense of accuracy, ethics, or trust.

 

 
Article

5 AI Risks Every Company Should Be Aware of – and What to Do about Them 

September 24, 2025
By Zack Kavanaugh

AI is accelerating, but delivery on its promise is lagging behind.

The tools are multiplying, but only 1% of organizations consider their AI efforts “mature” – and 95% of generative AI pilots are failing.  

Why? Because transformation is a people challenge, not just a tech race. 

This piece surfaces five often-overlooked risks that quietly stall progress – each one rooted not in code, but in communication. Breakdowns in clarity, coordination and leadership commitment continue to limit adoption and erode trust. 

And yet, these are exactly the areas where strategic communication plays a pivotal role – helping organizations course-correct, contain risk and unlock the value AI is meant to deliver. 

For leaders ready to close the gap, here’s where to focus next. 

1. The AI Narrative Isn’t Moving as Fast as Tech  

What’s happening: AI is rolling out fast, but most employees remain unclear on what it means for their work.

Why it matters: Multiple reports show that companies are investing in AI tools faster than they’re training teams or communicating the impact. The result? Employees feel left behind, unsure where they fit in or how to contribute. 

What to do: Communications should partner with L&D and AI enablement teams to build a clear, role-relevant narrative that connects AI to everyday work. That means going beyond the “what” and “why” to include practical, team-specific examples – and showing what good AI use actually looks like. Managers play a crucial role here and should be equipped to reinforce these messages in regular team settings. 

2. Shadow AI Is Outpacing Governance 

What’s happening: Employees are quietly using unapproved AI tools to stay productive – often because sanctioned options aren’t accessible, intuitive or well-communicated. 

Why it matters: Recent research shows that over half of employees using AI at work are doing so under the radar. Only 47% have received any training, 56% have made mistakes due to misuse and nearly half say they’ve gotten no guidance at all. That creates risk – for the business, the brand and the people trying to do the right thing without clear support. 

What to do: Communications should partner with IT, HR and Compliance to promote trusted tools, clarify what’s allowed and explain why governance matters. Use short, human-centered scenarios that help people understand tradeoffs and risks. Managers should be given clear guidance on how to check in with their teams and normalize asking, “What tools are you using and why?” 

3. People Assume AI Replaces Judgment – So They Stop Using Theirs 

What’s happening: Without the right framing and support, employees may treat AI output as the final answer – not a starting point for critical thinking, refinement or discussion. 

Why it matters: A recent MIT/Wharton study found that while AI boosts performance in creative tasks, workers reported feeling less engaged and motivated when switching back to tasks without it – suggesting that over-reliance on AI can dull ownership and reduce the sense of meaning in work. 

What to do: Communications and L&D teams should align around positioning AI as a co-pilot, not a decision-maker. Messaging should emphasize the value of human input – especially in work that shapes brand, strategy or outcomes that may pose ethical dilemmas. Training should encourage questions like: 

  • “Would I feel confident putting my name on this?” 
  • “Where does this need my voice, perspective or context?” 

By reinforcing the expectation that employees think with AI – not defer to it – organizations can strengthen decision quality, protect brand integrity and keep teams connected to the meaning in their work. 

4. The Organization Is Focused on Activity, Not Maturity 

What’s happening: Many organizations are tracking AI usage – but not its strategic impact. The focus is on activity (how often AI is used), rather than maturity (how well it’s embedded in high-value work). 

Why it matters: According to a Boston Consulting Group survey, 74% of companies struggle to achieve and scale the value of AI – with only a small fraction successfully integrating it into core, high-impact functions. Without a clearer picture of what good looks like, AI efforts risk stalling at the surface. 

What to do: Communications teams should partner with AI program leads to define and share an AI maturity journey – through narrative snapshots, team showcases or dashboard insights that reflect depth, not just breadth. Highlight moments where AI has meaningfully shifted workflows, improved decision-making, unlocked new capabilities or resulted in notable client or business wins. And celebrate progress in stages – from experimentation to strategic integration to measurable ROI – to help the organization see not just what’s happening, but how far it’s come. 

5. Leaders Aren’t Framing the Change – or Making It Visible 

What’s happening: Many leaders say they support AI – but too few are actively learning, using or communicating about it. When leaders aren’t visibly experimenting or sharing what they’re discovering, employees are left to wonder if the change is important or safe to engage with themselves. 

Why it matters: According to Axios, while a quarter of leaders say their AI rollout has been effective, only 11% of employees agree. That’s not just an implementation gap – it’s a trust gap. And the root cause isn’t technical. It’s about clarity, consistency and whether people feel the change is relevant, credible and real. 

What to do: Communications teams should make it easy for leaders to show up – not just with bold vision, but with curiosity and candor. Encourage short, human signals: what they’re trying, what surprised them, what didn’t work. Share safe-fail stories. Invite open conversations. When leaders model vulnerability and visible learning, they normalize experimentation – and create the cultural conditions that AI adoption actually needs to take root. 

Making AI Real – and Communicating What Matters Most 

These risks don’t stem from infrastructure or algorithms – they come from gaps in alignment, communication and visible leadership. And they escalate when left unspoken. 

In the first article of this AI adoption series, we made the case for a people-first approach to AI. In our second article, we unpacked the psychology of hesitation, showing how quiet friction, not overt pushback, is what most often stalls momentum. 

Our hope is that this third piece has connected the dots: Communications may not own every risk – but it’s essential to identifying, navigating and de-escalating them. 

The bottom line: Technology may spark change, but it’s clarity, trust and visible leadership that make it real. FleishmanHillard partners with organizations worldwide to align ambition and action, helping clients avoid pitfalls, contain risk and realize the full value of AI. As the pace accelerates, that human advantage will be the ultimate differentiator.

Article

Global Managing Director EJ Kim Brings New Leadership and Strategic Innovation to TRUE Global Intelligence

September 4, 2025

FleishmanHillard today announced the appointment of EJ Kim as global managing director of TRUE Global Intelligence, the agency’s global research, analytics and intelligence consultancy. This leadership move signals the next phase of growth for FleishmanHillard’s intelligence capability as a central driver of strategic innovation and business impact. 

TRUE Global Intelligence connects Omnicom’s industry-leading data stack with proprietary measurement frameworks, data and AI-powered audience insight tools and consulting-grade analysis. This award-winning approach sets a new standard for data-driven intelligence, blending smart data and methodological rigor with bold creative experimentation. By applying counselor-driven AI-powered solutions, the intelligence team accelerates analysis, sharpens strategy and unlocks more dynamic client programs, helping brands drive growth, shift perception and prove value at every stage of the communications cycle. 

“EJ’s appointment reinforces our commitment to have intelligence sit at the center of how we work and deliver value for clients,” said J.J. Carter, FleishmanHillard president and CEO. “We are embedding data-driven insight and advanced analytics in every aspect of our business, guiding smarter decisions, sharper strategy and more meaningful outcomes. With EJ’s leadership, TRUE Global Intelligence will power our ambition to help clients navigate complexity, seize opportunity, anticipate change and achieve results that matter.” 

“I am truly excited for the opportunity to bring substance to innovation, ensuring that the intelligence we deliver is not just fast but thoughtful, rigorous and built to last,” said Kim. “This is a pivotal moment for intelligence to lead not follow and what sets us apart is not just the data at our fingertips but how we apply critical thinking and creative rigor to turn that data into insight and action. As AI transforms how we work, the real power lies in how we think — through critical reasoning, methodological discipline and the ability to make these tools work harder for real outcomes. I’m proud to help shape what’s next alongside a team that believes how we get there matters just as much as where we’re headed.” 

A seasoned intelligence strategist, Kim brings deep experience in building, scaling and transforming insights and analytics functions into strategic solutions-focused consulting capabilities. She has established practices from the ground up, led successful post-M&A integrations and has a proven track record of evolving intelligence offerings to help organizations turn complexity into clarity and insight into influence. Prior to joining FleishmanHillard, Kim served as executive vice president and head of Nexus, Weber Shandwick’s global center of excellence for analytics and innovation. A recognized thought leader and change agent, she also co-founded NNABI, an award-winning science-backed wellness brand focused on perimenopause care, demonstrating her entrepreneurial mindset and commitment to purpose-driven innovation. 

Kim’s appointment follows a series of strategic leadership announcements across FleishmanHillard’s global network, underscoring the agency’s commitment to data-driven insight, innovation and measurable client impact. 

Other recent market leadership announcements include Mei Lee in Singapore, Madhulika Ojha in India, Adrienne Connell in Canada, Kristin Hollins across California and Marshall Manson in the United Kingdom as well as a new global corporate affairs leadership team — as FleishmanHillard continues to invest in leaders who deliver trusted counsel and measurable impact on a global scale.