Article

Understanding the GLP-1 Consumer: Pairing AI and Consumer Behavior Research to Map Potential Impact on Food, Nutrition and Innovation 

October 29, 2025
By Allison Koch

Obesity medications have created a new type of consumer with unique needs. These consumers are spending their money differently, buying fewer groceries while still figuring out how to integrate their new diet into their homes and social lives.

Food companies, as well as health professionals and dietitians like me, are seeking to better understand the GLP-1 user and how best to support them, especially as the medications become more affordable and accessible.  

Consumer research is already showing us where there are opportunities to support GLP-1 users. For example: 

GLP-1 users are tech-savvy, diverse and often rely on online communities – underscoring a shift in how Americans get health advice.

Moving beyond the numbers with AI 

But how do we really get behind the statistics and inside the mind of a GLP-1 user?  

We created a synthetic audience—an AI-driven amalgamation of many users based on all of the research we could put into the tool—to explore their thoughts and use them as a springboard for discussion and inspiration. Our proprietary tool revealed potentially unintended consequences that medication users’ decisions may have, including how their dietary habits and behaviors could influence how and what their family eats. More broadly, their habits and decisions will drive how product innovation happens and how the food supply chain is impacted.

And our synthetic audience showed us clearly that:  

  1. One size fits none: the most effective engagement – whether clinical or product – starts with understanding and targeting micro-segments.
  2. Rethink education with reach: health care professionals (HCPs) – preferably led by registered dietitians (RDNs) who are experts in connecting the food and healthcare sectors – as well as the broader healthcare and food industries need to embed in GLP-1 users’ ecosystems, as most build health knowledge outside traditional channels (on YouTube, Reddit, TikTok and with peer groups).
  3. Anticipate ripple effects: HCPs (and the industry where appropriate) need to help patients navigate this cascade with empathy, flexibility and real-world solutions beyond just nutrition effects.

What industry leaders are saying 

With these insights in hand, earlier this month I challenged three industry professionals to apply our findings to their work in front of a crowded room at the Academy of Nutrition and Dietetics’ annual Food and Nutrition Conference & Expo (FNCE). Each panelist brought a unique perspective to the table, discussing how they work with and reach GLP-1 medication users as well as key considerations and implications for practice and the broader healthcare, food and beverage community.

How far does the GLP-1 impact reach? My colleague and Audience Strategy and Data Innovation expert Amanda Patterson said, “The rise in GLP-1 medications is fundamentally reshaping not just how people eat, but what and how much they buy at the grocery store. Beyond the individual, these changes ripple out to families and social circles. Many users say their household food routines (grocery lists, meal prep, holiday or social meals) are being reworked to accommodate their new eating patterns.” 

How should the food industry respond? Considering the long-term implications if this trend continues, community nutrition dietitian and GLP-1 user Summer Kessel shared, “I’m hopeful we are course correcting from the days of massive portion sizes and novelty products over nutrition. However, I’m a little worried that if people rely too heavily on ‘low-calorie’ processed foods instead of balanced meals, they risk missing out on essential nutrients.”

Can the right nutrition messages get through the marketing hype? Founder of the Better Nutrition Program and RDN Ashley Koff shared, “We can use awareness of GLP-1 medications to introduce the public to weight-health hormones and how they regulate numerous functions in the body known collectively as ‘weight health.’ In doing this, dietitians can expand the reach of GLP-1, GIP beyond medications and help people learn to assess and as indicated, optimize their own hormones – whether they ever use a medication or not.” 

Rethinking food and health communications 

As GLP-1s continue to change daily routines and expectations, helping consumers make the right decisions to stay healthy while also being present with family and friends at meals and other food-based activities will test how we communicate about food and health.

Combining insights from AI, research and lived experience allows us to reach solutions faster and understand not just what works, but why.  

For more information on these insights and other key learnings from FNCE, contact Allison at [email protected]

Allison Koch MS, RD, CSSD, LDN is a vice president in FleishmanHillard’s Chicago office, where she provides nutrition communications counsel for clients. A registered dietitian with more than 20 years of experience, she’s passionate about helping brands connect science and storytelling to inspire healthier choices and stronger consumer trust.

 
Article

Augmented Judgment, Accelerated Execution: AI’s Role in Crisis, Issues and Risk Management

October 14, 2025
By Matt Rose and Alexander Lyall

Everyone’s talking about the promise of artificial intelligence. For crisis, issues and risk managers, that promise isn’t theoretical anymore. It’s already changing the game. The speed, scale and complexity of today’s challenges demand more than human effort alone. We need tools that sharpen judgment, spot risks sooner, simulate outcomes and move faster than we ever could on our own.

At FleishmanHillard, we call this Augmented Judgment, Accelerated Execution. It’s the balance of seasoned, human counsel with the foresight, scale and speed of AI. When used well, AI doesn’t replace human judgment, it strengthens it. AI compresses timelines, expands context, flags risks earlier and gives leaders the clarity they need under pressure.

Here’s how we’re putting this advantage into practice at FleishmanHillard, using trusted frameworks and strong data governance to help clients address crises, issues and risk with confidence.

AI for Early Warning

AI is becoming an essential early warning system. It examines global news, regulatory updates, and social activity to detect emerging topics and weak signals before they escalate. By analyzing conversations across markets, languages, and jurisdictions, it connects patterns that siloed teams might miss, with speed and breadth that today’s lean human teams cannot match.

It can also track how issues are likely to evolve and flag pressure points like upcoming regulations, activist campaigns, or viral moments. In addition, it can be tasked with anticipating when separate concerns may converge, adding complexity to timing, messaging, audience response and stakeholder engagement. This kind of foresight helps leaders act early, communicate clearly and stay ahead before critical moments hit.
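
Under the hood, the simplest version of this kind of early warning is statistical: establish a baseline of mention volume per topic and flag anything that spikes far above it. The sketch below is a minimal illustration of that idea, not FleishmanHillard's production system; the function name, threshold and data shape are assumptions for the example.

```python
from statistics import mean, stdev

def flag_weak_signals(daily_counts: dict[str, list[int]],
                      z_threshold: float = 3.0,
                      baseline_days: int = 28) -> list[str]:
    """Flag topics whose latest daily mention count spikes well above
    their recent baseline (a simple z-score anomaly test)."""
    flagged = []
    for topic, counts in daily_counts.items():
        if len(counts) < baseline_days + 1:
            continue  # not enough history to establish a baseline
        baseline = counts[-(baseline_days + 1):-1]
        today = counts[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1.0  # avoid divide-by-zero on flat baselines
        if (today - mu) / sigma >= z_threshold:
            flagged.append(topic)
    return flagged

# Example: a quiet regulatory topic roughly quadrupling overnight gets flagged.
history = {"eu-packaging-rules": [4, 5, 3, 6, 4] * 6 + [19]}
print(flag_weak_signals(history))  # ['eu-packaging-rules']
```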

AI for Stakeholder Simulation

Spotting a potential issue is one thing. Understanding how different audiences might respond is the next. Employees may question values. Regulators may focus on compliance. Investors may worry about financial impact. Customers may be concerned about reliability.

AI helps make this analysis possible through FleishmanHillard’s SAGE Synthetic Audiences. These simulations, built on polling data, demographics, and behavioral insights, let teams pressure-test messaging in real time.
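
The implementation details of SAGE are proprietary, but one common pattern for this kind of simulation is to condition a language model on a persona assembled from real polling and demographic data, then ask it to react to draft messaging. Here is a hedged sketch of that pattern using the public OpenAI Python SDK; the model choice and personas are illustrative only.

```python
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

def simulate_reaction(persona: str, draft_message: str) -> str:
    """Ask a model to respond in character as one synthetic stakeholder.
    In practice the persona string would be grounded in real polling,
    demographic and behavioral data rather than invented traits."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": f"You are a survey respondent. Profile: {persona}. "
                        "React honestly and concisely to the message you see."},
            {"role": "user", "content": draft_message},
        ],
    )
    return response.choices[0].message.content

# Pressure-test one draft against two contrasting personas.
personas = [
    "42-year-old plant supervisor, skeptical of corporate statements",
    "29-year-old retail investor, follows the company on social media",
]
draft = "We are pausing production while we investigate the reported defect."
for p in personas:
    print(p, "->", simulate_reaction(p, draft))
```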

AI can also model how a story might spread. Coverage could draw regulatory attention, spark activism, or open the door for competitors. With this foresight, teams can weigh options early, decide how to respond, and plan outreach in the right order.

AI for Story Forecasting

Reporters rarely work in isolation. Their previous stories, tone, and interview style often foreshadow how a new piece might unfold. AI can analyze this public data to forecast likely narratives, giving teams time to scenario-plan and prepare fact-based responses.

In one recent case, the FleishmanHillard team leveraged AI to generate a full-length draft of a potential investigative article based on a reporter’s in-depth inquiry, their past work, and facts they were likely to uncover. The projection closely matched the final story, serving as a clear model for the client and FH counselors to work against and affording weeks to prepare. Together, they aligned messaging, cleared responses and rehearsed scenarios. When the article ran, the team responded with focus and confidence, avoiding both unwanted attention and business disruption.


AI for Crisis Content Management

Crisis response is rarely just one statement. It quickly becomes a growing stack of analytics and materials: standby statements, employee letters, investor scripts, customer updates, government briefings, media talking points, FAQs and social posts. Managing it all can become chaotic, especially with lengthy approval chains.

AI tools like FH Crisis Navigator help bring order. Acting as a virtual program manager, it adapts approved language for different audiences with speed and consistency. Using this tool, a crisis counselor can generate drafts, maintain version control, and keep updates aligned across every document. This reduces drift, speeds up approvals, embeds expert counsel, and keeps teams focused. So, when leadership needs to respond – whether to investors, regulators, customers, or the public – everything is already in place and ready for review.
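
One way to picture the core mechanic (this is a toy sketch, not FH Crisis Navigator itself) is a single approved core statement from which every audience-specific draft derives, with versioning so no material drifts from the latest approval:

```python
from datetime import datetime, timezone

class CrisisContentHub:
    """Toy illustration: one approved core statement, audience-specific
    drafts derived from it, and a version stamp on every change so all
    materials stay aligned with the latest approved language."""

    def __init__(self, core_statement: str):
        self.versions = []
        self.update_core(core_statement)

    def update_core(self, statement: str) -> None:
        self.versions.append({
            "v": len(self.versions) + 1,
            "approved_at": datetime.now(timezone.utc).isoformat(),
            "text": statement,
        })

    def draft_for(self, audience: str, framing: str) -> str:
        core = self.versions[-1]  # always derive from the latest approval
        return f"[v{core['v']} | {audience}] {framing} {core['text']}"

hub = CrisisContentHub("We have paused shipments while we investigate.")
print(hub.draft_for("employees", "Your safety comes first."))
print(hub.draft_for("investors", "We expect limited financial impact."))
```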

AI for Scenario-Based Training

Preparation has always been essential to crisis readiness. But traditional tabletop exercises often fall short of real-world complexity. AI-powered platforms like the FleishmanHillard Crisis Simulation Lab raise the bar. Run by experienced facilitators, these simulations evolve in real time based on participant decisions. They introduce realistic challenges like media calls, stakeholder emails and viral posts, all tailored to the organization’s sector and geography.

Simulations can launch in hours instead of weeks, making them useful for both training and real-time strategy support. Structured feedback focuses on fact management, stakeholder engagement, and adaptability – building the muscle memory teams need when reputations are on the line.

AI for Campaign Risk Screening

Crises don’t always come from the outside. Sometimes a product launch, influencer partnership, or purpose-driven campaign can spark backlash, trigger scrutiny, or misfire in a volatile moment.

FH Risk Radar helps teams assess these risks before campaigns go live. It reviews concepts against regulatory guidance, cultural signals, public sentiment, and platform-specific challenges. The system scores ideas across dimensions like reputational exposure, influencer fit, message durability, and cultural sensitivity. Instead of a simple go-or-no-go call, teams get a full risk profile and clear mitigation strategies. This shifts review from a late-stage checkpoint to a strategic advantage.
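
A multi-dimensional risk profile of this sort can be thought of as per-dimension scores plus mitigation flags rather than one pass/fail verdict. The sketch below is a generic illustration; the dimensions, scale and thresholds are placeholders, not FH Risk Radar's actual scoring model.

```python
def risk_profile(scores: dict[str, float]) -> dict:
    """Turn per-dimension scores (0 = low risk, 10 = high risk) into a
    profile plus mitigation flags, rather than a single go/no-go call."""
    flags = [dim for dim, s in scores.items() if s >= 7.0]
    return {
        "overall": round(sum(scores.values()) / len(scores), 1),
        "needs_mitigation": flags,
    }

campaign = {
    "reputational_exposure": 8.2,
    "influencer_fit": 3.0,
    "message_durability": 5.5,
    "cultural_sensitivity": 7.4,
}
print(risk_profile(campaign))
# {'overall': 6.0, 'needs_mitigation': ['reputational_exposure', 'cultural_sensitivity']}
```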

From Promise to Practice

For communicators, risk leaders, and executives, AI is no longer a future promise. It’s a working tool, a strategic coach, and a force multiplier available to improve outcomes now. It surfaces early warning signs, simulates reactions, forecasts narratives, manages complex content, powers training, and screens campaigns. It delivers sharper, faster options for decision makers when every move counts.

AI’s role in crisis and risk management will only grow more sophisticated. But the message today is simple: the technology is here and can be applied to create immediate value. The leaders who use it will be better prepared to protect reputation in high-stakes moments.

At FleishmanHillard, we’re applying these tools every day to help clients anticipate challenges, navigate uncertainty, and emerge stronger. At the heart of it is Augmented Judgment, Accelerated Execution – the combination of trusted human counsel and the structured speed of AI. Together, they help organizations make better decisions, faster.


Matt Rose – Americas Lead for Crisis, Issues & Risk Management: Matt is an SVP & Senior Partner in New York with more than 30 years’ experience advising organizations on crisis and issues management, risk mitigation, and reputation recovery. He has guided companies through reputational crises, labor issues, regulatory challenges, ESG controversies, and high-profile litigation.
Alex Lyall – Lead, Risk Management, AI & Innovation: Alex is an SVP & Partner in New York with more than 15 years of experience in crisis communications, issues management, preparedness, and risk management, working across industries. As part of the leadership team, Alex will help define best practices, shape go-to-market strategies, and scale solutions, with a focus on AI integration and talent development.
 

FH Guidelines for AI in Crisis, Issues, and Risk Management Applications

At FleishmanHillard, we apply artificial intelligence with purpose, not hype. In crisis, issues, and risk management, that means combining human expertise and experience with proven frameworks, proprietary technology, necessary confidentiality, and responsible guardrails to help organizations respond with speed, confidence, and control.
During a crisis, there is no substitute for seasoned judgment. AI can surface information, suggest language, or model scenarios, but it cannot navigate the nuance of legal implications, stakeholder dynamics, or reputational risk in real time. That takes seasoned counselors who have sat in the room, weighed the tradeoffs, and led under pressure. When the stakes are high, experience is not just helpful, it is essential.
That is why each FleishmanHillard application of AI in the Crisis, Issues and Risk Management Practice is anchored in three principles:
  • Experienced crisis counselors remain at the center of each use case, ensuring that technology enhances but never replaces human judgment.
  • Our systems are designed in secure, quality-assured environments that safeguard client information and uphold rigorous ethical standards.
  • AI is embedded within tested frameworks and workflows, allowing teams to move faster without sacrificing accuracy, accountability, or trust.
This disciplined approach ensures AI strengthens decision-making rather than creating new risks. With FleishmanHillard, organizations embrace innovation in crisis, issues, and risk management with confidence, knowing that innovation never comes at the expense of accuracy, ethics, or trust.

 

 
Article

5 AI Risks Every Company Should Be Aware of – and What to Do about Them 

September 24, 2025
By Zack Kavanaugh

AI is accelerating, but delivery on its promise is falling behind.

The tools are multiplying, but only 1% of organizations consider their AI efforts “mature” – and 95% of generative AI pilots are failing.  

Why? Because transformation is a people challenge, not just a tech race. 

This piece surfaces five often-overlooked risks that quietly stall progress – each one rooted not in code, but in communication. Breakdowns in clarity, coordination and leadership commitment continue to limit adoption and erode trust. 

And yet, these are exactly the areas where strategic communication plays a pivotal role – helping organizations course-correct, contain risk and unlock the value AI is meant to deliver. 

For leaders ready to close the gap, here’s where to focus next. 

1. The AI Narrative Isn’t Moving as Fast as Tech  

What’s happening: AI is rolling out fast, but most employees remain unclear on what it means for their work.

Why it matters: Multiple reports show that companies are investing in AI tools faster than they’re training teams or communicating the impact. The result? Employees feel left behind, unsure where they fit in or how to contribute. 

What to do: Communications should partner with L&D and AI enablement teams to build a clear, role-relevant narrative that connects AI to everyday work. That means going beyond the “what” and “why” to include practical, team-specific examples – and showing what good AI use actually looks like. Managers play a crucial role here and should be equipped to reinforce these messages in regular team settings. 

2. Shadow AI Is Outpacing Governance 

What’s happening: Employees are quietly using unapproved AI tools to stay productive – often because sanctioned options aren’t accessible, intuitive or well-communicated. 

Why it matters: Recent research shows that over half of employees using AI at work are doing so under the radar. Only 47% have received any training, 56% have made mistakes due to misuse and nearly half say they’ve gotten no guidance at all. That creates risk – for the business, the brand and the people trying to do the right thing without clear support. 

What to do: Communications should partner with IT, HR and Compliance to promote trusted tools, clarify what’s allowed and explain why governance matters. Use short, human-centered scenarios that help people understand tradeoffs and risks. Managers should be given clear guidance on how to check in with their teams and normalize asking, “What tools are you using and why?” 

3. People Assume AI Replaces Judgment – So They Stop Using Theirs 

What’s happening: Without the right framing and support, employees may treat AI output as the final answer – not a starting point for critical thinking, refinement or discussion. 

Why it matters: A recent MIT/Wharton study found that while AI boosts performance in creative tasks, workers reported feeling less engaged and motivated when switching back to tasks without it – suggesting that over-reliance on AI can dull ownership and reduce the sense of meaning in work. 

What to do: Communications and L&D teams should align around positioning AI as a co-pilot, not a decision-maker. Messaging should emphasize the value of human input – especially in work that shapes brand, strategy or outcomes that may pose ethical dilemmas. Training should encourage questions like: 

  • “Would I feel confident putting my name on this?” 
  • “Where does this need my voice, perspective or context?” 

By reinforcing the expectation that employees think with AI – not defer to it – organizations can strengthen decision quality, protect brand integrity and keep teams connected to the meaning in their work. 

4. The Organization Is Focused on Activity, Not Maturity 

What’s happening: Many organizations are tracking AI usage – but not its strategic impact. The focus is on activity (how often AI is used), rather than maturity (how well it’s embedded in high-value work). 

Why it matters: According to a Boston Consulting Group survey, 74% of companies struggle to achieve and scale the value of AI – with only a small fraction successfully integrating it into core, high-impact functions. Without a clearer picture of what good looks like, AI efforts risk stalling at the surface. 

What to do: Communications teams should partner with AI program leads to define and share an AI maturity journey – through narrative snapshots, team showcases or dashboard insights that reflect depth, not just breadth. Highlight moments where AI has meaningfully shifted workflows, improved decision-making, unlocked new capabilities or resulted in notable client or business wins. And celebrate progress in stages – from experimentation to strategic integration to measurable ROI – to help the organization see not just what’s happening, but how far it’s come. 

5. Leaders Aren’t Framing the Change – or Making It Visible 

What’s happening: Many leaders say they support AI – but too few are actively learning, using or communicating about it. When leaders aren’t visibly experimenting or sharing what they’re discovering, employees are left to wonder if the change is important or safe to engage with themselves. 

Why it matters: According to Axios, while a quarter of leaders say their AI rollout has been effective, only 11% of employees agree. That’s not just an implementation gap – it’s a trust gap. And the root cause isn’t technical. It’s about clarity, consistency and whether people feel the change is relevant, credible and real. 

What to do: Communications teams should make it easy for leaders to show up – not just with bold vision, but with curiosity and candor. Encourage short, human signals: what they’re trying, what surprised them, what didn’t work. Share safe-fail stories. Invite open conversations. When leaders model vulnerability and visible learning, they normalize experimentation – and create the cultural conditions that AI adoption actually needs to take root. 

Making AI Real – and Communicating What Matters Most 

These risks don’t stem from infrastructure or algorithms – they come from gaps in alignment, communication and visible leadership. And they escalate when left unspoken. 

In the first article of this AI adoption series, we made the case for a people-first approach to AI. In our second article, we unpacked the psychology of hesitation, showing how quiet friction, not overt pushback, is what most often stalls momentum. 

Our hope is that this third piece has connected the dots: Communications may not own every risk – but it’s essential to identifying, navigating and de-escalating them. 

The bottom line: Technology may spark change, but it’s clarity, trust and visible leadership that make it real. FleishmanHillard partners with organizations worldwide to align ambition and action, helping clients avoid pitfalls, contain risk and realize the full value of AI. As the pace accelerates, that human advantage will be the ultimate differentiator.

Article

Global Managing Director EJ Kim Brings New Leadership and Strategic Innovation To TRUE Global Intelligence

September 4, 2025

FleishmanHillard today announced the appointment of EJ Kim as global managing director of TRUE Global Intelligence, the agency’s global research, analytics and intelligence consultancy. This leadership move signals the next phase of growth for FleishmanHillard’s intelligence capability as a central driver of strategic innovation and business impact. 

TRUE Global Intelligence connects Omnicom’s industry-leading data stack with proprietary measurement frameworks, data and AI-powered audience insight tools and consulting-grade analysis. This award-winning approach sets a new standard for data-driven intelligence, blending smart data and methodological rigor with bold creative experimentation. By applying counselor-driven AI-powered solutions, the intelligence team accelerates analysis, sharpens strategy and unlocks more dynamic client programs, helping brands drive growth, shift perception and prove value at every stage of the communications cycle. 

“EJ’s appointment reinforces our commitment to have intelligence sit at the center of how we work and deliver value for clients,” said J.J. Carter, FleishmanHillard president and CEO. “We are embedding data-driven insight and advanced analytics in every aspect of our business, guiding smarter decisions, sharper strategy and more meaningful outcomes. With EJ’s leadership, TRUE Global Intelligence will power our ambition to help clients navigate complexity, seize opportunity, anticipate change and achieve results that matter.” 

“I am truly excited for the opportunity to bring substance to innovation, ensuring that the intelligence we deliver is not just fast but thoughtful, rigorous and built to last,” said Kim. “This is a pivotal moment for intelligence to lead not follow and what sets us apart is not just the data at our fingertips but how we apply critical thinking and creative rigor to turn that data into insight and action. As AI transforms how we work, the real power lies in how we think — through critical reasoning, methodological discipline and the ability to make these tools work harder for real outcomes. I’m proud to help shape what’s next alongside a team that believes how we get there matters just as much as where we’re headed.” 

A seasoned intelligence strategist, Kim brings deep experience in building, scaling and transforming insights and analytics functions into strategic solutions-focused consulting capabilities. She has established practices from the ground up, led successful post-M&A integrations and has a proven track record of evolving intelligence offerings to help organizations turn complexity into clarity and insight into influence. Prior to joining FleishmanHillard, Kim served as executive vice president and head of Nexus, Weber Shandwick’s global center of excellence for analytics and innovation. A recognized thought leader and change agent, she also co-founded NNABI, an award-winning science-backed wellness brand focused on perimenopause care, demonstrating her entrepreneurial mindset and commitment to purpose-driven innovation. 

Kim’s appointment follows a series of strategic leadership announcements across FleishmanHillard’s global network, underscoring the agency’s commitment to data-driven insight, innovation and measurable client impact. 

Other recent market leadership announcements include Mei Lee in Singapore, Madhulika Ojha in India, Adrienne Connell in Canada, Kristin Hollins across California and Marshall Manson in the United Kingdom as well as a new global corporate affairs leadership team — as FleishmanHillard continues to invest in leaders who deliver trusted counsel and measurable impact on a global scale. 

Article

A Look At Our Most Powerful AI Ingredient: People

September 2, 2025
By Ephraim Cohen

(Disclosure: Omni-based AI assistance in research and writing)

Amid the rush to brand every new dashboard, tracker or AI-powered package as a transformative solution, we’re making a different kind of bet. We’re betting boldly not just on training people, but on people themselves as our transformative solution. It’s a bet we believe every communications professional should make.

To put a fine point on it: we can empower people with AI solutions for their clients. Or we can empower people to create the right AI solution for their client.

We’re going with the latter.

To be clear, people are the differentiating ingredient in data- and AI-powered solutions. We take a communications professional – someone with expertise in communicating with stakeholders in scenarios such as product launches or crisis situations – and add AI design skills. We then equip them with industry-leading audience and media data sets, institutional knowledge digitized into knowledge libraries, and the full range of AI models.

This philosophy drives our strategy behind FH Fusion, FleishmanHillard’s approach to enabling every single professional to architect and build intelligent, agentic AI solutions. The result: communications teams aren’t just using AI and data via Omnicom’s Omni platform, they are hands-on-keyboard designing the specific, outcome-oriented solution customized or created for each client.

Communications Subject Matter Expertise Remains the Difference Maker

There’s a crucial difference between communications expertise and subject matter expertise for communications. And for years, our industry has focused on communications expertise – reputation management, message development, narrative framing, media strategy and other areas. We’ve also long had teams with subject matter expertise in specific industries or stakeholder groups, not dissimilar to what a general industry or audience analyst might bring to the table.

Now, communicators’ subject matter expertise can be the difference maker in developing effective solutions. Whether navigating healthcare regulations, global governance trends or financial disclosures, clients need more than storytelling. Combine that fluency with AI and data, and those very same counselors can create and continually improve powerful AI agents well-versed in the knowledge and nuance of specific industries and scenarios. However, applying expertise to AI agent development is only the start.

Pairing Expertise with Data Fluency (data sets and knowledge bases)

By adding data fluency and data resources, those same subject matter experts can greatly increase the precision and impact of their AI solutions. And what is data fluency? The ability to draw insights from diverse and often complex sources, including audience and media data, corporate data sets, historical and best-practice knowledge files, and synthetic data modeled from trends and behavior patterns.

Knowing how to find, interpret and apply these data types is no longer an additive skill. In the last few years we’ve made it core to being an effective counselor in the tomorrow that is rapidly developing today. Now, we’re also making it core to how that counselor creates powerful AI agents and AI solutions.

Combining Human Expertise, Data Fluency and AI Tools into Solutions

The next evolution lies in knowing how to translate subject matter expertise and data fluency into intelligent systems, namely, agentic AI solutions. We’re not talking about programming or machine learning algorithms. We’re talking about training agents the same way we train teams: instilling expertise, data-driven insights, institutional knowledge, governance frameworks and strategic logic.

A few starting examples of what FH professionals are already building – agents that:

  • Replicate and scale their methods in risk and reputation management
  • Continuously learn from new inputs
  • Automate time-consuming workflows (while increasing quality)
  • Rapidly synthesize information to support better counsel and smarter decision-making in real-time

But these agents don’t come off a shelf. They’re built by people who understand what to teach them, understand the details, nuances and overall environments of the audiences and industries for which they are designing, and, as a result, how to deploy them in a way that ensures quality in the output of the AI solution.
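
To make the idea concrete, here is a hypothetical sketch of what "training an agent the way we train a team" can amount to in practice: the expertise, knowledge sources and governance rules live in data that a counselor curates, not in code. None of this is FH Fusion itself; the class and file names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """A declarative 'training plan' for a communications agent:
    expertise, knowledge sources and governance all live in data,
    so a counselor can revise them without touching code."""
    role: str
    knowledge_files: list[str] = field(default_factory=list)
    banned_phrases: list[str] = field(default_factory=list)

    def system_prompt(self) -> str:
        sources = "\n".join(f"- {path}" for path in self.knowledge_files)
        return (
            f"You are {self.role}.\n"
            f"Ground every answer in these knowledge sources:\n{sources}\n"
            "If the sources do not cover a question, say so explicitly."
        )

    def passes_governance(self, draft: str) -> bool:
        # Minimal guardrail: block drafts that use unapproved language.
        return not any(p.lower() in draft.lower() for p in self.banned_phrases)

# A counselor 'trains' the agent by curating its spec, not by coding.
spec = AgentSpec(
    role="a crisis-communications counselor for a consumer food brand",
    knowledge_files=["playbooks/recall_protocol.md", "approved_messaging.md"],
    banned_phrases=["no comment", "guarantee"],
)
print(spec.system_prompt())
print(spec.passes_governance("We guarantee this will never recur."))  # False
```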

Redefining Excellence in Communications

What was once considered top-tier communications expertise has evolved. Today’s standard is subject matter excellence for communications, paired with the fluency to interpret data and the capability to build AI-powered systems that scale our best thinking.

Because in a world moving faster every day, the value isn’t just in having expertise. It’s in knowing how to build with it.

Up Next …

And like any good movie, this is a bit of a post-credit teaser. What does this all mean for the next generation of communicators? In our upcoming posts, we’ll explore the emerging roles we believe agencies and clients alike will need—from solutions teams to knowledge librarians, cultural anthropologists and even art historians.

Stay tuned.

Article

Elevating Cybersecurity Messaging After Black Hat 2025

August 27, 2025
By Miranda Sanders

Las Vegas was sweltering for Black Hat 2025, and so were the conversations on the show floor. AI dominated the discussion as both a powerful tool for defense and a fresh attack vector. There was also news of major advances in cloud and endpoint security, alongside rising concern among experts about supply chain and infrastructure-targeted threats.

But what stood out to us this year wasn’t just the tech. It was how the conversation around security itself is evolving, raising the bar for communicators everywhere.

The news isn’t gone. It’s just different.

If you felt this year’s coverage was somewhat muted, you’re not alone. Gone are the days when Black Hat was the moment, a guaranteed headline in every tier-one business publication. Instead, the coverage that mattered most came from a handful of reporters with deep, longstanding relationships in the cybersecurity space, at publications including The Verge, VentureBeat, Wired, ZDNet and Network World. These reporters already have a clear understanding of a brand’s enterprise security business strategy. They can dive deep into the industry implications of product news, from Google’s move toward better supply chain security to SentinelOne’s managed services expansion, Microsoft’s “Project Sentinel AI,” Cisco’s quantum-resilient encryption and more.

The threat intel has hit home.

Five years ago, a single research report could dominate the news cycle, with dozens of stories written by security media during Black Hat. Now it takes more. The bar is higher, and editors want hard evidence that connects to real-world risk.

Outlets like Reuters and Bloomberg focused on threats with tangible implications for infrastructure and public safety. For example, Reuters covered activity around APT41 and Iranian cyber espionage. At the same time, Politico discussed the news’ geopolitical implications and potential policy responses.

Bloomberg reported on credible threats to electrical grids and potential impacts on critical infrastructure. The common theme? If threat intelligence impacts – or has a real, credible threat to impact – people’s lives, then it’s worth covering.

Former NYT reporter Nicole Perlroth’s keynote put it bluntly: the human impact of cyber risk is no longer hypothetical. It is today’s reality, and it’s only going to get more devastating. For communicators, translating technical findings into stories about people and policy is now essential.

Reporters want to experience, not just observe.

Several reporters on site said that the things they enjoyed most this year were moments set up by brands where they could place themselves in the shoes of security professionals on the front line of today’s biggest threats – whether during panels, sessions or dedicated private events. Several tier-one media outlets attended a Cisco Talos tabletop exercise. In this hour-long immersive session, they played a Dungeons and Dragons-like game to understand how an incident may play out in real life.

As communicators, prioritizing these immersive opportunities can turn complex topics into compelling stories.

What does this mean for security communicators?

If Black Hat was any indication, media are looking for clear, authoritative voices who can cut through the technical noise and connect security stories to business, policy and human impact. Here’s how to best do that for the most relevant themes we saw come out of Black Hat this year:

  • AI Dominance: Position spokespeople to discuss both the promise and risks of AI in cybersecurity, using clear, non-technical language.
  • Supply Chain Risk: Share concrete examples or data on how your organization addresses third-party and supply chain vulnerabilities.
  • Quantum Security: Media are looking for thought leadership and educational content if your brand is working on quantum-resilient security solutions.
  • Cloud & Zero Trust: Highlight practical business benefits of zero trust and cloud-native security in your messaging.
  • Critical Infrastructure & IoT: Prepare proactive statements around your efforts to protect critical infrastructure and IoT.
  • Real-World Impact: Emphasize how your solutions or research address current, active threats with clear, actionable outcomes.
  • Geopolitical Context: Be ready with expert commentary connecting cybersecurity developments to broader policy and international issues.

The pace of change in security and security communications isn’t going to slow down. As the landscape evolves, so does our approach to telling the stories that matter.

Stay tuned for more insights into security communications from us in the coming months.

Article

The Answer Engine Era Is Here

August 20, 2025
By Ellie Tuck

We are living through another fundamental shift in how people discover brands. But we’ve seen this pattern before: the move from analog to web, from search to social. Each time, the brands that adapted early gained lasting advantages. Now we are seeing the rise of LLM-powered answer engines and the emergence of Generative Engine Optimization (GEO), a strategy that leverages AI to optimize a brand’s visibility and reputation in answer engine results.

The numbers tell the story: over half of Google results now include a generative response. AI agents and chatbots are increasingly becoming the first stop for people seeking recommendations, advice or information. If your audience is already there and you are not auditing how your brand shows up, you are missing a critical piece of the discovery puzzle.

How we are navigating the shift

While the fundamentals of trust and quality content remain, GEO redefines how they are executed. Analyzing tools like ChatGPT, Gemini and Perplexity shows that these models lean heavily on what is already in the public domain, especially high-trust, earned media sources.

In response, we have had to build custom tools to get under the hood of how a brand is being interpreted. These tools allow us to see where a client is showing up, how they are being described, and how that compares to others in their space.

This new landscape also demands a new level of precision from our creative campaigns. We are asking more specific questions. Is our messaging backed by the right expert validation? Is our content tailored for the types of media AI models trust? Is our phrasing distinctive enough to be picked up by both machines and people?

This is where creativity and technical precision now overlap. Our teams are building synthetic AI audiences to test ideas earlier and using our FH Fusion platform to assemble virtual focus groups that inform smarter, faster decision-making.

A practical framework for influence

Our approach is led by audience behavior. That has always been our starting point in PR, and it is no different in the world of AI.

To influence how LLMs respond, we focus on a few key levers:

  • Earned coverage in high-trust sources
  • Structured storytelling to make key messages clear
  • Cross-channel reinforcement of the right signals
  • Consistency, because LLMs rely on pattern recognition

This work is complex, and the environment is not static. But an adaptable, audience-led strategy puts us in the best position to succeed.

What this means for our industry

The implications are broad. Business leaders need to get smart about how these models make decisions, guided by real data, not guesswork. Answer engine visibility should become a core KPI, not just for communications teams, but for growth.

But reputational risk is a major factor. We are already seeing AI tools surface outdated or outright false content about brands. Because what an LLM says feels factual to users, our role shifts from defending a single source of truth to shaping the entire ecosystem that AI learns from. This is nuanced work, but it is also where we can have the most significant impact.

No one has all the answers yet. The models are evolving, the sources they trust are shifting, and the tactics that work today may not work tomorrow. But the brands that start auditing their answer engine presence now will have a significant advantage over those who wait.

The communications industry has adapted to every major shift in how people consume information. This one is no different, except for the speed at which it is happening. The question is not whether your brand will need a GEO strategy; it is how quickly you can build one that works. We’ve adapted before, and we’ll do it again.

Ellie Tuck is an SVP & Partner and Executive Creative Director based in New York.

 
Article

Why Primary Research is the Power Source for AI That Works 

August 11, 2025
By Marina Stein Lundahl

Generative AI isn’t a promise anymore. It’s here.  

In the momentum of this modern gold rush though, it’s easy to forget a critical truth: the power behind these tools is still human. The quality of generative AI outputs depends on the inputs we feed them, and that begins with the rigor of primary research.  

Since 2023, the use of generative AI by organizations has more than doubled, with 71% of companies leveraging it by 2025. One standout application? Synthetic audiences, a powerful new way for communicators to gain insight into their audiences’ attitudes, perceptions and behaviors. But just like it’s easy to get swept away by the wave of generative AI, it’s easy to think that synthetic audiences are rendering traditional primary research obsolete. Nothing could be further from the truth.  

Synthetic audiences can’t outrun the human source 

Primary research and AI aren’t in competition. They’re codependent. 

The best synthetic audiences are built on the back of great human data. On the other hand, primary research can be made more focused and agile when layered with synthetic audience outputs. Synthetic audiences can extend the life of primary research when we incorporate real-time news or cultural data, keeping the insights fresh and up to date. Understanding the complexities of this relationship enables researchers to maximize benefits of both methods.  

As the old saying goes: garbage in, garbage out.

That’s never been truer than it is today. 

The Human Edge: What AI Still Can’t Simulate 

AI’s emergence has elevated the importance of research design and data quality vigilance, as MRS chief Jane Frost highlights in her article covering the Global Data Quality Initiative. Now more than ever, poorly designed studies don’t just lead to flawed short-term insights; they embed those flaws into synthetic audiences that rely on these studies as crucial training datasets. When applied carelessly, this flawed insight can lead to misinformed decisions that create business or reputational risk. 

This new reality demands that we approach primary research with heightened rigor and foresight. The questions we ask, the participants we recruit and the methodologies we employ must all be optimized not just for their immediate results but for their value as training inputs for AI models that extend the reach of these data.

The equation is simple: better human data lead to better AI models. Human insights provide texture and nuance that synthetic models currently fail to accurately simulate. 

  • Cultural Context: AI models struggle to understand deep-rooted and implicit cultural knowledge that humans navigate effortlessly through lived experiences 
  • Emotional Nuance: The richness and range of human emotional responses remains difficult to synthesize  
  • Emerging Behaviors: Primary research captures to-the-moment changes or evolutions in human behaviors before they become widespread enough to appear in secondary sources 
  • Contradictions and Complexity: Humans often hold conflicting views simultaneously, a complexity that enriches our understanding but challenges AI models

These qualities aren’t “nice to haves.” They’re ingredients for insight that inspire action. The kind of action clients, policymakers and customers can trust.  

The ‘garbage in, garbage out’ dynamic shouldn’t be viewed as loose guidance for fine-tuning virtual audience models; there are real risks involved when primary sources are undervalued (e.g., algorithmic bias, insight homogenization and missed innovation opportunities). 

Reimagining Primary Research for the AI Age 

While critical to the relevance and credibility of AI-driven audience research, traditional primary research isn’t immune to the pressure to adapt and evolve in the age of advancing generative AI. Today’s research must be crafted with dual purposes: 

  1. Delivering precise and actionable insights
  2. Creating high-quality, scalable inputs for AI systems and synthetic audiences

This evolution means considering: 

  • Data Structure: How will this data need to be formatted to serve as effective model inputs? (See the sketch after this list.)
  • Comprehensive Capture: Are we collecting the contextual information AI needs for proper interpretation? 
  • Longitudinal Value: How will this data remain relevant as behavioral patterns evolve? 
  • Ethical Considerations: What guardrails ensure our data fuels responsible AI development? 
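
As a concrete illustration of the Data Structure consideration, here is one hedged way a single research record might be captured for dual use, preserving the verbatim for human analysts while tagging the context an AI model needs. The schema and field names are hypothetical:

```python
import json

# One primary-research record, structured for dual use: the verbatim
# preserves nuance for human analysts, while the tagged fields give an
# AI model the context it needs for proper interpretation.
record = {
    "study_id": "glp1-households-2025-q3",      # hypothetical study
    "method": "in-depth interview",
    "collected_at": "2025-08-14",
    "respondent": {
        "age_band": "35-44",
        "region": "US-Midwest",
        "segment": "primary grocery shopper, GLP-1 user",
    },
    "question": "How have your family meals changed?",
    "verbatim": ("We still do Sunday dinner, but I cook two mains now "
                 "so I'm not the odd one out at my own table."),
    "analyst_tags": ["social eating", "household ripple effects"],
    "consent": {"ai_training_use": True},        # ethical guardrail
}
print(json.dumps(record, indent=2))
```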

Final Word 

Forward-thinking organizations recognize that the competitive advantage is not in choosing between primary research and synthetic audiences, but in their purposeful integration. Investing in the quality, design and implementation of primary research is no longer optional. It’s a requirement to fuel the next generation of insights, both human and artificial.  

As we navigate this rapidly evolving landscape, we’re firmly planting the flag:  

Primary research isn’t just still relevant, it’s more important than ever and will improve synthetic audiences.  

Article

The Real Reason Your AI Rollout is Stalling

July 30, 2025
By Zack Kavanaugh

When it comes to the success of AI rollouts and adoption, there’s a notable delta between the perspectives of a company’s leaders and its employees.

About a quarter of leaders say their AI rollout has been effective. Only 11% of employees agree.

That’s not just a signal that implementation is lagging – it’s a signal that alignment is lacking. And that gap likely isn’t due to tech – at least not tech alone. More likely, it’s about trust, clarity, consistency, relevance … even identity.

You can launch the right tool with a solid rollout plan behind it. But if employees don’t understand why it matters – or where they fit in – behavior change stalls before it has the chance to take root.

Why Behavior Change Stalls: The Distance Between Intention and Action

Behavior change doesn’t just happen because a tool is available – it has to be intentionally built into the experience. Early. Clearly. With a level of resourcing and support commensurate with what the company has invested in the platform itself.

That means embedding behavior-shaping touchpoints from day one – not waiting for adoption to happen organically. What does this look like in action? Continuous feedback loops to surface and address employee hesitations, leaders modeling new behaviors in visible ways, regular moments of reflection woven into team rhythms, and dedicated roles focused on coaching and practical support – among other things.

People pull back when things feel vague. When the shift doesn’t connect to what they care about, or how they see their role. Even the best tools get overlooked if the environment around them doesn’t support the change they’re meant to create.

Where Behavior Change Breaks Down: The Subtle Signs of Resistance

But where exactly does the friction show up?

Resistance doesn’t always show up as vocal opposition – more often, it shows up in silence. A tool gets rolled out, but questions go unasked. Team conversations sidestep it. Some employees disengage or quietly revert to old habits like manually analyzing large swaths of data or generating meeting summaries and first drafts from scratch.

This refusal doesn’t manifest as loud rebellion. It’s a slow fade. And in AI transformation, that quiet drift is one of the biggest threats to sustained impact.

Left unchecked, that disengagement can erode tool ROI – dragging down productivity, creating adoption gaps across teams and limiting the career growth of those who hesitate, especially in roles where fluency with AI is quickly becoming table stakes. In short, when employees don’t buy in, the business can’t move forward at the pace it needs to.

The good news? These signals are visible – if you know where to look. Spotting and addressing them early can protect your investment, align your people and accelerate progress where it matters most.

The Three Layers of Hesitation: Enterprise, Team and Individual

For companies struggling to drive AI adoption, this is the moment to step back and start asking simple questions.

While the tools will get better and the use cases will expand, none of that guarantees impactful adoption.

Right now, most organizations don’t need a newer generation of the technology. They need better feedback loops. More storytelling and open conversation. A stronger bridge between AI strategy and lived experience.

And more honest signals from leaders – that this isn’t just about the next tool, it’s about how the work is changing, why that matters and how the organization is committed to making space for people to come along with it.

The Final Takeaway: Change Sticks When Conditions Are Right

Adoption doesn’t accelerate just because the tools get better. And AI doesn’t scale well in confusion. These things happen only when the environment is ready – when culture, clarity and context catch up to the ambition.

That’s when change starts to feel real. And when people decide it’s worth leaning in.

Article

Leading Through Complexity: What Higher Ed Communicators Are Saying

July 25, 2025

What one word best describes your day-to-day work? 

That was the icebreaker posed by FleishmanHillard’s Sarah Francomano, who hosted and moderated a candid dinner conversation among senior higher ed communications and marketing leaders. Responses like “firefighter,” “pivot” and “controlling chaos” weren’t said for dramatic effect—they reflected the current state of the higher ed landscape. The group all concurred that leading communications in higher education today is intensely complex, often chaotic and always high stakes.

The conversation was twofold, starting with discussions around what senior leaders are currently seeing in higher education. Then, the conversation moved to what’s next and how higher-ed professionals can leverage AI and other emerging tools to support them in their roles.

The Current Reality: Complexity and Constant Pressure

Communications leaders in higher education are facing unprecedented, often competing demands—with the stakes higher than ever. A single misstep can trigger consequences ranging from trustee backlash to federal scrutiny. Plus, in an environment where issues are deeply personal and highly visible, it’s often the job of the communications team not just to respond, but to cut through the noise, determine whose voices matter most in a given moment and identify which relationships need to be prioritized in order to guide the institution through crisis or change.

Participants shared their experiences managing a high volume of inquiries on a consistent basis from students, parents, alumni, donors, faculty, media and the general public on issues pertaining to their schools. One participant described a case where their team received more than 10,000 emails in response to a global crisis. After sorting through all of the messages, they found that only a small fraction came from individuals actually affiliated with the institution. It was a telling example of how the general public’s perspective does not always reflect the opinions of key stakeholders who have an impact on a university.

Others spoke about the weight of deciding when—and whether—to issue public statements. Choosing to speak up on a cultural or political moment may be the right call in one case, but it often sets expectations for the next moment. The act of staying silent can also become a message, leaving universities at risk of receiving backlash. One communications leader noted that even a simple interaction with a reporter can draw the institution into a larger story, whether they want to be part of it or not.

Enrollment also surfaced as a key pressure point. Some schools are dealing with declining numbers and budget shortfalls; others are seeing higher-than-expected demand. Several attendees commented on the long-term risks of tuition discounting—the idea that while short-term financial aid boosts can help meet yield goals, they may also chip away at perceived brand value over time. Once an institution begins competing on price, it becomes difficult to return to a different model.

The Future: How AI is Shaping Strategic Readiness

Toward the end of dinner, the conversation shifted to some of the solutions now available to address the challenges that come with working in higher education. The group was introduced to a live AI-powered crisis simulation, led by FleishmanHillard’s Alex Lyall. The FH Crisis Simulation Lab draws from real-world crisis events and FH simulation methodologies and presents users with unfolding scenarios in the form of projected stakeholder reactions. Unlike traditional simulations, which are static, this AI-powered tool is dynamic in nature, responding to the real-time decisions of participants by evolving the crisis scenario to reflect how stakeholders might respond.
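
Architecturally, a dynamic simulation like this reduces to a feedback loop: the accumulated scenario history plus the participants' latest decision go back into a model, which returns the next wave of stakeholder reactions. The sketch below illustrates that loop generically; it is not the FH Crisis Simulation Lab, and `generate_reactions` stands in for whatever model call powers the real tool.

```python
def run_simulation(initial_scenario: str, decisions, generate_reactions):
    """Evolve a crisis scenario turn by turn. `generate_reactions(history,
    decision)` is assumed to return simulated stakeholder responses
    (emails, posts, coverage) given everything that has happened so far."""
    history = [f"SCENARIO: {initial_scenario}"]
    for turn, decision in enumerate(decisions, start=1):
        history.append(f"TEAM DECISION {turn}: {decision}")
        reactions = generate_reactions("\n".join(history), decision)
        history.append(f"STAKEHOLDER REACTIONS {turn}: {reactions}")
        print(history[-1])
    return history

# Hypothetical usage: each participant choice reshapes what comes next.
# log = run_simulation(
#     "Student protest blocks the main campus quad before finals.",
#     ["Set a deadline for demonstrators to disband",
#      "Issue an apology and open a dialogue session"],
#     generate_reactions,  # e.g., an LLM wrapper
# )
```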

When the demo immersed participants in a campus protest scenario, the group decided to put the tool through its paces and selected the most aggressive response, forcing demonstrators to disband by a set deadline. The result generated backlash, escalation and reputational fallout in the form of emails, social media posts and media coverage, mirroring how a crisis team would experience these types of situations.

Participants were quick to note how well the tool captured the complexity and pace of an actual crisis. The AI agent mapped out the often-conflicting reactions across stakeholder groups—students, faculty, alumni, media, donors—and showed how quickly one decision can lead to a cascade of consequences. Later in the simulation, when the team chose how to correct course, the tool was prompted to generate internal and external holding statements that offered strong, usable drafts that could be easily customized to fit the voice of an institution.

Participants saw clear potential for the AI agent as both a training and planning resource—especially in conversations with boards or leadership teams. It provided a structured, precedent-informed way to explore how crisis scenarios might unfold, helping teams evaluate why one communications path might be more effective than another.

Alex shared that while this particular demo was generic, the FH Crisis Simulation Lab can be tailored to reflect each school’s culture, governance structure and audience. Even those in the room who were skeptical about AI said they could see its value in this kind of application—not to replace human instincts, but to sharpen and support them.

Going Forward: Navigating Reputational Complexities

The evening was a chance to connect with peers, swap stories and explore fresh ideas about what the future of higher ed looks like. It was an invigorating conversation that left many in the room feeling energized and inspired.

Higher ed communications may be complex, sometimes chaotic and full of tough calls—but it doesn’t have to be faced alone.