
Article

Rebecca Weinstein and Jonathan Arias Win the 2026 U.S. Young Lions Digital Competition

April 23, 2026

FleishmanHillard’s Rebecca Weinstein and Jonathan Arias have won the Digital category of the 2026 U.S. Young Lions competition, taking home top honors for their concept “Tiny Tiny Desk Concerts.”

Their concept centers on a partnership with NPR’s iconic “Tiny Desk Concerts” series that would let student musicians perform and record at their own desks. Every single recorded and sold would fund music education through the Save The Music Foundation, supporting the nonprofit’s work across more than 285 school districts nationwide.

Weinstein and Arias will represent TEAM USA at the Cannes Lions International Festival of Creativity, competing against the world’s top young creatives from June 22-26. FleishmanHillard served as the Digital category sponsor for this year’s competition, which also showcased strong talent from across the Omnicom Public Relations network. Weber Shandwick’s June Hernandez and Valiant Freeman won the PR category with a concept rooted in the power of silence, a partnership with the New York Philharmonic that highlights the real consequences of music education cuts through social and experiential activations.

“The 2026 TEAM USA winners reflect exactly why this competition matters: it gives the next generation of creative talent the opportunity to showcase their sharp, strategic thinking while advancing an important cause,” said Mike Rosen, Chief Revenue Officer at NCM. “Each winning team delivered fresh ideas that will help Save The Music reach new audiences and expand its impact. We’re excited to see them represent the U.S. on the global stage in Cannes.”

All five winning teams across Digital, Film, PR, Print, and Media categories will compete on the global stage in Cannes.

Article

A Corporate Communications Evolution: Strategies for the Agentic Age

April 22, 2026
By Matt Rose

Corporate Communications has long operated on a stable premise: organizations craft messages, distribute them through controlled and earned channels, and monitor how those messages are received. While tools and platforms have evolved, the underlying model has remained largely intact. At its core, the function exists to sustain visibility, build trust, and protect and enhance reputation among key stakeholders in ways that support business performance and long-term value.

Artificial intelligence challenges that model at a structural level.

The most significant shift is not faster content production or the automation of routine tasks. It is the growing role of AI as an intermediary in how information is consumed, interpreted, and acted upon. Where algorithms once filtered what audiences saw, AI now reshapes it. Organizations are no longer communicating directly with stakeholders; they are communicating through systems that filter, summarize, and reframe information before it ever reaches human audiences.

This shift extends well beyond efficiency. Historically, Corporate Communications assumed that messages, while filtered by journalists, analysts, and platforms, would remain largely intact if those filters were well understood. AI changes that dynamic. Information is no longer simply filtered; it is deconstructed and recombined with other sources to produce new outputs such as summaries, recommendations, and comparisons. Organizations are therefore not communicating discrete messages but contributing inputs into systems that determine how those messages are ultimately presented and understood. The implication is a shift from controlling the message to structuring both message and context, so that they are interpreted accurately by AI systems.

The Changing Nature of Information Consumption

Across stakeholder groups, this dynamic is already taking hold. Investors use machine-assisted tools to analyze earnings calls and identify inconsistencies. Journalists rely on AI to accelerate research and draft initial narratives. Policymakers and regulators are beginning to incorporate AI-generated summaries into their workflows. Customers and patients are turning to AI as a primary source of information and interpretation. In each case, information is no longer encountered in its original form. It is mediated.

This introduces a new layer of risk and opportunity. Errors, inconsistencies, or ambiguities can be amplified quickly. At the same time, well-structured, consistent information can be propagated more effectively than ever before. As a result, narrative control is shifting upstream, from the point of publication to the point of interpretation.

In this environment, the traditional focus on outputs is no longer sufficient. Press releases, speeches, and media engagement remain important, but they are only part of the picture. What matters is not just whether a message is distributed, but whether it is understood as intended across a range of human and machine interpreters. This requires a shift from outputs to systems.

From Outputs to Systems

An effective communications function must be capable of continuously ingesting external signals, interpreting their significance, generating aligned messaging, assessing potential risks, and executing responses in a coordinated manner. These activities must be integrated rather than siloed and must operate at a speed that reflects the pace of the external environment.

Many organizations are experimenting with discrete AI applications, such as automated content generation or enhanced media monitoring. While these efforts can deliver incremental value, they do not address the underlying structural challenge. Without integration, they risk creating a patchwork of capabilities that improves efficiency in isolated areas but does not fundamentally improve how the organization is understood or how effectively communications supports business outcomes.

The Emergence of Agentic Architectures

What is beginning to emerge instead is a more integrated, system-based model. Distinct AI capabilities perform specific roles within the communications lifecycle. Some systems monitor external signals, drawing on media, social, policy, and market data. Others synthesize this information into a structured understanding of emerging narratives and stakeholder sentiment. Additional capabilities generate content, assess potential risks, or support execution.

These elements are increasingly connected through an orchestration layer that ensures coordination across activities. The result is not a collection of tools, but a system that can sense, interpret, and respond in a continuous loop.

Importantly, this shift does not eliminate the role of human practitioners. Rather, it redefines it. As routine tasks are automated, the relative importance of judgment, context, and strategic decision-making increases. Communications leaders must not only craft messages but also oversee how systems generate and deploy those messages at scale. While execution becomes more system-driven, accountability does not shift. Leaders remain responsible for the accuracy of content, the outcomes it produces, and the trust and credibility the organization maintains with its stakeholders.

Implications for Organizational Design

This evolution has implications for organizational design. Many communications functions remain structured in silos, separating media relations, social and digital, executive communications, and reputation management. While this structure provides clarity, it can lead to fragmentation in execution. Inconsistencies across channels become more visible, and the ability to respond quickly to emerging issues is constrained.

An AI-enabled model places greater emphasis on integration. Shared data layers, common intelligence frameworks, and coordinated workflows become central. The goal is not to eliminate functional expertise, but to ensure that it operates within a unified system rather than in parallel tracks. In practice, this can result in a more centralized model supported by shared capabilities.

Rethinking Measurement

Measurement must evolve as well. Traditional indicators such as volume of coverage, impressions, or engagement rates capture activity, but not whether stakeholders are interpreting the organization’s actions and positions as intended. Advances in data availability now make it possible to assess who is reached, whether priority audiences are engaged, and how messages are interpreted. Metrics such as relevant audience reach, message resonance, and narrative alignment provide a more accurate view of effectiveness in shaping stakeholder perception and supporting business outcomes.

These approaches are more complex and often more resource-intensive, but they reflect how communication actually works in an AI-mediated environment. The central question is no longer how far a message travels, but how accurately it is understood and by whom.

Implementation Considerations

Despite the sophistication of the end state, implementation does not require a comprehensive transformation from the outset. Organizations that are making progress typically begin with focused applications that address clear needs, such as executive briefing tools that synthesize external signals or systems that accelerate the drafting of media responses while maintaining consistency with approved messaging.

Efforts to modernize Corporate Communications have often been constrained by cost concerns and the perception that its impact on business outcomes is indirect. In this case, those barriers are lower. Most large organizations already have access to advanced AI capabilities through enterprise technology investments. The incremental cost of applying them within communications is relatively modest. The greater challenge lies in rethinking how the function operates and how value is defined.

The Risk of Inaction

The risk of inaction is not that organizations move too slowly internally. It is that their stakeholders move more quickly externally. As AI becomes embedded in how information is consumed and decisions are made, narratives are increasingly shaped by systems outside the organization’s control. Inconsistencies are surfaced more quickly, and misinterpretations can scale rapidly.

Addressing this risk requires more than faster response times. It requires ensuring that the organization’s information is structured, consistent, and accessible in ways that support accurate interpretation.

Conclusion

Artificial intelligence is not simply enhancing Corporate Communications. It is changing the conditions under which communication takes place. Organizations that move toward integrated, system-based approaches will be better positioned to maintain control over how they are understood, sustain trust with stakeholders, and support long-term business performance and value. Those that do not may find that control increasingly resides elsewhere.

In a world where perception is shaped as much by machines as by people, the ability to manage how information is interpreted becomes a core strategic capability.

Matt Rose is the Americas Lead for Crisis, Issues & Risk Management. An SVP & Senior Partner in New York, he brings more than 30 years’ experience advising organizations on crisis and issues management, risk mitigation, and reputation recovery. He has guided companies through reputational crises, labor issues, regulatory challenges, ESG controversies, and high-profile litigation.

 
Article

Why Trust is the Real Competitive Advantage in Ag Tech  

April 2, 2026
By Vanessa Sapino, Kristin Hollins and Shelly Kessen

At this year’s World Agri-Tech Summit in San Francisco, several key insights cut through all the AI, robotics and data ecosystem discussions, with one clear standout: Trust and relationships form the foundation of successful technology adoption and meaningful connections in agriculture.

Our takeaway from the conversations: the future of successful ag tech isn’t built in boardrooms. It starts at the farm level, with credible voices, practical solutions and farmers who see themselves as partners in innovation. It’s shaped by people who not only deliver technology but also understand the market, the mission and the opportunity for real change.

As a proud co-sponsor with Western Growers within the California Delegation of the Ag Tech Alliance, FleishmanHillard was on the ground hearing directly from farmers, food leaders, agribusinesses and tech innovators, along with global policy, industry and academic leaders about what’s working and what’s not.

What we heard repeatedly was striking. While innovation and disruption drive today’s ag tech conversation, farmers still rely most heavily on word of mouth, recommendations from trusted advisors, and partnerships built over years. From the tech company perspective, the conversation centered on differentiation and how to stand out in a crowded market while competing for limited investment and customer attention.

This creates a fundamental challenge: While ag tech companies seek to differentiate in an oversaturated market, farmers seek clarity amid piecemeal options. As one farmer panel pointed out, there is no “Good Housekeeping seal of approval” for ag tech. Farmers face a bewildering array of options, each claiming to solve different pieces of the puzzle. The result? Adoption stalls.

Farmers need holistic solutions that work immediately, reliably, practically, and profitably. That demand for certainty is where trust becomes currency. Without a credible source vouching for a solution, many farmers find themselves in analysis paralysis. But trust shortens the decision cycle. When a farmer trusts a source, they can move faster.

Relationships Are Infrastructure

In agriculture, relationships aren’t soft. They’re structural. A trusted agronomist, equipment dealer or financial advisory team becomes part of operational infrastructure because that person understands the farm’s specific challenges, geographic weather patterns, soil conditions, financial constraints and business goals.

New technology that arrives without relationship context is just noise. Conversely, technology that arrives with a trusted recommendation becomes an asset.

To keep that infrastructure intact, farmers, food companies, agribusinesses and investors across every panel kept emphasizing the same characteristics for technology that actually gets adopted: practical, reliable, user-friendly, easy to operate, easy to service and delivering immediate ROI. The key takeaway: functional innovation earns credibility.

The Communications Parallel: Moving Forward

The same principle that governs farmer tech adoption also governs communications strategy. Just as farmers need advisors and relationships from day one, organizations across every industry — from scrappy startups to established enterprises — need a trusted communications partner embedded in their growth journey from the beginning to help craft their narrative.

When an organization partners with a communications advisor from day one rather than after launch or when they need crisis response, something powerful can happen. As the ag tech ecosystem faces a challenging commercialization gap, the answer isn’t just deeper partnership with farmers. It’s recognizing that breakthrough ideas only scale when translated into stories that farmers, investors and the entire market can understand, believe in and ultimately adopt. That translation work happens early, or it doesn’t happen at all.

That’s how you build understanding and credibility. That’s how you scale. And in agriculture — as in communications — trust is everything.

Article

Notes From the Road: RSA Conference 2026 Edition

April 1, 2026
By Scott Radcliffe

While at this year’s RSA Conference, I overheard a very senior security executive at a well-known security company remark that he “came to RSA expecting a security conference and instead seemed to arrive at an AI conference.” Like many things said in jest, there was more than a little truth buried inside.

Walking through the exhibitor halls, you’re immediately struck by the inclusion of AI in nearly every offering on display—from threat detection to incident response to risk management. It seemed every vendor had either retrofitted their solution with AI or built one from scratch.

It would be easy to dismiss it all as hype, another technology cycle where marketing teams latch onto a buzzword without a lot of substance to offer under the surface. Surely at least a little is snake oil, but to dismiss everything as vaporware would be to miss the dramatic and evolutionary step AI represents for the cybersecurity space.

In the short twelve months since last year’s RSA conference, we’ve witnessed countless AI experiments, implementations and innovations, and even the most experienced security minds in the world are grappling with uncertainty about what’s coming next.

The Great Shift: From “Humans in the Loop” to Autonomous Operations

At last year’s conference, most discussions around AI in security were grounded, at some level, in keeping “humans in the loop” of the decision-making and execution process. AI could augment, assist and accelerate actions taken by human admins and users, but the final call had to rest with a human who understood context, nuance and consequences.

That narrative has fundamentally shifted in a single year. As Wall Street Journal reporter James Rundell pointed out from his first impression of this year’s conference, the industry has undergone a philosophical change over the course of the last year. Security teams are no longer asking whether AI should act independently—they’re asking how to best, and hopefully safely, architect systems where AI must act independently and, quite often, in real-time.

This isn’t a subtle distinction. It represents a wholesale reimagining of how we defend our networks and systems. The efficiency gains of this headlong leap into AI are real, but so are the risks, and that tension is what keeps many security leaders up at night.

Identity as the New Perimeter

If autonomous AI is the emerging challenge, then identity has become an even more critical battleground. Anyone who’s paid attention to the security space recently is familiar with the popularity and continued growth of identity-based attacks that use known, often re-used credentials like usernames, email addresses, and passwords to gain access to systems. With AI systems now being granted expanding autonomy and access to sensitive data, the question of who (or, more accurately, what) should be able to access particular systems, networks, or information has taken on even greater urgency.

Early implementations of AI agents have already demonstrated the dangers of unchecked permissions. Give these systems too much access or too broad an ability to act, and they can quickly spiral into trouble. A key message echoed through many of the talks at RSA this year: guardrails aren’t optional; they’re foundational. As organizations deploy AI more widely, the ability to establish firm, granular controls around identity and access will be absolutely critical. In a world of autonomous intelligent agents, identity becomes the ultimate arbiter of what’s possible.

AI’s Dual-Use Dilemma for Security: Offensive Operators Will Have a Huge Head Start

Perhaps the most sobering insight I took away from RSA this year is how far behind defenders will be, and for how long, in the AI race. AI represents an immediate force multiplier for attackers, and it will take a significant amount of time for defenders to catch up. Kevin Mandia, a veteran cybersecurity executive with decades of experience founding some of the industry’s most iconic companies, put hard specifics to this sentiment. In his view, AI will provide a clear advantage to offensive operations for the next two years before the defense can accumulate enough data and operational experience to train systems that keep pace.

The advantage goes beyond speed, though that’s certainly part of it. AI enables attackers to operate with precision and personalization previously unattainable at scale. Rather than deploying generic attack tactics across broad targets, AI allows threat actors to generate bespoke attack plans tailored to individual organizations—understanding their specific vulnerabilities, mimicking their communication patterns, and timing operations to maximize success. For defenders, holding the line while playing catch-up will be a daunting but necessary challenge.

The Sovereignty Conversation: A Quiet but Consequential Shift

Away from the AI spotlight, Microsoft’s CISO for AI and Technology Data, Igor Tsyganskiy, used a fireside chat to bring up a fascinating nuance in the data sovereignty trend many cloud providers are facing. As organizations continue to adopt cloud architectures, where data lives—physically and jurisdictionally—has moved from a compliance checkbox to a strategic security consideration.

Different regions, regulatory frameworks and threat landscapes all create scenarios where the location and control of data become material to security architecture. This trend will likely only intensify as companies navigate an increasingly fragmented geopolitical environment. Data sovereignty has been a growing trend for months now. The interesting point Tsyganskiy raised at the conference last week, however, was the urgent need for organizations to build operational contingencies into their plans to satisfy data sovereignty requirements. A recent airstrike that destroyed Amazon’s data center in Bahrain underscores the point: it doesn’t take a missile to disrupt operations, so organizations should be prepared; the answer may not be as easy as flipping a switch to another data center in a desired location.

For security and communications leaders, this means the conversation with the business can’t remain purely technical. It has to account for regulatory, geopolitical and strategic business considerations.

The Fundamentals Still Matter (Maybe More Than Ever)

Rob Joyce, the former director of cybersecurity at the NSA, emphasized a reality that can sometimes get lost amid the AI hype: the fundamentals of cybersecurity remain a powerful and largely effective defense. His point is worth emphasizing, especially at a conference filled with vendors pitching the latest solutions the security industry has to offer.

Attackers, Joyce argued, continue to disproportionately target organizations that don’t execute the basics well. Though those attacks will only grow as bad actors begin to use AI as a force multiplier, organizations that prepare by adhering closely to good security fundamentals will be in a much better position to weather the coming storm. This means companies that lag in patching systems, haven’t broadly deployed multi-factor authentication, maintain inadequate logging practices, or generally fail to stay prepared are putting their systems at much greater risk.

I would argue the same applies to communications and marketing teams. Ensuring you’re prepared, properly integrated with the rest of the organization and generally ready to help your organization stay ahead of a threat environment evolving at exponential speed is more important than ever. Furthermore, I’d add that the time has come for marketing and communications teams to do their part and partner with technical teams to ensure the security conversation organizations have with their boards and business leaders isn’t dominated by buzzwords but is instead grounded in ensuring the foundational elements of security are strong enough to build upon.

It’s certainly easy to walk away from RSA 2026 with a sense of dread. But to stop there would be to miss the deeper message embedded throughout the conference.

Yes, AI represents a significant challenge. Yes, attackers have a near-term advantage. Yes, data sovereignty is becoming a more complex puzzle to solve. But it’s a challenge I think we’re all up for if we prepare.

Scott Radcliffe is FleishmanHillard’s global director of cybersecurity, leading the firm’s Cybersecurity Center of Excellence and advising clients on rising cyber risks. He recently rejoined FH from Apple, where he led cybersecurity communications and previously served as the agency’s senior global data privacy and security expert.

Article

Why Your AI Rollout Is Stalling (And What Actually Moves the Needle)

March 25, 2026
By Zack Kavanaugh

Most organizations are investing heavily in AI but seeing minimal return. The tools are rolling out. The impact isn’t landing. This article examines why adoption is stalling, what employees are really feeling and why a new model for change is essential to close the gap between investment and outcomes. 

There’s a paradox unfolding in organizations right now – and it’s quietly derailing AI initiatives at scale.

Companies are pouring millions – in some cases, billions – into AI infrastructure. Platforms are deploying. Training programs are launching. And yet, most organizations report that their AI efforts aren’t delivering the results they expected.  

In fact, 95% of generative AI pilots fail to reach measurable business impact. Only 1% of organizations consider their deployments truly mature. And across the workforce, a third of employees are actively considering leaving over unclear AI expectations and lack of support. 

The investment is real. The adoption and impact are missing – and the disconnect is striking.

So, what gives? 

What we’ve learned supporting organizations through AI transformation is this: they’re treating it like a technology problem when it’s actually a people problem. And until we acknowledge that difference, adoption – and business impact along with it – will continue to stall. 

The Emotions Nobody’s Talking About 

Walk into most organizations right now, and the conversation sounds logical. “Here’s the business case. Here’s the ROI. Here’s the productivity uplift.” But underneath that rational overlay is something messier – and infinitely more powerful: how people actually feel.

The data points here are endless – and we could marshal dozens more to prove that adoption is stalling. But honestly? They don’t really matter. What does matter is whether you feel your organization is progressing at the rate you know it’s capable of.  

If not, or if you don’t know where to start to answer that question, it may be time to look closely at your adoption strategy.  

The AI Readiness Gap 

This is the mistake we see companies continuing to make: assuming a strong business case is enough to win people over. We’re treating AI adoption like a switch you flip, when it’s actually a continuous, messy, non-linear process that requires people to move through change at different speeds. 

Most organizations are still leaning on traditional change models – the kind that default to logic and expect a single launch moment to do the heavy lifting.  

But AI transformation isn’t a single moment. It’s not a product launch. It’s a fundamental shift in how people think about their work, what they value in their roles and whether they trust the organization to shepherd them through it. 

That gap – between what leaders expect and what employees experience – is the real barrier to adoption. 

A Different Path Forward 

What’s needed is a model designed for how people actually change. Not how we think they should change. How they actually do. 

The good news: that change management model exists. It features three phases, each addressing a different layer of the employee experience, and all three matter equally:

Phase 1: Normalization – The Emotional Layer 

Normalization is about shifting mindsets – listening, building psychological safety and trust, de-weirding tools and making AI part of everyday conversation. Before anyone can adopt anything, they need to feel safe, seen and supported. This means listening before launching.  

It also means leaders modeling vulnerability, not just expertise. And it means identifying trusted voices – both champions and skeptics – and giving them visibility in shaping the journey. When you remove the mystique around AI and make it visible in how people actually talk and work, adoption becomes possible. Listening earns you permission to lead.

Phase 2: Experimentation – The Personal Layer 

Experimentation is about shaping habits – encouraging participation and creating low-risk opportunities to try, learn, fail safely and reflect. Once people feel safe, they’re ready to connect AI to their own work and identity.  

This is where curiosity replaces skepticism. You can help replace skepticism with curiosity when you share stories from peers – not polished case studies, but real moments where someone figured something out or tried something that didn’t work. When people see themselves in the adoption story, they move from “this doesn’t apply to me” to “I see where this helps.” Experiments become personal. Habits begin to form. Failure becomes data, not judgment. 

Phase 3: Integration – The Operational Layer 

Integration is about scaling impact – building and validating use cases, measuring value, embedding AI into workflows and scaling solutions. When adoption becomes embedded in how work actually happens, impact becomes measurable and repeatable.  

Proven experiments turn into templates and workflows. Success stories become standard operating procedures. Recognition systems reward AI fluency. And AI stops feeling like the new initiative and starts feeling like “just how we work.”

The Continuum, Not the Launch 

The shift here is fundamental. Instead of treating adoption as a destination, we’re treating it as a progression.  

Instead of betting everything on a single launch moment, latest tool or new corporate mandate, we’re developing constant feedback loops. Instead of assuming readiness, we’re building it – intentionally, measurably and with employees at the center.  

Whether you’re leading an organization, a department or a team, your people will never move cleanly through one phase alone. They will move at their own pace. Some people will be experimenting while others are just beginning to normalize. And as new information emerges, they will oscillate back and forth – revisiting earlier phases to deepen their foundation before moving forward again. 

The bottom line: AI adoption accelerates only when the environment is ready – when culture, clarity and context catch up to ambition. That’s when change starts to feel real. And when people decide it’s worth leaning in.  

More to come on all this. Stay tuned.  

Article

Get the Report: Inside China’s 2026 Two Sessions

March 24, 2026

China just locked in its economic roadmap for the next four years with a 4.5–5% growth target. Here’s what matters: The 2026 Two Sessions formally endorse a pivot toward innovation-driven growth, economic resilience and calibrated openness that reshapes how global companies operate, partner and communicate across markets.

Our latest analysis cuts through the noise to explain what actually matters for your organization in 2026. Based on observation and conversations with leaders across sectors and regions, it examines the strategic context, the trade-offs China is managing and what corporate communications professionals need to know to navigate influence and opportunity in this environment.

Article

The New Culture Gap Report: How Brands Stay Relevant for 100 Years

Lifespans are expanding. By 2050, the population aged 100 and older will reach 3.7 million. For brands, this changes everything about loyalty.

Your customer at 25 won’t be the same person at 75. The brands that win aren’t chasing quarterly engagement metrics. They’re building for something longer and deeper: relevance across a century-long life. Meanwhile, 73 percent of consumers feel the world is more unstable than ever, driving what we call Existential Consumerism. They’re optimizing their bodies, securing their futures, protecting their identities. The paradox: the systems designed to deliver control are quietly eroding it.

Our latest Culture Gap Report, commissioned by FleishmanHillard UK, The 100-Year Life Brand Opportunity, explores how brands can stay meaningful across decades of profound change. We reveal the consumer shifts redefining loyalty and the strategic moves that separate brands built to last from those built to trend. Get the top findings here or dive into the full report below:

Article

FleishmanHillard Named Among PRovoke Media’s Best Public Relations Agencies in the World

March 10, 2026

FleishmanHillard has been recognized by PRovoke Media as one of the Best Public Relations Agencies in the World, earning recognition as a top agency in Consumer, Technology, Healthcare, Public Affairs and Corporate public relations.

The recognition comes from PRovoke Media’s comprehensive 12-month analysis of the global PR industry, which it describes as “the most thorough assessment of the public relations agency landscape.”

Agencies were evaluated based on financial performance, quality of creative work, culture and employer brand, innovative products and services and contributions to industry thought leadership.

The recognition reflects FleishmanHillard’s position as a global communications consultancy redefining modern communications by integrating AI, data and earned-first creativity as standard tools across its teams, while ensuring counselors understand the data well enough to design solutions with clients rather than simply deploying them.

Article

The Tech Industry’s License to Lead Problem: How Tech Companies Made Themselves Vulnerable to the AI and SaaS Apocalypse Doubt

March 4, 2026
By Michelle Mulkey

Last week, a short-seller’s Substack moved markets. AI companies were drawing red lines with the U.S. government while also backing away from brand promises. And then rethinking after stakeholder backlash. Earnings reports reinforced an ever-widening gap between the strength of their outlook and the stock price. What is going on?

Tech companies have masterfully sold AI capabilities to their customer base. What they haven’t done is bring their other stakeholders—investors, employees, policymakers and the broader public—along on a coherent story about what it all means or why it matters to them.

The B2B Trap

For too long, the B2B technology industry has been plagued by a self-inflicted wound: the product-as-brand trap, an approach that fundamentally misunderstands the complexity of the modern B2B buying cycle. Driven by engineering-led cultures and the relentless pressure of quarterly product cycles, tech companies have overwhelmingly prioritized the “what” over the “why.” And, as a result, they built their entire communications infrastructure around a single audience: enterprise buyers. When AI emerged as a transformative technology with profound implications for jobs, the economy, regulation and society, tech companies simply applied the same playbook: speaking technical hype to those with purchase authority.

This worked when the only stakeholder that mattered was the customer signing the contract. But it no longer does. Today, a B2B product message isn’t the same thing as a corporate narrative that builds belief and drives competitive differentiation in the minds of investors, talent, regulators and society. While tech companies optimized sales messaging, they surrendered the authority to shape how stakeholders understand the broader implications of their innovations. Investors, employees and the public have filled that void with their own narratives. Most of them anxious.

The License to Lead Data

This vulnerability stems from abandoning what our License to Lead research, first released in January, reveals about how to create stakeholder confidence beyond the tech buyer. The data is unambiguous: stakeholders don’t extend confidence based on technical prowess alone. They extend it when companies demonstrate ethical behavior (24%), clear communication (21%), integrity (76%), and accountability (74%). Tech companies have leaned entirely into capability claims while neglecting the foundational work of stakeholder engagement and transparency.

Worse, they’ve created a credibility liability. When employees worry about job displacement and hear only technical defensiveness, confidence erodes. When investors question AI’s ROI or how SaaS fits into an AI future and get more hype and hyperbole, belief wanes. When society hears about AI’s economic impact and starts to experience its energy impact, skepticism hardens into doubt and resistance, which is exactly what we’re seeing in current market valuations.

The Path Forward

The good news: It’s not too late. The companies that shift from product-centric hype to authentic corporate storytelling – storytelling that owns the “why,” engages honestly about implications and drives to clear takeaways about differentiation and impact – will be the ones that regain and retain stakeholder confidence, investor trust and, ultimately, their License to Lead in this critical moment.

The research on License to Lead presents an urgent corrective that demands a fundamental shift for the tech industry. Communications leaders have to reposition themselves as the builders of stakeholder confidence and the architects of strategic clarity.

Article

The Five Principles of Decision-Ready Intelligence: A Framework for Making Hard Calls in an AI-driven Environment

March 3, 2026

Powered by TRUE Global Intelligence

Organizations are generating more data than ever, and AI tools are now being woven into nearly every corner of decision-making. But the speed and volume of these new systems have created a new risk for leaders: intelligence that looks authoritative at first blush but falls apart under scrutiny.

When confidence is eroded, it doesn’t merely lead to bad decisions; it undermines leaders’ ability to act at all. As our recent License to Lead research shows, when stakeholders lose confidence in how decisions are made, leaders lose the permission to adapt and execute when strategies shift.

The gap between what technology can do and what leaders actually need has never been wider. We take a clear-eyed look at why that gap is widening, and how leaders can close it with decision-ready intelligence. At the center are five principles that set the standard for intelligence that is grounded in reality, driven by context, strengthened by human expertise and resilient under pressure.

The Challenge

Across boardrooms, a new tension is emerging: leaders are being asked to make faster, higher-stakes decisions with intelligence systems that haven’t kept pace with the speed or complexity of the market.

AI has changed the workflow, but not always for the better. It produces more information, more quickly, and with more confidence, even when the underlying signals are fragmented, distorted, or outright manufactured.

Executives are finding themselves in meetings where numbers look precise but fall apart under basic scrutiny. Social listening feeds inflate trends driven by bots. Tools and algorithms give weight to the loudest voices instead of the most relevant ones. AI-generated analyses confidently misread sarcasm, context, or policy detail. And teams don’t realize the flaws until the decision is made.

The hype is fueling the problem. Many teams now treat AI outputs as inherently superior to human interpretation, even when the model draws from noisy data or fills gaps with unsubstantiated guesses. As a result, leaders are making strategic decisions based on insights that feel authoritative but aren’t anchored in anything verifiable.

Why This Matters

In a moment when nearly every information stream is compromised by platform shifts, algorithmic changes, and generative noise, some of the most consequential choices inside organizations today are informed by dashboards and summaries that no one has fully interrogated.

Many organizations are starting to feel the consequences: strategy built on thin intelligence, misreads of sentiment leading to audience disconnects, delayed course corrections, and a growing sense that the tools meant to make decisions easier are, in reality, making them riskier.

As our License to Lead research shows, credibility is the gating factor for action. Ninety-two percent of engaged consumers say companies with strong reputations have greater permission to undertake major business transformations.

External benchmarks also show the stakes are real. Gitnux reports that poor underlying intelligence tied to bad data costs companies an average of $12.9M a year. Eighty-eight percent of companies report a direct impact on their bottom line due to poor data, which erodes 15-25% of revenue. An estimated 40% of AI projects fail to deliver ROI due to poor data quality.

What’s missing is clarity, and the discipline to separate what is real from what merely appears to be. That gap is driving the need for decision-ready intelligence: insight that is accurate, contextual, and defensible under pressure.

The Five Principles of Decision-Ready Intelligence

TRUE Global Intelligence, FleishmanHillard’s intelligence consultancy, developed the Principles of Decision-Ready Intelligence to close that gap. These principles define the standards required to generate insight leaders can trust in an environment where speed, hype, and noise increasingly shape the inputs behind major strategic decisions.

1. Quality & Organization

Inputs must be right before outputs can be trusted, and there are two core tenets.

First, data must be accurate, verified, enriched, and reviewable. That means clear processes for validation and traceability so leaders know exactly where inputs came from and whether they meet the standard for decision-making. This also includes understanding how different file formats, structures, and metadata are interpreted by AI models so inputs aren’t distorted before analysis even begins.

Second, a wide net is not a wise net. Leaders need relevance, so part of our job is to guide clients toward the sources that reflect meaningful public or stakeholder signals and away from the noise masquerading as insight.

If this first foundation isn’t sound, nothing built on top of it is reliable.

2. Context & Focus

Decision-ready intelligence starts with alignment: What strategic, business, or communications question are we trying to answer?

When this question is clear, analysis becomes sharper. It prioritizes the variables that matter and starts relying on smaller, high-quality datasets. It favors focused methods that reveal why something is happening, not just what happened.

Too much analysis is disconnected from the decision it is meant to inform. Dashboards bloat, metrics add up, and models optimize for volume rather than clarity. The result is intelligence that reports activity without explaining meaning.

Insight is only useful when it answers the question at hand.

3. Guardrails and Expertise

AI accelerates the work. It does not replace judgment.

There is a misconception that automation reduces the need for experienced oversight. In reality, it magnifies the consequences of getting something wrong.

Decision-ready intelligence relies on experts who understand the limits of the data, the behavior of platforms, the context behind anomalies, and the boundaries of what any model can reasonably reveal. They bring the pattern recognition AI lacks, set guardrails, validate assumptions, and challenge outputs. Most importantly, they recognize when something can’t be answered.

This discipline ensures that speed never outruns accuracy or context.

4. Curiosity & Critical Thinking

AI delivers answers with certainty, even when the signals behind them are unstable. The risk is not just the error itself; it’s the false confidence attached to the error.

That’s why curiosity and critical thinking play an integral role in this framework. Curiosity triggers are the moments when something in the data doesn’t add up: a spike that doesn’t match the environment, a contradiction across sources, or a pattern that defies logic.

Through critical thinking, we can trace these anomalies back to their source, understand whether the data reflect a real-world signal or a model artifact, and, if something shouldn’t exist, adjust the process so it doesn’t reappear.

This human layer of understanding ensures conclusions can stand up to internal review, external challenges, and the decision itself.

5. Shared Literacy & Accountability

The ultimate purpose of intelligence is action, and most actions at the leadership level are strategic decisions: how to position, when to engage, what to say, where to invest, what risk to take.

That’s why shared literacy and accountability are part of the intelligence discipline as stakeholders work together to give the analysis strategic direction.

This principle connects directly back to Context & Focus. When the intelligence work is built around a specific strategic question, we must answer that question head-on.

It also creates shared understanding across teams. Without that shared literacy, strategy splinters. Not because the intelligence was wrong, but because it wasn’t communicated in a way that aligned the people responsible for acting on it.

This is the standard moving forward.

The pressure on leaders isn’t going to ease. AI will continue to accelerate workflows, expand access to data, and reshape how information moves across organizations. Without standards, speed simply amplifies whatever is already there, good or bad.

The organizations that will navigate this moment effectively are the ones building the discipline to question what their tools produce, align around shared interpretation, and hold the work to a standard that reflects the stakes.

The Principles of Decision-Ready Intelligence provide the structure to meet that responsibility. They help teams narrow the signal, apply context, challenge assumptions, and ensure intelligence is something leaders can act on. And as the information environment becomes more complex, that discipline becomes the differentiator.

Decision-ready intelligence is one of the few levers leaders fully control to strengthen their License to Lead, building confidence before decisions are tested rather than trying to recover it afterward.

Decision-ready intelligence isn’t optional. AI can support strategic judgment, but it cannot take responsibility for it. We remain accountable for the decisions we make.

That accountability extends to the partners we choose. Communications leaders should expect and demand more rigor from the tools, vendors, and agencies they engage. At a minimum, they should ask:

  • Where does the data come from?
  • How is it validated?
  • Who is interpreting the data, and how?
  • What guardrails are in place when the model gets it wrong?
  • What standard does this intelligence have to meet before it reaches a decision-maker?

If a partner can’t answer those questions clearly, they’re not providing intelligence; they’re providing risk. You should demand insight that is real, relevant, and ready for decisions that carry real consequences.

Lead Authors: Ben Levine, Ines Schumacher, Eric Rydell