Article

Why Trust is the Real Competitive Advantage in Ag Tech  

April 2, 2026
By Vanessa Sapino, Kristin Hollins and Shelly Kessen

At this year’s World Agri-Tech Summit in San Francisco, one insight cut through all the AI, robotics and data ecosystem discussions as a clear standout: Trust and relationships form the foundation of successful technology adoption and meaningful connections in agriculture.

We took away from the conversations that the future of successful ag tech isn’t built in boardrooms. It starts at the farm level, with credible voices, practical solutions and farmers who see themselves as partners in innovation. It’s shaped by people who deliver not only technology, but who understand the market, the mission and the opportunity for real change.

As a proud co-sponsor with Western Growers within the California Delegation of the Ag Tech Alliance, FleishmanHillard was on the ground hearing directly from farmers, food leaders, agribusinesses and tech innovators, along with global policy, industry and academic leaders about what’s working and what’s not.

What we heard repeatedly was striking. While innovation and disruption drive today’s ag tech conversation, farmers still rely most heavily on word of mouth, recommendations from trusted advisors, and partnerships built over years. From the tech company perspective, the conversation centered on differentiation and how to stand out in a crowded market while competing for limited investment and customer attention.

This creates a fundamental challenge: While ag tech companies seek to differentiate in an oversaturated market, farmers seek clarity amid piecemeal options. As one farmer panel pointed out, there is no “Good Housekeeping seal of approval” for ag tech. Farmers face a bewildering array of options, each claiming to solve different pieces of the puzzle. The result? Adoption stalls.

Farmers need holistic solutions that work immediately, reliably, practically, and profitably. That demand for certainty is where trust becomes currency. Without a credible source vouching for a solution, many farmers find themselves in analysis paralysis. But trust shortens the decision cycle. When a farmer trusts a source, they can move faster.

Relationships Are Infrastructure

In agriculture, relationships aren’t soft. They’re structural. A trusted agronomist, equipment dealer or financial advisory team becomes part of operational infrastructure because that person understands the farm’s specific challenges, geographic weather patterns, soil conditions, financial constraints and business goals.

New technology that arrives without relationship context is just noise. Conversely, technology that arrives with a trusted recommendation becomes an asset.

To keep that infrastructure intact, farmers, food companies, agribusinesses and investors across every panel kept emphasizing the same characteristics for technology that actually gets adopted: practical, reliable, immediate ROI, user-friendly, easy to operate, and easy to service. The key takeaway: functional innovation earns credibility.

The Communications Parallel: Moving Forward

The same principle that governs farmer tech adoption also governs communications strategy. Just as farmers need advisors and relationships from day one, organizations across every industry — from scrappy startups to established enterprises — need a trusted communications partner embedded in their growth journey from the beginning to help craft their narrative.

When an organization partners with a communications advisor from day one rather than after launch or when they need crisis response, something powerful can happen. As the ag tech ecosystem faces a challenging commercialization gap, the answer isn’t just deeper partnership with farmers. It’s recognizing that breakthrough ideas only scale when translated into stories that farmers, investors and the entire market can understand, believe in and ultimately adopt. That translation work happens early, or it doesn’t happen at all.

That’s how you build understanding and credibility. That’s how you scale. And in agriculture — as in communications — trust is everything.

Article

Notes From the Road: RSA Conference 2026 Edition

April 1, 2026
By Scott Radcliffe

While at this year’s RSA Conference I overheard a very senior security executive at a well-known security company remark that he “came to RSA expecting a security conference and instead seemed to arrive at an AI conference.” Like many things said in jest, there was more than a little truth buried inside.

Walking through the exhibitor halls, you’re immediately struck by the inclusion of AI in nearly every offering on display—from threat detection to incident response to risk management. It seemed every vendor had either retrofitted their solution with AI or built one from scratch.

It would be easy to dismiss it all as hype, another technology cycle where marketing teams latch onto a buzzword without a lot of substance to offer under the surface. Surely at least a little is snake oil, but to dismiss everything as vaporware would be to miss the dramatic and evolutionary step AI represents for the cybersecurity space.

In the short twelve months since last year’s RSA conference, we’ve witnessed countless AI experiments, implementations and innovations, and even the most experienced security minds in the world are grappling with uncertainty about what’s coming next.

The Great Shift: From “Humans in the Loop” to Autonomous Operations

At last year’s conference, most discussions around AI in security were grounded at some level on keeping “humans in the loop” of the decision-making and execution process. AI could augment, assist and accelerate actions taken by human admins and users, but the final call had to rest with a human who understood context, nuance and consequences.

That narrative has fundamentally shifted in a single year. As Wall Street Journal reporter James Rundell pointed out from his first impression of this year’s conference, the industry has undergone a philosophical change over the course of the last year. Security teams are no longer asking whether AI should act independently—they’re asking how to best, and hopefully safely, architect systems where AI must act independently and, quite often, in real-time.

This isn’t a subtle distinction. It represents a wholesale reimagining of how we defend our networks and systems. The efficiency gains of this headlong leap into AI are real, but so are the risks, and that tension is what keeps many security leaders up at night.

Identity as the New Perimeter

If autonomous AI is the emerging challenge, then identity has become an even more critical battleground. Anyone who’s paid attention to the security space recently is familiar with the popularity and continued growth of identity-based attacks that use known, often re-used credentials like usernames, email addresses, and passwords to gain access to systems. With AI systems now being granted expanding autonomy and access to sensitive data, the question of who—or, more accurately, what—should be able to access particular systems, networks, or information has taken on even greater urgency.

Early implementations of AI agents have already demonstrated the dangers of unchecked permissions. Give these systems too much access or too broad an ability to act, and they can quickly spiral into trouble. A key message echoed through many of the talks at RSA this year: guardrails aren’t optional; they’re foundational. As organizations deploy AI more widely, the ability to establish firm, granular controls around identity and access will be absolutely critical. In a world of autonomous intelligent agents, identity becomes the ultimate arbiter of what’s possible.

AI’s Dual-Use Dilemma for Security: Offensive Operators Will Have a Huge Head Start

Perhaps the most sobering insight I took away from RSA this year is how far behind defenders will be, and for how long, in the AI race. AI certainly represents an immediate force multiplier for attackers, and it will take a significant amount of time for defenders to catch up. Kevin Mandia, a veteran cybersecurity executive with decades of experience founding some of the industry’s most iconic companies, put some sobering specifics to this sentiment. In his view, AI will provide a clear advantage to offensive operations for the next two years before the defense can accumulate enough data and operational experience to train systems that keep pace.

The advantage goes beyond speed, though that’s certainly part of it. AI enables attackers to operate with precision and personalization previously unattainable at scale. Rather than deploying generic attack tactics across broad targets, AI allows threat actors to generate bespoke attack plans tailored to individual organizations—understanding their specific vulnerabilities, mimicking their communication patterns, and timing operations to maximize success. For defenders, holding the line while playing catch-up will be a daunting but necessary challenge.

The Sovereignty Conversation: A Quiet but Consequential Shift

Away from the AI spotlight, Microsoft’s CISO for AI and Technology Data, Igor Tsyganskiy, brought up a fascinating nuance to the data sovereignty trend many cloud providers are facing during a fireside chat. As organizations continue to adopt cloud architectures, where data lives—physically and jurisdictionally—has moved from a compliance checkbox to a strategic security consideration.

Different regions, regulatory frameworks and threat landscapes all create scenarios where the location and control of data become material to security architecture. This trend will likely only intensify as companies navigate an increasingly fragmented geopolitical environment. Data sovereignty has been a growing trend for a number of months at this point. The interesting point Tsyganskiy raised at the conference last week, however, was the urgent need for organizations to also build operational contingencies into their plans to satisfy data sovereignty requirements. A recent airstrike that destroyed Amazon’s data center in Bahrain underscores the point: it doesn’t take a missile to disrupt operations, and organizations should be prepared, because the answer may not be as simple as flipping a switch to another data center in a desired location.

For security and communications leaders, this means the conversation with the business can’t remain purely technical. It has to account for regulatory, geopolitical and strategic business considerations.

The Fundamentals Still Matter (Maybe More Than Ever)

Rob Joyce, the former director of cybersecurity at the NSA, emphasized a reality that can sometimes get lost amid the AI hype: the fundamentals of cybersecurity remain a powerful and largely effective defense. His point is worth emphasizing, especially at a conference filled with vendors pitching the latest solutions the security industry has to offer.

Attackers, Joyce argued, continue to disproportionately target organizations that don’t execute the basics well. Though those attacks will only grow as bad actors begin to use AI as a force multiplier, organizations that prepare by adhering closely to good security fundamentals will be in a much better position to weather the coming storm. This means companies that lag in patching systems, haven’t broadly deployed multi-factor authentication, maintain inadequate logging practices, or generally fail to stay prepared are putting their systems at much greater risk.

I would argue the same applies to communications and marketing teams. Ensuring you’re prepared, properly integrated with the rest of the organization and generally ready to help your organization stay ahead of a threat environment evolving at exponential speed is more important than ever. Furthermore, I’d add that the time has come for marketing and communications teams to do their part and partner with technical teams to ensure the security conversation organizations have with their boards and business leaders isn’t dominated by buzzwords but is instead grounded in ensuring the foundational elements of security are strong enough to build upon.

It’s certainly easy to walk away from RSA 2026 with a sense of dread. But to do so would be to miss the deeper message embedded throughout the conference.

Yes, AI represents a significant challenge. Yes, attackers have a near-term advantage. Yes, data sovereignty is becoming a more complex puzzle to solve. But it’s a challenge I think we’re all up for if we’re ready.

Scott Radcliffe is FleishmanHillard’s global director of cybersecurity, leading the firm’s Cybersecurity Center of Excellence and advising clients on rising cyber risks. He recently rejoined FH from Apple, where he led cybersecurity communications and previously served as the agency’s senior global data privacy and security expert.

Article

Why Your AI Rollout Is Stalling (And What Actually Moves the Needle)

March 25, 2026
By Zack Kavanaugh

Most organizations are investing heavily in AI but seeing minimal return. The tools are rolling out. The impact isn’t landing. This article examines why adoption is stalling, what employees are really feeling and why a new model for change is essential to close the gap between investment and outcomes. 

There’s a paradox unfolding in organizations right now – and it’s quietly derailing AI initiatives at scale.

Companies are pouring millions – in some cases, billions – into AI infrastructure. Platforms are deploying. Training programs are launching. And yet, most organizations report that their AI efforts aren’t delivering the results they expected.  

In fact, 95% of generative AI pilots fail to reach measurable business impact. Only 1% of organizations consider their deployments truly mature. And across the workforce, a third of employees are actively considering leaving over unclear AI expectations and lack of support. 

The investment is real. The adoption and impact are missing – and the disconnect is striking.

So, what gives? 

What we’ve learned supporting organizations through AI transformation is this: they’re treating it like a technology problem when it’s actually a people problem. And until we acknowledge that difference, adoption – and business impact along with it – will continue to stall. 

The Emotions Nobody’s Talking About 

Walk into most organizations right now, and the conversation sounds logical. “Here’s the business case. Here’s the ROI. Here’s the productivity uplift.” But underneath that rational overlay is something messier – and infinitely more powerful: how people actually feel.

The data points here are endless – and we could marshal dozens more to prove that adoption is stalling. But honestly? They don’t really matter. What does matter is whether you feel your organization is progressing at the rate you know it’s capable of.  

If not, or if you don’t know where to start to answer that question, it may be time to look closely at your adoption strategy.  

The AI Readiness Gap 

This is the mistake we see companies continuing to make: assuming a strong business case is enough to win people over. We’re treating AI adoption like a switch you flip, when it’s actually a continuous, messy, non-linear process that requires people to move through change at different speeds. 

Most organizations are still leaning on traditional change models – the kind that default to logic and expect a single launch moment to do the heavy lifting.  

But AI transformation isn’t a single moment. It’s not a product launch. It’s a fundamental shift in how people think about their work, what they value in their roles and whether they trust the organization to shepherd them through it. 

That gap – between what leaders expect and what employees experience – is the real barrier to adoption. 

A Different Path Forward 

What’s needed is a model designed for how people actually change. Not how we think they should change. How they actually do. 

The good news: that change management model exists. It features three phases, each working at a distinct layer of the employee experience, and all three matter equally:

Phase 1: Normalization – The Emotional Layer 

Normalization is about shifting mindsets – listening, building psychological safety and trust, de-weirding tools and making AI part of everyday conversation. Before anyone can adopt anything, they need to feel safe, seen and supported. This means listening before launching.  

It also means leaders modeling vulnerability, not just expertise. And it means identifying trusted voices – both champions and skeptics – and giving them visibility in shaping the journey. When you remove the mystique around AI and make it visible in how people actually talk and work, adoption becomes possible. Listening earns you permission to lead.

Phase 2: Experimentation – The Personal Layer 

Experimentation is about shaping habits – encouraging participation and creating low-risk opportunities to try, learn, fail safely and reflect. Once people feel safe, they’re ready to connect AI to their own work and identity.  

This is where curiosity replaces skepticism. You can help replace skepticism with curiosity when you share stories from peers – not polished case studies, but real moments where someone figured something out or tried something that didn’t work. When people see themselves in the adoption story, they move from “this doesn’t apply to me” to “I see where this helps.” Experiments become personal. Habits begin to form. Failure becomes data, not judgment. 

Phase 3: Integration – The Operational Layer 

Integration is about scaling impact – building and validating use cases, measuring value, embedding AI into workflows and scaling solutions. When adoption becomes embedded in how work actually happens, impact becomes measurable and repeatable.  

Proven experiments turn into templates and workflows. Success stories become standard operating procedures. Recognition systems reward AI fluency. And AI stops feeling like the new initiative and starts feeling like “just how we work.”

The Continuum, Not the Launch 

The shift here is fundamental. Instead of treating adoption as a destination, we’re treating it as a progression.  

Instead of betting everything on a single launch moment, latest tool or new corporate mandate, we’re developing constant feedback loops. Instead of assuming readiness, we’re building it – intentionally, measurably and with employees at the center.  

Whether you’re leading an organization, a department or a team, your people will never move cleanly through one phase alone. They will move at their own pace. Some people will be experimenting while others are just beginning to normalize. And as new information emerges, they will oscillate back and forth – revisiting earlier phases to deepen their foundation before moving forward again. 

The bottom line: AI adoption accelerates only when the environment is ready – when culture, clarity and context catch up to ambition. That’s when change starts to feel real. And when people decide it’s worth leaning in.  

More to come on all this. Stay tuned.  

Article

Get the Report: Inside China’s 2026 Two Sessions

March 24, 2026

China just locked in its economic roadmap for the next four years with a 4.5–5% growth target. Here’s what matters: The 2026 Two Sessions formally endorse a pivot toward innovation-driven growth, economic resilience and calibrated openness that reshapes how global companies operate, partner and communicate across markets.

Our latest analysis cuts through the noise to explain what actually matters for your organization in 2026. Based on observation and conversations with leaders across sectors and regions, it examines the strategic context, the trade-offs China is managing and what corporate communications professionals need to know to navigate influence and opportunity in this environment.

Article

The New Culture Gap Report: How Brands Stay Relevant for 100 Years

Lifespans are expanding. By 2050, the population aged 100 and older will reach 3.7 million. For brands, this changes everything about loyalty.

Your customer at 25 won’t be the same person at 75. The brands that win aren’t chasing quarterly engagement metrics. They’re building for something longer and deeper: relevance across a century-long life. Meanwhile, 73 percent of consumers feel the world is more unstable than ever, driving what we call Existential Consumerism. They’re optimizing their bodies, securing their futures, protecting their identities. The paradox: the systems designed to deliver control are quietly eroding it.

Our latest Culture Gap Report commissioned by FleishmanHillard UK, The 100-Year Life Brand Opportunity, explores how brands can stay meaningful across decades of profound change. We reveal the consumer shifts redefining loyalty and the strategic moves that separate brands built to last from those built to trend. Get top findings here or dive into the full report below:

Article

FleishmanHillard Named Among PRovoke Media’s Best Public Relations Agencies in the World

March 10, 2026

FleishmanHillard has been recognized by PRovoke Media as one of the Best Public Relations Agencies in the World, earning recognition as a top agency in Consumer, Technology, Healthcare, Public Affairs and Corporate public relations.

The recognition comes from PRovoke Media’s comprehensive 12-month analysis of the global PR industry to compile what they describe as “the most thorough assessment of the public relations agency landscape.”

Agencies were evaluated based on financial performance, quality of creative work, culture and employer brand, innovative products and services and contributions to industry thought leadership.

The recognition reflects FleishmanHillard’s position as a global communications consultancy redefining modern communications by making AI, data and earned-first creativity standard tools across its teams, while ensuring counselors understand the data well enough to design solutions with clients rather than simply deploying them.

Article

The Tech Industry’s License to Lead Problem: How Tech Companies Made Themselves Vulnerable to the AI and SaaS Apocalypse Doubt

March 4, 2026
By Michelle Mulkey

Last week, a short-seller’s Substack moved markets. AI companies were drawing red lines with the U.S. government while also backing away from brand promises, then rethinking after stakeholder backlash. Earnings reports reinforced an ever-widening gap between the strength of their outlook and the stock price. What is going on?

Tech companies have masterfully sold AI capabilities to their customer base. What they haven’t done is bring their other stakeholders—investors, employees, policymakers and the broader public—along on a coherent story about what it all means or why it matters to them.

The B2B Trap

For too long, the B2B technology industry has been plagued by a self-inflicted wound: the product-as-brand trap, an approach that fundamentally misunderstands the complexity of the modern B2B buying cycle. Driven by engineering-led cultures and the relentless pressure of quarterly product cycles, tech companies have overwhelmingly prioritized the “what” over the “why.” And, as a result, they built their entire communications infrastructure around a single audience: enterprise buyers. When AI emerged as a transformative technology with profound implications for jobs, the economy, regulation and society, tech companies simply applied the same playbook: speaking technical hype to those with purchase authority.

This worked when the only stakeholder that mattered was the customer signing the contract. But it no longer does. Today, a B2B product message isn’t the same thing as a corporate narrative that builds belief and drives competitive differentiation in the minds of investors, talent, regulators and society. While tech companies optimized sales messaging, they surrendered the authority to shape how stakeholders understand the broader implications of their innovations. Investors, employees and the public have filled that void with their own narratives. Most of them anxious.

The License to Lead Data

This vulnerability is directly rooted in abandoning what our License to Lead research, first released in January, reveals about how to create stakeholder confidence beyond the tech buyer. The data is unambiguous: stakeholders don’t extend confidence based on technical prowess alone. They extend it when companies demonstrate ethical behavior (24%), clear communication (21%), integrity (76%), and accountability (74%). Tech companies have leaned entirely into capability claims while neglecting the foundational work of stakeholder engagement and transparency.

Worse, they’ve created a credibility liability. When employees worry about job displacement and hear only technical defensiveness, confidence erodes. When investors question AI’s ROI or how SaaS fits into an AI future and get more hype and hyperbole, belief wanes. When society hears about AI’s economic impact and starts to experience its energy impact, skepticism hardens into doubt and resistance, which is exactly what we’re seeing in current market valuations.

The Path Forward

The good news: It’s not too late. The companies that shift from product-centric hype to authentic corporate storytelling that owns the “why,” engages honestly about implications and drives to clear takeaways about differentiation and impact will be the ones that regain and retain stakeholder confidence, investor trust and, ultimately, their License to Lead in this critical moment.

The research on License to Lead presents an urgent corrective that demands a fundamental shift for the tech industry. Communications leaders have to reposition themselves as the builders of stakeholder confidence and the architects of strategic clarity.

Article

The Five Principles of Decision-Ready Intelligence: A Framework for Making Hard Calls in an AI-driven Environment

March 3, 2026

Powered by TRUE Global Intelligence

Organizations are generating more data than ever, and AI tools are now being woven into nearly every corner of decision-making. But the speed and volume of these new systems have created a new risk for leaders: intelligence that looks authoritative at first blush but falls apart under scrutiny.

When confidence is eroded, it doesn’t merely lead to bad decisions; it undermines leaders’ ability to act at all. As our recent License to Lead research shows, when stakeholders lose confidence in how decisions are made, leaders lose the permission to adapt and execute when strategies shift.

The gap between what technology can do and what leaders actually need has never been wider. We take a clear-eyed look at why that gap is widening, and how leaders can close it with decision-ready intelligence. At the center are five principles that set the standard for intelligence that is grounded in reality, driven by context, strengthened by human expertise and resilient under pressure.

The Challenge

Across boardrooms, a new tension is emerging: leaders are being asked to make faster, higher-stakes decisions with intelligence systems that haven’t kept pace with the speed or complexity of the market.

AI has changed the workflow, but not always for the better. It produces more information, more quickly, and with more confidence, even when the underlying signals are fragmented, distorted, or outright manufactured.

Executives are finding themselves in meetings where numbers look precise but fall apart under basic scrutiny. Social listening feeds inflate trends driven by bots. Tools and algorithms give weight to the loudest voices instead of the most relevant ones. AI-generated analyses confidently misread sarcasm, context, or policy detail. And teams don’t realize the flaws until the decision is made.

The hype is fueling the problem. Many teams now treat AI outputs as inherently superior to human interpretation, even when the model draws from noisy data or fills gaps with unsubstantiated guesses. As a result, leaders are making strategic decisions based on insights that feel authoritative but aren’t anchored in anything verifiable.

Why This Matters

In a moment when nearly every information stream is compromised by platform shifts, algorithmic changes, and generative noise, some of the most consequential choices inside organizations today are informed by dashboards and summaries that no one has fully interrogated.

Many organizations are starting to feel the consequences: strategy built on thin intelligence, misreads of sentiment leading to audience disconnects, delayed course corrections, and a growing sense that the tools meant to make decisions easier are, in reality, making them riskier.

As our License to Lead research shows, credibility is the gating factor for action. Ninety-two percent of engaged consumers say companies with strong reputations have greater permission to undertake major business transformations.

External benchmarks also show the stakes are real. Gitnux reports that poor underlying intelligence tied to bad data costs companies an average of $12.9M a year. Eighty-eight percent of companies report a direct impact on their bottom line due to poor data, eroding 15-25% of revenue. An estimated 40% of AI projects fail to deliver ROI due to poor data quality.

What’s missing is clarity, and the discipline to separate what is real from what merely appears to be. That gap is driving the need for decision-ready intelligence: insight that is accurate, contextual, and defensible under pressure.

The Five Principles of Decision-Ready Intelligence

TRUE Global Intelligence, FleishmanHillard’s intelligence consultancy, developed the Principles of Decision-Ready Intelligence to close that gap. These principles define the standards required to generate insight leaders can trust in an environment where speed, hype, and noise increasingly shape the inputs behind major strategic decisions.

1. Quality & Organization

Inputs must be right before outputs can be trusted, and there are two core tenets.

First, data must be accurate, verified, enriched, and reviewable. That means clear processes for validation and traceability so leaders know exactly where inputs came from and whether they meet the standard for decision-making. This also includes understanding how different file formats, structures, and metadata are interpreted by AI models so inputs aren’t distorted before analysis even begins.

Second, a wide net is not a wise net. Leaders need relevance, so part of our job is to guide clients toward the sources that reflect meaningful public or stakeholder signals and away from the noise masquerading as insight.

If this first foundation isn’t sound, nothing built on top of it is reliable.
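To make the validation-and-traceability idea concrete, here is a minimal Python sketch of what such a process could look like. Everything here is illustrative and hypothetical: the `Signal` record, the relevance allowlist, and the `validate` checks are assumptions for the sake of the example, not part of any TRUE Global Intelligence tooling.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Signal:
    """One data point, carrying the provenance needed for traceability."""
    text: str                      # the raw content of the input
    source: str                    # where the input came from
    collected_at: datetime         # when it was captured (UTC, timezone-aware)
    verified: bool = False         # set True only after validation passes
    notes: list[str] = field(default_factory=list)  # audit trail of problems

# Hypothetical allowlist of sources judged relevant to the question at hand.
# This operationalizes "a wide net is not a wise net": relevance over volume.
RELEVANT_SOURCES = {"stakeholder_survey", "earned_media", "analyst_report"}

def validate(signal: Signal) -> Signal:
    """Mark a signal verified only if it is non-empty, relevant, and plausible."""
    problems = []
    if not signal.text.strip():
        problems.append("empty content")
    if signal.source not in RELEVANT_SOURCES:
        problems.append(f"source '{signal.source}' not on relevance allowlist")
    if signal.collected_at > datetime.now(timezone.utc):
        problems.append("timestamp in the future")
    signal.verified = not problems
    signal.notes.extend(problems)   # failures are recorded, so reviewers can trace why
    return signal
```

The design choice worth noting: a failing signal is never silently dropped. Its problems are written into `notes`, so any conclusion built downstream can be traced back to exactly which inputs met the standard and which did not.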

2. Context & Focus

Decision-ready intelligence starts with alignment: What strategic, business, or communications question are we trying to answer?

When this question is clear, analysis becomes sharper. It prioritizes the variables that matter and starts relying on smaller, high-quality datasets. It favors focused methods that reveal why something is happening, not just what happened.

Too much analysis is disconnected from the decision it is meant to inform. Dashboards bloat, metrics add up, and models optimize for volume rather than clarity. The result is intelligence that reports activity without explaining meaning.

Insight is only useful when it answers the question at hand.

3. Guardrails and Expertise

AI accelerates the work. It does not replace judgment.

There is a misconception that automation reduces the need for experienced oversight. In reality, it magnifies the consequences of getting something wrong.

Decision-ready intelligence relies on experts who understand the limits of the data, the behavior of platforms, the context behind anomalies, and the boundaries of what any model can reasonably reveal. They bring the pattern recognition AI lacks, set guardrails, validate assumptions, and challenge outputs. Most importantly, they recognize when something can’t be answered.

This discipline ensures that speed never outruns accuracy or context.

4. Curiosity & Critical Thinking

AI delivers answers with certainty, even when the signals behind them are unstable. The risk is not just the error itself; it’s the false confidence attached to the error.

That’s why curiosity and critical thinking play an integral role in this framework. Curiosity triggers are the moments when something in the data doesn’t add up: a spike that doesn’t match the environment, a contradiction across sources, or a pattern that defies logic.

Through critical thinking, we can trace these anomalies back to their source, understand whether the data reflect a real-world signal or a model artifact, and, if something shouldn’t exist, adjust the process so it doesn’t reappear.

This human layer of understanding ensures conclusions can stand up to internal review, external challenges, and the decision itself.

5. Shared Literacy & Accountability

The ultimate purpose of intelligence is action, and most actions at the leadership level are strategic decisions: how to position, when to engage, what to say, where to invest, what risk to take.

That’s why shared literacy and accountability are part of the intelligence discipline: stakeholders work together to give the analysis strategic direction.

This principle connects directly back to Context & Focus. When the intelligence work is built around a specific strategic question, we must answer that question head-on.

It also creates shared understanding across teams. Without that shared literacy, strategy splinters. Not because the intelligence was wrong, but because it wasn’t communicated in a way that aligned the people responsible for acting on it.

This is the standard moving forward.

The pressure on leaders isn’t going to ease. AI will continue to accelerate workflows, expand access to data, and reshape how information moves across organizations. Without standards, speed simply amplifies whatever is already there, good or bad.

The organizations that will navigate this moment effectively are the ones building the discipline to question what their tools produce, align around shared interpretation, and hold the work to a standard that reflects the stakes.

The Principles of Decision-Ready Intelligence provide the structure to meet that responsibility. They help teams narrow the signal, apply context, challenge assumptions, and ensure intelligence is something leaders can act on. And as the information environment becomes more complex, that discipline becomes the differentiator.

Decision-ready intelligence is one of the few levers leaders fully control to strengthen their License to Lead, building confidence before decisions are tested rather than trying to recover it afterward.

Decision-ready intelligence isn’t optional. AI can support strategic judgment, but it cannot take responsibility for it. We remain accountable for the decisions we make.

That accountability extends to the partners we choose. Communications leaders should expect and demand more rigor from the tools, vendors, and agencies they engage. At a minimum, they should ask:

  • Where does the data come from?
  • How is it validated?
  • Who is interpreting the data, and how?
  • What guardrails are in place when the model gets it wrong?
  • What standard does this intelligence have to meet before it reaches a decision-maker?

If a partner can’t answer those questions clearly, they’re not providing intelligence; they’re providing risk. You should demand insight that is real, relevant, and ready for decisions that carry real consequences.

Lead Authors: Ben Levine, Ines Schumacher, Eric Rydell


The Patient Engagement Gap Your Competitors Are Closing

February 26, 2026
By Barry Sudbeck

Here’s a question more pharma executives are asking: Does patient engagement move the needle, or is it just good optics?

It’s a fair question.

In an era where pharmaceutical innovation must prove its value not only through clinical efficacy but also through demonstrated patient relevance, the question is no longer ‘whether’ to engage patients—it’s whether that engagement translates into an advantage.

New research from FleishmanHillard’s Global Health & Life Sciences group found it might. Released in recognition of Rare Disease Day, The Patient Engagement Premium: Defining the Strategic Value of Patient Input in Drug Development examines FDA submissions for rare disease therapies approved between 2018 and 2024 and finds directional associations between documented patient input and regulatory outcomes.

From Philosophy to Evidence

The shift from transactional patient engagement to embedded patient evidence isn’t new thinking, but it is an accelerating practice. And as regulatory scrutiny of traditional DTC channels intensifies and Health Technology Assessment bodies increasingly consult patient advocacy organizations, companies face a choice: embed patient evidence directly into development processes, or risk losing ground to those who do.

But let’s be honest: executive decision-makers demand more than anecdote. This research represents a crucial step toward establishing a measurable evidence base for patient engagement as a strategic investment, not just a values statement.

A Rigorous Approach to a Complex Question

The analysis examined 179 rare disease drug approvals that included Patient Experience Data (PED) tables, a requirement formalized following the 21st Century Cures Act. Each product was assigned a ‘Patient Engagement Score’ based on six distinct engagement activities, from patient advisory committee insights to patient-reported outcomes (PROs) and clinical outcome assessments (COAs).

Here’s what we found:

  • Patient input is increasingly embedded in regulatory submissions. Nearly nine in ten submissions in 2023-2024 explicitly cited at least one patient engagement activity, up markedly from earlier in the study period. PRO and COA data have become the most common form of patient input, signaling that companies may be integrating patient insights systematically and earlier in development.
  • Higher engagement scores trended with patient-centered labeling. Products with label claims tied to patient input averaged 1.4 documented engagement categories versus 1.0 for those without, a modest but directional association that could confer commercial advantage.
  • Company size isn’t a barrier. Mid-cap sponsors engaged in patient-centered activities nearly as frequently as large pharmaceutical companies. Translation? The potential benefits of patient engagement appear accessible across the competitive landscape.

What Happens to Companies That Don’t Move?

Let’s be clear: the evidence base is still developing, and these associations are directional rather than conclusive. But the implications are hard to ignore.

Patient engagement is evolving from ethical consideration to strategic necessity. Companies are prioritizing structured, quantifiable patient data, particularly PROs and COAs, for FDA submissions. Yet many underutilize other pathways, including patient organization partnerships and patient preference studies. That suggests that comprehensive investment in the full spectrum of patient evidence could be an untapped competitive edge.

For smaller companies not yet systematically integrating patient perspectives, the takeaway is encouraging: structured engagement may level the playing field. For larger companies that under-invest in patient input, the risk is equally clear: patient-centered rivals may be building advantages that compound over time.

Looking Ahead

As the evidence base expands and sponsors document patient engagement more comprehensively, clearer patterns will likely emerge. But the direction of travel is already obvious: regulators, payers, and patients themselves are reshaping how innovation is valued. Companies that embed patient engagement as foundational, not peripheral, will compound advantage across regulatory, payer and reputation landscapes. The infrastructure to do it exists. The question is execution.

Our approach combines regulatory expertise with data science and AI tools to help clients operationalize patient input across the product lifecycle, ensuring innovation is positioned as both evidence-driven and human-centered.

The pharmaceutical industry is at an inflection point. The companies that treat patient engagement as foundational—not peripheral—will define what comes next.

To access the full report or discuss how strategic patient engagement can create value for your organization, visit fleishmanhillard.com or contact Barry Sudbeck and Laura Musgrave, Patient Engagement Specialists with FleishmanHillard’s Global Health & Life Sciences group.



AI is Reshaping Communications: Inside FleishmanHillard’s Enterprise-Wide Approach

February 19, 2026

In his new Forbes piece, Bernard Marr explores the breakneck pace of AI transformation in the communications landscape with Ephraim Cohen, FleishmanHillard’s global head of data and digital. Cohen reveals that unlike past technological shifts that took decades to prepare for, today’s AI evolution is happening so rapidly that even full-time experts are struggling to keep pace.


Three Key Takeaways:

1. Democratizing AI Across the Organization

Rather than creating an elite “AI team,” Cohen describes empowering every employee with hands-on access to frontier models and training. This bottom-up approach has yielded more powerful, bespoke solutions because they’re built by people who intimately understand client challenges, rather than by strictly technical specialists.

2. The Power of Curated Knowledge Libraries

Building digitized libraries of proven case studies and best practices that feed AI agents creates more relevant, accurate outputs than relying on open internet training data. For crisis simulations and campaign work, this approach delivers precision over generic AI-generated content.

3. Keeping Humans in the Driver’s Seat

Cohen emphasizes that human creativity remains paramount. AI works best as a talented assistant—helping test, refine, and optimize human ideas, not replacing them. The result: less “AI slop,” more polished, high-impact work.