Infusing the Human Role in AI – A paradox for communications success in an era of digital transformation
Nearly every organization – regardless of industry – is facing an existential question right now: How will AI change the way we do business? For some, the technology brings optimism in the shape of streamlining cumbersome processes or eliminating the mundane. For others, it raises real questions about relevance and the role individuals will continue to have in shaping the business, along with concerns about misuse or misappropriation of their IP and other sensitive information. For most, there seems to be an expectation that more AI means less human involvement, for better or worse.
Regardless of where any individual business is in its AI trajectory, one thing is true for all: AI – and generative AI in particular – is precipitating mass-scale business transformation faster than any technology before it.
In rapid succession, the questions that tend to follow the first are:
- How can this technology help us and what are the risks for our organization?
- How are we managing the process within our organization?
- Who is responsible for deciding when and how we communicate forthcoming changes to our business?
- How will our business roll out the technology to balance speed with safety and deliver industry-leading tech solutions while ensuring compliance and mitigating risk?
In a recent client salon on the role of generative AI in communications, senior FleishmanHillard counselors discussed and debated the merits of the technology, the AI transformation that is upon us all, and the value of thoughtful and deliberate communications as the most effective tool for ensuring the success of the transformation.
While questions about guardrails and best practices abounded – and responses led to more questions with seemingly endless grey areas – one reality became clear in every part of the discussion: at this still-early stage, the success of AI implementation has more to do with the humans who wield it than it does with the technology itself. In essence, successful adoption and implementation of generative AI will be the result of uniquely human applications of the technology if we know how to navigate it and communicate about its potential.
As organizations embrace AI-driven technologies to stay competitive and relevant, the corporate communications function plays a critical role. Communications will not only engage employees in the evolution of the technology, but will also ask critical questions about the use of AI on behalf of other audiences. Communications teams will develop, and help their organization’s leaders anticipate, responses to AI risk scenarios, and will help pressure-test the organization’s AI transformation strategy and how it is rolled out across the enterprise and to key stakeholders. Among the many topics discussed, three key principles emerged for communicators to consider as their organizations embark on this change:
Earn trust, don’t ask for it | Transparency is accountability
- In the age of misinformation and disinformation, it is the leaders and the employees who are the ultimate arbiters of an organization’s credibility and reputation. The time to build trust is not when you’re in the middle of a crisis but well before a crisis unfolds. Businesses build credibility, an inherently human quality, and demonstrate accountability through transparency, communication and ownership of outcomes. Working backwards from a hypothetical breach of trust and implementing the checks and balances that cultivate trust can minimize an organization’s level of exposure and serve as an insurance policy for its reputation, particularly with a relatively new technology like generative AI.
Understand your threshold for risk | Compliance over defiance
- Any AI communications strategy that is not aligned to specific business imperatives will fall short and expose the business to risk. It is essential for any organization’s AI transformation to begin with the business objective. If the point of guardrails is to allow a business to move faster and engage employees, the board, investors and other stakeholders, then guardrails must be based on business priorities and the level of risk the organization is willing to tolerate – not on fear of what might break within the organization. Understanding how to communicate that internally through a lens of measured optimism is – notably – another uniquely human characteristic. Simply focusing on the risks of generative AI, without the counterbalance of defining opportunities and organizing safe experimentation, leads to prohibition and creates pathways to defiance rather than compliance. This is especially likely when curious, energetic employees are ready to move faster than the organization. Leading with the promise of the technology, balanced with a measured understanding of what level of risk the organization can tolerate, is the path to fostering dialogue and engagement.
Resist normalization | Overcome a sea of sameness in communications
- While generative AI can expedite our understanding and synthesis of topics, historical events and the like, it is also rapidly accelerating the normalization of how those very topics and events are reported, documented and reflected. Ultimately, by referencing and assimilating a wide range of existing content, generative AI without appropriate oversight could increase the risk of a sea of sameness – communications that increasingly look alike across organizations and brands, and appear less distinctive to audiences. Communications teams are at particular risk when using the tools to break down complex and highly nuanced topics. In our role as communicators, it is incumbent upon us to introduce critical examination into the communications process so that our content is not simply a modified representation of the homogenized whole, but rather delivers a nuanced perspective that reflects the culture, values and integrity of our organizations. The introduction of an old-fashioned, familiar media tool like an editorial board within an organization can help ensure that internal and external communications created with generative AI are credible, bespoke for the audiences they’re intended to reach, truly reflective of the organization’s values, purpose and brand, and uniquely expressed – without letting go of the benefits of AI for faster content generation.
Credibility. Optimism. Integrity. Reflecting on all that was discussed, it’s clear that a humanistic approach to AI adoption and rollout is perhaps one of the strongest risk management and mitigation strategies for leadership to implement today. Ironically, it is likely to be the generative AI implementations with the most qualitative, editorial-style human oversight that prove most effective, rather than the “low- to no-humans” version that concerns many organizations.
So, how long is the runway for generative AI adoption? It’s both right now and at least three years out. A recent Gartner report predicted that by 2026, “over 80% of enterprises will have used GenAI APIs and models and/or deployed GenAI-enabled applications in production environments, up from less than 5% in early 2023.” Though it may feel as though there’s time, getting it right means organizations must start planning now if they intend to reap the benefits of the technology. Effective communication will be essential to ensuring key stakeholders see the value and the opportunity, rather than the challenge and risk, inherent in the adoption of any new technology – and that means prioritizing those audiences at every stage of the process with meaningful updates, intentional checkpoints and resources. It’s about creating a shared vision of the future where AI augments human capabilities, leading to better, more meaningful work.
We know that AI holds tremendous promise – in business and in our personal lives. And, despite undeniable perils and the potential for misuse or unintended consequences, guiding principles rooted in integrity can minimize those risks. Personally, I see something optimistic about the opportunity and ripple effects generative AI technology will have within and across industries – so long as we stay focused on the very real human impact that is at the heart of the technology. AI’s promise is within reach, and a responsible, people-centered approach is the bridge to a brighter future.
This communication is offered as general background and insight as of the date of publication, but is not intended to be and should not be taken as legal advice. Each organization should confer with its own legal counsel and its own business and strategic advisors for guidance that is specific to and considers the organization’s status, structure, needs, and strategies.