AI, Ethics, and Public Relations

The greatest danger of artificial intelligence is not the technology, but how we use it.

September is PRSA’s Ethics Month. One area of focus will be on the ethical use of artificial intelligence.        

With adoption having reached a tipping point, industry visionaries are now leveraging AI to generate content, pitch, measure, and even manage the whole PR process. 

As we dive into the AI pool, we need to be guided by ethical considerations of what is right for the industry. Ultimately, “just because you can, doesn’t mean you should.”  

Unlike many who only talk about the idea of AI, as a technology company we are knee-deep in the weeds of doing: we are building data models and programs that embrace the potential of artificial intelligence to enhance our mission of connecting clients with agencies and professionals.

Although it may be tempting for individual agencies or companies to leverage AI tools to create a first-mover advantage, we need to recognize the potential for a dark side.

I don’t want to get all Terminator here, but just as technology luminaries debate the existential risks of artificial intelligence, our eyes need to be wide open to the consequences of its use in the PR and communications industry. The elimination of jobs is likely the least of our concerns.

Since individual performance is often measured by the latest new business win, article, or social media impressions, the incentive to generate quick results is significant. And the temptation to take shortcuts is strong.

When incentives distort ethical behavior, we are at risk of chasing “fool’s gold.” As I wrote in Tunnels & Funnels: Why We Make Bad Decisions & How We Can Make Better Ones, it is a fundamental part of human behavior to focus on what we are chasing and lose sight of the broader context. When we go into tunnels, we have to stop and look up to ensure we see the big picture.

This is at the heart of why we need to think about ethics in general, and an ethical approach to AI in particular. Ethics provides a long-term context and a dimension broader than money alone for making the “right” decisions for our businesses, our industry, and society as a whole.

No doubt this is tricky territory. Morals and ethics are deeply entwined, but I’d argue that, for our purposes, we should think of ethics broadly as a guide to action that will ensure the sustainability of individual businesses and the industry over the long term. This idea is built into PRSA’s Code of Ethics.

There are a number of key questions we need to ask to guide the ethical use of AI:

  • What will be the impact of producing content at scale using ChatGPT or other tools to pitch journalists or to enhance SEO?

  • Will AI-driven pitching tools enhance journalists’ engagement with PR people, strengthen or damage relationships, and/or lead to journalists tuning out?

  • What will be the impact of AI-driven automation of PR industry functions on the professional skillsets of people in the industry?     

  • How should we think about the use of AI tools that leverage data, pitches, images, or other potentially copyrighted material?

  • What will we need to do about the biases built into AI to avoid excluding certain groups from the industry?

These complex issues need to be discussed in far greater depth. While industry codes of ethics are powerful guides to behavior, they will need to be updated to reflect what AI is now making possible. This is already happening at the government and societal level, and it needs to happen in our industry as well.

History shows us that ethics is generally a slow follower of innovation and technology. The rollout of AI-driven tools will be swifter than the implementation of rules or guidelines governing their use. The consequences of unethical or inappropriate uses of the technology will only become manifest down the line. All the more reason for more discussion today.

The future is never certain. 

The greatest risk of AI for our industry is that we reach a point where the arms race to generate more (pitches, content, automation) leads to less engagement from journalists, audiences tuning out, and ultimately reputational damage to the industry.

AI used in the right way has the potential to create better-informed professionals, more focused pitches, and smarter research, all of which lead to richer human engagement. It is up to us to choose the path we take.