AI Ethics Need Time
At the same time that Major League Baseball (MLB) has made a revolutionary rule change to speed up America’s pastime, some renowned business leaders have called a timeout to slow down the planet’s hottest new technology. In a world that places a high priority on time, is it fair to ask organizations to hit pause on artificial intelligence (AI)?
With many believing that notoriously long baseball games have outlasted the attention spans of fans now conditioned for shorter bursts of entertainment, MLB made the game-changing addition of a pitch clock, which already seems to be serving its purpose of expediting play.
However, sports don’t always imitate life. Concerned about the meteoric rise of AI and its potential abuses, over 18,000 people have signed the Future of Life Institute’s open letter asking all AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Among the notable signatories are tech leaders Elon Musk and Steve Wozniak and 2020 presidential candidate Andrew Yang.
Most of us have heard of at least some ethical infractions attributed to AI that range from the art tool Midjourney slyly outfitting Pope Francis in a longline white puffer coat to complaints that an uncensored chatbot continually offends human decency.
In light of such concerns, some marketing professionals have joined ranks with their tech colleagues and said that the proposed pause on AI development is, akin to Keebler cookies, “an uncommonly good idea.” On an even larger scale, Italy recently became the first Western nation to ban ChatGPT.
However, not everyone agrees that an AI pause is necessary. While major brands like Coke, Duolingo, and Expedia are increasingly leveraging the power of ChatGPT for their digital marketing, Microsoft has gone much further, making multimillion-dollar investments in OpenAI, the app’s owner.
Also questioning the prudence of the open letter and proposed AI pause are “some prominent AI ethicists” and Microsoft co-founder Bill Gates, who has said, “I don’t really understand who they’re saying could stop, and would every country in the world agree to stop, and why to stop.”
Gates’ issue with the AI pause doesn’t seem to be so much that he believes it’s a bad idea in principle as that he fears its implementation would be unilateral, i.e., many around the world won’t honor the halt. As one of Microsoft’s largest individual shareholders, Gates understandably doesn’t want the company to fall behind in the AI race.
Gates elaborated on his AI perspective, saying, “Clearly there’s huge benefits to these things… what we need to do is identify the tricky areas.”
Coming from someone who is a prolific reader and arguably one of humanity’s greatest intellects, “tricky” is a very interesting choice of words.
Gates probably didn’t mean “tricky” in the sense of sly or deceptive; rather, he chose the adjective to convey that AI issues are complex, delicate, and intricate. Either way, the word says a lot about the approach that should be taken to AI ethics.
When a bomb squad comes upon an unfamiliar explosive device that it needs to deactivate, and one of its members says, “This is going to be tricky,” it’s probably not code to pick up the pace and rush headlong into the defusing process. Instead, “tricky” likely signals to everyone on the team that they should slow down and think, “How exactly do we want to go about this?”
Based on the educated opinions of tech experts who know much more about AI’s potential risks and rewards than most of us do, the transcendent and ever-evolving technology is potentially explosive, or, to borrow both words above, “uncommonly tricky.” Some of the potential pitfalls include information accuracy, privacy, intellectual property, offensive content, attribution, and impact on human livelihoods.
When it comes to tricky ethical issues, it’s not only okay to pump the brakes, hit pause, and take a beat – it’s desirable. Moral choices shouldn’t be rushed; rather, they often benefit from more time to allow for:
Consideration of other opinions: Any given person’s, organization’s, or industry’s perspective is naturally limited and usually biased to some extent. It’s very helpful, therefore, to engage other stakeholders who can offer divergent views, or at least ask good questions.
Better projection of likely outcomes: A danger of rushing a product to market is that some consequences only become apparent over time, after they occur. Delaying a launch that long may not be practical, but additional conceptual testing usually is possible, and it can help identify other likely outcomes.
Deeper reflection on pertinent principles: Identifying what specific moral issues are at play in a given situation requires very intentional analysis. Determining what particular courses of action are decent, fair, honest, etc. requires even greater contemplation.
Unfortunately, an MLB pitcher facing a full count against a prolific home run hitter can no longer take extra time to gather himself before throwing the next pitch. However, even though the game of high tech is moving at a very rapid pace, there is no pitch clock on AI ethics.
Gates is right that not everyone in the world will hit pause on AI development at the same time, which is concerning. But why, then, not apply the same logic to an issue like greenhouse gases? Certainly not every organization or nation is working to reduce its CO2 emissions; yet Gates, thankfully, is a vocal advocate and major financial supporter of mitigation efforts.
In ethics, it’s not only risky to think too fast; it’s also ill-advised to reason, “Others won’t take a stand, so why should I?” Pumping the brakes on new technology is not always needed, but given AI’s life-altering potential, some extra time to talk and reflect equals “Mindful Marketing.”