What Role Should Brands Play in a World of Misinformation?
While much of America was barbecuing, setting off fireworks or otherwise celebrating the Fourth of July holiday, a federal judge in Louisiana was limiting interactions between the White House and social media companies, based on a case brought by the Attorneys General of Louisiana and Missouri.
In the spirit of tl;dr (too long, didn’t read), I’ll summarize the case and ruling this way: the intention is to limit government censorship of opinions that run counter to its policies (COVID regulations and election results, most notably). The government would contend that suppressing disinformation is a public service, especially during a public health crisis, so the question becomes: where’s the line between stopping the spread of dis/misinformation and infringing on someone’s free speech?
This question is likely to reach the Supreme Court for a final decision, but what’s the role of brands in this ongoing battle? First, let’s define the difference between misinformation and disinformation: the former is typically an error committed without intention or malice, while the latter is often part of a coordinated campaign to confuse, contradict and create conflict.
As professional communicators are among the arbiters of brand reputation, we have a duty not only to correct misinformation but also to seek out and stem the spread of disinformation. This is how we can best prevent the wider creation and distribution of false information. It’s also one of the critical concerns around AI taking hold of creative development and news generation: when sources feed off one another, the risk of false or ‘fake’ content proliferating across our feeds becomes more difficult to mitigate.
If that’s why we play a role, how do we do it? We start by ramping up our monitoring and listening efforts to find not only the posts and pieces of content, but more importantly, the sources and channels propagating mis/disinformation. We not only report these handles and publishers but also support third-party groups with missions to monitor public conversations, intercept and flag false information (even when the platforms don’t).
Speaking of the platforms themselves, we must keep the pressure on them to increase internal policing of disinformation campaigns. For every innovation like Threads, we should see two or three new products aimed at making sure conversations are not only ‘friendly,’ but also trusted.
While brands should ideally celebrate expression and differing opinions on social media, the line must be drawn when opinion gives way to misinformation, whether shared innocently or with nefarious intent. One solution: Community Guidelines have existed at the brand-page level for more than a decade and should be relied upon when conversation moderation is necessary. Think of it as a home security system for your house (the brand page) inside an already gated development (the platform).
By clearly stating that information which can be proven false will not be tolerated or allowed on the page, you’ve made it clear that this content will be suppressed (deleted or hidden). It’s a form of digital defense that helps shore up your monitoring efforts so you can focus on the fun stuff: proactive storytelling and consumer engagement.