St. Luke’s Health System in Kansas City, Mo., earlier this month discovered a memo circulating online that purported to come from the health system, claiming that alcohol consumption would reduce the risk of contracting COVID-19.
Health system leadership had never seen the memo before, and the prevention tip—which highlighted drinking vodka as a way to protect against the coronavirus—had no scientific backing.
“We received close to 100 messages in our Facebook private inbox overnight, with people sharing this memo and asking if this was authentic information,” said Jani Johnson, CEO of St. Luke’s Hospital of Kansas City. She said most of the messages came from people living outside the U.S., including in Lebanon, the Philippines and Saudi Arabia, and some noted they had first seen the memo shared via the messaging service WhatsApp.
“We recognized that it was really essential for us to take immediate action,” Johnson said. “It’s potentially dangerous information, and we wanted to combat that and get in front of it quickly.”
Johnson said St. Luke’s has been responding to those messages one by one and has posted a notice to its Facebook page clarifying that the memo didn’t come from the health system and contained inaccurate information.
“False reports are circulating that say drinking alcohol can reduce the risk of COVID-19,” the health system wrote. “THIS IS NOT TRUE. St. Luke’s follows CDC guidance.”
St. Luke’s isn’t sure who originally created the memo or what their intention was.
The incident highlights the growing concern that malicious actors could deploy false information about the coronavirus to generate confusion or profit.
The uncertainty around COVID-19 leaves openings for malicious actors to spread misinformation, exploiting gaps in public knowledge for their own gain, such as trying to sell fake cures, said Alain Bernard Labrique, an associate professor at the Johns Hopkins Bloomberg School of Public Health and director of the Global mHealth Initiative at Johns Hopkins University.
“As we, as scientists, refine our understanding of the disease, we are usually very cautious in our statements about what we know and the certainty with which we know these things—and we also are very explicit in what we don’t know,” Labrique said. “It’s that unknown that usually creates the perfect environment for speculation, rumor and the creation of false information.”
That’s particularly true on social media platforms, where sensational claims can gain traction more easily than factual ones.
The Trump administration in recent weeks has urged tech giants including Facebook and Twitter to help curtail the spread of misinformation. In a private meeting earlier this month, the White House reportedly asked technology companies to work together to tackle COVID-19 conspiracy theories before they go viral, according to the Washington Post.
Since then, technology companies have said they’re stepping up to that task.
Facebook, Google, LinkedIn Corp., Microsoft Corp., Reddit, Twitter and YouTube released a statement pledging the group’s commitment to “combating fraud and misinformation about the virus, elevating authoritative content on our platforms, and sharing critical updates in coordination with government healthcare agencies.”
To ensure apps in its App Store provide reputable information about the outbreak, Apple has said it will only admit coronavirus-related apps that are from recognized entities, such as “government organizations, health-focused NGOs, companies deeply credentialed in health issues, and medical or educational institutions.”
“The public has had access to a lot of information,” Labrique said, noting that social media and technology have made it easier for scientists to communicate directly to the public. But “it also has had access to a lot of misinformation.”