In the early and thus far most devastating stages of the COVID-19 pandemic, scientists were at a near loss as to how to treat the deadly disease, and the public was desperate for information. Consequently, two antimalarial drugs, chloroquine and hydroxychloroquine, became the subject of a Twitter storm in the marketplace of ideas known as social media. The drugs were touted as a potential cure: There was a run on the medication, creating a shortage for patients who relied on it for other medical indications, such as lupus. One person died and another was hospitalized after taking chloroquine as a prophylactic.
Although the drug therapy turned out not to be a magic bullet, researchers from the University of Cincinnati wanted to know what influences led so many people to believe it was, despite warnings from leaders in the scientific community that claims of the drugs’ efficacy were unfounded. Their findings appear in the journal Social Media + Society.
Supported by the UC Office of Research’s Digital Futures Initiative and funded by the Andrew W. Mellon Foundation, a multidisciplinary team analyzed over 100 million Twitter posts related to COVID-19. By focusing on tweets, likes and retweets citing the drugs by name, the team found that science and politics were competing directly with each other, and that the loudest voice on the platform, then-President Donald Trump, contributed greatly to the falsehood even though he did not originate the claims.
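To make the filtering step concrete: below is a minimal sketch of this kind of keyword matching in Python with pandas, assuming a hypothetical tweets.csv with text, retweet_count and like_count columns. The study does not describe its pipeline at this level of detail, so the file and schema here are illustrative assumptions.

```python
import pandas as pd

# Hypothetical input: one row per tweet with its text and engagement
# counts. The file name and column names are assumptions for this
# sketch, not the study's actual schema.
tweets = pd.read_csv("tweets.csv")

# Case-insensitive match on either drug name, mirroring the study's
# focus on posts citing the drugs by name. The non-capturing group
# matches both "chloroquine" and "hydroxychloroquine".
pattern = r"(?:hydroxy)?chloroquine"
mask = tweets["text"].str.contains(pattern, case=False, regex=True, na=False)
drug_tweets = tweets[mask]

print(f"{len(drug_tweets):,} of {len(tweets):,} tweets mention the drugs")
print("total retweets:", drug_tweets["retweet_count"].sum())
print("total likes:", drug_tweets["like_count"].sum())
```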
“The research attempted to provide more clarity between misinformation, disinformation and B.S.,” says the study’s lead author, Jeffrey Blevins, professor and head of UC’s Department of Journalism. The distinction, he says, is that disinformation is an intentional act of deception; misinformation is ignorance of fact; and B.S. is not caring whether the information is true or false, out of indifference to the claim or allegiance to its origin. Examples of the latter circulating on social media included claims that the virus was caused by 5G wireless, that African Americans were immune and that a certain type of toothpaste was the cure.
“We have to be aware that there are all sorts of actors on social media, and they are not all credible; just because something is trending or in the echo chamber, it tends to make it sound more credible,” says Blevins. “The sheer volume of the message or the fact that something goes viral doesn’t necessarily make it true,” he says, noting that the drug therapy claims fell into the category of misinformation because there was some scientific evidence that the drugs could be useful.
After all, even the president boasted of taking it without harm.
However, it was the president’s tweets about the drug therapy and the feedback loop between Fox News and Trump followers, Blevins says, that propagated the falsehood of a potential cure, making it “a political issue instead of a medical issue.”
Another finding, Blevins says, is that the news media focused more on fact-checking the president’s misinformation than on its true originators: misguided physicians and conspiracy theorists such as QAnon. “[Trump] got a lot of attention because he was the most significant actor in the spread of misinformation.” But where the misinformation originated, the researchers say, went largely unexamined. “There wasn’t any real discussion about the truth,” says Blevins.
Additionally, the UC study produced charts and graphs that map out which voices dominated the messaging. “When you see the interrelations of Twitter handles mapped out with color and shape in the network visualizations, you actually perform a type of analysis that couldn’t be done before,” says study co-author James Lee, associate vice provost for digital scholarship and director of UC’s Digital Scholarship Center. These visuals, he says, are illuminating: “If you just read the tweets, you do not see the impact. Data visualization really allowed us to home in on who the real influencers were in this case.”
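The kind of network view Lee describes can be approximated with off-the-shelf tools. Below is a minimal sketch using Python’s networkx and matplotlib, with an invented retweet edge list and made-up handles; it illustrates the general technique, not the team’s actual data or code:

```python
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical (retweeter, original poster) pairs; real data would
# come from the Twitter API. All handles here are made up.
retweets = [
    ("@user_a", "@big_account"), ("@user_b", "@big_account"),
    ("@user_c", "@big_account"), ("@user_c", "@small_account"),
    ("@user_d", "@small_account"),
]

G = nx.DiGraph()
G.add_edges_from(retweets)

# Scale node size by in-degree (how often a handle is retweeted),
# a simple proxy for influence in the network.
sizes = [300 + 600 * G.in_degree(n) for n in G.nodes]

pos = nx.spring_layout(G, seed=42)  # force-directed layout
nx.draw_networkx(G, pos, node_size=sizes, node_color="steelblue",
                 font_size=8, arrows=True)
plt.axis("off")
plt.show()
```

On real data, dominant voices surface as the largest, most connected nodes, which is the pattern the UC visualizations made visible.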