Bogus cures for COVID-19 made up a significant portion of the health misinformation on the platform.

Despite Facebook’s efforts to label misinformation during the coronavirus pandemic, a new report found that networks spreading inaccurate health claims racked up a total of 3.8 billion views on the platform over the past year.

The report, released by nonprofit advocacy group Avaaz this week, found that the peak of misinformation on Facebook came as the coronavirus pandemic spiked in April. Networks pumping out the misinformation received about 460 million views in just a month. 

One of the major types of health misinformation Avaaz identified was “content about bogus cures and unsubstantiated health treatments.” President Donald Trump has repeatedly pushed questionable treatments for COVID-19, like bleach disinfectants and the drug hydroxychloroquine, against the advice of his scientific advisers.

Pages and groups spreading misinformation far eclipsed leading, trustworthy health organizations on Facebook, according to the report. To make matters worse, researchers found only 16% of the health misinformation had a label on it identifying it as untrustworthy. 

“The social media platform acts as a ‘vector’ for conspiracy beliefs that are hindering people from protecting themselves during the COVID-19 outbreak,” Avaaz wrote. “Facebook itself has promised to keep people ‘safe and informed’ about the coronavirus, and well before the pandemic, acknowledged that ‘misleading health content is particularly bad for our community.’” 

In the wake of the pandemic, more than 100 doctors and nurses, many of them on the front lines of the COVID-19 crisis, wrote a letter to Facebook explaining that misinformation was making it more difficult to treat patients.

Avaaz’s report places blame on Facebook’s algorithm for boosting dangerous and incorrect health content. 

“Health misinformation is often sensationalist and provocative and will therefore receive significant engagement,” the report reads. “This engagement will, in turn, be interpreted by the algorithm as a reason to further boost this content in the News Feed, creating a vicious cycle where the algorithm is consistently and artificially giving health misinformation, for example, an upper hand over authoritative health content within the information ecosystem it presents to Facebook users.” 

Researchers at Avaaz also note that all of Facebook’s attempts to address the issue have fallen short.

Avaaz recommends a two-step solution to misinformation on the site. The first is to correct the record by providing users who have seen misinformation with “independently fact-checked corrections.” Researchers believe that first step would decrease belief in misinformation by almost 50% on average.

The second step, according to Avaaz, is to detox the algorithm by “downgrading misinformation posts and systemic misinformation actors in users’ News Feeds.” Researchers estimate this would decrease their reach by up to 80%.

A Facebook spokesperson told The Hill that although the company shares Avaaz’s goal of limiting misinformation, “their findings don’t reflect the steps we’ve taken to keep [misinformation] from spreading on our services.”

Lawmakers in Washington have called out the social media giant, noting that misinformation makes an already deadly pandemic even more dangerous.

“We cannot allow the public sphere, increasingly conducted online, to be dominated by harmful health misinformation and scams,” said Sen. Mark Warner in a statement to The Hill. “We must acknowledge that the scale of these platforms calls for increased scrutiny.” 

Facebook has promised steps to address the spread of health misinformation, such as elevating information from trusted sources and limiting the reach of potentially harmful posts.