TikTok search results full of misinformation, analysts say
TAMPA, Fla. - As TikTok continues to grow in popularity, especially among younger users, one group was prompted to study how much misinformation circulates on the social media platform.
Researchers at NewsGuard said they undertook the project because of the large number of young users on the platform. TikTok says it has 1 billion active users on the app.
"We found this and started this report because we read about how young people are increasingly turning to TikTok as a search engine and wanted to find out how it performed when it came to news," Jack Brewster, a NewsGuard senior analyst, said.
The team said it searched for prominent news topics, including COVID-19, the election, the Russian invasion of Ukraine and school shootings. The investigation found that almost 20% of the videos presented as search results contained misinformation.
Brewster said TikTok can be a major source of education for young people, as well as a place where they get their news and information about how to live their lives.
"And so if they're being fed misinformation, especially within the first 20 results, as we found, that obviously can be a big part of their news diet and where they get information. That's concerning," Brewster said.
In a statement, a TikTok spokesperson said:
"Our Community Guidelines make clear that we do not allow harmful misinformation, including medical misinformation, and we will remove it from the platform. We partner with credible voices to elevate authoritative content on topics related to public health, and partner with independent fact-checkers who help us to assess the accuracy of content."
In its community guidelines enforcement report, TikTok said it has removed more than 100 million videos so far this year for violating its policies. According to the company's website, content on TikTok first passes through technology that identifies and flags potential policy violations, such as adult nudity or violent and graphic content.
When content is identified as violating the platform's community guidelines, it is automatically removed or flagged for additional review by the company's safety team.