How Online Misinformation Exploits ‘Information Voids’
The past year, described as the biggest election year in recorded history, is also considered one of the biggest for the spread of misinformation and disinformation. Both refer to misleading content, but disinformation is deliberately generated. Political parties have long competed for voter approval and subjected their differing policies to public scrutiny. But the difference now is that online search and social media enable claims and counterclaims to be made almost endlessly.
A recent study in Nature highlights a previously underappreciated aspect of this phenomenon: the existence of data voids, information spaces that lack evidence, into which people searching to check the accuracy of controversial topics can easily fall. It might no longer be enough for search providers to combat misinformation and disinformation simply by using automated systems to deprioritize unreliable sources.
The mechanics of how misinformation and disinformation spread have long been an active area of research. According to the ‘illusory truth effect’, the more people are exposed to a claim, the more likely they are to perceive it as true, regardless of its veracity. This phenomenon pre-dates the digital age and now manifests itself through search engines and social media.
In their recent study, Kevin Aslett, a political scientist at the University of Central Florida in Orlando, and his colleagues found that people who used Google Search to evaluate the accuracy of news stories—stories that the authors, but not the participants, knew to be inaccurate—ended up trusting those stories more. This is because searching for such news made participants more likely to be shown sources that corroborated the inaccurate story.
Google’s algorithms rank news items by taking into account various measures of quality, such as how closely a piece of content aligns with the consensus of expert sources on a topic. In this way, the search engine deprioritizes unsubstantiated news, as well as the sources that carry it, in its results. Furthermore, its search results carry content warnings. For example, ‘breaking news’ indicates that a story is likely to change and that readers should come back later, when more sources are available. There is also an ‘about this result’ tab, which explains more about a news source—although users have to click on a separate icon to access it.
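To make the general idea concrete, here is a minimal, hypothetical sketch of re-ranking search results so that items poorly aligned with expert consensus are demoted. It is an illustration only: the data model, the consensus_alignment signal, the scores and the 0.4 threshold are all invented for this example and are not Google’s actual signals, systems or API.

```python
# Illustrative sketch only: demote results whose content poorly aligns with
# expert consensus. Field names, scores and the threshold are hypothetical.

from dataclasses import dataclass
from typing import List


@dataclass
class Result:
    url: str
    relevance: float            # 0..1: how well the page matches the query
    consensus_alignment: float  # 0..1: assumed agreement with expert sources


def rerank(results: List[Result], min_alignment: float = 0.4) -> List[Result]:
    """Order results by a combined score, pushing unsubstantiated items to the bottom."""
    score = lambda r: r.relevance * r.consensus_alignment
    substantiated = sorted(
        (r for r in results if r.consensus_alignment >= min_alignment),
        key=score, reverse=True)
    unsubstantiated = sorted(
        (r for r in results if r.consensus_alignment < min_alignment),
        key=score, reverse=True)
    return substantiated + unsubstantiated


if __name__ == "__main__":
    demo = [
        Result("https://example.org/expert-analysis", relevance=0.7, consensus_alignment=0.9),
        Result("https://example.net/viral-claim", relevance=0.9, consensus_alignment=0.1),
    ]
    for r in rerank(demo):
        print(r.url)
```

In a real system, quality signals of this kind are learned from many features and combined with relevance in far more sophisticated ways; the sketch only shows why a result that matches the query well can still be pushed down when it lacks corroboration.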
Clearly, copying terms from inaccurate news stories into a search engine reinforces misinformation, making it a poor method for verifying accuracy. So, what more could be done to route people to better sources? Google does not manually remove content or de-rank a search result; nor does it moderate or edit content in the way that social-media sites and publishers do. Google is sticking to the view that, when it comes to ensuring quality results, the future lies in automated methods that rank results on the basis of quality measures. But there could be additional approaches to prevent people from falling into data voids of misinformation and disinformation, as Google itself acknowledges and as Aslett and colleagues show.
Some type of human input, for example, might enhance internal fact-checking systems, especially on topics for which there might be a void of reliable information. How this can be done sensitively is an important research topic, not least because the end result should not be about censorship, but about protecting people from harm.
There’s also a body of literature on improving media literacy1—including suggestions for more, or better, education on discriminating between different sources in search results. Mike Caulfield, who studies media literacy and online verification skills at the University of Washington in Seattle, says that there is value in exposing a wider population to some of the skills taught in research methods. He recommends starting with influential people, giving them opportunities to improve their own media literacy as a way to then influence others in their networks.
One point raised by Paul Crawshaw, a social scientist at Teesside University in Middlesbrough, UK, is that research-methods teaching on its own does not always have the desired impact. Students benefit more when they learn about research methods while carrying out research projects. He also suggests that lessons could be learnt from studying the conduct and impact of health-literacy2 campaigns. In some cases, these can be less effective for people on lower incomes than for those on higher incomes. Media-literacy campaigns, he argues, will likewise need to account for the different needs of different population groups.
Clearly, there’s work to do. The need is urgent, because it’s possible that generative artificial intelligence and large language models will propel misinformation to much greater heights. The often-heard advice to ‘search it online’ could end up increasing the prominence of inaccurate news rather than reducing it.
1 Media literacy: the ability of media users to select, question, understand and evaluate the information presented by different media, to create and produce information, and to respond to it critically.
2 Health literacy: the ability of individuals to obtain and understand basic health information and services, and to use them to make sound decisions that maintain and promote their own health.