AI-Generated Faces Are More Trustworthy?
By Emily Willingham (translated by Chen Xianyu)

When TikTok videos emerged in 2021 that seemed to show “Tom Cruise” making a coin disappear and enjoying a lollipop, the account name was the only obvious clue that this wasn’t the real deal. The creator of the “deeptomcruise” account on the social media platform was using “deepfake” technology to show a machine-generated version of the famous actor performing magic tricks and having a solo dance-off.
One tell for a deepfake used to be the “uncanny valley” effect, an unsettling feeling triggered by the hollow look in a synthetic person’s eyes. But increasingly convincing images are pulling viewers out of the valley and into the world of deception promulgated by deepfakes.
The startling realism has implications for malevolent uses of the technology: its potential weaponization in disinformation campaigns for political or other gain, the creation of false porn for blackmail, and any number of intricate manipulations for novel forms of abuse and fraud. Developing countermeasures to identify deepfakes has turned into an “arms race” between security sleuths on one side and cybercriminals and cyberwarfare operatives on the other.
A new study published in the Proceedings of the National Academy of Sciences of the United States of America provides a measure of how far the technology has progressed. The results suggest that real humans can easily fall for machine-generated faces—and even interpret them as more trustworthy than the genuine article. “We found that not only are synthetic faces highly realistic, they are deemed more trustworthy than real faces,” says study co-author Hany Farid, a professor at the University of California, Berkeley. The result raises concerns that “these faces could be highly effective when used for nefarious purposes.”
“We have indeed entered the world of dangerous deepfakes,” says Piotr Didyk, an associate professor at the University of Italian Switzerland in Lugano, who was not involved in the paper. The tools used to generate the study’s still images are already generally accessible. And although creating equally sophisticated video is more challenging, tools for it will probably soon be within general reach, Didyk contends.
The synthetic faces for this study were developed in back-and-forth interactions between two neural networks, examples of a type known as generative adversarial networks. One of the networks, called a generator, produced an evolving series of synthetic faces like a student working progressively through rough drafts. The other network, known as a discriminator, trained on real images and then graded the generated output by comparing it with data on actual faces.
The generator began the exercise with random pixels. With feedback from the discriminator, it gradually produced increasingly realistic humanlike faces. Ultimately, the discriminator was unable to distinguish a real face from a fake one.
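For readers who want a concrete picture of that generator-discriminator loop, here is a minimal, hypothetical PyTorch sketch. It is not the study authors’ code: the tiny fully connected networks, the random tensors standing in for real photographs, and all the hyperparameters are illustrative assumptions. The faces in the study came from a far larger convolutional GAN trained on real photos; only the adversarial training pattern below carries over.

```python
import torch
import torch.nn as nn

LATENT_DIM = 64     # size of the random noise vector the generator starts from
IMG_DIM = 32 * 32   # flattened image size (a real face GAN uses conv layers)

# Generator: noise in, fake image out. Discriminator: image in, real/fake logit out.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),        # pixel values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                          # higher logit = "looks real"
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    # "Real" faces: random tensors here; a photo dataset in practice.
    real = torch.rand(128, IMG_DIM) * 2 - 1
    fake = generator(torch.randn(128, LATENT_DIM))

    # 1) Discriminator grades real images toward 1 and generated ones toward 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(128, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(128, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Generator improves by trying to make the discriminator answer "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(128, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Training stops, in principle, when step 1 can no longer tell the two image streams apart, which is exactly the endpoint the paragraph above describes.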
The networks trained on an array of real images representing Black, East Asian, South Asian and white faces of both men and women, in contrast with the more common use of white men’s faces in earlier research.
After compiling 400 real faces matched to 400 synthetic versions, the researchers asked 315 people to distinguish real from fake among a selection of 128 of the images. Another group of 219 participants got some training and feedback about how to spot fakes as they tried to distinguish the faces. Finally, a third group of 223 participants each rated a selection of 128 of the images for trustworthiness on a scale of one (very untrustworthy) to seven (very trustworthy).
The first group did not do better than a coin toss at telling real faces from fake ones, with an average accuracy of 48.2 percent. The second group failed to show dramatic improvement, reaching only about 59 percent accuracy even with feedback on their choices. The group rating trustworthiness gave the synthetic faces a slightly higher average rating of 4.82, compared with 4.48 for real people.
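Taken together, the figures say that untrained viewers were effectively guessing, that training helped only modestly, and that the synthetic faces actually won the trust comparison. A quick back-of-the-envelope check, using only the numbers quoted in the paragraph above:

```python
# Values reported in the article; 50% is the chance baseline for a
# two-way real-vs-fake judgment.
chance = 50.0
group1_accuracy = 48.2              # no training
group2_accuracy = 59.0              # with training and feedback
trust_synthetic, trust_real = 4.82, 4.48   # mean ratings on a 1-7 scale

print(f"Group 1 vs. chance:           {group1_accuracy - chance:+.1f} points")
print(f"Gain from training:           {group2_accuracy - group1_accuracy:+.1f} points")
print(f"Trust gap (synthetic - real): {trust_synthetic - trust_real:+.2f}")
# Group 1 vs. chance:           -1.8 points
# Gain from training:           +10.8 points
# Trust gap (synthetic - real): +0.34
```

The untrained group landed slightly below chance, and even the trained group missed roughly four fakes in ten, which is the gap the “arms race” mentioned earlier is trying to close.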