Want to Get Humans to Trust Robots? Let Them Dance
By Sam Jones; translated by Yuan Feng

A performance with living and mechanical partners can teach researchers how to design more relatable bots.
A dancer shrouded in shades of blue rises to her feet and steps forward on stage. Under a spotlight, she gazes at her partner: a tall, sleek robotic arm. As they dance together, the machine's fluid movements make it seem less stereotypically robotic—and, researchers hope, more trustworthy.
“When a human moves one joint, it isn’t the only thing that moves. The rest of our body follows along,” says Amit Rogel, a music technology graduate researcher at Georgia Institute of Technology. “There’s this slight continuity that almost all animals have, and this is really what makes us feel human in our movements.” Rogel programmed this subtle follow-through into robotic arms to help create FOREST, a performance collaboration between researchers at Georgia Tech, dancers at Kennesaw State University and a group of robots.
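The follow-through Rogel describes can be approximated in code. The sketch below is a minimal illustration, not the FOREST implementation: each joint's commanded motion "leaks" into neighboring joints, attenuated by distance along the arm, and a low-pass step keeps the resulting motion continuous. The `coupling` and `alpha` values are illustrative assumptions.

```python
def follow_through(current, target, coupling=0.15):
    """Give each joint an attenuated share of its neighbors' motion,
    mimicking whole-body continuity. Angles are in radians; `coupling`
    (an assumed value) controls how strongly motion propagates."""
    deltas = [t - c for t, c in zip(target, current)]
    adjusted = []
    for j in range(len(target)):
        # Neighboring joints contribute a share that decays with distance.
        carry = sum(d * coupling ** abs(j - i)
                    for i, d in enumerate(deltas) if i != j)
        adjusted.append(target[j] + carry)
    return adjusted

def smooth_step(current, goal, alpha=0.25):
    """Low-pass filter: move a fraction of the way toward the goal each
    tick, so no joint ever snaps abruptly to a new pose."""
    return [c + alpha * (g - c) for c, g in zip(current, goal)]
```

With a three-joint arm at rest, commanding only the base joint to move still nudges the other two: `follow_through([0, 0, 0], [1, 0, 0])` returns roughly `[1.0, 0.15, 0.0225]`.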
The goal is not only to create a memorable performance, but to put into practice what the researchers have learned about building trust between humans and robots. Robots are already widely used, and the number of collaborative robots—which work with humans on tasks such as tending factory machines and inspecting manufacturing equipment—is expected to climb significantly in the coming years. But although they are becoming more common, trust in them is still low—and this makes humans more reluctant to work with them. “People may not understand how the robot operates, nor what it wants to accomplish,” says Harold Soh, a computer scientist at the National University of Singapore. He was not involved in the project, but his work focuses on human-robot interaction and developing more trustworthy collaborative robots.
Although humans love cute fictional machines like R2-D2 or WALL-E, the best real-world robot for a given task may not have the friendliest looks, or move in the most appealing way. “Calibrating trust can be difficult when the robot’s appearance and behavior are markedly different from humans,” Soh says. However, he adds, even a disembodied robot arm can be designed to act in a way that makes it more relatable. “Conveying emotion and social messages via a combination of sound and motion is a compelling approach that can make interactions more fluent and natural,” he explains.
That’s why the Georgia Tech team decided to program nonhumanoid machines to appear to convey emotion, through both motion and sound. Rogel’s latest work in this area builds off years of research. For instance, to figure out which sounds best convey specific emotions, Georgia Tech researchers asked singers and guitarists to look at a diagram called an “emotion wheel,” pick an emotion, and then sing or play notes to match that feeling. The researchers then trained a machine learning model—one they planned to embed in the robots—on the resulting data set. They wanted to allow the robots to produce a vast range of sounds, some more complex than others. “You could say, ‘I want it to be a little bit happy, a little excited and a little bit calm,’” says project collaborator Gil Weinberg, director of Georgia Tech’s Center for Music Technology.
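Weinberg's "a little bit happy, a little excited and a little bit calm" describes blending weighted emotions into one sound. The team's actual model is learned from the singers' data; the sketch below only illustrates the blending idea with made-up per-emotion sound parameters (pitch offset in semitones, tempo multiplier), which are assumptions for the example.

```python
# Hypothetical per-emotion sound parameters; in the real system these
# mappings are learned from recordings of singers and guitarists.
EMOTION_PARAMS = {
    "happy":   {"pitch": 4.0,  "tempo": 1.3},
    "excited": {"pitch": 7.0,  "tempo": 1.6},
    "calm":    {"pitch": -2.0, "tempo": 0.8},
}

def blend_emotions(weights):
    """Blend sound parameters by emotion weight, so a request like
    'a little happy, a little excited, a little calm' yields a single
    set of parameters for the robot's sound generator."""
    total = sum(weights.values())
    blended = {"pitch": 0.0, "tempo": 0.0}
    for emotion, weight in weights.items():
        for key in blended:
            blended[key] += (weight / total) * EMOTION_PARAMS[emotion][key]
    return blended
```

An equal mix of the three emotions, `blend_emotions({"happy": 1, "excited": 1, "calm": 1})`, averages the parameters; skewing the weights pulls the sound toward one feeling.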
Next, the team worked to tie those sounds to movement. In 2020, the researchers had demonstrated that combining movement with emotion-based sound improved trust in robotic arms in a virtual setting (a requirement fostered by the pandemic). But that experiment only needed the robots to perform four different gestures to convey four different emotions. To broaden a machine’s emotional-movement options for his new study, which has been conditionally accepted for publication in Frontiers in Robotics and AI, Rogel waded through research related to human body language. “For each one of those body language [elements], I looked at how to adapt that to a robotic movement,” he says. Then, dancers affiliated with Kennesaw State University helped the scientists refine those movements. As the performers moved in ways intended to convey emotion, Rogel and fellow researchers recorded them with cameras and motion-capture suits, and subsequently generated algorithms so that the robots could match those movements. “I would ask [Rogel], ‘can you make the robots breathe?’ And the next week, the arms would be kind of ‘inhaling’ and ‘exhaling,’” says Kennesaw State University dance professor Ivan Pulinkala.
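Pulinkala's "inhaling and exhaling" arms suggest one of the simplest lifelike motions: a slow sinusoidal offset layered onto whatever pose the arm holds. This is a minimal sketch of that idea, not the production code; the period and amplitude values are illustrative assumptions.

```python
import math

def breathing_offset(t, period=4.0, amplitude=0.05):
    """Return a small joint-angle offset (radians) at time t (seconds),
    rising and falling like a breath. A ~4-second period and 0.05-radian
    amplitude are assumed values chosen to look calm and subtle."""
    return amplitude * math.sin(2 * math.pi * t / period)
```

Added to a held pose each control tick, the offset makes the arm swell upward ("inhale") for half the period and settle back ("exhale") for the other half: `breathing_offset(1.0)` is the peak `0.05`, and `breathing_offset(3.0)` is the trough `-0.05`.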