New research shows that children are more likely than adults to succumb to robotic peer pressure, a disturbing finding given how quickly children are coming into contact with socially intelligent machines.
An experiment led by Anna-Lisa Vollmer of Bielefeld University in Germany is a reminder that modern technology can have a profound impact on children, influencing the way they think and express their opinions, even when they know, or at least suspect, that those opinions are wrong.
Vollmer's experiment was designed to measure the social influence robots exert on children and adults, and in particular how robotic peer pressure might contribute to social conformity. The results, published today in the journal Science Robotics, show that adults are largely unaffected by robotic influence, but that children tend to go along with a robotic peer group's views, even when those views are clearly wrong. The research means we need to pay close attention to the social impact that robots and artificial intelligence have on young children; given how often children now interact with social machines, this is an increasingly important issue.
Vollmer's team used a well-established technique developed by psychologist Solomon Asch, now known as the Asch paradigm, which measures how people conform to others in a simple visual judgment task. In this case, Vollmer asked her participants to match a set of vertical lines shown on a computer screen. The task itself is not difficult, because it is easy to tell which two lines match in length. The point is not to test vision, but to assess participants' ability to resist conforming when their peers give incorrect answers.
For the new study, Vollmer tested both adults and children. Sixty adults were divided into three groups: one in which participants worked alone (the control group), one in which they worked alongside three other human participants, and one in which they worked alongside three robots (SoftBank Robotics' Nao humanoid robots were used for the experiment). In two of the trials, all three members of the peer groups (human or robot) unanimously gave the same wrong answer. Consistent with earlier studies, the adults tended to conform to the opinions of their human peers, even when the answer was obviously wrong. But the adults were not swayed by peer pressure from the social robots, resisting the machines' incorrect answers. Interestingly, this result contradicts the "computers as social actors" (CASA) hypothesis, which holds, in the study authors' words, that "people naturally and unconsciously treat computers and other forms of media in a fundamentally social way, attributing humanlike qualities to technology."
The same experiment was conducted with 43 children between the ages of 7 and 9. The test was identical to the one given to adults, except that there was no human peer-pressure group; it is already well established that children are susceptible to social influence from other people, so in this case the researchers wanted to focus on the effect of robot companions. The results showed that, unlike the adults, the children were "significantly influenced" by the robot companions, giving the same wrong answers as the robots nearly 75 percent of the time.
One possible explanation is that the children were simply responding to the novelty of the situation, namely having to share a social space with robots. But the researchers write that "this criticism is unfounded," because no drop in accuracy was observed in the control group. Another possibility is that the children were intimidated by the presence of the adult experimenter, who remained in the room during the task. The researchers reject this as well, saying that "this would still indicate that the robots exerted peer pressure, and does not invalidate the observations and conclusions," adding that robots may be owned by a person, a group of people, or an organization, and could thereby become agents of indirect social peer pressure.
This observation, the robots' apparent ability to induce conformity in children, is an important one to acknowledge. The researchers write in the study:
From this perspective, care must be taken when designing the applications and artificial intelligence of these physical machines, especially because little is understood about the long-term effects that exposure to social robots may have on the development of children and vulnerable sections of society. More specifically, problems may arise not only from the deliberate programming of malicious behavior (for example, robots designed to deceive), but also from biases unconsciously present in artificial systems, or from the learning system itself misinterpreting data it has gathered autonomously. For example, if a robot recommends products, services, or preferences, will compliance and conformity be higher than with traditional advertising methods?
Accordingly, the researchers recommend an ongoing discussion of this issue, including product regulations and other protective measures, to minimize the risks children face during child-machine interactions.
There is no doubt that as socially intelligent robots and artificial intelligence become more capable and more widespread, the potential for harm to children will grow with them. Children do not yet have the intellectual or emotional resources to resist the influence of these technologies. It is a thought-provoking finding, and one that calls for further reflection and action.