Article originally written by Sejal Mackey ’27.
AI, already ubiquitous across almost every kind of website, has also begun to take on a more social role: acting as a friend to users and engaging in social interactions. In an Imperial College London study, human participants developed sympathy for AI bots that were ostracized during a simple game.
Empathy is wired into humans; there is little doubt that we are built to notice and stop unfairness toward other people. With AI, however, it is less clear whether humans view sociable robots as having feelings. The study included 244 human participants, whom researchers observed as they responded to an AI virtual agent being excluded by a human player in a game called "Cyberball." The game simply involved players passing a virtual ball to one another on screen, but in some rounds the AI bot was left out of the game of catch. Participants often favored throwing the ball to the bot after the unfair treatment, with older participants being more likely to notice and correct the unfairness.
As AI companions become more popular, users may become more likely to include them as team members and engage with them socially. This can be an advantage in work scenarios, but unhealthy if AI is used to replace human interaction. It is important to help people distinguish between virtual and real interactions so that they do not fall into unhealthy patterns with AI bots. The Cyberball game also has its limitations, since it does not closely represent how humans interact with chatbots or voice assistants in real-life scenarios. New experiments that include face-to-face conversations with AIs in different settings would help test how far these findings extend to human-AI interaction.
Overall, this study raises questions about what the future of AI will look like as it becomes woven into human life. Humans could start to see AI as social beings, but this blurs the line between AI as a tool and AI as a friend, which could lead to misplaced emotional attachment. As a suggested solution, the study's researchers recommend avoiding designs that make AI overly human-like, precisely because of the connections humans could form with it.
Ultimately, AI should be used in a way that preserves a clear distinction between humans and robots. This is the most ethical design, and the one that keeps human social nature healthy. Still, the study raises an important question: how can AI developers make their bots as helpful as possible while remaining ethical?
Sources:
Humans Sympathize With And Protect AI Bots From Playtime Exclusion, Finds Study (Cover Image)
https://hwbusters.com/freestyle/humans-sympathize-with-and-protect-ai-bots-from-playtime-exclusion-finds-study/
Humans protect AI bots from playtime exclusion, finds Imperial study
https://www.imperial.ac.uk/news/257159/humans-protect-ai-bots-from-playtime/
