Can we be friends with robots? Research says yes

In the 2012 film “Robot and Frank,” the protagonist, a retired cat burglar named Frank, is experiencing the early symptoms of dementia. Concerned and guilty, his son buys him a “home robot” that can talk, do household chores like cooking and cleaning, and remind Frank to take his medicine. It’s a robot the likes of which we’re getting closer to building in the real world.

The film follows Frank, who is initially horrified by the idea of living with a robot, as he gradually begins to see the robot as both functionally useful and socially companionable. The film ends with a clear bond between man and machine, such that Frank is protective of the robot when the pair of them run into trouble.

This is, of course, a fictional story, but it challenges us to explore different kinds of human-to-robot bonds. My recent research on human-robot relationships examines this topic in detail, looking beyond sex robots and robot love affairs to examine the most profound and meaningful of relationships: friendship.

My colleague and I identified some potential risks – like the abandonment of human friends for robotic ones – but we also found several scenarios where robotic companionship can constructively augment people’s lives, leading to friendships that are directly comparable to human-to-human relationships.

Philosophy of friendship

The robotics philosopher John Danaher sets a very high bar for what friendship means. His starting point is the “true” friendship first described by the Greek philosopher Aristotle, which saw an ideal friendship as premised on mutual goodwill, admiration, and shared values. In these terms, friendship is about a partnership of equals.

Building a robot that can satisfy Aristotle’s criteria is a substantial technical challenge and is some considerable way off – as Danaher himself admits. Robots that may seem to be getting close, such as Hanson Robotics’ Sophia, base their behavior on a library of pre-prepared responses: a humanoid chatbot, rather than a conversational equal. Anyone who’s had a testing back-and-forth with Alexa or Siri will know AI still has some way to go in this regard.

In the video below: the humanoid robot Sophia, developed by Hong Kong-based Hanson Robotics.

Aristotle also talked about other forms of “imperfect” friendship – such as “utilitarian” and “pleasure” friendships – which are considered inferior to true friendship because they don’t require mutual bonding and are often to one party’s unequal benefit. This form of friendship sets a comparatively low bar which some robots – like “sexbots” and robotic pets – clearly already meet.

Artificial amigos

For some, relating to robots is just a natural extension of relating to other things in our world – like people, pets, and possessions. Psychologists have even observed how people respond naturally and socially toward media artefacts like computers and televisions. Humanoid robots, you’d have thought, are more personable than your home PC.

However, the field of “robot ethics” is far from unanimous on whether we can – or should – develop any form of friendship with robots. For an influential group of UK researchers who charted a set of “ethical principles of robotics,” human-robot “companionship” is an oxymoron, and to market robots as having social capabilities is deceptive and should be treated with caution – if not alarm. For these researchers, wasting emotional energy on entities that can only simulate emotions will always be less rewarding than forming human-to-human bonds.

But people are already developing bonds with basic robots – like vacuum-cleaning and lawn-trimming machines that can be bought for less than the price of a dishwasher. A surprisingly large number of people give these robots pet names – something they don’t do with their dishwashers. Some even take their cleaning robots on holiday.

Other evidence of emotional bonds with robots includes the Shinto blessing ceremony for Sony Aibo robot dogs that were dismantled for spare parts, and the squad of US troops who fired a 21-gun salute, and awarded medals, to a bomb-disposal robot named “Boomer” after it was destroyed in action.

A robot on wheels is attended to by a soldier in combat gear.
A military bomb disposal robot similar to ‘Boomer’. US Marine Corps photo by Lance Cpl. Bobby J. Segovia/Wikimedia Commons

These stories, and the psychological evidence we have so far, make clear that we can extend emotional connections to things that are very different from us, even when we know they are manufactured and pre-programmed. But do those connections constitute a friendship comparable to that shared between humans?

True friendship?

A colleague and I recently reviewed the extensive literature on human-to-human relationships to try to understand how, and if, the concepts we found could apply to bonds we might form with robots. We found evidence that many valued human-to-human friendships do not in fact live up to Aristotle’s ideal.

We noted a wide range of human-to-human relationships, from relatives and lovers to parents, carers, service providers, and the intense (but sadly one-way) relationships we maintain with our celebrity heroes. Few of these relationships could be described as completely equal and, crucially, they are all destined to evolve over time.

All this means that expecting robots to form Aristotelian bonds with us is to set a standard even human relationships fail to live up to. We also observed forms of social connectedness that are rewarding and satisfying and yet are far from the ideal friendship outlined by the Greek philosopher.

We know that social interaction is rewarding in its own right and something that, as social mammals, humans have a strong need for. It seems plausible that relationships with robots could help to address the innate urge we all feel for social connection – like providing physical comfort, emotional support, and enjoyable social exchanges – currently provided by other humans.

Our paper also discussed some potential risks. These arise particularly in settings where interaction with a robot could come to replace interaction with people, or where people are denied a choice as to whether they interact with a person or a robot – in a care setting, for instance.

These are important concerns, but they’re possibilities and not inevitabilities. In the literature we reviewed, we actually found evidence of the opposite effect: robots acting to bridge social interactions with others, acting as ice-breakers in groups, and helping people to improve their social skills or to boost their self-esteem.

It appears likely that, as time progresses, many of us will simply follow Frank’s path toward acceptance: scoffing at first, before settling into the idea that robots can make surprisingly good companions. Our research suggests that’s already happening – though perhaps not in a way in which Aristotle would have approved. The Conversation

Published February 16, 2021 — 10:51 UTC
