Right now, we are on the verge of major innovations in mobile tech, robotics, nanotech and countless other fields. It is an exciting time, but potentially a scary one as well.
As robots and AI continue to advance, there will be some very real concerns regarding both privacy and ethics. Speaking at Google’s Big Tent conference in London this week, a panel of robotics experts came together to discuss the future of robots, focusing in particular on the moral side of things.
The panel was made up of Jon Snow (moderator), Bertolt Meyer from the University of Zurich, Observer columnist Carole Cadwalladr and Noel Sharkey, a member of the Campaign to Stop Killer Robots and a robotics and AI expert at the University of Sheffield.
Some of the big questions asked were how robotic tech will further change society, and who will determine what we should and shouldn’t do.
According to Meyer:
“I visited a lab in southern California where they are creating a chip that will go in people’s brains to restore memory function in Alzheimer’s patients. Now when they put this chip in healthy rats, they got a super rat with excellent memory. I asked one of the scientists working on this for 30 years, should we be doing this with humans? He says, ‘I don’t know’. The ethical implications hadn’t occurred to him.”
While the panelists obviously had different opinions about the future of these advanced technologies, they all ultimately agreed that scientists are probably not the ones who should decide on the ethics of engineering autonomous robots. But then who should? A good question.
“Business people aren’t the best to answer these questions either — as soon as there’s money to be made, lots of questions you or I ask will be put aside. Now they are niche products, but if they become available to the mass market through augmenting human capabilities, it could become profitable”.
According to Sharkey, “science should be allowed to progress, but we shouldn’t be caught off-guard the way we were with the internet”.
“It’s always humans making the decision to kill people, and it’s crucial for the laws of warfare that we can separate combatants and civilians. No one will be held accountable otherwise”.
Ultimately, it comes down to regulation at the governmental level, as well as everyday people understanding the ethics of what they should or shouldn’t do with the tech.
Robotic Tech Opens the Door to Both Good and Bad
But of course these tech advances cut both ways. Meyer spoke about his artificial hand, without which he feels “super incapacitated”. With advancements in robotic limbs, devices that help people see and hear, and other enhancement tech, we have the potential to truly change lives for the better.
Unfortunately, there are also negative implications here. “When it comes to enhancement, we have to worry about it ethically. Will we get a society where people are forced to [replace healthy limbs] otherwise they won’t get a job?” says Sharkey.
Then there is the idea that not everyone will be able to afford such tech, further dividing classes.
As Cadwalladr puts it:
“The tools by themselves are wonderful, but the way we use technology is not symmetrical. Those who have more money will become super mortals. When I saw Sergey Brin with his Google Glass at a TED conference, he went on this game with Emotiv technology that shows your mental focus. Sergey leapt to the top of the leaderboard. I did pretty average. Sergey is highly intelligent already, but if I put the glasses on, they won’t increase my abilities”.
Snow, too, seemed to believe that inequality grows alongside the advancement of technology. Cadwalladr and Sharkey also discussed the good and bad sides of using robots as assistants in hospitals and care homes.
“They’re talking about using them as nursing assistants or in elder care — but doesn’t that take away from our humanity? Isn’t it better for people to talk to people?” said Cadwalladr. Sharkey, for his part, is worried about “human dignity” and “accountability”. There are certainly many questions about what robotics will mean for the future. There are tons of positives on the horizon, but there are real concerns as well. So how do we proceed? We need to start seriously talking and regulating now.
The future is coming; we just need to be ready for it. What do you think about the future of robotics and AI? Who should be regulating these technologies and making the bigger ethical decisions relating to them?