The fear of a technological singularity

In 2016, a Tesla operating on Autopilot crashed, killing its driver. It was not the first vehicle to be involved in a fatal crash, but it was the first of its kind, and the tragedy opened up a host of ethical dilemmas.

With autonomous systems such as driverless vehicles, there are two main grey areas: responsibility and ethics. Widely discussed at various forums is the 'dilemma' in which a driverless car must choose between killing pedestrians and killing its own passengers.

Here, both responsibility and ethics are at play. The cold logic of numbers that defines the mind of such a system can sway it either way, and the 'fear' is that the passengers sitting inside the car have no control.

Any new technology brings a new set of challenges. But it appears that creating artificial intelligence-driven technology products is almost like unleashing Frankenstein's monster.

Artificial Intelligence (AI) is currently at the cutting edge of science and technology. Advances in technology, including techniques such as deep learning and artificial neural networks, are behind many new developments, such as AlphaGo, the machine that defeated the world champion at Go.

However, though AI holds great positive potential, many are afraid of what it could do, and rightfully so. There remains the fear of a technological singularity: a circumstance in which AI machines would surpass human intelligence and take over the world.

Researchers in genetic engineering face a similar question. This dark side of technology, however, should not be used to decree the closure of all AI or genetics research. We need to strike a balance between human needs and technological aspirations.

Long before the current commotion over ethical AI technology, the celebrated science-fiction author Isaac Asimov came up with his laws of robotics.

Exactly 75 years ago, in the 1942 short story 'Runaround', Asimov unveiled an early version of his laws. In their current form, the laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Given the pace at which AI systems are developing, there is an urgent need to put in some checks and balances so that things do not go out of hand.

There are many organisations now looking at the legal, technical, ethical and moral aspects of a society driven by AI technology. The Institute of Electrical and Electronics Engineers (IEEE) already has Ethically Aligned Design, a framework addressing these issues, in place. AI researchers are drawing up a laundry list similar to Asimov's laws to help people engage more fearlessly with this beast of a technology.

In January 2017, the Future of Life Institute (FLI), a charity and outreach organisation, hosted its second Beneficial AI Conference. There, AI experts developed the 'Asilomar AI Principles', which aim to ensure that AI remains beneficial, and not harmful, to the future of humankind.

The key questions that came out of the conference were: "How can we make future AI systems robust, so that they do what we want without malfunctioning or getting hacked? How can we grow our prosperity through automation while maintaining people's resources and purpose? How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI? What set of values should AI be aligned with, and what legal and ethical status should it have?"

Ever since they unshackled the power of the atom, scientists and technologists have been at the forefront of the movement emphasising 'science for the betterment of man'. This duty was forced upon them when the first atom bomb was built in the US. Little did they realise that the search for the structure of the atom could give rise to such a nasty subplot. With AI, we are in a similar situation, or maybe a worse one.

No wonder that at the IEEE meeting which gave birth to the ethical AI framework, the dominant thought was that humans and all living beings must remain at the centre of all AI discussions. People must be informed at every level, right from the design stage through the development of AI-driven products for everyday use.

While developing ethically aligned technologies is a laudable effort, it raises another question that has come up at various AI conferences: are humans ethical?