A.F. van Lier | Part of a book | Publication date: 15 January 2016
Over the past few years, the rapid development of artificial intelligence, the huge volumes of data available in the cloud, and the increasing capacity of machines and software to learn have prompted an ever more widespread debate on the social consequences of these developments. Autonomous cars and autonomous weapon systems, which operate on self-learning software without human intervention and are deemed capable of making life-and-death decisions, are raising broader questions about whether we as human beings will be able to control this kind of intelligence, autonomy, and interconnection between machines. According to Basl, these developments mean that ‘ethical cognition itself must be taken as a subject matter of engineering.’ At present, contemporary forms of artificial intelligence, or in the words of Barrat, ‘the ability to solve problems, learn, and take effective, humanlike actions, in a variety of environments,’2 do not yet possess an autonomous moral status or the ability to reason. At the same time, it is still unclear which basic features could be exploited to shape an autonomous moral status for these intelligent systems. For learning and intelligent machines to develop ethical cognition, feedback loops would have to be inserted between the autonomous and intelligent systems. Such feedback may help these machines learn behaviour that fits within an ethical framework that is yet to be developed.
Author(s) - affiliated with Rotterdam University of Applied Sciences
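The abstract's closing idea, that feedback loops could let a learning machine acquire behaviour fitting an ethical framework, can be illustrated with a minimal sketch. This is purely an assumption-laden toy, not anything from the chapter: the action names, the `ETHICALLY_ACCEPTABLE` set standing in for a yet-to-be-developed ethical framework, and the `EthicalFeedbackLoop` class are all hypothetical.

```python
import random

# Toy sketch of a feedback loop for 'ethical cognition': an agent proposes
# actions, an external ethical judgement feeds back a score, and the agent
# gradually prefers actions judged acceptable. All names here are
# illustrative assumptions, not from the source text.

ACTIONS = ["brake", "swerve", "accelerate"]
ETHICALLY_ACCEPTABLE = {"brake", "swerve"}  # stand-in for an ethical framework

class EthicalFeedbackLoop:
    def __init__(self, actions):
        # Accumulated feedback score per action; all start neutral.
        self.scores = {a: 0.0 for a in actions}

    def choose(self):
        # Prefer actions with the highest accumulated feedback,
        # breaking ties at random.
        best = max(self.scores.values())
        candidates = [a for a, s in self.scores.items() if s == best]
        return random.choice(candidates)

    def feedback(self, action, acceptable):
        # The feedback loop itself: reinforce acceptable behaviour,
        # penalise the rest.
        self.scores[action] += 1.0 if acceptable else -1.0

agent = EthicalFeedbackLoop(ACTIONS)
for _ in range(100):
    action = agent.choose()
    agent.feedback(action, action in ETHICALLY_ACCEPTABLE)

# After repeated feedback, the agent's preferred action lies inside the
# (assumed) ethical framework.
preferred = max(agent.scores, key=agent.scores.get)
```

The design point of the sketch is that the ethical judgement sits outside the learner: the machine never reasons morally itself, it only adapts to the feedback signal, which mirrors the abstract's claim that such systems do not yet possess autonomous moral status.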