Moralities of Intelligent Machines is a research group studying the moral psychology of robotics and artificial intelligence (AI).
In modern societies, autonomous industrial machines, self-driving cars and healthcare robots make an increasing number of decisions with moral ramifications. The moral "code of conduct" of these AIs must be programmed and implemented by humans. However, there are no agreed-upon rules to guide the development of moral robotics; at present, this development rests almost solely on the shoulders of large companies, with minimal input from the scientific community or the general public.
We are particularly interested in how humans perceive robots that make moral decisions, and in what type of morality we would ideally like robots to abide by. We currently use an array of tools from experimental social psychology and cognitive science to study human behavior and perception in situations where robots make moral decisions, such as decisions involving human lives. We also participate actively in societal discussion, at both the governmental and public levels.
In our research, we study people's attitudes, feelings and thoughts towards 1) moral decisions made by caregiving robots; 2) life-like robot prostitutes; 3) memory implants that restore or increase human memory capacity; and 4) mind upload, that is, uploading one's consciousness onto a computer. We are also developing extensive psychometric tools for studying attitudes towards robots and robotics, as well as science fiction hobbyism. Such tools are currently missing from the scientific literature; our preliminary results show that they have excellent psychometric properties in predicting human behavior in situations where morality and robotics intersect. We support open scientific practices.