While self-driving vehicle technology is still being refined, one crucial task is developing "machine intelligence," a shorthand for the kinds of decisions a driverless car will have to make as it navigates complicated roadways.
Researchers at the Massachusetts Institute of Technology have created a method for "gathering a human perspective on moral decisions made by machine intelligence," by presenting a series of choices that could potentially face a driverless car whose brakes have failed.
Which is preferable? Should a driverless vehicle plow into a crowd of jaywalking runners, or swerve around the pedestrians and kill a family with three children?
Characters in the scenarios go beyond age and gender to include weight, profession and socioeconomic status; even animals make an appearance. At the end of the battery of choices, users see their results: how much weight did they give to saving more lives, protecting passengers, or upholding the law?
They're tough choices, but if we can't face them, should we really be asking machines to make them for us?