Sunday 10 March 2019

Driverless cars

This is a fun and thought-provoking series of questions about driverless cars (I just did it together with Mama).

If you click on the link, you must decide whether a driverless car whose brakes have failed should smash into this or smash into that.  Whichever option you choose, someone (or at least an imaginary someone) will die.

Philosophers like to call these sorts of questions "trolley problems", and opinions about their usefulness have long been mixed.  To some people, they are too artificial to be meaningful for real life -- all the rich, relevant, real-life information has been abstracted away to the point of uselessness.  To other people, they help us clarify which values and principles are important -- all the distractions have been abstracted away, leaving just the hard questions.

But whatever the case, with driverless car technology improving so rapidly, many people now see trolley problems as directly relevant to how driverless cars are programmed.

If a human is at the wheel of a car in an unavoidable crash, we don't expect that person to make perfect split-second judgements about where to steer.  People are excused as just reacting in the moment.

But for a driverless car, there is no such thing as reacting in the moment, so its reactions cannot be excused in the same way.  How it reacts depends on its pre-crash programming, which was consciously encoded weeks, months or years in advance.

As a society, we have choices about what values and principles we encode into driverless cars.  In an unavoidable accident:
  • should we prioritise humans over non-human animals?
  • should we prioritise more lives over fewer lives?
  • should we prioritise passengers or pedestrians?
  • should we prioritise law-abiders over law-flouters?
  • should we prioritise old people or children?
Face recognition technology may even make it possible to identify each person involved while the accident is unfolding:
  • should we prioritise doctors over criminals?
  • should people have a "social usefulness" ranking, which determines who gets hit in an unavoidable accident?
As a society, we also have choices about who gets to choose the values and principles encoded into driverless cars:
  • should it be entirely up to each for-profit car manufacturer?
  • or should there be some sort of government regulation/standardisation?
  • should the general population be consulted?
  • or should experts decide (who counts as an expert?)?
  • should the algorithms be publicly available to everyone, or should they be hidden by corporations or ministries?
Lots of very tricky questions.

And it is important to remember that there are no value-neutral, purely objective mathematical algorithms here.  Whatever algorithms are encoded into driverless cars, they will embody the values of the people who encode them.
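
To make that concrete, here is a minimal sketch in Python of what encoding such values could look like.  Everything in it is hypothetical and invented for illustration -- the weights, the names, the scoring scheme; it describes no real manufacturer's code.  The point is simply that the numbers answer the bullet-point questions above, and somebody has to choose them.

# A minimal, entirely hypothetical sketch.  Even a "simple" crash-choice
# routine forces someone to pick the numbers, and every number is an
# ethical choice, not a mathematical fact.

WEIGHTS = {
    "human": 10.0,              # humans over non-human animals?
    "animal": 1.0,
    "child_bonus": 2.0,         # children over old people?
    "jaywalking_penalty": 0.5,  # law-abiders over law-flouters?
}

def harm_score(group):
    """Cost of hitting this group, under the chosen weights.

    `group` is a list of dicts like
    {"kind": "human", "child": True, "jaywalking": False}.
    """
    score = 0.0
    for being in group:
        value = WEIGHTS[being["kind"]]
        if being.get("child"):
            value += WEIGHTS["child_bonus"]
        if being.get("jaywalking"):
            value *= WEIGHTS["jaywalking_penalty"]
        score += value
    return score

def choose_victims(option_a, option_b):
    """Steer toward whichever group the weights say costs less to hit."""
    return "A" if harm_score(option_a) < harm_score(option_b) else "B"

# Two pedestrians crossing legally vs. three jaywalkers.
legal_pair = [{"kind": "human"}, {"kind": "human"}]
jaywalkers = [{"kind": "human", "jaywalking": True} for _ in range(3)]

print(choose_victims(legal_pair, jaywalkers))  # "B": hit the jaywalkers
# Change jaywalking_penalty to 1.0 and the same code hits the legal pair
# instead -- the "objective" answer flips with one value judgement.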

Here is an article introducing some of the results of the above-linked series of questions.

I've taught this topic a few times to critical thinking students.  I often show them bits of YouTube videos, such as from this, this, this and this.
