Machines and built-in morality

With Google’s driverless cars now street legal in California, Florida, and Nevada, Gary Marcus, writing for The New Yorker, ponders a world where machines need a built-in morality system.

That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems. Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.

Data analysis seems to be headed in the same direction. Machines will have to start making human-like decisions as data represents more of the real world and looks less like snippets in time. The more the gap between the numbers and what they represent shrinks, the more we have to think about ethics, privacy, and whether what we’re doing is right.
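
To make the dilemma concrete, here’s a minimal sketch of what “making the call” could look like if it were framed as expected-loss minimization. This is purely illustrative: the actions, probabilities, and loss values are invented placeholders, not anything a real autonomous vehicle uses.

```python
# Toy expected-loss decision for the bridge scenario above.
# Every action, probability, and loss value here is made up for
# illustration; a real vehicle's cost model would be far richer.

ACTIONS = {
    # action: list of (probability, lives_lost) outcome pairs
    "swerve":     [(0.5, 1), (0.5, 0)],   # owner may die; kids are spared
    "keep_going": [(0.9, 40), (0.1, 0)],  # all forty kids are likely lost
}

def expected_loss(outcomes):
    """Expected lives lost for one action's outcome distribution."""
    return sum(p * lives for p, lives in outcomes)

def choose_action(actions):
    """Return the action with the minimum expected loss."""
    return min(actions, key=lambda a: expected_loss(actions[a]))

if __name__ == "__main__":
    for name, outcomes in ACTIONS.items():
        print(f"{name}: expected loss = {expected_loss(outcomes):.1f}")
    print("decision:", choose_action(ACTIONS))
```

Run as-is, the sketch prints an expected loss of 0.5 for swerving and 36.0 for continuing, so it swerves. The arithmetic is trivial; the hard part, and the point of the article, is deciding what goes into the loss table at all.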

Comments

  • This daily moment of reflection brought to you by Isaac Asimov.

  • Don’t we already have this? If the computerized driver’s goal is to minimize loss, and the loss of 40 children is calculated as greater than the loss of you, crisis averted.

  • Adam beat me to it. This also reminds me of Daniel Suarez’s Kill Decision. http://thedaemon.com/killdecisionsynopsis.html Here’s the relevant part in the teaser: “It’s no secret that America relies on remotely piloted drones to target adversaries overseas. But fifty other nations are developing drones of their own, and the next generation will be much scarier: autonomous machines that acquire and destroy targets without direct human intervention.”

  • OK, I call BS on the example above.

    The scenario assumes an intersection on a bridge with a posted speed limit in excess of 50 mph.

    A computerized driver would prevent you from exceeding the speed limit. The bus’s computerized driver would prevent it from running a red light, and yours would prevent you from running it too.

    The described scenario would not happen. If you as the driver overrode your computerized driver to exceed the speed limit, then your computer should make the decision to end your life instead of the busload of kids.

    • OK, I call BS on the explanation above.

      I was racing back to the hospital for diabetic children, which had its insulin supply stolen and/or spoiled after the earthquake, and for which I am the sole provider. My car was loaded with the medicine that would save their lives, along with the lives of the other children left at the hospital whom the bus couldn’t carry.

      The bus veered into me because the driver overrode the computer so she could speed, but she unfortunately suffered a medical issue while the bus was in her control. I’d programmed my car to race back to the hospital with priority “speed because 400+ lives are at stake.”

      My car should obviously have valued my life and the other contents of my car above the sick kids in the bus. Silly AI!

  • In FL, CA, and NV, even if the car is auto-driven, are you still legally liable for any accidents, incidents, etc.?

    • I didn’t even realize that it was legal yet to ride in a car being driven by a computer… Are you saying it is in FL, CA, and NV?