Don Norman on why “mostly autonomous” cars are dangerous
Cars that are autonomous most of the time (but not all the time) are dangerous, argues Donald Norman:
“Whether you are fan or foe of completely autonomous vehicles, note that it will be decades before we have full automation of cars. One of the most difficult problems is the transition: from (A) manual driving, to (B) mostly automatic, to (C) mostly-but-not-quite-fully-automated, and (D) fully autonomous. Today we have a mix of A and B, with companies experimenting with C. I fear C. Why? Cars at this level require a person to sit at the wheel, ready to take over when the automation fails (California law requires this). Today, the introduction of safety features in level B such as lane-change warning, blind-spot indicators, stability control, anti-lock braking, and automatic braking if about to collide has reduced injuries.
“Why is C so dangerous? Because the more reliable the automation, the less likely the driver will be to respond in time for corrective action. Studies of airline pilots who routinely fly completely automated airplanes show this (as do numerous studies over the past six decades by experimental psychologists). When there is little to do, attention wanders.”
In fact, he writes, “the step from B to C to D is so problematical that I recommend doing it more quickly: go directly to D, avoiding the dangerous partial automation that leaves the driver with nothing to do most of the time.”