An interesting dilemma to think about when considering that AI programs can go beyond our natural irrationality when making moral decisions. A good, thought-provoking article.
Should A Self-Driving Car Kill Its Passengers In A “Greater Good” Scenario? | IFLScience
Picture the scene: You're in a self-driving car and, after turning a corner, find yourself on course for an unavoidable collision with a group of 10 people in the road, with walls on either side. Should the car swerve into the wall, likely seriously injuring or killing you, its sole occupant, and saving the group? Or should it make every attempt to stop, knowing full well it will hit the group of people while keeping you safe?