Leonard Peikoff makes a number of fascinating arguments in his sixteenth podcast.
Peikoff first addresses the alleged need for a mystic moral faculty. Apparently, a professor argued for such a faculty using a two-part hypothetical. First: is it okay to harvest the organs of one person to save the lives of five others? Peikoff agrees that this is wrong, because the one person is innocent.
But what if you see that a train is about to run over five people, and you are in a position to flip a switch so that the train instead runs over only one person on a different track? Peikoff again argues that this is wrong, again because the one person is innocent.
I agree with the thrust of Peikoff’s argument, that we must not harm innocent parties. However, I’m not sure he’s correct about the train track. I think that the cases of the emergency room and the train track are basically different. In the emergency room, we wouldn’t change our minds regardless of the numbers involved. We would condemn as morally abhorrent the forcible harvesting of one person’s organs even to save the lives of a million. But let’s say we could keep a train from running over a whole station full of people, or deflect a nuclear bomb from New York City to a desert with a single inhabitant. I don’t think deflecting the train is morally comparable to harvesting the organs, because only the latter case involves the initiation of force. If I were on a jury, I would automatically vote to convict the organ harvester, but I’d be troubled by the person who flipped the switch. I’m inclined to categorize the case of the train as an emergency situation, outside of the normal moral context.
As Peikoff suggests, the train hypothetical is far-fetched. If the people are walking on the track because of their own negligence, then they bear responsibility for their predicament. If the train company has created dangerous conditions, such that people tend to walk in front of trains, then that’s a matter for the courts. Otherwise, the situation is by definition an emergency. While organ transplantation is an everyday event, I doubt that anyone could come up with more than a few examples from real life that are substantially similar to the train hypothetical. One reason that it strikes us as difficult is the fact that it’s so unlikely. The overwhelming majority of people will never face any situation remotely like it. The fact that emergencies lie beyond our normal moral context does not imply mysticism or subjectivism.
Peikoff next discusses the possible character flaws of artists, the possibility of self-doubt for a moral person, and the necessity of athletes (and artists) to work in the moment, rather than try to evaluate their performance-in-progress from the perspective of history.
In answer to a question about artificial intelligence, Peikoff argues that it’s philosophically impossible for a machine ever to think like a person. He argues that a machine cannot have volition. However, given the fact that humans arose through a process of non-volitional evolution, isn’t it possible that humans might create an artificial being that acquires volition? Perhaps the distinction is that such a being would no longer be a machine.
Peikoff doesn’t address the issue of human motivation here. Human choices are motivated; people have values and act on them. Thinking as a sort of action thus is necessarily tied to values. An artificial being would need the capacity for values as well as volition to be able to think like a human.
Finally, Peikoff addresses the question of whether the world was deterministic prior to humans. He says that the force in play was not determinism but causality; determinism is a specific doctrine that precludes the existence of volition.
The full podcast is worth a listen.
3 thoughts on “Peikoff 16”
I think you are correct in your assessment of the train situation. It seems to be a close parallel to the case of an airplane pilot who encounters a catastrophic mechanical failure over a populated area. He is doing the moral thing to attempt to crash in the least populated area he can find rather than in a highly populated public place.
Perhaps some of the confusion in the train example is that it seems to imply an uninvolved bystander flips the switch. Instead imagine the train conductor sees the impending disaster and can control the switches. Is he being moral to switch to the less populated track? I think so.
And if the conductor is acting morally, what is the difference between the conductor’s actions and that of a true bystander?
I’m not sure that the point of the trainyard example was really understood.
These examples were part of a study that was attempting to “prove” that morality is in some way innate. The idea was to ask a large group of people what they would do under certain circumstances and thus attempt to find some sort of evolutionary element to morality. I have no doubt that the questioner didn’t understand this or asked the question poorly, but Peikoff addressing the specific morality of the specific situations didn’t really address the broader issue.
As far as thinking computers go: how different is it to have a brain that works a certain way as opposed to a chip that works a certain way? Volition seems to be at the crux of it. Well, if the factor(s) that give rise to the possibility of having a choice are themselves non-volitional causality, then perhaps we can duplicate them (by non-sexual means) and even create variants on the theme.
Oh yes, and the second part of the “throwing the switch” example was measuring the difference between throwing the switch to save the group rather than the one, and having to push someone off of a bridge into the path of the train to accomplish the same effect.