Peikoff first addresses the alleged need for a mystic moral faculty. Apparently, a professor argued for such a faculty by way of a two-part hypothetical. Is it okay to harvest the organs of one person to save the lives of five others? Peikoff agrees that this is wrong, because the one person is innocent.
But what if you see that a train is about to run over five people, and you are in a position to flip a switch so that the train instead runs over only one person on a different track? Peikoff argues that this, too, is wrong, for the same reason: the one person is innocent.
I agree with the thrust of Peikoff’s argument, that we must not harm innocent parties. However, I’m not sure he’s correct about the train track. I think the cases of the emergency room and the train track are fundamentally different. In the emergency room, we wouldn’t change our minds regardless of the numbers involved. We would condemn as morally abhorrent the forcible harvesting of one person’s organs even to save the lives of a million. But suppose we could keep a train from running over a whole station full of people, or deflect a nuclear bomb from New York City to a desert with a single inhabitant. I don’t think deflecting the train is morally comparable to harvesting the organs, because only the latter case involves the initiation of force. If I were on a jury, I would vote without hesitation to convict the organ harvester, but I’d be troubled by the person who flipped the switch. I’m inclined to categorize the case of the train as an emergency situation, outside of the normal moral context.
As Peikoff suggests, the train hypothetical is far-fetched. If the people are walking on the track because of their own negligence, then they bear responsibility for their predicament. If the train company has created dangerous conditions, such that people tend to walk in front of trains, then that’s a matter for the courts. Otherwise, the situation is by definition an emergency. While organ transplantation is an everyday event, I doubt that anyone could come up with more than a few examples from real life that are substantially similar to the train hypothetical. One reason it strikes us as difficult is that it’s so unlikely. The overwhelming majority of people will never face any situation remotely like it. The fact that emergencies lie beyond our normal moral context does not imply mysticism or subjectivism.
Peikoff next discusses the possible character flaws of artists, the possibility of self-doubt for a moral person, and the need for athletes (and artists) to work in the moment rather than try to evaluate their performance-in-progress from the perspective of history.
In answer to a question about artificial intelligence, Peikoff argues that it’s philosophically impossible for a machine ever to think like a person, because a machine cannot have volition. However, given that humans arose through a process of non-volitional evolution, isn’t it possible that humans might create an artificial being that acquires volition? Perhaps the distinction is that such a being would no longer be a machine.
Peikoff doesn’t address the issue of human motivation here. Human choices are motivated; people have values and act on them. Thinking, as a sort of action, is thus necessarily tied to values. An artificial being would need the capacity for values as well as volition to be able to think like a human.
Finally, Peikoff addresses the question of whether the world was deterministic prior to humans. He says that the operative principle was not determinism but causality; determinism is a specific doctrine that precludes the existence of volition.
The full podcast is worth a listen.