Think of what happens when you put a new food into your mouth. You don’t have to decide if it’s disgusting. You just know. You don’t have to decide if a landscape is beautiful. You just know.
Moral judgments are like that. They are rapid intuitive decisions and involve the emotion-processing parts of the brain. Most of us make snap moral judgments about what feels fair or not, or what feels good or not. We start doing this when we are babies, before we have language. And even as adults, we often can’t explain to ourselves why something feels wrong.
In other words, reasoning comes later and is often guided by the emotions that preceded it. Or as Jonathan Haidt of the University of Virginia memorably wrote, “The emotions are, in fact, in charge of the temple of morality, and ... moral reasoning is really just a servant masquerading as a high priest.”
While I certainly understand his point, and have used the phrase post hoc a few times in my own writing on debates, I don't think the two claims are mutually exclusive: 1) the basis of morality is harm and fairness, and 2) humans are intuitively guided to recognize this by their evolution.
The idea is that we don't make the rules of morality; we discover them, which makes morality objective. But at the same time, most of us are not philosophically trained, so we can't adequately argue with others about that fact, leaving us looking like we're just running on emotion/intuition. I would argue that we are usually running on emotion/intuition when we add small numbers too, but so long as our "adding mechanism" is well trained, it doesn't matter.
The snap judgments we make in morality are probably accurate because they are the result of evolutionary "training" that honed them for accuracy. Just because moral judgments may be based on feelings doesn't mean the underlying moral statements are not factual: "burning a baby alive is wrong."