Thanks for this piece!
Related things I look for are authors who bash or sneer at their opponents, or impute strawman arguments to them and then tear those strawmen apart as if they’ve proved something. Daniel Dennett did this in Consciousness Explained (in which he failed to explain consciousness).
Dennett mocked the “Cartesian theater” and its homunculus audience, in effect claiming that the subjective experience the theater represents is wrong because the brain is not actually wired that way. Dennett rebutted Searle's Chinese Room by hypothesizing output from the room that, he claimed, would pass the Turing test. He had a point, grounded in Turing's own argument, but it was still a strawman, as we now see with LLMs: they do what Dennett was hypothesizing, yet only a few people claim current LLMs are conscious.
In contrast, I respect each book I've read by Steven Pinker and Jared Diamond. They presented their claims and supported them, then presented the opposing claims along with their supporting arguments or data, then dissected and refuted those opposing arguments, sometimes by recontextualizing the data, giving the reader a chance to judge between the two claims.
Your piece, especially the slippery slope fallacy, stirred a point about truth units I've not talked about: the practice of plotting data points and then drawing a trend line or curve through them. In this context, the data points are the objective truth and the lines or curves are the subjective truth, in the sense that the data points supposedly come from the real world, while the trend line embodies human judgment (for example, the exclusion of outliers, or the choice between the mean and the median). In the pragmatic big picture, the test is whether the approximation (if any) is useful and relevant to the application toward which the data and its approximation are being put.
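The point above can be sketched in a few lines of code. This is a minimal illustration with made-up data: the points are the "objective" inputs, but excluding the outlier, or choosing mean versus median, changes the "subjective" summary drawn through them. The data values and the threshold for calling something an outlier are my own assumptions, not anything from the original piece.

```python
# Sketch: the same "objective" data points yield different "subjective"
# trend lines and averages depending on human judgment calls.
from statistics import mean, median

xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 3.9, 6.2, 8.0, 9.8, 30.0]  # hypothetical data; last point is an outlier

def least_squares(xs, ys):
    """Ordinary least-squares slope and intercept through (xs, ys)."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope_all, _ = least_squares(xs, ys)              # outlier kept
slope_trim, _ = least_squares(xs[:-1], ys[:-1])   # outlier excluded: a judgment call

print("slope with outlier:", round(slope_all, 2))
print("slope without outlier:", round(slope_trim, 2))
print("mean vs median of y:", round(mean(ys), 2), round(median(ys), 1))
```

The slope roughly doubles depending on whether the analyst keeps the outlier, and the mean and median of the same column disagree noticeably; which summary is "the truth" depends on the use to which it will be put.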