Thank you very much for your comment! I never thought about the AI implications or applications before. It's interesting because an LLM could probably process truth units a lot faster and more comprehensively (not that I deeply understand AI).
In the context you're discussing, the truth or factuality of any statement (like "the sky is blue") could be tested for its degree of consensus, with a certain level of consensus as a standard for what is called truth or fact.
Further, the truth unit "the sky is blue" is itself supported by other truth units (such as a definition of "blue"). So an objection to the truth or factuality of "the sky is blue" could be resolved by an LLM checking whether the disagreement stems from different definitions of "blue."
That's a pretty technical or nit-picking example. A more practical example is a child looking up on a cloudy day and saying "the sky isn't blue, it's gray," which triggers an explanation of clouds. The consensus a statement enjoys is often a function of being able to identify the conditions under which it does not hold (here, the conditions under which the sky will not appear blue).
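Just to make the idea concrete, here's a toy sketch of what that consensus check might look like. Everything in it is invented for illustration (the judgment lists, the 0.9 threshold, the function names) — it's not a real implementation, just the shape of "measure agreement, then see if stating the condition dissolves the disagreement":

```python
# Toy sketch: score a claim's "degree of consensus" across judgments,
# then check whether apparent disagreement dissolves once the claim's
# conditions are made explicit. All values here are hypothetical.

def consensus(judgments):
    """Fraction of judgments that affirm the claim."""
    return sum(judgments) / len(judgments)

def is_fact(judgments, threshold=0.9):
    """Treat a claim as 'fact' once consensus clears a chosen threshold."""
    return consensus(judgments) >= threshold

# Unconditioned claim: "the sky is blue" — judged on random days,
# including cloudy ones, so consensus is mixed.
raw = [1, 1, 1, 0, 1, 0, 1, 1]      # 0 = "it's gray today"

# Conditioned claim: "the sky is blue on a clear day" — the cloudy-day
# objections are resolved by stating the condition, so consensus rises.
conditioned = [1, 1, 1, 1, 1, 1, 1, 1]

print(consensus(raw))          # 0.75 — below the fact threshold
print(is_fact(conditioned))    # True — full consensus after conditioning
```

The child's "it's gray" objection isn't a counterexample so much as a missing condition, and the sketch shows that once the condition is attached, the same judges converge.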
Thank you very much again for your insight! The "atomistic" nature of truth units could turn out to be a useful approach to "scrubbing" (evaluating) claims of consensus in this way. It'd be great to talk about this more.