There is a theoretical issue your piece doesn’t mention; perhaps you can address it? I’d summarize the point of your piece as “using cultural subsets (elites?) to train AI will result in non-inclusive AI.” This seems undeniable, and I applaud you for furthering the point so compellingly. But you seem to assume AI will always be trained by “elites,” that some subset of humanity will shape AI. It seems to me theoretically possible to train an AI by feeding it all the information in the world, including things like differences in ASL expression or the different meanings of “folks.” If an AI were trained on everything, wouldn’t its training lead it to recognize every nuance? Granted, this would be an asymptotic curve, but don’t humans face the same curve? Personally, all I want AI to do is perform better than people; I don’t judge it against perfection.

Perhaps it sounds dystopian to have an AI learn everything in the world and rule over us. But if it ruled us better than we are “ruled” (or not ruled) now, why complain? Is it theoretically impossible for an AI to learn humanity (say, for starters, by reading everything in the cloud, past and going forward), or are we condemned to AIs biased by the biases of their creators? Put more rhetorically: is it possible for AI to understand us better than we understand ourselves?