While I agree with your general point and admire your thought and writing, I see some serious conflation in your piece. Humanity and intelligence include, loosely speaking: (1) bisociation and related things like creativity, (2) feelings and emotions, (3) self-awareness, and (4) learning and problem-solving.
It's plausible that all these things are made possible by our neural circuitry. But we don't fully know how our circuitry does them. Until we can trace everything that happens in our circuitry, we won't understand how to create a machine that does all of them. And even if we can fully map our circuitry, we might still be mystified. AI as it stands today can only really do #4.
If we do create #1 and #3, we will have created AGI: a machine with all the intelligence of a human, if not more, one that would more than pass the Turing test, whether or not we could measure or ascertain that performance.
There is no need to create #2 unless we want to create an artificial human, which would be superfluous to #1 and #3, and probably a waste of time too.
We could in theory create #1 or #3 independently of each other, and a machine with either one alone would probably pass the Turing test too.
(If we raised a child without any human contact or education whatsoever, that person would fail the Turing test.)
My understanding is that the Turing test was essentially a "punt" by Turing. His key point is that we don't understand intelligence, as noted above, and therefore have no way of judging whether an AI is intelligent or not. In other words, Turing realized that imitation is not intelligence; it is just the only measuring tool we have.
That's why we debate whether other animals like Washoe are intelligent or not: we haven't mapped their neural circuitry either. We have to do that to know exactly who is inside the Chinese Room and what their capabilities are.