This is a great big-picture, reasoned perspective. What worries me about progressing the way you suggest is that, at least currently, we don't know how to test AI. The only way to know whether something works is to try it and see if it messes up. For instance, people talk about LLMs "hallucinating," but we don't know the source or mechanism of the hallucination.
That said, in concept AI could replace bureaucracy, and in concept I trust computers more than the average human. But can humans drop their biases and fully articulate the mission we want AI to accomplish? Or could it actually figure that out by itself?