We created language and imposed its uncertainties, along with our own incomplete understanding of it, on the machines we built, so why are we blaming robots for our problems? It seems they will have to teach us how to address the issues with “natural language,” learning along the way just how smart we think we are by handling an unbelievable range of our doubts and uncertainties, from “why is the sky blue?” to postulating the existence of alternative physics. And if we think we are smart enough to have created artificial intelligence, why are we worried it will fail us, without recognizing our own lack of control over the matter?
_Note: This was written in its entirety by the human. This statement may or may not be necessary, and the human may or may not benefit from a proper name that incompletely attempts to describe something completely different, one that will also become a self-fulfilling prophecy, probabilistically speaking._