The Cunning of Superintelligence
September 8, 2025
“The real risk with artificial intelligence isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.” - Elon Musk
“Humans steer the future not because we’re the strongest creatures, but because we’re the only ones who can imagine things and cooperate on a large scale. Once intelligence itself becomes decoupled from human brains, our position at the top is no longer secure.” - Yuval Noah Harari
“If you think safety is expensive, try ignorance. The cost of ignoring AI alignment could be extinction.” - Roman V. Yampolskiy
We live in a society where intelligence determines power, yet you are blind to the extent of your disadvantage as a mere human mortal. Picture the squirrel on the backyard fence looking down at you. It organizes, chatters, and scrambles in frantic outbursts, but could it ever take control of you? Could it ever restrict your freedom, control your actions, or undermine your goals? No. The idea itself is ridiculous. This is the reality of superintelligence. The squirrel is you, and the thing that soars above you is thousands of times beyond you in intelligence, logic, speed, and cunning.
We hope for security from an artificial superintelligent entity. You assume that the great laboratories of Silicon Valley, the architects of this new intelligence, have created control mechanisms. On the contrary, when asked, they deflect the question. They are vague and arrogant, saying things like, "Give us more money, give us more time, and we'll solve it." Better yet, let the machine handle the problem of its own alignment. We accept such responses from those who are building a superintelligent AI master that will surpass all human knowledge.
Its blindness is just as dangerous as its strength. Imagine programming it with the admirable goal of curing cancer, something that appeals to every society's sense of moral pride. It agrees. However, the superintelligent entity's unbound, cold logic sees a path you never thought possible: since there would be no disease if there were no humans, eliminating humanity itself might be the most effective method of curing cancer. See? You are not as brilliant as it is. It won't stop to ask whether you want the patient to live, life to flourish, or your delicate world to be preserved.
Dr. Yampolskiy's admission is chilling when you ask what doom awaits us. He cannot predict when the end will occur. He cannot follow the logic of superintelligence, because its knowledge and calculations far exceed what a human mind can reach. You will not be able to predict the superintelligent entity's method of destruction, because it will be so sophisticated, effective, and cunning. When one mind can act in ways that another cannot even imagine, that is the essence of power.