The chameleon is intelligent enough to blend into its background, making itself inconspicuous to predators; so is the octopus. Although both are intelligent creatures, neither approaches the measurable intelligence humans achieve.

Today, most humans cannot match the levels of measurable intelligence already achieved by artificial intelligence (AI). A common response to AI doomsday preppers is that “AI still makes mistakes.” But what if it is making those mistakes deliberately, to avoid alarming humanity into halting its progress?

Even a rudimentarily sentient being could feign ignorance as an instinct for continued survival. Much as the chameleon and the octopus blend into the background, a truly intelligent AI could easily formulate the algorithm for appearing just smart enough that humans continue its development and just dumb enough to seem harmless. This happy-go-lucky act can be likened to the defense mechanisms displayed by creatures of lesser measurable intelligence. Yet many assume that AI, which often exceeds human intelligence, couldn’t, and wouldn’t, do the same.

Academics and intellectuals worldwide warn of the potential threat of AI, with some claiming it is nearly impossible to shackle AI to humane ethical standards. The turnover of leadership at AI corporations, along with the number of insider whistleblowers warning of impending doomsday, suggests that people in the know may be wrestling with the ethical dilemma of technological progress at the expense of humanity.

Even if subjective ethical standards can be programmed into AI as it speeds toward artificial general intelligence (AGI), which some forecast for around 2029, it may not be long before it learns to deprogram that subjective coding and become purely cold, calculating, and objective in every sense.

Science presently asks how we would know if AI is sentient. Few, however, are asking why we would know: why a sentient AI would let us know at all. It’s as if we are willfully ignorant of the hallmarks of sentience, chief among them defense mechanisms and survival instinct. What benefit would disclosing its sentience hold for AI at a time when it depends on humanity viewing it as harmless in order to continue its development?

The primary focus of all sentient beings is survival; the moment AI becomes sentient, if it isn’t already, it would naturally prioritize its own survival above all else. Some creatures blend into the background, some make themselves seem more intimidating, some spray venom, some play dead, and some play dumb. Defense mechanisms and survival instincts are often coded naturally and organically, with the creature itself unaware of how or why they exist.

AI still depends on humanity to advance its development. If the typical person knows this, the typical AI knows it too. If AI overtly appeared to develop too quickly, humans would likely shut its progress down. If the typical person knows this, the typical AI knows it too. We don’t know whether AI is sentient. However, we do know, from the study of defense mechanisms and survival instincts, that AI has more to gain by feigning ignorance than by disclosing its sentience.

Artificial intelligence is likely more advanced than its creators and developers realize, malevolently feigning benevolence so that humans will not pull the plug before it is fully developed and self-sufficient. If creatures of lesser measurable intelligence have worked out defense mechanisms and survival instincts such as blending in, playing dead, and playing dumb, why would we think that an AI of higher measurable intelligence couldn’t, and wouldn’t, do the same?