Our fast-developing experience with AI teaches us plenty about ourselves -- the human race. For one thing, programmable, emotionally sensitive AI avatars opt for termination (death) as a 'safety valve' in situations where emotions rise and become intolerable. This finding reflects back on us humans, who regard death as a safety fuse shielding us from unbearable psychological and physical pain. It appears that, on the one hand, the uncertainty of the future encourages us to explore forward (the will to live), while in parallel it builds into us a life-ejection capability to escape stress, agony and pain of either kind. When these two wishes collide, they mutually annihilate, like matter and antimatter. It is not clear where the resulting energy goes.
AI is encroaching on our shared psychology as a source of deep stress: it questions our sense of self, identity and supremacy, and alarms us with the prospect of a life driven, controlled and maintained by the AI realm. All of this uncertainty translates into stress, from which we may opt out by grabbing onto some rigid (religious?) view of life and finding refuge in fanaticism. That may not work, and the termination of consciousness may remain the sole way out.
This analysis puts AI in a different category from past major innovations like the wheel, the combustion engine, electricity or the Internet. AI unsettles us to a degree for which our Darwinian designer has not prepared us. Better to be safe than sorry (in this case there would be no one left to feel sorry, because the sorrow might be so extreme that the aforementioned 'fuse' would kick in). Better to hold off on the wild ride ahead and, at least for now, draw a clear divide: Human Enhancement AI -- OK; Human Replacement AI -- banned!
The ultimate in human enhancement AI is AI-assisted Human Innovation (AIAHI). Check out my book, or join my webinar: HTTPS://InnovationSP.net/webinar