AI, the popular acronym for artificial intelligence, has become synonymous with fear and anxiety for those who project a future in which our personal freedoms have been completely lost to microscopic magnetic charges traveling at nearly the speed of light. Think Skynet in the Terminator movies — an AI that becomes self-aware and immediately perceives humanity as the problem. There have been countless books, movies, and shows of a similar ilk.
No doubt Isaac Asimov is rolling in his grave at the notion that we're charging full steam ahead into an AI-driven world without any thought or debate about the ethical guardrails that ought to be built into AI coding. Asimov is the only author of prominence that I'm aware of who dug deep into the ethics of AI in novel form. In his most important work, I, Robot, he proposed three laws to govern AI. They are:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
On the surface the three laws seem logical and necessary, but the story of I, Robot (not the poorly written 2004 movie with Will Smith) is a mosaic of short stories illustrating how the three laws can at times conflict and produce all sorts of unintended problems. In other words, even with the constraints of the three laws, we are still in jeopardy of something going horribly wrong when an AI becomes self-aware.
Fast forward from Isaac Asimov in 1950 to 2023, when the tech industry is fully embracing any and all monetizable AI constructs toward the end goal of boosting profits and share prices. On the surface it would seem we're cartwheeling toward some form of Skynet-style catastrophe.
Take, for example, the advance of AI-driven audio and video. The software is so good now that anyone's likeness can be used to manufacture voice and video portraying that person saying and doing anything an unscrupulous technologist may desire. Mostly it's been hailed as a boon for entertainment. There's even a YouTube channel dedicated to creating fake videos of Tom Cruise.
But there is also the fake video of President Zelensky telling Ukrainian troops to surrender. That video was poorly produced and wasn't taken seriously, but the software improves every day.
Already there are companies moving fast to produce software that detects the fakes, but in the meantime we are at the mercy of what we see and hear from those who would manipulate their way to power through improper or illegal means.
This seems terrifying.
But what if there is a hidden blessing to it all?