Is it a delusion to believe that reality is now an illusion?
Or at least that no one can really tell which is which
Reality vs Fantasy - Take 1
In retirement I get to spend a lot of quality time worrying about whether we know what is real or not anymore. You know, the kind of thing you do as you drink your tea in bed, look out at the snow-laden trees, and read sundry online articles. There are several dimensions to this: a semantic one, an epistemological one, and the one I want to discuss today, a media-hijacked-by-technology one.
Let me run with that last crazy thought just a bit. In a world where people are increasingly working at home, where both work and play are subject to computer-arbitrated media, and where alienation and lack of friendship are reportedly high, how does any one of us know what is real anymore? We are basically using a variety of different computers constantly, whether it's to read, to listen, to watch, or to supposedly interact with another person or a group of people.
We know that AI has taken over writing and creating much text-based content, is reportedly being used to compose bland music for Spotify, and AI-generated graphical or photographic images - including what some people think is art - are replacing human-created ones. For instance 👇
There have been numerous reported instances of people's voices being synthesized and misused against the person affected, often to disturbing degrees when our reactive social-platform temperaments take this to memetic levels.
AI video creation is advancing rapidly and can produce very lifelike clips, including incorporating images of real people, often without their knowledge. It has only been by providence that some of these fake clips have not created a major societal problem - especially given the recent (well, over the past seven years or so) uptick in people believing nearly anything without evidence; just the reposts of dear friends they have never met on social platforms are enough to seal, or sell, the deal.
This issue of reality vs illusion is both deeply concerning and highly captivating. On my six-week vacation - when I'm not looking at real wildlife while actually outdoors in Costa Rica - I'm going to create a work of crime fiction focused on this profound problem. Then I'm going to develop it using the very AI culprits I've mentioned and deliver it in an unusual "audiobook" format, hopefully starting in June of this year. You know, a derivative "the medium is the message" work.
To end this train of thought, here is a grounded article outlining a disconcerting future we could soon be facing, one that could wreak havoc with our information sources and destroy our medical knowledge.
WORD SALAD: This latest breakdown of Google’s AI reminded me of coprolalia
Based on feedback that the new name of this section lets me go in different directions (thanks, Neela 🌶️), let me try it out.
Coprolalia, whose Greek etymological roots mean "dung speech," is compulsive or involuntary swearing. Most people associate it with Tourette's syndrome, although it is found in only about 10% of people with the syndrome. The sufferer has no intention of constantly swearing, and the condition can lead to embarrassment or a more restricted social life.
I am by no means trying to disparage people who have this affliction. However, the latest story about people finding that real swearing actually turns off Google Search's annoying AI Overview feature reminded me of this condition. Likely Google has some dodgy, grafted-in-after-the-fact code to minimize swearing in its Search feature.
For the AI lovebirds, we have one more piece of evidence that all things AI are just not ready for prime time. Are any of these upgrades being subjected to standard, robust testing before being thrown over the wall into society, or are these companies just crossing their fingers and praying? The plenitude of these stories is likely the main reason most business organizations are not going pedal to the metal on AI implementation - not the other psychological reasons littering internet feeds. In other words, proper and prudent decision-making.
I'm reminded of a project my first consulting company completed in the early 1990s. It was for a massive organization, which was extremely cautious, implementing its first fully PC-based application. On top of that risk, the high-profile project automated their capital budgeting, a critical function for a public utility. The team asked me to come in to do some testing. They had a demo workstation set aside for it. I signed in and entered some data. I then got up and kicked the plug out of the wall. The team freaked, asking why I had done that. I said, "Any help desk will tell you that the number one problem they deal with is the electrical plug coming out of the socket." All of the data was lost. The team was about to roll out a major financial planning system, and they hadn't built any recovery or restart capabilities. The project had to be delayed.
I speculate that many business organizations are doing the same principled delaying, because I believe we are still at the "plug being kicked out of the wall" stage for LLM-based AI.
Thanks again to all of my readers. I truly appreciate every comment or piece of feedback. It helps me grow. See you Monday!
The idea that we’re living in a reality increasingly mediated by AI where voices, images, and even entire narratives can be fabricated is terrifying. We recently had fires here in California, and so many fake AI videos were being shared by celebrities and politicians who were either stupid OR stupid, lol.
It’s not just about fake news or deepfakes anymore. It’s about the erosion of trust in everything. How do we know what’s real when the lines between human and machine, truth and fiction, are blurring faster than we can keep up?
Thank you for the mention here, David. Glad the feedback was helpful.
Happy Weekend to you :)