A MEGA SUPER AI edition from Costa Rica
My small late offering about the state of AI from stolen moments on a fantastic vacation
One of the absolute delights of taking a road trip anywhere in Latin America is the road signs. I just love the Spanish hyperbole. My favourite is MEGA SUPER, which usually marks a small, modest grocery store. So despite my title, this is a small, modest piece, mostly derivative of other thinkers, about the state of AI.
When I started the Techtonic newsletter I didn’t think that it would end up with so many AI stories.
Writing helps crystallize my thinking, to see patterns and make connections. To fully develop themes that sound so simple in my head. As Joan Didion said, “I write entirely to find out what I’m thinking, what I’m looking at, what I see and what it means. What I want and what I fear”.
That is it, “What I fear.”
I have seen, close up for decades, the thousands of minor and major social impacts that our ever-changing, ever-accumulating technology causes. In the space of my career we have gone from a fearful technophobe society to a carefree technophile one. It is hard to believe the speed of this attitudinal shift.
I remember my first application system conversions - from paper systems to computer ones - and the weeks of training and chat sessions needed to alleviate worry and concern. I recall conducting a three-month post-implementation review at one bank where we had put in a new system. I found two of the key clerks still keeping manual shadow books, hidden away in drawers, as the official records.
Not any more. We now LOVE technology! Metaphorical T-shirts with I ❤️ Tech abound. I have what I think of as decidedly non-techie friends extolling every possible virtue of AI. In their eyes AI can do no wrong. They overuse AI for everything. As one friend told me with the whispered reverence only the religious have, “ChatGPT is so fast!”
I didn’t have the heart to say that since the early 1950s that was (and is) the primary benefit of computers: they are fast. Plus have these people not read about the hundreds of billions being spent on “AI compute”? The stock market sensation Nvidia? That will get you speed alright.
So that is the core of my fears. Naïve fans wanting More! More! More! (Andrea True, do a Perplexity search) coupled with scruple-light purveyors of this computational pulchritude. Plus the current AI is inadequately architected, with foundational problems including balsa-wood guardrails. For most people, though, AI has magical powers. It reminds me of the first iPhone users, whose thumbs are now permanently arthritic.
Let’s proceed in this order: 1. the technology itself; 2. the vendors; and 3. AI users. It is my contention that these three factors, interacting together, have created an awkward turning point for society.
(Below is the little restaurant where some of the thinking for this was drunk… er, developed)
The AI Technology
I would love to see a survey asking technologically naïve users when and where they think AI came from. The “schismatics” - a group I have identified as believing that between 2008 and 2012 we underwent such a profound technology disruption that everything previous in technology history is irrelevant - will naturally believe it is born of this century. The rest, I am unsure about.
The roots of AI are in the 1940s and 1950s, and many concepts from that era - like neural networks - are still foundational. Advances in AI through the decades since have come in start/stop, boom/bust cycles. In the 1980s, for instance, there was a mini-surge in what we then knew as expert systems. Currently most of the focus and investment has been on Large Language Models (LLMs), with other key approaches, like neuro-symbolic models, remaining secondary, often because of the technical challenges of integrating these different methods.
Coupled with the LLM focus is the concept of Artificial General Intelligence (AGI): basically, AI that matches or surpasses human cognitive skills. This is what this century’s snake oil… I mean, well-meaning leaders of AI product companies are crowing about in their hype to get everyone on board.
Melanie Mitchell, an AI researcher who publishes here on Substack, released a two-part post that dealt with one of the required concepts of AGI: an abstract world model. I recommend it to readers who want more detail. Her conclusion is that, “The claims of emergent abstract world models in LLMs are not yet supported by strong evidence.”
That is a rigorous review. But even casual usage is error-prone enough that prudent, cautious users should be more skeptical. I first used generative AI more than four years ago, when I was doing a radio program called Departures, featuring the music of that week’s recently deceased. The web tool I was using had AI built in, so I thought I would add a short bio of each artist I featured. As I read the first one it created, I found several “minor” errors, like the wrong birth and death years and a misidentified most-popular song. WTF! I could have written the bio just as fast myself, given that I had to fine-tune the prompt repeatedly and do a super-slow read-through to ensure correctness. At that point I realized AI was out for me for many purposes. These weren’t hallucinations; this was mis/disinformation, depending on how culpable you want to make the vendors. It is a built-in design flaw. And it hasn’t improved all that much despite the unbelievable investment in infrastructure.
Costa Rican Notes
Here are 3 more things we did:
Experienced a sheet lightning storm with pounding rain and strong ocean breezes. Thrilling!
Squirrels eating the connective tissue holding a coconut to the tree, which then fell and smashed the windshield on our rented car
My wife startling a peccary on an early morning ramble
The Vendors
Many writers blame the vendors solely for AI’s problems, but I think this is incorrect. Now, I have been snidely snarking at these people in most of my newsletters. However, I have 45 years of dealing with software vendors, and caveat emptor is the first watchword one must have in dealing with them. A real issue of “The Big Schism” is that hype is now truth, and everyone is an influencer.
The AI drawbacks haven’t improved qualitatively despite the much bigger compute, as I observed above. Serious researchers are able to surface immediate and significant issues in each new minor upgrade across the variety of AI offerings we currently have. (Another bugbear of mine: the ridiculous names and numbering of these AI models, with new releases arriving seemingly weekly.)
Stop throwing this untested and unproven stuff at us! Work and get it right. Would Silly Valley techies fly on a plane with this many problems?
Why we continue to believe their rubbish is what makes me so scathing of our third category.
The Users
I made a comment on Substack this week calling what is happening the slaughter of the innocents. Some people took umbrage at what I said, but people’s collective behaviours say more than their words. Study after study shows unvarnished acceptance by the uninitiated, including a drop in their critical thinking skills; each week I highlight significant byproducts of AI errors. And so on.
Here are some insights from Arvind Narayanan, a computer scientist and professor at Princeton, made recently on Bluesky:
“We're seeing the same form-versus-function confusion with Deep Research now that we saw in the early days of chatbots. Back then people were wowed by chatbots' conversational abilities and mimicry of linguistic fluency, and hence under-appreciated the limitations.
Now people are wowed by the "report" format of Deep Research outputs and its mimicry of authoritativeness, and under-appreciate the difficulty of fact-checking 10-page papers. So I think some of the initial excitement will fade (which is not to deny that the tool is extremely useful).”
This jibes with an experiment I did on Deep Research, courtesy of Jing Hu - a science journalist who monitors AI. The topic I had Jing prompt Deep Research with was “What are the key things a consulting entrepreneur should have in order to start their own business?” A large, impressive report was created. Here are my observations of this report, on a topic where I’m an expert:
The report was overly comprehensive, throwing in absolutely every possible topic, often fraught with contradictions
The report had too many motherhood phrases: “competing on price is a race to the bottom” is just an aphorism. In fact, when getting established as your own firm, pricing below large or established competitors is almost essential.
The findings had no sense of priority or proportionality, so that keeping your LinkedIn profile up to date seems as important as networking or proposal-writing skills.
For all the verbiage, it didn’t really answer the question in a human way, which would be something like, “Here are the first 5 or 7 things you need to do in order to be successful. Focus and get established before doing these next steps.”
After writing this I’m a bit depressed, because I believe that as a society we have jumped in fast, long and hard, thinking these systemic problems are cute or unimportant.
That to me is a self slaughter of the innocents, by the innocents.
Who knows when Thursday’s newsletter will arrive, as we are in the jungle, baby, and anything can happen here. Throw in a comment, like, or restack if you want to. These actions are deeply appreciated!