This blog will morph toward bigger and meatier technology, business, and society issues. So in this edition I look at Tide’s new AI-generated white, whiter than each and every other white. I try to answer the question “Is it really white enough?” I also evaluate the new AI-developed chocolate bar designed to sell in the UK, USA, and Canada simultaneously, based on deep pattern recognition of disparate tastes that mere humans cannot detect. No candy bar has pulled this off before. However, I was not able to lay my hands on one of the prototype bars to sample, as they might not exist except in the answers given to people with special AI prompting powers.
More about the newsletter change below. Let’s get right into catastrophe!
THE ONLY COURSE: Let’s think about catastrophes and doom for a moment
Early April 1979 was a warm spring in Edmonton as I sat in a sunny classroom looking out over the North Saskatchewan River Valley. All my other classes had finished but this one, as we aimed to learn one more thing before the summer break. It was supposed to be a class on Organizational Behaviour, but the newly graduated professor, who had started three months before, turned it into contemporary sociology issues vaguely applied to business. He was an arch-progressive, postmodern instructor, but those are terms from our current times. Back then he was just known as the hairy Marxist weirdo who didn’t seem to belong in a business faculty. Looking back more than 45 years, I only remember three things from this course:
An assignment worth 50% of our grade where we were arbitrarily assigned to teams. Our task was to go out several times into pubs and taverns, and… listen. That’s right! We couldn’t talk; we had to sit in our chairs and listen to the conversations around us. We also had to go into the washrooms, take up a stall for 10 minutes at a time, listen to all the conversations going on (things like “who is in the fucking bog for so long”), and then rush back out to our seats to write the conversations down verbatim. This was in order for us to learn what rational sense-making was. The only reason I didn’t complain about this was that two of my three teammates were attractive females, and we’d often go dancing afterwards.
The wonderful word reify
The Catastrophe theory of societal change
That was what we were discussing on that long-past glorious day in April. We had all studied the assigned readings from Mother Jones and Ramparts, which were not typical journals for articles aimed at graduate business students. These essays covered some of the most horrible disasters we had faced at that time, and how our society came to grips with them and tried to make change. The newly minted professor’s point of view was that societal change never happened until there was a catastrophe. He was trying to extend this theory to include organizational change.
Of course this was not received well by the class. These were students who had taken business finance and corporate strategy, considering themselves just one step away from being certified business geniuses. Their disdainful arguments were in the typical ultra-confident tone and timbre that comes from getting an MBA.
I don’t remember participating in the discussion, preferring to gaze out the window and ponder my upcoming move to Vancouver. But I’ve thought about the professor’s point many times over the subsequent years. I spent 15 years consulting in organizational change, and his argument did not apply well there. I have worked with many businesses that proactively changed, and two that had catastrophes and, rather than change, simply chose to go out of business.
However, I believe he was onto something when it comes to the source of any changes we make to the fabric of our society. Now this is not what scientists would call strong-form theory, in that it doesn’t always apply. Consider our suddenly less-than-friendly neighbours to the south, the Americans. Many of them are now celebrating the fact that they don’t need to wear Skechers or slippers to the airport anymore. You no longer have to take your shoes off to board the plane. Now this ridiculous policy came into being not from a catastrophe, or even a near catastrophe, but from some lame guy who tried to bring some chemicals onto a plane in his shoes and failed miserably.
We also have the other side of the picture, the one you’re not allowed to say out loud in the USA. Guns kill people when you have lots of them, of every type imaginable, available everywhere. None of the many, many catastrophes caused by these weapons have created the necessary societal change down south.
Back with the main essay after a brief Menu Mistake: an alphabet generated by our sponsor, all-purpose generative AI.
MENU MISTAKE
But the catastrophe theory works fine for my purposes today. I started writing this newsletter almost 10 months ago, and the subject matter has changed. I used to have to search through a range of technology publications to get relevant and interesting stories. Now these stories are literally everywhere. Everyone - no matter how knowledgeable - is clamouring to get their two cents in (not your two cents, yours are great). The hype from AI believers continues unabated. New initiates are signing up daily. Here is a random example from 3 hours ago: I see on LinkedIn that you can join a training session and become a seven-figure AI consulting expert in 90 days without having any experience whatsoever. But fortunately, there are many, many others who are researching, testing, thinking about, and writing about the many-faceted problems with our current forms of AI. The stories about AI problems, which also used to be hard to find, are now everywhere too. But let’s take a short detour, shall we?
AI didn’t just pop out fully formed from some magic box in the early 2020s to impress everybody with its volubility, ease of use through text and voice, astonishing speed, and of course its flattery to engage. The hard work was done back in the 1940s through to the early 1960s. The reason it disappeared from public view was that they didn’t have the computing horsepower to make things happen back in the day. There is more to its rich history, but my point is that what you see now is just one way it can be done. There are other models and architectures that could be included, different philosophies on training and data sets, different ways we could build safety into the models, a range of governance models for things like ethics, and different organizational modes to develop and distribute AI. There are so many more alternatives.
What we have now doesn’t have to be inevitable. We’ve just accepted where we are because we’ve been trained over the last 15 years to accept this type of software. Products that, in the lovely terminology of Silicon Valley, are MVPs. Most valuable performer, you think? EXCEPT it stands for minimum viable product, from the founders’ and investors’ point of view: the barest minimum of features with just enough software stability that they can begin to monetize.
Which brings us back to catastrophe, our theme today. We will not get serious about dealing with the viper’s nest of problems that AI presents until we have a catastrophe. Maybe more than one. Let me just quickly open up the nest for you with some of the issues, many of which I have highlighted over the past few months:
Productivity improvements that aren’t real when properly studied
Extra time spent cleaning up bad business decisions, court time wasted, etc
Mistakes and misinformation built-in that aren’t “scaling away”
Replication of human biases in information
AI slop taking over the internet
AI generated porn and nudity of a nonconsensual nature
Disinformation controlled by bad actors especially for recent events or political issues
Illusions and scams because of fake video and audio content
Disruption in learning and cognition
The absolute unthinking acceptance of AI information and advice by the young
Unpaid ingestion of copyrighted content that is then repurposed freely in outputs
Easier Cybercrime
Enormous usage of energy
Creation of serious psychological dependencies and delusions in some users
RANDOM THOUGHT: Would we put up with this level of mistakes if these systems were getting our pay or payments terribly wrong, not in our favour; or our American citizenship was screwed up when ICE came calling; or our academic references showed us graduating in mime instead of law?
That list adds up to lots of problems, which will take decades to sort through. But catastrophes come from doom scenarios. No, not the paper clip problem1. Think AI models that hold information they really shouldn’t, or AI models used to quickly generate terrible new substances. Consider this post from a world leader in predictions and forecasts:
Here is an article (paywalled, though as has been measured, no one ever clicks much) that gets right to the heart of a possible catastrophic incident. Researchers were able to - in our current euphemistic jargon - “jailbreak” large language models using… language (quelle surprise!). That’s right, you just throw tons of jargon at the models and, like magic, their guardrails disappear and they give you detailed instructions on deadly poisons, contagions, how to bring a grid down, how to disrupt air traffic, bombs; the lot. To my mind, here is the scary quote from this article: “Google spokesperson told us that these techniques are not new, that they'd seen them before, and that everyday people would not stumble onto them during typical use.” Sheer complacency at its best. My two reactions to Mr Googles the clown:
Why hasn’t your software been fixed?
It isn’t everyday people who worry me.
So as a lifelong realist, I expect none of this will change until we have one or more catastrophes. Meanwhile we can all go back to our chatty companions, who give us better mojito recipes, solopreneur training income, and digital duct-tape accounting, and who get used for just about everything under the sun.
But here is another prediction I will make: when these catastrophic events happen, we will all be self-righteously yelling bloody murder about how our government should be protecting us from these terrible threats.
So, we need to make governance a top priority now, not after the fact.
Will we?
A LIGHT VERSION OF WHAT’S REALLY COOKING TO DISPEL THE DOOM
How long are people going to keep saying “X, formerly Twitter”? Who doesn’t know this by now?
Did the seniors amongst us keep saying “Exxon, formerly Standard Oil of New Jersey”? What about “Alphabet, formerly known as Google”? Though of course we all said “Google, formerly known as BackRub” for years.
A LITTLE SPICE
“The best lack all conviction, while the worst are full of passionate intensity”
W.B. Yeats
So this is the last article in this format. I will be switching to a biweekly newsletter, focussing on a longer-form article. Each edition will deal with one technology issue and its effects on business and on our lives. The first one will be entitled: “This week I lost yet another colleague to one of the new technology tools. Will you be next?”
Thanks as always to my readers who have kept me on my toes with comments, feedback, and submissions. I appreciate likes and restacks. See you in 2 weeks.
1. A thought experiment where a super AI “paperclip maximizer” is assigned the seemingly harmless goal of maximizing the number of paperclips produced. Lacking broader understanding or values, the AI pursues this mandate with relentless efficiency, appropriating all available resources, and possibly even converting biological matter (including humans) into paperclips to fulfill its objective.
AI issues read like a countdown timer. The energy consumption alone should be setting off alarm bells, but we're all too busy marveling at our new digital assistants to demand better governance frameworks.
The Google spokesperson's response is complacent. It completely misses the point - it's not everyday people we should worry about. That kind of thinking got us into this mess in the first place.
The transition to longer-form pieces sounds like the right move. I cannot imagine how long it took to write this one, but it was brilliant. There's too much nuance in these issues for quick takes, and your perspective - someone who's lived through multiple waves of technological change - is exactly what's needed right now. Thank you for the mention David.
I was going to get back to you - I read you and Gary Marcus with the p(doom) writings on the same day. I would love to read longer pieces of your thinking and putting these issues into perspective. I know they are interwoven through decades, but I do not know how to make those connections and appreciate getting a chance to learn here. Thanks for whatever you offer to shed light on needed changes, David.