I was about to duvet up for bed when I saw this one, David. With all the AI buzz these days, I've been thinking a lot about this idea that catastrophe drives change, and honestly, it feels pretty accurate for major societal shifts. We saw it with the financial crisis, and it's hard to imagine we'd get serious climate action without some truly dire consequences.
When it comes to AI, I'm always uneasy. We're rushing into this technology, accepting "minimum viable products" with all their inherent flaws, and seemingly just hoping for the best. The list of problems - misinformation, bias, copyright issues, the sheer energy consumption - is alarming.
It really makes you wonder if we're going to collectively wait for a massive breakdown, some undeniable AI-induced disaster, before we finally demand real accountability and robust regulation. I'm with you; we need proactive governance now, not after the fact. Otherwise, we're just setting ourselves up for an inevitable fall.
David, 45 years ago as a bright-eyed undergrad engineering student, I studied pattern recognition (and eventually did my senior year project using it). The early work in this field was done to sort out radio and radar signals - trying to find the signals within the noise.
What I learned then has been super helpful in keeping my feet firmly on the ground and avoiding the AI hype. As you correctly pointed out, the current versions only became possible with the rise in computing power.
The insight that has stayed with me is that pattern recognition is simply correlation analysis with a very, very large number of variables and what are termed, in the computational world, sparse matrices. In other words, we're looking for correlations across many data points and an almost equal number of variables.
There are two big problems with this, super familiar to statisticians. First, correlation is not causation. No one should conclude that tomatoes are killing people (causation) just because almost everyone in North America who has died had eaten tomatoes (correlation). The other is that bad data (i.e. garbage in) produces bad results (i.e. garbage out). Statisticians spend a lot of time trying to get good, clean data, but the people running LLMs seem to be completely indiscriminate in the data they use for "training". Consequently, none of what you're describing is surprising, and I am extremely skeptical that we will see anything truly novel from AI for many years.
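Just to make the "huge number of variables" point concrete, here's a toy sketch (purely illustrative, with made-up random data - not drawn from any real system): when you have thousands of candidate variables and only a few dozen observations, something will correlate strongly with the outcome by pure chance.

```python
import numpy as np

# Toy illustration: many more variables than observations.
rng = np.random.default_rng(0)
n_samples, n_features = 50, 5000
X = rng.standard_normal((n_samples, n_features))  # 5000 random "variables"
y = rng.standard_normal(n_samples)                # outcome is pure noise

# Pearson correlation of each variable with the outcome
Xc = X - X.mean(axis=0)
yc = y - y.mean()
corrs = (Xc * yc[:, None]).sum(axis=0) / (
    np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum())
)

# Even though y is unrelated to every column of X, the strongest
# "pattern" looks impressive - spurious correlation at scale.
print(f"strongest |correlation| found: {np.abs(corrs).max():.2f}")
```

Run it and the "best" variable will typically show a correlation above 0.5 even though the outcome is pure noise - which is exactly why clean data and causal thinking matter.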
Don't get me wrong... it can be useful. I use it regularly to help me tighten up my writing. I also use it to get what one friend describes as B- surveys of a subject new to me. But it tends to be B- work because of what it gets trained on. It's a useful but limited tool that can lead you down very misleading paths - which is, I think, one of your points.
I was going to get back to you - I read your piece and Gary Marcus's p(doom) writings on the same day. I would love to read longer pieces of your thinking that put these issues into perspective. I know they are interwoven through decades, but I do not know how to make those connections myself, and I appreciate getting a chance to learn here. Thanks for whatever you offer to shed light on needed changes, David.
Thanks for those comments, Hans. I think a longer, more focused article allows me to stretch out and weave together the different strands through time that have led us to where we are today.
AI issues read like a countdown timer. The energy consumption alone should be setting off alarm bells, but we're all too busy marveling at our new digital assistants to demand better governance frameworks.
The Google spokesperson's response is complacent. It completely misses the point - it's not everyday people we should worry about. That kind of thinking got us into this mess in the first place.
The transition to longer-form pieces sounds like the right move. I cannot imagine how long it took to write this one, but it was brilliant. There's too much nuance in these issues for quick takes, and your perspective - someone who's lived through multiple waves of technological change - is exactly what's needed right now. Thank you for the mention David.
Thanks so much for your comments about this. I always wanted to write longer-form pieces, but wasn't confident in my writing. The 10 months of writing here have helped me improve. I really appreciate the encouragement.
It actually doesn't take me as long as you'd think to write these pieces, because every day I spend one to three hours reading about these things and putting stray bits down in OneNote. This one maybe took me 90 minutes to write and 60 minutes to polish. I'm also learning to write through voice-to-text dictation.
You've got a great system going, David. The 10 months of consistent effort really shows in your writing. I REALLY mean that.
Voice-to-text is underrated.
I've been meaning to try it myself. Maybe I will this weekend.
I loved the Yeats quote, messages, and perspectives here - thank you for always giving me something powerful to chew on, David! I also learned a new word (you're never too old to learn): reify!
Thanks, Chason. Appreciate the comments and the restack.