PRODUCT ANSWER
We had a little more action this week on the new Product guess. My wife doesn’t click online but she tells me her guesses. She understands how I think but still chose the wrong answer. We did however have one winner. If you are familiar with the Japanese and their love of tea you know that it must be served at just the right temperature. So the little bear is a fan, to get your tea to just the proper heat.
Authentic tea lovers are focused on the quality for their favourite beverage, which depends on tea leaves and other herbs brewing in hot water. We are staying on Granville Island in Vancouver which boasts a medium sized food market. One of the vendors is a tea shop that has several different taps each dispensing water at a different proscribed temperature. I’m expecting they will soon be carrying these bears.
QUICKBYTES : More and more AI seems to be gaining human traits. Just not ones like Honesty
In order to figure out how AI tools come up with their answers, models were changed to include their so-called chain-of-thought. This study demonstrates that the models don’t tell the truth of what they are doing in this form of an audit trail, even when prompted by something the researchers know they used. Like a super majority of the time.
The research team then taught them some reward hacks (also known as cheating). An example of this is where an AI writes some code that passes testing to get a reward (everyone’s favourite M&Ms are used) but the code doesn’t actually solve the problem (note for IT professionals this is called vibe coding). Another example is causing the stock exchange to fall multiple days in a row by announcing bad financial policies, buying up some suddenly value priced stocks, then declaring a 90 day tariff pause resulting in a stock market surge and some nice tidy capital gains.
Once the AI learned these hacks, they exploited them to “win” but they did not report in their “reasoning chains” that they used the hacks. I think this is called lying, albeit one of omission.
Naturally the researchers were all humble and scientific, talking about the limitations of their study, blah blah blah. This is the sentence that stood out for me: “at worst potentially dangerous, since maximizing rewards in real-world tasks might mean ignoring important safety considerations (consider a self-driving car that maximizes its “efficiency” reward by speeding or running red lights).”
I know I’m feeling better about giving AI complete control of our lives because lying and cheating models are the same as the humans we have leading us.
WORD SALAD: Acronyms and Initialisms
We overuse abbreviations - a trend that continues to grow - which gets in the way of real and honest communication. But how many people know these two different words to describe our growing letter abuse. These almost correct messages demonstrate the difference:
OMG! FYI ERP & CRM ROI TBD. BRB LOL
Covid SaaS SWOT & EDITDA ASAP. KISS. FOMO POTUS AWOL
A LITTLE SPICE: Cybersecurity
“But if our worry is that AI will help hackers find software vulnerabilities better than humans—that battle was lost 10 or 20 years ago. Automated tools have long been used for that”
WHAT’S REALLY COOKING?
If a 7 year old is shouting for Alexa to tell them today’s temperature, who do they think Alexa is?
This is the last newsletter done on my trusty traveling iPad as our long, excellent vacation comes to an end. Once back home and I regain my full attention there will be no difference in quality. Thank you to all my readers and any feedback is great
Ahh now it all makes sense. The US presidency is using Chatgpt for policy decisions. Glad you had a great vacation!
I have some thoughts of course.
If we’re doomed to be ruled by lying, cheating entities… at least AI might be predictably corrupt?
The real question isn’t “Who is Alexa?” but “Why does my 6 year old niece trust her more than me? :(
Enjoy the last crumbs of vacation.
Your brain clearly hasn’t stopped working (unlike those reward hacking AIs).
Have a good weekend David