Also: Public Service Announcement about the Lumberjack
The last few weeks I've been more or less offline. We're T-1 week until our daughter's arrival, and my attention has shifted towards preparing our house for a baby. But even between painting the nursery and replacing our roof, I found the time to read the infamous MIT study.
If I had a dollar for every time someone asked me over the last three years whether "AI can learn and remember if I tell it how to do something", I wouldn't have a mortgage by now.
If you're not a developer who actually works with machine learning, and you don't have a weird fixation on technical stuff, your perception of AI is probably close to the median internet user's.
According to public perception, AI is:
- self-aware and conscious, a better therapist than any of the professionals I've ever worked with, and a really good advisor
- able to learn, and if it fails I can just ask it why and it will self-reflect
- able to just have a conversation with me, gradually become more attuned to who I am and what I need, and basically read my thoughts
Translating that into a (likely uninformed) opinion on the economy, it's safe to say that if AI were indeed like that, letting it loose in the business world would wreak havoc. Needless to say, it hasn't yet. In fact, most of my statements from last year's post are still true.

I'll say this. It's like the metaverse all over again.
Reality says hi
LLMs are sophisticated pattern matchers that operate in isolation on every call. The report's "learning gap" isn't a gap - it's the core architecture.
They keep saying "if only AI could learn and remember" like that's a minor feature request rather than a completely different technology. If it weren't that big of a deal, we wouldn't still have this issue after $40 billion of corporate spending on it. All the ideas about workplace disruption are just a fucking expensive coping mechanism.
I've been parroting this for a long time so I'll say it again. LLMs are:
- Stateless between sessions
- Unable to actually learn from interaction
- Incapable of genuine reasoning about cause and effect
- Just predicting likely next tokens based on training data
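To make the "stateless" point concrete, here's a minimal sketch using the OpenAI Python SDK (the model name and prompts are just placeholders): any "memory" a chatbot appears to have is really the client re-sending the whole conversation history on every call. The model itself retains nothing between requests.

```python
# Minimal sketch of statelessness, assuming the OpenAI Python SDK.
# Model name and prompts are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()

# First request: tell the model a "rule".
client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Our invoice approval limit is $5,000."}],
)

# Second request: the model has no idea the first one ever happened.
# If you want it to "remember", you have to resend the history yourself.
history = [
    {"role": "user", "content": "Our invoice approval limit is $5,000."},
    {"role": "assistant", "content": "Noted."},
    {"role": "user", "content": "Can I approve a $7,200 invoice?"},
]
reply = client.chat.completions.create(model="gpt-4o", messages=history)
print(reply.choices[0].message.content)
```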
The "agentic AI" solution they propose is cope layered on cope - adding memory databases and workflow orchestration on top of models that still can't actually understand what they're doing.
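For flavour, here's a hedged sketch of what that "memory" layer usually amounts to (the names are mine, not any particular framework's): store notes, retrieve the ones that look relevant, and paste them above the question. The model underneath hasn't learned a thing; the plumbing around it just got longer.

```python
# Sketch of a typical "agentic memory" layer: a store plus retrieval glued onto
# a stateless model call. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    notes: list[str] = field(default_factory=list)

    def add(self, note: str) -> None:
        self.notes.append(note)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Real systems use embeddings; crude keyword overlap keeps the sketch short.
        q = set(query.lower().split())
        return sorted(self.notes, key=lambda n: len(set(n.lower().split()) & q), reverse=True)[:k]

def answer(question: str, memory: MemoryStore, llm_call) -> str:
    # "Memory" is just retrieved text pasted above the question.
    context = "\n".join(memory.retrieve(question))
    prompt = f"Known facts:\n{context}\n\nQuestion: {question}"
    return llm_call(prompt)  # the model itself is unchanged and still stateless
```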
So that answers the issue with AI. But what about the other half of the equation – people?
Decoding Glue People
This year I ran full audits for two 8-figure companies, and spot audits (for a single department or business unit) for about 10 others, ranging from retail and banking to ecommerce and B2B services. Not tech startups. "Boring businesses". The ones that make the economy go and that make real money.
I have over 300 hours of voice recordings on my computer that I had to process. All of it towards one singular goal: answering the question every one of these companies asked us:
How should I use AI in my company?
This topic alone is probably worth its own post, but here's a jarring observation: humans usually don't know what the hell they're doing, no matter the size of the company.
In a small business it's all about survival. You do what needs to be done. You come up with processes, but they're short-lived because your business changes so much. This usually remains chaotic until someone (usually the founder or an incredibly talented ops person) sees through everything and becomes the real glue in the business. They become the first of the many Glue People over the lifetime of the company.
In the mid-market you have more resilient processes, but they're not robust enough to cover the whole org. This creates a gap between management and the rest of the workforce. ERP systems get deployed so people will stop using Excel for everything, but what really happens is they go from email → Excel → email to email → ERP → Excel → ERP → email. Work gets more bloated, not more efficient. Companies slow down, and "Glue People" appear to offset the slowness: the go-to person for all things IT/logistics/ops/etc. These people act like Loki at the end of season 2, holding together the company's invisible best practices.

In the enterprise, codifying these Glue People has already been addressed, but instead of reaching efficiency in value creation, these organizations reach efficiency in consensus generation. The work processes are so convoluted and the business so complicated that nobody understands the entire thing. Reports often switch direction, and people massage documents until they show what management wants to see rather than an ultimate source of truth. Because the only thing that matters in the enterprise is Business As Usual (BAU), which is just a fancy term for an alliance of Glue People working together.
All three problems – survival, Glue People, and BAU – stem from the same thing: data is a fucking mess in every company.
That's the reality into which we're trying to shove AI. That's why 95% of AI projects fail (although a sample size of 52 and an unclear methodology of "talking to people at a conference" hardly constitute a scientific study, so take that MIT report with a grain of salt).
Over those 300 hours of interviews I constantly tried to identify what made the Glue People so vital. What did they know that their processes, systems, and best practices didn't capture? And why was it never captured?
Management can see that there's a monster Excel sheet half a department is using, and that uploading and downloading stuff in there creates inefficiencies. What they don't see is that the Excel is full of macros Jim built 12 years ago. Jim isn't even at the company anymore, but those macros were built on his decade-long understanding of the intricate details of handling supplier and vendor relationships, and nobody has codified that knowledge since.
Because of Glue People, the rest of the organization is complacent. You don't have to be able to explain why things work the way they work.
You can just find Jim and ask him.
But you can't boil your internal training manuals down to the one-liner "ask Jim". Which is why official processes exist – but after every onboarding session, new employees usually get a coffee with someone who tells them "how work is really done here".
Make AI think less, not more
So it would seem that at an organizational level, neither AI nor humans really think or learn. We're just winging it at scale, and then expecting AI to do the same.
AI labs sold us the idea that AI can do that, but it cannot.
So what if the solution is not AI thinking more, but humans thinking more?
What if you integrated AI in ways where it doesn't have to figure out how to wing it? What if you compartmentalized it?
I launched this blog a year ago with the idea of narrow agents, but there was one crucial bit missing: tacit knowledge. The actual work processes of the Glue People.
So, based on the work I'm doing with a client in Australia, I started testing my theories on how to decode Glue People from those transcripts.
I have built a system that can codify unwritten processes from raw interviews. It reaches 90%+ execution accuracy from a single Loom video, and it creates full org charts with complete documentation.
It's built on a concept that my client called "intelligence agnostic work".
It's basically productizing glue.
We call it Jig, and I'll write more about it in the near future. It's the missing link that could make AlfredOS a truly useful tool, remove the need for deep work on n8n workflow creation for your business, and surface the actual ROI in AI.
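To give a flavour of what "codifying unwritten processes" means in practice, here's a toy sketch of the transcript-to-procedure step. Everything in it is hypothetical – a made-up llm() wrapper, my own prompt, my own field names – and it is emphatically not the actual Jig implementation, just the shape of the idea: interviews go in, explicit and reviewable procedures come out.

```python
# Toy sketch: turn one raw interview transcript into a structured procedure.
# The llm() helper, prompt, and field names are all hypothetical - this is not Jig.
import json

EXTRACTION_PROMPT = """You are documenting an unwritten business process.
From the interview transcript below, extract:
- steps: ordered, each with an owner and the system or tool used
- exceptions: the "unless..." rules the interviewee mentions in passing
- tribal_knowledge: anything they know that no official document states
Return JSON with the keys: steps, exceptions, tribal_knowledge.

Transcript:
{transcript}
"""

def codify_process(transcript: str, llm) -> dict:
    """One interview in, one explicit, reviewable procedure out."""
    raw = llm(EXTRACTION_PROMPT.format(transcript=transcript))
    return json.loads(raw)
```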
The AI race is won at BAU.
Whoever decodes how they actually do BAU and converts it into "intelligence agnostic work" will succeed at AI integration. Not by shoving a fancy AI agent or a chatbot into a company. Not by buying a ChatGPT Team or Microsoft Copilot license. Not even by learning n8n.
But by understanding the whats and the whys of business as usual.
Public Service Announcement
I know it's not nice to announce something like this and then disappear. But I will do just that. The reason for my absence over the last few weeks is that my wife and I have been doing something we dubbed "extreme nesting". Our house wasn't in very good shape, as we learned at the beginning of this year, and with the due date looming I found myself remodeling the whole thing to prepare for the baby's arrival.
Once she's here, I'll go offline for a few weeks. This means I won't be available for calls, won't be taking new clients, and those participating in the AI Operator Bootcamp will need to wait until November to pick it back up. It will happen, just with some delay.
I hoped I could deliver it before the baby arrived, but life happened. (I'll add an extra surprise gift to make up for the interruption of service.)
So yeah, a few of you have asked me if I'm still doing the Lumberjack. Yes, I am, and from November you can expect things to go back to normal with the blog and my services.
But until then, I'll put down my axe and switch to father mode. ❤️