35 Comments

Excellent article. I feel like everybody thought that GenAI was a silver bullet, but it's not. As an enterprise GenAI consultant I've seen how complicated it is for employees to learn these new skills, and management is needed to support them - and that's not happening. So they write a couple of crappy prompts, say "this thing does not work" and go back to working as before. On the other hand, I'm using it every day and it totally changed the way I work. Your comparison with Excel is on point. How many people today, 35 years after its first release, are still using Excel like it's 1985, when they could save thousands of hours if they knew how to use it properly? But again, you need to know what problem you're trying to solve before you look into the technology.

author

Thank you Philippe! One thing seems certain: your ability to utilize Excel is not determined solely by your ability to use VLOOKUP. You also need to know when and how to use it, and how to plug it into the rest of the business.

That plugging thing.

That's missing.

But how do we plug it?

Aug 6 · Liked by David Szabo-Stuban

Great read! This articulates well some of the challenges I have noticed in using AI tools for my work, but couldn't quite put my finger on. These are them. Best to you on this path, I'll be back to read more!

author

Thank you Julie! Question is, what next? How do we close the gap?


Yes, what next? When I use these tools, I'm still switching between one and another to get what I need (and re-verifying that things are accurate) because they each seem to be better at one thing and not another, at least for my work. It feels clunky. So, it seems like despite the quick rise and adoption of AI, there are still a lot of missing pieces, context for one. I'm not sure how that gap is filled beyond using the tools, being able to articulate what's not quite right, and working to improve them. It also feels that, given all the AI hype, we all expected it to be perfect out of the gate, which is definitely unrealistic. For reference, I mainly use ChatGPT, Claude, Gemini, NotebookLM. I've played with Perplexity and CoPilot and maybe need to check those out again. I'm open to other suggestions!

Aug 19 · Liked by David Szabo-Stuban

Outstanding article, thanks David

Aug 19 · Liked by David Szabo-Stuban

Outstanding article, thank you David!


This is amazing. You articulated this in a way I’ve felt but couldn’t. There’s so much more to this than proper prompting. Thank you for sharing and I look forward to your adhd insight. Similar struggles here. Best of luck on your new path!

author

Thank you so much! Yes, prompting is just the tip of the iceberg.


The problem is not that AI isn't good, it's that people don't know what to use it for, or how to use it properly. It's not about replacing the human, but about augmenting their intelligence. Asking it to do our work for us only returns subpar results, but that's not how it should be used.

And "prompting" - please don't call it prompt engineering unless you are running thousands of variations of the same prompt, systematically documenting variations, qualifying output, building test systems to evaluate outliers, breaking the same prompts, and statistically analyzing the results to come up with better ways to word those prompts. That's real prompt engineering. All people teach is how to structure prompts better - no engineering involved, sorry.

Anyways, I digress. Using AI properly is about speeding up a human's ability to think: for example, brainstorming, evaluating one's logic or arguments, finding different ways of seeing things, evaluating and assessing code. It's not about having it create these things for you, but about bouncing ideas around, supplementing and empowering the human to make better-informed decisions with generic views and ideas they may not have thought of. It's not about getting the outliers, but leveraging the common, middle ground. ChatGPT is better than any human at finding the average, middle-of-the-road patterns. That's what it should be used for, and when used masterfully, it turns a person into a superhuman.
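To make the "real prompt engineering" idea concrete, the workflow described here could be sketched roughly as below. Everything in this snippet is a hypothetical illustration: `run_model` is a stub standing in for an actual LLM API call, and `score` is a toy quality check, not a real evaluation metric.

```python
import statistics

def run_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; a real harness
    would query an actual model API here."""
    return f"answer to: {prompt}"

def score(output: str, topic: str) -> float:
    """Toy quality check: does the output mention the topic at all?"""
    return 1.0 if topic in output else 0.0

def evaluate_variants(template: str, variations: list[str], topic: str) -> dict:
    """Systematically run each prompt variation several times,
    score the outputs, and aggregate the statistics."""
    results = {}
    for variation in variations:
        prompt = template.format(style=variation, topic=topic)
        scores = [score(run_model(prompt), topic) for _ in range(5)]
        results[variation] = {
            "mean": statistics.mean(scores),
            "stdev": statistics.stdev(scores),
        }
    return results

report = evaluate_variants(
    "Explain {topic} in a {style} tone.",
    ["formal", "casual", "terse"],
    "backpropagation",
)
```

The point of the sketch is the loop structure - variations, repeated runs, scoring, aggregate statistics - which is the part most "prompt engineering" courses skip.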

author

thanks for the thoughtful response Marco. you hit the nail on the head: "people don't know what to use it for". my observation is that becoming a more skilled carpenter will not necessarily translate into building better homes. similarly, I see a systemic issue with how we think about applying technology. (after all, we have the entire knowledge of the human race in our pockets, yet most of us use it to watch reels.) when compared, I think THINKING has 10x more weight than SKILL, like:

utility = 1.1 * skill + 10 * thinking
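Taking that toy formula literally (a purely illustrative sketch - the 1.1 and 10 coefficients are rhetorical guesses from the comment above, not measured values):

```python
# purely illustrative: the weights are the author's rhetorical
# guess at how much thinking matters relative to skill.
def utility(skill: float, thinking: float) -> float:
    return 1.1 * skill + 10 * thinking

# under these weights, a sharp thinker with mediocre prompting skill
# beats a highly trained prompter who doesn't think about the problem
expert_prompter = utility(skill=9, thinking=2)  # ~29.9
sharp_thinker = utility(skill=2, thinking=9)    # ~92.2
```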

i'd wager that Sherlock Holmes would be an incredibly productive person with LLMs even without any prompt engineering training and easily outperform the best educated prompt engineers.

on the note of engineering: this is a notion I tend to meet again and again, mostly from engineers gatekeeping terms. i'm not interested in that conversation because i think it's a silly no true Scotsman fallacy. the world refers to intentional LLM use as prompt engineering whether we like it or not.


oh ya, to add to the "Prompt Engineering" dilemma... true prompt engineering coursework would teach how to get real utility by emphasizing the thinking element, which can be developed through an analytical framework.


Interesting. I always equated learning how to use the AI with learning what it's used for - the same thing. About prompt engineering, the reason I think it's important to make that distinction is not because of gate-keeping; it's because a lot of places advertise things like "learn prompt engineering and you too can land a prompt engineering job at Netflix that pays half a million USD. yay!" When in reality they don't actually teach any engineering, but rather how to organize and structure prompts based on prompting research.

That is not what prompt engineering jobs are looking for, as it can be learned by anyone within a couple of months. They are looking for people who have a systematic understanding of how to test and devise optimal prompts, and that requires an understanding of software development principles. Even if it doesn't require hard coding, understanding coding helps with the logic and flow, as well as the processes. So yes, I think it's important that words mean things and that the way we use them is precise, so it's not misleading. I would really recommend we push for a clear distinction between "learn prompting" vs "learn prompt engineering".

author

well, as the rest of the post explained, while i have my own opinions on it, it's not my fight to fight.

to respond to the "true prompt engineering work" point: yeah, but then you're not teaching prompt engineering, you're just teaching problem solving and critical thinking.

and yes, that is a soft skill that we're really shit at as a species. just look at PISA scores for kids: level 6 is the max, where kids have critical thinking and advanced problem solving skills, and the median level across OECD countries is level 2.

this is why education sucks. we KNOW what you need to thrive but if you actually achieve it we'll collectively treat you with awe like an alien genius.

i'd rather solve this problem one person at a time. it's close to my own zone of genius :)


I totally agree… another odd consideration is that there's a hidden danger in the 'super human' capabilities it delivers to existing, already experienced subject matter experts. SMEs can research and integrate information so much more effectively that there's a danger of creating a mentoring and knowledge transfer crisis as well. I cut my teeth on researching and crunching data, implementing plans, writing proposals, etc. Now if I wanted that done, I'm actually MORE inclined to quickly use AI to pull something together rather than ask someone else to do it. That is a problem we're yet to see manifest.


Can always count on you for an intelligently articulated and scientifically driven deep dive! Definitely along for the new adventure you are on! Let me know how or if I can help!

author

Thank you Matt, so great to see you here!


#ignore

#troll

#dota_slang


Great article David Szabo-Stuban. I think a lot of the problem in enterprise is that they are not investing in training staff in how to use AI to solve problems. Let's face it, learning these platforms is not intuitive for non-geeks. So people go into an app or ChatGPT and expect it to provide solutions to problems that are not clearly defined.

While I'm pretty good at prompt engineering, I'm finding better results interacting with AI conversationally.

Each platform has its own strengths and weaknesses. ChatGPT stinks at providing sources, but Perplexity is good at it. Pi is better than all of them in my experience and has yet to give me an inaccurate or irrelevant source.

While it may not be critical to learn prompt engineering, it'd be a shame not to use AI for the situations where it excels at saving time: outlines, summarizing long articles and reports, clarifying one's writing - all tasks that take an inordinate amount of time without AI.

I'm not giving up on teaching people how to use AI, but I do agree that prompt engineering is not the revolution.


I used to work with the Watson team in the early days on how I could apply Watson to assist with the global help desk strategy, and generative AI has come SO far since then.

The LLMs solved the information scale issues of the old days (we used to need a corpus of millions of documents for a viable business case to train) but we’re stalled at the same point. AI is amazing at finding and re-presenting existing information and awful at understanding it.

Recently I wrote an article on why corporations should NOT replace their help desks with AI - a perfect example of how the outlier problem you identify manifests in a business case. ALL existing knowledge in IT support is, at best, a few years old - and frequently a few days old - as the underlying infrastructure transforms. Thus, all new information is undocumented as it happens in real time in a new environment, and therefore anything new - the 'outliers' - is unsupportable with AI.

Yet corporations have been sold on the idea that a helpdesk will disappear with AI. The whole industry has been focused on the wrong things.

I think AI will eventually have massive impacts on employment, but transformation will be slower than the current bubble promises, as business cases to justify costs are siloed into areas with an actual ROI. Unless of course there's another leap in capabilities - but I don't think it's just a matter of more compute.

author

brilliant stuff, thanks for adding. indeed, LLMs are incredible tools, but there are systemic pieces missing, and those far outweigh whatever best practices you develop in using LLMs


Excellent comment. I totally agree: LLMs aren't about replacing humans, they should be about supplementing the human in the loop. Empower them by addressing the common issues first, then pass things off to customer support personnel with a summary of all attempted solutions, and that way really speed up the expert. For some reason everyone tried to replace humans because of how impressive it seemed, without understanding that it's a tool that works with averages, not with outliers.


Awesome piece, read it from A to Z in one breath.


So, I use a selection of ~30 SOTA models plus Gemini and Claude daily to write code, ideate, discuss plans, and architect large-scale, working implementations of things. The fault is not in the stars, my friend, but in your naive and jaded interpretation of what can be achieved with a computer and your willpower. Also, the use of the term 'AI' at all shows your ignorance of the technology in the market; there is no such thing as AI. What you've been using is a subset of machine learning known as deep learning, and you're right, deep learning is statistical, not qualitative - you get out of it an average of what you put in every single time... Prompt engineering is not worthless; the way you use LLMs and your goals for increasing productivity are what's worthless. Your failure to succeed is an amalgam of your lack of vision and lack of ability.

author

Good for you. I suggest you submit your solution to the ARC-AGI prize (https://arcprize.org/arc), based on your comment it should be a simple enough challenge for you.

Sarcasm aside, you said you use different models to code. 99% of the world doesn't code, and exclusively using coding to determine the utility of any LLM is like using a knife's ability to spread butter to determine its utility. Most people work in businesses that don't employ a single developer, yet they are looking to make use of frontier models somehow. They learn prompting and spend time with it; some still struggle, while some succeed. The causality is unknown. One thing I'm certain of: if people fail to derive a productivity increase from AI (oops), it's not because they don't know how to prompt.

The rest of your comment doesn't deserve a response, because you didn't address any of my arguments. I hope directing your frustration at me helped you in some way.


Fair point - though I would say this: the right prompt escalates your **input** from generic to profound. As such, maybe you're right, prompt engineering might be worthless by itself, and maybe you can get the same results regardless of the prompt - that is to say, if you have taken the time to reduce all of the uncertainties in your request and were able to fully flesh out your concept and the steps required to achieve it, with minimal variance in the outcome, then yes, I agree that prompt engineering may not change the outcome. I think the difference is that people who don't code and want to use this tech to succeed generally need to learn to ask better questions, give better context, and plan their outcomes in a much deeper and more meaningful way. Maybe prompt engineering should be called Idea Engineering haha.

author

And alas, we agree. Engineers are significantly better at compartmentalizing problems. That's a competence that will - in my experience - be a better predictor of increased productivity via LLM use than learning prompt engineering.

Which is why I decided not to teach the latter anymore.

Question is, how do we teach this skill to people en masse? Surely we can’t make everyone a coder.


I'm not sure if you're biased, but what you wrote and what I read online differ quite vastly. Are they real enough?

author

Well, I wrote a lot of things, can you elaborate please? :D


1. Prompt engineering enhanced my verbal communication skills a lot; even my psychiatrist was shocked at my ability to communicate my pain points with them.

2. I somehow feel AI is still being slowly adopted by companies in my country. It might be in an infancy stage, but not a delayed stage.


3. Productivity was enhanced. I could produce Python code chunks with fewer bugs before I fell sick; the only time I was stuck was debugging and troubleshooting them.


Sad to say, it's normal for people to consider before paying. I believe low code won't be here, and won't be real, without the coders who build the base or foundation layer of it for Zapier and Make.com.

Coding allows us to have more customisable features and flexible design; it permits us to deliver better solutions for clients.

You are the one getting my free consultation, I guess. #9gag

#slight_insult

author

It would seem that we both agree that AI adoption is really slow.

My analysis was about why.

Also, I'm really happy that you see a productivity increase and that your verbal skills improved. Thanks for sharing your thoughts! :)


I’m a consultant. My job, in essence, is to solve problems. My view is that businesses are always jumping on the latest and greatest shiny tool without defining basic requirements. E.g., they want to deploy a data analytics and data security platform, and the ops team struggles to implement it - but that’s because exec mgmt or the strategic governance owners haven’t even defined a data classification standard, data mgmt processes, etc. These are problems that require top-down, then bottom-up analysis. There’s a significant initial level of effort in defining the “Situation, Complication, Resolution” that most don’t want to go through.
