If you’re anything like me, you have read a billion and one blog posts, articles, essays, and discussions on AI and LLMs. As is customary in our times, most opinions I’ve encountered sit at the extremes: “AI will democratize access to knowledge and will save us all” versus “AI will put us all out of a job and should be banned”.

I myself have grave concerns about the long-term societal effects of LLMs, even though I use them extensively at work. I have now found an essay that captures the essence of my concerns exceptionally well: The machines are fine. I’m worried about us.

[…] When I see junior PhD students entering the field now, I see something different. I see students who reach for the agent before they reach for the textbook. Who ask Claude to explain a paper instead of reading it. Who ask Claude to implement a mathematical model in Python instead of trying, failing, staring at the error message, failing again, and eventually understanding not just the model but the dozen adjacent things they had to learn in order to get it working. The failures are the curriculum. The error messages are the syllabus. Every hour you spend confused is an hour you spend building the infrastructure inside your own head that will eventually let you do original work. There is no shortcut through that process that doesn’t leave you diminished on the other side.

I see the same thing at work. Junior developers, barely out of university, come in and produce code that they have not thought through and do not really understand. A disturbing number of candidates in our interview process have LLMs write their take-home assignments for them, and then cannot answer basic questions about them in the follow-up interview.

[…] The strange thing is that we already know this. We have always known this. […] Reading the solution manual and nodding along feels like understanding. It is not understanding. Every student who has tried to coast through a problem set by reading the solutions and then bombed the exam knows this in their bones. We have centuries of accumulated pedagogical wisdom telling us that the attempt, including the failed attempt, is where the learning lives. And yet, somehow, when it comes to AI agents, we’ve collectively decided that maybe this time it’s different. That maybe nodding at Claude’s output is a substitute for doing the calculation yourself. It isn’t. We knew that before LLMs existed. We seem to have forgotten it the moment they became convenient.

Centuries of pedagogy, defeated by a chat window.

The essay focuses on education and on students, which makes the association with junior developers an easy one, but I do not think the problem is at all specific to them. There’s no small amount of folk wisdom about the importance of lifelong learning, to the point that those words together have somehow lost all meaning; but it’s no less true for that.

Therefore, I think that over the next few years we’ll also begin to see detrimental effects on people with pre-existing experience, knowledge, and wisdom. Skills you do not use will atrophy, and insights will grow more shallow. We are already seeing a rise in uniformity in people’s choice of words online and in how they reason, and I can’t help but think that this is just the beginning.

[…] But the real threat is […] quieter, and more boring, and therefore more dangerous. The real threat is a slow, comfortable drift toward not understanding what you’re doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can’t produce understanding. Who know what buttons to press but not why those buttons exist. Who can get a paper through peer review but can’t sit in a room with a colleague and explain, from the ground up, why the third term in their expansion has the sign that it does.

Frank Herbert, in God Emperor of Dune, has a character observe: “What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking; there’s the real danger.” Herbert was writing science fiction. I’m writing about my office. The distance between those two things has gotten uncomfortably small.

A distinction must be made, however, because not all problems and tasks are created equal. Sometimes learning and development are legitimately not the point, and may even be irrelevant: a restaurant owner who wants a website for their restaurant does not by any measure need or want, and is not expected, to acquire expertise in web development. They do not care about HTML and JavaScript, DNS sounds like a drug, and cookies are something to snack on. They just want a website with their menu on it.

For this type of task, LLMs really do democratize access to knowledge, dramatically lowering the barriers to entry and enabling vastly more people to achieve cool things than would otherwise be possible. This example does, to me, feel like a win-win. One might argue that here too AI is taking jobs, because instead of hiring a web developer, the restaurant owner can just prompt their favorite AI agent - but I don’t think that’s really true: realistically, how likely would a restaurant owner be to actually hire a developer, given the expense? And if, in a world with AI, they finally can because it has become cheap enough, all the better: the point stands.

The tricky part is that it’s rarely obvious which type of task you’re facing. There are learning opportunities everywhere and in everything, but they are not all equally valuable. Nonetheless, the danger remains: it is just so very easy to prompt the AI first for everything instead of going through the trouble of using your own brain. I know this from experience. It is unnerving how quickly this starts to feel normal. It is so fast, so efficient, so effortless! The AI thinks my insights are brilliant and my questions are sharp! But for some problems, that is a trap, because the effort you deny yourself is wisdom and experience that you are rejecting. And there is nobody there to point it out; there will be no realization, except perhaps years and years later.

And yet, none of this particularly matters on the individual level. None of us can really do anything about it, because in the presence of sufficiently fierce competition, your choices are to utilize any advantage you can get, long-term consequences be damned… or to risk being out-competed and left behind. It’s Moloch, once again.