Don’t Let AI Turn You Into a Button-Presser
My two cents on reading, thinking, and staying sharp in the age of AI.
Recently, I finished the book “Understanding Eventsourcing” by Martin Dilger. I quite enjoyed it: it is well written and free of fluff. I picked it up because, while I had worked a bit with existing event-sourced systems, I had never needed to build or change one. So I was eager to learn more about this style of architecture and, as it turns out, about a way of thinking about software systems in general.
However, here it comes: after finishing it, I started to ask myself: is reading a book for work now a waste of time, a merely “academic” activity like reading a textbook in my free time? Is there still a good reason to engage with this kind of long-form material in the age of speed, AI, and multitasking?
I pondered this question for quite some time, and I arrived at an answer: yes, there absolutely is. In fact, it matters more than ever. Why?
Firstly, and this is not AI-related: in recent years we have been bombarded with huge amounts of instant-gratification, short-form content in all areas of life. Just take TikTok, whose whole business model is built on it, or YouTube Shorts for everyone over 30. This kind of content is far more engaging for our brains than long-form material. Sadly, I can see the effect on my own attention span.
Something similar is now happening in engineering: with AI, we can work on more than one thing at once. And while multitasking still does not work for our brains, and AI will not change that, many of us are now expected to juggle multiple tasks in parallel. Engaging with long-form content gives your brain a welcome break from that.
So my first point is: we need to train our brains to be able to concentrate on one complex thing for more than 30 seconds. I think many of us have problems with this now.
Secondly, if we do not educate ourselves, we will become human machines that press Enter on whatever the AI suggests. Without knowing how things work under the hood, you cannot make informed decisions and are doomed to accept everything until it happens to work the way you want. Some would argue that this is not bad at all, as long as it eventually works as required, and that AI does not really need a human in the loop. But that is like going to a doctor with a headache and simply agreeing as they first inspect your ears, then amputate your left leg, and finally give you some ibuprofen. Well, eventually your headache disappears.
As engineers, we are responsible for what we build. Whether we write the code ourselves or not does not matter. If you do not understand it, you cannot take responsibility for it.
So keep learning. You can read a chapter while you wait for Claude to finish your prompt.
These are just my two cents on learning and AI. They might not be novel, but I felt the need to write them once more.
