Photo by Pavel Danilyuk
Using an LLM in your daily life is like joining forces with a brilliantly devious sidekick who confidently serves up solutions that are just plausible enough to waste your time.
Disclaimer: although I have a technical background and a reasonable grasp of the technology behind AI, I am not an AI expert.
I have always been fascinated by technology. I started playing around with computers around the age of 12 and distinctly remember proclaiming to my parents, at age 15, that I wanted to become a computer scientist.
I started experimenting with 3D printing around 2012 and got into Bitcoin trading around the same time. I have always liked exploring technological frontiers and wrapping my head around new concepts (it’s all a big puzzle, waiting to be solved). Yet although AI exploded in the last few years, it didn’t really spark much excitement in me.
Don’t get me wrong, I use LLMs throughout my day: they autocomplete my code, they explain concepts that are new to me, and I used ChatGPT extensively to iterate on my ideas while building the core engine of Nestruo.
But most of the time it feels like I have joined forces with an evil sidekick. It almost always convinces me that it’s really smart and that it can help me in so many ways. And just when I start trusting it, it throws a wrench in the works by serving up a seemingly correct and very clever answer. The moment I start working with that answer, I discover something is wrong.
It isn’t immediately apparent where the error lies, so you dive deeper into the problem. The deeper you get, the more you become convinced that this error had to be deliberately planted. No way something so smart can generate something so plausible which is still so utterly wrong. It must be evil. And when you confront it with its error it just plainly answers, “Oh yes, I was wrong, here is the correct solution,” while laughing out loud (although I have never heard it, I imagine those datacenters are filled with evil laughs).
I experienced this firsthand while building Nestruo. In the beginning, when I didn’t fully grasp the problem space, I was convinced that, with the right prompt, AI would be able to generate snippets of code that would solve specific parts of the problem. But almost every time I prompted it, it generated code that looked smart and correct but in the end didn’t accomplish the requested goal. It was my evil sidekick silently laughing in my face.
I gradually dove deeper into the problem space and started iterating on the problem with note-taking and diagrams while building multiple proofs of concept. Throughout this process the AI helped me explore the problem space and generate ideas. Whenever I got stuck on a specific part, I would prompt the AI and it would list a set of known algorithms which I could then explore further.
So LLMs are, for me, fancy search engines. They can generate plausible code snippets while I’m coding, and they save me a lot of time doing that. They help me explore the vast human knowledge base by returning results in seconds that would have taken me days of Googling.
They might even, now or in the near future, be able to generate whole codebases for simple apps, or even submit clever-sounding pull requests to your project.
But you should never trust them. They are not your trusted friend; they are your evil sidekick.