A Journey with Artificial Intelligence
From a late-night scroll to building products with AI — what years of working with these tools actually taught me about thinking, communication, and where the human still has to lead.
September 2022
I'm scrolling Instagram. It's evening, the kind of quiet scrolling that doesn't really have a destination. And then something stops me.
A video. Nature footage — or what looks like nature footage. Something about the movement, the light, the way it transitions feels unlike anything a camera could capture. I watch it again. And again. I know immediately: this is AI. And I need to know how.
Not in a passive, curious kind of way. In the way where you can't put the phone down until you find an answer. How are these made? What tools? What's the process? That urge to explore, to understand, to go deeper — it was immediate and it never left.
I left a comment on a YouTube video asking what tools were used. The creator responded quickly and pointed me toward Kaiber. That was the beginning.
Learning to Communicate With a Machine
Kaiber pulled me in fast. I registered, started experimenting, and within the first few sessions I understood something that would shape everything that followed: the output is only as good as what you put in.
Prompt engineering — before I even knew that was the term for it — became an obsession. How you structure a sentence, what you include, what you leave out, the order of descriptors, the specificity of a mood or a color or a movement. All of it mattered. And I realized this wasn't new to me at all. I had always believed that communication is the foundation of everything — in relationships, in work, in life. Now I was applying that same principle to a machine, and the machine was responding.
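The structure-matters principle can be made concrete. Here is a minimal sketch of the kind of prompt template I mean; the field names and their ordering are illustrative, not any tool's official format:

```python
# Illustrative prompt builder: descriptors are kept in a deliberate
# order (subject first, then mood, palette, movement, style), and
# empty fields are omitted entirely rather than padded with filler.
def build_prompt(subject, mood=None, palette=None, movement=None, style=None):
    parts = [subject]
    for label, value in [("mood", mood), ("palette", palette),
                         ("movement", movement), ("style", style)]:
        if value:
            parts.append(f"{label}: {value}")
    return ", ".join(parts)

prompt = build_prompt(
    subject="a forest at dawn",
    mood="quiet, expectant",
    palette="muted greens and gold",
    movement="slow drift of fog",
)
```

The point of a template like this isn't automation; it's that writing one forces you to decide what you actually mean before the model ever sees a word.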
I wasn't just creating videos. I was developing a vocabulary for a new kind of conversation.
The preview images Kaiber generated — even at lower quality — started filling my library. Not because they were technically perfect, but because they came from somewhere real. I was expressing something through them. Emotions I didn't have words for yet. Aesthetics I had carried around in my head for years without a way to externalize them. Suddenly I had a way.
A Space to Heal
The year that followed was one of the hardest of my life.
My father passed away. His absence left a kind of silence I didn't know how to fill. Grief has a way of making everything feel both urgent and meaningless at the same time. You reach for things that make you feel alive. You reach for things that give you a sense of continuity, of self, when everything around you is shifting.
For me, one of those things was generative AI.
I kept creating. AI-generated images, visual explorations, abstract representations of things I was feeling but couldn't say out loud. I shared some of it on Instagram. Other creators found me. I found them. There's something quietly powerful about connecting with people who understand that art doesn't need a gallery or a credential — just an intention and a way to make it visible.
What I understand now, looking back, is that the tool was never the healer. I was. The tool gave me a channel. The process of making something — of taking an inner state and externalizing it — that was the thing that helped. AI just made that process accessible at a moment when I needed it most.
Going Deeper: Google, Vertex AI, and a Notebook
At some point, creating images and videos wasn't enough. I needed to understand what was actually happening inside these systems. How does a model take a string of words and produce an image? What is a token? What does it mean to fine-tune something? Why does rephrasing a prompt by three words produce a completely different result?
I took Google's prompt engineering courses on Vertex AI. Then I kept going — safety, responsible AI, the ethics of these systems, the mechanics of how large language models are trained and evaluated. I took notes by hand, in a notebook. Still do. There's something about writing things down physically that makes them stay — a different kind of encoding than typing.
Getting certified wasn't the point. Understanding was. And understanding changed the way I worked with every model I touched from that point forward. I stopped being surprised by the strange outputs. I stopped being frustrated by the limitations. I started seeing the system clearly — what it can do, why it does what it does, and where the human still needs to lead.
A model is a reflection of its training data, its architecture, the decisions made by the people who built it. When you use it, you are in a relationship with all of those decisions, mediated through language. The quality of that relationship depends entirely on your side of it.
AI as a Software Engineer
I'm a software engineer. I've been building things — applications, systems, products — for years. And like most developers, I found myself negotiating a new relationship with AI as it entered my workflow.
At first I searched for answers the old way. Stack Overflow, documentation, trial and error. I felt something like guilt about using AI for coding questions. As if reaching for it was an admission that I didn't know enough. As if it made the work less mine.
I got over that quickly once I actually started using it.
As my codebases grew, as projects became more complex, my usage grew with them. I was using AI to write entire components, configure deployment environments, debug CORS issues at midnight before a deadline, generate test cases, and document code I had written months ago. The time I saved on syntax and boilerplate went directly into architecture decisions, integration work, and the kind of thinking that actually requires a human.
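For the CORS case, the kind of fix an assistant typically hands back at midnight looks something like this — a stdlib-only sketch, with a placeholder origin standing in for whatever frontend is actually calling the API:

```python
# Minimal sketch of a server that answers browser CORS preflights.
# The allowed origin below is hypothetical; in practice it would be
# the deployed frontend's URL, not a wildcard.
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGIN = "https://app.example.com"  # placeholder frontend origin

class CORSHandler(BaseHTTPRequestHandler):
    def _send_cors_headers(self):
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")

    def do_OPTIONS(self):
        # Browser preflight: no body, just the permission headers.
        self.send_response(204)
        self._send_cors_headers()
        self.end_headers()

    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self._send_cors_headers()
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

The value of the assistant here isn't that the code is hard to write. It's that at midnight, under deadline, it collapses an hour of header-by-header guesswork into a minute of review.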
Claude changed something for me. The responses felt more grounded, more honest about uncertainty, more willing to say "I don't know" or "this might not be the best approach." I started trusting it more, which paradoxically made me use it more carefully. Trust and scrutiny aren't opposites — they work together when you're building something real.
The model is never the variable — you are.
Two Ways to Work With an Agent
As AI coding tools matured — Cursor, Claude Code, Codex — the question shifted from "should I use AI for this?" to "how much should I hand over?"
The first is full delegation. A company needed a landing page. One screen, a clear CTA, a contact email. I used Claude to craft the exact prompt, then fed that prompt into Codex. Full page, generated in one shot. Ten minutes from requirement to something shippable. That's where the line sits: if you're in full control of what was generated, if you can own it, debug it, and extend it, then full delegation is a legitimate and efficient choice.
The second is block by block. On complex, long-running projects where every architectural decision compounds over time, I write code deliberately. I use AI to research, to generate options, to write a specific function I'll then read carefully before it touches the codebase. I decide what gets added. I decide what stays. The AI is a fast, knowledgeable collaborator — but I'm the one who has to live in this code, maintain it, explain it to clients, extend it under pressure.
The choice between these two modes isn't about AI capability. It's about the nature of the responsibility. Simple, contained scope? Delegate freely. Complex, evolving, high-stakes? Stay close to every line.
The Fractal Principle
I've believed something for a long time — long before AI existed in any form I recognized. Technology is always about the person using it. Always.
A hammer doesn't build a house. A person with a hammer builds a house. The quality of the house depends on the person's vision, skill, judgment, and care. The hammer is just a hammer.
AI makes this impossible to ignore because it speaks back. It generates. It appears to reason. It mimics the most distinctly human things we do — language, creativity, problem-solving. So the temptation to center the tool is stronger than ever.
But the principle holds. A model responds to what you give it. Your clarity, your understanding of the domain, your ability to recognize a good output from a bad one — these are what determine the value of anything AI produces in your work. The model is a multiplier. What it multiplies is you.
I think of this as a fractal. The same pattern repeating at every scale. You and the hammer. You and the computer. You and the LLM. At every level of technological complexity, the human is still the generative force. We made these tools in our image — and in using them well, we learn something about ourselves.
That's what years of working with AI have given me, beyond the shipped features and the saved hours. A clearer picture of my own thinking. That clarity is worth more than any generated component.
Where I Am Now
I run VibeIT, a software engineering and DevOps company. I build products, lead teams, make architecture decisions, write code, configure infrastructure, create content, and think about the future of this industry every day.
AI is embedded in all of it. In the development workflow. In the content creation process. In the way I research, plan, document, and communicate. It's not a separate layer — it's woven in, in the places where it adds real value.
And I still open my notebook when I need to remember something important.
The tools change. The person using them is the constant. That's always been true. It's just easier to see now.