
From "Almost a Coder" to Actually Shipping Products

I built my own PC. I had a Raspberry Pi. I tried to learn Python three times. I understood variables and loops but never got past the tutorial stage. Every attempt hit the same wall: the gap between "I understand the concept" and "I can build something useful" was too wide.

Then AI tools happened.

In 18 days, I went from zero shipped products to running a small digital products business with 5 templates, a free web app, a deployed website, and a content library. I'm tracking every session, and the data tells a story I didn't expect.

The timeline (condensed)

Day 1: First HTML/JS dashboard, a grocery price comparison tool. I didn't write the code, but I specified what it should do and iterated on the output.
Day 3: Automated a daily AI newsletter via GitHub Actions. First time touching CI/CD; the workflow runs on a cron schedule.
Day 4: Processed 16,000 emails with AI classification across 92 parallel agents, then built a searchable knowledge base from the results.
Day 8: First custom skill for Claude Code, a reusable research methodology. Started treating AI tools as extensible, not just conversational.
Day 11: Built a complete HTML5 merge game: 184 creatures, 16 biomes, power-ups, island progression. It wasn't fun (more on that later), but it was real.
Day 16: Six-page web app with a quiz, an ingredient checker, a 257-product database, and UK-specific guidance. A real product for a real community.
Day 18: Five products packaged, website deployed, content library written. A real business, not a side project.
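For flavour, here's a rough Python sketch of what a fan-out run like Day 4's might look like. Every name here is hypothetical: `classify_email` stands in for a real AI call, and this is an illustration of the parallel pattern, not the code that actually ran.

```python
from concurrent.futures import ThreadPoolExecutor

def classify_email(body: str) -> str:
    """Hypothetical stand-in for an AI classification call."""
    return "newsletter" if "unsubscribe" in body.lower() else "personal"

def classify_batch(emails: list[str], workers: int = 92) -> list[str]:
    # Fan the batch out across parallel workers, in the spirit of
    # the 92-agent run described above. map() preserves input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(classify_email, emails))

emails = ["Click unsubscribe to stop these", "Lunch tomorrow?"]
print(classify_batch(emails))  # -> ['newsletter', 'personal']
```

The point isn't the ten lines of Python; it's that I could describe "classify these in parallel and keep the order" in a sentence and review the result.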

What I actually do

I don't write code. I specify what I want, review what the AI produces, and iterate. My value isn't in the typing: it's in knowing what to build, what questions to ask, and when something is wrong.

Here's what a typical session looks like:

  1. I describe what I want in plain language, usually one sentence
  2. The AI builds it, makes mistakes, I catch them, we iterate
  3. I test it by using it, spot what doesn't feel right, and describe the fix
  4. We go back and forth until it works properly

The "coding" is maybe 5% of my input. The other 95% is product thinking: what should this do, who is it for, what's the simplest version, and is this actually useful.

What I got wrong

The merge game

I spent significant time building a mobile merge game. 184 creatures, economy systems, power-ups, island progression. The problem? The core merge mechanic wasn't fun. I'd built the restaurant's pricing menu before the food tasted good.

The lesson was painful but important: AI tools make it dangerously easy to add features. You can build so fast that you skip the step where you check if the foundation works. Speed without direction is just efficient waste.

Over-engineering

Early projects grew too complex too fast. A finance dashboard that started as "show me my spending" turned into a 6-tab Streamlit app with budget modelling and life-change projections. It works, but I could have shipped something useful in a quarter of the time.

I now force myself to ship the simplest version first. The fancy stuff can come in iteration 2, if iteration 1 proves the basic concept works.

Not verifying AI output

AI tools are confidently wrong. They'll write a product description that sounds good but makes claims you can't support. They'll generate data that's plausible but incorrect. They'll structure a document that looks professional but has cross-reference errors.

I learned to verify everything that matters. Not every line of CSS, but every data claim, every product promise, every number.

What surprised me

Skills compound weirdly

Building a grocery dashboard taught me about Chart.js, which I used in a finance dashboard, which taught me about data structures, which helped me build a game, which taught me about UX, which made my next dashboard better.

None of these skills are deep. I can't write a Chart.js plugin from scratch. But I can specify one precisely enough for AI to build it, because I've seen enough to know what's possible and what good looks like.

Domain knowledge is the moat

Anyone can ask AI to build a dashboard. The reason my finance dashboard has value isn't the HTML: it's the 50+ spending categories designed through iterative classification of real data, the budget modelling logic, the life-change projections. That comes from years of actually managing personal finances, not from a prompt.

AI tools amplify domain knowledge. If you know a lot about a subject, you can now produce output at a pace that was previously impossible. If you don't know anything, you'll produce plausible-looking rubbish very quickly.

The process is the product

I've logged 61 sessions over 18 days. My evolution dashboard tracks every session: what was built, how long it took, what skills were used. The data shows I've produced roughly 760 hours' worth of professional-equivalent work.

The individual products are useful. But the process, the workflow, the skill library, the session management system: that's what has lasting value. It's a methodology for turning domain expertise into shipped products using AI tools.

The honest numbers

In 18 days: 61 tracked sessions, 5 templates, a free web app, a deployed website, a content library, and roughly 760 hours' worth of professional-equivalent output.

Those numbers aren't magic. They reflect a specific combination: existing domain knowledge + AI tools + a willingness to ship imperfect things and iterate. Remove any of those three and the numbers would be very different.

What I'd tell someone starting

  1. Start with something you already know about. Your first AI project should use your existing expertise, not teach you a new domain simultaneously.
  2. Ship the ugly version. A working prototype with bad CSS is infinitely more valuable than a beautiful mockup that does nothing.
  3. Verify what matters. If it's going to someone else, check it. If it's just for you, let it be rough.
  4. Build the process, not just the product. Every project should leave you with reusable patterns, skills, and workflows. The compound effect is where the real value lives.
  5. Don't learn to code. Learn to specify. Learn to review. Learn to iterate. The code is the cheapest part now.