I Use AI to Build Software and I'm Not Entirely Comfortable With It
On taking AI-assisted development seriously whilst taking the criticism seriously too.
I use AI to write quite a lot of code. I've built most of my current infrastructure around it (containerised dev environments, governance hooks, automated testing gates, the lot) and I'm not confessing this so much as just saying it, because I think transparency is the minimum viable honesty when you're working this way.
But I'm also not here to tell you it's fine, because I'm genuinely not sure it's entirely fine.
The thing is, I came to this from the security side. I spent years as a CISO, thinking about risk frameworks and governance and what happens when systems fail in ways nobody planned for. So when I started building software products (medical billing, media audit compliance, news intelligence) and using LLMs as part of that process, I didn't just open a chat window and start prompting. I designed the workflow from scratch, with CI pipelines, cross-project knowledge systems, and testing gates that catch the kinds of mistakes AI is quietly confident about. The architecture around the AI is, in some ways, more work than just writing the code myself would have been, and I think that's an important point even if I'm aware it might sound like rationalisation.
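For flavour, here's a minimal sketch of what I mean by a testing gate: a script CI runs before anything merges, which fails the build unless linting and a coverage floor both pass. The commands, the `src` path, and the 85% threshold are illustrative assumptions (it presumes `ruff` and `pytest-cov` are installed), not my actual pipeline.

```python
#!/usr/bin/env python3
"""Illustrative CI merge gate: fail the build unless every check passes.

A sketch, not my real pipeline; the commands and threshold are examples.
"""
import subprocess
import sys

# Checks the gate runs, in order; each is (label, command).
CHECKS = [
    ("lint", ["ruff", "check", "src"]),
    # Run the suite and fail if coverage drops below an (illustrative) 85% floor.
    ("tests + coverage floor", ["pytest", "--quiet", "--cov=src", "--cov-fail-under=85"]),
]

def main() -> int:
    for label, cmd in CHECKS:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"GATE FAILED: {label}", file=sys.stderr)
            return 1  # Non-zero exit fails the CI job, which blocks the merge.
    print("All gates passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The point isn't the specific checks; it's that the gate is mechanical and boring, so AI output doesn't get to merge on charm.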
The criticism I take most seriously is about training data: where it came from, who made the things the model learned from, whether they were asked, whether they were compensated. These aren't trivial questions and I don't have good answers to them. Nobody does, really, because the companies involved have been deliberately vague about it, which, in my experience of governance, is never a reassuring sign.
I've seen people frame AI-assisted development as a kind of creative theft at scale, and I don't think that's entirely wrong. The fact that I benefit from these tools doesn't mean I get to dismiss the people who were never given a choice about contributing to them. That tension is real, and I sit with it. It hasn't stopped me using the tools, which you could argue makes me a hypocrite. I'd probably argue I'm a pragmatist, but the line between those two things is thinner than I'd like.
And training data is only one part of it. There's the economic displacement angle: I'm a solo developer doing work that would have employed a small team, and I benefit from that directly. There's the concentration of power in a handful of AI companies that I'm now dependent on. There's the genuine uncertainty about whether AI-assisted code is making software better or worse across the industry (I've built governance to catch the mistakes, but most people haven't). There's the environmental cost of all this compute. Each of these deserves more than a paragraph, and I intend to write about them properly, but I want to at least name them here rather than wave vaguely at "unresolved ethics" and move on.
In terms of what AI actually does in my workflow, it's a power tool, and that's the most honest framing I've got. It doesn't decide what to build, it doesn't self-organise, and it doesn't design governance systems or assessment methodologies or work out which regulations apply to a medical billing platform in Alberta. It doesn't wake up at 3am because it realised the audit trail architecture won't scale. I do those things. The AI writes code faster than I can type it, and sometimes it writes better code than I would have on a first pass, and sometimes it writes something so confidently wrong that it would have sailed through a code review if I hadn't built the testing infrastructure to catch it.
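To make "confidently wrong" concrete, here's a hypothetical example in the spirit of the billing work (invented for illustration, not code from my products): money arithmetic done in binary floats looks perfectly reasonable in review, and fails exactly on the half-cent boundaries an auditor cares about.

```python
from decimal import Decimal, ROUND_HALF_UP

# What the gate insists on: exact decimal arithmetic for money.
def money_round(amount: Decimal) -> Decimal:
    """Round an amount to cents, with halves always going up."""
    return amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

def test_half_cent_boundary():
    # The billing rule: 2.675 rounds half-up to 2.68.
    assert money_round(Decimal("2.675")) == Decimal("2.68")
    # The confidently-wrong float version disagrees: the double nearest
    # 2.675 is 2.67499999999999982..., so round() lands on 2.67 instead.
    assert round(2.675, 2) == 2.67
```

A diff that swaps `Decimal` for `float` reads fine, reviews fine, and quietly produces wrong invoices; a boundary test like this is what actually stops it.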
So when someone says "AI did the work," I think that's about as accurate as saying a nail gun built a house. The tool matters, but the person pointing it matters more. The plans, the engineering judgment, the decisions about what not to build: those are entirely human (at least for now).
Also worth saying: I'm basically a solo developer building multiple regulated products across multiple jurisdictions. I'm one of four founding partners in a media audit company operating in Canada, the UK, and South Africa, and I designed our 31-metric assessment methodology from scratch because nothing suitable existed (which is a pattern, since I once invented a system interface maturity model at Bidvest Bank for the same reason). The AI doesn't do any of that thinking. It helps me move faster on the implementation, which means I can build things that would otherwise require a team I can't afford and don't have.
There's an impostor syndrome angle here that I should probably be honest about. I look at the broader industry, at the hundreds of thousands of developers who are smarter than me, who've been doing this longer, who have formal CS degrees and opinions about type systems, and I think, who am I to be building this stuff? But I've learned to recognise that voice as mostly anxiety rather than useful signal, since the work either holds up to scrutiny or it doesn't, the tests pass or they don't, and the governance framework is sound or it isn't. Those things are verifiable, and I've made them verifiable on purpose, because I don't trust vibes-based confidence (especially my own).
The question I keep coming back to isn't "should we use AI," because that ship has sailed and framing it as a yes/no question misses the point anyway. The more interesting question is how we use it without abandoning the discipline that makes software trustworthy, because the temptation is real. AI makes it very easy to move fast and skip the boring parts, and the boring parts are where the governance lives: testing, audit trails, access controls, documentation. I often describe it as building the plumbing for the house. Nobody wants to talk about plumbing, nobody wants to demo plumbing, but if you don't build the plumbing right you're going to be walking in crap every day.
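To show what I mean by plumbing, here's a generic sketch of one pipe: an append-only audit log where every entry carries the hash of the one before it, so any quiet edit breaks the chain. The field names are made up for illustration; this isn't the architecture of any of my products.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_entry(log: list[dict], actor: str, action: str) -> dict:
    """Append a tamper-evident entry: each record hashes its predecessor."""
    prev_hash = log[-1]["entry_hash"] if log else "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "prev_hash": prev_hash,
    }
    # Hash the entry's canonical JSON form, chained to the previous hash.
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; an edited or deleted entry breaks the chain."""
    prev_hash = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True
```

Nobody will ever ask to demo `verify_chain`, but it's the difference between an audit trail and a diary.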
I find self-promotion genuinely uncomfortable, and I'm not naturally inclined to write posts like this one. But I've come to believe that doing the work quietly isn't enough if the work itself matters, and I think the question of how to use AI responsibly in software development does matter, not because I've got the answer (I definitely haven't), but because the conversation needs more people who are actually building things this way and being honest about the trade-offs, rather than either evangelising or boycotting.
I'm not trying to convince anyone, and I'm not selling a course or a framework or a newsletter. I'm a person who uses these tools every day, who's thought carefully about the infrastructure around them, and who still isn't sure the ethics are resolved. I think that's a reasonable position to hold, even if it's not a particularly comfortable one.
Maybe especially then.