A Butler, Not a Copilot - How My Software Development Practice Has Changed with AI
"It’s not the plane - it’s the pilot." - Chuck Yeager
I have had to work under my share of spectacularly incompetent managers over the years. The kind with abysmal handwriting, who Manage-by-Inbox, and who freeze when faced with the conundrum of navigating the choices presented by the break room vending machine. One was so bad and so intrusive that we all learned to keep our hands on the keyboard lest we be discovered "not working," as in not banging out code for 8 hours straight. The only thing such managers had recognized authority over was our employment.
And I've had a couple of spectacularly competent managers who, if they found you leaning back in your chair, feet on the desk and eyes closed, understood that you were working. Not because of blind faith or indifference to their job, but because you had proven yourself and they understood how coding works for the expert software developer: 70% thinking, 20% learning and experimentation, 10% keyboard action.
I was never considered a 10x developer - not by others and certainly not by myself. I was known for 1) solving hard problems and 2) creating code that was stable and secure, but I had to work long hours to deliver on that. Whereas the unicorns were wrapping up their projects and off making mischief elsewhere, I was busy chasing one path, hitting a wall, tearing down lots of misdirected code, and not infrequently starting over in a new direction. Frequently, I spent 60% of my time learning and experimenting and 40% working the keyboard. When I had the solution, there was immense satisfaction in having learned all the ins and outs of how a particular library or system worked. And there it was, in my mental library, ready for the next time I'd need it. THEN I'd be that 10x guy! Except that rarely happened. The technology would change and I'd be climbing the learning curve once again.
It's the slice of "learning and experimenting" that has shifted most significantly since I began pair programming with AI. Instead of sifting through dozens of kinda-sorta related Stack Exchange posts or - egad! - ancient listserv posts, laboring to piece together the disparate snippets that suggest a solution or resolution, I drop the error text into AI and seconds later I have a robust answer or a readout of several possibilities to check. Wizard! In no time I've been presented with a solution AND the supporting rationale. If you're an experienced software developer, how many times have you worked through a convoluted Stack Exchange thread only to have the thread author post "I solved it" without offering a clue as to what the hell they did? Reputation points should be lost for clumsy participation like that.
It has also been a much more pleasant experience than pairing up with another human programmer. When our competencies were matched and our shiny reputations earned, pair programming with another human entailed a lot of negotiation and compromise. This is a good thing. When the pairing was a mismatch, the experience involved a lot of arguing and low-class status games. This is not a good thing.
I'll frame where I think software development as a profession is going with an instructive example...
The Example
These past several weeks I spent perhaps 30 hours working on a custom application to track just about every aspect of my health that needs attention - blood pressure, physical activities such as exercise, eating habits, blood work, scans of this or that - part of my now two-year initiative to return to and monitor peak health. The existing application needed A LOT of love, having been pieced together over the past six years. What I accomplished in a 30-hour solo effort:
Refactored all the code to use a better Python library for accessing PostgreSQL.
Replaced all the billboard.js charts with echarts.
Found and eliminated several key bottlenecks (Python and SQL).
Automated the ability to pull in data from different sources.
Added significant statistical analytical capabilities.
Added test code to 90% of the application.
Added half a dozen maintenance and configuration screens that keep me from having to modify the database directly.
Set up a one-click CI/CD workflow using Jenkins.
Crafted or refactored numerous bash scripts to support the whole effort.
Squashed countless bugs and annoyances.
It performs and looks like an entirely different application. The most satisfying thing about this effort? The dashboard now presents an accurate, integrated, and clear picture of my health status and progress, and from it I'm able to make better-informed decisions. In other words, the software is now a background part of my health initiative, as it should be.
What Does This Mean?
It's pretty clear that software development is at the forefront of professions subject to a "significant unscheduled disassembly" at the virtual hands of AI. This is more than a little ironic. Those of us in the profession thought we were Pygmalion. Turns out, our efforts are ending up more like Frankenstein's. Much as in the run-up to the bursting of the dotcom bubble, a lot of dead wood had accumulated in the coder ranks. From a manager's view, I saw it only get progressively worse. When DEI initiatives and the pandemic hit a fever pitch, the bottom began to fall away. The profession was in need of a detox, and both the lockdowns and AI appear to be just the colonic it needed.
Even so, I'm in the camp that software development as a profession isn't going to go extinct. It will, however, be radically different. It may not even be called software development (or engineering, if you prefer). It may, in fact, resemble something more like the comical career blip known as "prompt engineer."
When I first heard the term "prompt engineer," it was in the context of "Sign up to become a certified Prompt Engineer!" I thought, "Showing up on time now has a certification? Perhaps micro-credentialing has gone a bit too far." Reading further, I found it was a course meant to "craft instructions for AI models." Or more simply, how to talk to the damn computer.
It sorta made sense. For decades I'd been paid to communicate with computers on their terms. Now we want to communicate with them on our terms. Fair enough, but I thought for certain this would be fraught. Turns out that's how things have been playing out. The idea of a computer generating sense from the prompts of a lot of non-technical types burdened with layers of cognitive bias, conflicting beliefs, divergent expectations, and an uncomfortably bloated GI tract from too much lunchtime bread pudding is a little unsettling.
Whereas learning a new computer language never made it easier to speak with my fellow humans (and often made it worse), learning some of the "prompt engineer" instructions can be more broadly applied to communicating with peers and the public at large. The skill of asking better questions based on the feedback from our initial question, for example, is useful anywhere. It reminds me of several of the presuppositions from Neuro-Linguistic Programming:
"There is no failure, only feedback."
"The meaning of the communication is the response you get."
Whether this transference of learning will happen remains to be seen. Having watched people's skill at relating to each other diminish under smartphones and social media, I mostly expect them to continue speaking at each other as if they were computers.
As recently as this past April (Coding with Grok and GIGO AI) I wrote about my experience of coding with AI. It was a good experience and I was impressed at how far things had come. In my most recent effort, just four months later, AI's skill (Grok, ChatGPT, Phind) has improved markedly, particularly in the area of error and system log file interpretation. I've used AI to tighten the security on the home network and further automate mundane tasks and notifications. For the most part, this is computers prompting computers. Given that computer code and error reporting are well-structured text, it's no surprise AI can generate a detailed reply.
It's also done a decent job with tasks along the lines of "Here's a function that generates a chart using billboard.js. Refactor it to use echarts and include a test harness." Again, it's a script designed to speak computer-ese being fed to a computer. It worked quite nicely.
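To make that concrete, here's a hedged sketch (not the actual code from my application) of the kind of mechanical translation involved. billboard.js keeps each data series in a `columns` array of the form `['name', v1, v2, ...]`, while echarts wants an option object with explicit axes and a `series` array. The `bbToEChartsOption` helper, the sample readings, and the mini test harness below are all hypothetical, but they show why this class of task is such a good fit for AI:

```javascript
// Hypothetical translator: billboard.js "columns" data -> echarts option object.
function bbToEChartsOption(bbConfig) {
  const type = bbConfig.data.type || 'line';
  // Each billboard.js column is ['seriesName', value, value, ...].
  const series = bbConfig.data.columns.map(([name, ...values]) => ({
    name,
    type,
    data: values,
  }));
  return {
    xAxis: { type: 'category' },
    yAxis: { type: 'value' },
    legend: { data: series.map((s) => s.name) },
    series,
  };
}

// Minimal test harness: feed in a billboard.js-style config, check the shape.
const bbConfig = {
  data: {
    type: 'line',
    columns: [
      ['systolic', 128, 131, 124],
      ['diastolic', 82, 79, 80],
    ],
  },
};
const option = bbToEChartsOption(bbConfig);
console.assert(option.series.length === 2, 'expected two series');
console.assert(option.series[0].name === 'systolic', 'name carried over');
console.assert(option.series[1].data[2] === 80, 'values carried over');
```

In real code the resulting option would then be handed to `echarts.init(dom).setOption(option)`. The transformation is structured and rule-bound, which is exactly the territory where an AI pair shines.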
But what about feeding AI a prompt meant to create something technical that's new? My experience with this continues to be sketchy unless my prompt reads like a well-crafted use case or user story, complete with robust acceptance criteria and a definition of done. It isn't a copilot I'd care to trust the controls to, but as a capable (if not altogether trustworthy) servant... I'm good with that. Any Agile product owner who's invested time in developing her story-writing capabilities will recognize the challenges of using AI in technical workplaces. In that respect, "prompt engineering" has been around since the days of use cases. Instead of non-technical people trying to talk sense to software developers, they're trying to talk sense to computers directly.
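For illustration, a prompt of that shape might look like the following. The specifics here are hypothetical, not taken from my application, but the structure - story, acceptance criteria, definition of done - is the point:

```text
As the sole user of my health tracker, I want a weekly blood pressure
trend chart so that I can spot drift before my next checkup.

Acceptance criteria:
- Chart renders the last 8 weeks of systolic/diastolic readings pulled
  from the PostgreSQL readings table.
- Weeks with fewer than 3 readings are flagged, not silently averaged.
- Unit tests cover the empty-data and single-reading cases.

Definition of done: tests pass and the change deploys cleanly through
the existing Jenkins job.
```

Nothing about this is new to anyone who has written stories for a development team; the audience has simply changed from programmers to the machine itself.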
A final note. I think law is another profession with highly structured text that you might think would be subject to the ravages of AI. However, unlike software developers, lawyers are in a position to write laws that prevent AI from providing legal advice or guidance and thus can insulate their profession. But I wonder how long that can last.
Related Articles
Coding with Grok
GIGO AI
Can AI Get Mad Cow Disease?
If you have any questions, need anything clarified, or have something else on your mind, please send a DM or email me directly.
Header image credit: Grok 3