Tech Writing in the LLM Era

Entering my second week of an 8-week sabbatical (at Rover, you earn a sabbatical after 7 years, then again every 4 years after that), I want to make an effort to start writing more.

We’re in a weird spot in the industry right now - the barrier to writing anything at all has been significantly lowered by the proliferation of LLMs. As a result, the internet is saturated with purely AI-generated content. Those who work with LLMs often enough have become fairly skilled at identifying LLM-generated text and, depending on their opinion of this content, have learned to tune it out.

The optimist in me tries really hard to see this lowered barrier to writing as a positive - there are plenty of very talented minds who maybe aren’t great at writing, or don’t have the time to synthesize learnings into something readable and published - but it also opens the door to lower-quality, purely engagement-driven content.

Human or LLM: does it even matter?

One thing this explosion of AI tooling has made me realize is that I should be actively questioning assumptions around these tools. For example, I recently complained to a coworker about how I could feel some aspects of my raw coding skill set atrophying as I leaned more and more into AI-based coding tools.

Their response was simple, but stuck with me: “Does it matter?”

Expanded: In a world where AI coding agents are competent enough to write well-crafted code (which, since Opus, I believe they are, given quality prompting and context), is raw coding skill actually important?

There is a lot of implicit nuance in this question (too much to dive into for the purposes of this post - maybe in another!), but it’s an interesting idea that brings us back to the question above.

We’ve always been capable of publishing slop, LLM-generated or otherwise. Writing some markdown and publishing it to a Jekyll blog hosted on GitHub Pages (like this blog!) is not exactly difficult. The problem now is the proliferation of this content. It can be harder to find the real, valuable learnings among the swarm.

That said, LLMs are more than capable of taking a well-formulated post or set of notes and converting it into a well-written article. The broader issue is that readers, especially more advanced readers, are becoming conditioned to skim past and tune out clearly LLM-generated text because so much of it is well-written nonsense.

So, back to my original question - does it really matter if I write my blog, or if an LLM does? My opinion: it depends on 1) what my goal is for my writing, and 2) who my audience is.

I write this blog primarily for myself as a way to practice my writing and solidify my learnings, but secondarily for people like me - Staff+ or prospective Staff+ engineers who might be interested in similar topics. This is also exactly the audience I would expect to be similarly adept at identifying (and averse to) purely LLM-generated content.

So yes, it does matter to me that I write my own blog content, but it is situational. For myself as it relates to this blog, I get the most benefit from turning my own thoughts into my own writing, in my own style. For my desired audience, it is important to me that I’m trusted and that it is my own voice shining through.

This blog (and beyond)

What does this all mean for my writing on this blog? I’ve generally written everything you see here. Per my personal AI principles, I’ve always aimed to use LLM tooling only for planning and editing, never for the actual writing. It’s about time for me to revisit those principles, but for the purposes of this blog I may dial back LLM usage even further to maintain my own voice.

If you’re reading any of this (hello and thank you, by the way), you can expect that everything here will be written by my own hands based on my own learnings (if that sort of thing matters to you).

Beyond this blog, keep questioning your assumptions! To close out with an example: on my last day pre-sabbatical, I was polishing my final set of notes around a systems model that I had developed to estimate the potential impact of AI-native automated feature flag cleanup tasks. I had a few systems models and a brain dump of notes, and less than a few hours left before I would be gone for 8 weeks.

I didn’t want to leave this thread hanging, so I used Claude to generate a short writeup of my findings based on my notes and the models I had developed. I knew that 1) my findings were informative and could be expanded upon, and 2) my audience (close colleagues with whom I had already built trust) wouldn’t care as long as the findings were clear and grounded in reality.

The result was a well-written (if somewhat LLM-flavored) document that faithfully explained my models and the resulting findings, generated in a minute or two rather than the hour it would have taken me to write by hand.

There are many ways in which we can leverage LLM tooling today. Focus on your foundational goals and what truly drives them forward when deciding how to leverage these tools in your own work.