How I Write Docs as an Engineer

The AI made this cover image for me

As part of my Offroad Engineering role at Smallstep, I’ve either written or reviewed most of our documentation. For me, writing great documentation is UX design work. It’s about building a bridge into knowledge. Our docs are written for a general developer audience, because Smallstep aims to make it easy for a small engineering team to design and run a Public Key Infrastructure (PKI).

So, my highest goal is to offer a smooth experience for people adopting our software. That’s especially challenging because designing and operating a PKI requires deep domain expertise spanning X.509, asymmetric cryptography, TLS, DevOps, and infrastructure security. Our software also integrates with a lot of other things and has a sprawling range of use cases in practice. So, while we don’t have a massive amount of documentation, what we have must be richly detailed, cover a broad range of topics, and speak to a wide audience.

I love this sort of challenge. It took me months to get up to speed on the world of PKI. Today, 2.5 years in, I’m still learning so much. In this post, I want to share a bit about how I approach technical writing as an engineer.

Inputs

My work often begins when a new feature is ready for docs. We usually have a few artifacts lying amid the dust as it settles. Things like:

That’s about it.

Preparation

Generally speaking, here’s my process.

First, I review the artifacts I have. Once I have a basic sense of the feature, I create a functional test rig. This is crucial, because I need to use the feature in order to document it, and the test rig is my little sandbox.
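For a step-ca feature, the rig can be as small as a disposable container. Here’s a rough sketch, assuming Docker and the smallstep/step-ca image; the container name, CA name, and port are just illustrative:

# Spin up a throwaway CA; the image initializes its own CA on first boot.
docker run -d --name docs-test-ca \
  -e "DOCKER_STEPCA_INIT_NAME=Docs Test CA" \
  -e "DOCKER_STEPCA_INIT_DNS_NAMES=localhost" \
  -p 9000:9000 \
  smallstep/step-ca

# Quick sanity check before I start exercising the new feature.
docker exec docs-test-ca step ca health

A rig like this is cheap to tear down and rebuild, which matters later when I re-run every example before merging.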

When my test rig is ready, I get to play! I try the feature out, and I start writing down questions as they arise. I need to exercise the feature from tip to tail, just as I imagine a user would actually use it. Sometimes I will need more than one test rig to do this, as I may end up writing several examples or workflows for different contexts.

Even for the simplest feature, I may have a lot of questions. I answer as many as I can on my own before I start madly DMing whoever wrote the feature. (This is the price the engineer has to pay for not writing their own docs!) At this point, I’m filling in my knowledge gaps about how it works, why it was created, use cases, edge cases, important caveats, and interactions between this new feature and other features.

My questions at this point are very specific. Like, why is this default value X and not Y? How does new CLI flag P relate to old CLI flag Q? What can I suggest if a user wants to do C, but the default behavior only does A and B? Why is this feature called “G” and not “H”? What’s our migration path for an old feature that’s being deprecated and replaced?

Next, I step back from the whole thing and breathe. I need to integrate what I’ve learned, and scope the work. I still haven’t written any docs. What I have is a test rig, a lot of notes, and a good understanding of the feature, its intentions, its limitations, its edge cases, etc. I try to take a beginner’s mind as I look at how to integrate this feature into the structure of our docs. Which docs need to change, and how?

I’m also developing a sense of what shouldn’t be documented. It’s not feasible to document everything, so I have to pick and choose. This depends on having good intuition about how people use our software, and how I think they will use the new feature. I’m always building that intuition, by answering user questions and learning from them about how our software is used in the wild.

Finally, stepping back gives me a chance to confirm the name of the feature and other important high-level labels. Sometimes the person writing the feature is too far into the weeds to choose great high-level labels, so they use an internal label that would be confusing to anyone learning about the feature for the first time. I’m not saying this as a slight on the engineers. Every project benefits from people with different perspectives.

A quick story: We recently added a feature to our server software that allows you to store most of the configuration in a database instead of in a static JSON config file. When an early version of this feature landed on my desk, it was called “Database-Backed Provisioners.” While the label was technically correct, a user wouldn’t know what it means or why they’d need it without a deep mental model of our software. After playing around with it and talking to the feature owner, I chose to call it “Remote Management” in the docs, because the feature is really about remotely managing the server’s configuration through an API, instead of locally editing the JSON file.

The right title also helped me understand which documentation needed to change: This needed its own section in our configuration guide, but I also needed to update our documentation about running the CA in Kubernetes and other High Availability environments, because those users stand to gain the most value from remote management: Instead of copying JSON files into pods or copying them between servers, you can configure all of the server instances remotely, via a single CLI command.
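To make the difference concrete, here’s a sketch using the step CLI. (The provisioner name is made up, and this assumes remote management is already enabled on the CA and that I’ve authenticated as an admin.)

# Before: edit ca.json on every server by hand, then restart each step-ca instance.
# With remote management, the same change is one command against the CA's admin API:
step ca provisioner add my-acme --type ACME

Every CA instance backed by that database sees the new provisioner, with no files to copy around.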

Writing

Now that I have a sense of how I’m going to frame the feature, I write. I usually write one or more chunks of documentation for the feature first, and then make several smaller updates that weave the feature into the rest of the documentation. As our docs grow, we will need better systems for managing internal refs, but right now everything is small enough that we can do this by hand.

Finally, I have a finished pull request. I reset my test rig and run through all of the workflows and code examples, making sure they all still work as they should. Then I ask the feature owner to give it a final review before merging.

Once the documentation is merged and deployed, people usually start using the new feature pretty quickly, so I keep an eye out for patterns of confusion or questions among our users. Docs really bloom under continuous improvement. It’s important to incorporate feedback so we can continue to focus on delivering great stuff, rather than answering the same questions.
