Welcome to DevChat #11!

Here's what we've got for today:

  • 💻 Code is for people
  • 🤖 Automated Patchnotes
  • 🎦 Live streams

This is an archive of a DevChat newsletter. To get the next one early, and in your inbox, sign up! To read past issues, head to the archives!

💻 Documentation as code

Programmers hate writing documentation. Most programmers, anyway. That's just stuff that gets in the way of the Real Work™, right?

The truth is that the Real Work™ of programming mostly consists of things that aren't directly writing code. Determining specs, designing an API, exploring technical limitations, learning best practices, training others, and so on and so on.

Even when a programmer is directly creating code, that code itself must find some middle ground between sometimes-mutually-exclusive goals:

  • The code must solve the problem at hand.
  • The code must be easy to maintain.
  • The code must be efficient enough.

In order to be maintainable, good code must be written for people. Specifically other people. Even if you're a solo developer, your future self will be a different person, at least with respect to the code you're writing now.

One way to deal with this is so-called "self-documenting code". Mostly, this means combining carefully thought-out variable and function names with clean-code practices and industry standards.

Self-documenting code is also an excellent practice because it reduces how much documentation is required outside the code (e.g. in comments). Keeping comments that merely describe what the code already says is a violation of the "Don't Repeat Yourself" (DRY) principle. The purpose of DRY is to prevent errors caused by changing something in one place without also changing that same thing in another place. If there's only one place, that mistake isn't possible.
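
To make that concrete, here's a tiny TypeScript sketch (the names and the 30-day rule are invented): in the first version the comment just restates the logic, while in the second the names carry that information themselves.

    // Before: the comment restates the logic, so it can silently drift from it.
    // Check whether the user has been inactive for more than 30 days.
    function check(u: { lastLogin: Date }): boolean {
      return Date.now() - u.lastLogin.getTime() > 30 * 24 * 60 * 60 * 1000;
    }

    // After: the names carry the same information, so there's only one place to update.
    const INACTIVITY_THRESHOLD_MS = 30 * 24 * 60 * 60 * 1000;

    function isUserInactive(user: { lastLogin: Date }): boolean {
      return Date.now() - user.lastLogin.getTime() > INACTIVITY_THRESHOLD_MS;
    }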

But how do you document bigger-picture stuff? The overarching purpose of a project? Its entry point? Who should be using it? Its dependencies? When it was last updated? What coding standards it's using?

It's a lot harder to keep your project documentation DRY than it is for the lines of code within it. This is where we get into "Documentation as Code" (borrowed from the concept of "Infrastructure as Code"). The goal is to have every piece of documentation somehow coupled to the functionality of the project itself, so that if one changes then both must change.

Wherever possible, anyway.

I admit that this is something I've been historically terrible about, and am trying to find ways to dramatically improve in my projects. I don't have a grand solution, but here are some useful concepts and tools I've been thinking about:

  • Global conventions. When I say "global" I mean across projects and teams. Adherence to convention is a type of documentation. For example, if everyone knows that event-triggered functions are always prefixed with on (e.g. onDownload()), then you can have simpler function names without also needing comments. Or if everyone agrees that callback functions will always start with an error argument, then everyone's code can take advantage of that without additional documentation. (There's a small sketch of this just after the list.)
  • Configs everywhere. Tools like "Cosmic Config" make it easy to simultaneously follow general industry practices while also doing things how you prefer to do them, by putting information into parseable, testable configuration files. This brings you into infrastructure-as-code and environment-as-code territory, further reducing documentation needs.
  • Automate everything. If a robot does something, a person doesn't need to know anything about how that something works. If you need to do something regularly in a project, turn that thing into code.
  • Prevent setup errors. All good tools have an init command (or similar) to make it easy to start using that tool with minimum error. The best ones interactively guide the user through decisions they need to make by asking human-friendly questions. Ideally the user would never even need to look at the resulting configuration file(s).
  • Docs and Code should have the same dependencies. This is a tricky one, but also the one I'm most excited about. It's reminiscent of the Dependency Inversion Principle. The idea is this: we normally treat documentation as being dependent on the code, but what if we had both depend on something else? That way we could make changes to that something-else and consequently both the code and docs would stay in tune.
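
To illustrate that first point about conventions, here's a minimal TypeScript sketch, assuming a (hypothetical) team agreement that event handlers are prefixed with on and that callbacks are error-first:

    // Convention 1: event-triggered functions are prefixed with "on".
    // Convention 2: callbacks always take an error (or null) as their first argument.
    type Callback<T> = (err: Error | null, result?: T) => void;

    function onDownload(url: string, done: Callback<number>): void {
      // ...download logic would go here; on success, report the byte count.
      done(null, 1024);
    }

    onDownload('https://example.com/file.zip', (err, bytes) => {
      if (err) throw err; // every caller knows to check the error first
      console.log(`Downloaded ${bytes} bytes`);
    });

Anyone on the team can read a call to onDownload() and know when it fires and how to handle failures, without any extra documentation.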

For that last item, you would definitely need the documentation to be built by code for it to work. A simple example is using configuration files -- the code that builds your docs can read values out of the same config files that your code does. In effect, the more you can abstract concepts into modular code or data, the more of it can be reused in both your automated docs and your code.
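
Here's a sketch of what that can look like (the file name and fields are invented): the application and the docs build both read the same config file, so a change to the config forces both to change together.

    import { readFileSync, writeFileSync } from 'fs';

    interface ProjectConfig {
      name: string;
      supportedPlatforms: string[];
      minNodeVersion: string;
    }

    // Single source of truth: a parseable, testable config file.
    const config: ProjectConfig = JSON.parse(
      readFileSync('project.config.json', 'utf8'),
    );

    // The application reads the config at runtime...
    if (!config.supportedPlatforms.includes(process.platform)) {
      throw new Error(`${config.name} does not support ${process.platform}`);
    }

    // ...and the docs build renders the same values, so the two can't drift apart.
    writeFileSync(
      'README.generated.md',
      [
        `# ${config.name}`,
        `Requires Node ${config.minNodeVersion} or newer.`,
        `Supported platforms: ${config.supportedPlatforms.join(', ')}.`,
      ].join('\n'),
    );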

API documentation is probably the best example of this, especially for languages as flexible as JavaScript: you could use centralized API documentation to dictate both the functionality of your code and the documentation that describes it! You can see something like this with Swagger/OpenAPI.
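
As a rough sketch of that idea (the schema shape and the registerRoute function below are simplified stand-ins, not actual Swagger/OpenAPI tooling), a single endpoint spec can drive both the router and the rendered reference docs:

    interface EndpointSpec {
      method: 'GET' | 'POST';
      path: string;
      summary: string;
    }

    const endpoints: EndpointSpec[] = [
      { method: 'GET', path: '/users/:id', summary: 'Fetch a single user.' },
      { method: 'POST', path: '/users', summary: 'Create a new user.' },
    ];

    // Hypothetical router hook: wire up routes from the spec...
    declare function registerRoute(method: string, path: string): void;
    endpoints.forEach((e) => registerRoute(e.method, e.path));

    // ...and render the API reference from the very same spec.
    const apiDocs = endpoints
      .map((e) => `${e.method} ${e.path}: ${e.summary}`)
      .join('\n');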

If you have any tips and tricks for docs-as-code, please share!

All of that brings us to one particular form of documentation...

🤖 Automated Patchnotes

Documentation is easiest when its writers and readers are the same people (internal documentation). It gets a bit harder when you add readers who have similar knowledge to the writers but aren't the same people (e.g. users of open source projects). It gets truly difficult when your audience is completely different from the authors, especially if you have multiple, non-overlapping audiences.

This latter problem is exactly the case for games, and for any other software where there is a huge information asymmetry between developers and downstream customers.

Let's take our own studio as an example.

When we change something in one of our games, we need to:

  • Be able to revert that change if it causes something to break.
  • Be able to identify when that change was introduced so we can trace future bugs to it.
  • Know which versions of the game include that change.
  • Communicate that change to Quality Assurance (QA) so they can design tests for it.
  • (Maybe) communicate that change to players.
  • Include that change in a summary to post to the stores where we sell the game whenever we push out an update.

How do we solve all of these at once? Version control, and messages describing the changes in each version, handle the first three items right out of the gate. But what about the rest?

Our old way was mostly to treat these as separate problems. For every change made to a game, Seth (our game programmer) would:

  • Document that change in our version control system (Git) using technical language.
  • Document that change in a shareable document with instructions for how QA could test it.
  • (Maybe) document that change in another shareable document using player-friendly language.

In other words, Seth had to document every change twice or thrice, which included a lot of bouncing around between different services that managed that documentation.

But... he already needed to document everything in our version control system in the first place. Why couldn't we just use that?

And so, we did!

I created some tooling based on Conventional Changelog. That project is, unfortunately, poorly organized for newcomers. It also had some limitations due to design choices that conflicted with how we liked to work. Still, at root, my tools do exactly what that project does.

In short, the way that Conventional Changelog works is to establish a convention for writing version control messages, help you use that convention, and then parse your convention-following messages to convert them into patchnotes. To group your notes by project version it expects you to follow semver versioning practices, and to use "version commits", which are commits whose message is only and exactly the project version.
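
For reference, a convention-following commit message in the Angular style that Conventional Changelog understands looks roughly like this (the scope, description, and footer here are invented for illustration):

    fix(save): prevent corrupted saves when the disk is full

    Retries the write once before giving up.

    BREAKING CHANGE: save files from versions before 1.2.0 are no longer migrated.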

To see an example in action (using Conventional Changelog), take a look at the commit log for my open source project Stitch:

Screenshot of the Git commit log for Stitch, showing a version commit and conventional changelog messages.

And then the resulting changelog file:

Screenshot of the part of the Changelog file that corresponds to the commit messages in the last image.

This changelog file is created as part of the versioning process (see the versioning commands in Stitch's package.json), using the command conventional-changelog -p angular -i CHANGELOG.md -s -r 0 (assuming you've installed node, and then Conventional Changelog globally via npm i -g conventional-changelog-cli).

So that's all well and good for an open-source project, but what about our case, where we need different changelogs for different users (internal, QA, and players)?

The features I added on top of Conventional Changelog for our own tooling are:

  • Streams: A message can include a "stream", which can then be allow-listed or block-listed when compiling patchnotes. For example, the message fix(internal|Site): Did some secret stuff. could be excluded by block-listing the stream "internal". (There's a minimal sketch of this after the list.)
  • Multi-Message Commits: Conventional Changelog has taken a hardline stance: only one change will be parsed out of a given commit message. This doesn't make room for squash commits, accidents, or simply the reality that sometimes commits contain multiple changes for all kinds of reasons. My tooling allows a single commit to document multiple changes.
  • Access Restrictions: We compile multiple versions of our changelogs for different downstream users, and host all of them on our website. Based on the permissions level of the user looking at the patchnotes, we serve the correct ones. For example, if you visit the Rumpus patchnotes you see a lot less than I see when I visit that same page. We manage this by having the different patchnotes auto-uploaded to our site as part of the compile process, and then Rumpus is set up to serve them based on user permissions.
  • UI: Finally, Conventional Changelog spits out a Markdown file. That's already human-readable, and it also renders to even-more-readable HTML. My tooling creates a JSON file as well, which is much easier to parse programmatically: the website UI reads that JSON and builds an interface that users can interact with.
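
To make the streams idea concrete, here's a minimal TypeScript sketch. It is not our actual tooling, and it treats everything inside the parentheses as a stream tag, which is a simplification:

    interface ParsedCommit {
      type: string;        // e.g. "fix" or "feat"
      streams: string[];   // e.g. ["internal", "Site"]
      description: string;
    }

    // Parse a subject like "fix(internal|Site): Did some secret stuff."
    function parseSubject(subject: string): ParsedCommit | null {
      const match = subject.match(/^(\w+)\(([^)]*)\):\s*(.+)$/);
      if (!match) return null;
      return {
        type: match[1],
        streams: match[2].split('|').filter(Boolean),
        description: match[3],
      };
    }

    // Keep only commits whose streams are not block-listed.
    function filterForAudience(
      commits: ParsedCommit[],
      blockList: string[],
    ): ParsedCommit[] {
      return commits.filter((c) => !c.streams.some((s) => blockList.includes(s)));
    }

    const allCommits = [
      parseSubject('fix(internal|Site): Did some secret stuff.'),
      parseSubject('feat(Players): Add a new hat.'),
    ].filter((c): c is ParsedCommit => c !== null);

    // Player-facing patchnotes exclude the "internal" stream entirely.
    const playerNotes = filterForAudience(allCommits, ['internal']);

Compiling the QA or internal patchnotes would use a different block list (or none at all).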

Have you used Conventional Changelog, or other similar systems? Tell me about your experience.

All of that brings me to...

🎦 Live streams

The tools I have for parsing Git messages and creating the output JSON/Markdown files are old and bad. They get the job done, but aren't maintainable and don't easily make room for new features. Also the output JSON format is... uh... nightmarish.

I've been wanting to recreate those tools for a while now, in particular so that I can decouple some of them from our proprietary stuff so that (1) I can make the tools publicly available, and (2) my other public code can use the new tools (instead of having to use Conventional Changelog). Additionally, I want to add some user-friendly layers on top, like a config file and an interactive init command.

Since it isn't a high priority for the studio to do this, I've decided to play with it as a side project. I got started last Thursday, live on Twitch. It was a blast. We had 30 people nerding out about super technical Node/TypeScript stuff. Amazing.

I did some light editing of the streamed video so that I can upload it to YouTube with some annotations, but HOLY CRAP is video processing slow. So I don't have a link for you yet.

If you want to follow along with the project, here are the things you can do:

  • Subscribe to the Bscotch Twitch channel and watch the streams (and let it send you alerts when I go live).
  • Subscribe to the Bscotch YouTube channel so that any edited streams I upload pop up in your feed.
  • On the GitHub repo, use the "Watch" ▶ "Custom" ▶ "Releases" option to have your GitHub feed update when new versions come out.
  • Follow my Twitter since I'll cross-post there when I'm going live.
  • Join the Bscotch Discord, then in the #roles channel opt in to the "Webdev Stream" role so that I can ping you when I go live.

I'm new to streaming and YouTube, so I'll gladly take your feedback and suggestions.

Until next time

That wraps DevChat #11!

I've been having a great time hearing from readers. If you haven't said hello, please do!

Share with others by forwarding, or link directly to the archived post.

Have a great week!

❤ Adam