A coworker was talking to me about how difficult it can sometimes be to explain technical problems to non-technical users. Most users want to know how the system works, especially if it’s critical to their daily responsibilities, but it’s important for software developers to draw a line where the domain ends and the technical details begin. Saying a problem was caused by a key consistency error from something strange with Postgres sequences means nothing to a user wondering why restoring old data didn’t work. On the other hand, just referring to it as a database problem casts too wide a net and may impede the user’s understanding when troubleshooting similar problems in the future (oh, the database must be broken again). The line to draw is exactly where the domain meets the technical details. If you have to start explaining computer science vocabulary like schemas and callbacks, you’ve gone too far. The trick is to know enough about the user’s domain, and good technical abstractions, to explain at the component and interaction level.
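To make that line concrete, here is a minimal sketch (in Python, with all message text invented for illustration) of the same restore failure phrased at three levels: the raw database error, the too-broad summary, and a component-level explanation that stops where the domain ends:

```python
# Hypothetical example: three phrasings of one failure.

# Postgres-level detail — meaningless to most users.
raw_error = 'duplicate key value violates unique constraint "records_pkey"'

# Casts too wide a net — teaches the user nothing for next time.
too_broad = "A database problem occurred."

# Component-and-interaction level: names the feature and the effect,
# without requiring computer-science vocabulary to interpret.
component_level = (
    "The restore couldn't finish because some of the old records "
    "clash with records created since the backup was made."
)

for message in (raw_error, too_broad, component_level):
    print(message)
```

The third message still explains *why* the operation failed, so the user can reason about it next time, but it never mentions keys, constraints, or sequences.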
This conversation on technical communication sparked another in which some users said they had programming experience and therefore wanted the exact underlying data and error exposed so they could make more informed decisions. The ability to write some C or Python is a very different claim from competence with application development. There is a distinction between being code literate and being able to debug systems without any source to read at all! The tough part of building and debugging anything other than a trivial application isn’t knowing the programming language itself, but knowing how the components of the system fit together. When something breaks, it’s rarely because someone forgot a semicolon or called a function incorrectly; it’s more often because assumptions about the context of the system changed. Even with complete technical detail about the specific error encountered, the raw knowledge might still be useless and misleading without a broader understanding of how the entire system works together. There are many layers underneath the syntax at the application level, and they cause problems that aren’t even directly related to the code that reported the error. Understanding these assumptions requires a very different body of knowledge, one that shares nothing with a language’s semantics. Like any detective work, knowing the result is only the start of understanding the cause. For novice programmers, a deeper level of technical detail in errors can be harmful. Seasoned programmers want messages with the right level of detail for their context, since not every programmer works at the same level of abstraction. This is a fundamental part of writing high-level software. At the user level, good error messages are difficult without knowing the intention of the user: are they trying to use the application, or are they debugging its dependencies?
Just showing the stack trace to the user is considered bad practice because it assumes the user is familiar with the names and usage of the functions involved. Having information without the context to correctly interpret it makes it useless.
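A common alternative, sketched here in Python with invented function and message names, is to route the information by audience: the full trace goes to the log, where someone debugging the system will look for it, while the user sees only a domain-level message:

```python
import logging
import traceback

logger = logging.getLogger("app")

def restore_backup(path):
    # Invented stand-in for the real restore logic; always fails here
    # so the error path below is exercised.
    raise RuntimeError("duplicate key value violates unique constraint")

def run_restore(path):
    """Attempt a restore; surface a domain-level message on failure."""
    try:
        restore_backup(path)
        return "Restore complete."
    except Exception:
        # The stack trace is for whoever debugs the system:
        # it goes to the log, not the screen.
        logger.error("restore failed:\n%s", traceback.format_exc())
        # The user sees what happened in their domain's terms.
        return ("The restore couldn't finish because some old records "
                "conflict with newer ones. No data was changed.")

print(run_restore("backup-2019.dump"))
```

Neither message is wrong; each is aimed at a reader with a different context for interpreting it.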
On the flip side of crossing the domain-implementation divide, I’ve seen too many instances where the developer fails to understand the entirety of the user’s domain and engineers the wrong abstractions, which makes communication and understanding even more difficult. I’ve heard this referred to as conceptual debt, which is an apt metaphor because it compounds the same way bad technical decisions do. When a stakeholder, manager, or product developer describes a requirement for a system, they often have a mental model of how it should work. But without knowledge of the program’s internal data model, the user can’t describe where their feature would actually fit. They normally know where it should show up on the UI and how, but sadly, many interfaces have no relation to the backing data model. The most direct knowledge they have of the internal workings comes from experiences with strange bugs and unintuitive corner cases that expose limitations that make no sense in their own mental model. The most common model for backing logic and data I’ve seen is the one the Eve language embraced: the spreadsheet. It’s similar enough to relational databases that it works well enough in small cases; where I’ve seen it fall apart very quickly is in event- or stream-based systems, where a spreadsheet struggles with the caveat that the data is never fully available and neatly aligned. Modeling an unfamiliar domain is harder for an engineer than learning some programming syntax and patterns is for a stakeholder. Domain-specific languages aside, I’m always surprised that more domain experts don’t learn to program in some basic fashion so they have more common ground with the implementers. It’s much more common for programmers to grow into domain experts via management tracks than vice versa, which means the people writing the code rarely know what it’s intended to do!
Having knowledge of both ends is critical to providing the right context and information at every level of abstraction.