
In the Loop

I’ve been focusing on people management instead of just code development. The switch from machine-driven processes to human-centric ones requires a real change in attitude towards what is desired and acceptable. I’ve put a serious amount of time and attention into automating and documenting all of my technical processes to eliminate as much human interaction as possible. I did this ostensibly to save time in the long run, but I started appreciating it immediately for the reduced mental overhead. When developing automated processes, the natural direction is to automate all the way down, but that’s not the direction to take when working with human-centric processes. It turns out that humans are pretty bad computers, but they can pretend in a pinch.

My first realization was that when organizing people, reducing overhead shouldn’t be the only goal. I’ve worked with problem-focused engineers who come at project management with a mindset of getting more for less. Striving to optimize efficiency or speed sounds good until you realize that you aren’t a perfectly stable system even under the best circumstances. I’ve heard and seen far too much hatred for ‘unproductive meetings’, and too much love of strict processes with perfect deadlines. Not every meeting will be perfectly efficient, and optimizing too far causes hard communication failures instead of just a little slack for everyone involved. Synchronizing and centrally coordinating everyone to hit a deadline can be done, but setting up processes that can be perfectly planned is a huge trade-off. Properly directed chaos can be far more efficient and effective than a series of perfectly planned and executed interlocking processes. Fault tolerance is a better analogy, where robustness and autonomy are the key characteristics. Overhead in that sense is cherished as a buffer; it’s a necessary component to keep the system working consistently.

Debugging and fixing human processes is also interesting since there is an immediate observer effect. Just attempting to solve the problem might be the final solution, a perfect heisenbug. It’s impossible to study a relationship from the outside while also working within it. Discovering a problem within your own network of relationships can be far harder than finding a bug in an application. You can’t subject your co-workers to constant testing and a strict set of behavior guidelines. While there are metrics to monitor, interpreting them correctly is a science still in progress. Any kind of explicit bug report or failure signals a much deeper problem in understanding the system at hand. Finding the source of the problem can be harder still, and the easy problems to find are often the hardest to fix. Attempting to fix all human processes in a controlled and repeatable way is basically impossible, whereas with machines it’s perfectly well understood and expected.

When addressing individual problems, organizational scar tissue looks like the correct reaction from an engineering mindset. Choosing to solve every issue with another process leads to a patchwork of fixes; just like constantly patched code, it eventually becomes unmanageable. Technical solutions that go unapplied for fear of system collapse are a clear indicator of an unstable system that needs a redesign. Given that you can’t design a stable person, it’s better to view every process solution as a risk. Then you can weigh the cost of the process against the probability of the event and the severity of its impact. This clearly favors processes that protect against unrecoverable errors while disincentivizing processes where the problem and the effectiveness of the solution are unclear. My favorite answer to “Are you going to stop this from happening again?” is “No. This happens sometimes and we deal with it pretty well already.” It’s normally heartbreaking to close a bug as ‘Won’t fix’, but with people problems, it’s a great option.
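To make that weighing concrete, here’s a minimal sketch of the back-of-the-envelope arithmetic I have in mind. The ProcessProposal structure, the worth_adopting helper, and the numbers in it are all hypothetical illustrations, not a prescribed method.

    from dataclasses import dataclass

    @dataclass
    class ProcessProposal:
        name: str
        probability_per_year: float  # rough chance the problem actually happens in a year
        severity_cost: float         # rough cost (say, engineer-days) when it does
        overhead_cost: float         # recurring yearly cost of running the new process
        recoverable: bool            # can we recover cheaply after the fact?

    def worth_adopting(p: ProcessProposal) -> bool:
        """Adopt a process only when its expected savings beat its overhead.

        Unrecoverable failures get the benefit of the doubt; everything else
        has to justify itself with plain expected-value arithmetic.
        """
        if not p.recoverable:
            return True  # always protect against errors we can't undo
        expected_loss = p.probability_per_year * p.severity_cost
        return expected_loss > p.overhead_cost

    # A rare, mildly annoying problem usually isn't worth a new process:
    # expected loss of 0.5 * 4 = 2 engineer-days vs. 40 days of checklist overhead.
    flaky_deploy = ProcessProposal("manual deploy checklist", 0.5, 4.0, 40.0, True)
    print(worth_adopting(flaky_deploy))  # False -> closing it as 'Won't fix' is fine

The point isn’t the precision of the numbers; it’s that a process has to beat its own overhead before it deserves to exist.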

With automated systems, the priority generally points in one direction: in favor of the user. The code is written to be read and the user interface is designed to be used. The machine should be compromised to meet these goals. For human systems, there are people on both sides, so at least one side has to compromise for a goal that’s not in both of their best interests. Instead of only considering the side you’re on, it’s best to make two-way communication easy for both ends. If it’s not easy to hear and easy to say, one side has been given the advantage. Isolating and strictly controlling communication in a technical sense is generally beneficial since it makes the system easier to understand as a whole. Encapsulating important information helps simplify each component’s contract in the system. Humans are already hard to understand individually, and even more unpredictable as a whole. Simplifying down to just the work necessary leaves out all of the context that caused the work to be generated in the first place. This connection to the impact of the work is a major component of motivation, so removing it to favor the side that has to provide it forces one side to compromise more than the other. Information asymmetry is a big problem, one that has very different implications in different types of systems.

Project Planning for the Unknown

I’ve been looking back on my design documents and project plans from last year and I’ve found that they were all fatally flawed in one way or another. I normally love writing documentation and communicating project plans, but I can’t help but feel that the time I invested was wasted. Priorities shifted quickly, and while I tried to amend the plans and goals, I realized I had fallen far behind. I was constantly designing things I’d never designed before, and planning things I’d never accomplished before. I knew enough about both for rough estimates and solid first attempts, but I’d generally have an assumption proven wrong in the early stage of a project that wrecked the design or schedule. On one hand, I learned a huge amount in breadth and depth very quickly; on the other hand, it was brutal to experience first-hand for months on end.

I knew well enough that attempting to plan a large project down to the last detail before it even begins is a fool’s errand, but I had gotten some mileage out of high-level design goals for architecture. I hated seeing projects treated like golden baby eggs, protected at all costs by the person who proposed and implemented them without complete justification. They might even be the right thing to do, but when I hear that a project ‘was the only real option’ I get very suspicious. I’ve always had an innate lack of confidence in my designs. It not only makes it easier to distance myself from the genuinely bad ones, but it helps me seek out help or confirmation for the good ones. But I’ve been slowly trending towards very conservative, minimal upfront designs. This sounds good until I admit it’s minimal because I only design the parts I’ve done before, and I don’t know the new work at anything other than the highest level. I know what things are generally possible, but my definition of easy is whether I’ve done it successfully before. That list is growing quickly, but the high-level design of what I already know hasn’t been the useful part. Instead of writing out the strengths and high points of the design, I wish I could plan better for the worst, gritty parts I’ve never done before, but how do you write out a plan to avoid the pitfalls of something you don’t know how to do? The iterative design process is the only real design process available then, where it doesn’t matter what the original plan was if the new one is demonstrably more realistic.

I think the most interesting projects for design critiques are the adopted ones, where the current maintainer had no involvement in the original design and proposes massive changes. The first draft of the changes reads like the 95 Theses, blaspheming against all of the choices of the previous designer. I’ve always wanted to know what other people think of my work when I’m not around, so when I write design documents I try to make them almost argumentative. I’m having a discussion with a fictitious future maintainer about why I didn’t use a certain library or why I overused a language feature. Sometimes I realize I lost my own argument after I finish the project, and then I have to revisit my own project to get it back on track. Planning that work is much more manageable since I’ve already done it wrong at least once and seen the result. Sometimes I get it wrong a second time too, and find that my initial attempt was better all along. Each of these adventures just exposes me to more of the vast space of software development. There are many things that are closely related and similar enough to understand at a cursory level, but I find that trying out the unknown teaches me far more and far faster than re-treading my steps and attempting to optimize with what I already know. It’s like farming same-level monsters in an RPG when it’s far better to fight increasingly difficult monsters and then come back and cream the first ones.

Planning as a skill is really a function of experience in a specific role. The ability to roughly estimate the timeline of poorly defined features is as much about knowing the team as it is about knowing the business requirements. The technical knowledge aspect plays into it very little, because the finer-grained technical details aren’t a significant fraction of what will take the most time to accomplish. Attempting to plan from the ground up requires near-perfect knowledge of each of the steps, as any one of them could throw the rest of the specifics in a different direction.

I really enjoy being active in business and management, where the new challenges contrast with my technical experience.