
Embrace Layers

After doing mostly backend work for a number of years, I jumped back into the mayhem that is front-end web development. I had kept a careful eye on it while I was away, but never dove deeper than reading the blog posts. It’s both less bad than some would have you believe and far worse than I remembered.

The problem I had was that I had forgotten so much of the arcanery. All the tools were different, but that didn’t matter because the new ones did the same things. The latest JavaScript frameworks had support for all of my favorite functional patterns picked up elsewhere, but finding a good use for them in the wild has been rough. The ability to organize and control styling had increased dramatically, but so had the expectations and complexity. I still knew the basics; it’s not an entirely separate world, but it was a game played by different rules.

I was so focused on the data shape and clean plumbing that I was constantly shocked at how difficult and time consuming the presentation could be. Integrating with an end user is clearly the toughest API! The problem I saw was that declarative styling is very far away from the primitives. HTML and CSS don’t have an escape hatch like high-level DSLs in some other environments. I couldn’t code around a problem and come at it from the other direction to make things easier for myself when something wasn’t working. I ran into dead ends where I had to abandon my work, having learned very little other than not to attempt that path again.

Advantages

Manual labor begets more manual labor, because human-machine interaction really requires a human on both sides. I’ve watched so many data, modeling, and operations tools continue to rise in their level of abstraction, requiring minimal manual configuration to achieve maximum functionality. This is a huge boon to productivity and extensibility, where features are now baked into the layer instead of reimplemented by everyone for each application. When everyone is using a shared layer, improvements to the tooling, robustness, and performance are easily distributed to everyone and require less time from each individual participant. No longer does everyone have to maintain their own little library or custom framework to help them go from data modeling to the end user; there are thousands! Enough to cover most domains, constraints, and personal preferences.

So why do people hate the framework flavors, the churn, and the rise of layers? I hated HTML and CSS quirks because I saw them as yak-shaving exercises that distracted from the end goal. I was learning how to use the tool instead of building the thing that needed building. While this was frustrating, I was also able to take a step back and appreciate that if I learned these layers, I wouldn’t have to manage the implementation-specific details to accomplish the goal. Some people like the implementation more than the functionality; the code more than the user. When the code changes and the functionality doesn’t, it feels wasteful. If programmers look at their solution in isolation, they see no benefit from one framework to the next. They are helpless to slow the pace of adoption. “Stop changing things!” “What I have works for me!” “Just keep supporting this, I don’t have time to learn new technologies!” This feeling of helplessness means that every new layer the programmers don’t already trust is a huge leap into the unknown. Strangely, this makes it easier to move down the stack into more complexity than to move up and lose control in exchange for free shared functionality.

A Strawman Enterprise

When I do talk about the benefits of a popular modern stack to those in the enterprise space, senior engineers are often immediately suspicious of the number of layers. “What does the user get out of this?”, “How could this ever be performant?”, “What happens when there is a bug in one of these?”. All valid questions for certain circumstances. Junior engineers tend to pile on the layers, hoping that each one gets them closer to their goal, but without understanding how or why each acts individually. It works the same way with junior code: layers of abstraction, patterns, libraries, and unnecessary object graphs get added to break a problem down into one they can solve, when a straightforward solution could cut through all of that complexity. The problem isn’t that the layers are bad; it’s that all of that functionality was being misappropriated or just left underutilized. Some of this motivation could be attributed to resume-driven development, or just plain riding the hype train, but you can also get lucky when following powerful industry trends.

The user doesn’t get a direct benefit, but the developer gets one from an engaged and large community. A happy developer who can find existing solutions quickly is one who can deliver better features more consistently. If the stack is too out of date, the developer may have to reinvent the wheels of the new hotness to get the same benefits, or take development paths that lead them away from current industry practices. For some enterprises, the external community is not considered valuable. “We can support our own engineering practices.” Sure, but why? Again, every piece of manual labor begets only more manual labor, multiplying the work you need to do for every layer on top of it. Logging and exception handling, packaging and deployment, test framework initialization, authentication, and security are all things most layers need; for your own bespoke layer to include them, you either need to spend the time or go without a valuable efficiency feature. It’s a huge fallacy that an engineering org can be an island, and if you’re going to rely on external solutions, why not use all of them that are helpful? If the thing you build isn’t standardized, you’ll be alone in maintaining and eventually migrating it. If you pick the wrong standard, you’ll be alone in maintaining it, so going it alone means getting the worst deal of any choice! If you do correctly pick the “new hotness”, you get more future choices of basically free features and support. You might get choice fatigue, but that is far outweighed by how little you have to build yourself to accomplish great things on the shoulders of a world of giants.

The runtime performance of the stack matters in some circumstances, but it generally matters less than many engineers wish it did. Performance is one of the few hard software requirements, a science within a world of art. The tendency is to optimize for the known, so that in the face of uncertainty there is at least a bastion of engineering excellence to defend. The problem is that technical performance is very rarely the thing that makes projects successful. Instead, chasing performance can lead away from solving the true problems that bringing more layers into the system would fix. At worst, if the project is wildly successful with a compromised performance profile, it can be fixed! Software projects, if managed with modern tools, are surprisingly flexible. So when the question of performance arises, the response shouldn’t be “value programmer productivity”, it should be “you can make it as fast as you need it to be if it’s worth it”. Choosing something because it’s the fastest option at the moment is just as dangerous a bet as picking a layer solely because it’s well supported by a vendor or community. If that fact changes, would you be willing to migrate to continue chasing performance, or is it a convenient excuse when you don’t have any other hard requirements? Does that performance gain you anything, or is it vanity? Coming from a world where performance used to break a project’s viability into one where it no longer does leaves behind the habit of looking down on anything new. Old tools ran slower on old systems than new tools run on new systems, but because old tools run faster on new systems than new tools do, the slower new tools are considered inferior regardless of their other merits. Performance should be a non-argument for choosing tools in any circumstance where the solution doesn’t have a known upper limit. If the stack meets the performance requirements, then it’s a binary question, not a “better” question.

As for bugs, choosing a path that is both visible and well worn will reduce the cost to fix and the chance of issues. The game changer in layers is open source and community contributions that build not just a single isolated layer, but a network of tools, documentation, examples, tests, and integrations for that layer. When adding a closed-source vendor to a stack, you need to trust the vendor completely to provide all of those things. Some vendors earned this trust while others became notorious for abandoning their platform or selling half-baked promises. This is no different from picking a community for support in the long term. In the short term, the fear is that without paying an external vendor for immediate support, the number of things that can go wrong increases with the number of layers. The reasoning is obvious: more code, more bugs! The reality is that your own abandoned code is more likely to have problems, as it generally wasn’t made fit for purpose the way a packaged library is. If you get on and stay on this path of building small customizations on top of popular components, your odds of running into a problem in any component are slim. It’s when you think you can fork and customize without cost and support that things fall off a cliff into maintaining other people’s code. People who are used to stacks without good package support freak out when they see the sea of packages required for the most trivial tasks in a modern environment. The reason for this fear is that they’ve had serious problems maintaining their own packaging systems and can hardly imagine how this tower survives at all. And yet it does, with far more stability than I’ve seen most enterprise build and dependency systems provide. Most bugs aren’t technical in nature; it’s not that the code doesn’t do what was expected, it’s that a requirement was misunderstood. An external layer might not do what you expect in some circumstance, but how different is that from your own code not doing what it should in a case you didn’t test against? Instead of writing the code and the tests, with an external layer you really just need to test it against your requirements.


I love platform-level customization. It’s rewarding in a creative way that banging your head against someone else’s layer isn’t. The problem is that it’s only rewarding in the short term. Once you start down the lonely road of customization, eschewing modern stacks for legacy reasons, it becomes harder and harder to dig out. Any modernization project then isn’t an upgrade; it’s a total revolution. You’ve had to build your own platform that integrates with nothing out in the modern tech world, so there’s no way out, and everything looks worse than just sticking with making minor improvements by yourself.