Working to avoid wasting time can be worse than just wasting it. The pursuit of efficiency in high-scale projects pays immediate dividends, but the exploration lost in the process can negate any of the intended gains. Messy innovation often starts as ‘wasted’ time, but it’s hard to know what’s really wasted without a larger view of the problem space. The ability to tolerate failures and take risks can be well worth the loss of some implementation effort. Pursuing efficiency for its own sake produces brittle plans, whereas robust plans require ‘wasting’ some time to better ensure success. When people look at a process or plan and see inefficiencies to remove, what they may actually be eliminating is robustness.
For new projects and lean processes, it can be tantalizing to avoid doing anything that isn’t a clear path to the goal, but that presupposes a direct path was even viable given what is known at the outset. It often isn’t, and pretending that planning alone will be enough to discover a solution usually underestimates the complexity of the problem. Exploring worthwhile paths shouldn’t count as wasting time; you’d never know whether the current path is the shortest without trying others. Doing the research to completely de-risk a project before implementation can take more time than just doing it. On the other side of the planning-versus-doing divide is the saying ‘the only failure is one where you learned nothing’, which assumes you learned something worth the attempt. Exploration is a calculated risk: you can still take a chance and get nothing worthwhile in return. The middle ground is discovering more options than you had before without losing sight of the goal, even if what you find is neither a definitive dead end nor a clear shortcut. Planning for more discovery than seems strictly required provides the alternate paths that make a plan more robust than betting on what looks like a straight shot from the outset.
The most direct path may not be the most visible, as the problem’s landscape can be fraught with obstacles in both planning and execution. A long project may see a major change in strategy halfway through, shifting the destination of the journey itself. There are often points in a project where you can’t know whether a plan is good beyond a certain horizon until you get there. These are the hills, valleys, twists, turns, and paths of exploratory projects. The top of a mountain might not be where you want to end up, but if you can make the climb, you’ll get the best vantage point of the area. Building in this time to explore and become well acquainted with the problem domain is hard to justify when the scope of work is just the destination. It’s even harder when you consider how the terrain changes over time: work spent exploring a fast-moving space can be wiped away before you get the chance to exploit it. But if you do learn the best trails, you’ll be in the best position to stay flexible and reduce risk on later journeys.
While the destination is the goal, finding value in the work is the most important part of the journey. The fastest journeys are the ones you’ve done before, but the journeys where you learn the most are the riskiest. Risky projects are pariahs in modern business, where management science holds that the organization can be reduced to the stories and numbers of a complicated machine. On some level that’s exactly what it is: everyone’s needs and desires form a network of interlocking, individually understood interactions. But at the systems level it gets messy: imperfect communication, single points of failure, human error and pettiness. Business science has an answer to that too, but you have to weigh the complexity of the more nuanced model against its benefits. Attempting to wring the most efficiency out of a poorly calibrated model tends to exacerbate the failures while taking too much credit for the random successes. If you know a good second-order approximation and plan for things to go wrong, you can probably handle fifth-order problems. But if you have a fifth-order model that’s calibrated poorly, your model may be causing the problems instead of describing them.