The Desire Path Problem - Why Platform Governance Fails

May 15, 2025 · 10 min read

The incident started simply enough. A routine deployment had triggered a cascade of failures across three different services. What should have been a fifteen-minute rollback turned into a six-hour war room session, not because the technical fix was hard, but because nobody could agree on who should fix what.

The postmortem revealed the real culprit: a web of team boundaries, service ownership ambiguities, and deployment dependencies that had evolved organically over two years. Our architecture diagram showed clean lines and clear interfaces. Our organizational reality told a different story entirely.

That's when I realized we weren't just debugging code. We were debugging a socio-technical system—and we'd been using the wrong mental model the entire time.

The Four Worlds: Understanding System Types

We love to believe our software systems are ultimately knowable—maybe complicated, with lots of moving parts, but fundamentally predictable if we just understand them properly. And sometimes, that's exactly right. But this intuition misses a crucial insight from systems thinking.

Systems thinking (most famously Dave Snowden's Cynefin framework) distinguishes four kinds of systems, each requiring a fundamentally different approach:

  • Simple: Clear cause-and-effect relationships, best practices apply universally
  • Complicated: Analyzable through expertise, good practices can be determined
  • Complex: Patterns emerge through experimentation, practices evolve over time
  • Chaotic: Requires immediate action to establish stability, novel approaches needed

Historically, most software fell into the first two categories. Your COBOL batch jobs were simple. Your enterprise databases were complicated. But as software became ubiquitous and more of our lives became digitized—with distributed teams, real-time user interactions, and systems spanning organizational boundaries—we crossed into genuine complexity.

"Complicated" and "complex" aren't just different degrees of the same thing. They're fundamentally different types of systems that require entirely different approaches.

Complicated systems are like a modern car engine. There are thousands of parts, intricate timing mechanisms, and sophisticated engineering. It's complicated—you need real expertise to understand it fully. But it's also predictable. Given the same inputs (fuel, air, spark), you get consistent outputs (power, emissions). When something breaks, there's usually a clear cause-and-effect relationship. A skilled mechanic can diagnose problems systematically, and the same fix will work reliably across similar engines.

Your batch processing pipeline that runs nightly reports? That's complicated. Clear inputs, deterministic outputs, well-understood failure modes. When it breaks, you can trace the problem, fix it, and document the solution for next time.

Complex systems, on the other hand, are like a forest ecosystem. You can study individual trees, understand soil chemistry, and observe wildlife behavior, but the forest as a whole exhibits emergent properties that can't be predicted from studying its parts in isolation. Small changes can have cascading effects—remove wolves and deer populations explode, changing vegetation patterns, affecting soil composition, altering water flow. The same intervention in two similar forests might produce completely different outcomes because of subtle differences in initial conditions, timing, or context.

Many modern enterprise platforms have crossed into this territory. When you have distributed teams deploying independently, services owned by different business units, real-time user behavior creating feedback loops, and emergent usage patterns nobody explicitly designed, you're not dealing with a complicated system anymore. You're dealing with a complex adaptive system.

But here's the deeper point: these systems are socio-technical by nature. The technical components (services, databases, APIs) interact continuously with human components (teams, incentives, communication patterns, cultural norms) to produce emergent behavior. You can't govern the technical layer independently of the human layer because they co-evolve: changes in team structure reshape the architecture, while architectural decisions influence how teams communicate and collaborate.

This is why Conway's Law feels so inevitable. In Conway's own words, organizations that design systems "are constrained to produce designs which are copies of the communication structures of these organizations." It's not just an observation; it's a prediction about how socio-technical systems will evolve over time.
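
One way to treat that prediction as something you can check rather than folklore: compare your service dependency graph against your team communication graph, and ask how many cross-team dependencies are backed by a real working relationship. The sketch below is purely illustrative (every team, service, and edge in it is invented), and the score it computes is a crude cousin of what the research literature calls socio-technical congruence.

```python
# Rough sketch of a socio-technical congruence score.
# All names and data below are invented for illustration.

# Service dependency edges observed in the call graph: (caller, callee)
service_deps = {
    ("checkout", "payments"),
    ("checkout", "inventory"),
    ("payments", "ledger"),
    ("search", "catalog"),
}

# Which team owns each service
owner = {
    "checkout": "team-a", "payments": "team-b", "inventory": "team-c",
    "ledger": "team-b", "search": "team-d", "catalog": "team-d",
}

# Team pairs with a real working relationship
# (shared channel, regular sync, joint on-call, ...)
talks = {frozenset({"team-a", "team-b"})}

def congruence(deps, owner, talks):
    """Fraction of cross-team dependencies backed by actual communication."""
    cross = [frozenset({owner[a], owner[b]})
             for a, b in deps if owner[a] != owner[b]]
    return sum(pair in talks for pair in cross) / len(cross) if cross else 1.0

print(f"congruence: {congruence(service_deps, owner, talks):.0%}")
# -> congruence: 50%. checkout->payments is backed by a relationship;
#    checkout->inventory is not. Conway's Law predicts that second
#    dependency will become a source of friction or drift.
```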

The crucial insight isn't that all enterprise software is complex. A well-designed API gateway might be complicated but not complex. A machine learning recommendation system with millions of users creating unpredictable feedback loops while being built by autonomous teams across different time zones? That's definitely complex. The skill lies in recognizing which parts of your system exhibit which characteristics—and understanding what that means for how you govern them.

Following the Desire Paths

There's a concept from urban planning that illuminates what's happening in our software systems: desire paths. These are the unpaved trails worn into grass or dirt, created not by design but by use. They defy the neat geometry of sidewalks and manicured layouts. They tell a quiet truth: people don't follow design—they follow utility.

The same phenomenon happens in software and platform engineering. You can architect the most elegant, "best-practice" platform—complete with sophisticated tooling, CI/CD pipelines, microservices frameworks, and internal SDKs—but if developers are routing around it, creating their own scripts, standing up shadow infrastructure, or building workaround systems, you're not being ignored. You're being outvoted.

Desire paths in engineering show up everywhere once you start looking:

  • Teams bypassing your service mesh to talk directly to each other's services
  • A rogue spreadsheet becoming the team's de facto service registry
  • Developers ignoring your golden path deployment process in favor of rolling their own CI jobs
  • Platform teams pushing an abstraction that no one adopts—while everyone copies and modifies a hacky shell script that just works
  • Shadow databases, unauthorized cloud accounts, and "temporary" workarounds that become permanent

Here's the thing that most platform teams miss: these aren't failures of discipline or education. They're signals. Every workaround is a vote on what's actually useful. Every bypassed system is feedback on what's missing, slow, or friction-filled. This is evolutionary pressure in action—the system trying to adapt to actual needs rather than theoretical requirements.
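
You can even start counting these votes. Here's a minimal sketch, assuming your CD system and cluster audit logs can be exported as per-deploy events; the event schema and field names are invented for illustration:

```python
from collections import Counter

# Hypothetical deployment events. "via_golden_path" marks deploys that
# went through the sanctioned pipeline; everything else is a desire path.
events = [
    {"team": "payments", "via_golden_path": True},
    {"team": "payments", "via_golden_path": True},
    {"team": "search", "via_golden_path": False},   # hand-rolled CI job
    {"team": "search", "via_golden_path": False},
    {"team": "search", "via_golden_path": True},
    {"team": "catalog", "via_golden_path": False},  # kubectl from a laptop
]

bypasses, totals = Counter(), Counter()
for e in events:
    totals[e["team"]] += 1
    bypasses[e["team"]] += not e["via_golden_path"]

# A high bypass rate is not a list of offenders to punish; it is a map
# of where the golden path has friction worth investigating.
for team in sorted(totals):
    print(f"{team:10s} bypass rate: {bypasses[team] / totals[team]:.0%}")
```

The number exists to direct curiosity, not enforcement: the next step is to ask the high-bypass teams what the official path is costing them.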

A wise urban planner doesn't pave over the desire path with a "Do Not Enter" sign. They observe where people actually walk, understand why they chose that route, and then pave the path people are using. Design follows behavior, not the other way around.

But most platform governance does the opposite. We see the desire paths and interpret them as problems to solve rather than solutions to understand.

Why Traditional Governance Fails Complex Systems

Traditional software governance evolved for complicated systems. It assumes predictability, clear cause-and-effect relationships, and the ability to control outcomes through process and planning. This works beautifully when you're managing a batch processing system or a well-defined API.

But when applied to complex systems, this approach doesn't just fail—it actively makes things worse.

The Autonomy Amplifier
Here's why modern software tends toward complexity: team autonomy. When you have multiple teams making independent decisions about their services, deployment schedules, and technology choices, you create a socio-technical system where human dynamics become a primary driver of system behavior.

This isn't a bug—it's often the entire point. Autonomous teams can move faster, innovate more freely, and respond to user needs without waiting for centralized approval. But autonomy introduces unpredictability. Team A's optimization might create unexpected load patterns for Team B's service. Team C's choice of message queue technology influences how Team D designs their error handling.

You cannot govern this kind of system from the top down because the system's behavior emerges from the interactions between autonomous agents—both human teams and technical components. Traditional governance assumes you can control inputs to get predictable outputs. But in socio-technical systems, the "inputs" are teams making autonomous decisions based on their local context, priorities, and constraints.

The Architecture Fiction
The architecture diagram on your wiki represents your aspirations, not your reality. The real architecture is encoded in:

  • Who gets paged when things break
  • Which teams can actually deploy independently
  • Where the informal expertise lives
  • How decisions get made under pressure
  • What workarounds people actually rely on

The architecture fiction persists because, as we saw above, the technical and human layers co-evolve: no diagram of the technical layer alone can describe the system you actually operate.

The Rigidity Trap
When traditional governance approaches encounter unexpected behavior in complex systems, the typical response is to add more rules, more processes, more controls. But this creates what I call organizational senescence—the enterprise equivalent of biological aging.

Just as organisms accumulate damaged cells that refuse to die but secrete inflammatory signals, organizations accumulate "zombie processes"—approval workflows that no longer serve their purpose, compliance requirements that outlived their context, and architectural standards that prevent adaptation. These processes persist not because they're useful, but because no one has the authority or incentive to eliminate them.

The result is organizational inflammation: every change requires navigating layers of legacy process, and every innovation is slowed by accumulated bureaucratic scar tissue. In fact, much of what we call "red tape" arises when platform architects put up roadblocks specifically to keep people off the desire paths, inadvertently deepening this senescence.

Meanwhile, the actual problems—misaligned incentives, communication bottlenecks, cognitive overload—remain unaddressed because they exist in the socio-technical layer that pure technical governance can't see.

The Category Error

Here's the fundamental issue: most platform governance failures aren't execution problems—they're category errors. We're trying to manage a forest like it's an engine.

When you treat a complex system as if it were merely complicated, you get:

  • Process solutions for emergence problems: More documentation when the issue is misaligned incentives
  • Technical fixes for social challenges: Better APIs when the problem is team boundaries
  • Compliance metrics instead of outcome metrics: Measuring adherence to standards rather than developer productivity (see the sketch after this list)
  • Prevention strategies for adaptation needs: Trying to eliminate all failure instead of building resilience

The measurement paradox reveals the deeper issue: in complicated systems, you can predict what success looks like and measure progress toward it. In complex systems, success is often something that emerges—you recognize it when you see it, but you couldn't have specified it in advance.

Recognizing the Signs
How do you know when you're applying the wrong mental model? Look for these patterns:

  • Your platform is technically sound but has low adoption
  • Teams consistently work around your systems rather than with them
  • You spend more time explaining why your approach is correct than improving outcomes
  • Success stories feel anecdotal and hard to replicate
  • Small changes produce unexpectedly large disruptions

These aren't signs that you need better change management or more stakeholder buy-in. They're signs that you're governing a complex system with complicated-system assumptions.

The Path Forward

The good news is that once you recognize the category error, you can start thinking about governance approaches that work with complexity rather than against it. Instead of trying to eliminate the desire paths, you can learn to read them as valuable feedback about what your system actually needs.

Instead of fighting emergence, you can create conditions for good things to emerge. Instead of optimizing for compliance, you can optimize for adaptation. Instead of measuring adherence to process, you can measure the outcomes those processes were meant to achieve.

This requires a fundamental shift in mindset—from controller to participant, from architect to gardener, from preventing failure to enabling recovery. It means accepting that you can't predict exactly what your platform will become, but you can influence the direction of its evolution.

The forest doesn't need you to control every tree. It needs you to understand the ecosystem and help it thrive.

Your platform isn't misbehaving when it evolves in unexpected ways. It's showing you that it's alive—responding to pressures, adapting to constraints, finding new ways to create value.

The question isn't whether to embrace this complexity. The question is whether you'll recognize it soon enough to work with it instead of against it.

Because the desire paths are already being worn. The only question is whether you'll follow them or spend your energy trying to prevent them.


In the next post, we'll explore what evolutionary governance actually looks like in practice—how to read the desire paths, when to pave them, and how to build platforms that adapt rather than ossify. But first, you need to recognize whether you're managing an engine or tending a forest.