
The Pyramid Series, Part 3: The Silo Factory — How Technology Support Teams Become Independent Fiefdoms

By Bill Clerici · May 2, 2026 · 11 min read

This is Part 3 of the Pyramid Series. In Part 2, we explored how technology leaders shape the pyramid—for better or worse. Now let’s look at what happens inside the base layer when nobody’s paying attention: the support teams fragment into silos, and the foundation cracks.

If you’ve ever waited three weeks for a firewall rule, filed the same request in four different systems, or sat in a meeting where Cyber, QA, the PMO, and Infrastructure all claimed their work was “P1”—congratulations. You’ve experienced the Silo Factory firsthand.

Here’s how it gets built, why it persists, and what it actually takes to tear it down.

Same Teams. Same People. Different Structure. Different Results.

How Silos Get Built: The Origin Story Nobody Planned

No technology leader wakes up one morning and says, “I’d like seven independent fiefdoms that can’t talk to each other, please.” Silos aren’t designed. They emerge—slowly, quietly, and with perfectly reasonable justifications at every step.

It usually starts with specialization. The company grows. The CTO hires a Head of Security. A QA Director. A PMO lead. An Enterprise Architect. Each one builds a team, picks their tools, defines their processes, and establishes their own operating rhythm. This is normal. This is how organizations scale.

The problem is what happens next: each team optimizes for its own survival instead of for the delivery outcomes the organization actually needs.

None of this is malicious. Every team is doing what it thinks is right. But the cumulative effect is devastating: a foundation made of disconnected parts that can’t function as a foundation at all.

The Seven Symptoms of a Siloed Support Layer

How do you know if your technology support teams have become a silo factory? Look for these symptoms:

1. Every Team Has Its Own “Source of Truth”

Cyber tracks work in ServiceNow. The PMO uses Smartsheet. QA uses TestRail. Engineering uses Jira. Enterprise Architecture uses Confluence pages that haven’t been updated since the last reorg. Nobody’s data matches. Ask for the status of a project and you’ll get a different answer depending on which tool you query.

2. Handoffs Are Where Work Goes to Die

A Cyber requirement gets documented in a GRC tool, then manually transcribed into a PMO tracker, then re-entered as a Jira ticket for engineering, then logged again in QA’s test management system. At every handoff, context is lost, requirements mutate, and timelines slip. The original ask arrives at the delivery team looking nothing like what was intended.

3. Everything Is “Priority One”

Cyber says the vulnerability patch is P1. The PMO says the audit remediation project is P1. EA says the architecture review is P1. QA says the release can’t ship until the regression suite passes. Infrastructure says the capacity upgrade is P1. When everything is priority one, nothing is. The delivery team is left to navigate a minefield of competing demands with no clear tiebreaker.

4. No Shared Planning Cadence

Engineering runs two-week sprints. The PMO runs monthly portfolio reviews. Cyber runs reactive incident response on its own timeline. EA runs quarterly architecture councils. QA runs test cycles that don’t align with sprint boundaries. There’s no common rhythm, no synchronized planning, and no unified view of capacity versus demand.

5. Metrics That Measure Activity, Not Outcomes

Each team reports its own metrics to its own leadership. Cyber reports findings closed. QA reports defects found. The PMO reports projects on track. EA reports standards reviewed. None of these metrics answer the only question that matters: How fast can this organization deliver business value?

6. Tribal Knowledge Rules Everything

Need to get a deployment approved? You’d better know that Dave in Infrastructure is the only one who can process expedited change requests, and he’s out every other Friday. Need a security exception? Talk to Sarah in Cyber, but only after you’ve filed the form in the GRC tool and sent her a Slack message, because she doesn’t check the tool. The process depends on who you know, not what you know.

7. Adding Headcount Never Fixes It

The org responds to slowness by hiring more people into each silo. More security analysts. More QA engineers. More PMO coordinators. But delivery doesn’t get faster, because the bottleneck isn’t capacity—it’s coordination. You can’t solve a communication problem by adding more people who don’t communicate.

Silos don’t starve the organization of resources. They starve it of coherence. The people are there. The skills are there. The budget is there. What’s missing is a shared mission, shared tools, and a shared definition of success.

The Incentive Problem: Why Silos Resist Demolition

If silos are so obviously dysfunctional, why do they persist? Because the people inside them are being rewarded for silo behavior.

Every support team leader has built a kingdom. They have headcount. They have budget. They have a charter that justifies their existence. Their performance reviews are tied to metrics that only measure their team’s output, not the organization’s outcomes. Asking them to integrate with other teams feels like asking them to give up power—because it is.

Until these incentives change, the silos won’t. You can reorganize the boxes on the org chart all you want. If the metrics stay the same, the behavior stays the same. This is where the technology leader from Part 2 becomes decisive: only they have the authority to rewrite the incentive structure.

What a Unified Foundation Actually Looks Like

So what’s the alternative? What does it look like when the support layer operates as an integrated foundation instead of a collection of fiefdoms?

Shared Tooling and Shared Visibility

Every support team works in the same ecosystem. Not necessarily the same tool for everything, but integrated tools with shared visibility. When a Cyber requirement is created, it appears in the delivery team’s backlog automatically. When QA flags a defect, engineering sees it in real time. When the PMO needs a status update, they pull it from the same system engineering already uses—no re-keying, no reconciliation, no seventeen-slide status deck.
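The "appears automatically" part is just an event bridge between tools. Here's a minimal sketch of the pattern in Python — the `SharedBacklog` class and the event shape are invented for illustration, not any real ServiceNow or Jira API:

```python
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    source_system: str  # where the work originated (e.g., the GRC tool)
    source_id: str      # the originating record ID, kept for traceability
    title: str
    team: str

@dataclass
class SharedBacklog:
    """One backlog that delivery and every support team can read."""
    items: list = field(default_factory=list)

    def mirror_event(self, event: dict) -> BacklogItem:
        # Translate a support-team event into a delivery backlog item.
        # No re-keying: the source record ID rides along for reconciliation.
        item = BacklogItem(
            source_system=event["system"],
            source_id=event["id"],
            title=event["summary"],
            team=event.get("team", "delivery"),
        )
        self.items.append(item)
        return item

# A Cyber requirement created in the GRC tool lands in the delivery
# backlog without anyone transcribing it by hand.
backlog = SharedBacklog()
backlog.mirror_event({"system": "grc", "id": "SEC-101",
                      "summary": "Encrypt PII at rest", "team": "payments"})
```

The point isn't the code — it's that the bridge preserves the source ID, so "which tool do I query for status?" stops being a question.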

Shared Metrics Aligned to Delivery

Every support team is measured—at least in part—on delivery outcomes. Cyber’s success includes how fast security reviews complete, not just how many findings they close. QA’s success includes test automation coverage and pipeline pass rates, not just defects found. The PMO’s success includes deployment frequency and time-to-value, not just projects tracked. When everyone’s scorecard includes delivery velocity, the incentive to collaborate replaces the incentive to protect territory.
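Delivery-aligned metrics can be computed straight from the shared tooling. A small sketch, with made-up review dates, of measuring how fast security reviews actually complete — the number a siloed Cyber team never reports:

```python
from datetime import date

def median_days(pairs):
    """Median elapsed days between (opened, closed) date pairs."""
    days = sorted((closed - opened).days for opened, closed in pairs)
    mid = len(days) // 2
    return days[mid] if len(days) % 2 else (days[mid - 1] + days[mid]) / 2

# Hypothetical security-review records: (opened, closed).
reviews = [
    (date(2026, 3, 2), date(2026, 3, 4)),
    (date(2026, 3, 5), date(2026, 3, 19)),
    (date(2026, 3, 9), date(2026, 3, 12)),
]
print(median_days(reviews))  # median review turnaround in days -> 3
```

Put that number on Cyber's scorecard next to "findings closed" and the conversation changes.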

Embedded Support, Not Centralized Gates

Instead of centralized teams that delivery must queue up for, support capabilities are embedded into delivery streams. A security engineer sits with the delivery team. A QA automation engineer is part of the squad. Architecture guidance is available on-demand through lightweight consultations, not six-week review board queues. The Team Topologies model calls these “enabling teams”—and the distinction from traditional centralized support is night and day.

A Common Planning Cadence

All support teams synchronize their planning with the delivery cadence. If delivery runs two-week sprints, support teams align their capacity planning to the same cycle. Cross-team dependencies are identified in shared planning sessions, not discovered mid-sprint when a blocked ticket shows up. There’s one backlog view that shows all work—delivery and support—in a single, prioritized stream.

Automation as the Default

Manual gates are replaced with automated guardrails. Security scanning runs in the CI/CD pipeline, not in a manual review board. QA regression runs automatically on every pull request. Infrastructure provisioning is self-service through Terraform modules and platform APIs. Compliance checks are policy-as-code. The support layer’s job shifts from approving work to building the automation that makes approvals unnecessary.
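What "policy-as-code" means in practice: each rule is executable, and the pipeline fails the build on any violation — no review board in the loop. Real teams typically use a tool like Open Policy Agent for this; the sketch below shows the same idea in plain Python with invented policy rules:

```python
# Each policy is a plain function that takes a deployment manifest
# and yields violations. An empty result means the gate passes.

def require_encryption(manifest):
    if not manifest.get("storage", {}).get("encrypted", False):
        yield "storage must be encrypted at rest"

def forbid_public_ingress(manifest):
    for rule in manifest.get("ingress", []):
        if rule.get("cidr") == "0.0.0.0/0":
            yield f"ingress rule {rule.get('port')} is open to the world"

POLICIES = [require_encryption, forbid_public_ingress]

def evaluate(manifest):
    """Run every policy against the manifest; collect all violations."""
    return [v for policy in POLICIES for v in policy(manifest)]

manifest = {
    "storage": {"encrypted": True},
    "ingress": [{"port": 22, "cidr": "0.0.0.0/0"}],
}
violations = evaluate(manifest)
# -> ["ingress rule 22 is open to the world"]
```

The support team's expertise is still in the loop — it's just encoded once, in the policies, instead of re-applied manually in every review.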

The Maturity Assessment: Your Silo Detector

Here’s the uncomfortable part: most leaders don’t actually know how siloed their organization is. They have a vague sense that “things are slow” and “teams don’t communicate well,” but they can’t point to specific, measurable behaviors that explain why.

This is where the Maturity Assessment we introduced in Part 2 becomes your silo detector. A well-designed assessment doesn’t just evaluate individual team capabilities in isolation—it measures the connective tissue between teams.

Score each dimension. Visualize the gaps. Show every team leader where they stand—not to shame them, but to give them a shared, objective picture of the problem. Silos persist partly because nobody has a clear view of the damage they’re causing. The maturity assessment makes the invisible visible.
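The "score each dimension, visualize the gaps" step can be sketched in a few lines. The dimensions, teams, and 1–5 scores below are hypothetical — the shape of the output is the point:

```python
# Hypothetical maturity scores (1-5) per team, per dimension.
SCORES = {
    "Cyber": {"shared tooling": 2, "planning sync": 1, "automation": 3},
    "QA":    {"shared tooling": 4, "planning sync": 2, "automation": 4},
    "PMO":   {"shared tooling": 1, "planning sync": 2, "automation": 1},
}
TARGET = 4  # the maturity level the org is aiming for

def gaps(scores, target):
    """Flag every team/dimension pair below target -- the silo map."""
    return sorted(
        (team, dim, target - score)
        for team, dims in scores.items()
        for dim, score in dims.items()
        if score < target
    )

for team, dim, gap in gaps(SCORES, TARGET):
    print(f"{team:5} lags on '{dim}' by {gap}")
```

Run quarterly, the same table becomes the shared scoreboard the next section argues for: everyone sees the same gaps, measured the same way.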

You don’t break down silos by telling people to collaborate more. You break them down by showing them—with data—what the silos are actually costing, and then giving them a shared scoreboard to track progress against.

The Technology Leader’s Role: Silo Architect or Silo Demolisher?

Everything in this article comes back to the technology leader. The silos exist because the leader allowed—or created—the conditions for them to form.

Breaking down silos requires the technology leader to make five deliberate moves:

  1. Rewrite the incentives. Every support team leader’s performance review must include delivery velocity metrics. If their team’s work isn’t making delivery faster, they’re not succeeding—regardless of their internal metrics.

  2. Mandate shared tooling. Not optional. Not “encouraged.” Mandated. One work management ecosystem with shared visibility. Eliminate the shadow systems.

  3. Synchronize planning. All support teams plan on the delivery cadence. Cross-team dependencies are identified in shared sessions, not discovered when things break.

  4. Embed support into delivery. Move from centralized gate-keeping teams to embedded enabling teams. Security, QA, and architecture expertise should sit with the people building software.

  5. Run the maturity assessment quarterly. Track silo behavior over time. Celebrate improvements. Escalate regressions. Make the scoreboard visible to everyone.

None of this is easy. Every move threatens someone’s territory. But the technology leader is the only person who can make these moves—and the organization’s ability to deliver business value depends on them doing it.


What’s Next

Silos are one half of the governance weight problem. The other half is the audit flywheel—the self-reinforcing cycle where every audit creates more controls, more overhead, and more reactive work that consumes the capacity that should go to business delivery.

In Part 4: The Audit Flywheel, we’ll go deep on how compliance becomes the product, why reactive audit response makes you less secure, and how mature organizations use continuous compliance and policy-as-code to break the cycle—with the maturity assessment as the diagnostic that shows you where to start.

This is Part 3 of a 6-part series. Read Part 1: A Tale of Two Pyramids and Part 2: The Technology Leader’s Dilemma if you haven’t already.
