This is Part 3 of the Pyramid Series. In Part 2, we explored how technology leaders shape the pyramid—for better or worse. Now let’s look at what happens inside the base layer when nobody’s paying attention: the support teams fragment into silos, and the foundation cracks.
If you’ve ever waited three weeks for a firewall rule, filed the same request in four different systems, or sat in a meeting where Cyber, QA, the PMO, and Infrastructure all claimed their work was “P1”—congratulations. You’ve experienced the Silo Factory firsthand.
Here’s how it gets built, why it persists, and what it actually takes to tear it down.

How Silos Get Built: The Origin Story Nobody Planned
No technology leader wakes up one morning and says, “I’d like seven independent fiefdoms that can’t talk to each other, please.” Silos aren’t designed. They emerge—slowly, quietly, and with perfectly reasonable justifications at every step.
It usually starts with specialization. The company grows. The CTO hires a Head of Security. A QA Director. A PMO lead. An Enterprise Architect. Each one builds a team, picks their tools, defines their processes, and establishes their own operating rhythm. This is normal. This is how organizations scale.
The problem is what happens next: each team optimizes for its own survival instead of for the delivery outcomes the organization actually needs.
Cyber picks a GRC platform nobody else uses, builds a ticketing workflow only they understand, and starts measuring success by findings remediated—not by how secure the delivery pipeline actually is.
QA builds a test management system disconnected from the engineering backlog, creates its own defect taxonomy, and measures success by bugs found—which perversely incentivizes finding problems over preventing them.
The PMO adopts a project portfolio tool that duplicates the engineering team’s sprint board, invents its own status reporting format, and measures success by projects tracked—not by features shipped.
Enterprise Architecture maintains a standards registry that nobody outside EA has ever read, runs quarterly review boards with six-week lead times, and measures success by standards published—not by how quickly teams can adopt them.
Infrastructure builds a change management process optimized for stability over speed, requires manual approvals for resources that should be self-service, and measures success by uptime—not by how fast a developer can go from code to production.
SDLC Governance creates process checklists that add ceremony without adding value, mandates artifacts that nobody reads after they’re produced, and measures success by compliance rates—not by delivery velocity.
Reporting pulls data from every team’s siloed tool into dashboards that tell seventeen different stories, reconciles nothing, and measures success by dashboards delivered—not by decisions improved.
None of this is malicious. Every team is doing what it thinks is right. But the cumulative effect is devastating: a foundation made of disconnected parts that can’t function as a foundation at all.
The Seven Symptoms of a Siloed Support Layer
How do you know if your technology support teams have become a silo factory? Look for these symptoms:
1. Every Team Has Its Own “Source of Truth”
Cyber tracks work in ServiceNow. The PMO uses Smartsheet. QA uses TestRail. Engineering uses Jira. Enterprise Architecture uses Confluence pages that haven’t been updated since the last reorg. Nobody’s data matches. Ask for the status of a project and you’ll get a different answer depending on which tool you query.
2. Handoffs Are Where Work Goes to Die
A Cyber requirement gets documented in a GRC tool, then manually transcribed into a PMO tracker, then re-entered as a Jira ticket for engineering, then logged again in QA’s test management system. At every handoff, context is lost, requirements mutate, and timelines slip. The original ask arrives at the delivery team looking nothing like what was intended.
3. Everything Is “Priority One”
Cyber says the vulnerability patch is P1. The PMO says the audit remediation project is P1. EA says the architecture review is P1. QA says the release can’t ship until the regression suite passes. Infrastructure says the capacity upgrade is P1. When everything is priority one, nothing is. The delivery team is left to navigate a minefield of competing demands with no clear tiebreaker.
4. No Shared Planning Cadence
Engineering runs two-week sprints. The PMO runs monthly portfolio reviews. Cyber runs reactive incident response on its own timeline. EA runs quarterly architecture councils. QA runs test cycles that don’t align with sprint boundaries. There’s no common rhythm, no synchronized planning, and no unified view of capacity versus demand.
5. Metrics That Measure Activity, Not Outcomes
Each team reports its own metrics to its own leadership. Cyber reports findings closed. QA reports defects found. The PMO reports projects on track. EA reports standards reviewed. None of these metrics answer the only question that matters: How fast can this organization deliver business value?
6. Tribal Knowledge Rules Everything
Need to get a deployment approved? You’d better know that Dave in Infrastructure is the only one who can process expedited change requests, and he’s out every other Friday. Need a security exception? Talk to Sarah in Cyber, but only after you’ve filed the form in the GRC tool and sent her a Slack message, because she doesn’t check the tool. The process depends on who you know, not what you know.
7. Adding Headcount Never Fixes It
The org responds to slowness by hiring more people into each silo. More security analysts. More QA engineers. More PMO coordinators. But delivery doesn’t get faster, because the bottleneck isn’t capacity—it’s coordination. You can’t solve a communication problem by adding more people who don’t communicate.
Silos don’t starve the organization of resources. They starve it of coherence. The people are there. The skills are there. The budget is there. What’s missing is a shared mission, shared tools, and a shared definition of success.
The Incentive Problem: Why Silos Resist Demolition
If silos are so obviously dysfunctional, why do they persist? Because the people inside them are being rewarded for silo behavior.
Every support team leader has built a kingdom. They have headcount. They have budget. They have a charter that justifies their existence. Their performance reviews are tied to metrics that only measure their team’s output, not the organization’s outcomes. Asking them to integrate with other teams feels like asking them to give up power—because it is.
The Cyber Director’s bonus is tied to audit findings closed, not to delivery velocity.
The QA Director’s performance review celebrates defects caught, not defects prevented through earlier testing.
The PMO lead’s success metric is projects tracked, not features shipped.
The EA lead’s credibility comes from standards enforced, not from patterns adopted.
Until these incentives change, the silos won’t. You can reorganize the boxes on the org chart all you want. If the metrics stay the same, the behavior stays the same. This is where the technology leader from Part 2 becomes decisive: only they have the authority to rewrite the incentive structure.
What a Unified Foundation Actually Looks Like
So what’s the alternative? What does it look like when the support layer operates as an integrated foundation instead of a collection of fiefdoms?
Shared Tooling and Shared Visibility
Every support team works in the same ecosystem. Not necessarily the same tool for everything, but integrated tools with shared visibility. When a Cyber requirement is created, it appears in the delivery team’s backlog automatically. When QA flags a defect, engineering sees it in real time. When the PMO needs a status update, they pull it from the same system engineering already uses—no re-keying, no reconciliation, no seventeen-slide status deck.
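To make the idea concrete, here's a minimal sketch of that first integration: a security finding flows into the delivery backlog the moment it's created, with a traceable link back to the source record. All of the names here (Finding, Backlog, the severity-to-priority mapping) are illustrative assumptions, not any real GRC or work-management API.

```python
# Hypothetical event-driven sync: a new GRC finding lands in the delivery
# backlog automatically, instead of being re-keyed by hand at each handoff.
from dataclasses import dataclass, field

@dataclass
class Finding:
    id: str
    title: str
    severity: str  # e.g. "critical", "high", "low"

@dataclass
class Backlog:
    items: list = field(default_factory=list)

    def add(self, item: dict) -> None:
        self.items.append(item)

def on_finding_created(finding: Finding, backlog: Backlog) -> dict:
    """Event handler invoked when the security tool creates a finding."""
    item = {
        "source": f"grc:{finding.id}",  # traceability back to the GRC record
        "summary": f"[Security] {finding.title}",
        "priority": {"critical": "P1", "high": "P2"}.get(finding.severity, "P3"),
    }
    backlog.add(item)
    return item
```

The design point is the direction of flow: the tool of record pushes into the delivery backlog automatically, so the backlog is never stale and nobody transcribes anything.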
Shared Metrics Aligned to Delivery
Every support team is measured—at least in part—on delivery outcomes. Cyber’s success includes how fast security reviews complete, not just how many findings they close. QA’s success includes test automation coverage and pipeline pass rates, not just defects found. The PMO’s success includes deployment frequency and time-to-value, not just projects tracked. When everyone’s scorecard includes delivery velocity, the incentive to collaborate replaces the incentive to protect territory.
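Once every team writes to one shared event stream, delivery-aligned metrics like these fall out of a simple query. The sketch below computes two of them; the event shapes and field names are assumptions for illustration, not a real schema.

```python
# Illustrative delivery-aligned metrics computed from one shared event log.
from datetime import timedelta

def review_lead_time(events: list) -> timedelta:
    """Average time from 'review_requested' to 'review_completed',
    matched by request id -- Cyber's delivery-aligned metric."""
    opened, durations = {}, []
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["type"] == "review_requested":
            opened[e["id"]] = e["ts"]
        elif e["type"] == "review_completed" and e["id"] in opened:
            durations.append(e["ts"] - opened.pop(e["id"]))
    return sum(durations, timedelta()) / len(durations)

def deployment_frequency(events: list, window_days: int = 14) -> float:
    """Deploys per day over the reporting window -- the PMO's metric."""
    return len([e for e in events if e["type"] == "deployed"]) / window_days
```

Note that neither function cares which team emitted the event. That's the point of a shared scorecard: one data source, many lenses.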
Embedded Support, Not Centralized Gates
Instead of centralized teams that delivery must queue up for, support capabilities are embedded into delivery streams. A security engineer sits with the delivery team. A QA automation engineer is part of the squad. Architecture guidance is available on-demand through lightweight consultations, not six-week review board queues. The Team Topologies model calls these “enabling teams”—and the distinction from traditional centralized support is night and day.
A Common Planning Cadence
All support teams synchronize their planning with the delivery cadence. If delivery runs two-week sprints, support teams align their capacity planning to the same cycle. Cross-team dependencies are identified in shared planning sessions, not discovered mid-sprint when a blocked ticket shows up. There’s one backlog view that shows all work—delivery and support—in a single, prioritized stream.
Automation as the Default
Manual gates are replaced with automated guardrails. Security scanning runs in the CI/CD pipeline, not in a manual review board. QA regression runs automatically on every pull request. Infrastructure provisioning is self-service through Terraform modules and platform APIs. Compliance checks are policy-as-code. The support layer’s job shifts from approving work to building the automation that makes approvals unnecessary.
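Here's what "policy-as-code" can look like at its simplest: the manual approval gate becomes a function the pipeline calls on every change. The specific rules and change-record fields below are invented for illustration; real shops typically express this in a dedicated policy engine, but the shape is the same.

```python
# A minimal policy-as-code sketch: an automated guardrail replacing a
# manual review board. Rules and field names are illustrative assumptions.

def evaluate_change(change: dict) -> list:
    """Return the list of policy violations; an empty list means the
    change passes the guardrail and can proceed without human approval."""
    violations = []
    if not change.get("security_scan_passed"):
        violations.append("security scan has not passed")
    if change.get("test_coverage", 0.0) < 0.80:
        violations.append("test coverage below 80%")
    if not change.get("peer_reviewed"):
        violations.append("change lacks peer review")
    return violations
```

Because the policy is code, the same check runs on a developer's laptop, in CI, and as audit evidence, and every "approval" becomes a reproducible artifact rather than a meeting.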
The Maturity Assessment: Your Silo Detector
Here’s the uncomfortable part: most leaders don’t actually know how siloed their organization is. They have a vague sense that “things are slow” and “teams don’t communicate well,” but they can’t point to specific, measurable behaviors that explain why.
This is where the Maturity Assessment we introduced in Part 2 becomes your silo detector. A well-designed assessment doesn’t just evaluate individual team capabilities in isolation—it measures the connective tissue between teams:
Tooling integration: Do support teams and delivery teams share a common work management ecosystem, or does every team have its own disconnected tool?
Handoff efficiency: How many times is a requirement re-entered across systems before it reaches a developer? What’s the average time from request to action?
Planning alignment: Do support teams plan on the same cadence as delivery? Are cross-team dependencies visible before they become blockers?
Metric alignment: Are support teams measured on delivery outcomes or only on their own internal output?
Automation maturity: What percentage of security, QA, infrastructure, and compliance checks are automated versus manual?
Knowledge sharing: Is process documented and accessible, or does everything depend on tribal knowledge and individual relationships?
Score each dimension. Visualize the gaps. Show every team leader where they stand—not to shame them, but to give them a shared, objective picture of the problem. Silos persist partly because nobody has a clear view of the damage they’re causing. The maturity assessment makes the invisible visible.
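A silo detector like this can be as simple as a spreadsheet, or a few lines of code. The sketch below scores the six connective-tissue dimensions listed above and surfaces the gaps, worst first; the target threshold and the sample scale are assumptions for illustration.

```python
# Sketch of the maturity assessment as a "silo detector": score each
# connective-tissue dimension (here on a 1-5 scale) and surface the gaps.
DIMENSIONS = [
    "tooling_integration", "handoff_efficiency", "planning_alignment",
    "metric_alignment", "automation_maturity", "knowledge_sharing",
]

def assess(scores: dict, target: int = 4) -> dict:
    """Return the overall maturity score and per-dimension gaps to target,
    ordered worst-first so leaders see where to start."""
    gaps = {d: target - scores[d] for d in DIMENSIONS if scores[d] < target}
    return {
        "overall": round(sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS), 1),
        "gaps": dict(sorted(gaps.items(), key=lambda kv: -kv[1])),
    }
```

Run it quarterly with the same rubric and the trend line becomes the shared scoreboard: every team leader sees the same gaps, measured the same way.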
You don’t break down silos by telling people to collaborate more. You break them down by showing them—with data—what the silos are actually costing, and then giving them a shared scoreboard for tracking progress.
The Technology Leader’s Role: Silo Architect or Silo Demolisher?
Everything in this article comes back to the technology leader. The silos exist because the leader allowed—or created—the conditions for them to form:
They hired team leads without giving them shared objectives.
They let each team pick its own tools without requiring integration.
They approved metrics that reward team-level output over organizational outcomes.
They tolerated competing priorities without establishing a clear tiebreaker.
They added headcount to silos instead of investing in coordination and automation.
Breaking down silos requires the technology leader to make five deliberate moves:
Rewrite the incentives. Every support team leader’s performance review must include delivery velocity metrics. If their team’s work isn’t making delivery faster, they’re not succeeding—regardless of their internal metrics.
Mandate shared tooling. Not optional. Not “encouraged.” Mandated. One work management ecosystem with shared visibility. Eliminate the shadow systems.
Synchronize planning. All support teams plan on the delivery cadence. Cross-team dependencies are identified in shared sessions, not discovered when things break.
Embed support into delivery. Move from centralized gate-keeping teams to embedded enabling teams. Security, QA, and architecture expertise should sit with the people building software.
Run the maturity assessment quarterly. Track silo behavior over time. Celebrate improvements. Escalate regressions. Make the scoreboard visible to everyone.
None of this is easy. Every move threatens someone’s territory. But the technology leader is the only person who can make these moves—and the organization’s ability to deliver business value depends on them doing it.
What’s Next
Silos are one half of the governance weight problem. The other half is the audit flywheel—the self-reinforcing cycle where every audit creates more controls, more overhead, and more reactive work that consumes the capacity that should go to business delivery.
In Part 4: The Audit Flywheel, we’ll go deep on how compliance becomes the product, why reactive audit response makes you less secure, and how mature organizations use continuous compliance and policy-as-code to break the cycle—with the maturity assessment as the diagnostic that shows you where to start.
This is Part 3 of a 6-part series. Read Part 1: A Tale of Two Pyramids and Part 2: The Technology Leader’s Dilemma if you haven’t already.