Decision Structure Diagnostic — Use Cases

How the Decision Structure Diagnostic Works

Every organization has questions it keeps asking but can’t resolve. Not because people aren’t smart enough — but because the way the question is being processed doesn’t match what the question actually requires.

You bring the business problem. The diagnostic reveals why it’s stuck — and what specifically needs to change for it to move.
Use Case 1: “Are Our Trade Shows Worth It?”
Use Case 2: “We Can’t Ship on Schedule”
Use Case 3: “Our Departments Don’t Collaborate”

Use Case 1: “Are Our Trade Shows Worth It?”

The Business Problem

A mid-market B2B software company ($40M ARR, 180 employees) spends $1.2M annually on trade show participation. Every quarter, the same question resurfaces: Are trade shows providing a positive ROI from the leads they generate?

Marketing pulls attribution data. Sales disputes the numbers. Finance asks for a cleaner model. The CMO presents a revised analysis. The CEO asks the same question again next quarter. Meanwhile, the trade show budget renews by default because nobody can definitively answer the question — and nobody wants to be the one who cut the program and was wrong.

What It Looks Like from Inside

The team believes this is a data problem. If they could just get better lead attribution, cleaner pipeline tracking, and a tighter ROI model, they’d have the answer. So they invest in better tracking, run the analysis again, and still can’t reach a decision that holds. The question has been actively debated for 14 months without resolution.

What the Diagnostic Reveals

The question is Probabilistic — but it’s being treated as Deterministic.

Trade show ROI depends on variables the company doesn’t control: which prospects attend, whether they’re in an active buying cycle, how long their sales cycle runs, and whether the relationship that started at the booth would have formed through another channel anyway. There is no “right answer” — there is only a range of probable outcomes under different assumptions.

But the organization is treating it as Deterministic — searching for the number that proves trade shows work or don’t work. When the analysis is inconclusive (which it will always be), the decision stalls and cycles back.

Diagnostic Findings

Pattern · What the Data Shows
Decision type misclassification · Probabilistic question treated with Deterministic logic, seeking certainty where only probability exists
Revisit rate: 🟠 Elevated · The same ROI question has been formally revisited 4+ times in 14 months
Authority ambiguity · Three roles believe they have authority over the trade show budget (CMO, CRO, CEO); none will make a unilateral call
Incentive conflict: 🟠 Elevated · Marketing is measured on pipeline (favors trade shows); Sales on closed revenue (indifferent to source); Finance on cost control (favors cutting). Each function’s data tells a different story because each is asking a different question
Completion: 🟠 Weak · Decisions about the trade show program are made in quarterly reviews but don’t hold; the budget renews by default regardless of what was “decided”
Cost of delay: 🟠 Elevated · 14 months of cycling has consumed an estimated 200+ hours of senior leadership time with zero structural change
Role Delta: Authority-role respondents (CMO, CRO) rate the decision process as mostly functional. The Marketing Director (Execution) reports that the process produces no actionable outcome — decisions are announced in quarterly reviews and quietly overridden within weeks.

What Changes

The diagnostic doesn’t answer “Are trade shows worth it?” — it resolves why the organization can’t answer it.

1. Reframe as a structured bet, not a proof.

Stop asking “Do trade shows work?” and start asking “Under what conditions would we continue or cut this bet?” Define hold conditions (e.g., cost-per-qualified-lead below $X, pipeline-to-spend ratio above Y) and a review trigger (e.g., two consecutive quarters below threshold). This converts an unanswerable certainty question into a manageable probability question with defined exit criteria. A minimal sketch of what this check could look like, using illustrative thresholds, follows these recommendations.

Owner: CEO · Sequence: First · Effort: Low

2. Assign budget authority to one role.

The CMO owns the trade show budget. The CRO provides pipeline data as input, not as veto. Finance sets the envelope; Marketing decides how to spend within it. This eliminates the three-way authority ambiguity that causes every analysis to be re-litigated.

Owner: CEO · Sequence: Parallel · Effort: Low

3. Set a 90-day review cycle with pre-agreed metrics.

Replace the ad-hoc quarterly debate with a structured review against the hold conditions. If conditions are met, the bet continues. If not, the designated authority makes the call. No re-analysis unless new information changes the probability model — not just because someone is uncomfortable.

Owner: CMO · Sequence: After 1 & 2 · Effort: Medium
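
To make the structured-bet framing concrete, here is a minimal sketch of what the hold conditions and review trigger could look like once the metrics are pre-agreed. The metric names, threshold values, and sample quarters are illustrative assumptions, not figures from the diagnostic.

```python
# Minimal sketch of a 90-day review against pre-agreed hold conditions.
# All thresholds and sample values below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class QuarterMetrics:
    cost_per_qualified_lead: float   # trade show spend / qualified leads sourced
    pipeline_to_spend_ratio: float   # show-sourced pipeline / trade show spend

# Hold conditions: the bet continues while BOTH are met.
MAX_COST_PER_QUALIFIED_LEAD = 1_500   # the "$X" in the recommendation (assumed)
MIN_PIPELINE_TO_SPEND_RATIO = 3.0     # the "Y" in the recommendation (assumed)

def quarter_holds(q: QuarterMetrics) -> bool:
    """True if this quarter satisfies both pre-agreed hold conditions."""
    return (q.cost_per_qualified_lead <= MAX_COST_PER_QUALIFIED_LEAD
            and q.pipeline_to_spend_ratio >= MIN_PIPELINE_TO_SPEND_RATIO)

def review_triggered(history: list[QuarterMetrics]) -> bool:
    """Review trigger: two consecutive quarters below threshold."""
    misses = [not quarter_holds(q) for q in history]
    return any(a and b for a, b in zip(misses, misses[1:]))

# Example: last three quarters of actuals (made-up numbers).
history = [
    QuarterMetrics(cost_per_qualified_lead=1_200, pipeline_to_spend_ratio=3.4),
    QuarterMetrics(cost_per_qualified_lead=1_700, pipeline_to_spend_ratio=2.6),
    QuarterMetrics(cost_per_qualified_lead=1_650, pipeline_to_spend_ratio=2.4),
]
print(review_triggered(history))  # True: two consecutive misses
```

If the trigger fires, the designated authority makes the continue-or-cut call; the model itself is only revisited when new information changes the underlying assumptions.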

Use Case 2: “We Can’t Ship on Schedule”

The Business Problem

A 300-person SaaS company has missed its last three product release dates. Engineering blames unclear requirements. Product blames mid-cycle scope changes from Sales. Sales blames slow delivery for lost deals. The VP of Engineering has proposed a “process overhaul” — but the overhaul itself has been in planning for five months.

Everyone agrees the problem is real. Everyone disagrees on the cause. Three competing proposals exist (Engineering’s sprint overhaul, Product’s requirements framework, Sales’ prioritization committee), but no single proposal has enough support to move forward. Each is evaluated against the others, criticized for what it doesn’t address, and tabled for further refinement.

What It Looks Like from Inside

Leadership sees this as a “we need to pick the right approach” problem. The assumption is that one of the three proposals is correct, and if they analyze them thoroughly enough, the best one will emerge.

What the Diagnostic Reveals

The question is Generative — but it’s being treated as Deterministic.

The delivery process doesn’t need to be “fixed” by selecting the right pre-existing framework. It needs to be designed — authored by someone with the authority to create a new structure that integrates the legitimate concerns from all three functions. The three competing proposals aren’t wrong answers to be evaluated; they’re partial inputs to a design problem that nobody has been authorized to solve.

But the organization is treating it as Deterministic — comparing proposals as if one of them is the right answer. This creates an evaluation loop that doesn’t converge because the problem isn’t which proposal to pick — it’s that nobody has authorship over the design.

Diagnostic Findings

Pattern · What the Data Shows
Decision type misclassification · Generative problem treated with Deterministic logic, evaluating proposals instead of authorizing design
Stall frequency: 🟠 Elevated · The process overhaul has stalled 4 times in 5 months, each time at the “which approach?” gate
Coordination overload: 🔴 Critical · 10+ people involved in a decision that requires authorship from 1–2; the consensus requirement guarantees a stall on a design problem
Authority misalignment: 🟠 Elevated · Three VPs have partial authority; none has full authority to design and implement. Each can block but none can build
Recurrence: 🔴 Critical · Delivery misses have occurred 4+ times in 24 months. The “fix the process” conversation has itself become a recurring pattern
Cost impact: 🟠 Elevated · Each missed release delays revenue recognition by 4–6 weeks. Three consecutive misses have cost an estimated $2.1M in deferred pipeline
Role Delta: Authority-role respondents (VP Engineering, VP Product) rate the decision process as “mostly aligned” — they see productive debate. The Engineering Director (Execution) reports that nothing has changed in five months and the team has stopped believing a new process is coming.

What Changes

1. Designate a process author — not a committee.

One person (VP Engineering or a designated Chief of Staff) is given authorship over the delivery process redesign. They take input from Product and Sales, but they own the design. The output is not a proposal to be evaluated — it is a decision to be implemented, refined, and iterated.

Owner: COO or CEO · Sequence: First · Effort: Low

2. Set a 30-day design window with a ship date.

The author has 30 days to produce a v1 process design. The design ships on day 30 regardless of completeness — because a generative output improves through iteration, not through pre-launch perfection. The first quarterly release under the new process is the test, not a committee review.

Owner: Designated process author · Sequence: After 1 · Effort: Medium

3. Reduce the decision participant count from 10+ to 3.

The author consults Engineering, Product, and Sales leads for input. Nobody else has a seat at the design table. This directly targets the coordination overload that has prevented any single approach from gaining enough support to ship.

Owner: COO · Sequence: Parallel with 2 · Effort: Low

Use Case 3: “Our Departments Don’t Collaborate”

The Business Problem

A 500-person professional services firm has grown through acquisition. Three legacy business units now operate under one brand, but cross-selling between units is nearly zero. The CEO has made “collaboration” a strategic priority for two consecutive years. Shared CRM dashboards, lunch-and-learns, joint pipeline reviews, a collaboration bonus — none of it has worked. Cross-unit referral revenue remains below 3%.

Leadership believes this is a culture problem — the legacy units don’t trust each other and default to protecting their own P&L. The proposed next step is a two-day offsite focused on “building bridges.”

What It Looks Like from Inside

Each business unit head publicly supports cross-selling but privately protects their pipeline. Referrals that do happen create friction: who gets credit for the revenue? Who owns the client relationship? What happens when a referred engagement goes poorly? These questions are answered differently by each unit, and the answers aren’t documented anywhere.

The collaboration initiatives have addressed visibility and motivation — but the underlying structural constraints haven’t changed. The bonus exists, but nobody knows how referral credit is split. The dashboard shows cross-unit pipeline, but nobody has authority to assign a referred lead.

What the Diagnostic Reveals

This is not a culture problem. It is a constraint visibility and incentive structure problem.

The diagnostic isolates the specific decision that’s failing: the referral decision — the moment when a partner in Unit A identifies an opportunity for Unit B and decides whether to act on it. That micro-decision breaks down for structural reasons: a partner who refers a $200K engagement loses pipeline and gains a $5K referral bonus. The rational economic choice is to not refer — and that’s exactly what’s happening.
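
A back-of-the-envelope calculation makes the structural barrier concrete. The engagement value and bonus come from the example above; treating the full engagement value as pipeline credit lost by the referring partner is a simplifying assumption.

```python
# Referral economics under the current structure (illustrative, simplified).
engagement_value = 200_000    # opportunity a Unit A partner identifies for Unit B
flat_referral_bonus = 5_000   # one-time bonus paid to the referring partner

# Assumption: if referred, the engagement is credited entirely to Unit B,
# so the referring partner's own pipeline and P&L see none of it.
pipeline_credit_lost = engagement_value

net_change_if_referred = flat_referral_bonus - pipeline_credit_lost
print(net_change_if_referred)  # -195000: referring is a net economic loss
```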

Diagnostic Findings

Pattern · What the Data Shows
Constraint visibility: 🔴 Critical · The rules governing referral credit, client ownership transfer, and engagement accountability are mostly implicit or undocumented. Each unit operates under different assumptions
Incentive conflict: 🔴 Critical · Unit P&L ownership directly conflicts with cross-unit referrals. The rational economic choice under the current structure is to not refer
Authority ambiguity: 🟠 Elevated · No single role owns the cross-unit referral process. The CEO mandates collaboration; the unit heads control the P&L; the partners make the decisions
Recurrence: 🔴 Critical · The collaboration initiative has been relaunched every 6–12 months for three years, each time with different tactics but the same structural constraints
Cost impact: 🟠 Elevated · Internal analysis estimates $4–6M in unrealized cross-sell revenue annually. Each failed initiative consumes 200+ partner hours
Completion: 🔴 Critical · No collaboration initiative in the past three years has run to completion. Each is launched, stalls when structural friction emerges, and is quietly replaced
Role Delta: Authority-role respondents (CEO, COO) rate the collaboration process as “mostly aligned” and believe the units need more encouragement. Execution-role respondents (unit-level partners) rate constraint visibility and incentive alignment as critical — they see the structural barriers that make referral economically irrational.

What Changes

The diagnostic identifies that the firm doesn’t need a culture intervention — it needs a constraint redesign. The referral decision will remain structurally irrational until the economics change.

1. Make the implicit constraints explicit — in writing.

Document the referral credit split, client ownership rules, and engagement accountability structure in a one-page referral operating agreement. Every unit signs it. Every partner reads it. The constraints that are currently implicit and inconsistent become explicit and shared.

Owner: COO · Sequence: First · Effort: Low

2. Restructure the referral economics.

Replace the flat referral bonus with a revenue-share model: the referring unit receives 15–20% of first-year revenue from a referred engagement, credited to the referring partner’s P&L. This converts the referral decision from an economic loss to an economic gain, without requiring anyone to change their values, mindset, or culture. A worked comparison of the two models, using illustrative figures, follows these recommendations.

Owner: CEO + CFO · Sequence: After 1 · Effort: Medium

3. Assign a single cross-unit revenue owner with a number.

One person owns the cross-unit revenue target as their primary metric. They have authority to facilitate referrals, resolve credit disputes, and escalate structural blockers. This eliminates the accountability gap between the CEO’s mandate and the partners’ daily decisions.

Owner: CEO · Sequence: Parallel with 2 · Effort: Medium
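
To show how the restructure flips that arithmetic, here is a short comparison of the flat bonus against a revenue-share credit, continuing the earlier sketch. The 15% rate is an assumed point in the proposed 15–20% range, and the engagement value is the same illustrative figure as before.

```python
# Flat bonus vs. revenue-share credit for the same referral (illustrative).
engagement_value = 200_000    # first-year revenue of the referred engagement
flat_referral_bonus = 5_000   # current model: one-time bonus, no P&L credit
revenue_share_rate = 0.15     # proposed model: assumed point in the 15-20% range

# Proposed model: the share lands on the referring partner's P&L, the same
# metric that made not referring the rational choice under the old structure.
revenue_share_credit = revenue_share_rate * engagement_value

print(f"flat bonus:           {flat_referral_bonus:>8,.0f}")   # 5,000
print(f"revenue-share credit: {revenue_share_credit:>8,.0f}")  # 30,000
```

The diagnostic does not prescribe the exact split; the point is that once the credit hits the referring partner’s own P&L, the referral decision stops being structurally irrational.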

What These Cases Have in Common

None of these organizations had a “decision” problem in the way they would describe it. They had business problems: wasted marketing spend, missed ship dates, failed collaboration. What they share is a pattern:

The question kept cycling because the structure used to process it didn’t match the structure the question required.

The trade show question needed probability framing but got certainty framing. The delivery process question needed design authorship but got proposal evaluation. The collaboration question needed constraint redesign but got culture initiatives.

In each case, the people were competent, the data was available, and the effort was real. What was missing was a structural diagnosis — a way to see that the method being used to reach the answer was itself the reason the answer never arrived.

You bring the business problem. The diagnostic reveals why it’s stuck.