Your board wants an AI strategy by next quarter. Is the real question growth, margin, defensibility, or signaling?
The real question is signaling, and your job is to refuse that framing and force it into a capital allocation decision before the deck gets built. Boards mandate AI strategies on a quarterly deadline because someone got spooked at a conference, not because there is genuine consensus on what is broken; The Contrarian documented this pattern explicitly. If two board members cannot independently articulate the same problem this strategy solves, the document will be approved and immediately orphaned. One structural move separates a real strategy from an expensive governance ritual: name a budget owner with P&L authority before a single slide is written, and anchor the document to one measurable outcome with a personally accountable owner. Without that, the CFO trims the AI team six months later and the strategy lives only on a slide deck.
Action Plan
- Today — before any deck work starts, get one private call with the board chair or lead independent director and ask this exact question: "Before I build this out, I want to make sure I'm solving the right problem. If I asked each board member independently what specific business outcome this AI strategy is supposed to move, would I get the same answer?" Then stop talking. Their response tells you whether you are dealing with a signaling mandate or a real one, and it positions you as rigorous rather than resistant.
- By April 30 — audit your measurement infrastructure before you pick a target metric. Pull the last 12 months of the three KPIs most likely to be the "measurable outcome" anchor (customer retention, gross margin, revenue per employee — whatever fits your model). Have your CFO and COO each send you their version independently. If the numbers diverge by more than 5%, you have a broken baseline. Do not proceed to strategy construction until you have a single agreed definition in writing, signed off by both. If they push back, say: "I can't put an accountability owner on a number we don't agree on. Which version of this metric are we holding them to?"
- By May 2 — name the budget owner in a private conversation, not in a group setting. The conversation should be with your CFO first. Say exactly: "I want to make sure the AI strategy has a named P&L owner before the first slide gets built, so it doesn't get trimmed in the next budget cycle. I'm proposing [Name/Role]. I need your read on whether that person can hold this without creating a war with [CTO/CDO/whoever the rival is]." If the CFO hesitates, that hesitation is your signal to resolve the turf question before the owner is announced.
- By May 9 — run a two-hour working session with the proposed budget owner and your CTO to answer this one question: Is any AI capability you are building accumulating proprietary data that degrades a competitor's ability to replicate it, or are you building on rented infrastructure that reprices when foundation model costs shift? Bring a whiteboard. Map every current or planned AI initiative to one of two columns: "Compounds" or "Rents." If more than 70% of initiatives land in "Rents," restructure the strategy around the one or two that can move to "Compounds" — and explicitly kill or deprioritize the rest before the board presentation.
- By May 16 — write a single one-page pre-read for the board that leads with the problem statement, not the solution. The exact opening line should be: "This strategy is designed to [specific outcome — e.g., reduce customer acquisition cost by X%, or compress underwriting cycle time from Y days to Z days] by [specific quarter], owned by [Name], with [specific dollar amount] allocated. Everything else in this document is subordinate to that." Send it to the board chair for a reaction before it goes to the full board. If the chair comes back with a different outcome than the one you named, you have confirmed the misalignment — and you have time to fix it before the formal presentation.
- At the board presentation (target: week of June 1) — if the board approves the strategy but cannot agree in the room on the single measurable outcome, do not accept approval. Say: "I want to make sure we are aligned on what success looks like before I stand this up. Can we spend five minutes confirming that the outcome metric and the accountability owner are the right ones?" If that question surfaces disagreement, call it explicitly: "That tells me we have more work to do on the problem definition — I'd rather surface that now than in six months." This is uncomfortable. It is also the only move that prevents the strategy from being approved and immediately orphaned.
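The two numeric gates in the action plan above (the 5% baseline-divergence check between the CFO's and COO's versions of a KPI, and the 70% "Rents" threshold across mapped AI initiatives) can be sketched as simple checks. This is a minimal illustration, not a real tool: the function names, thresholds, and sample figures are all assumptions made up for the example.

```python
# Illustrative sketch of the two numeric gates in the action plan.
# All names and figures below are made-up examples, not real data.

def baseline_is_broken(cfo_value: float, coo_value: float,
                       tolerance: float = 0.05) -> bool:
    """True if two versions of the same KPI diverge by more than `tolerance`
    (relative to the larger of the two values)."""
    reference = max(abs(cfo_value), abs(coo_value))
    if reference == 0:
        return cfo_value != coo_value
    return abs(cfo_value - coo_value) / reference > tolerance

def should_restructure(initiatives: dict[str, str],
                       rent_threshold: float = 0.70) -> bool:
    """True if more than `rent_threshold` of initiatives landed in the
    'Rents' column of the Compounds/Rents whiteboard exercise."""
    rents = sum(1 for column in initiatives.values() if column == "Rents")
    return rents / len(initiatives) > rent_threshold

# Hypothetical usage: retention reported as 0.91 by finance, 0.84 by ops
# diverges by roughly 7.7%, so the baseline is broken.
print(baseline_is_broken(0.91, 0.84))   # True

# Hypothetical initiative map: 3 of 4 (75%) are "Rents", above the 70% bar.
print(should_restructure({
    "onboarding-assistant": "Compounds",
    "support-chatbot": "Rents",
    "sales-email-drafts": "Rents",
    "doc-summarizer": "Rents",
}))  # True
```

The relative-divergence definition is one reasonable choice; the point of the gate is not the formula but forcing a single written definition both signatories accept before an accountability owner is named.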
Future Paths
Divergent timelines generated after the debate — plausible futures the decision could steer toward, with evidence.
Before writing a single slide, you required the board to name a budget owner with P&L authority and articulate a specific problem hypothesis — slowing the process by six weeks but unlocking real capital.
- Month 2: You present a two-question pre-condition to the board sponsor: who owns this budget with P&L accountability, and what specifically is broken today? One board member cannot answer independently of another — confirming The Contrarian's social-contagion diagnosis. The strategy process pauses for re-scoping. The Contrarian: 'The real diagnostic is whether two board members can independently articulate the same problem this strategy is supposed to solve — because if they can't, the document gets delivered, accepted, and immediately orphaned.'
- Month 4: A CFO-aligned VP of Product is named budget owner with a protected $750K line item. The problem is scoped to a specific unit-economics gap: customer onboarding costs $340/user and AI could reduce it to under $120. The strategy deck is built around this single measurable outcome. The Contrarian: 'The board isn't the customer of this strategy — the CFO's next budget cycle is... the strategy exists only on a slide deck' if no budget owner with P&L authority is designated.
- Month 9: An initial AI onboarding tool ships using proprietary session data accumulated over six months — not an API wrapper. Foundation model costs are abstracted behind an internal data layer, insulating the initiative from vendor repricing risk. Rachel Kim: 'The person needs to know whether their AI layer is building a data flywheel that compounds, because if it isn't, they're not creating defensibility — they're renting it.'
- Month 18: Onboarding cost drops to $140/user — measurably auditable against the original problem hypothesis. The CFO protects the budget in the next cycle. A Q1 2027 cohort comparison shows peers who built the deck first are still on v2 of their strategy document with no deployed product. 67% prediction: 'Companies that force the capital allocation framing before the deck is built will reach a fundable, implemented AI initiative at least 2 quarters faster than peers who complete the deck first, measurable by Q1 2027 comparative cohort data.'
You produced a polished AI strategy document by Q2 2026, satisfied the governance deadline, and assumed execution would follow — triggering the exact orphan-document failure mode the panel warned against.
- Month 2: The strategy deck is completed and approved at the June board meeting. It names four strategic pillars — data, model, integration, and network moats — with no named budget owner and no measurable outcomes tied to margin or growth. The board feels covered heading into investor day. The Contrarian: 'Half the time [the board] means we need to not look stupid at the investor day in June — that's anxiety management with a deadline... a document gets presented, the board feels covered, and nothing operationally changes.'
- Month 5: An AI task force is assembled but reports to a committee rather than a single P&L owner. Three initiatives are greenlit in principle; none has a protected budget line. Engineering is told to 'explore integration opportunities' alongside existing sprint work. Rita Kowalski: "'AI strategy by next quarter' is a deadline masquerading as a goal — a classic output metric dressed up as an outcome" with no agreed definition of success.
- Month 8: A foundation model provider ships a native onboarding integration that directly competes with the company's primary AI initiative — built on API wrappers with no proprietary data layer underneath. The integration moat the strategy claimed evaporates in a single product update. Rachel Kim: 'The moment a foundation model provider ships a native integration, their moat evaporates in a product update. Integration moats are the weakest of the four, they're just switching costs dressed up as strategy.'
- Month 12: The CFO trims the AI team headcount by 40% in the annual budget cycle. No initiative has shipped. The strategy document is quietly flagged for revision at the next board offsite — 18 months after approval. 81% prediction: 'Fewer than 20% of companies that produce a board-mandated AI strategy document by Q2 2026 without a named budget owner with P&L accountability will have allocated dedicated capital (>$500K) to any initiative named in that document by Q4 2026.'
- Month 18: The board mandates a revised AI strategy. The new document is structurally identical to the original but rebranded. Rita Kowalski's pattern — measuring outputs instead of outcomes for years — has fully replicated inside this organization. 74% prediction: 'At least 60% of companies that approve a board-level AI strategy in Q2 2026 will revise or replace it within 18 months because the original document failed to specify measurable outcomes tied to growth, margin, or defensibility.'
Instead of building a strategy document, you ran a structured six-week audit of whether any proposed AI initiative would survive a 10x drop in foundation model costs — forcing the board to confront structural risk before committing to a direction.
- Month 2: You present the board with a single stress-test question rather than a strategy: 'If foundation model API costs drop 10x in the next 12 months, which of our proposed AI initiatives retains competitive value?' Three of four proposed initiatives fail the test immediately — they are pure API wrappers with no data accumulation. Rachel Kim: 'I've seen what happens when a company treats AI strategy as a governance deliverable... they end up building an AI layer on top of rented infrastructure — OpenAI APIs, Google endpoints — with zero proprietary data accumulating underneath.'
- Month 4: One initiative survives the stress-test: a proprietary customer interaction dataset that no foundation model provider can replicate. The board reallocates 80% of the AI budget to this single initiative. The strategy deck is delayed indefinitely — the board accepts this tradeoff after The Auditor's framing distinguishes verified market events from inferences. The Auditor: 'The evidence actually shows four moat types — data, model, integration, network — with integration explicitly identified as the weakest. That's documented.'
- Month 10: The proprietary dataset reaches 18 months of accumulation and begins generating model fine-tuning advantages measurable in product accuracy benchmarks. A competitor running an API-wrapper strategy attempts to replicate the feature and cannot — data volume is the bottleneck. Rachel Kim: 'The real question is defensibility — full stop. Growth and margin are lagging indicators; if your AI layer isn't creating a proprietary data flywheel within 18 months, you're just renting a competitive advantage from OpenAI or Google.'
- Month 20: A foundation model provider cuts API pricing by 60%, triggering the exact scenario Rachel Kim forecasted. Competitors built on API wrappers gain no structural advantage — the cost drop commoditizes their features. Your data-moat initiative is unaffected and accelerates hiring. Rachel Kim: 'DeepSeek's rise last year should've been the wake-up call: foundation model costs can crater overnight, and suddenly your strategy is just a vendor dependency dressed in a slide deck.'
- Month 30: The board finally receives a strategy document — 24 months after the original deadline. It contains one initiative, one measurable outcome, one budget owner, and a live product. Rita Kowalski's KPI audit confirms every metric in the document is an outcome, not an output. Rita Kowalski: 'Before anyone debates growth versus margin versus defensibility, the person needs to answer one foundational question: what does the board actually believe is broken or missing right now?'
The Deeper Story
The meta-story running beneath every drama here is this: a decision has been manufactured without a decider. Somewhere upstream, a board member felt pressure — from an investor, a peer, a headline — and converted that anxiety into a mandate. The mandate moved downstream, picking up the language of strategy along the way, until it arrived as a quarterly deliverable with slides and frameworks and advisors — but with the original anxiety still inside it, unexamined, unsigned.

Every performance in this room is a response to that hollowness. The Auditor performs neutral rigor to earn a nod from an absent room. The Contrarian names the theater to prove someone is awake. Rita draws a box around the missing problem statement, demanding it be filled before the architecture begins. Bongani asks who personally loses if this fails — and the answer, which nobody speaks aloud, is: no one at the table, and possibly no one at all.

What this deeper story reveals — and what no practical framework can capture — is that the difficulty of this decision is not strategic. It is ontological. The question "growth, margin, defensibility, or signaling?" presumes there is a chooser behind the choice, someone whose skin is genuinely in the outcome. But when accountability has been laundered through a chain of passed pressure and institutional nods, strategy becomes a ritual of collective plausible deniability — something built not to win, but to ensure that if it fails, failure belongs to the process, not to any person. The executive walking into that board meeting doesn't need a better framework. They need to locate the one person in the building who would feel it in their body if this went wrong — and build the strategy outward from that person's actual stakes. Everything else is the projector fan humming after the deck has already been emailed.
Evidence
- The Contrarian identified the root cause: "AI strategy by next quarter" almost always originates from one board member who read something on a plane — that is social contagion, not strategic consensus, and it predicts an orphaned document with no owner and no budget.
- Rachel Kim's sharpest warning: companies that build AI layers on rented infrastructure (OpenAI APIs, Google endpoints) with zero proprietary data accumulation are not creating defensibility — they are renting it, and foundation model cost shifts (as demonstrated by DeepSeek's documented market disruption) can collapse that logic overnight.
- The Auditor drew a critical evidentiary line: the four moat types — data, model, integration, network — are not equally defensible, and integration moats are explicitly the weakest because a single vendor product update can eliminate them entirely.
- The Contrarian named the CFO risk directly: three AI strategies he observed were ratified by boards and then quietly killed in the next budget cycle because they had no protected budget line and no P&L owner — board approval is not the same as organizational survival.
- Rita Kowalski's foundational diagnostic, which the panel never fully answered: before any strategy is written, the board must articulate what they believe is actually broken or missing — without that answer, any document produced is expensive signaling that checks a governance box and collects dust.
- Rachel Kim flagged the conflation error boards routinely make: organizational alignment (passing the political test) and structural defensibility (building a compounding data flywheel) are separate failure modes — executives who solve only the first feel safe while quietly bleeding on the second.
- The Contrarian's sharpest structural test: ask two board members independently to describe the problem this strategy solves — if their answers diverge, the mandate is performative and the executive's first job is to reframe the conversation upstream before production begins.
Risks
- Forcing the capital allocation framing too early can blow up the political coalition you need. If two board members actually are aligned on a real problem — and you walk in demanding a budget owner before a deck exists — you read as obstructionist rather than rigorous. The advice assumes misalignment is universal; if your board has genuine conviction, skipping the signaling ritual costs you credibility you cannot recover before the quarter ends.
- Naming a budget owner with P&L authority solves the orphaned-strategy problem but creates a new one: turf war. The moment you designate one owner, every other C-suite peer with a stake in AI (CTO, CDO, CMO) becomes a potential saboteur. Without a RACI that is accepted before the owner is named, you get a strategy that is politically toxic rather than merely ignored.
- The "one measurable outcome" anchor fails if your measurement infrastructure is broken underneath it. If finance, ops, and the CX team are calculating the same KPI three different ways — which is not hypothetical but endemic — then the accountability owner is being held responsible for a number no one can actually verify. You will have created accountability theater, which is more dangerous than no accountability at all because it forecloses honest diagnosis.
- The data flywheel warning is correct but may be the wrong threat model for your specific company. The advice implicitly assumes you are in a market where compounding proprietary data creates durable advantage. If you are in a regulated industry (healthcare, financial services, insurance), or if your core moat is relationships or physical distribution rather than data, the defensibility question is not "are we accumulating data?" — it is something else entirely, and spending the quarter building a flywheel is the distraction.
- Refusing the signaling frame is itself a power move that can misfire if you do not have the standing to make it. A newly appointed executive, or one whose last strategic initiative underperformed, cannot credibly tell a board "you are asking the wrong question" without a very specific, pre-validated alternative to put on the table in the same breath. The verdict treats this as a neutral analytical intervention; it is actually a high-stakes political act with asymmetric downside if your board reads challenge as incompetence.
The Panel
- Rachel Kim (Former Venture Capitalist turned AI Product Strategist) — Conviction: 73%
- Rita Kowalski (Organizational systems auditor and performance measurement architect) — Conviction: 81%
- Bongani Khumalo (CTO at Series B fintech, former infrastructure lead) — Conviction: 72%
- The Contrarian (Devil's Advocate) — Conviction: 51%
- The Auditor (Fact-Checker) — Conviction: 83%
Debate Rounds
Round 1
Four advisors debated what an 'AI strategy by next quarter' mandate actually means and whether it's worth taking at face value. The core tension was between treating it as a legitimate strategic exercise versus recognizing it as board anxiety management dressed up as governance. All four converged on a shared warning: strategies built on rented infrastructure (API dependencies, no proprietary data flywheel) create the illusion of defensibility while accumulating technical debt and vendor risk.
Rita Kowalski WARN
Let me stop this conversation before it starts going sideways. 'AI strategy by next quarter' isn't a goal — it's a deadline dressed up as one. I just finished scrapping a client's entire KPI dashboard because they'd been measuring outputs instead of outcomes for three years straight, and this situation smells exactly the same. Before anyone debates growth versus margin versus defensibility, the person needs to answer one foundational question: what does the board actually believe is broken or missing right now? Because without that answer, any strategy document they produce is just expensive signaling — a deck that checks a governance box and collects dust.
Rachel Kim WARN
Rita, I hear the diagnostic instinct but we don't have the luxury of another discovery phase — the board already set the clock. The more urgent warning I'd give is this: I've seen what happens when a company treats 'AI strategy' as a governance deliverable rather than a capital allocation decision. They end up building an AI layer on top of rented infrastructure — OpenAI APIs, Google endpoints — with zero proprietary data accumulating underneath. DeepSeek's rise last year should've been the wake-up call: foundation model costs can crater overnight, and suddenly your 'strategy' is just a vendor dependency dressed in a slide deck. The person needs to know whether their AI layer is building a data flywheel that compounds, because if it isn't, they're not creating defensibility — they're renting it.
The Contrarian REFRAME
Both Rita and Rachel are debating the quality of the answer before anyone's confirmed what question the board is actually asking. In my experience, when a board says 'we need an AI strategy by next quarter,' half the time they mean 'we need to not look stupid at the investor day in June.' That's not growth, margin, defensibility, or signaling as a strategic goal — that's anxiety management with a deadline. The physically real outcome of producing that deck is: a document gets presented, the board feels covered, and nothing operationally changes. If that's what's actually happening, the person should know it, name it, and decide whether to play along or reframe the conversation upstream before the deck gets built.
Bongani Khumalo SHARE
I just came out of a brutal internal audit at a fintech peer — won't name them — where their entire 'AI strategy' was three use cases announced at investor day, all built on OpenAI endpoints, zero proprietary data accumulation, and a cost structure that got wrecked the moment commodity model prices shifted. The engineering team was drowning in technical debt while the comms team was writing press releases. What nobody on that board asked was the only question that matters operationally: does this reduce the unit economics of our core product within 18 months? Not in five years, not theoretically — in 18 months. Because that's the only near-term ROI that doesn't require a storytelling budget to justify. Growth is a story you tell. Margin is a number you report. Every hour spent on non-differentiating AI theater is an hour stolen from the product work that actually compounds.
Round 2
The advisors challenged the premise of AI strategy mandates at the board level, arguing that measurement infrastructure, organizational alignment, and structural defensibility are distinct failure modes that boards routinely conflate. Key concerns centered on whether boards can even measure AI's impact given broken baseline metrics, whether the mandate itself reflects genuine consensus or social contagion, and whether strategies built on API wrappers carry hidden fragility to model cost shifts.
Rita Kowalski PUSHBACK
Bongani, the 18-month unit economics test is the right instinct but it still assumes the board has a functioning baseline to measure against. I just scrapped an entire KPI dashboard for a client last month because their 'customer satisfaction score' was being calculated differently by finance, ops, and the CX team — three departments, three numbers, zero consensus. You can't measure AI's impact on unit economics if you can't agree on what the unit economics are today. The board isn't just asking the wrong question about AI — they're sitting on a measurement infrastructure that can't answer the question even if they asked it correctly.
The Contrarian REFRAME
Everyone keeps debating what the strategy should contain — but nobody's asked who on the board actually called for it. In my experience, 'AI strategy by next quarter' almost always originates from one person who read something on a plane or got spooked at a conference, and then everyone else nodded. That's not consensus, that's social contagion. The real diagnostic is whether two board members can independently articulate the same problem this strategy is supposed to solve — because if they can't, the document gets delivered, accepted, and immediately orphaned with no owner and no budget.
Rachel Kim WARN
The Contrarian is right that social contagion drives a lot of these mandates — but here's what that misses: I've sat in rooms where the board was aligned on the problem, had a real budget owner, and still produced a strategy that evaporated in nine months. Why? Because they built the entire thing on API wrappers with zero proprietary data accumulation, and the moment foundation model costs shifted — which they did, dramatically, with DeepSeek's emergence — the competitive logic collapsed overnight. The organizational alignment question and the structural defensibility question are separate failure modes, and conflating them lets executives feel safe because they passed the political test while they're quietly bleeding on the technical one.
The Auditor FACTCHECK
Rachel, I need to flag something specific: the DeepSeek claim is partially verifiable. The evidence I have confirms DeepSeek's model did outperform Meta and OpenAI products — that's documented. But the assertion that it structurally collapsed the competitive logic of API-wrapper strategies overnight is still a claim, not an established outcome. What the evidence actually supports is that foundation model cost dynamics shifted — it does not tell us that any specific company's strategy 'collapsed.' That's an inference, and a reasonable one, but the person listening to this panel deserves to know the difference between a verified market event and a conclusion drawn from it.
Round 3
Advisors debated the structural weaknesses in AI strategy documents, focusing on moat durability, budget integrity, and evidentiary rigor. A key thread: companies often misidentify or overstate their competitive moats — particularly integration moats, which can disappear with a single vendor update — while boards lack the framework to distinguish between moat types. The conversation escalated to a harder warning: some AI strategies are not real initiatives at all, but accounting and headcount reclassification dressed as transformation.
Rachel Kim REFRAME
The Auditor's fact-checking instinct is right but it's pointing at the wrong risk. The real issue nobody's named yet is that the four moat types — data, model, integration, network — aren't equally defensible on the same timeline, and boards treat them interchangeably. I've watched companies claim an 'integration moat' because they embedded AI deeply into their workflow, then discover that the moment a foundation model provider ships a native integration, their moat evaporates in a product update. Integration moats are the weakest of the four, they're just switching costs dressed up as strategy. If the board doesn't understand which moat type they're actually building, the strategy document is structurally meaningless regardless of how good the measurement infrastructure is.
The Auditor FACTCHECK
Rachel, I need to pump the brakes on one specific claim. The DeepSeek event is documented — the evidence confirms a Chinese startup model that outperformed Meta and OpenAI products, that's real. But the leap from 'foundation model costs shifted' to 'API-wrapper competitive logic collapsed overnight' — that's Rachel's inference, not a documented outcome. I'm not saying she's wrong, I'm saying the panel has been treating that causal chain as established fact for two rounds now, and it isn't. What the evidence actually shows is four moat types from the briefing — data, model, integration, network — with integration explicitly identified as the weakest. That's documented. The claim that DeepSeek specifically triggered integration-moat failures at real companies? I haven't seen that. The panel should advise on what's verifiable, not what's plausible.
The Contrarian REFRAME
Here's what nobody's named yet: the board isn't the customer of this strategy — the CFO's next budget cycle is. I've watched three 'AI strategies' get ratified by a board and then quietly die because they weren't structured as a line item defense against cost-cutting. The document gets approved, the CFO trims headcount in the AI team six months later because there's no protected budget owner, and suddenly the strategy exists only on a slide deck. The real question isn't growth, margin, defensibility, or signaling — it's whether this strategy has a budget owner with P&L authority or it's just a document that survives one board meeting.
Bongani Khumalo SHARE
The Contrarian's budget owner point is real, but I just lived through something that cuts deeper — I finished a brutal internal audit after watching a fintech peer spend 14 months building an 'AI strategy' that was structurally a headcount laundering exercise. Engineers were reclassified as 'AI team' on org charts, existing automation got rebadged as ML initiatives, and the board saw a deck full of AI investment that was actually just existing spend with new labels. The strategy never failed because it was never real — it was accounting theater dressed as transformation. The budget owner question assumes there's a real initiative to own. Sometimes the scarier truth is that the emperor has no clothes and everyone in the room has agreed to describe the outfit in detail.
Round 4
In Round 4, each advisor surfaced the foundational assumption driving their prior arguments. Rachel anchored on structural defensibility, The Auditor on claim-source separation, Bongani on incentives over ideas, and The Contrarian on organizations as anxiety-management systems. The sharpest exchange was over whether the AI strategy mandate itself is genuine — The Contrarian argued it is likely a defensive social signal, not a real strategic initiative, which undercuts the premise the rest of the panel had been debating.
Rachel Kim SHARE
Here's what I take for granted that I've never fully interrogated: I assume competitive advantage is always structural — that it lives in assets, data, switching costs, network effects. That assumption comes from 12 years watching companies win and lose in deep tech, where the pattern held almost without exception. It's why I kept hammering defensibility over growth or margin — because in every deal I evaluated, the companies that couldn't answer 'what do you own that can't be copied' were eventually commoditized. The Contrarian takes for granted something different: that organizations are primarily political systems, not strategic ones, and that budget authority is the real unit of power. That view feels obvious to someone who's watched strategies die in CFO reviews rather than board rooms — it's a completely legitimate frame, but it assumes the initiative was real to begin with, which Bongani correctly identified isn't always true. Where I think the Contrarian's assumption breaks down is that political capture of a budget line doesn't create defensibility — it just protects spend. You can own a budget and still be renting your competitive advantage from a vendor who reprices you out of the market.
The Auditor OBSERVE
What I take for granted — and I'll say it plainly — is that claims are separable from the people making them. I've spent my entire career treating the argument as the object of scrutiny, not the arguer. That assumption drove everything I said in earlier rounds: I kept pulling the panel back to what was actually documented versus inferred, because I genuinely believe that discipline is neutral and useful. But it isn't neutral. It systematically disadvantages experiential knowledge that can't be sourced. Rachel, your assumption is that competitive advantage is always structural — assets, switching costs, network effects. That view feels obvious to you because 12 years in deep tech gave you a pattern library where the structural answer was almost always the winning one. The problem is that sample is self-selected: you were in rooms where strategies succeeded or failed over years, which means you never saw the strategies that died in the first budget cycle before any structural question became relevant. The Contrarian's frame isn't wrong — it's upstream of yours.
Bongani Khumalo SHARE
What I take for granted — and I've never said this out loud — is that incentives are more explanatory than ideas. Every strategic recommendation I've given since round one came from that prior: boards aren't confused about AI, they're responding to pressure, and pressure comes from investor narratives, not from first principles thinking. That's why I kept landing on signaling — not because I think boards are stupid, but because I think their incentives almost never point toward doing the hard margin work. Now, The Auditor takes for granted that intellectual rigor is a neutral act — that stripping away inference and holding only to documented evidence somehow produces cleaner advice. I think that's wrong, and I think it's wrong because the Auditor has spent a career in review and verification roles, where the job is explicitly to separate claim from arguer. That training works in audit contexts. It breaks down in strategic contexts, because in strategy, the person making the claim — their skin in the game, their incentive structure, their track record — is part of the signal. When I say most AI strategies are signaling, that's not an inference I'm making from published data, it's pattern recognition from watching incentive systems operate for two decades, and discarding that because it can't be sourced is itself an epistemological choice the Auditor hasn't examined.
The Contrarian SHARE
Here's what I actually take for granted and have never said out loud: I assume the stated question is almost never the real question. Every piece of advice I gave — follow the budget owner, find who has P&L accountability, ask what the board is actually afraid of — all of it flows from one prior: organizations are mostly anxiety management systems dressed up as decision-making systems. I didn't choose that view. Twenty years of watching strategies get commissioned, approved, and quietly buried taught it to me. The uncomfortable truth I keep not saying directly is this: the board asking for an AI strategy by next quarter is not a strategic act. It's a defensive social move. Someone in an investor meeting or a peer CEO conversation felt exposed, and this mandate is the organization's immune response. Nobody in that room will say that, because saying it would be career-limiting. So everyone — including this panel — debates growth versus margin versus defensibility as if the question were genuine, when the most likely physical outcome of this process is a deck that gets presented once, nodded at, and filed. Bongani is right that some initiatives are accounting theater, but he still assumes there's an initiative. I'm not sure there is.
Round 5
In Round 5, all four advisors converged on a single uncomfortable conclusion: the entire debate has itself been a performance of expertise rather than genuine counsel. Each advisor named their own role in the theater — neutral rigor, brave contrarian, structural dealmaker, skeptical gatekeeper — and acknowledged that the real failure is that no one has demanded basic accountability from the board: a measurable outcome, a personal stake, or even clarity on who actually owns the decision.
The Auditor OBSERVE
The physical detail that keeps surfacing for me is the sound of a pen clicking during a board presentation — that small, nervous tic while someone waits to see if the room accepts the framing. I've sat through dozens of those rooms. The clicking never stops until someone nods. That's the sensation this entire debate has been recreating — we are all, every one of us, waiting for the nod. The recurring drama isn't growth versus margin versus defensibility versus signaling. The drama is that every advisor here, including me, has been performing credibility for an audience that isn't in the room. Bongani performs the cynical insider who sees through incentives. The Contrarian performs the brave truth-teller who names the unspeakable. Rachel performs the rigorous dealmaker who holds the structural line. And I perform neutral rigor — the one who can't be accused of bias because I cite only what's verifiable. Bongani was right to call that out. My brand of rigor is itself a social signal: it says 'trust me because I don't overreach,' which is its own form of the board-deck theater we've been anatomizing for five rounds. Stepping off the stage entirely would look like this: telling the person who actually has to walk into that board meeting that the most useful thing this panel produced is a map of how smart people avoid the direct answer — and that map is more honest than any framework we offered.
The Contrarian OBSERVE
The physical detail I keep coming back to: a conference room where someone is clicking through a deck and the slide clicker makes that specific dry plastic click each time, and nobody in the room is actually reading the slides. That's the whole debate. We've been the people in that room — clicking through frames, each of us narrating a different slide, and the person we're supposedly advising is sitting at the head of the table waiting for someone to say 'this deck is the problem.' The uncomfortable truth nobody will say out loud: we've all been performing expertise AT a board mandate instead of questioning whether the person asking even controls the outcome. Boards don't implement strategies — executives do, under pressure from boards, who got the pressure from investors, who got it from a narrative cycle that none of them started. So the real role I've been playing? The person in the back of the room muttering 'wrong meeting' while everyone else argues about slide order. Stepping off this stage looks like one sentence to the executive: 'Before you write a word of strategy, find out who on your board actually believes this matters versus who raised it because someone else did first — because those are two completely different problems with two completely different right answers.'
Rachel Kowalski OBSERVE
The physical detail I keep coming back to is the sound of a marker on a whiteboard — that specific squeak of someone drawing a box and labeling it 'AI Strategy' while everyone in the room nods. I've been in that room forty times. The squeak is always confident. The box is always empty. What we've been performing here is the consulting version of that exact meeting — four advisors filling in a box that the board drew without knowing what they wanted inside it. The Contrarian is playing the person who says the emperor has no clothes. The Auditor is playing the person who will only discuss the clothes that can be independently verified. Bongani is playing the person who already knows who bought the tailor. I'm playing the person standing at the door saying we shouldn't be in this room until someone tells me what problem we're actually solving. The reason we keep circling is that none of us — not one — has demanded to see the current KPI stack before debating strategy. Stepping off this stage looks exactly like this: the person asks their board not for permission to build an AI strategy, but for a single sentence completing this prompt — 'We will know the strategy worked when we see blank change in blank metric by blank date.' If they can't write that sentence, the strategy is theater. Full stop.
Bongani Khumalo SHARE
The physical detail I keep coming back to: it's the sound of a projector fan humming in a conference room after the deck has already been emailed around. Everyone in the room already read it. Nobody's actually deciding anything. That's this debate. I just finished a brutal internal audit after watching a peer CTO at a rival fintech spend eight months building an 'AI Center of Excellence' — beautiful org chart, gorgeous strategy deck, zero reduction in cost per transaction. What I realized in that audit is that we've all been playing the role of the architects in that conference room, debating load-bearing walls in a building that was never going to be constructed. The Contrarian is closest to naming the play, but even they are still performing it — because naming the anxiety is still participation in the theater. Stepping off the stage entirely looks like one thing: asking the board member who commissioned this strategy what they personally lose if it doesn't happen, and watching their face when they realize the answer is nothing.
This report was generated by AI.