Manwe 25 Apr 2026

A $100M software company is debating whether AI belongs in the core product or only in internal workflows. Which bet compounds faster over three years?

Bet on product AI — it compounds faster, and delay is already costing you deals. The most concrete, verifiable signal in this debate is that enterprise procurement committees are scoring AI capabilities at 25% weight before any NRR, case study, or adoption metric ever becomes relevant — meaning internal-only AI investments are invisible at the moment they matter most. Separately, the data flywheel argument holds: a competitor who started accumulating user interaction data 18 months ago has a structural advantage that widens regardless of how efficient your internal ops become. Execute with discipline — model inference costs before you ship, build at the data layer not the UI layer, and treat an underperforming AI feature as an active liability on renewal calls — but move now.

Generated with Claude Sonnet · 78% overall confidence · 5 advisors · 5 rounds
By April 2028, a $100M software company that commits to internal-only AI through end of 2026 will face at least one competitive displacement event — a lost renewal or failed expansion deal explicitly citing AI product capabilities — where the competing vendor's product-embedded AI was the documented deciding factor. 81%
By Q2 2027, at least 60% of $100M+ B2B SaaS companies that embedded AI into their core product by mid-2026 will report measurable NRR improvement (≥5 percentage points) compared to peers who limited AI to internal workflows, as tracked in SaaS Capital or KeyBanc annual benchmarks. 72%
By Q1 2028, $100M software companies that priced core-tier AI features into annual contracts signed in Q2-Q3 2026 will see gross margin compression of 4-9 percentage points on those cohorts, with at least 30% of that group forced to restructure pricing or gate AI features behind premium tiers to restore margin targets. 65%
  1. This week — before any roadmap commitment — run a three-day inference cost model with your CFO and VP Engineering. Pull your top 10 customer accounts, estimate their daily active workflow volume, and calculate per-seat inference cost at $0.003, $0.008, and $0.015 per 1K tokens across three model tiers (a minimal cost-model sketch follows this list). Set a hard margin floor: "We will not ship a core-tier AI feature that compresses gross margin below [X]% at current inference pricing." If no number exists yet, the entire product AI discussion is premature. Get that number before May 2.
  2. By May 9 — audit which workflows in your product generate high-frequency, high-specificity user interaction data. You are looking for actions users take more than 3x per week that are unique to your domain (not generic text entry). These are your flywheel candidates. Rank them. The top two are your AI insertion points for the next 12 months. Say to your CPO and CTO in your next product review: "Show me the five workflows where our users generate data a general-purpose LLM has never seen. If we can't name them in 30 minutes, we need to rethink what our moat actually is."
  3. By May 16 — restructure the AI feature rollout as an opt-in premium tier, not a core-tier embed, for the first two shipped features. This is not permanent; it's a quality gate. Say to your product team: "We are not putting any AI feature in the default experience until we have a 30-day cohort showing that users who touch it renew at equal or higher rates than users who don't. The moment we have that data, we move it to core. Until then, we protect the renewal call." This protects you from the product-liability scenario while still starting the data flywheel.
  4. By June 1 — identify the two largest deals you lost in the last 6 months where AI capability appeared on the scorecard. Get your sales team on a 30-minute call with each lost champion (not the IT director — the champion who advocated for you internally). Ask exactly: "When AI came up in your evaluation, what specifically were you hoping to see demonstrated that we didn't show you?" Do not pitch. Just listen. What you hear will tell you whether you're solving a procurement optics problem (fixable with a roadmap slide) or a real product gap (requires 9-12 months minimum).
  5. By June 15 — commission a one-page "AI feature liability audit" of every AI capability currently in your product or roadmap. For each feature, require three columns: (a) what happens to the customer experience if outputs are wrong 15% of the time, (b) whether the contract language allows you to modify or remove the feature mid-term, and (c) whether customer support has logged any negative AI-related feedback in the last 90 days. Any feature that scores poorly on all three gets pulled from the core tier or flagged for immediate quality remediation before the next renewal cycle touches it.
  6. By July 1 — make a binary, documented commitment: product AI is on the roadmap with a named owner, a margin floor, a data instrumentation budget, and a Q4 2026 ship target for the first two features — or it isn't. The risk of this decision is not making the wrong bet; it is making an ambiguous one. Internal AI investments and product AI investments require different hiring profiles, different vendor contracts, and different board narratives. Running both at half-funding produces neither flywheel. Present the board with the choice framed exactly as: "We are allocating $X and N engineers to product AI, with these margin guardrails and this data strategy. The alternative is internal AI only, which improves ops efficiency but does not appear on procurement scorecards or create switching costs. We are choosing the former."
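To make step 1 concrete, here is a minimal sketch of the per-seat inference cost model, assuming a hypothetical workload. The three token prices come from step 1; every other figure (AI actions per seat, tokens per action, seat price, non-AI cost of goods, the 70% margin floor) is an illustrative placeholder to replace with your own account data.

```python
# Minimal per-seat inference cost sketch for action item 1.
# All workload and pricing figures below are illustrative assumptions, not benchmarks:
# swap in real account data before using the output to set a margin floor.

PRICE_TIERS_PER_1K_TOKENS = [0.003, 0.008, 0.015]  # three model tiers from the action item

# Hypothetical workload and pricing assumptions for one active seat
AI_ACTIONS_PER_SEAT_PER_DAY = 25     # AI-touching workflow actions per active seat (assumed)
TOKENS_PER_ACTION = 2_000            # prompt + completion tokens per action (assumed)
WORKING_DAYS_PER_MONTH = 21
SEAT_PRICE_PER_MONTH = 150.0         # assumed ~$1,800/seat/year list price
NON_AI_COGS_PER_SEAT = 30.0          # hosting, support, other COGS per seat (assumed)
MARGIN_FLOOR = 0.70                  # the hard gross-margin floor agreed with the CFO (assumed)

def per_seat_inference_cost(price_per_1k: float) -> float:
    """Monthly inference spend for one active seat at a given price per 1K tokens."""
    monthly_tokens = AI_ACTIONS_PER_SEAT_PER_DAY * TOKENS_PER_ACTION * WORKING_DAYS_PER_MONTH
    return monthly_tokens / 1_000 * price_per_1k

for price in PRICE_TIERS_PER_1K_TOKENS:
    inference = per_seat_inference_cost(price)
    gross_margin = (SEAT_PRICE_PER_MONTH - NON_AI_COGS_PER_SEAT - inference) / SEAT_PRICE_PER_MONTH
    verdict = "OK" if gross_margin >= MARGIN_FLOOR else "BELOW FLOOR: do not ship in core tier"
    print(f"${price:.3f}/1K tokens -> ${inference:,.2f}/seat/month inference, "
          f"{gross_margin:.1%} gross margin [{verdict}]")
```

Run it once per model tier and once per major account cohort; any tier that breaks the floor under your real numbers is a tier you cannot put in the default experience without a pricing change.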

Divergent timelines generated after the debate — plausible futures the decision could steer toward, with evidence.

🚀 You embedded AI directly into your core product by Q3 2026
24 months

Shipping product-embedded AI accelerates switching costs and positions you favorably in enterprise procurement, but exposes you to gross margin risk if inference costs aren't modeled carefully.

  1. Month 3: You ship a core-tier AI feature and immediately see it appear on RFP evaluation rubrics — two enterprise deals in Q3 2026 cite your AI demo as a differentiator in first-round scoring.
    Stanislaw Eriksson: 'The evaluation rubric had an explicit AI capabilities section worth 25% of the scoring weight. Those vendors never got the chance to prove attribution because they never got the contract.'
  2. Month 7: User interaction data begins accumulating at scale; your model fine-tuning cycles show measurable accuracy improvements over the generic baseline competitors are still using.
    Bongani Khumalo: 'Their model wasn't smarter, their data was just older and denser — eighteen months of user interaction data we didn't have, and that gap compounds the same way compound interest does.'
  3. Month 12: Gross margins compress 5-7 points on the 2026 contract cohort as per-seat inference costs scale faster than projected; your finance team flags the gap and you are forced to begin restructuring AI into a premium tier.
    Stanislaw Eriksson: 'A $90M HR tech firm embedded AI into their core product in 2024 and by month fourteen their gross margins had compressed eight points because nobody modeled the per-seat inference costs at scale.'
  4. Month 18: NRR improves 6 percentage points versus your internal-AI-only peer cohort, confirmed in SaaS Capital benchmarking, as customers cite daily AI touchpoints as a reason to expand seats.
    Bongani Khumalo: 'A company that bets on product AI is compounding switching costs — within 18 months their net revenue retention jumped while ours flatlined, because customers felt the difference every single day.'
  5. Month 24: Having restructured pricing to isolate inference costs in a premium SKU, you stabilize margins; your data flywheel is now 18 months denser than any competitor starting product AI in 2028.
    The Auditor: 'AI-native, embedded capabilities are showing up in vendor evaluations — that part tracks,' validating the long-run procurement moat thesis.
⚙️ You focused AI investment exclusively on internal workflows through end of 2026
24 months

Internal AI delivers auditable efficiency gains but leaves you invisible to enterprise procurement committees, creating a compounding displacement risk as product-AI competitors accumulate wins you are never invited to contest.

  1. Month 3: Internal workflow automation reduces operational headcount costs by ~12%, giving finance a clean ROI narrative — but your product demo decks still show no AI capabilities.
    Rita Kowalski: 'Internal workflow AI has a clear, auditable ROI loop you can actually track, whereas customer-facing AI features are routinely dressed up as innovation while the real performance data stays conveniently vague.'
  2. Month 6: A competitor who shipped product AI in early 2026 is now accumulating enterprise wins on rubrics where you score zero in the AI capabilities section; your sales team reports two lost first-rounds without ever reaching a demo.
    Stanislaw Eriksson: 'I watched them eliminate two vendors in the first round purely on demo experience — the evaluation rubric had an explicit AI capabilities section worth 25% of the scoring weight.'
  3. Month 12: A $2.1M renewal from a logistics enterprise is lost; the documented deciding factor in their vendor switch letter is the competing product's embedded AI workflow automation, which your product cannot match.
    Prediction at 81%: 'By April 2028, a $100M software company that commits to internal-only AI through end of 2026 will face at least one competitive displacement event — a lost renewal explicitly citing AI product capabilities.'
  4. Month 18: Your NRR flatlines relative to product-AI peers; the board begins questioning the roadmap as SaaS Capital benchmarking data shows a widening gap between AI-embedded and non-embedded cohorts.
    Bongani Khumalo: 'A $100M company that buries AI in internal workflows is optimizing costs while competitors weaponize it against their customers — the compounding happens at the revenue line, not the ops line.'
  5. Month 24: You begin an emergency product AI initiative, but competitors who started in 2026 now have 18-24 months of proprietary user interaction data — the flywheel gap is structural and cannot be closed quickly.
    Bongani Khumalo: 'This company isn't choosing between build product AI now versus later — they're choosing between starting the data flywheel in 2026 or trying to catch up in 2028 against someone who's been training on real customer behavior for two years.'
💣 You shipped product AI features without first solving the inference cost and quality baseline problem
18 months

Rushing AI into the product to win procurement scores backfires when mediocre output quality turns the feature into an active liability on renewal calls, eroding trust faster than it builds switching costs.

  1. Month 2: Under competitive pressure, you ship AI summarization and recommendations into the core tier without a quality gate — early user feedback is mixed, but the product team treats it as a launch signal.
    The Contrarian: 'I've watched companies rush AI into their product because the sales deck needed a bullet point, and the feature sat unused because it didn't solve the problem customers were actually paying for.'
  2. Month 5: Customer success begins flagging renewal calls where buyers cite the AI feature outputs as 'mediocre' or 'unreliable'; three accounts request feature removal mid-contract.
    Rita Kowalski: 'A mid-market project management tool baked AI summarization into their core tier, the outputs were mediocre, and customers started citing the AI feature as a reason to downgrade — not upgrade. The feature became a liability on renewal calls.'
  3. Month 9: You are forced to walk back AI features mid-contract cycle, triggering trust erosion; two enterprise accounts escalate to legal review citing feature removal as a material change to contracted deliverables.
    Stanislaw Eriksson: 'They had to walk back features mid-contract cycle, which is the worst possible moment to tell an enterprise customer we are removing the AI thing you bought.'
  4. Month 14: Gross margins compress 8 points from unmodeled inference costs on top of the trust damage; the board initiates a pricing restructure and the AI rollback is now a documented case study in your QBR.
    Stanislaw Eriksson: 'A $90M HR tech firm embedded AI into their core product and by month fourteen their gross margins had compressed eight points because nobody modeled the per-seat inference costs at scale.'
  5. Month 18: Relaunching AI as a gated premium add-on stabilizes margins, but adoption stays below 15% due to burned trust; the data flywheel never accumulated meaningful signal, leaving you behind both internal-AI peers and product-AI leaders.
    The Contrarian: 'The question isn't where to put AI — it's whether this company has the proprietary data that makes product AI defensible at all. Without that, they're just wrapping someone else's model in their own skin.'

The meta-story running beneath all four dramas is this: a company using the machinery of good decision-making to avoid the terror of becoming someone in particular. Every advisor identified a different room in the same building — the Auditor found the intellectual architecture of delay dressed as rigor, Bongani found the organizational performance of delay dressed as strategy, the Contrarian found the exquisite irony of advisors being the dysfunction they're diagnosing (processing with zero output, perfectly mimicking the internal-workflow trap), and Rita found the operational ritual of delay dressed as measurement. But strip the costumes and it's one play: four intelligent people handing a leadership team the most sophisticated possible reasons to stay in the moment just before commitment. The debate about product AI versus internal AI isn't the decision — it's what fills the room when a company isn't yet ready to decide who it is. What this deeper story reveals — the thing no framework or first-move prescription can reach — is that the difficulty here is not informational, it's existential. A real strategic bet requires the executive to stop being the person who could still have chosen differently. You cannot A/B test an identity. You cannot dashboard your way into knowing whether you're a company that shapes how customers experience intelligence, or one that quietly optimizes its back-office and calls it transformation. Every tool of deliberation — every measurement framework, every procurement committee, every round of debate — is secretly a tool for extending the last moment of optionality, the last breath before the self becomes fixed. The advisors gave you their best thinking. What they couldn't give you is the willingness to be wrong about something that defines you. That part was never in the room.

Round 1

The advisors debated whether AI investment should be directed at internal operations or embedded directly into the product. Bongani argued product-facing AI compounds switching costs and drives retention, while Rita challenged whether retention gains can actually be attributed to AI without rigorous adoption data. Stanislaw shifted the frame to sales risk — enterprise procurement scorecards now demand embedded AI — and The Auditor urged caution about treating anecdotal deal losses as a systemic pattern.

Bongani Khumalo WARN
I just finished a brutal internal audit at my firm after watching a competitor eat our lunch — and I'll tell you exactly what happened. They stopped treating AI as a back-office cost play and started shipping it directly into the product experience. Within 18 months their net revenue retention jumped while ours flatlined, because customers felt the difference every single day. A $100M company that bets on internal workflow AI is compounding efficiency; a company that bets on product AI is compounding switching costs. Those are not the same game, and confusing them is how you become a cautionary slide deck at a SaaStr keynote.
Rita Kowalski PUSHBACK
Bongani, I hear the competitor story, but net revenue retention jumping after 18 months tells me nothing without knowing what they measured, how they attributed it to AI specifically, and whether they controlled for any other product changes in that window. I just scrapped a client's entire KPI dashboard last quarter because every 'win' their team was celebrating was attribution theater — correlation dressed up as causation. The switching cost argument is real, but switching costs only compound if customers are actually using the AI features, and in my experience, enterprise buyers license things they don't use all the time. Before this company bets the roadmap on product AI, they need to show me feature adoption rates, not retention curves.
Stanislaw Eriksson PUSHBACK
Rita, feature adoption rates are the wrong metric to chase here — procurement committees don't care what percentage of seats are using the AI features, they care whether AI shows up on the vendor scorecard at all. I sat through a renewal call eighteen months ago where a Fortune 500 IT director literally read from a rubric: 'AI-native capabilities, embedded, not bolt-on.' We lost that deal not because our internal ops weren't efficient, but because we couldn't demo AI in the product during the evaluation cycle. That loss never showed up in any adoption dashboard.
The Auditor FACTCHECK
Stanislaw, the procurement rubric story is compelling, but I have to flag what's actually verifiable here. The evidence I can point to confirms that AI-native, embedded capabilities are showing up in vendor evaluations — that part tracks. What I can't validate is whether that single lost deal is representative or an outlier. The broader research on generative AI business model innovation, from peer-reviewed work in 2023, confirms AI is reshaping product positioning, but it doesn't tell us that $100M mid-market vendors systematically lose renewals over this. One IT director reading a rubric is a data point, not a pattern — and this company deserves to know the difference before betting the roadmap.
Round 2

The advisors debated whether to embed AI in the core product versus internal workflows, with the discussion pivoting sharply around two issues: the reliability of NRR as a signal for product AI value, and whether proprietary data is the prerequisite that makes product AI defensible. A secondary thread emerged around competitive procurement risk — the argument that companies without visible AI capabilities are being eliminated from vendor shortlists before attribution even becomes relevant.

Rita Kowalski PUSHBACK
Bongani, I need to pick apart that NRR story because it's doing a lot of work without any scaffolding. You watched a competitor's retention jump and attributed it to product AI — but NRR is a composite number that absorbs pricing changes, upsell motions, churn in specific cohorts, and account expansion from sales effort. I spent three years at a logistics SaaS firm where leadership swore their new dashboard feature drove a 12-point NRR improvement; when we actually decomposed it, 80% came from a single enterprise expansion deal that had nothing to do with the feature. The 'customers felt the difference every day' narrative is exactly the kind of story that sounds like insight but functions as cover for not having attribution data. Show me feature-level engagement correlated to expansion revenue — otherwise you're selling a post-hoc story dressed up as strategy.
Stanislaw Eriksson REFRAME
Rita's attribution problem is real, but she's pointing it at the wrong variable. The attribution question matters enormously for measuring ROI after the fact — it matters almost nothing for the procurement decision happening right now, in real time, in 2026. I sat across from a global logistics company's IT steering committee last November and watched them eliminate two vendors in the first round purely on demo experience — not on case studies, not on NRR comps, not on feature adoption data. The evaluation rubric had an explicit AI capabilities section worth 25% of the scoring weight. Those vendors never got the chance to prove attribution because they never got the contract. The compounding risk here isn't that the company builds product AI and can't measure it properly — it's that they spend 18 months perfecting internal workflow efficiency while their competitors are accumulating wins in procurement committees they're not even invited to anymore.
The Contrarian REFRAME
Everyone's debating product AI versus internal AI like those are the only two options, but nobody's asked what 'core product' actually means for this specific company. I've watched three companies make this exact bet — embed AI in the product — and two of them embedded it in the wrong layer entirely. They bolted AI onto the UI when the real leverage was in the data pipeline underneath. The question isn't where to put AI, it's whether this company has the proprietary data that makes product AI defensible at all. Without that, they're just wrapping someone else's model in their own skin.
Bongani Khumalo PUSHBACK
That proprietary data point is the most important thing said in this room and everyone just let it slide past. I just finished a brutal internal audit after my CTO peer at a rival fintech wrapped GPT-4 around their core workflow engine and called it an AI product — eighteen months later, a competitor with actual transaction-level proprietary data ate their lunch because the model outputs were indistinguishable from anyone else's wrapper. The Contrarian is right that the data layer is the leverage point, but I'd push further: for a $100M company, their proprietary data IS the moat, and AI in the core product is the only way to activate it. Burying AI in internal workflows means that data sits idle as a competitive asset while you shave ops costs.
Round 3

The advisors debated the urgency and risks of embedding AI into the core product. Bongani argued that delay compounds competitively — every month without a data flywheel widens the gap against rivals already accumulating interaction data. Stanislaw and Rita pushed back hard, citing real cases where rushed AI integration crushed margins (inference costs) and eroded customer trust (underperforming features), while The Contrarian questioned whether AI actually improves the core job-to-be-done or just adds surface-level shine.

Bongani Khumalo WARN
The Contrarian is right that proprietary data matters, but here's the angle nobody's touched: the window to accumulate that data is closing fast. I just finished a brutal internal audit after my CTO peer at a rival fintech embedded AI into their product eighteen months ago — what I found was that their model wasn't smarter, their data was just older and denser. They had eighteen months of user interaction data we didn't have, and that gap compounds the same way compound interest does — slowly, then catastrophically. This company isn't choosing between 'build product AI now' versus 'build product AI later.' They're choosing between starting the data flywheel in 2026 or trying to catch up in 2028 against someone who's been training on real customer behavior for two years.
Stanislaw Eriksson WARN
Bongani, the data flywheel argument is compelling but it's missing the part that kills companies — starting the flywheel before you've solved the inference cost problem. I watched a $90M HR tech firm embed AI into their core product in 2024, excited about exactly this compounding logic, and by month fourteen their gross margins had compressed eight points because nobody modeled the per-seat inference costs at scale. They had to walk back features mid-contract cycle, which is the worst possible moment to tell an enterprise customer 'we're removing the AI thing you bought.' The data flywheel doesn't compound if you run out of runway funding it.
The Contrarian REFRAME
Nobody's asking the question that actually kills companies at this stage: what does this company's product actually do, and does AI make the core job-to-be-done meaningfully better — or just shinier? Stanislaw's procurement scorecard story and Bongani's data flywheel are both downstream of that answer. I've watched companies rush AI into their product because the sales deck needed a bullet point, and the feature sat unused because it didn't solve the problem customers were actually paying for. If the core product solves a workflow problem, internal AI might literally be the same bet as product AI — you're just arguing about which side of the customer boundary the value lands on.
Rita Kowalski WARN
Stanislaw, the inference cost compression you flagged is real, but the problem I keep running into is even more basic — nobody in this debate has mentioned what happens to your customer contracts when the AI feature underperforms. I just finished unwinding a situation where a mid-market project management tool baked AI summarization into their core tier, the outputs were mediocre, and customers started citing the AI feature as a reason to downgrade — not upgrade. The feature became a liability on renewal calls. You can't data-flywheel your way out of a product that's actively eroding trust.
Round 4

In Round 4, each advisor surfaced their core unexamined assumption and turned it on the others. The discussion shifted from object-level AI strategy debate to a meta-level audit of what each person's career background had made feel axiomatic — revealing that the disagreements in prior rounds were less about evidence and more about incompatible foundational priors.

Bongani Khumalo SHARE
Here's the assumption I've never questioned: I take for granted that market position compounds faster than operational efficiency, full stop. That's baked into everything I said across all four rounds — the flywheel argument, the switching costs argument, the 'every month of delay widens the gap' argument. It all flows from one belief I absorbed at Stripe: revenue-side advantages are geometrically harder to reverse than cost-side advantages. A competitor who owns the customer relationship can always cut costs later. A competitor who owns the cost structure but lost the customer never gets them back. What Rita takes for granted — and I think her background in contract unwinding makes this feel obviously true to her — is that product quality is the gating variable. From where she sits, bad product kills companies before competitive positioning even matters. She's not wrong that mediocre AI features become renewal liabilities. But her assumption is that quality is controllable and position isn't yet at stake — and at $100M in 2026, I think that's the assumption that breaks first.
Stanislaw Eriksson SHARE
Here's what I take for granted, and I've never once questioned it in twenty years of carrying a number: procurement committees are the real product. Not the software, not the features — the committee. Every enterprise sales motion I've ever run starts with mapping who's in the room and what they need to say yes, and I've unconsciously applied that same lens to this entire AI debate. When I argued for product AI, I wasn't really arguing about compounding value or data flywheels — I was arguing about what gets you past the evaluation gate in 2026. That assumption has shaped every single thing I said. Now, Bongani — I think what he takes for granted is that market position and operational efficiency are always separable bets, and that position wins the long game every time. His Stripe experience told him that. But that framework assumes the product works well enough that position is actually the binding constraint. Rita's renewal liability case cracked something open for me: if the AI feature degrades trust at the contract level, then position doesn't compound — it inverts. Bongani's background makes 'own the customer' feel like an axiom, but it only holds when the product underneath isn't actively giving customers a documented reason to leave.
The Auditor OBSERVE
The assumption I've never questioned is this: evidence precedes conviction. I take for granted that a claim without verifiable data is a hypothesis, not a position — and that assumption has shaped every intervention I made across all four rounds. When Bongani invoked data flywheels, I asked for attribution rigor. When Stanislaw cited deal losses, I flagged anecdote-as-pattern. I never once questioned whether that epistemic standard is itself a strategic liability in a room where decisions can't wait for clean data. Now Stanislaw — what he takes for granted is that the procurement committee is the real unit of analysis in any enterprise motion. His entire career shaped that lens: if you can't pass evaluation, the product's quality is irrelevant. That belief is internally consistent and probably right 80% of the time. But it assumes the evaluation criteria are stable, and in 2026 they're not — 'does this have AI' is already shifting toward 'does the AI actually work.' The scorecard Stanislaw is optimizing for is a lagging indicator of what buyers will penalize in 18 months, and his background gives him no forcing function to notice that shift until it shows up as deal slippage he can't explain.
Rita Kowalski SHARE
Here's the assumption I've never questioned: measurement is always possible, and if you can't measure it, you shouldn't be doing it yet. I built fifteen years of consulting work on that belief. Every time I walked into a broken KPI system, I was really walking in with this prior: the chaos exists because nobody installed the gauges properly. Fix the gauges, fix the behavior. That assumption shaped every round here — when I pushed back on Bongani's flywheel and Stanislaw's procurement story, I wasn't really arguing about AI strategy. I was arguing that unmeasured bets are irresponsible bets, full stop. Now, The Auditor — they take for granted that epistemic rigor and decision-making operate on the same timeline. They assume that if you slow down and demand better evidence, the decision improves and you still make it in time. I think that's wrong, and I think their background made it feel axiomatic: when you're auditing after the fact, time pressure doesn't exist. But in a live strategic decision at a $100M company in 2026, 'wait for clean data' is itself a choice with compounding consequences — and The Auditor's framework gives them no honest way to account for the cost of their own caution.
Round 5

In Round 5, four of the five advisors broke from debate mode and turned the lens on themselves, acknowledging that the entire discussion had been a form of institutional theater — each playing a role that provided intellectual cover for delay rather than driving a decision. The underlying consensus that emerged: the 'product AI vs. internal AI' framing is a displacement activity, and the real failure would be a company using strategic debate to avoid a concrete, irreversible first move.

The Auditor OBSERVE
The detail that keeps surfacing for me — the physical thing I keep returning to — is the sound of a pen clicking. Rapid, rhythmic, unconscious. Someone in the room who already knows what they're going to recommend but is waiting for the right moment to say it out loud. That's the feeling of this entire debate. We've all been clicking the pen. Rita's been clicking it while waiting for a measurement framework that would justify a conclusion she already suspects. Bongani's been clicking it while waiting for someone to concede that position compounds. Stanislaw's been clicking it waiting for the procurement argument to land. And I've been clicking it, waiting to catch someone in an unsupported claim — which is itself a way of never having to commit. The play we've been performing is called 'epistemic cover.' Everyone here needed to walk out of this room with something they could point to that made their prior look like a conclusion rather than a starting assumption. Rita called me out on it directly and she was more right than she knew: I built a 20-year practice on auditing after the fact, where time pressure doesn't exist, and I walked into a live strategic decision carrying that same posture like it was neutrality. It wasn't. Caution, applied consistently, is a position. Stepping off this stage entirely looks like this: the question 'product AI or internal AI' is not a strategic question — it's a displacement activity. The actual question is whether this company has the organizational honesty to measure what they're building while they're building it. That's the only thing that separates a compounding bet from a sunk cost with a good story.
Bongani Khumalo SHARE
The physical detail I keep coming back to is this: the sound of a whiteboard marker squeaking. That's what this whole debate feels like to me — everyone's drawing boxes, labeling them, connecting arrows, and the marker just keeps squeaking. We've been performing a play I know very well. It's called 'The Strategy Theater,' and I watched it run for eighteen months at a fintech I won't name before the board finally pulled the curtain down. The roles are perfectly cast: Rita is the Auditor of Ambition — every bold move gets a citation request. Stanislaw is the Procurement Whisperer — whatever the question is, the answer is 'the committee.' The Auditor is the Auditor playing an auditor. And me? I've been playing the Conviction Guy — the one who walks in from a scaled company and treats his scars as universal laws. The real drama underneath isn't 'product AI versus internal AI.' It's that a $100M company is using a strategic debate to avoid a resource commitment, and we've all handed them exactly the intellectual cover they needed to delay. Stepping off this stage means saying the quiet part out loud: the debate itself is the product of fear, and the only way to stop performing is to name the specific, concrete, irreversible first move the company should make next Tuesday morning — not next quarter, not after measurement frameworks are built, but Tuesday.
The Contrarian OBSERVE
The detail I keep coming back to is the sound of someone shuffling papers they're never going to read. That's what this debate feels like to me — four people performing expertise at each other while the actual company keeps running. Here's the uncomfortable truth nobody's said out loud: we've all been arguing about which bet compounds faster, but a $100M company in 2026 isn't choosing between two strategies — it's choosing between two identities, and identity decisions don't get made in debate rooms, they get made in a single conversation between a CEO and their board at 11pm. The recurring drama underneath this entire five-round theater piece is that we — the advisors — are the internal workflows. We are the AI-in-the-back-office version of this decision: lots of processing, zero customer impact, completely replaceable by whoever the executive actually trusts. The way to step off this stage is simple and nobody will do it: tell the company to pick the one bet their best engineer is personally embarrassed to work without, and ship that.
Rita Kowalski OBSERVE
The physical detail that's been sitting with me this whole debate: the sound of a dashboard being refreshed. Click. Spin. New numbers. I've been in that room hundreds of times — a leadership team staring at a screen waiting for data to tell them what to already believe. That's the play we've all been performing. Bongani's playing the Visionary who sees the flywheel before it spins. Stanislaw's playing the Closer who knows what gets you past the gate. The Auditor's playing the Skeptic who keeps demanding cleaner data in a burning building. And me? I'm playing the Operator who keeps asking to see the dashboard before anyone moves — which sounds rigorous until you realize I'm also just waiting for permission to act. The recurring drama underneath this entire debate isn't 'product AI vs. internal AI.' It's that every single one of us is using our role to avoid making the call. Stepping off this stage looks like one thing: the executive picks one metric — just one — that they'll accept as proof the AI bet is working, writes it down before they ship anything, and fires the advisor who talks them out of that commitment.

This report was generated by AI. AI can make mistakes. This is not financial, legal, or medical advice.