# Ecomma DD Process

Steps 1–8 produce the deal memo. Step 9 — the DD Trifecta — fires the verdict at three independent AI models and synthesises their challenges. Today, five of the nine steps still need analyst involvement; the build plan progressively removes that requirement until the analyst's role shifts from doing the work to reviewing the engine's output. On its first run, on the Aveugle case, the trifecta surfaced five reasoning errors, including errors the original analyst-led memo had shipped, and logged six methodology lessons.
## The DD Trifecta

### Why this matters
Every analyst has blind spots. Every model has blind spots. A verdict that survives independent challenge from three different models — trained on different data, with different reasoning styles, with different biases — is materially more defensible than a verdict that did not.
The trifecta also surfaces methodology errors that compound across deals. When the tribunal SUSTAINS a charge, the lesson feeds back into the reference files. The next deal's analysis is sharper.
### Aveugle Case — first trifecta run, May 2026

The Aveugle DD memo (HARD PASS verdict) was submitted to the trifecta. Gemini and OpenAI independently flagged reasoning errors; five were material:
- Cettire 59% retention comparison — apples-to-oranges measurement (multi-year cohort vs 30-day dashboard). Both adversaries SUSTAINED. Removed from memo.
- 0.80× implied multiple as HARD PASS — finance error. In micro-PE that's a 9.6-month payback. Both SUSTAINED. Reframed.
- EU Omnibus regulatory exposure — overstated for a $1M target. Both demoted to Caution.
- Scenario probabilities (25/55/20) — arbitrary on an empty dataroom. Both SUSTAINED. Caveat added.
- Asset-purchase counter-case at $30–40K — Gemini surfaced as MODIFIED. Added to memo as separate CONDITIONAL PASS scenario.
Verdict outcome upheld; reasoning restructured. Six methodology lessons logged back into the plugin reference files. Cost of the trifecta: ~$0.10 in API calls + 15 minutes of analyst time. Leverage: every future DD benefits.
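The multiple-to-payback reframing behind that second charge is plain arithmetic. A minimal sketch, assuming the implied multiple is quoted against one year of seller earnings held flat (the function name is illustrative):

```python
def payback_months(implied_multiple: float) -> float:
    """Convert an implied earnings multiple into a payback period.

    Assumes the multiple is quoted against one year of earnings held
    flat, so a 1.0x price is repaid in 12 months.
    """
    return implied_multiple * 12

# The 0.80x multiple the memo originally treated as a HARD PASS signal
print(round(payback_months(0.80), 1))  # → 9.6
```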
### How to invoke
Default: every CONDITIONAL PASS and every HARD PASS above $200K deal size automatically goes through the trifecta before the seller response is sent. Manually: /dd-trifecta or "run the tribunal on this deal".
Required: OPENAI_API_KEY and GEMINI_API_KEY environment variables. Codex CLI (optional) substitutes for OpenAI when installed.
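As a sketch, the dispatch rule and the environment check might look like the following (the function names, and the strict-inequality reading of "above $200K", are assumptions, not the plugin's actual identifiers):

```python
import os

TRIFECTA_DEAL_SIZE_FLOOR = 200_000  # USD; HARD PASS deals above this auto-dispatch

def should_run_trifecta(verdict: str, deal_size: float) -> bool:
    """Auto-dispatch rule: every CONDITIONAL PASS, and every HARD PASS
    above the deal-size floor, goes through the trifecta before the
    seller response is sent."""
    if verdict == "CONDITIONAL PASS":
        return True
    return verdict == "HARD PASS" and deal_size > TRIFECTA_DEAL_SIZE_FLOOR

def missing_trifecta_keys() -> list[str]:
    """Names of required API-key environment variables that are unset."""
    required = ("OPENAI_API_KEY", "GEMINI_API_KEY")
    return [name for name in required if not os.environ.get(name)]
```

The Codex CLI fallback for OpenAI is not modelled here.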
## 1. Why this process exists
Ecomma evaluates approximately 100 ecommerce acquisition opportunities per year through a two-stage process: an informal soft yes/no, followed by a formal due diligence (DD). The current DD process is largely financial — analysts produce a deal memo grounded in P&L analysis, with the acquisition thesis criteria applied as a checklist. The process is rigorous on the financial dimension and well-served by analyst experience.
Four structural gaps in the current process motivate this document:
- No standardised front gate. Failed deals consume analyst time before they are killed. Many failed deals share the same 2–3 disqualifiers and could be rejected in 30 minutes by a checklist, freeing analyst capacity for the deals that actually warrant deep work.
- No required external benchmark. The current DD treats each target on its own terms. The result is verdicts that are correct but not anchored — "this business has a 19% GM" is informative; "this business has a 19% GM in a category where the public-market price for that model is 23% GM" is decisive.
- No institutional recall. Nothing enforces the discipline that prevents the fund from re-pitching itself on a token price concession three months later. The HubSpot recall registry (Step 8) makes that discipline cheap to run.
- No multi-model challenge. Single-model analysis has blind spots. Even an excellent analyst with an excellent model still has the same blind spots every deal. The trifecta (Step 9) introduces independent challenge from genuinely different models.
The endgame is full automation. Today's split — rules for the gate, templates for structure, analyst judgement for the layered review and scenarios — is a transitional state, not the destination. Each phase of the build removes more analyst manual work. The trifecta is the model: it turns Step 9 into fully automated multi-provider analysis, with the analyst doing only the synthesis read. Step 4 (gap review) and Step 6 (scenario model) follow the same pattern in Phase 4 and Phase 5 of the roadmap.

## 2. Current state
The existing Ecomma DD process produces high-quality output on the financial dimension. The Aveugle audit (20 Apr 2026) is the proof point — explicit verdict, monthly P&L cohort, criteria sense-check, multiple analysis, negotiation ladder, recommended response with HubSpot logging, and institutional recall from a prior pass.
What's missing is what this process adds: front-gate automation, external benchmarking, layered review beyond financials, scenario modelling, systematised recall, *and* multi-model adversarial challenge of every verdict.

## 3. The 9-step process
Steps 1, 5, and 8 run automatically. Step 4 (gap review) is the analyst's primary work product. Step 9 (trifecta) runs after the analyst memo is drafted and before it ships to the IC.
| # | Step | What it produces | Tier |
|---|---|---|---|
| 1 | Thesis-fit gate | PASS / NEAR / HARD FAIL on 12 criteria (10 from current thesis + retention vs comp + public reputation) | Fully automatable |
| 2 | Financial DD | Verified P&L summary, GM trend, ad-spend efficiency, monthly cohort table | Partial |
| 3 | Public-comp benchmark | Comp set selected by category, KPI deltas (GM, repeat, AOV, ROAS) vs target | Partial |
| 4 | Ten-layer gap review | For each layer: what audit covered, what is missing, evidence with sources, impact on verdict | Judgement |
| 5 | Public reputation scan | Trustpilot score & complaint themes; regulator-register check (Forbrugerombudsmanden / equivalents) | Fully automatable |
| 6 | Three-scenario model | Downside / base / upside FY+1 projection with explicit drivers and probability weights | Judgement |
| 7 | Verdict + multi-trigger plan | PASS / CONDITIONAL PASS / HARD PASS, with explicit triggers (plural) for re-evaluation | Judgement |
| 8 | HubSpot logging + recall | Closed Lost reason code, re-engagement triggers stored against the deal record for institutional memory | Fully automatable |
| 9 | ★ DD Trifecta | Three-headed multi-model cross-examination of the verdict; convergence matrix; disposition (UPHELD / RESTRUCTURED / OVERRULED); methodology lessons logged | ★ Multi-model |
The trifecta is the cheapest way to compound the methodology. Each run that produces RESTRUCTURED or OVERRULED dispositions surfaces methodology gaps that propagate back to the reference files. The Aveugle case generated six lessons on a single trifecta run — every future DD benefits automatically.
## 4. Step-by-step detail

### Step 1 — Thesis-fit gate *(fully automatable)*
A 12-criterion checklist run on every inbound deal as the first action. Two HARD FAILs = automatic Closed Lost; the deal does not enter the analyst queue.
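In code, the kill rule is a counting exercise. A minimal sketch — the criterion names and the handling of single HARD FAILs and NEARs are illustrative assumptions; the source only fixes the two-HARD-FAIL kill:

```python
from collections import Counter

def gate_verdict(results: dict[str, str]) -> str:
    """Apply the thesis-fit gate to a completed checklist.

    `results` maps each of the 12 criteria to 'PASS', 'NEAR', or
    'HARD FAIL'. Two or more HARD FAILs close the deal before it
    reaches the analyst queue; anything else proceeds.
    """
    counts = Counter(results.values())
    if counts["HARD FAIL"] >= 2:
        return "CLOSED LOST"       # automatic, no analyst time spent
    return "ENTER ANALYST QUEUE"   # single fails/NEARs assumed to proceed flagged

# Illustrative criterion names, not the actual thesis checklist
sample = {"gross_margin": "PASS", "retention_vs_comp": "HARD FAIL",
          "public_reputation": "HARD FAIL", "category_fit": "NEAR"}
print(gate_verdict(sample))  # → CLOSED LOST
```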
### Step 2 — Financial DD *(partial automation)*
Verified P&L summary (FY and YTD), monthly cohort, multiple analysis. Auto-generated tables; analyst writes commentary and identifies anomalies.
### Step 3 — Public-comp benchmark *(partial automation)*
KPI deltas vs the closest public comparable. Comp database refreshed quarterly; comp selection per deal is analyst judgement.
### Step 4 — Ten-layer gap review *(judgement)*
The analytical core. Ten templated layers, each producing what's covered / what's missing / evidence with sources / insight / impact on verdict.
### Step 5 — Public reputation scan *(fully automatable)*
Trustpilot scrape + regulator-register check in seller's establishment jurisdiction. Auto-flags below 4.0 rating or any open inquiry.
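The auto-flag condition reduces to a single predicate; a sketch with illustrative names:

```python
def reputation_flag(trustpilot_score: float, open_inquiries: int) -> bool:
    """Step 5 auto-flag: a Trustpilot score below 4.0 or any open
    regulator inquiry surfaces the deal for analyst attention."""
    return trustpilot_score < 4.0 or open_inquiries > 0

print(reputation_flag(4.3, 0))  # → False
print(reputation_flag(3.9, 0))  # → True
```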
### Step 6 — Three-scenario model *(judgement)*
Downside / Base / Upside FY+1 projection with probability-weighted expected return arithmetic shown explicitly.
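A minimal sketch of the probability-weighted arithmetic the step shows explicitly (the return figures below are illustrative, not from any deal):

```python
def expected_return(scenarios: list[tuple[float, float]]) -> float:
    """Probability-weighted FY+1 return across downside/base/upside.

    `scenarios` is a list of (probability, projected_return) pairs;
    the probabilities must sum to 1.0.
    """
    total_p = sum(p for p, _ in scenarios)
    if abs(total_p - 1.0) > 1e-9:
        raise ValueError(f"probabilities sum to {total_p}, expected 1.0")
    return sum(p * r for p, r in scenarios)

# Illustrative returns with 25/55/20 weights, as in the Aveugle memo
ev = expected_return([(0.25, -0.30), (0.55, 0.10), (0.20, 0.40)])
print(round(ev, 3))  # → 0.06
```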
### Step 7 — Verdict + multi-trigger re-engagement plan *(judgement)*
PASS / CONDITIONAL PASS / HARD PASS with explicit AND-conditions for re-engagement (plural triggers, never single).
### Step 8 — HubSpot logging + recall registry *(fully automatable)*
Updates dealstage, reason_code, re_engagement_triggers, institutional_lesson. Auto-recalls prior closed deals on the same seller/domain/category at deal entry.
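A sketch of the record shape and the recall rule, using the property names listed above; the values, the seller/domain/category keys, and the matching logic are illustrative assumptions:

```python
# Illustrative Closed Lost record; property names follow the process doc
closed_lost_record = {
    "dealstage": "closedlost",
    "reason_code": "RETENTION_BELOW_COMP",   # illustrative code
    "re_engagement_triggers": [              # plural triggers by design
        "asking price <= 0.6x revenue",
        "3 consecutive months GM >= 23%",
    ],
    "institutional_lesson": "Compare retention on matching cohort windows.",
    "seller": "Example ApS",
    "domain": "example.com",
    "category": "apparel",
}

def recall_matches(prior: dict, new_deal: dict) -> bool:
    """Step 8 recall: surface a prior closed deal when the new deal
    shares a seller, domain, or category with it."""
    return any(new_deal.get(k) is not None and prior.get(k) == new_deal.get(k)
               for k in ("seller", "domain", "category"))

print(recall_matches(closed_lost_record, {"domain": "example.com"}))  # → True
```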
### Step 9 — DD Trifecta *(★ multi-model)*
The new step. Submits the verdict to three independent AI models (Claude orchestrating + Gemini + OpenAI/Codex). Builds a convergence matrix on 8 standard adversarial charges. Synthesises a tribunal verdict that either UPHOLDS, RESTRUCTURES, or OVERRULES the original.
Required env vars: OPENAI_API_KEY, GEMINI_API_KEY. The plugin never bakes API keys into its files — set these as environment variables in the user's Cowork environment.
Cost per trifecta: under $0.10 in API calls + ~15 minutes of analyst review of the responses.
The 8 standard charges:
- Asset class reframing — is the structural argument actually correct?
- Comp metric methodology asymmetry — apples to apples?
- Regulatory / reputation exposure realism — overstated for the target's scale?
- Multiple framing vs hurdle-rate framing — public-market intuition vs micro-PE economics?
- Scenario probability defensibility — defensible on the available data, or theatre?
- Upside case under-weighting — discounting the acquirer's own playbook?
- Contrarian engagement case — alternative deal structures considered?
- What was missed — completeness of the layered review?
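One way the convergence matrix and disposition could be tallied, as a sketch; the vote vocabulary matches the case study above, but the majority threshold and the disposition cut-offs are assumptions, not the plugin's actual logic:

```python
from collections import Counter

CHARGES = [
    "asset_class_reframing", "comp_metric_asymmetry", "regulatory_realism",
    "multiple_vs_hurdle_framing", "scenario_probability", "upside_underweighting",
    "contrarian_engagement", "completeness",
]

def tribunal_disposition(matrix: dict[str, dict[str, str]]) -> str:
    """`matrix[charge][model]` holds 'SUSTAINED', 'MODIFIED', or 'DISMISSED'.

    Assumed rules: a charge counts when a majority of adversaries sustain
    it; no sustained charges upholds the verdict, sustained charges on a
    minority of the matrix restructure it, and on half or more overrule it.
    """
    sustained = [charge for charge, votes in matrix.items()
                 if Counter(votes.values())["SUSTAINED"] >= 2]
    if not sustained:
        return "UPHELD"
    return "OVERRULED" if len(sustained) >= len(matrix) / 2 else "RESTRUCTURED"

# Start from a clean matrix, then sustain one charge on both adversaries
matrix = {c: {"gemini": "DISMISSED", "openai": "DISMISSED"} for c in CHARGES}
matrix["comp_metric_asymmetry"] = {"gemini": "SUSTAINED", "openai": "SUSTAINED"}
print(tribunal_disposition(matrix))  # → RESTRUCTURED
```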
## 5. Rules engine — what it does and does not do

### 5.1 What the rules engine does
- Runs the 12-criterion thesis-fit gate (Step 1) on every inbound deal in <5 minutes
- Triggers the public reputation scan (Step 5) on deal entry; refreshes monthly
- Looks up matching closed deals from HubSpot (Step 8 recall)
- Auto-generates financial DD scaffolding from a structured P&L upload
- Provides the public-comp database for analyst lookup (Step 3)
- NEW: dispatches the DD Trifecta automatically on CONDITIONAL or HARD PASS verdicts (Step 9)
### 5.2 Current automation limits — to be progressively removed
The following are TODAY'S limits, not architectural constraints. Each is on the roadmap to full automation.
- Doesn't auto-pass deals that clear all thresholds. Today the analyst has to confirm manually. Roadmap: once the trifecta has run on 50+ deals and false-positive rates are calibrated, deals that clear the gate, clear all 10 layers, and are UPHELD by the tribunal ship straight to LOI without analyst review.
- Doesn't auto-write the seller response. Today an analyst writes from the template. Roadmap: parametrised seller-response generation from the verdict + trigger conditions; analyst reviews and sends.
- Doesn't derive scenario probabilities. Today an analyst sets them. Roadmap: probability calibration from comparable closed deals in the HubSpot recall registry — once the registry has 30+ closed deals with realised outcomes, probabilities become data-driven.
- Doesn't select the public comp. Today an analyst picks. Roadmap: comp selection becomes a classification problem against the comp database once we have 50+ category-tagged closed deals.
- Doesn't auto-accept the tribunal disposition. Today Claude orchestrator synthesises. Roadmap: keep this one human-readable for IC defensibility — the tribunal disposition is the one place where the analyst's reading should remain even at full automation, because IC review depends on it.
Every "today" limit above has a phase in the roadmap that removes it. The endgame is the analyst reviewing engine output for ~15 minutes per deal instead of writing memos for 4 hours. Maksim's team becomes a portfolio-construction and seller-relationship function, not a deal-memo-production function. That's the trajectory.
## 6. Phased build plan
| Phase | Window | What gets built | Owner / leverage |
|---|---|---|---|
| Phase 1 | Week 1 · days 1–3 | Codify the 12-criterion thesis-fit gate. Backfill against the last 20 deals to calibrate thresholds. | Analyst — 8–12 hours total. Catches ~60% of failed deals before any analyst time. |
| Phase 2 | Week 1 · days 4–7 | Public-reputation scan (Trustpilot + regulator) as automated job. Plus: deploy the DD Trifecta against the next 5 inbound deals to validate the methodology. | Engineering / no-code (Make.com or Zapier). 2–3 days build. |
| Phase 3 | Week 2 | Public-comp database — initial 30–40 companies across the 5 verticals. Plus: install Codex CLI for the third tribunal voice (replacing OpenAI). | Highest single-phase leverage: removes the analyst's most-repeated work. ~40–60 analyst hours saved annually once live. |
| Phase 4 | Week 3 | Auto-write the gap review. The 10 layers drafted by a Claude sub-agent against the templates with sourced WebSearch evidence; analyst reviews + adds insights, no longer drafts from scratch. | Removes ~60–90 minutes per deal of analyst writing. Compounds over the pipeline. |
| Phase 5 | Week 4 · days 22–25 | Auto-build the scenario model. Downside/base/upside probabilities calibrated from the HubSpot recall registry's existing closed-deal corpus; refines as more deals close. Analyst reviews probability assignments rather than picking from gut. | Removes ~30 minutes per deal + makes probabilities defensible. |
| Phase 6 | Week 4 · days 26–28 | Auto-pass deals that pass gate + UPHELD by tribunal. The trifecta's UPHELD disposition is calibrated against the Phase 2 pilot deals; deals clearing gate + 10 layers + tribunal UPHELD ship straight to LOI without analyst review. Analyst reviews only RESTRUCTURED and OVERRULED dispositions. | The endgame: analyst time per deal goes from ~3.5 hours (today) to ~15 minutes (review only). |
Week 1 ships the gate and the reputation scan + first trifecta runs. Week 2 ships the comp database. Week 3 ships auto-written gap reviews. Week 4 ships data-driven scenarios and full auto-pass on UPHELD trifectas. By end of week 4, analyst time per deal is ~15 minutes (review only) versus ~3.5 hours today. Maksim's team scales from ~100 deals/year (today's analyst capacity) to 300+ (engine capacity with analyst review).
## 7. Change management — how to roll this out
- "This adds work, it doesn't save work." Pilot on 5 deals. The gate saves time immediately; the deeper review feels like extra work for the first 2–3 deals, until the templates take over.
- "The templates suppress real analysis." Templates are floor for consistency, not ceiling for depth. Add deal-specific layers; mark layers N/A with one-line justification.
- "The rules engine will mis-classify deals." Quarterly review of auto-Closed-Lost decisions; recalibrate thresholds where engine over-fires.
- NEW: "The trifecta will produce conflicting verdicts." That's the point. Convergence matrix is the synthesis tool. Mixed verdicts surface for analyst adjudication, not auto-decided.
## 8. Aveugle case study — the methodology's prototype
The Aveugle DD memo (5 May 2026) is the worked example showing the full methodology in a