Product Blueprint • March 2026

TBK-IQ

The AI Content Intelligence Platform.
Data-driven topic selection. Voice cloning. Universal publishing.

Prepared by Chain Reaction · Complete Product Blueprint · Confidential
6 Platform Layers
7 AI Agents
10+ CMS Targets
3-Week Sprint
01

What is TBK-IQ?

An end-to-end AI content intelligence platform that knows what to write, when to write it, who it should sound like, and where to publish it.

TBK-IQ is not an article generator. It is a 6-layer platform that combines real-time performance data (GSC, GA4, Semrush, Ahrefs), competitive intelligence, voice cloning, multi-agent article generation, and universal CMS publishing into a single pipeline. The user provides a category. TBK-IQ returns data-backed article recommendations, generates content in the voice of the client's existing writers, and publishes it directly to any CMS.

The End-to-End Flow

Client Site
Data Ingestion
Intelligence
Voice Match
Generate
Publish

Who Is This For?

E
Enterprise Publishers

SRMG, Asharq, and media groups producing 2,000+ articles/week. Manual SEO workflows can't scale — TBK-IQ automates the intelligence layer.

A
SEO Agencies

Content teams managing 10+ client blogs. TBK-IQ replaces the spreadsheet-and-gut-instinct approach with data-driven topic selection per client.

S
E-Commerce / Shopify Stores

Product-aware content generation. TBK-IQ writes blog posts that reference the store's catalog, driving organic traffic to product pages.

B
Any Website With a Blog

WordPress, Ghost, Webflow, Contentful, Strapi, custom CMS, or plain HTML. If it publishes content, TBK-IQ works with it. Zero lock-in.

The core thesis: The science of knowing WHAT to write is harder than the act of writing it. Semrush tells you what keywords exist. TBK-IQ tells you which ones are worth writing about for YOUR specific site, in YOUR specific voice, and then proves it worked.
02

Platform Architecture — 6 Layers

Each layer feeds the next. Data flows top to bottom. Every article is backed by real performance data, matched to a human voice, and published directly to the client's CMS.

Layer 1 — Data Ingestion
Connectors & Crawlers
Google Search Console • GA4 • Semrush API • Ahrefs API • Google Trends • CMS Crawler
→ All data normalized into a unified Content Performance Record per URL per client
Layer 2 — Content Intelligence
The Recommendation Brain
Decay detection • Gap analysis • Seasonality model • Keyword opportunity scoring • SERP saturation index • Competitor velocity • Cannibalization guard
→ Outputs: ranked topic recommendations with confidence scores, data justification, and article type (NEW or REFRESH)
Layer 3 — Voice Intelligence
Writer Detection & Persona Cloning
Crawl existing content • AI vs human classification (stylometrics) • Writer clustering (HDBSCAN) • Persona DNA generation
→ Outputs: writer personas with sentence cadence, vocabulary fingerprint, tone, and style constraints
Layer 4 — Generation Pipeline
4-Agent Article Factory
Project Analyzer (detect site shell, design system, components) → Research Engine (6-round deep research via Gemini + web) →
Article Architect (concepts, TOC, component mapping, image plan) → Draft Writer (voice-matched, framework-native output + inline edit UI)
Layer 5 — Universal Publishing
CMS Adapters & Platform Plugins
WordPress plugin (all builders) • Shopify app • Contentful • Strapi • Ghost • Webflow • Sanity • Generic webhook/REST
→ Publish to any platform. Media uploaded to CMS CDN. SEO meta auto-set. Created as draft for human review.
Layer 6 — Feedback Loop
Performance Tracking & Recalibration
Track every published article at 30/60/90 days via GSC + GA4 • Prediction vs actual scoring • Recalibrate intelligence engine
→ TBK-IQ gets smarter the longer a client uses it. The data makes the engine more accurate over time.
03

Layer 1: Data Ingestion

Every decision TBK-IQ makes is backed by real performance data. This layer collects, normalizes, and stores it.

Source Data Pulled Auth Method Refresh Cycle
Google Search Console Clicks, impressions, CTR, avg position per page per query; index coverage; crawl stats OAuth 2.0 Daily
Google Analytics 4 Sessions, engagement rate, scroll depth, time on page, conversions per URL OAuth 2.0 Daily
Semrush API Keyword research, keyword gap, domain analytics, topic research, SERP features API Key Weekly
Ahrefs API Backlink profile, domain rating, referring domains, top pages by traffic, keyword explorer API Token Weekly
Google Trends Seasonal interest curves, trending queries, regional demand signals Public / Pytrends Weekly
CMS Crawler Every article URL, title, word count, publish date, author, categories, tags CMS API / HTTP On-demand

Data Normalization

All sources are normalized into a unified Content Performance Record per URL:

ContentPerformanceRecord {
  url: "/blog/n54-hpfp-symptoms"
  title: "N54 HPFP Failure Symptoms"
  publishDate: "2025-06-14"
  author: "Mike T."
  wordCount: 2,840
  gsc: { clicks: 1,420, impressions: 28,300, ctr: 5.02%, avgPosition: 8.2 }
  ga4: { sessions: 1,680, engagementRate: 72%, scrollDepth: 64% }
  semrush: { organicTraffic: 1,350, keywords: 47, topKeywordKD: 32 }
  ahrefs: { backlinks: 14, referringDomains: 8, domainRating: 42 }
  trend: { 3moChange: -18%, 6moChange: -34%, seasonal: "peaks Oct-Dec" }
  status: "DECAYING" // HEALTHY | DECAYING | DECLINING | DEAD
}
Cost management: Semrush and Ahrefs charge per API request. A caching layer stores results for 7 days (configurable per client). Estimated API cost: $200–400/mo per client at the Professional tier.
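The 7-day cache described above can be sketched as a small TTL store. This is illustrative only: class and parameter names are assumptions, not the shipped implementation.

```python
import time

class ApiCache:
    """Minimal TTL cache for paid SEO API responses (Semrush/Ahrefs).

    ttl_seconds defaults to 7 days, matching the per-client cache window
    described above. `clock` is injectable for testing.
    """

    def __init__(self, ttl_seconds: int = 7 * 24 * 3600, clock=time.time):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store: dict[str, tuple[float, object]] = {}

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if self.clock() - stored_at > self.ttl:
            # Stale entry: evict and report a miss so the connector re-fetches.
            del self._store[key]
            return None
        return value

    def set(self, key: str, value: object) -> None:
        self._store[key] = (self.clock(), value)
```

In practice the connector layer would key entries by provider, endpoint, and query, and the TTL would be configurable per client as noted above.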
04

Layer 2: Content Intelligence Engine

The brain. Analyzes performance data to recommend exactly what to write, when, and why.

Two Genesis Modes

A
Agent-Recommended PRIMARY

User provides a category. The agent returns a ranked list of specific articles to write, each scored and justified with data.

B
User-Driven OVERRIDE

The user provides a specific keyword directly, overriding the agent's recommendations to accommodate priorities or preferences beyond the data.

The Recommendation Algorithm

Mode A runs 6 analyses in sequence to produce scored recommendations:

# Analysis What It Does Data Source
1 Content Inventory Crawls the site, builds a map of every article with metadata CMS API + HTTP
2 Decay Detection Flags articles with 3+ months of declining impressions or 20%+ single-month drop GSC + GA4
3 Gap Analysis Finds high-volume keywords the client doesn't rank for but competitors do Semrush + Ahrefs
4 Seasonality Check Adjusts opportunity scores based on seasonal demand curves (write before the peak, not during) Google Trends
5 Saturation Index Scores SERP difficulty: are top results thin/outdated (high opportunity) or deep/authoritative (low)? Semrush SERP
6 Cannibalization Guard Checks if client already ranks for a keyword — recommends refresh instead of new article GSC + Content Map
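The decay-detection triggers from row 2 (3+ months of declining impressions, or a 20%+ single-month drop) can be sketched as a small classifier. The function name and the simplified HEALTHY/DECAYING labels are assumptions; the full status ladder (HEALTHY | DECAYING | DECLINING | DEAD) would need additional thresholds.

```python
def detect_decay(monthly_impressions: list[int]) -> str:
    """Classify a page from its GSC monthly impressions (oldest first).

    Implements the two triggers described above: a single month-over-month
    drop of 20% or more, or three or more consecutive declining months.
    """
    # Trigger 1: sharp single-month drop of 20%+.
    for prev, cur in zip(monthly_impressions, monthly_impressions[1:]):
        if prev > 0 and (prev - cur) / prev >= 0.20:
            return "DECAYING"
    # Trigger 2: three or more consecutive declining months.
    declining_streak = 0
    for prev, cur in zip(monthly_impressions, monthly_impressions[1:]):
        if cur < prev:
            declining_streak += 1
            if declining_streak >= 3:
                return "DECAYING"
        else:
            declining_streak = 0
    return "HEALTHY"
```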

Output: Scored Recommendations

// Example output from Mode A — category: "BMW tuning"

1. "N54 HPFP Failure Symptoms and Solutions" · Score: 94
   Volume: 4,400/mo | KD: 28 | No existing coverage
   Competitors: thin (avg 800 words) | Seasonal peak in 6 weeks
   Recommendation: NEW ARTICLE

2. "Best BMW Tuning Platforms 2025" · Score: 87
   Existing article lost 34% traffic in 3 months
   Competitors published fresher guides | Data is 14 months old
   Recommendation: REFRESH

3. "B58 vs N55: Which Engine is Better?" · Score: 81
   Volume: 2,100/mo | KD: 35 | Comparison intent
   Top SERP results: outdated (pre-2024) | Forums dominate
   Recommendation: NEW ARTICLE
05

Layer 3: Voice Intelligence

Detect existing writers, clone their style, produce content that's indistinguishable from their own work.

The Anti-Slop Mechanism

Instead of "write like a human," TBK-IQ says "write like THIS specific human who already writes for this publication." The Voice Analyzer Agent runs a 4-step process on the client's existing content.

Step 1 — Corpus Collection

Crawl the client's site and collect 50–100 articles minimum (statistical significance threshold). Extract raw text, strip HTML, preserve paragraph structure. Associate each with author name where available.
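The strip-HTML-but-preserve-paragraphs step can be sketched with Python's standard html.parser. This is a sketch only: real corpus collection (crawling, boilerplate removal, author attribution) is out of scope, and the block-tag list is an assumption.

```python
from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    """Strip HTML while keeping paragraph boundaries intact."""

    BLOCK_TAGS = {"p", "h1", "h2", "h3", "li", "blockquote"}

    def __init__(self):
        super().__init__()
        self.paragraphs: list[str] = []
        self._buf: list[str] = []

    def handle_data(self, data):
        self._buf.append(data)

    def handle_endtag(self, tag):
        # Closing a block-level tag ends one paragraph unit.
        if tag in self.BLOCK_TAGS:
            text = "".join(self._buf).strip()
            if text:
                self.paragraphs.append(text)
            self._buf = []

def extract_paragraphs(html: str) -> list[str]:
    parser = ParagraphExtractor()
    parser.feed(html)
    return parser.paragraphs
```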

Step 2 — AI vs Human Classification

Run stylometric analysis on each article to classify it as HUMAN, HYBRID, or AI-GENERATED. Signals measured:

Signal Human Pattern AI Pattern
Sentence length variance High (mix of short punchy + long complex) Low (uniform 15–20 word sentences)
Vocabulary richness (TTR) Higher — uses unexpected words Lower — defaults to common synonyms
Hedging frequency Occasional, deliberate Excessive ("it's worth noting", "it's important to")
Cliche density Low — personal expressions High — "delve", "landscape", "leverage"
Em-dash frequency Varies by author Unusually high (AI signature)
Paragraph rhythm Irregular, driven by thought flow Uniform 3–4 sentence blocks
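Two of the signals above, sentence-length variance and vocabulary richness (type-token ratio), can be computed directly. This is a sketch under simple tokenization assumptions; the remaining signals and the classification thresholds are omitted.

```python
import re
from statistics import pvariance

def stylometric_signals(text: str) -> dict:
    """Compute sentence-length variance and type-token ratio (TTR)
    for one article. Naive sentence/token splitting; a production
    classifier would use a proper tokenizer."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "sentence_count": len(sentences),
        "avg_sentence_len": sum(lengths) / len(lengths) if lengths else 0.0,
        # High variance suggests a human mix of short and long sentences.
        "sentence_len_variance": pvariance(lengths) if len(lengths) > 1 else 0.0,
        # TTR: unique tokens over total tokens. Higher = richer vocabulary.
        "ttr": len(set(tokens)) / len(tokens) if tokens else 0.0,
    }
```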

Step 3 — Writer Clustering

Using only the HUMAN articles, cluster by writing style fingerprint using NLP features: sentence structure, tone, formality, vocabulary, use of analogies, humor, technical depth. HDBSCAN clustering finds natural writer groups — no need to predefine the number of writers.

Step 4 — Persona Generation

Writer Persona: "The Technical Storyteller"
// Detected from 34 of 87 human-written articles
voice: Conversational-authoritative. Teaches through analogy.
cadence: Short-medium sentences (avg 14.2 words). Punchy openers.
structure: Opens with a real-world scenario. Uses "here's the thing."
numbers: Always contextualized ("145 bar — barely enough")
avoids: Passive voice, em-dashes, "it's worth noting", hedging
TTR: 0.72 (rich vocabulary, avoids repetition)
humor: Dry, occasional. Never forced.
refs: [url1], [url2], [url3]

This persona is injected into the draft-writer agent as a style constraint. The output reads like the client's own writer wrote it.
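One plausible injection format, sketched below. The persona field names are assumed from the example profile above; the real prompt-template format is up to the pipeline.

```python
def persona_to_constraints(persona: dict) -> str:
    """Render a writer persona into a style-constraint block for the
    draft-writer prompt. Field names (name, voice, avg_sentence_words,
    avoids) are illustrative assumptions."""
    lines = [f"Write as: {persona['name']}"]
    lines.append(f"Voice: {persona['voice']}")
    lines.append(f"Average sentence length: about {persona['avg_sentence_words']} words")
    if persona.get("avoids"):
        lines.append("Never use: " + ", ".join(persona["avoids"]))
    return "\n".join(lines)
```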

06

Layer 4: The Generation Pipeline

7 AI agents orchestrated in sequence. Each one is specialized, autonomous, and produces structured output for the next.

Agent Architecture

# Agent Input Output Tools Used
1 Topic Recommender Category + client site URL Ranked list of article recommendations with scores and reasoning Layers 1–2 data
2 Voice Analyzer Client site URL Writer personas (DNA profiles) for style matching HTTP crawler, NLP
3 Project Analyzer Target project directory or URL Shell detection, design tokens, component inventory, tech stack File system, HTTP
4 Research Engine Topic + domain lock 6-round research report + 4–6 image prompts Gemini MCP, WebSearch, WebFetch
5 Article Architect Research report + component inventory 5 concepts → selected architecture → TOC with sidebar labels → image plan → section metadata Internal analysis
6 Draft Writer Architecture + research + writer persona + design tokens Framework-native article (.tsx, .vue, .svelte, .html) with inline edit UI and section-level editing Framework adapters
7 Quality Gate Generated article + target persona 7-signal quality score. Below 7/10 = auto-revision. Max 2 passes before human review flag. SEO scoring rubric

3 Adaptation Modes

The pipeline adapts to whatever the target project provides:

1
Existing Components

Project has its own component library. TBK-IQ detects and uses them — cards, callouts, tables, heroes — native to the site.

2
Registry Blueprints

Project has no components. TBK-IQ uses its 193 structural component blueprints, styled with the project's design tokens.

3
Fallback Generation

No project detected at all (standalone mode). TBK-IQ generates self-contained HTML with inline CSS, professional defaults, and the full edit UI. Works as a static file, email attachment, or raw upload.

Section-Level Editing

Every generated article includes an inline edit UI. Users click "Edit" on any section, type a revision instruction ("make this more technical", "add a comparison table"), and TBK-IQ rewrites just that section via a bridge server that spawns a Claude subprocess. No full regeneration needed.

07

Layer 5: Universal Publishing

Generate once, publish anywhere. CMS adapters, WordPress plugin, Shopify app.

CMS Adapter Architecture

Each adapter translates TBK-IQ's standardized article output (HTML + metadata + images + SEO fields) into the CMS's native format.

Platform Integration Capabilities Builder Compat
WordPress REST API + Plugin Create posts, categories, featured image, Yoast/RankMath meta All (Gutenberg, Elementor, WPBakery, Classic)
Shopify Admin API + App Create blog posts, SEO meta, product references All themes (Liquid-native)
Contentful Management API Create entries, publish, media upload N/A (headless)
Strapi REST / GraphQL Create content, upload media N/A (headless)
Ghost Admin API Create posts, set cards, metadata Native editor
Webflow CMS API Create collection items, SEO fields Webflow Designer
Custom Webhook / REST POST article payload to any endpoint Any
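The adapter pattern above can be sketched as a small interface plus the generic webhook fallback. Class and method names here are assumptions, not the shipped adapter API, and the HTTP call itself is omitted.

```python
import json
from abc import ABC, abstractmethod

class CMSAdapter(ABC):
    """Each adapter translates the standardized article payload
    (HTML + metadata + images + SEO fields) into one CMS's native format."""

    @abstractmethod
    def to_native(self, article: dict) -> dict:
        """Return the CMS-native request for this article."""

class WebhookAdapter(CMSAdapter):
    """Generic fallback from the table above: POST the payload as-is to a
    client-provided endpoint. Only builds the request; sending is left
    to the transport layer."""

    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def to_native(self, article: dict) -> dict:
        return {
            "url": self.endpoint,
            "method": "POST",
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(article),
        }
```

A WordPress or Shopify adapter would subclass the same interface and map fields onto that platform's API instead.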

WordPress Plugin — Universal Compatibility

Key insight: The plugin operates at the wp_posts database level via wp_insert_post(), not at the builder level. WordPress stores content in the same table regardless of whether Gutenberg, Elementor, WPBakery, or Classic Editor rendered it. This is why it works with every builder — it doesn't compete with them.

The WordPress plugin adds:

Shopify App

Embedded Shopify admin app using the Blog API. Product-aware — can reference the store's catalog in generated articles. Simpler than WordPress because Shopify has one content path, not multiple builders. Distributed via Shopify App Store.

08

Contingencies & Edge Cases

What happens when the client's setup doesn't match the happy path.

Scenario What Happens Fallback
Site is not WordPress or Shopify CMS adapter system handles this. If the CMS has a REST/GraphQL API (Contentful, Strapi, Ghost, Webflow, Sanity), use the matching adapter. If no adapter exists: generic webhook adapter POSTs the article payload to any endpoint. If no API at all: output standalone HTML file (current v1 behavior) for manual upload.
Site is a custom-built CMS with no standard API Generic webhook adapter sends structured JSON (HTML body + metadata + images) to a client-provided endpoint. Client implements a small webhook receiver on their end. We provide a reference implementation. Estimated effort: 2–4 hours for any backend developer.
Client doesn't have GSC or GA4 connected Layer 2 (Intelligence) can still function using Semrush + Ahrefs data alone for gap analysis, keyword opportunities, and saturation scoring. Decay detection requires GSC data — without it, the system recommends only NEW articles (no refresh recommendations). Prompt client to connect GSC for full functionality.
Client doesn't have a Semrush or Ahrefs subscription TBK-IQ uses its own API keys (cost absorbed into subscription). The client doesn't need their own account. If TBK-IQ API costs need reduction: fall back to free/lower-cost data sources (Google Keyword Planner API, Ubersuggest API, or web scraping SERP results).
Site has fewer than 50 articles (insufficient for voice analysis) Voice Analyzer needs 50+ articles for statistically meaningful clustering. Fall back to single "brand voice" persona defined manually with the client (tone guide, vocabulary preferences, example articles they admire). Or use the best 20–30 articles with reduced confidence.
All existing content is AI-generated (no human baseline) Voice Analyzer detects this and flags it. Two options: (1) Client provides 5–10 reference articles from other publications they want to emulate, or (2) TBK-IQ uses a curated "professional editorial" persona as baseline with client-specific vocabulary overlaid.
Site is in a language TBK-IQ hasn't seen before Research engine and draft writer work in any language (LLM-native). Voice analysis uses language-agnostic stylometric signals (sentence length, variance, structure). Arabic/RTL is first-class (SRMG). Other languages: research quality depends on Gemini/web coverage in that language. Low-resource languages may produce shallower research.
Google OAuth verification takes longer than 3 weeks Use "Testing" mode (supports up to 100 test users — sufficient for pilot clients). If verification is blocked entirely: accept manual GSC/GA4 CSV data exports. Less automated but functional. Push verification as a background task.
Semrush/Ahrefs API changes pricing or access terms All SEO data sources are behind an adapter layer. Swap to alternative providers (Moz API, SpyFu, SimilarWeb) with a new adapter. The intelligence engine consumes normalized data — it doesn't care which provider produced it.
Client wants to use a different SEO tool (e.g., Moz, Sistrix, SE Ranking) Build a new connector adapter for that tool. Each new SEO tool adapter is ~1–2 days of work. The adapter normalizes into the same ContentPerformanceRecord format. No changes needed in the intelligence layer.
Design principle: Every external dependency has a degradation path. TBK-IQ never hard-fails because one data source is unavailable — it operates at reduced confidence and tells the user exactly what's missing.
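This degradation principle can be sketched as a capability report: each analysis declares the sources it requires, and missing sources disable that analysis rather than failing the run. Analysis and source names follow the tables above; the exact keys are assumptions.

```python
def capability_report(connected: set[str]) -> dict:
    """Given the set of connected data sources, report which analyses
    run and, for each disabled analysis, exactly which sources are
    missing (so the user can be told what to connect)."""
    requirements = {
        "decay_detection": {"gsc"},
        "gap_analysis": {"semrush", "ahrefs"},
        "seasonality": {"trends"},
        "saturation_index": {"semrush"},
    }
    enabled = sorted(name for name, needs in requirements.items()
                     if needs <= connected)
    missing = {name: sorted(needs - connected)
               for name, needs in requirements.items()
               if not needs <= connected}
    return {"enabled": enabled, "missing": missing}
```

This matches the contingency table: without GSC, decay detection is disabled (so only NEW articles are recommended) while gap analysis and saturation scoring still run on Semrush + Ahrefs data.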
09

Quality & SEO Scoring Gate

Every article passes through a scoring rubric BEFORE delivery. Below 7/10? Auto-revise.

Adapted from the Master Kit's Content SEO Scoring framework — a multi-signal quality assessment that catches weak content before it reaches the client.

Signal Weight What It Measures Threshold
E-E-A-T Signals 20% Experience markers, expertise depth, authority signals, trust elements ≥ 7/10
Topical Completeness 20% Coverage vs top 5 competitors' subtopics (completeness matrix) ≥ 80% coverage
Voice Match 15% Stylometric distance from target writer persona (sentence length, TTR, cadence) ≤ 0.3 distance
AI Detection Score 15% Probability of passing AI detection (sentence variance, vocabulary, cliche density) ≥ 85% human
Freshness Signals 10% Current statistics, recent examples, working links, up-to-date screenshots All data ≤ 6mo old
Technical SEO 10% Heading hierarchy, schema markup, meta description, image alt text, internal links ≥ 9/10
Readability 10% Flesch-Kincaid grade, paragraph length, section structure, scanability Grade 8–12
Auto-revision loop: If the article scores below 7/10 on any signal, the draft-writer agent is re-invoked with targeted instructions: "Improve topical completeness — you're missing coverage of [subtopic X, Y, Z] that all top 5 competitors include." Max 2 revision passes. If still below threshold after 2 passes, flag for human review.
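The gate and revision trigger can be sketched with the weights from the table. Signal keys are shorthand for the table rows, each signal is assumed normalized to a 0–10 score upstream, and the pass/revise logic is a simplification of the described flow.

```python
WEIGHTS = {
    "eeat": 0.20, "completeness": 0.20, "voice_match": 0.15,
    "ai_detection": 0.15, "freshness": 0.10, "technical_seo": 0.10,
    "readability": 0.10,
}

def quality_gate(scores: dict, threshold: float = 7.0, passes_left: int = 2):
    """Any signal below threshold triggers a revision pass; when the
    pass budget is exhausted, flag for human review instead."""
    weighted = sum(scores[k] * w for k, w in WEIGHTS.items())
    failing = sorted(k for k, v in scores.items() if v < threshold)
    if not failing:
        return {"overall": round(weighted, 2), "action": "DELIVER", "failing": []}
    action = "REVISE" if passes_left > 0 else "HUMAN_REVIEW"
    return {"overall": round(weighted, 2), "action": action, "failing": failing}
```

The `failing` list is what feeds the targeted re-draft instruction ("improve topical completeness...") described above.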
10

The Feedback Loop

The moat nobody else has. TBK-IQ learns from its own output.

After publishing, TBK-IQ tracks the article's GSC/GA4 performance at 30, 60, and 90 days. This creates a closed loop:

Recommend Topic
Generate Article
Publish
Track Performance
Feed Back to Layer 1

If an article underperforms the prediction, the intelligence engine recalibrates: "Articles about [topic X] in this niche don't convert as well as the saturation index suggested." Over time, recommendations get sharper. This is how TBK-IQ becomes more valuable the longer a client uses it — their data makes the engine smarter.
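One illustrative recalibration step, assuming a per-niche score multiplier nudged toward the observed actual-to-predicted ratio. The real engine's update rule is not specified in this document; this exponential-moving-average form is a stand-in.

```python
def recalibrate(multiplier: float, predicted: float, actual: float,
                learning_rate: float = 0.2) -> float:
    """Nudge a per-niche score multiplier toward the actual/predicted
    traffic ratio observed at the 30/60/90-day checkpoints. A multiplier
    below 1.0 means the engine has been over-predicting this niche."""
    if predicted <= 0:
        return multiplier  # no prediction to compare against
    observed_ratio = actual / predicted
    return (1 - learning_rate) * multiplier + learning_rate * observed_ratio
```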

11

3-Week Parallel Sprint

Full scope. 4 parallel workstreams. 3 weeks. All external dependency applications submitted Day 1.

Day 1 actions (before any code): Submit Google OAuth consent screen, apply for Semrush API access, apply for Ahrefs API access, submit WordPress plugin shell to wp.org, create Shopify Partner app listing. All have 1–6 week approval windows — start the clock immediately.

Week 1 — Foundation Sprint

Four teams work in parallel. No cross-dependencies until end of week.

Stream Team Week 1 Deliverables
A: Data Pipes 2 engineers GSC OAuth2 connector, GA4 connector, Semrush connector, Ahrefs connector, Google Trends integration, CMS HTTP crawler, ContentPerformanceRecord schema, data normalization pipeline, caching layer
B: Intelligence 2 engineers Topic Recommendation Agent spec + scaffold, decay detection engine (port Master Kit logic), gap analysis module, keyword-content mapping (port cannibalization rules), saturation index scorer
C: Voice 1 engineer + 1 NLP Voice Analyzer Agent, corpus crawler, AI vs human classifier (stylometric signals), writer clustering (HDBSCAN), persona schema design, PoC on 3 real client sites
D: Publishing 2 engineers CMS adapter base architecture, universal article payload format, WordPress plugin (PHP skeleton, wp_insert_post, Yoast/RankMath meta), Shopify app scaffold (embedded admin, Blog API), media upload pipeline design

Week 2 — Integration Sprint

Streams converge. Intelligence consumes data pipes. Voice feeds into draft-writer. Publishing connects to pipeline output.

Stream Team Week 2 Deliverables
A: Data Pipes 2 engineers End-to-end data pull for 1 real client, Supabase schema extension (performance_snapshots, keyword_opportunities, writer_personas, client_connections), data refresh scheduling, API cost monitoring dashboard
B: Intelligence 2 engineers Wire Mode A to live data pipes, seasonality model (Trends curves), competitor content velocity scoring, scored recommendation output format, SEO quality scoring gate (7-signal rubric), auto-revision loop (below 7/10 = re-draft)
C: Voice 1 engineer + 1 NLP Persona generator (DNA profiles), inject persona as style constraint into draft-writer agent, voice match scoring (stylometric distance), validate output on 3 client sites — blind test: can client tell AI from their own writer?
D: Publishing 2 engineers WordPress plugin full build (all builder testing: Gutenberg, Elementor, WPBakery, Classic), Shopify app full build, Contentful + Ghost + Webflow adapters, generic webhook adapter, media upload pipeline (images → CMS CDN)

Week 3 — Polish + Feedback Loop + Demo

Full system integration. Feedback loop. Multi-client dashboard. Demo-ready.

Stream Team Week 3 Deliverables
ALL: Integrate Full team End-to-end demo: category in → Mode B recommendations → user picks → voice-matched article → published to WordPress/Shopify as draft. 30/60/90 day performance tracker, prediction vs actual recalibration engine, multi-client dashboard, scheduled content calendars, client-facing performance reports, A/B headline testing integration
End of Week 3 demo: Give the system a category ("BMW tuning content for tbklabs.com"). It pulls GSC/GA4/Semrush/Ahrefs data, identifies the top 5 articles to write, you pick one, it generates the article in the voice of the site's existing writer, and publishes it as a draft to WordPress. The whole flow, data-driven end to end.
12

Action Items — Plugging the Gaps

Everything that needs to happen before each phase can start. Prioritized by dependency order.

P0 — Blockers (Before Any Phase)

01
Resolve v1 Security Issues

The 3 P0 security issues (service_role on disk, tokens on disk, prompt injection) from the v1 audit must be fixed before any new data connectors are added. Data connectors handle OAuth tokens — the same class of vulnerability.

SECURITY BLOCKS PHASE 1
02
Semrush API Access — Apply and Get Approved

Semrush API access requires a Guru+ plan ($249/mo) or API units purchase. Apply now — approval can take 1–2 weeks. Without this, Gap Analysis and Saturation Index cannot function.

EXTERNAL DEPENDENCY BLOCKS PHASE 1
03
Ahrefs API Access — Apply and Get Approved

Ahrefs API requires an Enterprise plan or separate API subscription. Provides backlink data, domain rating, and keyword explorer that Semrush doesn't duplicate well. Apply in parallel with Semrush.

EXTERNAL DEPENDENCY BLOCKS PHASE 1
04
Google Cloud Project — OAuth Consent Screen Setup

GSC and GA4 both require OAuth 2.0. Need a Google Cloud project with the Search Console API and Analytics Data API enabled, plus an OAuth consent screen (starts in "Testing" mode, needs verification for production). Set up now — Google verification takes 2–6 weeks.

EXTERNAL DEPENDENCY BLOCKS PHASE 1
05
Database Schema Extension for Performance Data

Current Supabase schema handles articles and auth. Needs extension for: content_inventory, performance_snapshots, keyword_opportunities, writer_personas, client_connections (OAuth tokens for GSC/GA4/Semrush/Ahrefs per client). Design schema before building connectors.

ARCHITECTURE BLOCKS PHASE 1

P1 — Phase 2 Prerequisites

06
Define the Topic Recommendation Agent Spec

This is the most complex new agent. Needs a formal spec before implementation: input format, scoring algorithm (how to weight volume vs KD vs decay vs seasonality vs saturation), output format, confidence thresholds. The scoring weights will need tuning against real client data.

DESIGN BLOCKS PHASE 2
07
Voice Analysis Research — Validate Stylometric Approach

Before building the full Voice Analyzer, run a proof-of-concept on 3 real client sites: can the stylometric signals reliably distinguish writers? Can the clustering produce meaningful groups? If the signal isn't strong enough with 50 articles, we may need 100+. This determines feasibility before full build.

RESEARCH BLOCKS PHASE 2
08
Port Content Decay Logic from Master Kit

The 36-seo/content-seo/content-decay-refresh.md contains the full decay detection methodology (4 detection methods, priority scoring formula, refresh triggers). Extract the decision logic and implement as a scoring module that runs against ContentPerformanceRecords.

ARCHITECTURE
09
Port Keyword-Content Mapping Logic from Master Kit

The keyword-content-mapping.template.md contains cannibalization detection rules, one-keyword-per-page enforcement, intent alignment checks. This becomes the content map that prevents the agent from recommending articles that would cannibalize existing rankings.

ARCHITECTURE

P2 — Phase 3 Prerequisites

10
WordPress Plugin — Register on WordPress.org

WordPress.org plugin submissions require review (1–4 weeks). Submit the plugin shell early with basic functionality. Full features are added via updates after approval. Also need a WordPress test environment with Gutenberg, Elementor, WPBakery, and Classic Editor for compatibility testing.

EXTERNAL DEPENDENCY
11
Shopify App — Partner Account + App Listing

Shopify apps require a Shopify Partner account (free) and app listing submission. The app review process takes 1–3 weeks. Submit early. Also requires a Shopify development store for testing.

EXTERNAL DEPENDENCY
12
Define Universal Article Payload Format

All CMS adapters consume the same payload. Need to define the canonical format: HTML body, SEO fields (title, meta description, OG image), taxonomy (categories, tags), author, publish date, featured image (URL + alt), schema markup. This is the contract between the pipeline and every adapter.

ARCHITECTURE
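A candidate shape for this contract, sketched as a Python dataclass. Field names are a proposal drawn from the fields listed above, not the final format.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ArticlePayload:
    """Canonical article payload consumed by every CMS adapter."""
    html_body: str
    title: str
    meta_description: str
    author: str
    publish_date: str                       # ISO 8601 date string
    categories: list[str] = field(default_factory=list)
    tags: list[str] = field(default_factory=list)
    featured_image_url: str = ""
    featured_image_alt: str = ""
    og_image_url: str = ""
    schema_markup: dict = field(default_factory=dict)  # JSON-LD

    def to_dict(self) -> dict:
        """Serialize for webhook delivery or adapter translation."""
        return asdict(self)
```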
13
Image CDN Strategy for CMS Publishing

Currently images are base64-embedded in HTML. For CMS publishing, images need to be uploaded to the CMS's media library (WordPress Media, Shopify Files, etc.) and referenced by URL. Need a media pipeline that: generates image → uploads to target CMS → returns URL → inserts in article HTML.

ARCHITECTURE
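The swap step can be sketched as a regex pass that hands each base64-embedded image to an uploader and substitutes the returned URL. The `upload` callable is an assumption standing in for each CMS's media API (WordPress Media, Shopify Files, etc.).

```python
import re

def rehost_images(html: str, upload) -> str:
    """Find base64-embedded images in article HTML, upload each via the
    adapter-supplied callable `upload(mime, b64_data) -> url`, and swap
    in the returned CDN URL."""
    pattern = re.compile(r'src="data:(image/[a-z+]+);base64,([^"]+)"')

    def replace(match):
        url = upload(match.group(1), match.group(2))
        return f'src="{url}"'

    return pattern.sub(replace, html)
```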
13

Cost Model & Risk Assessment

Per-client operational costs and key risks to monitor.

Per-Client Monthly API Costs

Service Est. Monthly Cost Notes
Semrush API $80–150 Depends on API unit consumption. Caching reduces by ~60%.
Ahrefs API $50–120 Row-based pricing. Backlink data is the largest cost driver.
Google APIs (GSC + GA4) $0 Free within standard quotas (25K requests/day GSC, 10K GA4).
Google Trends $0 Unofficial API (pytrends) — no cost but rate-limited.
LLM Tokens (article generation) $80–200 Depends on articles/month. Voice analysis adds ~$20/mo.
Supabase (database + auth) $25–50 Pro plan. Scales with data volume.
Total per client $235–520/mo At Tier 2 ($5K/mo) = 90–95% gross margin

Risk Register

Risk Impact Likelihood Mitigation
Semrush/Ahrefs API changes or pricing increase HIGH MEDIUM Abstract behind adapter layer. Can swap providers without pipeline changes.
Google OAuth verification rejected CRITICAL LOW Submit early. Follow Google's sensitive scope guidelines exactly. Have fallback: manual GSC data export.
Voice analysis doesn't produce distinct personas HIGH MEDIUM Run PoC in Phase 1. If clustering fails, fall back to single "brand voice" persona defined manually.
WordPress plugin rejected from directory MEDIUM LOW Self-hosted distribution as fallback. Most enterprise clients don't use wp.org directory anyway.
AI detection evolves, bypassing voice cloning HIGH MEDIUM The goal isn't to "fool" detectors — it's to write AUTHENTICALLY. Voice cloning produces naturally varied output that reads human because it IS human-patterned.
14

Revenue Model

Three service tiers designed to match publisher scale — from self-serve intelligence to fully managed editorial operations.

Tier 1

Intelligence Platform

Full AI platform access. Data-driven topic recommendations, voice analysis, automated article generation, and SEO quality scoring — delivered to the client's editorial team via dashboard and content calendar.

$3,000/mo · $36K ARR

Best for: publishers with in-house SEO teams

Tier 2 — Target

Platform + Managed Service

Everything in Tier 1, plus Chain Reaction managed service. Editorial validation by human specialists, content strategy consulting, voice calibration, and ongoing performance optimization.

$5,000/mo · $60K ARR

Best for: enterprise publishers without dedicated SEO resource

Tier 3

Platform + Full Editorial Intelligence

Complete managed intelligence. Automated content generation at scale, GEO optimization for AI search visibility, trending topic discovery, dedicated CR strategy team, and weekly prioritized action briefs.

$8,000–$12,000/mo · $96–144K ARR

Best for: large media groups requiring full content intelligence operations

Setup fee: $20,000 one-time per client · Tier 2 Year 1 total: $80K · Gross margin: ~55–62% · One Tier 2 deployment recovers the entire pilot build cost.

TBK-IQ transforms content generation
from an execution engine into an intelligence platform.

Semrush tells you what keywords exist. TBK-IQ tells you which ones are worth writing about for YOUR specific site, in YOUR specific voice, and then proves it worked.