Pillar Guide
How to track AI mentions of your brand
A five-step instrumentation playbook for AI visibility tracking. Each step maps to the real Menra product flow — pick prompts, connect engines, set cadence, read the reports, act on gaps. The whole loop takes about an hour to set up and produces a useful baseline within seven days.
10-minute read · Updated April 2026
Before you start
The two prerequisites are a clear sense of which buyer questions matter (so you can pick the right prompts) and a robots.txt that allows the major AI crawlers (so the engines can fetch your content when synthesizing answers). The crawler allowlist piece is covered in detail in the companion blog post, how GPTBot is quietly replacing Googlebot.
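The allowlist itself is a few lines of robots.txt. A minimal sketch — the user-agent strings below are the ones the major vendors have publicly documented, but check each vendor's current crawler docs before shipping, since names and crawlers change:

```
# Allow the major AI crawlers to fetch content for answer synthesis.
# Verify each token against the vendor's own documentation first.
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /
```

If your robots.txt ends with a blanket `User-agent: * / Disallow: /` rule, these explicit allow groups take precedence for the named crawlers, which is usually the behavior you want.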
For the conceptual foundation — why AI mention tracking matters and what's structurally different from SEO — start with what is GEO and what is AEO. This pillar is the operational walkthrough; those two pillars cover the why.
Step 1 — Identify your target prompts
Pick fifteen to thirty prompts your customers actually ask AI engines when they're shopping in your category. The temptation is to start with high-volume keyword-style prompts ("what is AI search"). Resist it. Top-of-funnel prompts produce flashy citation share but rarely move pipeline. Bottom-funnel prompts ("best CRM for early-stage startups", "alternatives to Notion for product teams") are where citation share converts.
A reliable starter framework: take the five most common questions your sales team answers on first calls and rephrase each one in the four to six different ways a buyer might put it to ChatGPT. That gives you twenty to thirty prompts grounded in real intent. Add three to five competitor-comparison prompts and one to two pricing-question prompts to round out the cluster.
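The arithmetic of that framework is easy to sketch. The questions and phrasings below are invented placeholders — substitute your own sales-call questions:

```python
# Starter framework sketch: five first-call questions, each rephrased
# the ways a buyer might put it to an AI engine, plus competitor and
# pricing prompts. All strings here are illustrative placeholders.
sales_call_questions = {
    "which CRM fits an early-stage startup": [
        "best CRM for early-stage startups",
        "what CRM should a seed-stage startup use",
        "easiest CRM to set up for a new company",
        "affordable CRM for a 10-person team",
    ],
    # ...four more questions, each with four to six phrasings,
    # lands you at twenty to thirty intent-grounded prompts.
}

competitor_prompts = ["alternatives to Notion for product teams"]
pricing_prompts = ["how much does a CRM cost per seat"]

prompt_cluster = [
    phrasing
    for phrasings in sales_call_questions.values()
    for phrasing in phrasings
] + competitor_prompts + pricing_prompts
```

With all five questions filled in, `len(prompt_cluster)` lands in the twenty-to-thirty range the step calls for.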
Inside Menra, prompts are configured per brand on the dashboard. Each subscription includes five prompts; if you're starting with twenty, the additional fifteen come from kontör top-ups (see pricing). Most teams settle around twenty active prompts after the first month.
Step 2 — Connect AI sources
ChatGPT and Perplexity are the universal floor. They are where most B2B buyers are running discovery prompts in 2026, and they disagree often enough that measuring both gives you a meaningful range. Once those two are wired, layer on engines based on your audience: Claude for technical buyers, Gemini and Google AI Overviews for SMB and consumer, Copilot for Microsoft-heavy enterprises, Grok for X-native communities, DeepSeek for APAC, Meta AI for consumer Meta surfaces.
Menra includes three platforms in the base $69/month subscription. The other six are individually priced add-ons. The pragmatic shape for most B2B brands is ChatGPT + Perplexity + one of (Claude or Gemini) on day one, with the rest added as monthly budget allows.
In the dashboard, each platform shows up as a separate scan source with its own daily run schedule. The platform-coverage page in your Menra account exposes the per-engine cost in kontör so you can model the kontör burn before committing.
Step 3 — Set monitoring frequency
Daily is the minimum useful cadence. AI answers shift within 24 hours when a competitor's PR push lands, when a new review hits G2, or when an industry publication publishes a category roundup. Weekly cadence misses these movements entirely; monthly cadence is essentially anecdotal.
Hourly is overkill for most B2B brands. The signal-to-noise drops fast — most prompts produce identical answers across consecutive scans, so you're paying for redundant runs. Hourly is justified for time-sensitive launches (you're shipping a major product update and want to track citation pickup hour-by-hour) or for very high-volatility consumer categories.
Menra defaults to daily scans during early-morning UTC hours so reports are ready when teams open their dashboard. The schedule is editable per brand if your team works on a different cadence.
Step 4 — Read citation reports
Every Menra weekly report exposes four numbers per prompt per engine: citation share (mentioned vs. not), citation position (lead source, mid-paragraph, footnote), sentiment (-1 to +1), and the source URLs the AI cited. The first three numbers tell you how you're doing. The fourth tells you why.
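One report cell — one prompt on one engine — can be pictured as a small record. The field names below are a hypothetical shape for illustration, not Menra's actual export schema:

```python
from dataclasses import dataclass, field

@dataclass
class CitationReportRow:
    """One prompt x engine cell from a weekly report (hypothetical schema)."""
    prompt: str
    engine: str
    cited: bool                 # citation share input: mentioned vs. not
    position: str               # "lead", "mid-paragraph", or "footnote"
    sentiment: float            # -1.0 (negative) to +1.0 (positive)
    source_urls: list[str] = field(default_factory=list)  # the "why"

row = CitationReportRow(
    prompt="best CRM for early-stage startups",
    engine="chatgpt",
    cited=True,
    position="footnote",
    sentiment=0.4,
    source_urls=["https://www.g2.com/categories/crm"],
)
```

The first four fields answer "how are we doing"; `source_urls` answers "why" and feeds the prioritization in the next paragraph.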
The source URL list is where most teams find their highest-leverage actions. If ChatGPT is citing G2 for your category but you have no G2 page, that's a one-week project worth six months of blog content. If Perplexity is pulling from a specific industry publication you've never pitched, that's a PR target. The source graph is the prioritized list of where to invest next.
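Turning the raw source URLs into that prioritized list is a counting exercise. A minimal sketch, assuming you have flattened a week of cited URLs into one list (the URLs below are made up):

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical week of source URLs pulled from citation reports.
cited_urls = [
    "https://www.g2.com/categories/crm",
    "https://www.g2.com/products/example-crm/reviews",
    "https://industry-pub.example.com/crm-roundup-2026",
]

# Count citations per domain; the most-cited domains you have no
# presence on are the highest-leverage targets.
domain_counts = Counter(urlparse(url).netloc for url in cited_urls)
priority_targets = domain_counts.most_common()
```

Here `priority_targets` would surface `www.g2.com` first, which is exactly the "no G2 page" gap described above.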
The dashboard surfaces three default views: a per-prompt history (citation share over time), a per-engine breakdown (which engines you win on, which you lose), and a competitor matrix (how your share stacks against the three to five brands you flagged as competitors at signup).
Step 5 — Act on competitive gaps
Acting on the data is where the program either compounds or stalls. The actionable output of any week's report is a prioritized list of three to five gap prompts — prompts where a competitor wins and your brand is absent or footnoted.
The fix splits three ways. First, restructure top URLs that should be answering each gap prompt: rewrite around direct-answer density, add FAQPage schema, refresh dates and statistics. Second, earn mentions on the high-authority sources AI is pulling from for your competitor — review-generation programs, targeted PR pitches, Reddit AMA presence. Third, refresh stale evergreen content; AI engines weight recency for most categories.
Re-scan after each sprint, compare the citation-share delta to your baseline, and pick the next month's gap targets. That loop, repeated monthly, is the entire program. Most teams see meaningful citation-share movement within two months; a full discipline shift takes a quarter.
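The monthly loop reduces to two computations: the citation-share delta against baseline, and the next batch of gap targets. A sketch with invented numbers; the 0.25 threshold is an illustrative cutoff, not a Menra rule:

```python
# Citation share per prompt (fraction of scans where the brand was
# mentioned), baseline vs. after this month's sprint. Numbers invented.
baseline = {"best crm for startups": 0.10,
            "notion alternatives": 0.00,
            "crm pricing": 0.45}
current  = {"best crm for startups": 0.30,
            "notion alternatives": 0.05,
            "crm pricing": 0.40}

# Delta vs. baseline: did the sprint move the needle?
deltas = {p: round(current[p] - baseline[p], 2) for p in baseline}

# Next month's gap targets: prompts still below the threshold,
# weakest citation share first.
THRESHOLD = 0.25
gap_targets = sorted(
    (p for p, share in current.items() if share < THRESHOLD),
    key=current.get,
)
```

In this toy data the sprint lifted "best crm for startups" by 20 points, while "notion alternatives" stays on the gap list for the next cycle.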
What this looks like in practice
A typical first month for a Menra customer: signup and prompt configuration take about 60 minutes. The first useful baseline lands after 7 days of scans. The first restructuring sprint (rewrite five top URLs around direct-answer density, ship FAQPage schema) takes about two weeks of marketing-engineering time. Re-scan the second week, look at citation-share deltas, pick three new URLs for the next sprint. By month two, the loop is in production; by month three, citation-share movement is consistent and reportable to leadership.
Where to go next
- What is GEO? — the umbrella discipline; why AI mention tracking matters.
- What is AEO? — the answer-engine subset; retrieval mechanics that decide citation.
- How GPTBot is quietly replacing Googlebot — the crawler-side hygiene piece.
- Menra vs the alternatives — how Menra compares to other AI visibility tools.
Start tracking your AI mentions — one subscription at $69/mo.
See pricing