Glossary


This glossary ties everyday language to the mechanics teams implement in warehouses and dashboards. Words like “visibility” or “citation” only become measurable when you attach definitions: which strings count as a mention, whether subdomains roll up, how multi-brand paragraphs are labeled, and how repeated runs aggregate into rates. Use these entries as a shared spec starter; your legal and analytics owners should still publish an internal dictionary with version numbers whenever parsers change.
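As a sketch of what such an internal dictionary might pin down, here is a minimal mention spec; every field name and example value is illustrative, not a standard:

```python
# Hypothetical mention spec; version it alongside parser releases.
MENTION_SPEC = {
    "version": "2024-06-01",
    "brand": "ExampleCo",                          # illustrative brand name
    "match_strings": ["exampleco", "example co"],  # which strings count as a mention
    "case_sensitive": False,
    "rollup_subdomains": True,                     # docs.example.com rolls up to example.com
    "multi_brand_paragraph": "label_all",          # vs. "label_primary_only"
    "rate_definition": "runs_with_mention / total_runs",  # how repeated runs become a rate
}
```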

AI visibility tracker
Software that records how brands and URLs appear in AI-mediated answers. Mechanically it orchestrates prompt jobs, captures interface state, parses spans and links, and stores append-only observations for aggregation. Site overview.
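A minimal sketch of that orchestrate-capture-parse-store loop, assuming a hypothetical `ask` callable for the engine client and a stub parser:

```python
import datetime
import uuid

def parse_answer(raw: str) -> tuple[list, list]:
    """Stand-in parser; a real one extracts mention spans and citation links."""
    return [], []

def run_job(ask, prompt: dict, store: list) -> None:
    raw = ask(prompt["text"])          # capture the rendered answer / interface state
    spans, links = parse_answer(raw)   # parse mention spans and citation links
    store.append({                     # append-only: corrections go to override tables
        "observation_id": str(uuid.uuid4()),
        "prompt_hash": prompt["hash"],
        "captured_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "raw": raw,
        "spans": spans,
        "links": links,
    })
```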
AI visibility tracking
The practice of running prompt sets on schedules, storing answers and citations, and reviewing variance. It includes operational choices—rate limits, retries, deduplication, human review—that turn raw generative output into accountable metrics. AI visibility tracking guide.
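For example, jittered exponential backoff is one common way to make scheduled runs survive rate limits; the error class here is a stand-in for whatever retryable error your client actually raises:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for your client's retryable errors (timeouts, 429s)."""

def with_retries(call, attempts: int = 3, base_delay: float = 1.0):
    for i in range(attempts):
        try:
            return call()
        except TransientError:
            if i == attempts - 1:
                raise                  # exhausted: surface the failure for human review
            time.sleep(base_delay * (2 ** i) + random.random())  # jittered backoff
```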
AI Overviews
Google Search feature that can show an AI-generated summary block with mixed text and links. Tracking must record whether the block triggered, what elements rendered, and how citations map to sentences. AI Overviews tracker.
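A hypothetical capture record for one AI Overviews run might look like this; the field names are illustrative, not a Google schema:

```python
overview_capture = {
    "block_triggered": True,                 # did the AI summary block render at all?
    "elements": ["summary_text", "link_cards"],
    "sentence_citations": {                  # which citations support which sentence
        0: ["https://example.com/guide"],
        2: ["https://example.com/faq", "https://example.org/study"],
    },
}
```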
AI Mode
Google's AI Mode experience, which layers conversational behavior on top of Search intent. Tracking should log the session policy (cold start vs. scripted follow-ups) because the mechanics differ from one-shot summaries. AI Mode tracker.
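One way to make that policy explicit is a label stored on every row; the two modes named here are just the ones from the definition above:

```python
from enum import Enum

class SessionPolicy(Enum):
    COLD_START = "cold_start"            # fresh session for every prompt
    SCRIPTED_FOLLOW_UPS = "scripted"     # replay a fixed multi-turn script

# Stamping the policy on each observation keeps one-shot and
# conversational runs out of the same time series.
```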
Citation
A linked or attributed source inside an AI answer. Programs count citations to map evidence and should canonicalize URLs (strip tracking parameters) before domain rollups so metrics stay stable when marketing adds new query strings.
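A minimal canonicalizer using the standard library, assuming a hand-maintained set of tracking parameters:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

# Parameters treated as tracking noise; extend this set for your stack.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "utm_term",
                   "utm_content", "gclid", "fbclid"}

def canonicalize(url: str) -> str:
    """Drop tracking query parameters and fragments before domain rollups."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept), fragment=""))
```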
Share of voice
A comparative slice: how often your brand appears versus a competitor set inside the same prompt basket. Mechanically it is a conditional rate per prompt group, not a global popularity score, unless you define a population model explicitly.
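As a sketch of one possible definition, assuming each observation row carries a set of detected brands, the conditional rate looks like this:

```python
def share_of_voice(observations: list, brand: str, competitors: set) -> float:
    """Among runs in this prompt group where any tracked brand appears,
    the fraction where *your* brand appears. Not a population estimate."""
    basket = [o for o in observations if o["brands"] & ({brand} | competitors)]
    if not basket:
        return 0.0
    return sum(brand in o["brands"] for o in basket) / len(basket)
```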
Prompt
The user text you replay on a schedule to test visibility. Version prompts with hashes so accidental edits do not silently fork a time series.
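A minimal fingerprint sketch; the whitespace normalization is one possible policy, not a requirement:

```python
import hashlib

def prompt_hash(text: str) -> str:
    """Stable prompt fingerprint; any substantive edit changes the hash,
    so forks in the time series become visible rather than silent."""
    normalized = " ".join(text.split())  # policy choice: ignore cosmetic whitespace
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:12]
```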
Schedule
How often runs execute, for example daily or weekly. Schedules interact with variance: sparse schedules hide spikes; overly dense schedules burn budget for diminishing returns on confidence intervals.
Observation
A single captured result for one prompt at one time with metadata. Rows should be immutable; corrections happen in override tables keyed by observation id.
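A sketch of the immutability rule with illustrative field names; corrections reference the row rather than mutate it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)   # rows are immutable once written
class Observation:
    observation_id: str
    prompt_hash: str
    captured_at: str
    raw: str

# Corrections live in a separate override table keyed by observation_id, e.g.:
override = {"observation_id": "abc-123", "field": "spans",
            "value": [], "reason": "parser v2 fix"}
```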
Gemini
Google’s Gemini family of models and products. Coverage varies by surface; always store which product produced the row. Gemini visibility tracker.
ChatGPT
OpenAI’s ChatGPT product line. Free, Plus, Team, and API paths differ mechanically—tag them. ChatGPT visibility tracker.
Perplexity
Perplexity's AI answer product, which cites sources heavily; parsers must deduplicate inline citations against the end-of-answer source list. Perplexity visibility tracker.
Claude
Anthropic’s Claude product line; long outputs and policy screens affect extraction cost and refusal metrics. Claude visibility tracker.
Mistral
Mistral AI models and related products, as covered by your tracking vendor; multi-deployment setups need explicit labels. Mistral visibility tracker.
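Across all of the engine entries above, explicit labels are what keep rows comparable; the tag values in this sketch are illustrative, not official product identifiers:

```python
from enum import Enum

class Engine(Enum):
    GEMINI = "gemini"
    CHATGPT = "chatgpt"
    PERPLEXITY = "perplexity"
    CLAUDE = "claude"
    MISTRAL = "mistral"

# Pair the engine with a surface/tier tag so mechanically different
# paths never share a time series, e.g. ("chatgpt", "plus_web") vs.
# ("chatgpt", "api"), or a per-host label for multi-deployment Mistral.
```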

Ready to track in production?

Software helps you run prompts on schedules, store evidence, and compare engines without manual copy-paste.

Start Tracking