The most common reporting mistake on a GEO project is treating an AI mention and an AI citation as the same thing. The brand name slips into an answer once and the team declares victory — but the project has not actually moved.

These two signals have to be tracked separately, or what you read as "improvement" is just noise.

1. What counts as a mention

A mention is any time the AI's answer body names your brand, product, or service, or clearly points at your company as an entity.

But mentions are not equal in value. Position is everything:

  • First recommendation — AI's main answer leads with the brand. Highest value.
  • Top 3 — visible in the front of a recommendation list. Strong shortlist signal.
  • Tail position — buried at the end of a list or noted in passing. Weak.
  • Footnote only — appears in a source list or aside. Marginal.

Your reporting table needs a separate "position" column. "Mentioned: yes/no" alone is not enough.
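
If the review sheet is maintained in code, the position tiers map naturally onto an ordered enum, so movement between checkpoints becomes a numeric comparison. A minimal Python sketch; the MentionPosition name and tier values are illustrative, not a fixed schema:

```python
from enum import IntEnum

class MentionPosition(IntEnum):
    """Ordered mention-position tiers; higher is better."""
    NONE = 0      # brand absent from the answer
    FOOTNOTE = 1  # source list or aside only
    TAIL = 2      # end of a list, or noted in passing
    TOP_3 = 3     # front of a recommendation list
    FIRST = 4     # the answer leads with the brand

# An ordered tier makes movement between checkpoints a simple comparison:
t1, t2 = MentionPosition.TAIL, MentionPosition.TOP_3
assert t2 > t1  # position improved between checkpoints
```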

2. What counts as a citation

A citation has a higher bar: the AI must explicitly reference a specific page or source, and that source must be traceable back to a URL — yours or third-party.

Compare:

"Mingde is a Chengdu-based GEO consultancy."

That is a mention.

"According to mingde.ai's service page, they use a 7-dimension method for GEO optimization (source: mingde.ai/services/geo)."

That is a citation.

Citations are harder to earn. AI does not cite pages that are mostly slogans with no extractable evidence.
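
If you archive raw answers, the mention/citation split can be approximated mechanically: a citation requires a traceable source URL, a mention only the brand name. A rough sketch under that assumption; the classify helper, brand list, and URL pattern are all illustrative, and a regex will not catch every citation format:

```python
import re

URL_PATTERN = re.compile(r"https?://\S+|(?:[\w-]+\.)+(?:com|ai|cn|org|net)(?:/\S*)?")

def classify(answer: str, brand_names: list[str]) -> dict:
    """Label an archived AI answer as mention and/or citation.

    A mention needs only the brand name in the answer body;
    a citation additionally needs a traceable source URL.
    """
    mentioned = any(name.lower() in answer.lower() for name in brand_names)
    urls = URL_PATTERN.findall(answer)
    return {
        "mention": mentioned,
        "citation": mentioned and bool(urls),
        "citation_urls": urls if mentioned else [],
    }

# The service-page example above classifies as a citation:
print(classify(
    "According to mingde.ai's service page, they use a 7-dimension "
    "method for GEO optimization (source: mingde.ai/services/geo).",
    ["Mingde", "mingde.ai"],
))
```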

3. Why citations are harder

The pages AI tends to cite share these traits:

  • The lead paragraph directly answers the question — not a tagline
  • Clear FAQ or H2/H3 hierarchy
  • Cases, processes, tables, or checklists
  • Explicit "fit / not fit" boundaries
  • Crawlable and indexable (no JS-only rendering, no login wall)

If you cannot point at the paragraph an AI would lift to cite your page, that page is not citation-ready.
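
The trait list doubles as a per-page audit. A minimal checklist sketch; the field names are illustrative, and in practice each check would be filled in by hand or by a crawler:

```python
from dataclasses import dataclass, fields

@dataclass
class CitationReadiness:
    """Per-page audit against the traits AI-cited pages tend to share."""
    lead_answers_question: bool    # first paragraph answers, not a tagline
    clear_heading_hierarchy: bool  # FAQ or H2/H3 structure
    extractable_evidence: bool     # cases, processes, tables, checklists
    explicit_fit_boundaries: bool  # "fit / not fit" stated
    crawlable_and_indexable: bool  # no JS-only rendering, no login wall

    @property
    def citation_ready(self) -> bool:
        # one failed trait is enough to block citation
        return all(getattr(self, f.name) for f in fields(self))

page = CitationReadiness(True, True, True, False, True)
print(page.citation_ready)  # False: no explicit fit boundaries yet
```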

4. The 4-layer GEO scorecard

When we review a GEO project, we score four layers. Each layer covers a different failure mode:

  1. Search engine indexing — eligibility. If site: doesn't return the page, AI probably won't either.
  2. AI mention — entry into the answer. The first threshold.
  3. AI citation of off-site sources — trust. Your method and brand are showing up in third-party records.
  4. AI citation of your own first-party page — conversion capture. This is the real GEO outcome.

Score them separately. Combining the four into one number always misleads.
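
That rule can be baked into the data structure itself: store four scores and deliberately expose no composite. A sketch under that constraint; the layer names follow the list above, and the integer scores are an assumption:

```python
from dataclasses import dataclass

@dataclass
class GeoScorecard:
    """Four independent layers; deliberately no combined score."""
    indexing: int             # eligibility: does site: return the page?
    mention: int              # entry into the answer body
    offsite_citation: int     # third-party sources citing your method/brand
    firstparty_citation: int  # AI citing your own page: the real outcome

    # No total() or blended score on purpose: a single number hides
    # which layer is failing, which is the whole diagnostic value.

baseline = GeoScorecard(indexing=80, mention=10,
                        offsite_citation=0, firstparty_citation=0)
```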

If you are diagnosing why your site isn't getting recommended, see "Why AI search isn't recommending your website". For the full audit framework, see "How to run an AI search optimization audit".

5. Why early projects watch mention, mature projects watch citation

The trajectory we see across most projects:

  • T0 baseline: mention near zero, citation zero
  • T1 (site changes + 7-14 days): mentions appear, but only in tail positions; citations still rare
  • T2 (off-site posts + 7-21 days): mention positions move up; citations begin to appear, mostly off-site (Zhihu, CSDN)
  • T3 (full round + 3-6 weeks): mention enters top 3; citations begin to point to your own site

So early on, track whether mentions appear and whether positions are moving up. Later on, track whether citations are reaching your own pages.
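
That phase split is easy to encode as a checkpoint roll-up: one early-phase metric (mention rate and average position) and one late-phase metric (citations reaching your own domain). A sketch, assuming each review row is a dict with the illustrative keys mention_position (the 0-4 tiers, none through first) and citation_url:

```python
def checkpoint_summary(rows: list[dict], own_domain: str) -> dict:
    """Roll up one checkpoint's review rows into the two phase metrics."""
    positions = [r["mention_position"] for r in rows]
    citations = [r.get("citation_url") or "" for r in rows]
    return {
        # early-phase metrics: are mentions appearing, and how high?
        "mention_rate": sum(p > 0 for p in positions) / len(rows),
        "avg_position": sum(positions) / len(rows),
        # late-phase metric: are citations reaching your own pages?
        "own_site_citations": sum(own_domain in u for u in citations),
    }

t1 = checkpoint_summary(
    [{"mention_position": 2, "citation_url": None},
     {"mention_position": 0, "citation_url": None}],
    own_domain="mingde.ai",
)
print(t1)  # {'mention_rate': 0.5, 'avg_position': 1.0, 'own_site_citations': 0}
```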

6. Columns for your review table

A direct field list to copy into a Phase 0 baseline + T1/T2/T3 follow-up sheet:

  • Long-tail keyword
  • AI platform (Doubao, DeepSeek, Tongyi, Kimi, etc.)
  • Raw answer (markdown or screenshot archive)
  • Mention y/n
  • Mention position (first / top-3 / tail / footnote)
  • Citation y/n
  • Citation source URL (own site or third-party)
  • Competitor appearances
  • Answer structure (list / paragraph / table)
  • Hallucination risk (any factual error?)
  • Next action

Same table at every checkpoint. The comparison falls out of the data.
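
The field list maps directly onto a row schema, which keeps the table identical at every checkpoint by construction. A sketch; the ReviewRow name, field names, and types are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewRow:
    """One query x platform observation at one checkpoint (T0..T3)."""
    checkpoint: str                  # "T0" | "T1" | "T2" | "T3"
    keyword: str                     # long-tail keyword
    platform: str                    # Doubao, DeepSeek, Tongyi, Kimi, ...
    raw_answer: str                  # markdown or screenshot archive path
    mention: bool
    mention_position: Optional[str]  # first / top-3 / tail / footnote
    citation: bool
    citation_url: Optional[str]      # own site or third-party
    competitors: list[str]
    answer_structure: str            # list / paragraph / table
    hallucination_risk: bool         # any factual error spotted?
    next_action: str
```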


If you are setting up a GEO review and want a starting baseline, book a free AI visibility audit — we will run 30 P0 long-tail queries across four AI platforms for your industry and hand back the baseline.