Before hiring an AI consultancy, answer one question: are you buying an answer, or a result?

Management consulting delivers answers: strategy analyses, market reports, organizational diagnostics. If AI consulting stops at that layer, it becomes a "PPT firm with an AI wrapper". This piece covers three things: what an AI consultancy actually delivers; where the boundaries sit with management consulting, software implementation, and "AI enablement" engagements; and how to tell whether a firm can ship.

1. The four deliverables of AI enterprise consulting

Every real project we've shipped since 2024 maps into one or more of these four categories. The framing isn't theoretical — it comes from retrospectives across dozens of engagements.

1. A running system

This is the hardest deliverable. Not a demo. Not a POC. Not a "script that can run". A system inside the client's production environment, connected to real data, used daily by real employees.

A concrete case: a hardware factory in Guangdong. After diagnosis, the biggest efficiency leak sat in manual order reconciliation by the sales-ops team. We shipped an ERP-connected AI assistant; two months later the team dropped from 5 people to 2, and shipping delays went from 18% to 6%. The deliverable was that system — engineers installed it on-site, trained staff until they could use it, and responded immediately when issues came up.

The boundary condition for a "running system": it must be connected to real data, real workflows, and real users. The moment any link is fake (sample data in a demo, a sandbox disconnected from production), it doesn't count as delivered.

2. A quantified business change

Shipping the system is not enough. You also have to prove it moved business numbers. Most AI consultancies refuse to commit to this — because once it's in the contract, the client can withhold payment or cancel based on results.

Our approach: define 2-3 key metrics jointly with the client during the diagnosis phase. Examples:

  • Efficiency: person-days on a role, cycle time of a process, rework count on a defect class
  • Cost: monthly/annual savings on a cost line, savings from bringing an outsourced step in-house
  • Revenue: inquiry-to-close rate on a channel, repeat-buy rate for a cohort, time-to-shelf for a new SKU

Targets are signed before delivery and compared against data 30 days after. If the numbers haven't moved, we run a second optimization cycle at our cost. If they still don't move, we refund.
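
To make the mechanics concrete, here is a minimal sketch of what that 30-day check reduces to. All metric names, targets, and values below are hypothetical, loosely borrowed from the factory example above; real contracts define them jointly during diagnosis.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    baseline: float        # value measured and signed during diagnosis
    target: float          # value agreed in the contract
    lower_is_better: bool  # e.g. cycle time (lower) vs. close rate (higher)

    def met(self, actual: float) -> bool:
        """True if the 30-day actual satisfies the agreed target."""
        if self.lower_is_better:
            return actual <= self.target
        return actual >= self.target

# Hypothetical targets, echoing the hardware-factory case above
metrics = [
    Metric("reconciliation headcount", baseline=5, target=3, lower_is_better=True),
    Metric("shipping delay rate", baseline=0.18, target=0.10, lower_is_better=True),
]

# Values measured 30 days after delivery
actuals = {"reconciliation headcount": 2, "shipping delay rate": 0.06}

missed = [m.name for m in metrics if not m.met(actuals[m.name])]
if missed:
    print(f"Missed {missed}: second optimization cycle at the vendor's cost")
else:
    print("All agreed metrics met: delivery validated")
```

The point isn't the code; it's that the check is mechanical. If a firm can't express its targets this crisply, it hasn't committed to anything.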

This is the single sharpest line separating credible AI consultancies from AI-wrapped PPT firms. Firms that only commit to "building AI capability" or "advancing digital transformation" — deliberately vague phrases — will never sign a quantified target.

3. Capability transfer to the client team

People often overlook this one, but it's the difference between a project that keeps running and one that decays.

An extreme example: a retail chain hired a large consultancy for a 6-month "AI enablement program". Weekly meetings, weekly status reports, a 300-page methodology handbook at the end. The day the engagement closed, the consulting team rolled off, and the client's IT team couldn't maintain a single module independently — they had been "coordinators", never hands-on.

Credible AI consulting must hand over capability alongside the system:

  • During on-site engineering, the client's IT engineers must shadow, touch the code, and debug a real incident solo at least once
  • The business team must have used the system three times or more, and know which scenarios fit and which don't
  • Management must have reviewed three monthly data retrospectives and be able to interpret the metrics themselves

Without these three, the project is "rented AI" — it starts decaying the day we leave.

4. Methodology documentation (supporting deliverable)

Documentation matters, but it is not the primary deliverable. Its role is to help the client operate after handoff — so it must read like an operations manual, not a theory textbook. Specifically:

  • Architecture diagram + a runbook for each component
  • "Standard responses" handbook for typical business scenarios (what employees should do when X)
  • Incident playbook for data anomalies and model drift
  • Decision tree for future optimization directions

Important: this document serves the first three deliverables. It does not replace them. If an AI consulting project ends with only this document — no running system, no quantified change, no capability transfer — the project has failed, whether or not the client realizes it.

2. Where the boundaries sit: management consulting, software implementation, and "AI enablement"

vs. management consulting: different granularity of deliverable

Management consulting deliverables are strategic — 3-5 year transformation roadmaps, org restructuring plans, new business investment recommendations. These are executable but not easily verifiable. Whether a McKinsey growth strategy was right might take three years to judge.

AI consulting deliverables must be operational — results this month, this quarter, this half-year. Operational projects must have clear boundaries: which processes to touch, which tools to use, how long to deliver, what metrics to validate against.

This leads to a useful heuristic: a good AI consulting project usually runs 6 months or less. Longer usually means the scope wasn't focused enough, or the firm is padding the timeline. Our standard delivery window is 2-3 months; anything past 4 months triggers a review of whether the scope was carved correctly.

vs. software implementation: accountability for business outcomes

Software implementers get paid on "successfully installed". AI consultancies get paid on "numbers moved".

The difference looks small but the impact on the client is enormous:

  • Under the implementation model, the vendor installs, trains, and leaves. Whether anyone actually uses it is the client's problem. Usage holds up for three months, then drops off, data looks wrong, staff revert to the old process — all normal, and all no longer the vendor's concern.
  • Under the consulting model, if the 30-day data looks bad, we come back at our cost to adjust. If 60-day data still fails, a second optimization cycle kicks in. The refund / rework terms are in the contract.

Our own constraint: fewer than 20 clients per year. Not a marketing line — it's because the 30-day post-delivery validation is expensive, and taking on more would degrade quality.

vs. "AI enablement" engagements: not a repackaged training-plus-vendor-intro

Time to call out the "AI enablement consulting" category directly: what these engagements actually deliver is a few AI tool training sessions plus introductions to a few AI vendors.

Neither of these is what consulting should deliver:

  • Training is training, priced per day (¥20-50k/day is the going rate)
  • Vendor introductions are channel/integrator work, compensated via referral fees

Bundling these into "AI enterprise consulting" at six figures is essentially an information-asymmetry play — the client doesn't know what these would cost unbundled.

Real AI consulting must do scenario selection and solution design grounded in your specific business, then ship. Training and vendor coordination are supporting motions, not the main course.

3. How to tell whether a firm can actually ship

After a month of initial conversations, how do you judge? Five specific questions:

Question 1: Can they name three concrete scenarios in your industry?

If the answer stays at "AI can help you make smarter decisions" or "we have a universal framework for every industry", end the conversation.

A credible firm, 30 minutes into hearing your industry and business, should be able to name 3-5 very specific scenarios. For example: "In apparel export, the clearest ROI sits in quote generation, sample fast-tracking, and overseas compliance review. We shipped a quoting scenario for a Zhejiang factory last year — cycle time went from 4 hours to 40 minutes."

Firms that can cite specific clients, specific roles, specific numbers are the ones who've actually worked in that industry.

Question 2: Does the proposal include a "won't do" list?

A proposal with only a "will do" list and no "won't do" list is a template.

A good proposal must draw explicit boundaries:

  • What this phase won't include (e.g., predictive maintenance stays off until the knowledge base is stable)
  • Which scenarios we judge unsuitable for AI (e.g., your small-batch custom line — automation ROI is too thin)
  • Why tool A was chosen over tool B (with a selection rationale, not just "A is better")

A firm willing to say no is far more credible than one that says yes to everything.

Question 3: Is there a 30-day validation clause?

The contract must state: within 30 days of delivery, the client has the right to validate against agreed metrics. If the numbers miss, the firm adjusts at its cost, or refunds pro-rata.
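
The contract clause itself defines what "pro-rata" means; that basis varies by agreement and isn't something to assume. Purely as an illustration (the scheme below is hypothetical, not any firm's actual terms), one simple reading refunds the share of the fee matching the fraction of agreed metrics that missed:

```python
def pro_rata_refund(fee: float, metrics_missed: int, metrics_agreed: int) -> float:
    """Hypothetical scheme: refund in proportion to missed metrics."""
    return fee * metrics_missed / metrics_agreed

# e.g. a ¥300k project where 1 of 3 agreed metrics missed the target
print(pro_rata_refund(300_000, 1, 3))  # 100000.0
```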

Most firms dodge this with "AI effects are hard to validate short-term" / "it depends on long-term operations" / "it depends on your business team's cooperation". Half of that is true, half is avoidance.

Our terms: 30 days is the minimum retrospective window; 90 days is the full validation window. Both come with the data. If we can't commit, we don't take the project.

Question 4: Engineers on-site or remote delivery?

Nine times out of ten, remote-delivery projects end in "we executed the contract; the outcome depends on your team's cooperation". Engineers who aren't on-site can't understand the business details, can't respond to staff issues immediately, and can't tell which processes genuinely need changing versus which are cover stories.

Credible AI consulting has at least 50% of the critical work on-site. Our standard: 2-3 days on-site for diagnosis, 2-6 weeks of engineering on-site for implementation, 1-2 days on-site for validation.

Question 5: Can you talk to past clients?

A credible firm, at the right moment (typically the final round before signing), offers 1-2 past clients as references. The reference must be a real decision-maker — owner, CIO, business lead — not their account manager contact.

If, after multiple rounds, no concrete client is available to speak with, either the firm has no real delivery history, or the outcome left the client unwilling to vouch. Both are red flags.

4. Who fits this kind of engagement

Not every company needs to hire a consultancy. A quick filter:

Good fit (if 2-3 of these apply):

  • A concrete business pain, but uncertainty about whether / how AI can solve it
  • ¥50M+ annual revenue, with some IT base (even 1-2 IT staff)
  • Willing to commit 2-3 months of deep involvement from owner, business lead, and IT
  • Can budget ¥200-500k per project
  • Tried AI pilots before with no outcome, and want to understand why

Not a fit:

  • Only willing to spend small amounts to "try AI" (budgets under ¥100k mostly buy training or API calls)
  • Under 50 employees, no dedicated IT (commercial SaaS is more cost-effective)
  • Want a report to show upstream (management consulting does this for less)
  • Expect visible results inside a month (fastest AI projects show initial numbers at 30 days, full validation at 90)

5. Why we do this

Our own position, to close.

We don't think AI is a universal answer. Many of the problems companies face right now don't need AI — they need process clean-up, management discipline, basic digitization.

But for companies where processes are clean enough, data is accessible, and the team is willing to engage, what AI can change is real. What we do is walk in, see clearly where AI belongs, which tool fits, and what budget makes sense, then stick around until it runs.

If you're shopping for AI consulting, our first advice isn't "come to us"; it's to take a free AI maturity audit first. A 15-minute questionnaire plus a half-day on-site interview. We'll tell you whether you're ready, where to start, and what budget range fits. If the work suits our capability, we'll take it on; if not, we'll point you elsewhere.

Fewer than 20 clients a year. We'd rather do less than dilute the outcome each project deserves.

If this posture matches what you're looking for, read through our service lines, or reach out to discuss specifics.