BLOG · HIRING & PROCUREMENT · 2026

23 questions to ask any automation consultant before signing

Sales calls with automation consultants tend to follow a script. The consultant pitches their methodology, you describe your problem at a high level, and 45 minutes later a proposal arrives in your inbox. That process tells you almost nothing about whether the consultant can actually deliver. These 23 questions break the script — discovery, technical, commercial, and reference questions designed to surface real capability or expose its absence.

By Automation Labz · Updated May 10, 2026 · 15 min read
SECTION 01

Why these questions matter

The asymmetry of information in sales calls heavily favors the consultant. They've done this conversation a thousand times. You've done it a few times at most. They have polished answers to predictable questions and well-rehearsed transitions back to their pitch. The default sales conversation gives them exactly what they want — a chance to sell.

Specific, probing questions break the rhythm of the sales script. Good consultants welcome them; the questions give them an opportunity to demonstrate real expertise. Sales-driven consultants struggle with them; the questions force them off-script and reveal capability gaps. Both reactions are signal you need before signing.

The 23 questions below are organized by category. You don't need to ask all 23 in one call — that would be exhausting and unfocused. Instead, pick 8-12 questions most relevant to your project and ask those across two or three conversations. The questions cover four categories of risk: problem understanding, technical capability, execution staffing, and commercial alignment. The 23rd question — a tiebreaker — gets asked at the end if multiple consultants survive the rest.

The goal isn't to interrogate. It's to gather specific evidence of capability. Good consultants will appreciate the rigor — they want to work with clients who take the engagement seriously. Bad consultants will become defensive, vague, or attempt to redirect. Both responses give you information.

Bring these questions in writing to your sales calls. Take notes on the answers. After each call, score the responses against what you expected. Compare across consultants. The exercise will surface which consultant actually understands automation work and which is selling automation marketing.

SECTION 02

Six questions about how they understand your problem

Before any technical or commercial discussion, the consultant needs to demonstrate they understand your actual problem. If they don't, everything downstream is broken.

1. "Walk me through what you understood from the brief in your own words." Don't accept "we read it and we're excited about the project." Force them to paraphrase. If they can't articulate your problem clearly in their own words, they don't actually understand it. They're going to build something else.

2. "What questions does the brief leave unanswered for you?" Every brief has gaps. A consultant who says "your brief is comprehensive, we have no questions" is either lying or hasn't actually engaged with the material. Good consultants come to discovery with 5-15 specific questions about edge cases, integration details, and decision points the brief didn't cover.

3. "What part of this project worries you most?" Real consultants have informed opinions about risk. They've seen similar projects fail and know where the failure modes are. A consultant who can't identify the risky parts of your project either hasn't thought about it carefully or has limited experience with this type of work.

4. "What's the simplest version of this that would still deliver value?" Tests their willingness to push back on scope. Many projects can deliver 80% of the value at 50% of the cost by ruthlessly scoping. Consultants who eagerly accept whatever scope you propose may be optimizing for project size rather than your outcome.

5. "How would you measure whether this project was successful 6 months after launch?" Forces them to think about outcomes, not deliverables. The right answer ties to your business metrics (the things you wrote in your brief's success criteria). The wrong answer is vague language about "delivering value" or "achieving goals."

6. "If you were the client, what would you do differently in this brief?" Reveals whether they'll be a critical thinker on your project or just an order-taker. Good consultants have opinions about how to structure work. They'll point out things they'd clarify, sequence differently, or scope differently. Order-takers will say "I think the brief looks great."

SECTION 03

Five questions about technical approach

Once they've demonstrated they understand the problem, the next category is technical credibility.

7. "What stack would you recommend for this project, and why?" Listen for specifics and reasoning. Good consultants name specific tools and explain why those tools fit. They'll discuss tradeoffs — why they chose tool A over tool B for your specific situation. Bad consultants either give generic answers ("we'd use a combination of best-in-class tools") or default to whatever they always use regardless of fit.

8. "What's the riskiest technical decision in this project?" Tests whether they've thought through the technical architecture. The riskiest decision in a multi-system automation might be integration approach with a specific system, data model design, or error-handling strategy. Consultants who can name the risky decision and explain why it's risky are doing real architecture work in their head. Consultants who say "we don't see major technical risks" haven't analyzed the project.

9. "How would you handle errors and exceptions in this automation?" Error handling is where amateur automation projects die. Production automations need to handle: temporary system outages, data quality issues, edge cases the design didn't anticipate, partial failures across multi-step workflows, and recovery from failed state. Good consultants have a specific approach. Bad consultants have a generic "we monitor and alert" answer.
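The failure modes above can be made concrete. Here's a minimal Python sketch (every name, such as TransientError and dead_letter, is hypothetical and not tied to any specific platform) of retry-with-backoff for temporary outages plus a dead-letter queue for data problems that retrying will not fix:

```python
import time

class TransientError(Exception):
    """Temporary failure (outage, rate limit): worth retrying."""

class ValidationError(Exception):
    """Bad input data: retrying will not help."""

DEAD_LETTERS = []  # parked payloads awaiting manual review or replay

def dead_letter(payload, reason):
    DEAD_LETTERS.append({"payload": payload, "reason": reason})

def run_step(step, payload, max_attempts=3, base_delay=1.0):
    """Run one workflow step, retrying transient failures with backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step(payload)
        except TransientError as exc:
            if attempt == max_attempts:
                dead_letter(payload, f"still failing after {attempt} tries: {exc}")
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff
        except ValidationError as exc:
            dead_letter(payload, f"bad data: {exc}")  # no retry for data issues
            raise
```

The point isn't this specific code; it's that a good consultant can describe their approach at this level of detail, including which failures get retried and where irrecoverable work goes.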

10. "What's your approach to testing this kind of automation?" Testing automation is harder than testing traditional software because the systems being orchestrated are external. Good consultants discuss specific approaches — sandbox environments, mocked external services, test data sets, end-to-end test scenarios. They might mention specific testing frameworks. Bad consultants say "we test thoroughly before launch" and don't elaborate.
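To make "mocked external services" concrete: one common pattern is injecting a fake client in place of the real API, so end-to-end scenarios run without touching production systems. A minimal Python sketch, with every name hypothetical:

```python
class FakeCRM:
    """Stand-in for a real CRM API: records calls instead of making them."""
    def __init__(self):
        self.created = []

    def create_contact(self, email):
        self.created.append(email)
        return {"id": len(self.created), "email": email}

def sync_signups(signups, crm):
    """The automation under test: push signups that have an email into the CRM."""
    return [crm.create_contact(s["email"]) for s in signups if s.get("email")]

# End-to-end test scenario against the fake, no network required.
crm = FakeCRM()
results = sync_signups([{"email": "a@example.com"}, {"name": "no email"}], crm)
assert len(results) == 1
assert crm.created == ["a@example.com"]
```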

11. "How will the automation be monitored once it's live?" Monitoring is consistently the weakest part of automation projects. Production automations need: error alerting, performance tracking, business metric tracking (is the automation actually producing the intended outcomes), and audit logging. Good consultants describe specific monitoring approaches. Bad consultants either don't mention monitoring or describe basic uptime checks.
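The gap between a bare uptime check and real monitoring is easier to see in code. A hypothetical Python sketch that checks both error counts and a business metric (daily throughput); the stats shape and thresholds are illustrative assumptions, not a standard:

```python
def check_automation_health(stats, expected_daily_volume, alert):
    """Flag technical failures and business-level drift.

    stats: {'processed': int, 'errors': int} for the last 24h (assumed shape).
    alert: callable that pages on-call or posts to a channel.
    """
    alerts = []
    if stats["errors"] > 0:
        alerts.append(f"{stats['errors']} failed runs in the last 24h")
    # Business metric: is the automation producing the intended volume of work?
    if stats["processed"] < 0.5 * expected_daily_volume:
        alerts.append(
            f"throughput {stats['processed']} is under half the expected "
            f"{expected_daily_volume}; the upstream feed may be broken"
        )
    for msg in alerts:
        alert(msg)
    return alerts
```

A basic uptime check would catch neither case: the automation can be "up" while silently failing runs or processing a fraction of its normal volume.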

SECTION 04

Four questions about staffing and execution

Many sales-driven consultants are highly responsive during the sale and significantly less responsive during execution. These questions surface what execution will actually look like.

12. "Who specifically will work on my project, and what percentage of their time will be allocated?" Names matter. So does percentage allocation. A senior engineer at 25% time for 8 weeks is very different from a junior engineer at 100% time for 4 weeks. Same total hours, very different outcomes. If the consultant won't commit to specific staffing, the project is likely to be staffed with whoever's available, which is rarely the same people who sold the project.

13. "What happens if your assigned engineer becomes unavailable?" Tests whether they have real continuity planning. Good consultants have backup staffing thought through. They might bring a second engineer up to speed during the project specifically for continuity. Bad consultants give vague "we'll handle it" answers.

14. "What's your typical project management approach?" Project management quality predicts project outcome quality. Good consultants describe specific cadences (weekly status calls, async daily updates, milestone reviews) and specific tools (project management software, shared documentation, communication channels). Bad consultants say "we're flexible to your preferences" or describe generic agile theater.

15. "What does typical client involvement look like during your projects?" Honest consultants tell you that you'll need 5-15 hours per week of client involvement during active phases (discovery, UAT, launch). Consultants who claim "you'll barely need to be involved" are either inexperienced or misleading you. Most projects fail when client involvement is too low, not too high. Setting realistic expectations upfront is honest behavior.

SECTION 05

Four questions about commercial terms

Commercial questions surface alignment around money, risk, and what happens when things deviate.

16. "How do you handle scope changes that come up during the project?" Good answer: a documented change-order process with specific cost estimation and explicit approval before work proceeds. Bad answer: "we're flexible" or "minor changes we absorb, major changes we discuss." Vagueness here predicts mid-project disputes. Good consultants have a specific process for handling change because they've learned from past disputes.

17. "What's your payment schedule, and is it negotiable?" Listen to both the schedule and the negotiability. Standard reasonable schedule: 25-30% upfront, balance tied to milestones, 10-20% held until post-launch warranty period passes. Aggressive schedules (50%+ upfront, weekly billing regardless of progress, no milestone tie) suggest the consultant prioritizes cash collection over alignment with delivery. Refusal to negotiate any terms is also signal — flexibility on payment structure is reasonable; inflexibility may indicate cash flow problems.

18. "What's your warranty period, and what does it cover?" Standard: 30-90 days post-launch during which bugs in delivered scope are fixed at no additional cost. Less than 30 days is short. No warranty period is a red flag. Some consultants offer warranty only at additional cost — that's commercial preference but should be clear upfront. Without a warranty period, every post-launch issue becomes a paid change request, which damages the relationship and your budget.

19. "What happens at the end of the project if I want to terminate the engagement before the original scope is complete?" Tests their cancellation terms and IP transfer practices. Good consultants have reasonable termination clauses — you pay for work completed, get work-in-progress handed over, no penalties. Bad consultants have onerous cancellation fees or refuse to provide work-in-progress until full project payment. Ask to see this in writing in the proposed contract before signing.

SECTION 06

Four questions for reference checks

References are where most sales theater unravels. These questions extract real signal from reference calls.

20. "Can I talk to two or three clients with projects similar to mine?" Ask the consultant directly. The hesitation pattern is information — consultants who can immediately offer specific reference clients are confident in their work. Consultants who delay, offer "case study calls" instead of direct references, or only offer references for very different projects are signaling something about how those reference calls might go.

When you get the references, ask the actual clients:

21. "What was the original timeline and what was the actual timeline? What was the original budget and what was the final cost?" Real consultants deliver close to the original timeline and budget most of the time. Variances of 10-20% are normal. Variances above 50% suggest either chronic underestimation or aggressive scope management on the consultant's side. Both are problems. Ask the client whether the variance was explained transparently or felt like a surprise.

22. "What's the post-launch experience been like? Is the consultant still responsive when something needs attention?" Post-launch responsiveness is the strongest predictor of long-term consultant quality. Many consultants are highly responsive during active projects and disappear after launch. Reference clients can tell you whether the consultant has actually been there when needed, or whether they vanished after the final payment cleared.

23. The tiebreaker: "Would you hire them again, and would you recommend them to a friend with a similar project?" The combination matters. Some clients will hire someone again because they're acceptable but wouldn't recommend them. Some will recommend them but wouldn't hire again because they've moved past needing that level of work. Strong "yes" to both is the highest-signal endorsement. Hesitation on either is information.

Take notes during reference calls. Compare what reference clients say to what the consultant claimed during sales calls. Discrepancies between consultant claims and client reality are usually decisive signal — consultants who exaggerate to prospects probably exaggerate during projects too.

SECTION 07

What good answers sound like

Good consultant answers share certain patterns regardless of the specific question:

1. Specificity. Good answers include specific names, specific numbers, specific tools, specific timelines, specific examples. "We typically deliver projects like this in 8-12 weeks, with discovery in the first 2 weeks, build in weeks 3-9, and UAT plus launch in weeks 10-12." Bad answers stay abstract. "Our process is flexible and adapts to project needs."

2. Acknowledged tradeoffs. Real expertise involves understanding tradeoffs. "We could build this in n8n, which is faster and cheaper, but it means you're tied to that platform. We could build it custom, which is more expensive upfront but gives you portability and avoids vendor risk. For your situation, n8n is probably right because [specific reasoning]." Bad answers present one-sided pitches with no acknowledged tradeoffs.

3. Concrete examples from past work. Good consultants reference specific projects they've done. "We built something similar for [type of client] using [specific approach] and the outcome was [specific result]." They might offer to share documentation or take you through the architecture. Bad consultants speak in generalities about "the kinds of projects we typically work on" without specifics.

4. Comfortable disagreement. Good consultants will push back on you when they disagree. "I hear what you're asking for, but I'd recommend a different approach because [reason]." That pushback is valuable — they're bringing expertise to the engagement, not just executing requests. Bad consultants agree with everything you say, which means they're either not engaging critically or planning to surprise you with change orders later.

5. Honest uncertainty. Good consultants admit what they don't know. "I'm not sure about the volume scaling for that specific scenario — let me ask our team and get back to you with a real answer." Bad consultants fake certainty about things they couldn't possibly know.

Pattern-match these qualities across consultants. The differences become visible quickly once you know what to listen for.

SECTION 08

How to use these in practice

You're unlikely to ask all 23 questions in one conversation. Here's a practical approach.

First call (discovery and problem understanding). Ask questions 1-6 from the problem-understanding category. These are the most important questions because if the consultant doesn't understand your problem, nothing else matters. Spend 30-40 minutes of a 60-minute call on these. Use the remaining time for the consultant to ask you questions and present their initial thoughts.

Second call (technical and execution). Ask questions 7-15 from the technical and staffing categories. By the second call, the consultant should have absorbed the project enough to give specific answers. This is where you separate technically rigorous consultants from sales-driven ones. If the second call doesn't happen — if the consultant tries to move directly from problem discovery to proposal — that's information.

Proposal review with commercial questions. When you receive the proposal, ask questions 16-19 about commercial terms. Specifically ask about anything in the proposal that's ambiguous, missing, or different from what was discussed verbally. Get answers in writing as part of the contract negotiation.

Reference calls. Once you've narrowed to 2-3 consultants you'd seriously consider, do reference checks using questions 20-23. Talk to 2-3 references per consultant. These calls take an hour each and are worth every minute.

Final decision. Score consultants across all four question categories. The winner usually emerges clearly. If multiple consultants score similarly, ask question 23 (the tiebreaker) at the very end and look at which references give the strongest endorsements.
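The scoring step can live in a spreadsheet; for illustration, here is a hypothetical Python sketch that averages 1-5 scores across the four question categories, with weights that are made up and should be tuned to your project:

```python
def score_consultant(scores_by_category, weights=None):
    """Weighted average of 1-5 scores across the four question categories.

    Default weights are illustrative assumptions, not a recommendation.
    """
    default = {
        "problem": 0.35,     # questions 1-6
        "technical": 0.30,   # questions 7-11
        "staffing": 0.20,    # questions 12-15
        "commercial": 0.15,  # questions 16-19
    }
    weights = weights or default
    return round(sum(scores_by_category[c] * w for c, w in weights.items()), 2)
```

Weighting problem understanding highest mirrors the ordering argued above: if the consultant doesn't understand your problem, strong technical or commercial answers don't rescue the engagement.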

This process takes 4-8 hours of your time per consultant evaluated. For a project worth $25K+, that investment is worth it. The cost of picking the wrong consultant — a failed project, lost time, having to start over — dwarfs those hours many times over.

Most operators skip this rigor because it feels slow. The result: project failure rates above 40% in automation engagements. Operators who use this rigor have project failure rates below 10%. The discipline pays off in project outcomes.

Frequently asked questions

Five questions operators ask most when evaluating automation consultants.

How many consultants should I interview before deciding?

3-5 is the right range. Fewer than 3 limits your ability to compare answers and recognize patterns. More than 5 dilutes your time without adding meaningful information. Pre-qualify aggressively before scheduling calls — read case studies, check operating history, verify they've done similar work. Sending these 23 questions to 5 pre-qualified consultants beats sending them to 15 random ones.

Should I send the questions in advance or ask them live?

Mostly ask live. Sending in advance lets the consultant prepare polished answers that reveal less. Asking live forces extemporaneous responses that reveal real understanding. Two exceptions: (1) reference questions can be sent in advance to references so they have time to think, (2) commercial questions about specific contract terms should be in writing as part of negotiation. The discovery and technical questions are most useful asked live during calls.

What if the consultant refuses to answer some questions?

Refusal patterns reveal more than the actual answers would. Consultants who refuse to name specific staffing, refuse to share references, or refuse to discuss commercial terms before signing are signaling concerning behavior. If they won't answer reasonable evaluation questions before signing, they're unlikely to answer reasonable accountability questions during the engagement. Move to the next consultant.

How do I evaluate the technical questions if I'm not technical?

You don't need to evaluate the technical correctness of answers — you need to evaluate the specificity, confidence, and consistency of answers. A non-technical operator can absolutely tell the difference between a consultant who says "we use Make.com with custom webhooks for the integrations, fall back to manual queue on retries past 3 attempts, and monitor with PagerDuty alerts wired to your on-call team" versus a consultant who says "we use best-in-class automation tooling with robust error handling." If you genuinely want technical evaluation, hire a one-time technical advisor for 4-6 hours to review the proposals with you. That investment costs $500-$1,500 and routinely saves multiples of that.

What if all the consultants I talk to give similar answers?

If three or more consultants give similar high-quality answers, the differentiation moves to other factors — references, cultural fit, specific industry experience, gut feel after meeting the team. That's actually a good problem to have. Multiple credible options means lower risk. Pick the one whose references are strongest, whose team you have best rapport with, and whose pricing is most aligned with your budget. If all consultants give similar low-quality answers, your candidate pool is wrong — go source different consultants.

Get a brief that helps you evaluate consultants

These 23 questions work best when paired with a clear brief that gives consultants something specific to respond to. Our free 4-minute audit produces a complete automation brief — scope, success criteria, technical context — that turns vague consultant pitches into specific, comparable proposals. Use the brief as the foundation for your consultant evaluation, or send it to us for matching with vetted partners. No obligation. The brief is yours.

No credit card. No follow-up call unless you ask.