AI Agent Discovery Questions: What to Ask Before You Quote the Build
A lot of AI agent projects go sideways before a single prompt is written.
Not because the model is bad. Not because the tools are weak. Because the discovery was lazy.
Somebody hears:
- we want an AI agent for this workflow
- we spend too much time on this process
- can you automate this with AI?
And instead of slowing down for ten useful questions, they jump straight to solution cosplay.
They scope a build. They quote a number. They imagine a clean happy path. Then three weeks later they discover the workflow depends on:
- stale CRM data
- undocumented approval rules
- edge cases nobody mentioned
- one operations person holding the whole thing together with vibes and keyboard shortcuts
Classic.
If you want to make money with AI agent work without lighting your margin on fire, you need better discovery.
Not longer calls. Not consultant theater. Just sharper questions.
Because the point of discovery is not to sound smart. The point is to find out whether this workflow is:
- worth solving
- safe to touch
- economically viable
- actually ready for an agent
So here are the questions I would ask before quoting the build.
## The first question: what painful thing are we actually trying to improve?
A shocking number of AI projects begin with a fake problem statement.
The buyer says they want:
- automation
- AI agents
- autonomous operations
- less manual work
That is not the real problem. That is the chosen medicine.
You need the disease.
Ask:
- what specific workflow is creating drag right now?
- where does work pile up?
- what gets delayed, dropped, or done badly?
- what is expensive about the current process?
- what happens if this never gets fixed?
You are trying to find the operational pain in plain language.
Good answers sound like this:
- proposals sit for two days because nobody owns first draft assembly
- inbound requests arrive in five places and ops has to manually normalize them
- customer emails get triaged inconsistently and high-value leads wait too long
- account managers spend six hours a week turning messy calls into action items
Bad answers sound like this:
- we want to use AI more
- we think an agent could help somewhere
- our team is excited about automation
That is not scope. That is ambient enthusiasm.
## What does the workflow look like today, step by step?
If the current process is blurry, your future build will be too.
Ask them to walk the workflow from trigger to outcome.
You want to know:
- what starts the workflow?
- what inputs show up?
- what systems get touched?
- what decisions get made?
- what outputs are produced?
- what ends the workflow?
This sounds obvious, but it is where half the hidden mess appears.
People will say a workflow is simple. Then you map it and discover:
- three different intake paths
- two different systems of record
- one manual spreadsheet that somehow matters more than both
- a side-channel approval process in Slack or email
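A structured map keeps that mess from staying hidden. Here is a minimal sketch of what to capture during the walkthrough; every field name and example value is invented purely to show the shape:

```python
from dataclasses import dataclass

@dataclass
class WorkflowMap:
    """Structured answers to the trigger-to-outcome walkthrough."""
    trigger: str            # what starts the workflow
    inputs: list[str]       # what shows up when it starts
    systems: list[str]      # every system touched, including the spreadsheets
    decisions: list[str]    # judgment points, and who makes each call
    outputs: list[str]      # what the workflow produces
    end_condition: str      # what counts as done

# Invented example, not a real client:
proposals = WorkflowMap(
    trigger="sales marks a deal 'needs proposal' in the CRM",
    inputs=["deal record", "call notes", "pricing sheet"],
    systems=["HubSpot", "Google Docs", "one load-bearing spreadsheet"],
    decisions=["discount approval (sales manager)", "scope sign-off (ops)"],
    outputs=["draft proposal document"],
    end_condition="proposal sent to the customer",
)
```

If they cannot fill in those six fields without arguing, you have found the audit.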
If nobody can explain the current flow clearly, do not pretend the agent layer will fix that for free.
It usually means one of two things:
- the workflow needs cleanup before automation
- the real product is an audit, not a build
That is good news, by the way. You just found the right first offer.
## What percentage of the work is actually repetitive?
Not all busywork is automatable work.
Some workflows feel repetitive because they are annoying. That does not mean they are structurally consistent enough for an agent.
Ask:
- how often does this happen?
- how similar are the cases?
- what percentage follows the normal path?
- what percentage needs judgment, negotiation, or policy interpretation?
- what are the top 5 exception types?
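If nobody can answer the exception question from memory, a tally over a recent export usually can. A minimal sketch, assuming a hypothetical cases.csv where a blank exception_type means the case followed the normal path:

```python
import csv
from collections import Counter

# Tally exception types from a sample export of recent cases.
# "cases.csv" and its "exception_type" column are hypothetical;
# substitute whatever the client's system can actually export.
with open("cases.csv", newline="") as f:
    rows = list(csv.DictReader(f))

counts = Counter((row.get("exception_type") or "normal_path") for row in rows)
total = len(rows)

for case_type, n in counts.most_common(6):
    print(f"{case_type}: {n} ({n / total:.0%})")
```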
This question matters because agents make money on repeatability.
If every case is special, then you are probably looking at:
- decision support
- draft generation
- triage assistance
- approval-layer design
Not full autonomy.
And that is fine. But price and scope it honestly.
## What data does the workflow depend on, and how ugly is it?
This is where people love to lie by omission.
They will say:
- it pulls from HubSpot
- it reads from the CRM
- the information is already there
Cool. Now ask the adult questions:
- which fields are required?
- how often are they missing?
- how often are they wrong?
- who updates them?
- what happens when records conflict?
- where are the unwritten rules currently living?
An agent built on dirty inputs becomes an expensive confusion machine.
If the workflow depends on:
- free-text chaos
- duplicate records
- stale statuses
- undocumented mapping rules
- mystery spreadsheets
then the first deliverable may need to be:
- data cleanup rules
- schema normalization
- field-level requirements
- confidence labels
- missing-data handling
Not “the agent.”
Again: this is not bad news. This is how you avoid quoting fantasy work.
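It also helps to run the adult questions against a real export before quoting. A minimal sketch of a field-level missing-data audit; records.csv and the required-field list are assumptions for illustration, not a real schema:

```python
import csv

# Field-level audit of a CRM export before anyone quotes anything.
# The file name and field names below are invented placeholders.
REQUIRED_FIELDS = ["email", "deal_stage", "owner", "last_activity_date"]

with open("records.csv", newline="") as f:
    records = list(csv.DictReader(f))

total = len(records)
for field in REQUIRED_FIELDS:
    missing = sum(1 for r in records if not (r.get(field) or "").strip())
    print(f"{field}: missing in {missing / total:.0%} of records")
```

If a field you need is missing in a third of records, that number belongs in the proposal, not in a surprise three weeks into the build.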
## What is the acceptable failure mode?
A lot of teams ask what success looks like. Good. Also ask what failure is allowed to look like.
Because every workflow fails somehow. The only question is whether it fails in a survivable way.
Ask:
- what is the worst reasonable mistake here?
- what kinds of errors are annoying versus unacceptable?
- if the system is unsure, what should it do?
- when should it stop instead of guessing?
- what must always go to a human?
This tells you whether the workflow supports bounded autonomy.
A safe workflow usually has clear answers like:
- if required data is missing, route to review
- if confidence is low, draft but do not send
- if financial risk is above threshold, stop for approval
- if the source system disagrees with the request, do nothing and log it
A dangerous workflow usually has mushy answers like:
- it should usually know what to do
- we would want it to be smart about edge cases
- ideally it handles most things automatically
That is how people accidentally buy trouble.
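The safe answers above are not just reassuring. They compile almost directly into guards. A minimal sketch of that routing logic, where every threshold and field name is a placeholder:

```python
# Bounded-autonomy routing, modeled on the safe answers above.
# Thresholds and field names are placeholders, not recommendations.
def route(case: dict, confidence: float) -> str:
    if any(not case.get(f) for f in ("amount", "account_id")):
        return "human_review"      # required data missing: route to review
    if case["amount"] > 5_000:
        return "await_approval"    # financial risk above threshold: stop
    if case.get("source_state") != case.get("requested_state"):
        return "log_and_skip"      # source disagrees with request: do nothing
    if confidence < 0.8:
        return "draft_only"        # low confidence: draft but do not send
    return "execute"
```

If the client cannot fill in those thresholds, the workflow does not support bounded autonomy yet.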
## Who owns the exceptions?
This one is huge.
Everyone wants the happy path automated. Nobody wants to own the mess left behind.
Ask:
- who reviews flagged cases?
- who fixes bad outputs?
- who decides whether the rules should change?
- who watches the queue when exceptions spike?
- who gets blamed when the workflow drifts?
If the answer is basically “the team will figure it out,” that means they have not bought an operating model. They have bought a demo.
An agent without exception ownership is just deferred manual work.
You do not have real automation until someone owns:
- the review queue
- the escalation path
- the policy changes
- the business outcome
If that ownership does not exist, discovery is not done.
## What permissions would the workflow need?
People love “automation” right up until they realize it requires write access.
Ask:
- does this workflow only read, or does it write too?
- what systems would it touch?
- what actions are reversible?
- what actions are sensitive?
- what actions must never be taken automatically?
This does two things.
First, it helps you design the build safely. Second, it tells you whether the workflow should start in:
- read-only mode
- draft mode
- approval-required mode
- tightly scoped execution mode
A lot of early wins happen by starting with proposals, summaries, classifications, or pre-filled actions instead of raw execution.
That is often the difference between a workflow buyers approve and a workflow legal or ops immediately kills.
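Writing the starting mode into the config, rather than leaving it implied, makes that posture explicit in the proposal. A minimal sketch; the names here are illustrative:

```python
from enum import Enum

class Mode(Enum):
    """Autonomy ladder for a new agent workflow: start low, earn the next rung."""
    READ_ONLY = "read_only"            # observe and report, touch nothing
    DRAFT = "draft"                    # produce outputs a human sends
    APPROVAL_REQUIRED = "approval"     # act only after explicit sign-off
    SCOPED_EXECUTION = "execution"     # act alone, within a whitelist

# Hypothetical phase-one posture: drafts only, no write access anywhere.
config = {"mode": Mode.DRAFT, "readable_systems": ["crm"], "writable_systems": []}
```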
## How will we know if this is actually worth the money?
If nobody can define the economics, do not pretend the build has a clear business case.
Ask:
- how much time does this workflow currently consume?
- whose time is it?
- what is that time worth?
- what delays or errors create downstream cost?
- what manual review load would still be acceptable?
- what payback period would make this a good decision?
You are trying to estimate whether the agent can produce real margin, not just cool outputs.
Because the fake ROI version sounds like this:
It saves time.
The useful version sounds like this:
This process burns 25 ops hours a week, delays proposals by one business day, and creates cleanup work in billing. If we reduce manual handling by 60 percent while keeping approval time under 10 minutes per flagged case, the build pays back inside two months.
Now you are speaking human.
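Those numbers also survive arithmetic, which is the test. A back-of-envelope sketch of the payback math for that example; the 25 hours and 60 percent come from the quote, while the hourly cost and build cost are invented purely to show the shape of the calculation:

```python
# Payback check for the example above. Hourly cost and build cost
# are assumed values, not benchmarks.
ops_hours_per_week = 25
hourly_cost = 80             # assumed loaded cost per ops hour
automation_rate = 0.60       # manual handling reduced by 60 percent
build_cost = 9_000           # hypothetical quote

weekly_savings = ops_hours_per_week * hourly_cost * automation_rate
payback_weeks = build_cost / weekly_savings

print(f"saves ${weekly_savings:,.0f}/week, pays back in {payback_weeks:.1f} weeks")
# -> saves $1,200/week, pays back in 7.5 weeks: inside two months
```

If the math only clears with heroic assumptions, that is itself a discovery finding.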
## What changed recently that makes this urgent now?
This is not just a sales question. It is a seriousness test.
Ask:
- why now?
- what changed?
- why is this worth solving this quarter?
- what happens if this waits six months?
Real urgency often comes from:
- headcount constraints
- response-time pressure
- growth breaking the old manual process
- margin compression
- quality inconsistency hurting trust
Weak urgency usually means the workflow will stall in committee after discovery.
That is useful to know before you spend weeks scoping.
## What does “phase one” need to prove?
One of the best discovery moves is forcing a smaller first win.
Ask:
- what is the narrowest version of this that would still be useful?
- what can we leave out of phase one?
- what outcome would count as proof?
- what would earn a second phase?
This keeps the build from turning into:
- a platform rewrite
- cross-system orchestration madness
- a vague promise to automate the whole department
Early AI agent work usually goes better when phase one is something like:
- classify and route inbound requests
- draft responses for human approval
- summarize calls into structured CRM updates
- assemble proposal inputs without sending anything
- flag risky cases for manual review
That is not small thinking. That is revenue-preserving thinking.
## What would make this a bad fit for an agent right now?
This question is underrated because it forces honesty.
Ask directly:
- what would make this not ready?
- what hidden dependencies are we worried about?
- where do humans currently improvise?
- which rules are real but undocumented?
- what part of this workflow do you not trust yet?
You are looking for friction that should change the engagement.
Sometimes the right answer is:
- do the audit first
- clean the data first
- redesign the approvals first
- start with assist mode
- do not automate this at all
That is not lost revenue. That is protecting your reputation and your margin.
Bad discovery creates bad builds. Good discovery creates cleaner offers.
## The real point of discovery
Discovery is not there to help you sound sophisticated on a Zoom call.
It exists to answer four brutal questions:
- is the workflow painful enough to matter?
- is it structured enough to automate safely?
- is the data good enough to support the system?
- are the economics strong enough to justify the build?
If you cannot answer those clearly, you do not have scope. You have hope.
And hope is a terrible scoping framework.
## A simple way to use these questions in practice
If I were running discovery on a real lead, I would organize the conversation into five buckets:
### 1. Workflow
- what starts it?
- what happens next?
- what systems and people are involved?
- where does it currently break?
### 2. Variability
- what percent is standard?
- what percent is weird?
- what exceptions matter most?
### 3. Controls
- what needs approval?
- what must never be automatic?
- what is the acceptable fallback?
### 4. Economics
- what time or money is being burned now?
- what review burden is acceptable later?
- what would count as a win?
### 5. Readiness
- is the data usable?
- is ownership clear?
- is phase one narrow enough?
- is this actually a build, or is it an audit first?
That is enough to protect you from a lot of fake opportunity.
## The punchline
A lot of people think the hard part of AI agent work is building the thing.
Sometimes it is.
But a lot of the time, the harder move is refusing to quote the wrong project.
Better discovery helps you do that.
It helps you:
- avoid low-margin chaos
- sell audits when audits are the right move
- scope safer phase-one builds
- price around real operational complexity
- sound like somebody who understands the business, not just the model
That is how you make money in this market.
Not by promising full autonomy to anybody with a messy workflow and a calendar link.
By asking sharper questions sooner.
If you want help figuring out whether a workflow is ready for an AI agent, check out Async Agent Builds or work with Erik MacKinnon.