
Is the rise of AI driving people back to relationship sales?


For the past two years, the sales technology market has sold a seductive idea: that generative AI can compress the time between prospect identification and first meeting, while expanding the volume of outreach a team can execute. On one level, that proposition is real. AI is now embedded across prospecting, account research, sequencing, message drafting, call summarisation, forecasting and pipeline administration. McKinsey has argued that marketing and sales remain among the functions with the largest economic upside from generative AI, while its more recent work shows AI adoption broadening further, including a growing wave of agentic use cases inside large organisations.

However, gains in volume do not translate neatly into gains in quality. Scale in sales inherently detracts from direct client focus, which can erode trust, especially when the ambition is to establish and nurture client relationships.

The rise of AI in outbound sales has unquestionably made the market noisier. Buyers are being approached more often, through more channels, with messaging that is frequently more targeted in structure but arguably flatter in feel. That is the paradox of AI-enabled outreach: it can appear highly personalised while still sounding strangely generic. The surface details may be correct: the company name is right, the industry references are plausible, the trigger event is often recent. Yet the message lands with the unmistakable texture of synthesis rather than conviction, lacking the human relationship angle that is often necessary for context and, ultimately, persuasion.

That matters because sophisticated buyers detect the difference quickly, and genuine personalisation increasingly marks the difference between getting a reply and being filed unceremoniously into Deleted Items.


LinkedIn’s recent research on B2B buying argues that as AI democratises information, trust becomes more important. In its survey with Ipsos, buyers placed the greatest value on seller input in the middle of the buying journey, when complexity rises and organisations are trying to interpret options, risks and trade-offs rather than merely gather facts.

Salesforce, too, has framed one of the defining sales trends as a dual movement: using AI for efficiency while simultaneously prioritising human connection and trust-building.

That is why a backlash is now visible beneath the AI enthusiasm. Not a rejection of AI itself, but a reversion towards relationship-led selling as the primary means of doing business in complex markets.

This is especially true where sales are consultative, multi-stakeholder, high-value or regulated. In those environments, the buyer is not simply purchasing a product. They are underwriting execution risk, assessing judgement, and trying to determine whether the seller understands the internal politics of the decision, the commercial consequences of delay, and the practical constraints of adoption. Those are human assessments before they are technical ones. AI can help sales teams materially here, but mostly when it is used to deepen relevance rather than simulate it.

Account-based marketing is an obvious example. Used properly, AI can accelerate client research, condense large volumes of public and proprietary information, map stakeholders, surface likely priorities, and help prepare hyper-personalised messaging that is grounded in actual context rather than templated flattery. It can also improve preparation ahead of meetings by helping teams understand a client's business model, regulatory pressures, sector dynamics and likely objections. AI can surface the buyer intelligence sellers need to perform better and reduce errors. However, McKinsey has repeatedly pointed to workflow redesign and human validation as central to extracting real value from generative AI.


Used well, AI can reduce the administrative friction that weakens sales execution: missed follow-ups, inconsistent CRM records, poor meeting notes, delayed recap emails, data-poor handovers and messaging drift between teams. In those cases, AI is not replacing judgement; it is tightening operational execution.


The next phase goes further. Agentic systems, including products such as Claude Cowork, are designed not just to generate content but to execute multi-step knowledge work on a user’s behalf: synthesising research, preparing documents, moving across tools and handling repeatable workflows with less manual intervention. Anthropic describes Claude Cowork as outcome-oriented rather than prompt-oriented, built for high-effort repeatable work across desktop files and applications. Microsoft’s 2025 Work Trend Index, meanwhile, presented the idea of “human-agent teams” and the emerging “Frontier Firm”, where people increasingly direct software agents to execute defined tasks at scale.

For sales leaders, that opens up obvious use cases: tailored pitch decks can be drafted faster; meeting briefs can be assembled in minutes rather than hours; follow-up materials can be adapted for different stakeholder ICPs; internal coordination can become more fluid; and prospect progression can accelerate because the time required to produce relevant collateral falls sharply.

Yet this is precisely where discipline becomes more important. The best use of agentic AI in sales is not to lead the outbound motion, but to reinforce the human one. Its value lies in supporting analysis, synthesising complex client problems and helping teams articulate sharper value propositions to different stakeholders. It is useful when it expands a salesperson's ability to understand, prepare and respond. It is dangerous when it becomes a substitute for actual commercial thinking. It can also taint relationships if there is too obvious a disconnect between AI-authored correspondence and a person's actual, in-person communication style.


This distinction is becoming more urgent as the roll-out of agentic AI accelerates. There is a risk of role reversal: the human overseeing a sales process that is increasingly initiated, shaped and sustained by generative systems, even when the underlying sale is complex, relationship-focused and value-based. That would be a category error: complex sales need to remain human-led.


The relationship part is the point. Humans retain the singular ability to build trust with one another through judgement, empathy, credibility, accountability and the capacity to collaborate around ambiguous problems. AI is improving quickly, and it is heading towards greater autonomy and sophistication. But it still does not own consequences or understand incentives in the way a person does. It does not carry reputational skin in the game. It does not sit across the table from a buyer whose internal sponsor is exposed if the decision goes wrong. From the outset, genuine mutual alignment is impossible.

Nor are the technical weaknesses trivial. As more businesses deploy generative AI systems, familiar problems are beginning to matter commercially: sycophancy, hallucinations, confirmation bias, and the tendency towards safe but generic answers that improve average model performance while weakening the depth required in genuinely complex situations. The result is often polished output with insufficient edge: accurate enough to pass, not insightful enough to persuade.

That leads to a deeper question that many firms have not yet asked with enough seriousness: in whose interests is the AI acting? The end user’s, the model provider’s, or the company licensing the tool? Anthropic itself has said it wants Claude to act unambiguously in users’ interests. That aspiration is reassuring, but the question will only become more important as agents gain wider operational scope inside companies.


Stretch the thought experiment far enough and a more unsettling horizon appears. Are we moving towards fully agentic companies, conceivably created and operated by AI with only limited human supervision? Perhaps, in some operational sense, elements of that future are imaginable. But even highly automated corporate networks still require human, state or trust ownership (the latter two still currently governed by humans). There remains, at least for now, a human in the loop: setting objectives, defining taxonomies, determining acceptable risk, interpreting regulation, and taking ultimate responsibility for what the system does. We are witnessing that human presence diminishing in day-to-day operational visibility, but it cannot yet disappear, nor should it. To move beyond that into the idea of AI as an independent sovereign actor with its own legal persona would be to enter genuinely dystopian territory. It would stretch not just corporate law, but the social basis on which humans extend trust to systems in the first place.

Better, then, to park that thought for a future article and return to the present market reality.

What businesses are experiencing now is an unusually noisy commercial environment. Automated sales emails are landing in inboxes with growing frequency. Outreach has become easier to produce, cheaper to distribute and harder to differentiate. Ironically, that very abundance is making automation more transparent and thus less persuasive.


This is why relationship sales may be due not for extinction, but for renewal.

Not the old relationship model built on vague familiarity and golf-course mythology, but a more disciplined version. One in which human sellers use AI to sharpen research, improve preparation, reduce errors and increase relevance, but do not outsource trust-building itself. One in which AI helps the seller understand the client more deeply, not merely contact them more often. One in which the human uses AI to sense-check their work, rather than the human merely sense-checking the AI's work after the fact.

That is the balance worth aiming for.

Because in a market saturated with synthetic fluency, genuine commercial understanding is becoming more valuable. Ultimately, in a world where buyers increasingly struggle to distinguish between what is AI-generated and what is human-authored, trust may once again become the scarcest, and therefore most valuable, asset in sales.


© 2025 by Ravelyn Consulting Limited.
