Would you trust a machine?

The Trusted Advisor is probably the single most influential book on the client-consultant relationship published in the last two decades. More than 15 years after it appeared, you can still find it on the shelves of senior partners right across the professional services sector. What made the book especially powerful was its simplicity: Trust, it argues, stems from an individual’s expertise, from whether they can be relied on to do what they say, and from the extent to which they can go beyond the conventional supplier-customer relationship. Truly “trusted” advisors, says the book, also avoid the pitfall of self-interest: they put their clients’ interests ahead of their own commercial imperatives.

But “trust” has become a problematic issue. In all the many, many conversations I’ve had with clients, I don’t think a single one has used that word. It’s true that many describe the relationships they have with their advisors in terms that imply trust, but that’s not how they’d articulate it. “Trust”—like “loyalty”, again a term you never hear—has become associated with dependence and the suspension of critical judgment. A loyal customer buys the same product because they feel they should, because they don’t want to or can’t be bothered to change, not because it’s the best product. Even the clients we talk to who spend a lot of money with the same firm on a regular basis don’t equate that with loyalty: “Our procurement team would kill me if I said that,” one observed.

It’s tempting to shrug our shoulders and blame the zeitgeist. After all, this isn’t an issue confined to professional services: The global financial crisis taught a generation not to trust banks; we shouldn’t trust the media, and we can’t trust politicians. But automation will put trust back in the limelight.

In the traditional world of professional services, your advisor—be they a lawyer, an accountant, or a consultant—would turn up and talk to you. You, as the client, would be able to gauge their level of expertise by talking to them (which is why, incidentally, the best clients have always been experts themselves). By working with them, you’d get to know whether they could be relied on to do what they promised, and by getting to know them you might—this was always the hardest bit—decide that you shared the same set of goals and values. But suppose it’s not a person who turns up, but a machine. That third aspect is clearly out of the question: “intimacy”, the perhaps slightly unfortunate term used in The Trusted Advisor, is a non-starter because machines don’t have values. But you can probably be fairly confident about “reliability”, because a key advantage of a machine is that it won’t make mistakes, and you’d think you’d be able to judge “expertise”, assuming you’re one of those good clients who knows your stuff. But can you really be sure of the latter?

Trust has always been in part about transparency, a point that’s missing from the Maister model. If I’m a Victorian clerk, I write down a list of figures in a ledger and then add them up. The head clerk can then come over, add up the figures again, check they tally and initial the total. The owner of the business could turn up and do the same. We all trust the total because we can all check the maths. When we type a formula into a spreadsheet, we can check it; we can even employ fancy gizmos that check it and everything else, just in case. If the business owner (the head clerk role obviously having become redundant in the intervening century) turns up, he or she can check the spreadsheet. Again, transparency breeds trust. But what happens when that simple formula is replaced by an algorithm that only a couple of spotty nerds understand, nerds who, even if you could prise them out of the dank basement they inhabit, wouldn’t be able to explain in plain English what they’ve done? What if the machine (definitely not coming out of the basement) wrote the algorithm? What happens to trust then?

This is a hugely important question, not least because the professional services sector is pouring millions of dollars into both hiring spotty nerds and building machines that will build other machines that will replace some people. The solution, we hear from everyone, lies in that little word “some”. Only “some” of the people will go. In reality, as a client, you won’t find yourself talking to a robot on the other side of the table, but to a person who has a robot in their briefcase/handbag/office/basement. That person is tasked not only with explaining to you what the machine is doing but also with forging a relationship with you (remember: machines can’t do the intimacy bit of the Maister model). And I’m not sure that’s going to work.

Why? Because the person across the table, the would-be trusted advisor, won’t be able to trust their machine any more than you, as a client, can. It would be the equivalent of giving the Victorian clerk a magic wand, capable of totting up numbers in an instant: Why should he believe the number is right? Professional services firms already have huge problems with trust internally. The silos almost all firms continue to be organised around aren’t accidental, but the product of the endemic reluctance of experts to trust people who aren’t experts in their field. A tax advisor will be reluctant to introduce one of their consulting colleagues to an important client, because they’re worried the latter will screw up. Are these people seriously going to trust the spotty nerds in the basement, people they don’t trust to dress appropriately, let alone with, you know, actual clients?

The solution to this would, of course, be to train the advisor to understand the algorithm, but that’s going to require some very different skills on the part of the advisor, even supposing they could spare the time and had the opportunity. In practice they won’t, because they’ll be too busy trying to do what the machines can’t do: build relationships with clients.

“Who guards the guards?” asked the Roman poet Juvenal two millennia ago. “Who will the trusted advisors trust?” might be a pertinent question today.