Happy Friday everyone – I find myself wanting to expand on an offhand remark I made at a conference a while ago, where I suggested that at the present level of development, it was possible to get a bit of perspective by using the rough heuristic that “Generative AI” could be replaced by “an offshore centre in Chennai”, and seeing whether your argument still sounded convincing.
I actually do think that my perspective on AI has been very shaped by spending ten years in and around the global knowledge process outsourcing industry – it was certainly one of the things that initially got me interested in cybernetics and the idea that information transfer and processing are central to economic organisation. But in general, if we’re thinking about “Will All The Jobs Be Replaced By The Computer”, it seems to me very relevant to have a look at a group of people who a) were the previous thing that was going to Replace All The Jobs, and b) are themselves quite likely to be in the front line of being Replaced By The Computer. So here are three things I learned:
People who don’t make the investment in making it work properly tend to have a very wrong understanding of what can be achieved. I always liked to pretend in my mind to be the Harvey Keitel character from Pulp Fiction when troubleshooting an offshore project that had run into trouble. And like that character, a lot of what needed to be done was just to ask obvious questions and give obvious instructions. Nine tenths of my consulting fee was earned by the time I’d asked the question “Whose job is it to handle communication with the offshore team?”
It is classic cybernetics stuff – part of the cost of setting up a system is the cost of setting up the information infrastructure to make sure that information flows in and out of the black box, to the place where it is needed, in a form in which it can be the basis for decision making, and in time to be useful. As Stafford Beer’s “First Principle of Organisation” puts it, information flows will always obey the Law of Requisite Variety (the capacity of the control system must be at least equal to the variety of what it’s trying to regulate) – the task of management is to make sure that this inequality is satisfied in the most productive way and with the least strain.
In an outsourcing arrangement, this means that the useful output of the offshore centre will resize itself according to the capacity provided to communicate with the onshore client. Allocating resources to this task can be done in smart or dumb ways, and I hope I learned a few clever tricks of information attenuation and amplification to make it work. But the problems typically arose when no specific resource was allocated at all.
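(For anyone who wants the cybernetics spelled out rather than gestured at: the inequality Beer is leaning on is Ashby’s Law of Requisite Variety. The sketch below is my own shorthand for illustration, not Beer’s notation – “variety” just means the number of distinguishable states a thing can present.)

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Ashby's Law of Requisite Variety, which Beer's First Principle of
% Organisation leans on. V(x) denotes the "variety" of x: the number of
% distinguishable states it can present. The notation is my own
% shorthand for illustration, not Beer's.
\[
  V(\text{regulator}) \;\geq\; V(\text{system being regulated})
\]
% In the offshoring case, the onshore liaison capacity has to satisfy
% this inequality against everything the offshore centre can produce:
% either amplify the left-hand side (templates, checklists, a named
% person whose job is communication) or attenuate the right-hand side
% (filtering, summarising, agreed reporting formats). Those are the
% smart and dumb ways of allocating the resource mentioned above.
\end{document}
```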
The ick factor should not be underestimated. It’s enough to make an anthropologist out of you. My CEO and the founder of FLA has a lot of thoughts on this, and maybe I’ll try and get a guest post out of him. But from my perspective, at least as important as old-fashioned racism, economic protectionism and don’t-care-don’t-like-it-ism (all of which certainly exist, and which people were, to my mind, surprisingly forthright in expressing to a complete stranger) is the fact that lots of people don’t like giving orders. Just as the nouveau riche of the 1920s allegedly didn’t know how to deal with servants, Anglo-European middle managers have to go through a bit of a mental adjustment to stop using the “colleagues” style of relying on shared understanding and tacit communication, and start giving specific orders for what they want and when they want it by. People don’t like being put into the role of boss with no warning. Often the nicest people are the biggest problems, because they are the ones who feel the most powerful sense of dissonance and awkwardness at suddenly being told they’re now in charge of a dozen human beings earning much less money than them, in the former British Empire. I tended to chuck them a copy of one of the books in the Wyndham and Banerjee historical detective novel series; I don’t know if it helped.
I think the point I’m trying to make here is that ChatGPT, for some reason, doesn’t have the same affect; it sits in the uncanny valley between “talking to a person” and “going on the computer”, and people are able to trick themselves into giving it orders. That’s a big advantage for it in terms of my previous point, because in my experience people are very, very bad judges of how much time and effort they are spending on things. Someone will spend as much as ten minutes out of every hour explaining things to a graduate trainee on their desk, then swear blind at the end of the day that the trainee needs hardly any supervision; the same person will then cut up rough about not having time for this BS when you ask them to set up a half-hour call once a week with the offshore team. The fact that it’s so easy to get into a rabbit hole and lose track of time when trying to train the chatbot to do something is actually quite important.
Getting bad news is always the problem. Anyone who knows me will have heard the anecdote, but I firmly believe that the very worst possible case for offshore knowledge work is the combination of a Canadian client and a graduate of the Indian Institute of Technology. On one side, you have someone who has been trained never to refuse anything to a person in authority, and to believe that all problems can be solved by working or studying harder. On the other side, you have an extremely conflict-averse culture, and someone who wants to think of their colleagues as friends. This more or less guarantees that nobody will find out about problems or miscommunications until they have become literally disastrous.
I don’t know whether to be interested or amused by the fact that ChatGPT is even worse at saying “no” or “I don’t understand the question” than a room full of enthusiastic offshore workers. I suspect that the problem has the same root – it is really difficult to provide reinforcing but negative feedback. We don’t have the human language to make a sentence like “That is bad news and I am annoyed, specifically I am annoyed with you for doing something wrong but thank you for telling me, I am pleased with that bit” sound convincing. And it seems like we don’t have a mathematical loss function for it either; maybe that should be the frontier of new research.
