About hypergeneralists and division of labor in the AI age

Some time ago, I read an interesting interview with the CEO of Microsoft Belgium [Didier Ongena] in a Flemish newspaper. It covered the usual topics about artificial intelligence and how it will influence all aspects of a business over time (no big bang, but a gradual increase – I fully agree here). What mainly caught my attention, though, was a term that was new to me in this context – the Hypergeneralist: someone who can see the bigger picture in a company or sector and make a good synthesis of it.

I wondered what he meant exactly – at first glance this is nothing new, as generalists have always been around. But then I made the connection with another great article, this time in HBR [Eric Colson]: “Why Data Science Teams Need Generalists, Not Specialists”, or, later, “Beware the data science pin factory: The power of the full-stack data science generalist and the perils of division of labor through function”.

In that article, a comparison is made with “The Wealth of Nations” by Adam Smith, which describes the benefits of specialization on a pin factory assembly line. “This time is different” – it is not the first time somebody has claimed this. But after reading the article I tend to agree, and I better understand the Hypergeneralist idea mentioned above.

The goal of an assembly line is execution: we know what we want to produce. But the goal of data science is to learn and develop new business capabilities. Those can’t be designed upfront; they need to be learned as you go, through experimentation, trial and error, and iteration. Too much specialization (and hence, probably, departmental silos) hinders these goals in several ways:

  • It increases coordination costs.
  • It creates longer waiting times.
  • It narrows context. Division of labor can artificially limit learning by rewarding people for staying in their lane.
  • It can lead to loss of accountability and passion.

Hypergeneralists versus specialised silos – what to choose in the AI age?

A generalist will see opportunities that a narrow specialist won’t, will have more ideas and try more things (and fail more, too).

Of course, some assumptions are needed, such as a solid data platform to work on. But in general, the generalist model scales better and provides a better starting point. So start with generalists, and move to a function-based approach only when clearly necessary.

You can disagree, but I invite you to read the article in detail first.

To be noted: I am probably biased. After a long career as engineer/manager/etc… in diverse functions, I chose to update and upgrade myself by learning about data, then business intelligence, and now more and more about business analytics and machine learning. As such, I guess I am (becoming) such a Hypergeneralist. Good to know there seems to be a word for it …

Note: also worth highlighting from the interview above: the importance of lifelong learning. And that is exactly what this blog is about.

TMWS Info Card:
⁞ Time Well Spent: +/- 3 hours
⁞ Money Well Spent: €0
⁞ Type of learning: Articles
⁞ More info: see HBR or Stitch Fix