This week Microsoft made a landmark announcement about integrating generative AI into its suite of products. From a labor-market standpoint, this integration will be a transformative milestone.
Human Resources must now contend with a new question: how do we distinguish machine performance from human performance? When an analyst produces work through generative AI, is the impact machine or human? This is a genuinely complex scenario. The Human Factors concepts we study in Industrial Engineering have so far treated human-machine interaction in spatial, ergonomic, and instrumentation-driven terms. Now we will be forced to look at it through the lens of a Live Algorithm.

We can derive meaningful direction from the music industry, which has been using and co-existing with AI for some time. From this perspective, Blackwell et al. (2012) developed the concept of a Live Algorithm, defined as "an autonomous music system capable of human-compatible performance… the Live Algorithm listens, reflects, selects, imagines, and articulates its musical thoughts as sound in a continuous process." The Live Algorithm, or Co-pilot, will apply directly to many workloads and will form a key element of where we expect the impact to fall. In Blackwell's research, the output is centered not on completing tasks but on improvisation. As a result, we may see successive improvisations on a strategy document, each evolving with new inputs. A screenwriter may change the story as live feedback comes in from the audience. A recruiter may change strategy when a certain skill proves hard to source and widen the talent pool based on Live Algorithm feedback. A Workforce Planner may overrule certain standard location choices and stay within geo boundaries due to geopolitical sensitivity. The speed at which decisions can be made based on Live Algorithms will be mind-boggling.
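The continuous listen-reflect-revise cycle described above can be sketched in a few lines of Python. This is a toy illustration only, not a real Co-pilot integration; every name and data structure below (the `revise` function, the recruiter plan, the feedback records) is a hypothetical stand-in.

```python
# A toy "Live Algorithm" loop: the system continuously listens to
# incoming feedback, reflects, and revises its output. All names and
# data here are illustrative, not a real Co-pilot API.

def live_algorithm(initial_draft, feedback_stream, revise):
    """Continuously revise a work product as live feedback arrives."""
    draft = initial_draft
    history = [draft]
    for feedback in feedback_stream:     # listen
        draft = revise(draft, feedback)  # reflect, select, articulate
        history.append(draft)            # each revision is an improvisation
    return draft, history

# Example: a recruiter's sourcing plan widens when feedback reports
# a hard-to-fill skill (purely illustrative data).
plan = {"skills": ["Python"], "talent_pool": "local"}
feedback = [{"hard_to_fill": "Python"}]

def revise(current, fb):
    updated = dict(current)
    if fb.get("hard_to_fill") in current["skills"]:
        updated["talent_pool"] = "global"  # widen the pool
    return updated

final_plan, history = live_algorithm(plan, feedback, revise)
# final_plan["talent_pool"] is now "global"
```

The point of the sketch is the shape of the loop: the work product is never "done"; it is a running state that each new input can reshape.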
But there is an area HR leaders should be thinking carefully about and start building a practice around: Responsible AI and failure-point analysis. I read a very interesting book this week by Charles Perrow. (Introduction below generated by ChatGPT 4.0.) Charles Perrow (1925–2019) was an American sociologist best known for his research in organizational theory, particularly his work on system accidents and the concept of "normal accidents." Perrow's most influential work is his book "Normal Accidents: Living with High-Risk Technologies," published in 1984. In this book, he introduced the idea that complex, tightly coupled systems are inherently prone to accidents, which he called "normal accidents."
Perrow argued that in complex systems with multiple, interdependent components, accidents are not only possible but inevitable due to the nature of these systems. He explained that complex systems have many points of potential failure, and their interdependencies make it difficult to predict and control the outcomes of those failures. Tightly coupled systems have closely linked components, meaning that if one part of the system fails, it can quickly lead to cascading failures.
Charles Perrow’s work influenced the understanding of the causes of major accidents, such as the Three Mile Island nuclear accident in 1979 and the Challenger Space Shuttle disaster in 1986. His research has also contributed to broader discussions on risk management, safety culture, and regulating high-risk industries.
As you can see, as the potential rewards of AI grow, so do the odds of failure, and we need to manage this trade-off effectively.
Last week, the CFA Institute unveiled sweeping changes to its three-level exams. This change signals how the CFA role is being prepared for an AI world: Level 1 candidates can now pick Python, and Level 2 will offer Advanced Analytics and Python skills. Embedding machine learning in the core finance curriculum has long been discussed, but getting here has taken a long time.
The foundation of all this is to understand clearly what skills we have and do not have. Here is an output from an exercise on after-sales. After-sales has changed dramatically in the last few years; digital transformation means that customers expect a certain amount of sophistication and proactiveness.
Here is an example of skills present, skills not present, and emerging skills (names masked to protect confidentiality).
Swarm Recruitment Intelligence is another interesting process we are defining that could be valuable for your team. Swarm systems, also known as swarm intelligence or swarm-based systems, are artificial intelligence (AI) systems inspired by the collective behavior of social insects, such as ants, bees, and termites, as well as other animals like birds and fish. These systems consist of many simple agents (individuals) interacting with each other and their environment, following relatively simple rules. The agents in a swarm system are generally autonomous and decentralized, with no central control authority. The key aspect of swarm systems is that the collective behavior of the agents leads to the emergence of complex, intelligent behavior at the group level. This attribute is known as emergent intelligence or self-organization. We will publish a white paper on this very soon.
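To make the swarm idea concrete, here is a minimal particle swarm optimization (PSO) sketch: each simple agent follows just two local rules (pull toward its own best-known position and toward the swarm's best), with no central controller, and a good solution emerges at the group level. This is a generic textbook illustration of swarm intelligence, not the Swarm Recruitment Intelligence process itself; all parameter values are illustrative.

```python
import random

def pso(objective, dim=2, n_particles=20, n_iter=100, seed=42):
    """Minimal particle swarm optimization: decentralized agents with
    simple local rules collectively minimize an objective function."""
    rng = random.Random(seed)
    # Initialize particle positions in [-5, 5]^dim with zero velocity.
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each agent's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best so far

    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social weights
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                # Rule 1: pull toward own best; Rule 2: pull toward swarm best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:                # update the agent's memory
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:               # update the swarm's memory
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: the swarm collectively finds the minimum of a simple bowl function.
best, best_val = pso(lambda p: sum(x * x for x in p))
```

No single particle "knows" the answer; the near-optimal result emerges from many simple, decentralized interactions, which is the same property the recruitment process aims to exploit.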
What makes such a Co-pilot-driven process unique is that we can set up agents within the company to gather live inputs, refine continuously, and win in the market. Previously, this would have had to happen through meetings and similar channels; the Co-pilot model changes that.