Artificial Intelligence Strategic Partnerships Perspectives


Today’s leaders hold a huge responsibility for both delivering on AI’s promise and creating the ethical policies that foster reputable brands, diverse communities, and thriving employees.

Global business leaders recognize artificial intelligence’s (AI) power to impact every country, industry, community, company, and individual around the globe, but they often struggle with three challenges to successfully adopting it, according to the Center for Creative Leadership’s (CCL) Talent Reimagined 2020 Report: The Human Element of Disruption.

To identify the top five global disruptive trends, CCL researchers reviewed more than 500 survey responses from leaders across the world. Citing its potential both to speed up tasks and to replace employees in the workplace, respondents selected AI as one of the top three trends most likely to impact their business over the next five years. In surveys, leaders listed a number of ways in which AI can be applied, such as:

  • Eliminating repetitive tasks, freeing up workers to focus on strategic thinking and decision-making skills
  • Improving understanding of customer needs, with machine learning allowing for greater personalization, responsiveness, and product requirement analyses
  • Implementing true innovation such as driverless cars, disease detection and prevention, image analysis, and robotics

However, even though leaders embrace the power of AI, they identified three disruptive challenges it poses: biased algorithms, lack of empathy toward workers, and improper task allocation. These challenges have serious consequences for organizations: how leaders handle them draws a bright line around what the brand stands for and what the organization owes its workers. If not well managed, they can seriously damage brand reputation, culture, and Equity, Diversity, and Inclusion initiatives.

Biased Algorithms

The survey respondents expressed concerns about governance over AI algorithms. Some organizations are using AI algorithms to identify qualified job candidates. The algorithms used to identify those candidates rely on training data sets. Unfortunately, if the dataset’s definition of a “good candidate” reflects a bias toward a gender, ethnicity, or educational background, the pool of candidates will shrink to a homogenous subset that lacks diversity. The AI Now Institute documents a host of consequences of biased algorithms, ranging from erroneous assumptions about healthcare protocols to over-policing in vulnerable communities.
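The dynamic described above can be seen in a minimal sketch. The data and the "model" below are entirely hypothetical, invented for illustration: a naive screen that learns only which schools past hires came from will reproduce whatever bias is baked into those historical labels, collapsing a diverse applicant pool to a single background.

```python
# Hypothetical illustration: biased training labels shrink a candidate
# pool to a homogeneous subset. All names and data are invented.
from collections import Counter

# Historical hiring records: (school, was_hired). Every past hire
# happens to come from one school, so the labels encode that bias.
history = [
    ("State U", True), ("State U", True), ("State U", True),
    ("City College", False), ("Tech Institute", False),
]

# A naive "model": memorize the schools past hires attended.
learned_good_schools = {school for school, hired in history if hired}

# A new, diverse applicant pool.
applicants = ["State U", "City College", "Tech Institute", "State U"]

# The screen passes only applicants matching the biased pattern.
shortlist = [a for a in applicants if a in learned_good_schools]

print(shortlist)           # every shortlisted candidate is from State U
print(Counter(shortlist))  # the pool has collapsed to one background
```

Real hiring models are far more complex, but the failure mode is the same: the system optimizes faithfully against labels that already carry the bias, which is why the education and governance steps below matter.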

Leaders must combat this challenge by first educating themselves about the potential biases and how to mitigate them. Second, they must engage the C-suite and functional leaders in vertical development, helping leaders think in more sophisticated ways, so they consider broad implications of AI algorithms across all parts of the organization. Third, they must think strategically about policies for governing the use of AI technologies in their product development, HR hiring practices, and privacy policies protecting workers.

Lack of Empathy Toward Workers

How easy is it for a 50-year-old worker on the manufacturing line to become a data scientist or a robotics engineer? What happens to the driver replaced by an autonomous car or the person who shifts from the cash register to the back office?

How global leaders manage shifts in required jobs and skill sets will set the tone for an organization’s culture. Do less skilled workers have the opportunity to retool or are they simply replaced? Do leaders address ageism for older workers and openly express the value of hard-earned expertise and knowledge? Do they inspire teams who may feel obsolete to explore new platforms and skills for product development?

Leaders must show empathy at every turn or suffer the consequences of a dispirited workforce that is less productive, less engaged, and perhaps airing their grievances on public platforms such as Glassdoor. Yes, decisions need to be made, but with heart.

Improper Task Allocation

Not all problems can be solved by a machine. Today, AI algorithms are spectacular at processing huge amounts of complex data quickly. These algorithms beat chess masters and use preprogrammed representations of the world to direct driverless cars.

But machines do not have what psychologists call a “theory of mind”: an understanding of the thoughts and emotions that shape behavior. Machines cannot set strategy, make decisions based on the emotional impact to people, or understand the complex interactions of diverse cultures.

Humans cannot abdicate their humanity. Tasks and decisions that require setting direction, aligning teams, and obtaining their commitment must remain firmly with leaders as they seek to inspire and motivate greatness in their employees.

Government’s Role

Governments recognize the powerful potential impact of AI as well, and have set in motion policy decisions that businesses can learn from.

On May 10, 2018, the White House hosted the Artificial Intelligence for American Industry summit to discuss the promise of AI, ramifications for U.S. policies and research funding, and how America would lead in this space.

“Artificial intelligence holds tremendous potential as a tool to empower the American worker, drive growth in American industry, and improve the lives of the American people,” said Michael Kratsios, deputy assistant to the president for Technology Policy. “Our free-market approach to scientific discovery harnesses the combined strengths of government, industry, and academia, and uniquely positions us to leverage this technology for the betterment of our great nation.”

The resulting summary, published by the White House Office of Science and Technology Policy, noted four key needs identified in breakout sessions:

  • Supporting the AI development ecosystem
  • Developing the American workforce to take full advantage of the benefits of AI
  • Removing barriers to AI innovation in the U.S.
  • Enabling high-impact, sector-specific applications of AI

Resulting policy decisions removed regulatory barriers; created apprenticeships; increased science, technology, engineering, and mathematics (STEM) funding for computer science education; and increased strategic military investments. These policies have implications for leaders at every level as they couple AI investments with formal policies governing the application of AI, take advantage of federal funding to build new workforce skills, and create strategic plans for their future workforces.

AI holds enormous promise for the future of humanity. Today’s leaders hold a huge responsibility for both delivering on its promise and creating the ethical policies that foster reputable brands, diverse communities, and thriving employees.

Cheryl Fink, Ph.D., is global vice president of Leadership Research, Analytics, and Impact; Holly Downs, Ph.D., is director of the PropelNext Program for Societal Advancement; and Sunil Puri is the APAC director of Research, Innovation, and Product Development at the Center for Creative Leadership.