Remember when we were all excited about the promise of AI, and generative AI in particular, to solve HR’s data and bias problems, automate mind-numbing tasks, and bring the best intelligence, insights, and coaching to every employee and HR team? Today, the story has changed: the initial fear of missing out (FOMO) has shifted to fear of messing up (FOMU). Many HR organizations are paralyzed not just by FOMU but by uncertainty about how AI will change their ways of working, their organizations, and their workforce.
To be fair, HR has some legitimate concerns. However, to understand how we can move beyond those concerns and leverage the technology for real sustainable benefit, it’s helpful to understand some of the big factors that brought us to this point:
- Vendor hubris. In the early days of generative AI, many vendors rushed features to market with limited transparency and explainability. The “black box” nature of many systems made it difficult for HR teams to understand how they worked. Vendors who were still coming to grips with bias, hallucinations, and the nuances of prompt engineering and data masking themselves kept their methods tightly under wraps, lest it be revealed that they didn’t completely know what they were talking about.
- IT hero syndrome. As with most emerging technologies, generative AI was declared the domain of IT rather than line-of-business or horizontal functions like HR, leaving HR feeling at best not in control and at worst relegated to whatever self-declared “AI teams” within IT were able to deliver. Those teams often quickly discovered that developer expertise wasn’t necessarily synonymous with AI or prompt-engineering expertise, but they were able to hold their position by raising concerns about data security and governance, subjecting even well-vetted HR AI technologies to compliance and other hurdles that disillusioned even the most tech-savvy and forward-thinking HR departments.
- HR automation anxiety. While their counterparts in sales, marketing, service, finance, and other areas saw automation as a way to drive productivity and better business outcomes, HR often saw the glass as half empty, worrying about what automation would mean for HR headcount instead of what opportunities it might create for HR to focus on more strategic goals. In some cases, HR also feared AI would take over decision-making power, resulting in a loss of control for HR leaders.
Indeed, HR, compliance, and other groups had – and have – legitimate concerns: the legal and reputational risks of AI-driven errors in decisions like hiring and layoffs, the potential bias of models trained on historical data, and the risk of data leaks or noncompliance with data protection laws. In these areas, moving beyond FOMU puts the onus on vendors to provide more transparent, explainable, correctable models, and on HR to ask the right questions about how models are used and how data flows to and from them.
HR can beat vendor hubris by demanding clear, explainable models and recommendations, and by refusing to consider offerings from vendors that don’t answer questions to its satisfaction. It can battle IT hero syndrome by becoming more literate in AI basics and leveraging its heritage as a steward of data safety and security to drive collaborative development of AI policies and practices that are both technology- and human-friendly. HR should put these items on its to-do list:
- Experimenting with different generative AI models and products. All your peers in other departments are. If you want to have a meaningful conversation about the impact of AI on humans at work, you should experience it as a human.
- Asking questions – and not stopping until you understand the answers. HR will be on the front lines of explaining to employees how and why AI will improve, not replace, their work. It is not HR’s job to learn a vendor’s jargon or IT’s techno-gobbledygook – it is HR’s job to keep asking questions until you understand exactly what data and models any proposed AI uses, how data privacy and security are protected, and what safeguards are in place against hallucinations, inaccuracies, and bias.
- Taking the lead on establishing AI policies, practices, and training. Valoir’s recent research found that only one-third of companies had a policy on the ethical use of AI, and only 30% provided training on its effective use. HR should take the lead in creating – or at minimum codifying, communicating, and training staff on – the appropriate and ethical use of AI (generative and otherwise).
When it comes to HR automation anxiety, as the saying goes, jobs will not be replaced by AI – they will be replaced by people who understand AI. The real leadership opportunity for HR lies in getting ahead of not just HR skills, roles, and responsibilities but organizational ones, helping business leaders understand how a more dynamic view of existing and future skill demands requires a people-first, HR-led approach to dynamic skill mapping, gap analysis, and intelligent reskilling.
AI is upon us – and it represents a real opportunity for HR to leverage its strengths in data stewardship, training and development, and the human side of work to deliver real benefits for organizations and individuals. If HR doesn’t rise to the challenge, someone else will.