
Exploring what organizations should know about using AI in Recruitment & Talent Acquisition efforts   

by Jonathon Palmieri

Artificial Intelligence (AI) is increasingly a focal point for organizations striving to enhance and streamline their workforce operations. As we delve into AI, it's crucial to address a common gap in understanding what the term truly encompasses. This blog seeks to unravel the complexities surrounding AI, distinguishing among concepts like process automation, predictive analytics, and generative AI such as ChatGPT. As we explore the nuances of AI applications, including the emergent field of Prompt Engineering and the intricacies of AI training, we'll also scrutinize the ethical and legal ramifications of AI-generated content and data usage. By dissecting these multifaceted issues, we aim to provide a comprehensive overview of AI's role in business and its broader implications.

AI in business is a growing trend, especially as many organizations look to optimize and augment their workforce. However, I think it's important to note that many organizations do not understand what AI actually is. The term has been stretched to cover process automation, predictive/advanced analytics, and generative AI (such as ChatGPT), and these misunderstandings often muddle the conversation. Within generative AI, for example, many people confuse model training with prompt manipulation. Prompt manipulation is genuinely useful for getting value out of AI, hence the emergence of the "Prompt Engineering" job title, but training is where many of the real advancements are happening. It's also important to know where training data comes from, who owns it, and how it is being used, because that shapes both how we work with AI and how organizations leverage it. When Amazon tried to build an AI recruiting tool, for instance, the result was unintentional bias, most likely propagated by implicit bias in its training data and model-building methods.

Another issue is ownership, not only of the data but also of the content being generated. According to Lexology, "the Copyright Office guidance indicates AI-generated content may be copyright protected if the work is in some way original and unique." But is content based on scraped and acquired data original? And what if that content comes from an AI model whose training data was scraped illegally? There are dozens of lawsuits targeting large technology companies over the scraping of AI model training data (International Association of Privacy Professionals).
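To make the prompt-versus-training distinction above concrete, here is a minimal, purely illustrative sketch. None of it comes from a specific vendor: call_model and fine_tune are hypothetical placeholder names standing in for whichever hosted model and training framework an organization actually uses.

```python
# Purely illustrative sketch: prompt manipulation vs. training.
# `call_model` and `fine_tune` are hypothetical placeholders, not real vendor APIs.

def call_model(prompt: str) -> str:
    """Stand-in for a call to a hosted generative model."""
    raise NotImplementedError

# Prompt engineering: the model's weights never change; only the input is shaped.
naive_prompt = "Summarize this resume:\n{resume_text}"
engineered_prompt = (
    "You are an HR assistant. Summarize the resume below in three bullet points, "
    "listing only skills explicitly stated in the text. Do not infer or invent anything.\n"
    "{resume_text}"
)

def fine_tune(base_model, labeled_examples):
    """Stand-in for training: the model's parameters are updated on new data, so any
    bias or provenance problem in `labeled_examples` is baked into the resulting model."""
    raise NotImplementedError
```

The second half is where the data questions live: whatever is in the training examples, and whoever owns them, ends up encoded in the resulting model.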

These topics, and others, extend to how we look at utilizing AI in the talent space. A recent post on the Reddit forum I moderate (r/humanresources) asked for advice on catching people using ChatGPT / generative AI on their resumes. Surprisingly, the prevailing opinion was that no one cared if candidates used a tool like ChatGPT to create their content; respondents viewed it no differently than hiring a writer or career coach. The real concern was candidates misrepresenting themselves. Therein lies the danger: when you have AI creating content, with little visibility into how that content is produced and with problems such as hallucinations (incorrect or misleading results), you open yourself or your company up to liability or poor outcomes. Just look at NYC's AI chatbot telling businesses to do illegal things. Similar problems exist in the talent world, such as benefits chatbots generating incorrect answers for employees (or incoming candidates), which can expose the company to liability. I recommend that companies utilize strict RAG (Retrieval-Augmented Generation) techniques to limit hallucinations and ground AI text generation by providing context; a rough sketch of what that can look like follows below. However, I still believe organizations should use these AI techniques to augment rather than replace HR & recruitment professionals.
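As that rough illustration, here is a minimal sketch that assumes a small, vetted corpus of company-owned benefits policy snippets. The retrieval step uses simple TF-IDF similarity for brevity (a real deployment would more likely use an embedding index), and generate is a hypothetical placeholder for whichever hosted model the organization uses.

```python
# Minimal RAG sketch (assumptions, not a specific product): retrieve vetted policy
# text, then instruct the model to answer only from that text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Assumed: a small corpus of company-owned, HR-approved policy snippets.
policy_docs = [
    "Full-time employees accrue 15 days of PTO per year, prorated by start date.",
    "Health coverage begins on the first day of the month after the hire date.",
    "The 401(k) match is 50% of contributions up to 6% of salary.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(policy_docs)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k policy snippets most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    return [policy_docs[i] for i in scores.argsort()[::-1][:k]]

def generate(prompt: str) -> str:
    """Hypothetical placeholder for the actual LLM call."""
    raise NotImplementedError

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    # The strict instruction is the grounding step: answer only from retrieved
    # context, and defer to HR when the context does not contain the answer.
    prompt = (
        "Answer the employee's question using ONLY the policy text below. "
        "If the answer is not in the policy text, say you don't know and refer them to HR.\n\n"
        f"Policy text:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```

The grounding comes from two choices: the model only ever sees retrieved, company-approved policy text, and the prompt explicitly instructs it to say it doesn't know and route the question to HR when that text is silent.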

Lastly, I am seeing many more companies interested in measuring employee and candidate experience trends, which are important people metrics as they relate to employee performance, engagement, satisfaction, retention, and turnover. These are topics that firms such as Deloitte have recognized as growing HR trends. Candidate experience matters because it affects the likelihood of hiring qualified candidates and is tied to how those candidates perform once they become employees. Furthermore, candidate experience impacts employer branding as well as consumer decisions. Simply put, if your company provides a poor candidate experience by hiring low-quality recruiters (or outsourcing recruitment), running a poor interview process, or posting jobs misaligned with the marketplace, you are hurting your company in a variety of ways. When talking about AI from a candidate experience perspective, the saying "People join people, and people hire people" comes to mind. Adding impersonal technology, such as an AI chatbot, removes that people element and detracts from the candidate's experience.

In conclusion, the integration of AI in business, especially in the talent and HR sectors, presents both opportunities and challenges. As companies increasingly leverage AI to enhance candidate and employee experiences, it's imperative to balance technological advancement with ethical considerations and the human touch. While AI can significantly augment HR functions, it should not replace the nuanced judgment and empathy that human professionals bring to the table. Ensuring responsible use of AI, addressing potential biases, and maintaining transparency will be crucial to harnessing AI's full potential while safeguarding against its pitfalls. Ultimately, fostering a holistic approach that values both technology and human interaction will be key to achieving sustainable success in the evolving landscape of AI in business.

View this post on LinkedIn or on AreWeHiring.com, the website founded by r/recruiting.
