Artificial intelligence to recruit better… really?

Tools built on algorithms that mobilize artificial intelligence (AI) have gradually spread to every stage of the recruitment process. As early as 2018, 64% of the 9,000 recruiters interviewed in an online study said they used them sometimes or often in the course of their work, and 76% thought this technology would have a significant impact on recruitment. A more recent survey even suggests a link between AI and performance: 22% of the best-performing companies use “predictive” or “augmented” recruitment, compared with 6% of the worst-performing organizations.

Reason enough to get excited? The promises made by market players are considerable: time saved, better-identified profiles, stereotypes eliminated… Beyond that, however, many technical, ethical and legal questions arise.

In response, a draft European Union regulation (the “AI Act”) is in preparation. It classifies systems intended to be used for the recruitment or selection of natural persons in the category of “high-risk AI systems”, those with potential effects on fundamental rights. Specific rules on detailed information, prior compliance assessment and regular auditing are provided for these systems.

Even though French labor law and European texts already contain general rules intended to protect job applicants, a European regulation on AI appears essential. The current picture is one of a real gap between the many promises of effectiveness and objectivity made for these tools and the rare scientific studies addressing these fundamental questions. Their ability to reduce discrimination remains largely unproven.

More efficient, faster, more inclusive

Solutions integrating AI now concern every stage of the recruitment process, each promising advantages for the organizations that implement them: an economic advantage, by being faster and more productive when choosing a future employee (some developers of these recruitment solutions even claim to cut to a quarter the time needed to finalize a hire); a technical advantage, by being able to process a large volume of information and produce rankings according to whatever criteria one wishes; and an ethical advantage, by escaping the stereotypes that human recruiters mobilize when they read a CV or cover letter.

When searching for candidates, in the so-called “sourcing” phase, the automated online collection of information on potential candidates (“web scraping”) is presented as a guarantee of a better match between the recruiting company's needs and candidates' profiles. Analysis algorithms look not only for data identifiable on CVs but also for information retrieved from social networks, which is supposed to make it possible to infer certain personality traits or particular skills in potential candidates.
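
To see what such a matching engine can boil down to, here is a minimal sketch, not the method of any vendor named in this article: it ranks invented candidate texts against a job description using TF-IDF vectors and cosine similarity, a common baseline for text matching.

```python
# Minimal sketch of CV-to-job matching via TF-IDF + cosine similarity.
# Purely illustrative: real "sourcing" tools are proprietary and far richer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Data engineer: Python, SQL, cloud pipelines, teamwork"
candidates = {
    "cand_A": "Five years building Python ETL pipelines and SQL warehouses",
    "cand_B": "Marketing specialist, social media campaigns, copywriting",
    "cand_C": "Cloud engineer, Python, loves team sports and golf",
}

# Vectorize the job description together with the candidate texts
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description] + list(candidates.values()))

# Similarity of each candidate to the job description (row 0)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for name, score in sorted(zip(candidates, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```

Note how much rests on surface vocabulary: a candidate who simply uses the “right” words climbs the ranking, whatever their actual ability, which is one reason the vendors' promises deserve scrutiny.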

During the pre-selection of applications, chatbots like Randy, the conversational robot developed by Randstad, offer personalized tests and direct candidates towards the professions that suit them best. The candidate experience is said to be improved, notably through reduced stress and the gamification of the process. For the company, this opens the possibility of redirecting recruiters' activity towards more qualitative and complex tasks, by relieving them of very time-consuming steps.

As for the interview phase, automatic video-analysis tools, which the candidate can sometimes complete alone, are in full development. The American company HireVue proposes, for example, to evaluate the answers given on the basis of facial expressions and body posture. The Swiss company cryfe proposes to analyze people's “authenticity” by studying their verbal signals and gestures.

All of this allegedly happens without activating stereotypes relating to the candidate's physical appearance or language, and therefore without discrimination. At each stage of the recruitment process, then, the promoters of these solutions promise the companies that use them a more efficient, faster and more inclusive recruitment.

Judgment bias at all levels

Some warning signs, however, should not be overlooked. Several studies have shown, for example, that far from reducing discriminatory biases, certain predictive recruitment tools can even generate new judgment biases.

From the tool-programming stage onwards, developers can incorporate their own biases. Algorithms associating facial expressions, personality traits and competence, in particular, rest on questionable assumptions. Several studies conclude that decoding emotions is both very complex and culturally dependent. The error rate in recognizing an expression can thus range from 1% for a white man to 35% for a black woman.

So-called “machine learning” algorithms, which rely on data to train and adjust themselves, easily reproduce discrimination. The training datasets can be incomplete and biased, making the resulting tools less effective, and even discriminatory, for minority categories.
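
The mechanism is easy to reproduce. The sketch below is a toy simulation on synthetic data, with invented variable names: a classifier is trained on historical hiring decisions that penalized one group, and it then scores a new, equally skilled candidate from that group lower.

```python
# Toy illustration of how biased historical labels are reproduced.
# All data is synthetic; the variable names are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)            # 0 = majority, 1 = minority
skill = rng.normal(0, 1, n)              # true, group-independent ability

# Historical "hired" labels: skill matters, but the minority group
# was systematically penalized by past human decisions.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

# Train on the biased history, with group membership as a feature
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two new candidates with IDENTICAL skill, different group
same_skill = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(same_skill)[:, 1])
# The minority candidate receives a markedly lower "hire" probability.
```

Simply dropping the group column would not fix this: other variables (postcode, hobbies, school attended) often act as proxies for group membership.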

The most famous example is that of Amazon, which had to stop using an automatic application-sorting tool in 2018. Trained on recruitments made between 2004 and 2014, which had favored men, it systematically discriminated against women applying for technical or web-developer positions.

With regard to tests that claim to be neutral, the “stereotype threat” is never far away. This is a psychological effect whereby, faced with certain test situations, an individual may feel judged through a negative bias aimed at their group, which can cause stress and a drop in performance. For example, when a woman takes a mathematics test, her result may be affected by the stress caused by the internalized idea that women's abilities in this discipline are inferior to men's.

Testing with a chatbot, supposedly more fun and therefore less stressful for a candidate, could nevertheless be resented by certain categories of candidates. This is particularly the case for those less familiar with digital and virtual environments: they could perform less well when confronted with a digital selection method, because of negative generational stereotypes (but also a lack of habit with this type of tool and the fear of being less proficient than younger generations).

The algorithm can also generate errors of its own, based on spurious correlations due to confounding variables. Playing golf, for instance, may be an over-represented leisure activity in the profiles of employees holding executive-management positions; yet the association between this sport and performance at work is in no way relevant. Worse still, it is sometimes difficult to identify the reasoning of algorithms based on deep learning, given the complexity of the process. In this case, we speak of a “black box” model.
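
The golf anecdote can be reproduced numerically. In this illustrative sketch on synthetic data, seniority drives both golf-playing and the “executive” label; golf then correlates with the label even though it plays no causal role.

```python
# Spurious correlation through a confounder (seniority), illustrated
# with synthetic data matching the golf anecdote in the text.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
seniority = rng.normal(0, 1, n)

# Seniority raises both the odds of playing golf and of being an executive;
# golf itself has NO effect on the "executive" outcome.
plays_golf = rng.random(n) < 1 / (1 + np.exp(-2 * seniority))
executive = rng.random(n) < 1 / (1 + np.exp(-2 * seniority))

print("P(executive | golf)    =", executive[plays_golf].mean().round(3))
print("P(executive | no golf) =", executive[~plays_golf].mean().round(3))
# The gap is purely the confounder at work: compare within narrow
# seniority bands and it disappears.
```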

An inexplicable algorithm is an unacceptable algorithm

Caution is therefore required. Specialists who have worked on artificial intelligence for a long time sometimes even speak of “artificial incompetence” rather than artificial intelligence. Currently, the tasks seem, most of the time, to be distributed with a certain modesty: favoring human intervention in the final choice, and treating AI as a pre-selection tool and a decision-making aid.

The temptation to give in to the siren call of algorithms is nevertheless great. As we show in a recently published study, recruiters certainly say they have more confidence in the recommendations of their peers. In fact, however, they tend to follow the recommendations provided by a pre-screening algorithm more than those of their colleagues. This holds even when the algorithm proposes selecting the weakest candidate.

Our observations therefore call for extreme vigilance: if recruiters blindly follow recommendations, even erroneous ones, provided by tools lacking transparency and explainability, the legal and reputational risks are considerable for the company using them, particularly in cases of proven discrimination. The new regulation initiated by Europe, due to be voted on this year, thus seems anything but irrelevant.

An inexplicable algorithm is, in principle, an unacceptable algorithm. Explainable AI should obey three principles: transparency, concerning the data used to build the model; interpretability, the ability to produce results understandable by a user; and explainability, the possibility of understanding the mechanisms that led to a result, along with the potential biases they involve. No doubt anticipating the difficulties to come and the evolution of the legal framework, some companies are already offering adjustments that integrate “explainable” (also called transparent) AI, if necessary by practicing a form of positive discrimination.
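
As an illustration of the interpretability principle, the sketch below (synthetic data and invented feature names, not the tooling of any company cited here) uses permutation importance, a standard technique, to show which input variables drive a screening model's scores: exactly the kind of output a “black box” cannot provide and an auditor could inspect.

```python
# Minimal explainability sketch: which features drive the model's scores?
# Synthetic data; feature names are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(0, 1, (n, 3))   # columns: skill, test_score, postcode_code
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n)) > 0

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does accuracy drop when a feature
# is randomly shuffled, breaking its link with the outcome?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["skill", "test_score", "postcode_code"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
# A large importance on a proxy variable (e.g. postcode) would be a
# red flag for indirect discrimination and should trigger an audit.
```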
