«Federal Human Resources»: Applying artificial intelligence in an ill-considered manner may harm an institution's reputation

«Bias» highlighted among the risks of «artificial intelligence» in evaluating job talent

The Federal Authority for Government Human Resources (FAHR) confirmed that although artificial intelligence technologies now offer a variety of potential benefits when employers assess job talents who work for them or wish to join them, using artificial intelligence methods in this process may involve several risks, most notably legal, public relations and effectiveness risks.

In a recently launched guideline on the use of artificial intelligence techniques in job talent assessments, the authority stated that the legal risks associated with using artificial intelligence in talent assessment fall into two main categories: the first relates to "data protection" and the second to "bias".

It explained that many countries have established legal regulations specifying the permissible uses of artificial intelligence methods to analyze data on individuals, in order to mitigate these risks.

It said that the UAE has issued the Personal Data Protection Law, a highly flexible framework intended to enable artificial intelligence systems in the country and support their adoption, as such regulations govern the legal protections surrounding the collection and use of personal data.

It stated that bias in job talent assessment occurs when the evaluation unfairly discriminates against an individual based on one or more of their characteristics or background (e.g., race, ethnicity, religion, country of origin, sex, age, disability status). "One of the benefits of AI methods of talent assessment is that they can reduce this bias by limiting the influence of human subjectivity. However, incorporating AI into assessment can also amplify and entrench bias if it is not done correctly and under expert supervision."
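To make this kind of bias check concrete, the sketch below (illustrative only, not part of the authority's guideline) computes the adverse-impact ratio behind the widely used "four-fifths rule": each group's selection rate is compared with that of the highest-rate group, and ratios below 0.8 are conventionally flagged for expert review. The group labels and outcomes are hypothetical.

```python
# Minimal sketch of a "four-fifths rule" adverse-impact check.
# Group names and pass/fail outcomes below are hypothetical.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, passed) pairs -> {group: pass rate}."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for group, ok in outcomes:
        total[group] += 1
        passed[group] += int(ok)
    return {g: passed[g] / total[g] for g in total}

def adverse_impact_ratios(outcomes):
    """Compare each group's selection rate to the highest-rate group.

    A ratio below 0.8 is a conventional red flag that the assessment
    may be unfairly screening out that group.
    """
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical assessment results: (applicant group, passed screening)
results = ([("A", True)] * 40 + [("A", False)] * 60
           + [("B", True)] * 25 + [("B", False)] * 75)

for group, ratio in adverse_impact_ratios(results).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

In this hypothetical data, group B is selected at 0.625 times the rate of group A, so the check would flag the assessment for closer expert scrutiny.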

It stressed that the use of AI methods in talent assessment also carries risks related to the public's perception of the organization if those methods are not designed and developed in a careful and thoughtful manner. For example, candidates who are evaluated with an AI-based assessment and who find it unfair or intrusive can share these negative reactions with their networks and on social media, which can damage the organization's reputation. It can also reduce the quantity and quality of applicants to the institution.

It pointed out that there are other risks that may result from artificial intelligence methods of evaluation, known as "effectiveness risks": the risks of using an AI-based assessment that does not work as expected.

"The complexity of data sources and modeling techniques enabled by AI methods means that even experts who design an AI assessment may not know exactly how it works and how it reaches its expectations or decisions. The danger is that evaluation may seem to work well in empirical studies, but experts don't know how to make decisions. In this case, they cannot accurately predict or predict how well it will perform when used in new scenarios, such as making decisions about real-life data in new conditions that algorithms have not seen before."