How to detect and reduce biases for responsible AI

Humans may have biases, but that doesn’t mean AI has to. Algorithms learn their behavior primarily from the data they are given; if that data contains underlying biases, the AI model will learn to reproduce them. Companies that don’t verify their models are as fair as possible can let bias creep in unknowingly.

The data problem

Certain types of data, such as information on gender or age, are more likely to lead (often inadvertently) to discrimination against certain groups.

Companies should carefully consider the data their AI uses. Otherwise, it may lead to unfair treatment of certain populations, such as withholding loans, insurance policies or product discounts from the very people who need them most. Preventing such biases is more than a moral obligation; it is a genuine corporate responsibility.

When biases become real

Over the past year, several high-profile incidents have highlighted the risks of unintended bias and the damage it can do to a brand and its customers. In the current climate, where so many companies are struggling to stay afloat, that damage is only amplified. Discrimination has a cost: lost revenue, lost trust among customers, employees and other stakeholders, regulatory fines, reputational harm and legal consequences.

How to identify and anticipate biases

Marketing to specific groups with shared characteristics is not necessarily biased. For example, sending parents of young children promotions on diapers, college savings plans, life insurance and the like is perfectly acceptable if the company tailors the offers to them. The same goes for targeting older people who hold supplemental health insurance or a pension, provided the company promotes offers that are relevant and useful to that group. That is not discrimination; it is smart marketing.

But targeting groups can quickly become a slippery slope. It is incumbent on organizations to build bias detection into all of their AI models, particularly in regulated industries such as financial services and insurance, where the consequences can be severe. Bias detection should not happen only quarterly or monthly: organizations need to monitor their self-learning AI models continuously, 24/7, to proactively detect and eliminate discriminatory behavior.
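As an illustration, here is a minimal sketch of what such a recurring check could look like, assuming decisions are logged as (group, approved) pairs. The group names, the sliding window, and the 0.8 threshold (the common "four-fifths rule") are illustrative assumptions, not details from the article.

```python
# Minimal sketch of a recurring disparate-impact check over logged decisions.
# Group labels and the 0.8 threshold ("four-fifths rule") are illustrative.
from collections import defaultdict

def disparate_impact(decisions, reference_group):
    """Approval rate of each group divided by the reference group's rate."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    ref_rate = approvals[reference_group] / totals[reference_group]
    return {g: (approvals[g] / totals[g]) / ref_rate for g in totals}

# Run on a sliding window of recent decisions and alert on violations.
recent = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
for group, ratio in disparate_impact(recent, reference_group="A").items():
    if ratio < 0.8:
        print(f"ALERT: group {group} approval ratio {ratio:.2f} is below 0.8")
```

In a production setting this check would run continuously against the decision log rather than on a hardcoded sample.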

To avoid building in such biases, companies should use “clean data sources” when constructing their AI models. They must nevertheless remain cautious, because “classic” attributes (level of education, profession, marital status, etc.) can act as proxies for protected characteristics and so become sources of discrimination in certain situations.
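One simple way to surface such proxies, sketched below under the assumption that the training data sits in a pandas DataFrame, is to measure how strongly each "neutral" column correlates with a protected attribute. The column names and the 0.5 cutoff are hypothetical.

```python
# Hypothetical proxy-variable check: flag features that correlate strongly
# with a protected attribute even after that attribute is removed.
import pandas as pd

def find_proxies(df, protected_col, threshold=0.5):
    """Return columns whose correlation with the protected attribute is high."""
    encoded = df.apply(lambda col: pd.factorize(col)[0])  # crude categorical encoding
    corr = encoded.corr()[protected_col].drop(protected_col)
    return corr[corr.abs() > threshold]

df = pd.DataFrame({
    "age_group":   ["young", "old", "old", "young"],
    "has_pension": [False, True, True, False],   # a perfect proxy for age here
    "region":      ["north", "south", "north", "south"],
})
print(find_proxies(df, protected_col="age_group"))  # flags has_pension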

Dedicated technology for identifying these biases is essential. Testing AI models through simulations can reveal discriminatory behavior and correct it before it reaches customers. This step is all the more important because AI is new, powerful and often opaque in the way it works, which makes detecting inherent biases more complex.
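One common form of such a simulation is a counterfactual test: feed the model pairs of profiles that are identical except for one sensitive field and flag any divergence in the score. The sketch below assumes a scoring function `model`; the function and its fields are hypothetical stand-ins.

```python
# Minimal counterfactual simulation: vary only the sensitive field and
# check whether the score moves. `model` and its inputs are hypothetical.
def counterfactual_test(model, profile, field, values, tolerance=0.0):
    scores = {v: model({**profile, field: v}) for v in values}
    spread = max(scores.values()) - min(scores.values())
    return spread <= tolerance, scores

def model(p):  # stand-in for a real credit-scoring model
    return 0.7 if p["income"] > 30000 else 0.4

ok, scores = counterfactual_test(model, {"income": 45000, "gender": "F"},
                                 field="gender", values=["F", "M"])
print("no divergence on this profile" if ok else f"divergence: {scores}")
```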

But detecting bias at the level of a single predictive model is not enough. The interaction between multiple models (sometimes hundreds) and the company’s customer strategy can also produce discriminatory behavior. Bias must therefore be tested at the point of the final decision a company makes about a customer, not only during the testing phases.
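In practice that means auditing outcomes after the whole pipeline has run, ensemble scores and business rules included. The sketch below assumes a hypothetical pipeline in which model scores are averaged and then adjusted by a debt rule; every function, field and threshold here is illustrative.

```python
# Sketch of a decision-level audit: measure approval rates per group after
# all models AND business rules have been applied, not per model in isolation.
def final_decision(customer, models, min_score=0.5):
    score = sum(m(customer) for m in models) / len(models)  # ensemble average
    if customer.get("existing_debt", 0) > 10000:             # business rule
        score -= 0.2
    return score >= min_score

def audit_outcomes(customers, models):
    """Approval rate per group, computed on the final decision."""
    rates = {}
    for group in {c["group"] for c in customers}:
        members = [c for c in customers if c["group"] == group]
        approved = sum(final_decision(c, models) for c in members)
        rates[group] = approved / len(members)
    return rates

models = [lambda c: 0.6, lambda c: 0.5]  # stand-ins for real scoring models
customers = [{"group": "A", "existing_debt": 0},
             {"group": "B", "existing_debt": 15000}]
print(audit_outcomes(customers, models))  # e.g. {'A': 1.0, 'B': 0.0}
```

A per-model check would miss the disparity in this example, since it is the debt rule, not any single model, that drives the different outcomes.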

Using technology to detect bias is essential for two reasons. The first is the sheer scale of the work involved. The second is that customers and the media are increasingly sensitive to the issue, and correspondingly less forgiving: errors here are simply not tolerated.
