Tay was the name of a chatbot Microsoft launched on Twitter in 2016; it is remembered for turning racist in record time. Since then, the company has been striving to develop responsible AI. It is in this context that, on June 21, it announced it would withdraw certain features from its facial recognition tools.
Restricting AI capabilities
As of this week, Microsoft will no longer provide facial recognition tools capable of identifying a person's gender, age, or emotional state. For customers who already have this technology, these features will be disabled over the course of the year.
The company believes these features raise reliability issues and pose a risk of discrimination. Natasha Crampton, Microsoft's head of responsible AI, told the New York Times that the company wants "a similar quality of service for identified demographic groups, including marginalized groups."
Microsoft will also tighten the checks it performs before selling any of its facial-recognition-based identification tools. Customers will have to explain what they intend to use them for, particularly when a tool could be diverted from its original purpose, as with Custom Neural Voice: designed to imitate a voice, for example so that translations can be delivered in one consistent voice, it can be misused to produce audio deepfakes.
Before bringing these technologies to market, Microsoft says it studies the impact of their use on access to employment, education, healthcare, financial services, and more. The goal is to find "valid solutions to the problems they are meant to solve," says Natasha Crampton, while avoiding perverse effects.
The AI systems on which facial recognition software is based are known for their sexist and racist biases. They are also formidable tools of repression in the hands of unscrupulous people and, above all, of authoritarian regimes.
Microsoft and others call for an industry-wide legislative framework
In 2020, Microsoft suspended an investment in the company AnyVision after learning that its facial recognition technology was being used to discriminate against Palestinians. It has since divested all minority holdings in companies of this type. The same year, the company, along with Amazon and IBM, placed a moratorium on the sale of its systems to US police forces, following the crackdown on Black Lives Matter protests.
Microsoft's AI ethics group, which Natasha Crampton joined in 2018, spent two years developing a 27-page document of AI standards: a set of requirements intended to prevent these technologies from being used in ways that harm society.
Microsoft and others have long called for legislation adapted to the AI sector. For Natasha Crampton, "This is a crucial time for setting standards in artificial intelligence." The executive looks favorably on recent European initiatives moving in this direction. In the meantime, companies remain free to choose whether or not to regulate themselves, as Microsoft has.