Microsoft Corp. on Tuesday announced that it plans to phase out its artificial intelligence-based "emotion recognition" tool from its Azure Face facial recognition service, citing privacy concerns.
Experts have sharply criticized such "emotion recognition" tools, arguing that facial expressions and how emotions are displayed vary across countries and demographic groups, which makes it unreliable to equate external displays of feeling with internal emotional states.
The Redmond giant has reportedly been reviewing its emotion recognition systems since last year to determine whether they are rooted in science. As part of that review, it collaborated with internal and external researchers to understand the technology's limitations and potential benefits and to navigate the tradeoffs.
"In the case of emotion classification specifically, these efforts raised important questions about privacy, the lack of consensus on a definition of 'emotions,' and the inability to generalize the linkage between facial expression and emotional state across use cases, regions, and demographics," Sarah Bird, Principal Group Product Manager at Microsoft's Azure AI unit, said in a blog post.
"API access to capabilities that predict sensitive attributes also opens up a wide range of ways they can be misused, including subjecting people to stereotyping, discrimination, or unfair denial of services."
To mitigate these risks, Microsoft has decided not to support a general-purpose system in the Face API that purports to infer emotional states, gender, age, smile, facial hair, hair, and makeup.
As of June 21, 2022, detection of these attributes is no longer available to new customers, who must apply for access to use facial recognition operations in Azure Face API, Computer Vision, and Video Indexer.
Existing customers, meanwhile, have until June 30, 2023, to discontinue use of the emotion recognition tools before they are retired. They also have one year to apply for and receive approval for continued access to the facial recognition services, based on their stated use cases.
However, facial detection capabilities (including detection of blur, exposure, glasses, head pose, landmarks, noise, occlusion, and the facial bounding box) will remain generally available and do not require an application.
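For context, those attributes correspond to options on the Face API's detect operation. The sketch below is a minimal illustration in Python against the REST endpoint, assuming placeholder values for the resource endpoint, subscription key, and image URL; it requests only the attributes Microsoft says remain generally available and leaves face identifiers off, since those feed the recognition scenarios that now sit behind a Limited Access application.

```python
import requests

# Placeholder values -- substitute your own Azure Cognitive Services resource.
ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"
SUBSCRIPTION_KEY = "<your-face-api-key>"


def detect_faces(image_url: str):
    """Call the Face API detect operation, requesting only attributes that
    remain generally available (no emotion, gender, or age)."""
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params={
            # Face IDs feed recognition operations, which now require a
            # Limited Access application, so they are disabled here.
            "returnFaceId": "false",
            "returnFaceLandmarks": "true",
            "returnFaceAttributes": "blur,exposure,glasses,headPose,noise,occlusion",
            "detectionModel": "detection_01",
        },
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
        timeout=10,
    )
    response.raise_for_status()
    # Each entry includes faceRectangle, faceLandmarks, and faceAttributes.
    return response.json()


if __name__ == "__main__":
    for face in detect_faces("https://example.com/photo.jpg"):
        print(face["faceRectangle"], face["faceAttributes"]["headPose"])
```

The retired capabilities (emotion, gender, age, smile, facial hair, hair, and makeup) were previously requested through the same returnFaceAttributes parameter; under the new policy, that is what goes away.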
By introducing Limited Access, the company is adding an extra layer of scrutiny to the use and deployment of facial recognition, so that these services align with Microsoft's "Responsible AI Standard," a 27-page Microsoft-produced document that sets guidelines intended to keep AI systems from having a harmful impact on society.
The requirements include ensuring that systems provide "valid solutions for the problems they are designed to solve" and "a similar quality of service for identified demographic groups, including marginalized groups."
For now, Microsoft has simply asked its clients "to avoid situations that infringe on privacy or in which the technology might struggle," such as identifying minors, but it has not explicitly banned such uses.
The company has also placed restrictions on its Custom Neural Voice feature, requiring users to apply for access and explain how they intend to use it.