
Microsoft Corp. on Tuesday announced plans to phase out its artificial intelligence-based "emotion recognition" capability from its Azure Face facial recognition service, citing privacy concerns.

Experts have sharply criticized such "emotion recognition" tools, because facial expressions and the emotions they convey vary across countries and demographics, making it inappropriate to equate external displays of feeling with internal emotional states.

The Redmond giant has reportedly been reviewing the emotion recognition systems since last year to check whether they are rooted in science. For this, it collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the tradeoffs.

“In the case of emotion classification specifically, these efforts raised important questions about privacy, the lack of consensus on a definition of ‘emotions,’ and the inability to generalize the linkage between facial expression and emotional state across use cases, regions, and demographics,” Sarah Bird, Principal Group Product Manager at Microsoft’s Azure AI unit, said in a blog post.

“API access to capabilities that predict sensitive attributes also opens up a wide range of ways they can be misused—including subjecting people to stereotyping, discrimination, or unfair denial of services.”

To mitigate these risks, Microsoft has decided not to support a general-purpose system in the Face API that purports to infer emotional states, gender, age, smile, facial hair, hair, and makeup.

As of June 21, 2022, detection of these attributes is no longer available to new customers, who must instead apply for access to use facial recognition operations in Azure Face API, Computer Vision, and Video Indexer.

Existing customers, on the other hand, have one year, until June 30, 2023, to apply for and receive approval for continued access to the facial recognition services based on their provided use cases; the emotion recognition tools will be retired for them after that date.

However, facial detection capabilities (including detecting blur, exposure, glasses, head pose, landmarks, noise, occlusion, and facial bounding box) will remain generally available and do not require an application.
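For developers, the practical distinction is which attributes can still be requested from the detect endpoint without a Limited Access application. A minimal sketch of such a request is shown below; the endpoint hostname, subscription key, and image URL are placeholders, and the network call itself is left commented out, so this only illustrates how a request restricted to the still-available detection attributes might be assembled.

```python
# Hypothetical placeholders for illustration; substitute your own
# Azure resource endpoint and subscription key.
ENDPOINT = "https://example-region.api.cognitive.microsoft.com"
API_KEY = "your-subscription-key"

def build_detect_request(image_url: str) -> dict:
    """Assemble a Face API detect request that asks only for the
    detection attributes that remain generally available (blur,
    exposure, glasses, head pose, noise, occlusion), and none of
    the retired ones such as emotion, gender, or age."""
    return {
        "url": f"{ENDPOINT}/face/v1.0/detect",
        "headers": {
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        "params": {
            # Attributes listed in Microsoft's announcement as not
            # requiring a Limited Access application.
            "returnFaceAttributes": "blur,exposure,glasses,headPose,noise,occlusion",
            "returnFaceLandmarks": "true",
        },
        "json": {"url": image_url},
    }

request = build_detect_request("https://example.com/photo.jpg")
# A real call could then be made with, e.g., requests.post(**request);
# it is omitted here since this is only a sketch.
```

Note that requesting a retired attribute such as `emotion` in `returnFaceAttributes` would simply fail for accounts without approved Limited Access, which is why the sketch keeps to the generally available set.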

By introducing Limited Access, the company is adding a layer of scrutiny to the use and deployment of facial recognition, to ensure these services align with "Microsoft's Responsible AI Standard", a 27-page Microsoft-produced document that sets guidelines intended to keep AI systems from having a harmful impact on society.

The requirements include ensuring that systems provide “valid solutions for the problems they are designed to solve” and “a similar quality of service for identified demographic groups, including marginalized groups.”

For now, Microsoft has just asked its clients “to avoid situations that infringe on privacy or in which the technology might struggle”, such as identifying minors, but it did not explicitly ban such uses.

The company has also placed restrictions on its 'Custom Neural Voice' feature, requiring users to apply for access and explain how they intend to use it.