Microsoft will limit customer access to Azure Face services that purport to “detect” emotions and identity attributes. One of the main issues is that people express emotions in different ways; our external facial features often do not match the emotions we feel internally. One must also question the purpose of using technology that can supposedly guess one’s emotions. Microsoft noted, “Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability.” Customers will also no longer be able to take advantage of Azure Face’s capability of identifying a person’s gender, age, smile, facial hair, hair, and makeup.
New customers are no longer able to acquire these AI features, while existing customers will lose access on June 30th, 2023. Potential customers will now need to apply to use Azure Face services and will only be allowed to utilize them if their plan falls under a “pre-defined acceptable” use case. Microsoft itself will only use the technology for products like Seeing AI, which aids those with visual impairments.
Microsoft is also restricting the use of its Custom Neural Voice technology. It “enables the creation of a synthetic voice that sounds nearly identical to the original source.” The company is concerned that it could be used to “inappropriately impersonate speakers and deceive listeners” and will therefore limit access.
Azure Face and Custom Neural Voice have both been incredibly controversial. These technologies could be abused in a multitude of ways by various entities, so many are thankful that Microsoft has decided to largely withdraw these features. The technologies will still be available to some, but they appear to be employed primarily by those developing accessibility products.
Top image courtesy of Microsoft