Microsoft has proposed a responsible AI standard and announced limits on the use of its technology


Building on Microsoft's previous work, a multidisciplinary group of researchers, engineers, and regulatory experts has created a framework for responsible AI development, intended to serve as a practical guide for development teams.

The document defines concrete goals that teams developing AI systems should strive to achieve. Each goal is broken down into a set of steps that professionals must follow to ensure compliance throughout the system's life cycle, and the standard also provides tools and practices that make those steps easier to implement.

A growing need for practical guidance

The need for this kind of practical guidance is growing. Microsoft, as a technology company, recognizes its responsibility to act and work to ensure that AI systems are responsible by design.

The company has announced new usage controls for facial recognition and synthetic voice. In this area it offers Azure AI's Custom Neural Voice, a technology for creating synthetic voices that sound almost identical to the source speaker, which has already been used by companies such as AT&T and Progressive. The technology has enormous potential in sectors such as education, accessibility, and entertainment, but its risks must also be taken into account.

To mitigate those risks, Microsoft has adopted a control framework based on experience and lessons learned, and has announced that it will apply similar controls to its facial recognition services. Accordingly, it will stop offering access to emotion detection APIs, technology that scans people's faces to infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, makeup, or movements.

Access will now require approval

From now on, detection of these attributes is no longer available to new customers, who must apply for access to use facial recognition capabilities in the Azure Face API, Computer Vision, and Video Indexer. Existing subscribers have one year, until June 30, 2023, to apply for and receive approval for continued access to facial recognition services.

Guidance and tools for customers

The company will also provide guidance and tools that help its customers deploy this technology responsibly. Azure Cognitive Services customers can now use the open-source Fairlearn package and Microsoft's Fairness Dashboard to measure the fairness of Microsoft's facial verification algorithms on their own data, letting them identify and address potential fairness and bias issues before deploying their technology.
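To make the idea concrete, here is a minimal, dependency-free sketch of the kind of per-group comparison that tools like Fairlearn automate: computing a verification model's accuracy separately for each demographic group and measuring the gap. The data and group labels below are illustrative, not real evaluation output.

```python
# Sketch of a per-group fairness check for a face-verification model.
# Tools like Fairlearn compute many such disaggregated metrics; this
# shows only accuracy per group and the worst-case disparity.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return accuracy computed separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical verification outcomes (1 = correct match, 0 = no match)
y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = accuracy_by_group(y_true, y_pred, groups)
disparity = max(rates.values()) - min(rates.values())
print(rates)      # {'A': 0.75, 'B': 1.0}
print(disparity)  # 0.25 -- a gap worth investigating before deployment
```

A large disparity between groups is the signal to investigate causes (data, model, or image quality) before the system ships.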

Additionally, Microsoft has updated its transparency documentation with guidance on improving the accuracy and fairness of these systems, including incorporating human review to help detect and resolve misidentifications and other failures.

A new API is available to Microsoft customers

Working with customers already using its Face service, the company also found that some errors originally attributed to fairness issues were actually caused by poor image quality. If a submitted image is too dark or blurry, the model may fail to match it correctly, which can disproportionately disadvantage certain demographic groups.

For this reason, Microsoft is offering customers a new recognition quality API that flags problems with lighting, blur, occlusion, or head angle in images submitted for facial verification. The company also offers a reference app that gives real-time suggestions to help users capture higher-quality images.
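The shape of such a pre-submission quality gate can be sketched as below. This is a hypothetical illustration, not Microsoft's API: the `ImageStats` fields and all thresholds are assumptions chosen for the example.

```python
# Hypothetical quality gate illustrating the kind of checks a
# recognition quality API performs before facial verification.
# Field names and thresholds are assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class ImageStats:
    mean_brightness: float   # 0.0 (black) to 1.0 (white)
    sharpness: float         # higher = sharper; low values suggest blur
    head_yaw_degrees: float  # 0 = facing the camera directly

def quality_issues(stats: ImageStats) -> list:
    """Return the list of quality problems to report to the user."""
    issues = []
    if stats.mean_brightness < 0.2:
        issues.append("too dark")
    elif stats.mean_brightness > 0.9:
        issues.append("overexposed")
    if stats.sharpness < 50.0:
        issues.append("blurry")
    if abs(stats.head_yaw_degrees) > 30.0:
        issues.append("head turned too far")
    return issues

print(quality_issues(ImageStats(0.1, 30.0, 5.0)))   # ['too dark', 'blurry']
print(quality_issues(ImageStats(0.5, 100.0, 0.0)))  # [] -- safe to submit
```

Rejecting low-quality images up front, with actionable feedback, addresses the failure mode described above without touching the verification model itself.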

The company has shared its knowledge

In addition, the firm has shared its experience and the approach it followed to improve its speech-to-text technology, with the goal of upholding fairness principles and avoiding social bias and inequality.

"Artificial Intelligence solutions are the result of multiple decisions by those who develop and implement them. Microsoft chooses to proactively guide this entire process, from its inception to how people interact with it. It is therefore necessary to keep people at the center of the design of this technology and to respect values such as fairness, reliability, security, privacy, inclusion, transparency, and responsibility," the company said in a statement.


Written by Rachita Salian


