How Microsoft Is Creating a Responsible AI Standard

Microsoft creates technologies that have the potential to change the world, and it also has a responsibility to ensure those technologies are used for good. That requires thoughtful and deliberate work.

AI systems are becoming more and more a part of our lives, but our laws are lagging behind. They have not yet caught up with the unique risks of AI systems or society's needs. In the absence of clear policies and regulations, companies must chart their own course and define the steps needed to develop and deploy AI systems responsibly, and that is exactly what the Microsoft Responsible AI Standard does. It guides how Microsoft designs, builds, and tests AI systems so that they uphold the company's principles and earn people's trust.


Microsoft introduced the first Responsible AI Standard in 2019. It was grounded in the company's six core principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.


When Microsoft published the first version, it knew it was only at the very beginning of the journey from principles to practices. Microsoft engineers speak to the challenges of building innovative AI solutions for a wide range of customers: there are many thorny challenges when it comes to developing and deploying AI systems responsibly.



AI has the potential to exacerbate existing societal inequities and even to create new ones. Researchers are at the forefront of uncovering and understanding these challenges, so incorporating their perspectives into the standard was essential. Microsoft deployed its hub-and-spoke governance model to move the process forward. This model brings together policy leads, product teams, and researchers, the core people needed to create and carry out actionable guidance.


Microsoft needed to strike the right balance between providing clear guardrails and allowing for new types of AI systems. As Microsoft began to find repeatable, predictable patterns, it used those patterns to guide the requirements in the standard and to help identify tools and resources that would be usable by the people tasked with building AI systems. Microsoft also recognized that there were particularly sensitive uses of AI that would always require expert guidance.


With Custom Neural Voice, AI has made great leaps when it comes to replicating someone's voice, but it is easy to see how this technology could be misused. Microsoft's playbook needs to work across a large range of AI systems and to remain durable as the practice of responsible AI matures.


Around the world, Microsoft is seeing more and more policy proposals to better govern AI, and in the meantime it has updated its Responsible AI Standard. AI will continue to evolve, and as it is put to novel uses, Microsoft will need to adapt quickly, be open to constant feedback, and have the humility to admit when there is a shortfall.
