Microsoft Advocates for Artificial Intelligence Regulations
Microsoft has thrown its weight behind a set of regulations for artificial intelligence (A.I.), addressing concerns from governments worldwide about the potential risks associated with this rapidly evolving technology.
As a company committed to integrating A.I. into its products, Microsoft has proposed a range of regulations aimed at ensuring safety and transparency. These proposals include requirements for "safety brake" mechanisms in A.I. systems that control critical infrastructure, clear labeling of A.I.-generated content, and laws clarifying when legal obligations apply to A.I. systems. By championing these regulations, Microsoft emphasizes the need for collective responsibility while urging governments to act more swiftly.
The rise of A.I. has captured the attention of industry players, with products like the ChatGPT chatbot generating significant interest.
Tech giants such as Microsoft and Alphabet (Google's parent company) have raced to incorporate A.I. technology into their offerings. However, concerns have emerged that safety considerations may be compromised in the rush to stay ahead of competitors. Policymakers have publicly voiced worries that A.I. systems capable of generating text and images autonomously could fuel disinformation, facilitate criminal activity, and put people out of work. Regulators in Washington have committed to vigilant oversight, particularly with regard to A.I.-enabled scams and systems that perpetuate discrimination or violate the law.
In response to growing scrutiny, A.I. developers have called for greater government involvement in regulating the technology.
OpenAI CEO Sam Altman, whose company counts Microsoft as an investor, recently testified before a Senate subcommittee, urging government regulation. This echoes similar calls from internet giants like Google and Meta (Facebook's parent company) for privacy and social media laws.
However, the process of enacting new federal rules in the United States has been slow. Microsoft's president, Brad Smith, clarified in an interview with The New York Times that the company is not shirking responsibility; rather, it is offering concrete ideas and is committed to implementing them regardless of whether the government acts.
Microsoft's recommendations cover several key areas. The company argues that highly capable A.I. models should require licenses from government agencies, with developers obligated to notify regulators, share test results, and monitor and report unexpected issues on an ongoing basis. Microsoft also suggests that high-risk A.I. systems be operated only within "licensed A.I. data centers." To enhance safety, the company advocates building "safety brakes" into A.I. systems used in critical infrastructure, drawing a parallel with the braking systems built into elevators, school buses, and high-speed trains. Additionally, Microsoft proposes special labels for A.I.-generated content to prevent consumers from being deceived. Finally, the company emphasizes legal responsibility for harms caused by A.I., extending liability to developers and to cloud service providers, which would be responsible for ensuring security compliance.
Alan Herrera is the Editorial Supervisor for the Association of Foreign Press Correspondents (AFPC-USA), where he oversees the organization’s media platform, foreignpress.org. He previously served as AFPC-USA’s General Secretary from 2019 to 2021 and as its Treasurer until early 2022.
Alan is an editor and reporter who has worked on interviews with such individuals as former White House Communications Director Anthony Scaramucci; Maria Fernanda Espinosa, the former President of the United Nations General Assembly; and Mariangela Zappia, Italy's former Permanent Representative to the U.N. and current Italian Ambassador to the United States.
Alan has spent his career managing teams as well as commissioning, writing, and editing pieces on subjects like sustainable trade, financial markets, climate change, artificial intelligence, threats to the global information environment, and domestic and international politics. Alan began his career writing film criticism for fun and later worked as the Editor on the content team for Star Trek actor and activist George Takei, where he oversaw the writing team and championed progressive policy initiatives, with a particular focus on LGBTQ+ rights advocacy.