Microsoft President Brad Smith Outlines Proposals to Govern and Regulate AI

Technology giant Microsoft is calling for sweeping changes to how artificial intelligence is governed and regulated, proposing a five-point blueprint to ensure this rapidly advancing technology remains safe and beneficial for society.

In a white paper released today and written by the company’s President, Brad Smith, Microsoft lays out five policy recommendations encompassing new government safety frameworks, safeguards for AI systems that control critical infrastructure, a legal and regulatory architecture mapped to AI’s technology stack, expanded transparency, and public-private partnerships.

Here’s What Microsoft Is Proposing

“In many ways, this is at the heart of the unfolding AI policy and regulatory debate,” Smith wrote. “What form should new law, regulation and policy take?” he asked.

Among the proposals, Microsoft (MSFT) is calling for:

  • Implementing and expanding government AI safety frameworks like NIST’s AI Risk Management Framework, which the company pledges to follow.
  • Requiring that appropriate guardrails be built into high-risk AI applications that control critical infrastructure, with government oversight and licensing.
  • Developing an AI legal architecture that places different regulatory responsibilities on actors based on their role in creating or using AI technologies. This includes applying existing laws at the application level while developing new regulations for powerful AI models and the infrastructure they run on.
  • Promoting transparency around how AI systems work through measures like annual transparency reports and labeling AI-generated content. Microsoft also vows to expand access to AI resources for research.
  • Forming public-private partnerships to harness AI’s benefits while addressing societal challenges, including protecting democracy and human rights.

Microsoft acknowledges that its proposals require broader discussion and development to translate into workable solutions. But the company argues that proper AI governance demands “decisive and effective action” from governments, companies, and the general public working together.

The white paper stresses that machines must remain “under human control” and that no AI technology should be “above the law.” Microsoft commits to continuously improve its own internal AI governance systems to implement responsible AI principles.

Microsoft’s suggestions offer a framework rather than complete policies, underscoring the pressing need for significant changes in AI governance to reduce risks and enhance benefits as these technologies develop rapidly. Smith’s proposals come at a time when many governments, tech experts, and public figures are voicing concerns about the potential negative effects the technology could have on society if it is not deployed responsibly.

Tech Experts Warn About Risks to Society, Most Governments Have Yet to Respond

A group of more than 1,100 technology professionals, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, signed an open letter in late March this year calling for a six-month pause in the development of AI models more powerful than GPT-4 – arguably the most capable large language model (LLM) released to the public to date.

These experts emphasized that such models are being released to the general public, corporations, and governments without adequate research into the impact they could have on multiple areas of society.

They warned that AI labs are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.

Meanwhile, both countries and large corporations, including Apple (AAPL), JPMorgan Chase, and Walmart, have banned ChatGPT – OpenAI’s popular generative AI chatbot – from devices with access to their systems.

Privacy concerns are among the most widely cited, as these models are reportedly trained on user data, often without express consent. In addition, social media networks such as Snapchat (SNAP), Meta Platforms, and TikTok have recently been testing AI-powered chatbots on their platforms without necessarily understanding the risks this creates for consumers.

Earlier this month, the European Union took its first steps toward regulating AI with the release of a draft law that aims to set boundaries and rules governing the technology.

However, since the technology is still at an early stage, governments are adopting a cautious approach, seeking not to stifle technological advancement while still protecting the public from the risks that come with highly advanced artificial intelligence.
