
Microsoft CTO Kevin Scott thinks AI should be regulated

The tech veteran explains his views on responsible AI.

Illustration: Dianna “Mick” McDougall, Photo: Microsoft


Kevin Scott, Microsoft’s chief technology officer, is battling inertia.

After more than two decades in the tech industry—including stints at Google and LinkedIn—he’s found that “even the slowest parts of the technology industry are changing fast relative to everywhere else.”

That pace, Scott told us, can make it difficult for some companies to slow down and develop tech carefully. Ahead of last week’s Microsoft Build, the company’s annual headline event for developers, Emerging Tech Brew spoke with Scott about Microsoft’s approach to responsible AI.

This interview has been edited for length and clarity.

Microsoft Research recently announced new developer tools that flag “risks in large language models and mitigate them as soon as possible.” How will Microsoft gauge the success of these tools?

We have a whole bunch of different ways that we’re thinking about it. We have new tools that we’ll be launching soon out of research, for both explainability and bias assessment. And we have a product coming to the Azure ML portfolio called Responsible AI Dashboard that’s based on some of the work that we’ve been doing over the past four years on our internal responsible AI program. And hopefully, in the not-too-distant future, we’ll be able to make a public announcement around our responsible AI standard, which is on its second version.

The thing that we want to be able to do with all of these tools is get the APIs and the tools that are using the language models into the hands of people as quickly as possible, in a way where you’re mitigating harms and you’ve got a bunch of infrastructure in place to help you mitigate the harms that you can anticipate. But it also gives you a way to limit the blast radius, so to speak, when you discover something that you didn’t intend.

I mean, this is sort of the history of software development writ large—it’s actually theoretically impossible to write a piece of software that has no bugs in it, for instance. It is always going to be the case that you will release things that do things you didn’t anticipate, and the question is how quickly you respond to those things to contain the impact.

There’s been a lot of talk recently about how important it is not only to mitigate the risks of AI systems, but also to know when to just stop developing certain projects. What exactly would warrant stopping development of an AI project?

A few years ago, we were really excited about a large language model that we were developing. The plan had been to open-source the model and all of its weights to the public. As we started running the large language model through our responsible AI process, a whole bunch of stuff got flagged—where, if we had released it to the public, we wouldn’t have known how to control for a whole bunch of these potential harms that the process was uncovering.

I was one of the initial people who was pushing pretty hard to get it open-source because I wanted to get it into the hands of researchers outside of the company, so they could play with it, do research, and investigate—and, honestly, so they could even help us uncover some of the harms that we weren’t anticipating or some of the beneficial uses that we hadn’t imagined.

But we decided not to release the model in that form, and instead, we built an academic program where we could, in a controlled way, partner with research institutions to give them access to the model so that they could study it and scrutinize it—but also, where we didn’t have to worry about malicious actors grabbing it and using it for something that was actively harmful to society. So that’s a concrete example of something that we just stopped in its tracks because it didn’t pass responsible AI muster.


What was the name of that model, and what made it a dealbreaker?

It was a version of the Turing family of models that we’ve been developing—a precursor to a thing called Turing Megatron that we talked about in the press last year. As for the potential harms: these models have biases in them, you worry about people using them for generating disinformation, and you worry about them getting used by people who aren’t fully thoughtful about having a real harms framework, so that the biases present in the model get propagated through the products they’re building. And not even maliciously, just because they are making a set of assumptions about what the model is safe for that are incorrect.

What’s your view on external regulation of AI, and in what capacity?

I think there should be. It’s just sort of a fact that the European Commission is working on a piece of AI legislation right now that I think has got some super-thoughtful elements to it. And I think you have to have multiple stakeholders involved in expressing what it is they want, in terms of the benefits that they want to encourage from AI and the harms that they want to minimize.

The thing that we’ve been really strongly encouraging people to think about when crafting policy is to focus on the what versus the how. Getting very clear about what it is you want your policy to accomplish, and then having some flexibility on how you go accomplish it, is, I think, ultimately going to lead to better outcomes. Over-focusing policy on the how just means that you are almost by definition behind, given how fast the technology is evolving. So if you talk too much about the specifics of what you want to have happen, then, as things evolve very quickly, your legislation or policy may become out of date before it can have the effects you want it to.

Your book, Reprogramming the American Dream, addresses how AI can be for everyone if it’s democratized. Many people, including some leaders in the AI ethics field, believe that machine learning is most likely to help privileged people, while leaving more marginalized groups out. What’s your take on that view?

I think they could be right. But these are all decisions that we get to make as the builders of these technologies. The thing that I have experienced directly is that the power of these machine learning tools is growing greater and greater over time…The trend line has been that machine learning, as a tool that you can just pick up and use to go do things, is becoming more accessible, not less…I think you can do really super-powerful things. And so, the bigger the models get, the more real that vision of them—as platforms that are accessible to lots of people—becomes. It’s hard to build the big models themselves, but the big models themselves can then be platforms for other people to go use.
