
How AWS helps companies manage AI responsibly

The company’s responsible AI lead talked to Tech Brew about regulation, trust, and more.

As the person in charge of making sure AI is created and used responsibly at one of the biggest cloud providers, Diya Wynn has, well, a lot of responsibility.

Wynn, who leads the responsible AI practice at Amazon Web Services, works with the cloud giant’s many clients to make sure they have the tools and know-how to build fair and unbiased systems as they put the tech into practice.

We caught up with her about making AI more ethical on the front lines of implementation, the current push for government regulation, and how companies can make the tech more trustworthy.

This interview has been edited for length and clarity.

How has the field of responsible AI and people’s understanding of what it constitutes evolved over the past couple years?

What we’re seeing as a result of the interest and excitement around ChatGPT, and now the rapid movement toward generative AI, is that responsible AI is certainly top of mind. It’s helping to increase the interest and people’s awareness of the importance of paying attention to how we’re building inclusively and responsibly. Now, certainly there were things happening prior to generative AI that should have been raising that alarm bell before, but I think that awareness has increased, especially as we consider some of the questions around copyright, intellectual property—those are areas that also encompass considerations that have to be made from a responsible AI perspective.

Now, when we think about what we’ve been sharing and communicating, that hasn’t changed; I think our value and the need for having a holistic approach to responsible AI, and the way in which we need to maintain a human- or people-centric focus as we’re designing, building, and deploying AI technology in products and services, that’s still the fabric or the core structure that we’re communicating with customers both before and now.

What did you think of the ideas coming out of Senator Chuck Schumer’s recent AI forum or the ideas talked about there? (Wynn was not able to attend due to a speaking conflict, but other Amazon representatives were present.)

I think all of this is a good start in the right direction for how we can explore and consider regulation. I know that one of the challenges with trying to introduce regulation is that there is some unknown that we still don’t have good context for—technology is advancing quite rapidly.

And then there’s this question of how we create legislation that’s going to be fair and equitable, that does not restrict access and the ability to innovate. And that presents some complications—we need to be thinking about the fact that whatever legislation comes out probably won’t be perfect initially, but that they build in an approach to making iterations or having working groups, etc., that will continue to evolve with the technology, because that’s going to be necessary…There needs to be lots more conversation.

What are some of the ways that you measure problems in AI? What do you look for in terms of results when you’re implementing some of these practices?

When we think about how we establish fairness, for instance, you want to make sure that you have equal performance for an algorithm or service across certain subgroups or demographics. Now, that’s going to vary depending on the application and the context. But when I think about something like that, whether that is folks from different ethnic groups, different sexual orientations, or different abilities, it’s about being able to make determinations and test for whether you’re getting representative performance across them.

When you’re using AI services, for instance, we’ll get confidence levels. And we can also test for accuracy, and we want to make sure that we’re getting the same sort of predictions or accuracy…Doing those kinds of tests is one of the ways in which you can ensure that you’re establishing or getting equal results from the service or the algorithm.
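To make that parity idea concrete, here is a minimal sketch of this kind of check in Python: it computes accuracy separately for each demographic subgroup and reports the gap between the best- and worst-served groups. The column names, data, and threshold are hypothetical, and this illustrates the general technique rather than AWS’s own tooling.

```python
# Illustrative sketch only: per-subgroup performance testing, not AWS tooling.
# Assumes a pandas DataFrame with hypothetical columns "group" (demographic
# label), "y_true" (ground truth), and "y_pred" (model prediction).
import pandas as pd


def subgroup_accuracy(df: pd.DataFrame, group_col: str = "group") -> pd.Series:
    """Accuracy computed separately for each demographic subgroup."""
    return df.groupby(group_col).apply(lambda g: (g["y_true"] == g["y_pred"]).mean())


def max_accuracy_gap(df: pd.DataFrame, group_col: str = "group") -> float:
    """Largest gap between the best- and worst-served subgroups."""
    acc = subgroup_accuracy(df, group_col)
    return float(acc.max() - acc.min())


if __name__ == "__main__":
    predictions = pd.DataFrame({
        "group":  ["a", "a", "b", "b", "b", "c", "c"],
        "y_true": [1, 0, 1, 1, 0, 0, 1],
        "y_pred": [1, 0, 1, 0, 0, 0, 0],
    })
    print(subgroup_accuracy(predictions))
    gap = max_accuracy_gap(predictions)
    # Hypothetical acceptance threshold: flag the model if any subgroup lags
    # the best-served one by more than five percentage points.
    print(f"Max subgroup accuracy gap: {gap:.2f} (threshold 0.05)")
```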

One of the tools or services that we use and also provide for our customers is SageMaker Clarify. And that gives us the ability to test for bias in the data, as well as in the algorithms when they are delivered in a production context. Thinking about operationalizing AI, if an organization is using and building in SageMaker, then they get to leverage this service as well and have it employed in their environments in order to test their data, both in training as well as in production.
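For readers who want a sense of what that looks like in practice, below is a rough sketch of kicking off a pre-training bias report with SageMaker Clarify through the SageMaker Python SDK. The S3 paths, IAM role, column names, and the facet being checked are placeholders, and the exact configuration will depend on your own data and use case.

```python
# Sketch only: running a pre-training bias report with SageMaker Clarify via
# the SageMaker Python SDK. Paths, role, columns, and facet are placeholders.
import sagemaker
from sagemaker import clarify

session = sagemaker.Session()

processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::123456789012:role/ClarifyExecutionRole",  # placeholder role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://example-bucket/train.csv",    # placeholder path
    s3_output_path="s3://example-bucket/clarify-output/",  # placeholder path
    label="approved",                                       # placeholder label column
    headers=["age", "income", "approved"],                  # placeholder headers
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],   # the favorable outcome in the label column
    facet_name="age",                # the attribute to check for bias (placeholder)
    facet_values_or_threshold=[40],  # threshold splitting the facet into groups
)

# Computes pre-training metrics such as class imbalance (CI) and difference
# in positive proportions in labels (DPL) for the chosen facet.
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI", "DPL"],
)
```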

The other thing…is actually in design—thinking about what is fair in the early stages of the process. So when we think about operationalizing responsible AI, one of the key elements is to make sure that we are designing for and including responsible AI in the early stages of our design. And then making sure that we are testing to measure that we’ve actually met those requirements. Then we are looking for the right features in our data sets that would align with our definition standards. And that gets carried out throughout the entire process. So there are things that we want to do from a people perspective in terms of training and equipping our teams to be accountable, to make sure that they understand the things that they need to think about, that they can implement or test for bias and use the tools that we put in place. And then there’s also the tooling and technology that we get to leverage; I think all of those things help us in being able to put this into practice in an organization.


Is there an example or two that you might be able to share of how you put this responsible AI into practice in your job?

With an insurance company, we were able to work with them—the fortunate thing is that they came in and said, “We recognize that there is bias in our data, and so we want to be able to figure out how we help our team understand that there is bias and think about the ways in which they can build AI to be able to reduce that.” So we put together a comprehensive training program that leverages some of the content that we’ve developed internally at AWS, but also other sources, to be able to provide them education and training…

Another example would be with a cosmetics company during the pandemic that was thinking about leveraging computer vision in order to help make recommendations for their customers, because people weren’t going into the stores anymore. Typically you would figure out what products you’re going to use based on somebody helping you in a store, and that wasn’t happening anymore. And so the idea was we wanted to be able to leverage AI to help make those recommendations. So we got to talk to them about the approach that they would take in order to deal with feedback, and then help them implement AI in a way that makes sure that, if we’re using computer vision, we do so in a way that meets all of the customers that they ultimately want to be able to serve.

There was a Gallup poll recently showing that a lot of Americans don’t trust businesses to use AI responsibly. As someone who works with AWS clients on AI, what steps do you think that companies can take to boost trust or make AI more explainable?

There are two key pieces to this. One is communicating a responsible AI strategy; I think that responsible AI is the foundation for gaining trust and earning trust with our customers. And so having a strategy that actually says, “We’re committed to the safe, inclusive, and responsible development of AI services,” I think, is one of the ways to do that.

The other is transparency, and one of the things that we’ve done in the way of being able to be transparent about the way in which we’re building our services is to introduce something called service cards, which were released last year. The intention was to be able to provide our customers with details about the intended use of the service, as well as its performance…We want to make sure that there’s no statistical difference in the output across different sectors, and to be able to provide them some context for intended use, and then the things that they can do as well to optimize their performance. So service cards, and providing that context for transparency in the way we’re building and the performance of those services, are two great ways in which we can continue to build trust with our customers.

And the erosion of trust doesn’t just affect that one organization, but it has a trickle-down effect. It affects the trust that people have in this technology and other technology providers. And what that does is ultimately limit our ability to use it for great use cases.
