"A 'building the plane as you fly it' moment": Q&A with Twitter's ethical AI lead Rumman Chowdhury

Chowdhury previously ran responsible AI at Accenture.

Rumman Chowdhury has led responsible AI efforts from within major companies for years—first at Accenture, and now as director of the Machine Learning Ethics, Transparency, and Accountability (META) team at Twitter.

In November 2020, Chowdhury left her longtime role as Accenture’s Responsible AI Lead to found Parity AI, a startup whose mission is to “bridge the translation gap” that often slows down companies’ internal auditing processes for their AI models.

Just two months later, her team was acqui-hired by Twitter; Parity re-incorporated with new leadership, and Chowdhury started her role, leading Twitter’s efforts on model documentation, risk identification, and a new audit process.

So what brought Chowdhury from her own startup to Twitter within a three-month span? We chatted with her about how she got here, the projects she’s working on, and what it means to be successful in responsible AI.

This interview was edited for length and clarity.

Recently, you posted a Twitter thread about “the complexities of working on an ethical tech team”—how it involves optimism and seeing the potential of tech. In that vein, what brought you to Twitter from Parity? What was the thought process there?

Parity had not existed for very long, and we were doing well, so joining Twitter was definitely something I thought about for a very long time. In the course of talking to folks at Twitter, I asked to talk to literally everybody: policy, comms, the CTO, everybody, and I was consistently impressed with the ethos. It's hard to hide a company's culture if you know what to look for. And since I’ve been doing responsible AI at companies for a couple of years, I knew what to look for in a company that would be a good place for this work. I wasn't actually looking for a job; the idea just kind of stuck with me.

To me, the pivot point—the decision point—was when my lead investor in Parity asked me: What is it you want to do? In five years, what is the path you can take today that you’ll look back and say, ‘I did the thing I wanted to do’? What I told him was that my goal is actually to have an impact and make sure the field of responsible AI, which is very young and very new, heads in the right direction.

The more I reflected on it, the more I came back to the fact that few companies have the level of impact that a company like Twitter does. Everything we build immediately impacts the public. It impacts what people see, how they communicate, whether or not it brings joy to their lives.

There’s no question this move has a lot of impact, given the number of users on Twitter’s platform and the number of people who aren't users but are still impacted by it. You mentioned knowing what to look for—what did you look for in these conversations that led you to say, “This is where I want to be guiding that impact”?

One of the last things I did at Accenture was publish a research paper on what companies need to do to be successful in responsible AI. There were a couple of critical things, and I can boil it down to three things that matter the most.

One is access to leadership. At Twitter, responsible ML is a company-wide initiative driven by the META team, which is important because we get face time with some of the highest-up people in the company on a very regular basis. We're all constantly talking to each other and held accountable for delivering.

The second is how success is measured—not just the success of individuals, but also how we measure success in our models and AI systems. Twitter already has a history of looking at things like conversational health. The idea of thinking about ethics might be new and uncharted, but graduating from thinking about only the number of users and literal dollars and cents to something maybe more nebulous—like the concept of good discourse and conversational health—is something Twitter has been trying to tackle. So I saw a lot of allies in the company.

The third is how a company reacts to a bad situation. What I found when talking to folks here was a lot of openness, interesting and wild ideas, and transparency. They really do mean transparency—sharing our code, results, everything. That's really important because we’re putting what we're doing out there and we're opening ourselves up for critique, but hopefully channeling it in a way that's positive.

There’s starting to be a narrative that fairness by unawareness is insufficient, which is great—in other words, if I don't know something bad will happen, that's not enough of a reason to assume that I'm not responsible if it does. So Twitter is ahead of the narrative by thinking about transparency and thinking about how to open up the systems that we are using for scrutiny and reactions so we aren’t unaware of any unfairness.

What are you hoping to accomplish in the short- and long-term at Twitter—specifics like team growth and funding, or the projects you want to work on most that will help you accomplish that five-year vision you had?

I've been in this role for about three months, so I'm hitting my first planning cycle launching into the second half of the year. It's kind of a big moment for road-mapping. Being inside a company versus being on the outside, in my opinion, holds an outsized responsibility to impact the product. I would be disappointed if I took a role where my job was only to publish papers or do thoughtful research, because folks in academic institutions can do that as well. So the responsibility we hold being inside a company is to drive product changes, to help product teams build better tools. That’s really on my mind. The big question: What are the meaningful ways in which a team like META can impact our product?

So we’re hiring pretty aggressively—I'm looking for smart people in both research and engineering. The kind of work our research team does is meant to push forward the applied, problem-solving, and problem-identification parts of the field of responsible AI, specifically as they relate to Twitter. There are a couple of workstreams that we're working on, and my guess is that some of these will take quite some time.

Can you give us specifics on these workstreams? How will they impact the platform?

One of them is probably considered to be kind of boring, but it's my favorite thing to work on: standards development and risk assessments. We're already starting to see legislation and regulators talking about assessments and audits. So one thing I'm building is a risk assessment methodology. Right now I'm literally in the process of figuring out the best way to assess our models for risk—we're interviewing model owners and starting to dig into our catalog of models.

Another workstream we have is on algorithmic choice and, more broadly, user agency. I have long said, even before I came to Twitter, that everybody talks about human-in-the-loop, but nobody has really solved what it means. As you pointed out very correctly, even people who aren’t on Twitter are impacted by it. So what does it mean to give people choice and agency over their experience with Twitter and how Twitter impacts them? That's pretty broad; it’s even something Jack has talked about before Congress. So we’re working on understanding how users understand agency and choice from the ground up.

Can you talk more about the model catalog—how many models there are, how long it’ll take to go through each of them, and what you see happening once you’ve applied the protocols you’ve created? Do you plan to publish those results under Twitter’s transparency policies?

This is very much a “building the plane as you fly it” moment. Our goal is to be trusted consultants and to share best practices with people and demystify the process of algorithmic assessments.

So I built a risk-tiering methodology, which is a series of questions that different model owners can answer about their models, and it helps us classify them into high, medium, or low risk. After that would come an investigation process. Our team would come in and work with the team on the ground to test their models based on all the different factors that came up in the initial checklist risk assessment. So right now, we’re going through our model inventory lists, working with the model owners, and highlighting issues as we see them. What we're really trying to do here is standardize the process that already existed and add structure around it.
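
For a concrete sense of what a checklist-driven risk tiering like this could look like in practice, here is a minimal sketch. The questions, weights, and thresholds below are hypothetical, invented purely for illustration; they are not Twitter's actual criteria or code.

```python
# Hypothetical sketch of a checklist-based risk-tiering step, inspired by the
# process described above. The questions, weights, and thresholds are invented
# for illustration only; they are not Twitter's actual methodology.

from dataclasses import dataclass


@dataclass
class ModelChecklist:
    # Answers a model owner might give about their model.
    affects_user_facing_content: bool  # does the model change what users see?
    uses_demographic_signals: bool     # does it consume sensitive attributes?
    daily_decisions: int               # rough volume of automated decisions per day
    has_human_review: bool             # is there a human-in-the-loop fallback?


def risk_tier(answers: ModelChecklist) -> str:
    """Classify a model as 'high', 'medium', or 'low' risk from checklist answers."""
    score = 0
    if answers.affects_user_facing_content:
        score += 2
    if answers.uses_demographic_signals:
        score += 2
    if answers.daily_decisions > 1_000_000:
        score += 1
    if not answers.has_human_review:
        score += 1

    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"


# Example: a high-volume, user-facing model with no human review lands in the
# tier that would trigger a deeper investigation by the assessment team.
example = ModelChecklist(
    affects_user_facing_content=True,
    uses_demographic_signals=False,
    daily_decisions=5_000_000,
    has_human_review=False,
)
print(risk_tier(example))  # -> "high"
```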

In the space right now, there's always this conversation of, “All of this stuff stifles innovation,” and the answer is actually: “It’s the other way around.” People need structure and understanding of where the guidelines are; otherwise, they’re going to drive off the road. You need to know where the lines are, and then you're able to do really great things and build really interesting models. Twitter uses a lot of different types of models for different purposes, so standards—and scaling these standards—are really important so the company can move at the rate users expect.

You mentioned that before you join a team, you look at how they handled a bad or less-than-ideal situation. What's an example of a situation that you saw at Twitter that you thought they could have handled better, but that allowed you to see that there was room for growth or openness to change?

Around the time image cropping became an issue that was being discussed—the potential bias in the image cropping algorithm—I was in talks with Twitter, or maybe it was just before that. I was very impressed at the way Twitter handled it and the grace with which they handled “taking their medicine.” That all pre-dated me.

To walk through the example: In October 2020, users were pointing out that the image cropping algorithm seemed to favor lighter-skinned faces over darker-skinned faces, and that it seemed to crop women differently than men. Senior leadership at Twitter put out a blog post making a commitment to actually addressing the problem, and I was very impressed at how they followed through on that commitment. It wasn't just empty words; it very specifically laid out the things they were going to do.

A few months ago, they introduced a prototype to completely remove the image cropping algorithm and replace it with a new approach: If a photo is in a standard aspect ratio, it comes up in its entirety, and if it’s not in a standard aspect ratio, it center-crops. So that's how image cropping works today. Then my team followed up with a bias investigation that we made completely transparent and public—not just the process, but we also quite literally shared our code, so people can replicate our approach and analysis.
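
As a rough illustration of the display rule described here (show a photo whole when it already matches a standard aspect ratio, otherwise center-crop it), here is a minimal sketch. The function name, the 16:9 target ratio, and the tolerance are assumptions made for illustration; this is not Twitter's actual implementation.

```python
# Minimal sketch of the display rule described above: show a photo in full if
# it already matches the standard aspect ratio, otherwise center-crop it to
# that ratio. The 16:9 target and the helper itself are illustrative
# assumptions, not Twitter's actual code.

def display_box(width: int, height: int, target_ratio: float = 16 / 9,
                tolerance: float = 0.01):
    """Return the (left, top, right, bottom) region of the image to display.

    If the image's aspect ratio is within `tolerance` of `target_ratio`,
    the whole image is shown; otherwise a centered crop at `target_ratio`
    is returned.
    """
    ratio = width / height
    if abs(ratio - target_ratio) <= tolerance:
        return (0, 0, width, height)  # standard ratio: show the image in its entirety

    if ratio > target_ratio:
        # Too wide: keep full height, trim equal amounts from left and right.
        new_width = int(height * target_ratio)
        left = (width - new_width) // 2
        return (left, 0, left + new_width, height)
    else:
        # Too tall: keep full width, trim equal amounts from top and bottom.
        new_height = int(width / target_ratio)
        top = (height - new_height) // 2
        return (0, top, width, top + new_height)


print(display_box(1600, 900))   # matches 16:9 -> (0, 0, 1600, 900), shown whole
print(display_box(1000, 2000))  # tall image -> centered 16:9 crop
```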

What’s your pie-in-the-sky change—the biggest change that you'd like to see a couple of years from now at Twitter or beyond?

The ideal thing I would love to see—not just for Twitter, but for the industry as a whole—is for us to move from reactively to proactively addressing the impact of ML models. Instead of finding a problem after the fact and fixing it, having good ways integrated into the company...for identifying harms before they even happen and addressing them so that users don’t even know that there could have been a negative outcome.

This relies on so many things. One of the other workstreams we’re working on is education—we are working on specific training for our ML engineers to understand the kinds of tools that are available. The engineering team is working on something called the Responsible ML workbench, which is a series of tools for internal folks to use to do self-assessment of their models. It's important that a team like META is not the only place where algorithmic bias assessment happens.

One very good thing we see about the state of engineering and ML today is that people are very aware that bias and unfairness are problems that manifest themselves, and they really want to get smart on what they can do tangibly and pragmatically in their day-to-day work. Part of META’s role is to help provide that, whether it’s education on where biases can happen or specific training on tools they can use. A high-level goal that would incorporate all of that is Twitter being in a situation where it can be proactive about identifying and addressing harms instead of being in a reactive place.

Some tech workers in this sector feel their jobs are centered around reactive problem-solving, rather than proactive work searching out bias and other platform issues. Do you think structures like this should change in the industry, at Twitter and other companies? How?

I’ll cut some slack to the industry as a whole right now and say things are still very new and undefined. So even if we were to say, “Our company is dedicated to being proactive and identifying risk,” the first question is: What does that mean? And what does that look like? To be fair, we don't have good standards in the industry as a whole. We don't have good rubrics and guidelines. So to be fair to the industry and the people trying to do this work, we currently live in a world of firefighting and reaction because these are the best tools we have at the moment.

But it’s a shortsighted view to say, “We're only going to invest in firefighting,” because it's actually quite time-consuming. There's more upfront work, but less long-term effort, in instituting the right kinds of systems. It requires a huge push today, but in a year or two years, if done well, your team actually has more meaningful and interesting work to do because you're not putting out low-level fires. And simply putting bodies behind it doesn't necessarily solve the problem.

This is more closely linked to a company’s ethos, mission, and goals for responsible ML. Viewing responsible ML as a way of avoiding scrutiny will lead an organization’s remit to be: “You only should firefight when there’s a problem.” But if your remit is to be a company-wide initiative, to be something that differentiates your entire organization, then it will make logical sense for leadership to say, “We have to actually get beyond putting out fires and start actually identifying problems proactively.” It fundamentally boils down to the ethos: If leadership is dedicated to it, they will find the money, they’ll find the resources, and they’ll prioritize people appropriately.
