Business leaders may be letting their guard down when it comes to protecting against AI risks.
A new report from PwC found that 58% of the 1,001 US executives surveyed have carried out a preliminary accounting of the risks around AI use in their companies. Only around one in 10 reported fully implementing the 11 capabilities the firm identified as key to responsible AI, though 80% have made some progress.
On top of the dangers of unchecked AI itself, the lack of readiness could cause problems for companies as new regulations, like the EU’s AI Act, President Biden’s executive order, and California’s proposed AI safety bill, start to mandate more guardrails. The findings also come as reports show businesses are running into other obstacles in moving AI from experimentation to rollout.
What’s holding businesses back from investing in responsible AI? The top reason—reported by 29%—was “difficulty quantifying risk mitigation.” “Responsible AI not a budgetary priority” and “leadership unclear on value” tied for second, with 15% each.
Ilana Golbin Blumenfeld, responsible AI lead at PwC, said companies also tend to overestimate their own progress on installing responsible AI guardrails.
“The numbers that we see from the survey don’t always match our experiences in working with organizations directly,” she told Tech Brew. “There is sometimes a misconception that having cybersecurity practices, period, or having privacy practices, period, or legacy risk management functions, period—means that you have a responsible AI program. There’s a bit of a conflation where we might be looking at those risks or the considerations a bit myopically.”
But Golbin Blumenfeld said the results show business leaders don’t see responsible AI as at odds with keeping pace in the AI arms race, or at least not as strongly as she might have anticipated based on past experience. The findings do point to a need for better measurement of the value these systems bring, she said.
Golbin Blumenfeld said some common factors that tend to wake up organizations to the need for responsible AI include new regulations or negative news coverage of “public failures,” but she’s recently seen more grassroots awareness within companies.
“Historically, it was really driven by negative news coverage,” she said. “Now I think it’s ‘pick your channel of interest.’ There’s so many different ways and touching many different stakeholders across the organization, because the discourse has just become a lot more dominant than it was in the past.”
PwC’s report was based on a survey of 1,001 US executives across six industries—half in business roles and half in tech roles—in April of this year.