At a global gathering formerly known as the AI Safety Summit last week, Vice President J.D. Vance declared that “the AI future is not going to be won by hand-wringing about safety.”
It was a somewhat predictable sentiment from an administration already widely expected to put no-holds-barred innovation ahead of regulatory concerns. But a similar attitude now seems fashionable even outside Washington, whether or not the Trump administration is setting the tone.
News items from the past week or so point to a fast-evolving AI safety space.
“The policy landscape has undergone a dramatic transformation, particularly in the US,” Manoj Saxena, founder and chairman of the Responsible AI Institute, told Tech Brew in an email. “We’re seeing a clear move away from regulatory oversight.”
The nonprofit Responsible AI Institute itself announced this week that these shifts have pushed it to back away from policy advocacy and focus on building tools that help its corporate members manage risk in the absence of regulation.
“We’re seeing a concerning trend where fear of missing out—or FOMO—is driving rapid AI adoption without proper safeguards,” Saxena said. “The risks here are substantial: Uncontrolled AI deployment can lead to severe reputational damage if systems make high-profile mistakes or exhibit biased behavior.”
Caroline Shleifer, founder and CEO of regulatory management platform RegASK, told us businesses should expect that “AI governance will remain in flux for the foreseeable future.”
“That uncertainty increases risk, especially for industries like life sciences and consumer goods,” Shleifer said in an email.
Chinese challenger: All this rethinking comes as the Chinese lab DeepSeek has injected an unexpected rivalry into the global AI race. The upstart’s purportedly hyper-efficient model, which has raised safety concerns of its own, has sharpened world leaders’ focus on competition with China.
Some AI safety advocacy groups we contacted framed the need for AI safety in these competitive terms, or in other frameworks amenable to the Trump administration’s stated goals.
Varun Krovi, executive director at the Center for AI Safety’s Action Fund, said in a statement that chip export controls and federal support for domestic chip production would boost AI safety.
“[These] are concrete steps the US can take to ensure AI safety and security that align with the administration’s commitment to innovation,” Krovi said. “They are two sides of the same coin.”
AI Policy Institute executive director Daniel Colson also mentioned a “strategic advantage over China” along with “preventing catastrophic risks.”
“As transformative AI systems advance rapidly, we hope the administration’s forthcoming AI Action Plan will include robust measures to prevent the most severe potential harms while promoting responsible innovation,” Colson said in a statement.