
A new AI development framework puts civil rights first

A civil rights group aims to fill a void in federal regulation and standards.

[Image: Scales of justice graphic on a microchip. Sakorn Sukkasemsakorn/Getty Images]

The federal government's retreat from AI safety research could leave a void of standards for risk-proofing AI models.

The Center for Civil Rights and Technology at the Leadership Conference on Civil and Human Rights wants to help fill that gap with a new framework meant to help companies and other orgs design and deploy AI systems with equity in mind.

The 36-page document covers each stage of the development process with considerations for protecting the civil rights of marginalized groups, as well as case studies and resources. It’s aimed at companies and investors in “specific sectors that utilize consumer-focused tech,” including those at particular risk for discrimination, like housing, banking, and healthcare.

“Private industry doesn’t have to wait on Congress or the White House to catch up; they can start implementing this Innovation Framework immediately,” Kostubh “KJ” Bagchi, VP of the Center for Civil Rights and Technology, said in a statement.

Founded in 1950, the Conference is a coalition of national organizations born out of the civil rights movement. The group formed the Center for Civil Rights and Technology as a joint project with its education and research arm in 2023 to advocate specifically around AI and privacy, industry accountability, and broadband access.

The framework’s release came just before Commerce Secretary Howard Lutnick renamed the National Institute of Standards and Technology’s AI Safety Institute to drop the word “safety.” NIST released a widely cited AI risk management framework in 2023 under President Biden that faced opposition from some Republicans, including Senator Ted Cruz, who called the org’s AI safety standards “woke.”

The Center’s new framework is built on four principles: civil rights, human-first design, sustainability, and AI as “a tool, not the solution.” Ten pillars organized around those principles run through each stage of AI development.

Maya Wiley, president and CEO of the Leadership Conference, said the framework can help companies make more trustworthy products that further innovation.

“American-made AI will succeed when our rights lead the way,” Wiley said in a statement. “No person or community should have to worry whether shoddy AI will flood the market…When companies refuse to ensure that their AI innovates rather than discriminates, it can mean more expensive and worse health care, more denials of home loans to working families who deserve them, and more qualified candidates getting turned away from good jobs.”

The Center also recently led a coalition of 60 advocacy groups in a letter to Congress opposing the budget reconciliation bill’s controversial decade-long moratorium on enforcement of state and local AI regulation. The “One Big Beautiful Bill Act” has passed the House and is awaiting a vote in the Senate.

“A 10-year moratorium will extinguish states’ ongoing debates and efforts to address AI challenges, including the problem of algorithmic discrimination, leaving people vulnerable and exposed to faulty technology,” the letter reads. “This is no longer a nascent industry; companies are making billions from their AI technology.”
