
Because artificial intelligence is being developed largely within the private, for-profit sector and with little regulation, its governance—the values, norms, policies, and safeguards that comprise industry standards—has been left in the hands of a relative few whose decisions have the potential to impact the lives of many.
And if this leadership lacks representation from the communities affected by automated decision-making, particularly marginalized communities, then the technology could make inequity worse, not better.
So say various legal experts, executives, and nonprofit leaders who spoke with NPQ about the future of “AI governance” and the critical role nonprofits and advocacy groups can and must play to ensure AI reflects equity, and not exclusion.
A Lack of Oversight
The potential for AI to influence or even change society, in ways anticipated and not, is increasingly clear to scholars. Yet these technologies are being developed in much the same way as conventional software platforms, rather than treated as powerful, potentially dangerous technologies that require serious, considered governance and oversight.
Several experts who spoke to NPQ didn’t mince words about the lack of such governance and oversight in AI.
“Advancements are being driven by profit motives rather than a vision for public good.”
“There is no AI governance standard or law at the US federal government level,” said Jeff Le, managing principal at 100 Mile Strategies and a fellow at George Mason University’s National Security Institute. He is also a former deputy cabinet secretary for the State of California, where he led the cyber, AI, and emerging tech portfolios, among others.
While Le cited a few state laws, including the Colorado Artificial Intelligence Act and the Texas Data Privacy and Security Act, he noted that there are currently few consumer protections or privacy safeguards in place to prevent the misuse of personal data by large language models (LLMs).
Le also pointed to recent survey findings showing public support for more governance in AI, stating, “Constituents are deeply concerned about AI, including privacy, data, workforce, and societal cohesion concerns.”
Research has revealed a stark contrast between how AI experts and the general public view the technology’s risks. While only 15 percent of experts believe AI could harm them personally, nearly three times as many US adults (43 percent) say they expect to be negatively affected by the technology.
Le and other experts believe nonprofits and community groups play a critical role in the path forward, but the organizations leading the charge must focus on delivering value to communities and educating the public.
Profit Motives Versus Public Good
The speed at which AI capabilities are being developed, and the fact that development is happening mostly in the private sector with little regulation, have left public oversight and considerations like equity, accountability, and representation far behind, notes Ana Patricia Muñoz, executive director of the International Budget Partnership, a leading nonprofit organization promoting more equitable management of public money.
The people most affected by these technologies, particularly those in historically marginalized communities, have little to no say in how AI tools are designed, governed, and deployed.
“Advancements are being driven by profit motives rather than a vision for public good,” said Muñoz. “That is why AI needs to be treated like a public good with public investment and public accountability baked in from the moment an AI tool is designed through to its implementation.”
The lack of broader representation in the AI field, combined with a lack of oversight and outside input, has helped create a yawning “equity gap” in AI technologies, according to Beck Spears, vice president of philanthropy and impact partnerships for Rewriting the Code, the largest network of women in tech. Spears pointed in particular to the lack of representation in AI decision-making.
“One of the most persistent equity gaps is the lack of diverse representation across decision-making stages,” Spears told NPQ. “Most AI governance frameworks emerge from corporate or academic institutions, with limited involvement from nonprofits or community-based stakeholders.”
“If nonprofits don’t step in, the risk isn’t just that AI systems will become more inequitable—it’s that these inequities will be automated, normalized, and made invisible.”
Complicating this problem is the fact that most commercial AI models are developed behind closed doors: “Many systems are built using proprietary datasets and ‘black-box’ algorithms that make it difficult to audit or identify discriminatory outcomes,” noted Spears.
Solving these equity gaps requires, among other things, much broader representation within AI development, says Joanna Smykowski, a licensed attorney and legal tech expert.
Much of AI leadership today “comes from a narrow slice of the population. It’s technical, corporate, and often disconnected from the people living with the consequences,” Smykowski told NPQ.
“That’s the equity gap…. Not just who builds the tools, but who gets to decide how they’re used, what problems they’re meant to solve, and what tradeoffs are acceptable,” Smykowski said.
Smykowski’s experience in disability and family law informs her analysis of how automated systems fail the communities they were built to serve: “The damage isn’t abstract. It’s personal. People lose access to benefits. Parents lose time with their kids. Small errors become permanent outcomes.”
Jasmine Charbonier, a fractional chief marketing officer and growth strategist, told NPQ that the disconnect between technology and the communities it affects remains pervasive. “[Recently], I consulted with a social services org where their clients—mostly low-income families—were being negatively impacted by automated benefit eligibility systems. The thing is none of these families had any say in how these systems were designed.”
How Nonprofits Can Take the Lead
Nonprofits can and already do play important roles in providing oversight, demanding accountability, and acting as industry watchdogs.
For example, the coalition EyesOnOpenAI—made up of more than 60 philanthropic, labor, and nonprofit organizations—recently urged the California attorney general to put a stop to OpenAI’s transition to a for-profit model, citing concerns about the misuse of nonprofit assets and calling for stronger public oversight. This tactic underscores how nonprofits can step in to demand accountability from AI leaders.
Internally, before implementing an AI tool, nonprofits need to have a plan for assessing whether it truly supports their mission and the communities they serve.
“We map out exactly how the tool impacts our community members,” said Charbonier, addressing how her team assesses AI tools they might use. “For instance, when evaluating an AI-powered rental screening tool, we found it disproportionately flagged our Black [and] Hispanic clients as ‘high risk’ based on biased historical data. So, we rejected it.”
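A disparity like the one Charbonier describes can be checked quantitatively before a tool is adopted. The sketch below is one illustrative way to do so in Python: it compares “high risk” flag rates across demographic groups in a tool’s exported decisions and applies a rough threshold inspired by the four-fifths (80 percent) rule used in disparate-impact analysis. The file name, column names, helper functions, and threshold are hypothetical illustrations, not a description of Charbonier’s actual review process.

```python
# Illustrative sketch only: quantifying disparate flag rates in an AI screening
# tool's output. Assumes the vendor can export decisions alongside self-reported
# demographics; the file and column names below are hypothetical.
import pandas as pd


def flag_rates(decisions: pd.DataFrame, group_col: str, flag_col: str) -> pd.Series:
    """Share of people in each demographic group flagged as 'high risk'."""
    return decisions.groupby(group_col)[flag_col].mean().sort_values()


def adverse_rate_ratio(rates: pd.Series) -> float:
    """Ratio of the highest group flag rate to the lowest.

    The classic four-fifths rule compares favorable-outcome rates; since a
    'high risk' flag is an adverse outcome, a ratio well above 1.25 (the
    inverse of 0.8) is treated here as a warning sign. Assumes every group
    has at least one flagged person, so the denominator is nonzero.
    """
    return rates.max() / rates.min()


if __name__ == "__main__":
    # Hypothetical export: one row per screening decision,
    # with columns "race_ethnicity" and "high_risk" (0 or 1).
    decisions = pd.read_csv("screening_decisions.csv")

    rates = flag_rates(decisions, "race_ethnicity", "high_risk")
    print(rates)

    ratio = adverse_rate_ratio(rates)
    if ratio > 1.25:  # illustrative threshold, not a legal standard
        print(f"Flag-rate ratio {ratio:.2f}: possible disparate impact; escalate for human review.")
```

A review like this is only possible if the vendor will export decisions alongside demographic data, which is itself the kind of transparency demand Charbonier and other experts describe below.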
Charbonier also stressed the importance of a vendor’s track record: “I’ve found that demanding transparency about [the company’s] development process [and] testing methods reveals a lot about their true commitment to equity.”
This exemplifies how nonprofits can use their purchasing power to put pressure on companies. “We required tech vendors to share demographic data on their AI teams and oversight boards,” Charbonier noted. “We made it clear that contracts depended on meeting specific diversity targets.”
Ahmed Whitt, the director of the Center for Wealth Equity (CWE) at the philanthropic and financial collaborative Living Cities, focused on the practical safeguards nonprofits should evaluate: “[Nonprofits] should demand vendors disclose model architectures and decision logic and co-create protections for internal data.” This, he explained, is how nonprofits can establish shared responsibility and deeper engagement with AI tools.
“Decision-making power doesn’t come from being ‘consulted.’ It comes from being in the room with a vote and a budget.”
Beyond evaluation, nonprofits can push for systemic change in how AI tools are developed. According to Muñoz, this includes a push for public accountability, as EyesOnOpenAI is spearheading: “Civil society brings what markets and governments often miss—values, context, and lived realities.”
For real change to occur, nonprofits can’t be limited to token advisory roles, according to Smykowski. “Hiring has to be deliberate, and those seats need to be paid,” she told NPQ. “Decision-making power doesn’t come from being ‘consulted.’ It comes from being in the room with a vote and a budget.”
Some experts advocate for community- and user-led audits once AI tools are deployed. Spears pointed out that user feedback can uncover issues missed in technical reviews, especially feedback from non-native English speakers and marginalized populations; such feedback can highlight “algorithmic harm affecting historically underserved populations.” Charbonier said her team pays community members to conduct impact reviews, which revealed that a chatbot they were testing used confusing and offensive language for Spanish-speaking users.
William K. Holland, a trial attorney with more than 30 years of experience in civil litigation, told NPQ that audits must have consequences to be effective: “Community-informed audits sound great in theory but only work if they have enforcement teeth.” He argues that nonprofits can advocate for stronger laws, such as mandatory impact assessments, penalties for noncompliance, and binding consequences for bias.
Nonprofits should also work at the state and local levels, where meaningful change can happen faster. For instance, Charbonier said her team helped push for “algorithmic accountability” legislation in Florida by presenting examples of AI bias in their community. (The bill did not pass; similar measures have been proposed, though not enacted, at the federal level.)
Beyond legislative lobbying, experts cite public pressure as a way to hold companies and public institutions accountable in AI development and deployment. “Requests for transparency, such as publishing datasets and model logic, create pressure for responsible practice,” Spears said.
Charbonier agreed: “We regularly publish equity scorecards rating different AI systems’ impacts on marginalized communities. The media coverage often motivates companies to make changes.”
Looking Ahead: Risks and Decision-Making Powers
As AI technology continues to evolve at breakneck speed, addressing the equity gap in AI governance becomes ever more urgent.
The danger is not just inequity, but invisibility. As Holland said, “If nonprofits don’t step in, the risk isn’t just that AI systems will become more inequitable—it’s that these inequities will be automated, normalized, and made invisible.”
For Charbonier, the stakes are already high. “Without nonprofit advocacy, I’ve watched AI systems amplify existing inequities in housing, healthcare, education, [and] criminal justice…. Someone needs to represent community interests [and] push for equity.”
She noted that this stance isn’t about being anti-technology: “It’s about asking who benefits and who pays the price. Nonprofits are in a unique position to advocate for the people most likely to be overlooked.”