Nonprofits Governing Powerful Technologies Responsibly Part 2

As mission-driven nonprofits, we have a moral imperative to develop and deploy artificial intelligence in a way that promotes equity, inclusion and social good. Last week, we explored fundamental practices around prioritizing data privacy and individual rights – core tenets of any responsible AI governance framework. However, ethical AI requires going a step further to actively center equity and inclusivity throughout the entire AI lifecycle.

From the training data sourced to the development teams building the models to the real-world impacts on communities served, nonprofits must be vigilant about identifying and mitigating risks of bias, discrimination and inequitable outcomes. Centering equity means proactively designing AI systems aligned with principles of fairness, representation and accessibility for all stakeholders. It necessitates diverse perspectives informing every stage – data collection, model development, validation, deployment and monitoring. Only through this intentional, inclusive approach can we harness AI’s potential as a powerful force for positive societal impact and social justice.


Image by Vilius Kukanauskas from Pixabay

2nd Pillar: Centering Equity & Inclusivity

We’ve all seen the examples of AI systems propagating societal biases and discriminatory harms when they aren’t developed responsibly. From racist facial recognition to sexist hiring algorithms, the risks of encoded biases producing unjust impacts are very real.

That’s why every nonprofit AI governance framework absolutely must have rigorous mechanisms built in for auditing models, datasets, and systems through an equity lens before anything gets deployed. We’re talking comprehensive bias testing, representative data practices, and diverse human oversight.
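To make that concrete, here is a minimal sketch of one such equity check – comparing a model’s approval rates across groups, a simple demographic parity test. The field names, groups, and 0.2 threshold below are hypothetical placeholders rather than an accepted standard; a real audit would use criteria your review team defines and would pair the numbers with human judgment.

```python
# A minimal sketch of one equity check: comparing a model's positive-decision
# rates across groups (demographic parity). All names and the threshold are
# illustrative placeholders, not a standard.
from collections import defaultdict

def selection_rates(records, group_key="community", decision_key="approved"):
    """Return the share of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        totals[row[group_key]] += 1
        positives[row[group_key]] += int(row[decision_key])
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in selection rate between any two groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical audit log of automated decisions about program applicants
decisions = [
    {"community": "A", "approved": 1},
    {"community": "A", "approved": 1},
    {"community": "B", "approved": 1},
    {"community": "B", "approved": 0},
]

rates = selection_rates(decisions)
print(rates, "gap:", parity_gap(rates))
if parity_gap(rates) > 0.2:  # illustrative threshold, not an accepted standard
    print("Flag for human review before deployment")
```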

The AI Now Institute has been doing pioneering work in this space, researching the current state of algorithmic audits. The tech industry is strategically assuming a leadership position in the field of AI auditing, yet as an AI Now Institute article states, “There is a burgeoning audit economy with companies offering audits-as-a-service despite no clarity on the standards and methodologies for algorithmic auditing, nor consensus on the definitions of risk and harm.”

Coherent standards and methodologies for assessing when an algorithmic system is harmful to its users are hard to establish, especially when it comes to complex and sprawling Big Tech platforms. Audit tools will forever be compromised by this conundrum, making it more likely than not that audits will devolve into a superficial “checkbox” exercise.

This doesn’t mean that you can’t measure an AI tool against your nonprofit’s mission and values. Those can be your checkboxes.
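As a rough illustration of what those mission-driven checkboxes might look like in practice, here is a small sketch of a review checklist. The questions and the adopt/hold rule are hypothetical examples, not a standard – every nonprofit would substitute criteria drawn from its own mission and values.

```python
# A minimal sketch of turning mission and values into audit "checkboxes".
# The questions and the pass rule are illustrative placeholders.

MISSION_CHECKS = [
    "Does the tool's training data represent the communities we serve?",
    "Can affected people contest or appeal an automated decision?",
    "Is a named staff member accountable for reviewing the tool's outputs?",
    "Does vendor documentation disclose known bias or accuracy limits?",
]

def review_tool(tool_name, answers):
    """answers: dict mapping each question to True/False from the review team."""
    failed = [q for q in MISSION_CHECKS if not answers.get(q, False)]
    status = "adopt with monitoring" if not failed else "hold for remediation"
    print(f"{tool_name}: {status}")
    for q in failed:
        print(f"  - unmet: {q}")

# Example review of a hypothetical pilot tool
review_tool("Chatbot pilot", {q: True for q in MISSION_CHECKS[:3]})
```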


Image by Alan Frijns from Pixabay

But auditing alone isn’t enough. We also must get upstream and bake equity into the entire AI development lifecycle from the very start. That means prioritizing diverse, interdisciplinary teams – not just technologists, but domain experts, ethicists, impacted community members and more.

The Data Nutrition Project has been doing inspiring work in this vein with its “Collective Data Nutrition” approach. The project creates Data Nutrition Labels for the datasets used to train AI models, so developers know whether a dataset has issues with equity and inclusivity before they use it for training.
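To give a flavor of the idea – this is not the Data Nutrition Project’s actual label format, just a simplified, hypothetical sketch – a label might start with something as basic as summarizing who is and isn’t represented in a dataset before anyone trains on it:

```python
# A simplified illustration of the idea behind a dataset "nutrition label":
# summarizing representation and missing data before training. Field names
# and records are hypothetical; this is not the Data Nutrition Project's format.
from collections import Counter

def representation_label(rows, field):
    """Share of records per value of a demographic field, plus missing data."""
    counts = Counter(row.get(field, "missing") for row in rows)
    total = sum(counts.values())
    return {value: round(count / total, 2) for value, count in counts.items()}

dataset = [
    {"zip": "10001", "language": "en"},
    {"zip": "10001", "language": "es"},
    {"zip": "60601", "language": "en"},
    {"zip": "60601"},  # record with no language field
]

print("language:", representation_label(dataset, "language"))
# e.g. {'en': 0.5, 'es': 0.25, 'missing': 0.25} -> flags gaps before training
```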

At the end of the day, the only way we’ll develop AI technologies that truly expand access, opportunity and justice is by centering the voices and lived experiences of the communities we want to serve. Diverse representation must be a core operating principle from the data all the way through to model governance.

No more punting equity to a legal check as an afterthought. It must be the foundational ethos driving all aspects of nonprofit AI development. Because the last thing any of us wants is to inadvertently amplify the very same disparities and injustices we’re trying to dismantle through our missions.

The future of impactful, ethical AI depends on getting this right from the ground up. And as mission-driven organizations, we simply can’t afford to perpetuate harmful biases and discrimination – even unintentionally. Equity and inclusion must be our guiding north stars.

Small and mid-sized nonprofits may have fewer resources, but they can still take measured steps to responsibly adopt and govern AI by following expert guidance tailored to their needs and capacities. The risks of not governing AI are too high to ignore.

The key is starting with leadership commitment, creating a governance framework following best practices, deploying AI thoughtfully through pilots, prioritizing high-value use cases, investing in training, and building trust through transparency.


Image by Tomislav Jakupec from Pixabay

Collect your dot…

Just because you are a small to mid-sized nonprofit doesn’t mean there is nothing you can do about AI Equity & Inclusivity. You may not be able to change the overall picture, but you have a huge influence on how your own nonprofit handles it.

Here is some helpful guidance on what small and mid-sized nonprofits can do to responsibly adopt and govern AI technologies:

  • Start with leadership buy-in and an executive sponsor for AI initiatives

The first step is getting leadership alignment on the importance of AI governance. Identify an executive who can champion developing an AI strategy and policy framework.

  • Form a cross-functional AI task force or committee

Bring together a diverse team from across the organization – program staff, IT, legal, ethics advisors, community representatives, etc. – to holistically assess AI opportunities and risks.

  • Develop clear AI usage policies and guidelines

Work on drafting policies that outline ethical principles, allowable use cases, data practices, human oversight protocols and more. Don’t wait for perfection – start with a framework you can iterate on.

  • Leverage external AI governance resources

Look to expert resources like the Diligent Institute, the NIST AI Risk Management Framework and others for templates, training and best-practice guides to model your policies after.

  • Start small with controlled pilot projects

Before wide-scale deployment, run limited AI pilot programs with specific use cases. Test the technology, processes and guardrails in a contained environment first.

  • Focus on high-impact, high-value use cases

Prioritize applying AI to repetitive, time-intensive tasks like donor communications, data analysis, content creation, etc., to free up staff bandwidth.

  • Invest in staff training and change management

Have a plan to upskill employees on working with AI tools. Redesign roles and provide guidance to get buy-in.

  • Emphasize transparency and trust-building

Be fully transparent about your AI use with stakeholders. A breach of trust from misuse can undermine support.