Nonprofits Governing Powerful Technologies Responsibly Part 1

It’s natural to be excited about the potential of AI to drive positive change, and also a little wary of the ethical quandaries these powerful technologies present. The nonprofit world has a unique opportunity and responsibility when it comes to shaping the future of artificial intelligence for good.

Think about it – we’re talking about machines that can analyze massive datasets to derive insights for optimizing programs and operations. Models that can generate personalized communications and creative content at scale. Intelligent agents that could one day assist with everything from fundraising to casework to disaster response efforts.

A stone walkway with ivy on the ground

Image by Alan Frijns from Pixabay

The possibilities for augmenting our social impact work are seriously mind-bending. AI could be the key to unlocking exponential progress toward our missions in ways we’re just starting to imagine.

But as we’ve seen with issues like biased facial recognition or discriminatory hiring algorithms, getting AI wrong can lead to serious real-world harms and setbacks. Especially for nonprofits serving marginalized communities, we have an ethical obligation to develop and govern these technologies responsibly.

That’s why leaders across the social sector are sounding the alarm about the need for robust, ethical AI governance frameworks. From prioritizing data privacy and consent to ensuring algorithmic fairness and accountability – there’s a lot for mission-driven organizations to consider as we explore AI’s potential.

So in this piece, we’re going to roll up our sleeves and dig into ethical AI governance best practices tailored for nonprofits. We’ll explore strategies for centering equity and inclusivity, protecting privacy and data rights, ensuring accountability through community oversight, and aligning AI adoption with your organization’s core values.

Because let’s be honest – the AI revolution is already here. And if we want to be leaders harnessing this powerful force for positive change, we need to get serious about governing it responsibly from the ground up. The future of ethical, mission-driven AI development is in our hands.

Are you ready to help shape it?

A colorful bird flying in the air


Image by Alana Jordan from Pixabay

First Pillar: Prioritizing Privacy and Data Rights

Prioritizing privacy and data rights is a critical pillar as nonprofits explore AI adoption. This is one area where we simply cannot afford to cut corners if we want to maintain stakeholder trust.

Think about all the sensitive information nonprofits handle – from donor records and client case files to volunteer data and more. Now imagine an AI system getting access to that data trove without proper governance. It’s a privacy nightmare just waiting to happen.

That’s why having robust data governance policies and technical safeguards in place is paramount before unleashing AI on your organization’s data streams. We’re talking strict protocols around data collection, storage, access controls, encryption, you name it.

As the team at the Future of Privacy Forum emphasizes, “Nonprofits must implement Privacy by Design from the very start of any AI system development to bake in data protection from the ground up.”

The Future of Privacy Forum is a great organization to follow; they keep you up to date on everything related to this pillar.

One nonprofit getting data governance right is Feeding America. Their team developed a comprehensive AI management framework that pseudonymizes all data before ingesting it into models.

Pseudonymization refers to the process of replacing identifiable data like names, addresses, etc. with artificial identifiers or pseudonyms.

This allows Feeding America to leverage AI’s analytical powers for optimizing food bank operations without exposing any individual’s private information. Note that pseudonymized data is not fully anonymized (under laws like the GDPR it still counts as personal data, since re-identification is possible with the key), but it sharply reduces re-identification risk while retaining analytical value for AI modeling.
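To make pseudonymization concrete, here is a minimal Python sketch of the general technique — this is not Feeding America’s actual implementation, and the key, field names, and record shape are purely illustrative. A keyed HMAC gives each identifier a stable pseudonym that cannot be reversed without the secret key, while non-identifying fields pass through untouched so their analytical value is preserved:

```python
import hmac
import hashlib

# Secret key held separately from the data (hypothetical value;
# in practice this would live in a secrets manager and be rotated).
PSEUDONYM_KEY = b"store-me-in-a-secrets-manager"

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable, keyed pseudonym.

    Using HMAC rather than a plain hash means the mapping cannot be
    reversed or brute-forced without the secret key, while the same
    input always yields the same pseudonym, so records still join.
    """
    digest = hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

def pseudonymize_record(record: dict, identifying_fields: set) -> dict:
    """Return a copy of the record with identifying fields replaced."""
    return {
        field: pseudonymize(str(value)) if field in identifying_fields else value
        for field, value in record.items()
    }

client = {"name": "Jane Doe", "address": "12 Oak St", "meals_received": 14}
safe = pseudonymize_record(client, {"name", "address"})
# 'meals_received' is untouched, so aggregate analysis still works.
```

Because the same input always maps to the same pseudonym, records about one person can still be linked across datasets for modeling, which is exactly the property that distinguishes pseudonymization from outright deletion of identifiers.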

Feeding America built this framework around pseudonymization as a core privacy safeguard, demonstrating robust data governance protocols and technical controls from the very start of the AI development lifecycle, rather than trying to bolt on privacy as an afterthought.

For small to mid-sized nonprofits looking to emulate this level of ethical data stewardship, the key is to make “Privacy by Design” a priority from day one of any AI project. This means:

  • Conducting thorough data mapping to understand what personal data exists and how it flows.
  • Implementing technical controls like pseudonymization, encryption, and access controls.
  • Developing clear data management policies around collection, storage, usage, and deletion.
  • Establishing transparency and consent mechanisms to uphold data rights.
  • Undergoing regular privacy reviews and audits as AI capabilities evolve.
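As one illustration of the deletion half of the policy bullet above, here is a hedged Python sketch of a retention check. The data categories and retention periods are invented for illustration only, not legal guidance — your actual schedule should come from counsel and applicable regulations:

```python
from datetime import date, timedelta

# Illustrative retention policy: maximum days to keep each data category.
# Real periods depend on your jurisdiction and legal obligations.
RETENTION_DAYS = {
    "donor_records": 7 * 365,
    "client_case_files": 3 * 365,
    "volunteer_data": 2 * 365,
}

def records_due_for_deletion(records, today=None):
    """Yield records whose retention window has elapsed.

    Each record is a dict with a 'category' and a 'collected_on' date.
    Categories without a policy are retained (and should be flagged
    for human review rather than silently kept forever).
    """
    today = today or date.today()
    for rec in records:
        limit = RETENTION_DAYS.get(rec["category"])
        if limit is not None and today - rec["collected_on"] > timedelta(days=limit):
            yield rec

records = [
    {"id": 1, "category": "volunteer_data", "collected_on": date(2020, 1, 1)},
    {"id": 2, "category": "donor_records", "collected_on": date(2023, 6, 1)},
]
due = list(records_due_for_deletion(records, today=date(2024, 1, 1)))
```

Running a check like this on a schedule, and logging what was deleted and why, is one way to turn a written retention policy into an auditable practice.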

The overarching guidance is that while nonprofit software provides databases to store constituent data, organizations still need to layer on additional security controls, encryption, pseudonymization, and other de-identification techniques to protect personal data in line with privacy laws and regulations.

Simply using an out-of-the-box nonprofit CRM is not sufficient, so research these capabilities and raise them directly with the vendors of your CRM and donor/volunteer management software.

Prioritizing data privacy and ethical data practices has to be a core tenet of nonprofit AI governance to maintain stakeholder trust. While Feeding America has an advanced approach, any organization can start small by designing privacy and ethics into their AI strategy from the ground up.

Beyond securing data, we also need mechanisms for transparency and consent around how an individual’s information may get used by AI systems. That means clear notices, opt-in/out choices, and giving people visibility into automated decision-making that impacts them directly.
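A transparency-and-consent mechanism can start very simply. The sketch below, with hypothetical names throughout, records each person’s opt-in/opt-out choice per purpose and defaults to “no processing” whenever no explicit opt-in exists, which is the safer posture for organizations serving vulnerable communities:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One constituent's consent choices, with an audit timestamp."""
    constituent_id: str
    purposes: dict = field(default_factory=dict)  # purpose -> bool
    updated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ConsentRegistry:
    """Tracks opt-in/opt-out choices and gates data use on them."""

    def __init__(self):
        self._records = {}

    def record_choice(self, constituent_id: str, purpose: str, granted: bool) -> None:
        rec = self._records.setdefault(constituent_id, ConsentRecord(constituent_id))
        rec.purposes[purpose] = granted
        rec.updated_at = datetime.now(timezone.utc)

    def may_process(self, constituent_id: str, purpose: str) -> bool:
        # Default to "no": absent an explicit opt-in, the data stays
        # out of any AI pipeline for this purpose.
        rec = self._records.get(constituent_id)
        return bool(rec and rec.purposes.get(purpose, False))

registry = ConsentRegistry()
registry.record_choice("donor-123", "ai_personalization", True)
registry.record_choice("donor-456", "ai_personalization", False)
```

Checking `may_process` before every AI use of a person’s data, and surfacing the stored choices back to that person on request, covers both halves of the transparency-and-consent obligation described above.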

The UN has even called for outright bans on certain AI applications that pose too high a privacy risk, like mass surveillance systems. For nonprofits, that could translate to avoiding deploying facial recognition or predictive policing-style models that undermine civil rights.

At the end of the day, losing public trust is a surefire way to derail your organization’s ability to drive positive impact through technology, so keeping data rights front and center is non-negotiable.

So let’s get proactive – audit your data flows, implement rigorous security controls, make privacy a design priority, and never stop putting the rights of the communities you serve first as AI evolves.

The future of ethical AI depends on it.

A ship in the water at night


Image by Kai from Pixabay

Connecting the dots…

As we’ve explored, the rise of artificial intelligence presents nonprofits with incredible opportunities to amplify our impact – but also profound ethical challenges we simply cannot ignore. The stakes are too high when it comes to upholding the core values and missions that make the social sector so vital.

Just imagine the ramifications if we fail as a sector to get ethical AI governance right from the start. Unchecked, these technologies could perpetuate societal biases, erode stakeholder trust, and even undermine the very justice and equity we’re striving to create.

Privacy violations exposing vulnerable communities’ data. Discriminatory algorithms restricting access to critical services. Lack of accountability leading to AI tools causing real-world harms with no recourse. These aren’t hypothetical risks – we’re already seeing these types of consequences play out.

That’s why small and mid-sized nonprofits have a moral obligation to be proactive stewards and champions for responsible development of AI. We can’t afford to be passive bystanders while world-shaping technologies proliferate unguided.

The good news? We’re uniquely positioned to drive this ethical evolution from a place of credibility and trust. Our missions inherently align with prioritizing human rights, equity, transparency and accountability as AI matures.

By getting out front in spearheading cross-sector collaborations, developing robust governance frameworks, and advocating for regulatory safeguards – we can help shape AI’s trajectory in a more just, inclusive direction from the ground up.

It starts with each of us implementing rigorous ethical policies, auditing mechanisms, and community oversight structures tailored to our own AI adoption journeys. But it extends to boldly using our social capital to educate, build multi-stakeholder partnerships, and promote AI literacy.

The path won’t be easy, but we owe it to the communities we serve to be ethical trailblazers on this front.

If not us as mission-driven nonprofits, then who will ensure rapidly evolving AI systems uphold our core principles around privacy, equity, accountability and more?

So let’s get proactive in cementing our sector’s role as leaders championing human-centered, ethical AI development.

The future of these world-shaping technologies depends on us helping steer them responsibly, every step of the way.