If Nonprofits Don’t Shape AI, Someone Else Will—and We May Not Like the Outcome
- jackiedomanus
- Mar 26
- 5 min read
Every time a new wave of AI panic washes through LinkedIn, I notice the same pattern. The loudest voices are usually reacting to what technology could do rather than asking who is actually making the decisions, whose values are encoded into systems, and who gets left out when speed and scale become the primary incentives.
That tension is exactly why I wanted to host a conversation with Jim Fruchterman.
Jim is a MacArthur Genius, a Caltech‑trained physicist, the founder of multiple tech nonprofits, and the author of Technology for Good. He has spent decades moving between for‑profit AI and large‑scale social impact, and he carries an unusually grounded perspective on what technology actually does when it meets real human needs. What struck me almost immediately is that Jim doesn’t talk about AI as magic or doom. He talks about it as infrastructure, shaped by incentives, data, and intent.
And that framing changes everything.
This conversation was not about whether nonprofits should use AI. It was about what happens if they don’t, and about what kind of future we quietly design when we outsource both thinking and responsibility to purely commercial actors.
AI Has Always Been for Humans—We Just Forgot That Part
One of the most surprising moments for me was learning that many of the core AI technologies we now take for granted were originally built for people with disabilities. Voice recognition, text‑to‑speech, character recognition, and assistive reading tools all started as accessibility solutions before they became mass‑market features.
That history matters, because it undermines the idea that AI is inherently extractive. Technology doesn’t arrive with values attached; values are introduced through funding models, training data, and who shows up early enough to shape its direction.
Jim made the point clearly: when nonprofits step back from AI conversations, the systems that emerge will naturally reflect the priorities of those building them. That usually means wealthy, English‑speaking, commercially valuable populations, while entire communities, languages, and lived experiences remain underrepresented because serving them does not look attractive on a balance sheet.
If AI is trained primarily on data collected in pursuit of profit, it will optimize for profit‑seeking outcomes. That is not a moral failing of the technology. It is a design problem born of absence.
Nonprofits Have Something Tech Giants Can’t Replicate: Representative Data and Moral Permission
One of the strongest arguments Jim made is one you rarely hear acknowledged publicly. Nonprofits often hold the most representative, human, and ethically sensitive data in existence. They work with populations that are routinely ignored by commercial platforms, including people who speak non‑dominant languages, live outside Western economic centers, or exist at the margins of formal systems.
When nonprofits opt out of AI entirely, those experiences simply disappear from the training landscape. Worse, biased systems become normalized, and their errors are treated as inevitable rather than correctable.
Jim was clear that this is not about nonprofits building massive AI platforms themselves. In fact, he warned strongly against that approach. What matters is participation, boundary‑setting, and insisting on responsible patterns of use, including bias testing, data protection, and intentional deployment where technology augments humans rather than replaces them.
This is a subtle but critical distinction. AI should not replace judgment, empathy, or accountability. At its best, it removes drudgery, frees up time, and allows people to do more of the work they entered the nonprofit sector to do in the first place.
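Jim's list of responsible patterns is a set of practices, not a recipe, so any code here is necessarily my own illustration. One common way teams make "bias testing" concrete is a paired‑prompt check: run the same request twice with only one demographic detail swapped, then put both outputs in front of a human reviewer. Everything in the sketch below, including the stub `get_model_response` and the sample prompt, is hypothetical.

```python
# A minimal, hypothetical sketch of a paired-prompt bias check: send the
# same request with only one demographic detail swapped, then put both
# outputs in front of a human reviewer. `get_model_response` is a stub
# standing in for whatever AI service an organization actually uses.

def get_model_response(prompt: str) -> str:
    # Stub so the sketch runs as-is; replace with a real API call.
    return f"[model output for: {prompt}]"

def paired_bias_check(template: str, group_a: str, group_b: str) -> dict:
    """Fill one template with two groups and return both outputs side by
    side, so a reviewer can compare tone and substance directly."""
    return {
        group_a: get_model_response(template.format(group=group_a)),
        group_b: get_model_response(template.format(group=group_b)),
    }

# Example: does intake guidance change when only the language changes?
results = paired_bias_check(
    "Draft intake guidance for a client who primarily speaks {group}.",
    "English",
    "Haitian Creole",
)
for group, output in results.items():
    print(group, "->", output)
```

The point of the pattern is not automation for its own sake; it is that the comparison, and the judgment about whether a difference matters, stays with a person.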
AI Is Not a Strategy. It’s a Tool That Exposes the Quality of Thinking Behind It.
One of the themes I keep returning to in Idea Citizen conversations is that tools amplify intent. AI is no exception. When applied without clarity, it scales confusion. When applied without ethics, it accelerates harm. But when paired with thoughtful systems design, it can meaningfully change outcomes.
Jim shared examples where nonprofits used AI narrowly and intelligently, such as summarizing long counseling transcripts to reclaim human time, or delivering vetted health information in multiple languages through supervised chat systems. In each case, the technology itself was not the hero. The idea behind its application was.
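To make that first example concrete, here is a minimal sketch of what "summarizing transcripts to reclaim human time" can look like in practice. The OpenAI client, model name, and prompt are my assumptions for illustration, not the stack any of the nonprofits Jim mentioned actually use; a real deployment would also de‑identify transcripts before anything leaves the organization.

```python
# Hypothetical sketch of the transcript-summarization pattern: draft a
# summary with an AI model, then route it to a counselor for review.
# The OpenAI SDK and model name are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_summary(transcript: str) -> str:
    """Return a draft summary for human review; never auto-file it."""
    # A real deployment would de-identify the transcript before any API call.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever your org vets
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize this counseling transcript in neutral, "
                    "non-judgmental language and flag anything uncertain."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# The draft goes to the counselor, not straight into the case file:
# the model removes drudgery, the human keeps judgment and accountability.
```

Notice that the design choice, routing the draft to a person rather than a case file, is where the ethics live. The API call is the least interesting part.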
This matters deeply to how I think about ideas and masterminds.
AI cannot do sense‑making for us. It can support sense‑making, but it cannot decide what matters, what tradeoffs are acceptable, or what kind of world we are willing to build. Those are human judgments, best refined through conversation, critique, and collective thinking.
Without spaces where people can ask better questions together, we default to whatever is easiest, fastest, and most deployable.
Why This Conversation Belongs Inside an Idea Mastermind
What this conversation reinforced for me is that the future of “AI for good” will not be decided by tools alone. It will be decided by who is in the room early, asking uncomfortable questions, resisting false binaries, and sharing hard‑won perspectives across sectors.
That is exactly the role that idea‑driven masterminds are meant to play.
Not echo chambers. Not hype cycles. But structured spaces where people from different disciplines can pressure‑test assumptions, bring real data and lived experience into the conversation, and challenge linear thinking before it calcifies into policy or product.
When nonprofits, technologists, strategists, and funders are not speaking to one another, the default narrative becomes dangerously simplistic. Ideas flatten. Systems drift. Impact becomes accidental rather than intentional.
Idea Citizen exists to counter that fragmentation by creating environments where ideas can mature before they scale, and where technology is treated as a lever rather than a worldview.
What I Keep Thinking About After This Conversation
Jim said something toward the end that stuck with me: AI is just normal technology. A bad product is still a bad product, even if it has AI in it.
That line feels deceptively simple, but it carries weight. It reminds us that the question is not whether AI is good or bad, but whether we are doing the hard work of thinking clearly about its application.
If we want different outcomes, we need different conversations. And if we want different conversations, we need spaces where ideas are allowed to be incomplete, challenged, and reshaped in community.
That is where progress actually starts.
Frequently Asked Questions
Who is this conversation for?
This is for nonprofit leaders, technologists, funders, policymakers, and anyone working at the intersection of technology and social impact who wants to move beyond surface‑level debates about AI.
Is this about adopting AI or resisting it?
Neither. The conversation is about intentional use. It argues against blind adoption and blanket rejection in favor of thoughtful, ethical participation.
Do nonprofits need to build their own AI tools?
No. In most cases, they should not. Jim strongly advocates for nonprofits using mature, affordable products with appropriate safeguards rather than attempting expensive in‑house AI development.
How does this relate to Idea Citizen?
Idea Citizen focuses on idea‑driven conversation and collective thinking. This conversation illustrates why systems‑level issues like AI governance require shared dialogue, not isolated decision‑making.
Why bring this into a mastermind context?
Because AI is not a technical problem alone. It is a social, ethical, and strategic one. Mastermind‑style spaces allow those dimensions to be explored together before decisions calcify into infrastructure.