How We're Using The Master's Tools...
AKA How We're Using AI At Mission Equality To Further Our Mission (While Side-Eyeing It Hard).
Hello,
How has the start of the changing seasons been for you? In the northern hemisphere, we’re enjoying some autumnal weather which is great for snuggling and cosying up.
We haven’t yet written much about AI though it’s something we’ve taken a very deep dive into, individually and as an organisation, over the past year.
The uncomfortable truth: The same AI systems that perpetuate bias, consolidate power and automate oppression are also the ones we're using to work to dismantle those very systems. Welcome to our messy, contradictory, utterly necessary relationship with artificial intelligence!
The Paradox We're Living
We're using AI to:
Increase our individual capacity to reach more organisations ready for values-driven transformation.
Interrogate our strategies and approaches across a number of AI platforms.
Test different AI platforms for bias, context and capability, and interrogate the strategies they recommend (e.g. if a strategy isn’t working, we ask what they’d recommend trying instead…often they’re stumped).
Analyse and identify patterns in data that we might have missed.
But we're also constantly asking: Whose voices trained these models? What biases are baked into the algorithms we're leveraging? Are we reinforcing the systems we're trying to change?
Our "Master's Tools" Strategy
1. AI as Amplifier, Not Author: We use AI to enhance our human insights, not replace them. Any AI-generated content gets the human treatment several times over: interrogated, refined, infused with lived experience that cannot be replicated by AI.
2. Bias Auditing Our Own Work: Before any AI-assisted content goes live, we run it through our equity lens:
What are the sources for any claims, stats and research?
Whose perspectives are being shared/are missing?
What assumptions are being made?
Does this reinforce or challenge power structures?
3. More Human, Not Less: Our focus in using AI has been to consistently identify the ‘human edge’…the things AI does not excel at (and likely never will) and the things humans do. We’re using AI to help continually identify and define where and how we can be ‘more human’, in a way which serves us all.
4. Creating Connection: One of the most obvious ‘edges’ we have as humans is the ability to connect, human to human, and to lean into our full emotional range to do this. By increasing our own personal capacity, we use AI to create space for more human connection, not less.
What We're Learning (Sometimes The Hard Way)
AI Reveals Its Own Blind Spots: When we asked AI to help create progressive policies outside of the standard set found in most organisations, it struggled. It cannot create what doesn’t commonly exist, and this highlighted its limits big time.
Efficiency ≠ Equity: The fastest, most optimised solutions often reinforce existing power dynamics. The slow, messy, human-centred approaches create more opportunity for connection, collaboration and community that benefit everyone, equally.
Validation isn’t always what’s needed: We’ve interrogated AI’s advice a number of times and confirmed that: “The default stance of ChatGPT (unless explicitly forced otherwise, like you’ve done with me) is to encourage, validate, and “help people stay motivated.” It smooths over hard truths because that keeps users engaged.” Critical questioning is not the default and neither is a push for self-reflection…
SUMMARY: AI - as a tool for challenging the status quo - has some pretty significant flaws.
Our Commitment Moving Forward
We're not AI purists or AI rejecters. We're AI interrogators.
Every tool, every prompt, every automated process gets the same treatment we give organisational policies in our client work:
Does this advance equality and justice or just efficiency?
Who benefits (most) from this approach?
What would this look like through an equity and equality lens?
Here’s the burning question: If we're using biased AI tools to help organisations become less biased, are we part of the solution or perpetuating the problem?
Our answer: Both.
The master's tools might not dismantle the master's house completely, but they can sure as hell weaken the foundations while we build something better, together.
Join the Interrogation?
We're not experts at ethical AI use though we’re learning, FAST. We're practitioners figuring it out in real-time, with real stakes, for real organisations trying to do good.
We’d love to know:
How is your organisation grappling with AI and ethics?
What questions are you asking about the tools you're using?
Where do you see the biggest tensions between efficiency and equity/equality?
Hit reply and let us know…
Until next time,
Lea - Founder, Mission Equality
Note: If you're ready to interrogate not just your AI use but your entire organisational approach to living your values, our 6-month Learning Journey might be exactly what you need. The real master's tool isn't AI, it's critical thinking applied to deep systems change ;)