Washington Ethnic Studies Now

Reclaiming the Future: How WAESN Is Using AI with Transparency and Intention

By Asuka Conyer, Director of Development and Programs

At Washington Ethnic Studies Now (WAESN), we talk a lot about transparency, not just as a principle, but as a practice. It's no different when it comes to Artificial Intelligence (AI). If anything, transparency matters now more than ever.

AI isn't a neutral party, nor a bystander. It's complicated, political, and oddly enough, uncomfortably "human." Not just because of what AI is, but because the way someone uses it says a lot about who they are and what they value.

So, let’s talk honestly about what AI looks like for us at WAESN, why we use it, and how we stay grounded in anti-racism and anti-capitalism while doing so. 

At WAESN, AI is never a replacement for people. It is a support tool for accessibility, clarity, and communication.

For those of our staff who choose to use AI, it helps keep our written materials clear, concise, and organized for our readers: summarizing complex context, tightening grammar, or turning a flood of intersectional thoughts on a Zoom call into a structured report. Make no mistake: all of our material comes from real people with lived experience, community knowledge, and care.

We can't be anti-racist and anti-capitalist without advocating for disability justice. For those of us who identify as neurodivergent (ND), AI use is a personal choice that can assist in various ways.

100% of WAESN staff and 13% of WAESN Board Members are Neurodivergent. Image source: Neurodiversity in the Workplace | Neurodivergent Talent

Whether it's organizing the many hats we wear within our responsibilities, or helping us make sense of our observations, sometimes AI's (slightly creepy) robotic monotone clarity serves as a mediator between the complex cognitive, vocal, and sensory experiences that differ among us all. For us ND folks, it's an accommodation tool that helps us navigate language processing disorders, executive dysfunction, and nervous system regulation.

For us, AI is a tool, not a voice. It helps amplify what's already here, and what we know is coming as we observe a new wave of technology. We use it to compile, to edit, and sometimes to help us find the words that have been on the tip of our tongues for the past week!

Most of all, AI helps small, grassroots orgs like ours keep pace as we work to dismantle fast-moving systems of oppression.

In all truth, AI is already in almost everything we do, whether we like it or not. Like many of the daily (and inherently problematic) systems we navigate, most Americans interact with AI every day, typically without even realizing it (Tyson & Kikuchi, 2023).

When you unlock your phone with facial recognition? That’s AI. 

When Netflix suggests what you should watch next? That’s also AI. 

Or maybe you take a wrong turn, and Google Maps reroutes you around traffic? 

How about when your email spam filter flags those terrible spam emails? 

Yeah… that’s all AI, too.

Demonizing AI entirely is more harmful than you might think. Apathy is dangerous to the advancement of liberation efforts, even in the face of (sometimes almost scary) technological change.

We can't resist something we refuse to educate ourselves about, especially considering that, in all truth, living "AI-free" in the U.S. isn't possible for most people. Nor does pushing others to avoid it make us morally pure. Instead, it only hands power over to corporations and systems that will gladly continue to use it without transparency, equity, or ethics, and without our lived experiences in mind.

To a grassroots organization like ours, to outright reject AI is also to reject the battlefield it represents. If AI is becoming one of the most powerful (and potentially dangerous) tools shaping our social, political, and economic realities, then refusing to engage with it means denying ourselves the ability to fight for justice on a new, undeniable platform of oppression. 

We must learn to engage with AI critically, intentionally, and collectively if we ever want to make it an effective instrument against its own inherent harms.

With all that said, it would be an absolute injustice to deny the inherent harms inflicted upon global majority communities, especially as new digital futures too often evolve into tools of oppression. Yet what if we disrupted this historical pattern and re-envisioned AI as a tool to empower?

Priscila Chaves (2019), in her blog post Resistance Strategies in the Age of AI, argues that our resistance must "embrace and subvert" technology: not rejecting it outright, but reimagining it through an ethical, community-driven lens.

That’s exactly what we mean when we talk about a decolonized AI future. To deny AI’s existence doesn’t protect us, it just leaves historically marginalized voices out of the conversation. Instead, WAESN is interested in using AI as a tool for disruption and accessibility: 

to continue creating space for marginalized voices in a fast-paced digital future that too often replicates oppression.

This wouldn't be the first time we've observed tech-assisted resistance, either! In the 1990s, the Ejército Zapatista de Liberación Nacional (Zapatista Army of National Liberation) in Chiapas, Mexico, used the early internet to sustain a transnational advocacy network (King, 2004). They built online networks to mobilize global solidarity and effectively bypassed traditional media that silenced Indigenous voices. It was radical, it was creative, and it was one of the first times this early technology was used as a grassroots tool for liberation.

One of the most radical acts of resistance is envisioning a world in which we dismantle the inequities that AI represents, through collective engagement and inclusive conversations about AI ethics (Chaves, 2019). There is real potential in AI: it can amplify stories, connect movements, and challenge systems that were never designed for us.

AI has also created an undeniable battlefield for climate justice groups, especially given AI's inherent environmental harms.

Yet, take a look at the Climate Justice Alliance's (CJA's) endorsement of the People's AI Action Plan in response to the Trump administration's recent AI plan. Here, it's not about rejecting an AI future that is already in the hands of capitalism. Instead, it's about being proactive in disrupting that future through accountability, transparency, and a dismantling of continued fossil fuel use and "offset emission claims" (CJA, 2025).

AI can be a radical instrument of accountability only if we engage with it intentionally, with moderation and care, not out of trashy corporate convenience.

At WAESN, we don’t pretend to have all the answers, but we do commit to continued transparency. When we use AI, we use it with our staff, community, and informed caution in mind. 

When we learn something new about its risks or possibilities, we share that too, because liberation is accessible education. 

AI isn’t just about algorithms, data, or undermining the depth of our human creativity. It’s about power; who holds it, who’s excluded from it, and how we reclaim it together.


For us, using AI ethically means centering equity, accessibility, and collective imagination. It means believing that technology can be part of a decolonized future.

One where justice—not profit—drives innovation.

We’re not just asking how AI can serve us.

We're asking how it can serve justice, and how WAESN can better serve you.

Chaves, P. (2019). Resistance strategies: Capitalistic narratives and anti-racist imaginaries for AI futures. Priscila Chaves. https://www.priscilachaves.com/the-ethics-of-ai/blog-resistance-strategies

CJA. (2025). Climate Justice Alliance supports a people’s AI action plan. Climate Justice Alliance. https://climatejusticealliance.org/climate-justice-alliance-supports-a-peoples-ai-action-plan/

Curry, R. (2025). People with ADHD, autism, and dyslexia say AI agents are helping them succeed at work. CNBC. https://www.cnbc.com/2025/11/08/adhd-autism-dyslexia-jobs-careers-ai-agents-success.html

King, M. (2004). Cooptation or cooperation: The role of transnational advocacy organizations in the Zapatista movement. Sociological Focus, 37(3), 269–286. http://www.jstor.org/stable/20832239

Tyson, A., & Kikuchi, E. (2023). Growing public concern about the role of artificial intelligence in daily life. Pew Research Center. https://www.pewresearch.org/short-reads/2023/08/28/growing-public-concern-about-the-role-of-artificial-intelligence-in-daily-life/
