Reclaiming the Future: How WAESN Is Using AI with Transparency and Intention

By Asuka Conyer, Director of Development and Programs

At Washington Ethnic Studies Now (WAESN), we talk quite a bit about transparency… not just as a principle, but as a practice. That's no different when it comes to Artificial Intelligence (AI). If anything, transparency matters now more than ever. AI isn't neutral, and it's no bystander. It's complicated, political, and oddly enough… uncomfortably "human." Not just because of what AI is, but because the way someone uses it says a lot about who they are and what they value. So let's talk honestly about what AI looks like for us at WAESN, why we use it, and how we stay grounded in anti-racism and anti-capitalism while doing so.

How We Use AI (and Why We're Cool With Talking About It)

At WAESN, AI is never a replacement for people. It is a support tool for accessibility, clarity, and communication. For staff who choose to use it, AI helps keep our written materials clear, concise, and organized for our readers: it can summarize complex context, clean up grammar, or turn a flood of intersectional thoughts from a Zoom call into a structured report. Make no mistake, all of our material comes from real people with lived experience, community knowledge, and care.

We can't be anti-racist and anti-capitalist without advocating for disability justice. For those of us who identify as neurodivergent (ND), AI use is a personal choice that can assist in various ways.

100% of WAESN staff and 13% of WAESN Board members are neurodivergent. (Image source: Neurodiversity in the Workplace | Neurodivergent Talent)

Whether it's organizing the many hats we wear within our responsibilities or helping us make sense of our observations, AI's (slightly creepy) robotic, monotone clarity can serve as a mediator between the complex cognitive, vocal, and sensory experiences that differ among us all. For us ND folks, it's an accommodation tool that helps us navigate language processing disorders, executive dysfunction, and nervous system regulation. Learn more about AI as a tool for accessibility here (Curry, 2025).

For us, AI is a tool, not a voice. It helps amplify what's already here, and what we know is coming as we watch a new wave of technology arrive. We use it to compile, to edit, and sometimes to help us find the words that have been on the tip of our tongues for the past week! Most of all, AI helps small, grassroots organizations like ours stay afloat while working to dismantle fast-moving systems of oppression.

Let's Be Real: AI Is Already Everywhere

In truth, AI is already in almost everything we do, whether we like it or not. Like many of the daily (and inherently problematic) systems we face, most Americans interact with AI every day, typically without even realizing it (Tyson & Kikuchi, 2023). When you unlock your phone with facial recognition? That's AI. When Netflix suggests what you should watch next? That's also AI. When you take a wrong turn and Google Maps reroutes you around traffic? When your email spam filter flags those terrible spam emails? Yeah… that's all AI, too. Demonizing AI use entirely is more harmful than it might seem: apathy is dangerous to the advancement of liberation efforts, even in the face of (sometimes almost scary) technological change.
(Image source: AI in Daily Life: Examples of Artificial Intelligence)

We can't resist something we refuse to educate ourselves about, especially since living "AI-free" in the U.S. simply isn't possible for most people. The same goes for the idea that pushing others to avoid AI makes us morally pure. Instead, it only hands power over to corporations and systems that will gladly keep using it without transparency, equity, or ethics, and without our lived experiences in mind. For a grassroots organization like ours, rejecting AI outright is also rejecting the battlefield it represents. If AI is becoming one of the most powerful (and potentially dangerous) tools shaping our social, political, and economic realities, then refusing to engage with it means denying ourselves the ability to fight for justice on a new and undeniable platform of oppression. We must learn to engage with AI critically, intentionally, and collectively if we ever want to make it an effective instrument against its own inherent harms.

AI as Resistance, Not Replacement

With all that said, it would be an absolute injustice to deny the harms these technologies have inflicted on global majority communities, especially as new digital futures too often evolve into tools of oppression. Yet what if we disrupted this historical pattern and re-envisioned AI as a tool to empower? Priscila Chaves (2019), in her blog post "Resistance Strategies: Capitalistic Narratives and Anti-Racist Imaginaries for AI Futures," argues that our resistance must "embrace and subvert" technology: not reject it outright, but reimagine it through an ethical, community-driven lens. That's exactly what we mean when we talk about a decolonized AI future. Denying AI's existence doesn't protect us; it just leaves historically marginalized voices out of the conversation. Instead, WAESN is interested in using AI as a tool for disruption and accessibility: to keep creating space for marginalized voices in a fast-paced digital future that too often replicates oppression.

This wouldn't be the first time we've seen tech-assisted resistance, either! In the 1990s, the Ejército Zapatista de Liberación Nacional (Zapatista Army of National Liberation) in Chiapas, Mexico, used the early internet to sustain a transnational advocacy network. They built online networks to mobilize global solidarity and effectively bypassed the traditional media that silenced Indigenous voices (King, 2004). It was radical, it was creative, and it was one of the first times this early technology was used as a grassroots tool for liberation. Learn more about the Ejército Zapatista de Liberación Nacional here.

One of the most radical acts of resistance is envisioning a world in which we dismantle the inequities AI represents through collective engagement and inclusive conversations about AI ethics (Chaves, 2019). There is real potential in AI. It can amplify stories, connect movements, and challenge systems that were never designed for us. AI has also created an undeniable battlefield for climate justice groups, especially given its environmental harms. Consider the Climate Justice Alliance's (CJA) endorsement of the People's AI Action Plan, released in response to the Trump administration's recent AI plan. This is not about rejecting the impending doom of an AI already in the hands of capitalism.
Instead, it's about being proactive: disrupting an undeniable future through accountability, transparency, and the dismantling of continued fossil fuel use and "offset emission claims" (CJA, 2025). AI can be a radical instrument of accountability, but only if we engage with it intentionally, with moderation and care rather than cheap corporate convenience.

Our Commitment to Transparency and a Just Tech Future

At WAESN, we don't pretend to have all the answers, but we do commit to continued transparency. When we use AI, we use it with our staff, our community, and informed caution in mind. When we learn something new about its risks or possibilities, we share that too, because liberation is accessible education. AI isn't just about algorithms, data, or the undermining of human creativity. It's about power: who holds it, who's excluded from it, and how we reclaim it together.

For us, using AI ethically means centering equity, accessibility, and collective imagination. It means believing that technology can be part of a decolonized future, one where justice, not profit, drives innovation. We're not just asking how AI can serve us. We're asking how it can serve justice, and how WAESN can best serve you.

We want to hear from you. What are your thoughts on the use of AI as a tool for liberation? Comment below.

REFERENCES

Chaves, P. (2019). Resistance strategies: Capitalistic narratives and anti-racist imaginaries for AI futures. Priscila Chaves. https://www.priscilachaves.com/the-ethics-of-ai/blog-resistance-strategies

Climate Justice Alliance [CJA]. (2025). Climate Justice Alliance supports a People's AI Action Plan. https://climatejusticealliance.org/climate-justice-alliance-supports-a-peoples-ai-action-plan/

Curry, R. (2025). People with ADHD, autism, and dyslexia say AI agents are helping them succeed at work. CNBC. https://www.cnbc.com/2025/11/08/adhd-autism-dyslexia-jobs-careers-ai-agents-success.html

King, M. (2004). Cooptation or cooperation: The role of transnational advocacy organizations in the Zapatista movement. Sociological Focus, 37(3), 269–286. http://www.jstor.org/stable/20832239

Tyson, A., & Kikuchi, E. (2023). Growing public concern about the role of artificial intelligence in daily life. Pew Research Center. https://www.pewresearch.org/short-reads/2023/08/28/growing-public-concern-about-the-role-of-artificial-intelligence-in-daily-life/
Comment from Adam:

I'm disappointed and saddened by this post. I learned a lot in the Ethnic Studies course I completed through WAESN and have great respect for your work and its liberatory impact for students and communities. However, this post does not give me the sense that you have fully investigated the consequences of your embrace of "AI" tools.

As an educator, I cannot in good conscience support an industry and products that: directly harm children psychologically and cognitively; exploit vulnerable workers in the US and abroad to screen horrific, traumatizing content so as to protect end users; blatantly violate copyright protections for artists, writers, and educators; massively increase use of scarce fresh water resources; slow and even reverse the transition away from fossil fuels; increase local air pollution near data centers (which themselves are usually located in highly impacted communities); further concentrate economic power into the hands of a few massive corporations; provide an enormously powerful tool of surveillance and propaganda to corporate and state actors; override state laws through cooperation with the Trump administration; and promise a dystopian future that's the latest iteration of imperialist, white supremacist culture.

While "AI" products can certainly increase accessibility for end users, when considering even this partial list of harms, I would hope that anyone dedicated to liberation would first seek out less harmful alternatives and work hard to strictly limit use of hyper-scaled Large Language Models like ChatGPT and similar products where no alternative exists. Surely, liberation for all people, including neurodivergent people, requires us to challenge our understanding and acceptance of difference, and to promote the value of (neuro-)diversity for all communities, so that we can dismantle the structures that limit and oppress, not find assimilatory shortcuts.

Your post seems to be an argument against Audre Lorde's "The Master's Tools Will Never Dismantle the Master's House," except you don't engage with Lorde's objections and reasoning; I'd recommend you give it another read.

Please reconsider your use of "AI," and please give more depth of thought to the consequences of that use, including the example you set for others engaged in liberatory work. Thank you for your time, and best wishes in your work.
Reply from WAESN:

Thank you for your thoughtful response, Adam. We do consider the harmful impacts you've mentioned, including the environmental impacts. There are examples in other countries where these impacts are being minimized, and we should be pushing for the implementation of those strategies in the U.S. We also believe that education about this technology is the solution to its harmful impacts on young people; ignoring and avoiding it will not lead to youth discontinuing their use of it.

As for the benefits for neurodivergent users, part of "acceptance of difference" is accepting that many neurodivergent folks will always need accommodations to support their success in a neurotypically dominated world. Acceptance will not change that need.

In the current political climate, nonprofits like ours that are working toward racial justice are struggling to stay afloat. Grants are gone. Contracts have dried up. Volunteers are spread thin with all of the emergency efforts taking place to protect human rights. If we want to continue to fight for racial justice in education, we need to use every tool at our disposal.