AI Literacy Series: Thinking Critically about the Systems That Shape Us with danah boyd

In this episode of the In AI We Trust? AI Literacy series, Miriam Vogel is joined by co-host Rosalind Wiseman for a conversation with danah boyd, founder of Data & Society. As a researcher at the intersection of technology, bias, and education, boyd has spent decades studying how AI systems reflect and amplify existing inequalities.

boyd challenges many of the assumptions often made about AI, urging us to think critically about how it is shaping education, social structures, and power dynamics. Rather than asking whether AI is inherently good or bad, she argues that the real question is how we, as a society, choose to use it.

From the biases embedded in search engines to the growing role of AI in learning, this episode explores why AI literacy is crucial and how we can equip people with the knowledge to engage with AI thoughtfully.

How danah boyd’s Background Shaped Her Approach to AI

As you may have expected, danah boyd got her start in technology by studying computer science, but early in her career she became fascinated by how technology shapes human behavior, particularly in online communities and social networks. The more she studied these systems, the more she realized that the biggest challenges weren’t just technical—they were social and political.

That realization led her to move beyond coding and into research on power, bias, and governance in digital spaces. She founded Data & Society to bring together experts from multiple disciplines, including computer scientists, sociologists, lawyers, and policymakers, to better understand how AI and data-driven systems impact society.

This broad, interdisciplinary approach has shaped the way she thinks about AI—not just as a technology, but as a force embedded in economics, politics, and society.

The Hidden Biases in AI: A Socio-Technical Problem

Bias in AI isn’t a glitch—it’s a reflection of the world we live in. AI systems learn from human-generated data, and that data often carries deep inequalities that AI then reinforces. A striking example of this comes from Latanya Sweeney, who discovered that Google search results were more likely to associate Black-sounding names with criminal records even when no such record existed.

This kind of bias isn’t intentional, but it’s nevertheless built into the way AI models are trained. AI models learn patterns from past data, so when past data is discriminatory, AI repeats those patterns at scale.

boyd argues that this isn’t just a technical problem; it’s a reflection of how power operates in AI development. When companies train and deploy AI systems, who is making the important decisions? Whose experiences are shaping the technology? These questions, she says, are just as important as improving the algorithms themselves.

AI and Education: Rethinking What Learning Means

Conversations about AI in education often focus on the wrong problem. The most common fear is that AI will make it too easy for students to cheat. But as boyd points out, what’s actually at stake is how we define learning itself.

“People are worried about AI making it easier to cheat, but that’s not the right panic,” she explains. “We need to ask: What does it actually mean to learn?”

For decades, education has been shaped by economic structures, political agendas, and societal expectations. Schools were designed to train workers, reinforce social hierarchies, and prepare people for specific roles in the economy. AI challenges this traditional model, making rote memorization and standardized testing feel increasingly outdated.

Rather than seeing AI as a threat to education, boyd suggests that it could be an opportunity, forcing us to reconsider what skills actually matter in a world where information is instantly accessible. Instead of emphasizing recall and repetition, we should be focusing more on critical thinking, problem-solving, and creativity.

What It Means to Be “AI Literate”

What does it actually mean to be “AI literate”? Many people assume it’s about learning to code or understanding how AI models work, but boyd argues that this is the wrong way to think about it. AI literacy isn’t just about technical knowledge—it’s about understanding the power dynamics behind AI and how it’s shaping the world.

“You know you’re AI literate when you can have complex trade-off conversations about AI, understanding its limits, its potential, and how to use it responsibly,” boyd explains.

In other words, AI literacy isn’t about mastering the technology itself. It’s about knowing when to question it. It’s about asking: Who built this system? What assumptions are baked into it? Who benefits from it? Who might be harmed?

The Environmental Cost of AI: What We’re Not Talking About

AI has a massive environmental footprint. Training AI models requires huge amounts of computing power, and as AI adoption grows, so does its energy consumption. But because the costs are absorbed by big tech companies and venture capital, most people don’t think about them at all.

“Is using generative AI for this project worth the equivalent of a round-trip flight?” boyd asks. She argues that these trade-offs need to be part of the AI conversation and not an afterthought.

As AI systems become more widely integrated into everyday life, discussions around sustainability need to catch up. Otherwise, we risk blindly expanding AI without considering its long-term impact on climate and resource consumption.

AI, Mental Health, and Social Connection

Is AI making people lonelier, or is it simply filling a void that was already there?

As AI chatbots and digital assistants become more sophisticated, more young people are turning to them for emotional support. This has sparked concerns that AI could replace real human relationships. boyd, however, argues that this fear misses the bigger issue.

“The problem isn’t that young people are using AI chatbots,” she says. “It’s why they feel so alone that they need them in the first place.”

Instead of blaming technology, boyd urges us to look at the deeper societal forces driving disconnection, from declining community spaces to the pressure to always be “productive.” AI isn’t causing loneliness, but it is revealing how much loneliness already exists.

Conclusion: What Comes Next?

Throughout this episode, boyd challenges us to move past black-and-white narratives about AI and instead ask: What kind of future are we building with these technologies?

AI is already changing education, work, and relationships, but the way we choose to integrate it into society is still in our hands. That’s why AI literacy matters, not just for policymakers or engineers, but for anyone who interacts with AI systems. Understanding AI isn’t just about understanding the technology itself—it’s about understanding our own values, priorities, and responsibilities.

As boyd says, “If we get AI literacy right, we won’t just understand the technology better, we’ll understand ourselves better.”

Listen to the full episode here: