AI Literacy Series: Building Trust and Understanding in AI with Dewey Murdick

In a new episode of In AI We Trust, Dewey Murdick, Executive Director of Georgetown’s Center for Security and Emerging Technology (CSET), joined host Miriam Vogel to discuss his work in AI literacy, emerging technology analysis, and decision-making support for policymakers.

Dewey’s Background and Expertise

Dewey is the Executive Director of CSET, where he advises US and international policymakers and organizations on policy involving AI and emerging technologies. He serves as an advisor to several organizations, including the OECD Network of Experts on AI. Previously, Dewey was Director of Science and Analytics at the Chan Zuckerberg Initiative, where he led metric development, data science, and machine learning. Dewey also served as Chief Analytics Officer and Deputy Chief Scientist at the Department of Homeland Security, where he held a key oversight role in the Department’s research and development portfolio. His career path, initially rooted in computational physics, took a significant turn after 9/11, when he recognized the profound societal impact of technology.

The Importance of AI Literacy

A core theme of the discussion is the need for AI literacy across all levels of society. Murdick emphasizes that AI should not be an elite domain confined to PhDs and technical experts; it should be accessible to a broad audience, including those without formal technical training. He stresses that AI’s benefits and risks must be well understood by policymakers, industry leaders, and the general public to ensure its responsible development and application.

Building Trust in AI Systems

One of the key issues Murdick highlights is trust in AI systems. He advocates for structured AI incident reporting to help companies share information about hazards and mitigation strategies. This transparency, Miriam and Dewey agree, coupled with active governmental involvement, is crucial for fostering trust in AI technologies. His approach to AI governance, outlined in a recent CSET report, revolves around three core principles:

  1. Knowing the terrain of AI risk and harm
  2. Preparing humans to capitalize on AI
  3. Preserving adaptability and agility in policy development

Role of AI in Education

The conversation also explores the broader implications of AI on education. Murdick notes that traditional education systems often fail to accommodate diverse learning styles, leading many young people to disengage. He sees AI as a tool for personalized learning, allowing individuals to explore knowledge in ways that align with their strengths. However, he warns that AI should complement, not replace, human interaction in learning environments.

Global AI Governance and National Security

On the topic of international AI policy, Murdick provides insights into the complexities of global AI governance. He underscores the importance of balancing national security concerns with collaborative efforts, particularly in multi-party international contexts where consensus-building is slow. He highlights the evolving norms around AI use in national security, noting a significant shift in military attitudes towards autonomous systems since 2018.

Murdick also acknowledges concerns among young people about the impact of AI on employment and its potential role in military conflicts. He reassures listeners that the U.S. defense community is taking ethical considerations seriously, with regulations in place to ensure responsible AI deployment. However, he emphasizes the need for continued vigilance and adaptive policymaking to address the rapid evolution of AI technologies.

AI as an Educational Companion

Looking ahead, Murdick expresses optimism about AI as an educational companion. He envisions AI tools that can remember and contextualize past lectures and books, helping individuals retain and apply knowledge more effectively. While acknowledging the challenges AI presents, he remains hopeful that, with thoughtful governance and increased literacy, AI can be harnessed as a force for societal good.

The Future of AI Policy

Throughout the discussion, Murdick emphasizes the importance of an evidence-based, iterative approach to AI policymaking. Rather than succumbing to paralysis over AI’s vast implications, he advocates for incremental steps that maximize learning and adaptation. His perspective underscores the need for AI governance that is both pragmatic and forward-thinking, ensuring that technological advancements align with human values and societal needs.

Enjoy the episode here: