
A community at UW–Madison dedicated to making AI safe and beneficial for all.
Our Mission
The Wisconsin AI Safety Initiative works to understand and mitigate the risks of advanced AI through strategic collaboration and rigorous inquiry.
We pursue this mission through three integrated pillars:
• Advancing Public Discourse. We leverage our platform at a leading research institution to shape informed perspectives on AI futures, catalyze substantive debate, and engage decision-makers across sectors.
• Driving Research and Innovation. We conduct meaningful research projects that address critical questions in AI safety, bridging technical analysis with policy implications and fostering collaboration across disciplines.
• Strengthening Community. We cultivate an enduring intellectual culture where collaborative rigor, critical analysis, and shared purpose drive collective excellence.
Addressing AI safety demands expertise spanning technical research, policy, and ethics. As AI reshapes our technological landscape, we are committed to ensuring its responsible development and deployment.
"It's been great working with everyone and getting to be around people who are really interested in AI Safety and helping people get involved. It's exciting to be a part of this."
— Shawn Im, PhD Student
Numbers and Beyond
- 10 PhD Safety Scholars
- 6 Masters Safety Scholars
- 50+ Undergraduate Safety Scholars
- 30 Current AI Safety Fundamentals participants
- 130+ AI Safety Fundamentals graduates
"...A year ago the idea of facilitating a group discussion would've been hugely intimidating to me but now I find myself looking forward to my cohort sessions. This much needed nudge out of my comfort zone has shaped my growth as a leader and student..."
— Elise Fischer, Policy Team
Involvement and Impact
- 9 WAISI members were flown to Washington, DC, to participate in a Congressional Exhibition on Advanced AI.
- Contributed to Wisconsin's 2023 Assembly Bill 664, which requires disclosing AI-generated material in political ads.
- Hosted speakers from Google DeepMind, Anthropic, Model Evaluation and Threat Research (METR), the Center for a New American Security (CNAS), and the Horizon Institute for Public Service.
- Members in 12+ research labs on campus. See our research page.
- Collaborated with professors from the School of Computer, Data, and Information Sciences, the School of Education, the School of Business, and the Department of Philosophy.

Opportunities
Technical Fundamentals
The technical track of AI Safety Fundamentals is an eight-week research-oriented reading group on technical AI safety. Topics include reward specification, generalization, interpretability...
Learn more →
Policy Fundamentals
The policy track of AI Safety Fundamentals is an eight-week reading group on the foundational governance and policy challenges posed by advanced AI systems. Topics include AI harms...
Learn more →

Current Projects
WAISI Technical AI Safety Workshop Program
Most AI safety communities introduce members interested in technical AI safety through the pipeline of Intro Technical Fellowship → Paper Reading Sessions → Alignment Research Engineer Accelerator (ARENA)...
Learn more →
Transferable Adversarial Materials (TAM)
Over the past decade, small, portable Unmanned Aerial Systems (UASs) operated by individual infantry units have proven to be vital battlefield assets for intelligence, surveillance...
Learn more →
Research Highlights

Towards Interpretability Without Sacrifice: Faithful Dense Layer Decomposition with Mixture of Decoders

Debate or Vote: Which Yields Better Decisions in Multi-Agent Large Language Models?
