WAISI at the Capitol

A community at UW–Madison dedicated to making AI safe and beneficial for all.

Nine WAISI members in front of the US Capitol

Our Mission

We believe that AI presents risks and benefits of a magnitude unmatched by any previous technology. To realize the benefits, we must address the risks.

We contribute by:

Building and supporting a community of AI Safety specialists.

Producing impactful research across disciplines.

Informing public discourse on transformative AI.

Our goal: help humanity navigate the transition to advanced AI wisely.

"It's been great working with everyone and getting to be around people who are really interested in AI Safety and helping people get involved. It's exciting to be a part of this."

— Shawn Im, PhD Student

Numbers and Beyond

  • 10 PhD Safety Scholars
  • 6 Master's Safety Scholars
  • 50+ Undergraduate Safety Scholars
  • 30 current AI Safety Fundamentals participants
  • 130+ AI Safety Fundamentals graduates

"...A year ago the idea of facilitating a group discussion would've been hugely intimidating to me but now I find myself looking forward to my cohort sessions. This much needed nudge out of my comfort zone has shaped my growth as a leader and student..."

— Elise Fischer, Policy Team

Involvement and Impact

Photos: a speaker event; students gathered at a WAISI intro presentation; students learning about AI; a WAISI booth at the student organization fair; WAISI members working on research.

Current Projects

WAISI Technical AI Safety Workshop Program

Most AI safety communities introduce members interested in technical AI safety through the pipeline of Intro Technical Fellowship → Paper Reading Sessions → Alignment Research Engineer Accelerator (ARENA) program...

Learn more →

Transferable Adversarial Materials (TAM)

Within the past decade, small portable Unmanned Aerial Systems (UASs) operated by individual infantry units have proven to be vital battlefield assets for intelligence, surveillance...

Learn more →

Research Highlights

Towards Interpretability Without Sacrifice: Faithful Dense Layer Decomposition with Mixture of Decoders

Debate or Vote: Which Yields Better Decisions in Multi-Agent Large Language Models?

Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition

Our Members Have Collaborated With

Our Sponsors

Kairos

UW-Madison Computer Sciences