Programs
🤖 AI Technical Safety Fundamentals
Applications closed!
Every semester, WAISI runs an 8-week introductory reading group on AI safety. We’ll cover topics like neural network interpretability, learning from human feedback, goal misgeneralization in reinforcement learning agents, and eliciting latent knowledge.
The intro program meets weekly in small groups, where you'll read material, complete interactive activities, and participate in discussions. Each session runs for two hours, with at most one hour of preparation work outside of session. Review our handbook for further details.
See the previous semester’s curriculum here. Note that we will be significantly overhauling this curriculum for the Fall 2024 semester.
Applications will open on September 10th and will be linked on this website and in our Discord. To be notified, express your interest here. Applications are typically competitive: last fall semester, we received around 200 applications and accepted around 60. We expect to send out decisions on October 5th, with cohorts starting the week of October 6th.
If you’re interested in AI policy or governance, we recommend the policy intro program! It is possible to participate in both programs.
🏛️ AI Policy Safety Fundamentals
Applications closed!
Every semester, WAISI runs an 8-week introductory reading group on the foundational policy and governance issues posed by the development of advanced AI systems. The program introduces students interested in AI policy and governance to risks from advanced AI, discussing questions such as:
- How much progress in AI should we expect over the next few years?
- What are the risks associated with the misuse and misalignment of advanced AI systems?
- How can regulators audit frontier AI systems for potentially dangerous capabilities?
- How could novel hardware mechanisms prevent malicious or irresponsible actors from creating powerful AI models?
The intro program meets weekly in small groups, where you'll read material, complete interactive activities, and participate in discussions. Each session runs for two hours, with at most one hour of preparation work outside of session. Review our handbook for further details.
See the previous semester’s curriculum here. Note that we will be significantly overhauling this curriculum for the Fall 2024 semester.
Applications will open on September 10th and will be linked on this website and in our Discord. To be notified, express your interest here. Applications are typically competitive: last fall semester, we received around 25 applications and accepted around 15. We expect to send out decisions on October 5th, with cohorts starting the week of October 6th.
If you’re interested in the technical side of AI safety, we recommend the technical intro program! It is possible to participate in both programs.
🧑‍💻 AI Technical Safety Scholars
ⓘ Reserved for AI Technical Safety Fundamentals graduates or UW-Madison graduate students
Applications closed!
Technical Safety Scholars is one of our core membership groups, focused on upskilling in AI safety research. Participants meet weekly for two hours to read papers, develop technical research abilities, and participate in our research network, with free lunch or dinner provided.
Before each meeting, members usually vote on which of several recent AI safety papers they'd like to read and discuss. Some weeks feature other activities, including events with professors, researchers, and other professionals. Technical Safety Scholars also complete weekly assignments based on the ARENA curriculum to build technical research skills.
Applications for this program are competitive, and are open near the end of each semester. Applicants are typically graduates from our technical intro program, students who have completed CS 762, or graduate students with relevant backgrounds.
View the handbook for more details about this program.
🗳️ AI Policy Safety Scholars
ⓘ Reserved for AI Policy Safety Fundamentals graduates or UW-Madison graduate students
Applications closed!
Policy Safety Scholars is one of our core membership groups, focused on advancing responsible AI policy. Participants meet weekly for two hours to review legislative proposals such as SB 1047 and the EU AI Act, discuss careers in policy, and talk with guest speakers, with free lunch or dinner provided.
Before each meeting, members usually vote on which of several recent AI policy papers they'd like to read and discuss. Some weeks feature other activities, including events with professors, researchers, and other professionals.
Applications for this program are competitive, and are open near the end of each semester. Applicants are typically graduates from our policy intro program, law students, or graduate students with relevant backgrounds.
View the handbook for more details about this program.
Infographic designed by Sophie Fellinger.