We are collaborating with the Curtin Machine Learning Club to host an evening focused on risks from advanced artificial intelligence. Learn whether our final invention is more likely to usher in utopia or doom. Guest speakers will argue that we cannot rule out doom scenarios, and will show the steps being taken now, in research and policy, to make advanced AI safe and beneficial. There will be free books, free pizza, and free discussion. This event is suitable for people at all stages of knowledge - so bring your friends!
Speakers:
David Quarel is a PhD student at the Australian National University, supervised by Marcus Hutter. He recently worked as a Research Assistant at Cambridge University. David is interested in AI alignment, reinforcement learning, and universal artificial intelligence. David is also interested in teaching, and has served as teaching staff for ARENA, an AI safety upskilling program.
David will be giving an introduction to technical AI safety. Topics covered will include:
- A high-level overview of how a transformer works
- A summary of some interesting research in the area, including
  - Automated search for jailbreak prompts
  - Discovery of, and intervention on, LLM world models
  - Interpretability via feature maximisation of neurons
  - Empirical demonstrations of reward hacking and goal misgeneralisation
Greg Sadler is the Secretary of Effective Altruism Australia and CEO of Good Ancestors Policy. Good Ancestors develops and advocates for policies aimed at solving this century's most challenging problems. Previously, Greg spent 15 years in the Australian Public Service. He holds a BA/LLB (Hons) from ANU, where he majored in philosophy.
Greg will be talking about the regulation and governance of AI systems. The talk will cover the journey in Australia so far, developments overseas, and possible future directions.