Program at a Glance

Day 1

Registration 08:30 - 09:30
Opening Ceremony 09:30 - 09:50
Congratulatory Address
Sang Hoon Song
Head of Secretariat, Presidential Council on National AI Strategy
Welcome Remarks
Myuhng Joo Kim
Executive Director, Korea Artificial Intelligence Safety Institute
Photo Session and Venue Preparation 09:50 - 10:00
Session 1  AISI Network | 10:00 - 11:00
AI Safety Governance and AISI: Where Are We and What Comes Next?
Advancing AI Safety in Practice: Achievements and New Initiatives from Japan AISI
Akiko Murakami
Executive Director, Japan AI Safety Institute
Singapore’s Approach Towards AI Assurance and Testing
Vanessa Wilfred
Deputy Director, AI Governance and Safety at the Infocomm Media Development Authority of Singapore
Safe and Trustworthy AI: EU Approach and International Safety Cooperation
Lucilla Sioli
Director, EU AI Office
The Efforts of Korea AISI for a Safer AI Ecosystem
Myuhng Joo Kim
Executive Director, Korea Artificial Intelligence Safety Institute
Round Table 11:00 - 11:45
Moderator : Se Ah Park (Director, AI Safety & Trust Team at the Presidential Council on National AI Strategy)
Panel : Session 1 Speakers, Dr. Rainer Wessely (Counsellor for Digital and Research, Delegation of the EU to the Republic of Korea)
Luncheon 11:45 - 13:15
Session 2   AI Model Developers | 13:15 - 14:15
Safe AI Development and Big Tech: Make AI Safe or Make Safe AI?
Frontier AI Safety Frameworks and Voluntary Commitments
Michael Sellitto
Head of Global Affairs, Anthropic
Frontier Safety Practices and Evals
Lewis Ho
Research Scientist, Google DeepMind
NAVER's Journey to AI Safety: Continuous Innovation in Technology and Policy
Sang Doo Yun
Research Director, NAVER CLOUD AI LAB
Operationalizing AI Safety: Governance and Practice at LG AI Research
You Chul Kim
Head of Strategy, LG AI Research
Round Table 14:15 - 15:00
Moderator : Kyung Ho Song (Senior Researcher, Korea Artificial Intelligence Safety Institute)
Panel : Session 2 Speakers
Coffee Break 15:00 - 15:15
Session 3   AI Model Evaluators | 15:15 - 16:15
Applied Safety and Organizational Readiness
Frontier AI Risk Management in Practice
Henry Papadatos
Managing Director, SaferAI
AI Risk Reporting
Abra Ganz
Geostrategic Dynamics Team Lead, Center for AI Risk Management & Alignment (CARMA)
From Research to Policymaking: Rethinking Our Approach to Benchmarking
Max Fenkell
Global Head of Government Relations, Scale AI
Round Table 16:15 - 17:00
Moderator : Joon Ho Kwak (Team Leader, Telecommunications Technology Association)
Panel : Session 3 Speakers
Break 17:00 - 17:15
Session 4   Korea AISI | 17:15 - 18:00
Korea AISI in Dialogue: Policy, Evaluation, and Research in Practice
Moderator : Myuhng Joo Kim (Executive Director, Korea Artificial Intelligence Safety Institute)
Panel :
  • Minn Seok Choi (Assistant Director, AI Safety Policy and International Collaboration Section, Korea AI Safety Institute)
  • Ki Hyuk Nam (Assistant Director, AI Safety Framework Section, Korea AI Safety Institute)
  • Sung Won Yi (Assistant Director, AI Safety Research Section, Korea AI Safety Institute)
Dinner (Invitation only) 18:30 - 20:30

Day 2

Registration 08:30 - 09:30
Opening 09:30 - 09:45
Session 1 | 09:45 - 10:30
Systemic Evaluation and Benchmarking in Practice
Bridging Safety Research to Safer Products with Incidents, Benchmarks, and Audits
Sean McGregor
Agentic Lead, MLCommons
Introducing METR’s Third-party Risk Assessment (3PRA) Initiative
Sami Jawhar
Head of Engineering, METR
TBD
Nitarshan Rajkumar
International Policy Lead, Anthropic
Break 10:30 - 10:45
Session 2 | 10:45 - 11:30
Systemic Evaluation and Benchmarking in Practice
Benchmarking at Epoch AI
Jean-Stanislas Denain
Senior Researcher, Epoch AI
Forward-Looking Models for AI Risk Assessment
Richard Mallah
Principal AI Safety Strategist, Future of Life Institute (FLI)
Evaluating Consistent Human Values in Language Models
Fazl Barez
Senior Research Fellow, Oxford University
MoU Ceremony & Luncheon 11:30 - 13:30
Red-teaming Registration Confirmation 13:30 - 14:30
Red-teaming Session (1) | 14:30 - 16:00
Red-teaming for Frontier AI Safety
Break 16:00 - 16:30
Red-teaming Session (2) | 16:30 - 18:00
Red-teaming for Frontier AI Safety