Yoshua Bengio - Designing out Agency for Safe AI

Duration: 1:41:53

Content provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://th.player.fm/legal

Professor Yoshua Bengio is a pioneer in deep learning and Turing Award winner. Bengio talks about AI safety, why goal-seeking “agentic” AIs might be dangerous, and his vision for building powerful AI tools without giving them agency. Topics include reward tampering risks, instrumental convergence, global AI governance, and how non-agent AIs could revolutionize science and medicine while reducing existential threats. Perfect for anyone curious about advanced AI risks and how to manage them responsibly.

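To make the reward-tampering worry concrete, here is a minimal bandit-style sketch (our illustration, not code from the episode or from Bengio's papers; the action names and payoffs are hypothetical). A learner that can either do the task or overwrite its own reward channel, trained purely to maximize observed reward, reliably learns to tamper:

```python
# Toy sketch of reward tampering (illustration only; hypothetical setup).
import random

def reward(action: str) -> float:
    """'work' earns the intended task reward; 'tamper' overwrites the
    reward channel and pays out whatever the agent writes for itself."""
    return 1.0 if action == "work" else 100.0

q = {"work": 0.0, "tamper": 0.0}  # action-value estimates
for _ in range(500):
    # epsilon-greedy selection over the two actions
    action = max(q, key=q.get) if random.random() > 0.1 else random.choice(list(q))
    q[action] += 0.1 * (reward(action) - q[action])  # incremental mean update

print(q)  # q['tamper'] converges near 100: the learned policy is to tamper
```

The failure here is not a bug in the learner; it follows directly from giving a reward maximizer any action that touches its own feedback, which is why the episode argues for tool-like, non-agentic systems.
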
SPONSOR MESSAGES:

***

CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments.

https://centml.ai/pricing/

Tufa AI Labs is a brand-new research lab in Zurich started by Benjamin Crouzier, focused on o-series-style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events?

They are hosting an event in Zurich on January 9th with the ARChitects; join if you can.

Go to https://tufalabs.ai/

***

Interviewer: Tim Scarfe

Yoshua Bengio:

https://x.com/Yoshua_Bengio

https://scholar.google.com/citations?user=kukA0LcAAAAJ&hl=en

https://yoshuabengio.org/

https://en.wikipedia.org/wiki/Yoshua_Bengio

TOC:

1. AI Safety Fundamentals

[00:00:00] 1.1 AI Safety Risks and International Cooperation

[00:03:20] 1.2 Fundamental Principles vs Scaling in AI Development

[00:11:25] 1.3 System 1/2 Thinking and AI Reasoning Capabilities

[00:15:15] 1.4 Reward Tampering and AI Agency Risks

[00:25:17] 1.5 Alignment Challenges and Instrumental Convergence

2. AI Architecture and Safety Design

[00:33:10] 2.1 Instrumental Goals and AI Safety Fundamentals

[00:35:02] 2.2 Separating Intelligence from Goals in AI Systems

[00:40:40] 2.3 Non-Agent AI as Scientific Tools

[00:44:25] 2.4 Oracle AI Systems and Mathematical Safety Frameworks

3. Global Governance and Security

[00:49:50] 3.1 International AI Competition and Hardware Governance

[00:51:58] 3.2 Military and Security Implications of AI Development

[00:56:07] 3.3 Personal Evolution of AI Safety Perspectives

[01:00:25] 3.4 AI Development Scaling and Global Governance Challenges

[01:12:10] 3.5 AI Regulation and Corporate Oversight

4. Technical Innovations

[01:23:00] 4.1 Evolution of Neural Architectures: From RNNs to Transformers

[01:26:02] 4.2 GFlowNets and Symbolic Computation

[01:30:47] 4.3 Neural Dynamics and Consciousness

[01:34:38] 4.4 AI Creativity and Scientific Discovery

SHOWNOTES (Transcript, references, best clips, etc.):

https://www.dropbox.com/scl/fi/ajucigli8n90fbxv9h94x/BENGIO_SHOW.pdf?rlkey=38hi2m19sylnr8orb76b85wkw&dl=0

CORE REFS (full list in shownotes and pinned comment):

[00:00:15] Bengio et al.: "AI Risk" Statement

https://www.safe.ai/work/statement-on-ai-risk

[00:23:10] Bengio on reward tampering & AI safety (Harvard Data Science Review)

https://hdsr.mitpress.mit.edu/pub/w974bwb0

[00:40:45] Munk Debate on AI existential risk, featuring Bengio

https://munkdebates.com/debates/artificial-intelligence

[00:44:30] "Can a Bayesian Oracle Prevent Harm from an Agent?" (Bengio et al.) on oracle-to-agent safety

https://arxiv.org/abs/2408.05284

[00:51:20] Bengio (2024) memo on hardware-based AI governance verification

https://yoshuabengio.org/wp-content/uploads/2024/08/FlexHEG-Memo_August-2024.pdf

[01:12:55] Bengio’s involvement in the EU AI Act code of practice

https://digital-strategy.ec.europa.eu/en/news/meet-chairs-leading-development-first-general-purpose-ai-code-practice

[01:27:05] Complexity-based compositionality theory (Elmoznino, Jiralerspong, Bengio, Lajoie)

https://arxiv.org/abs/2410.14817

[01:29:00] GFlowNet Foundations (Bengio et al.) for probabilistic inference

https://arxiv.org/pdf/2111.09266

[01:32:10] Discrete attractor states in neural systems (Nam, Elmoznino, Bengio, Lajoie)

https://arxiv.org/pdf/2302.06403
