
GELU, MMLU, & X-Risk Defense in Depth, with the Great Dan Hendrycks

2:38:31
 
Content provided by Turpentine, Erik Torenberg, and Nathan Labenz. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Turpentine, Erik Torenberg, and Nathan Labenz or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://th.player.fm/legal

Join Nathan for an expansive conversation with Dan Hendrycks, Executive Director of the Center for AI Safety and Advisor to Elon Musk's xAI. In this episode of The Cognitive Revolution, we explore Dan's groundbreaking work in AI safety and alignment, from his early contributions to activation functions to his recent projects on AI robustness and governance. Discover insights on representation engineering, circuit breakers, and tamper-resistant training, as well as Dan's perspectives on AI's impact on society and the future of intelligence. Don't miss this in-depth discussion with one of the most influential figures in AI research and safety.

Check out some of Dan's research papers:

MMLU: https://arxiv.org/abs/2009.03300 (question format and scoring sketched after this list)

GELU: https://arxiv.org/abs/1606.08415 (the activation formula is sketched after this list)

Machiavelli Benchmark: https://arxiv.org/abs/2304.03279

Circuit Breakers: https://arxiv.org/abs/2406.04313

Tamper Resistant Safeguards: https://arxiv.org/abs/2408.00761

Statement on AI Risk: https://www.safe.ai/work/statement-on-ai-risk
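
As a quick illustration of the GELU paper linked above, here is a minimal sketch of the activation function using only the Python standard library. The exact form is GELU(x) = x * Phi(x), where Phi is the standard normal CDF; the tanh approximation from the paper is shown alongside for comparison.

import math

def gelu_exact(x: float) -> float:
    # GELU(x) = x * Phi(x), where Phi is the standard normal CDF.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x: float) -> float:
    # Tanh approximation from the paper:
    # 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

for x in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"x={x:+.1f}  exact={gelu_exact(x):+.4f}  tanh={gelu_tanh(x):+.4f}")

Unlike ReLU's hard zero gate, GELU weights inputs by the Gaussian CDF, which makes it smooth and slightly non-monotonic for small negative values.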
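
For the MMLU paper linked above, here is a minimal sketch of how an MMLU-style question is typically formatted and scored. The question, choices, and helper names are illustrative assumptions, not taken from the benchmark itself; the real benchmark spans 57 subjects and is scored as plain multiple-choice accuracy.

def format_mmlu_prompt(subject: str, question: str, choices: list[str]) -> str:
    # MMLU-style formatting: question, lettered choices, then "Answer:"
    # so the model completes the prompt with a single letter.
    lines = [
        f"The following are multiple choice questions (with answers) about {subject}.",
        "",
        question,
    ]
    for letter, choice in zip("ABCD", choices):
        lines.append(f"{letter}. {choice}")
    lines.append("Answer:")
    return "\n".join(lines)

def accuracy(predictions: list[str], answers: list[str]) -> float:
    # Fraction of questions where the predicted letter matches the answer key.
    return sum(p == a for p, a in zip(predictions, answers)) / len(answers)

# Illustrative usage with a made-up question (not from the benchmark):
print(format_mmlu_prompt(
    "high school physics",
    "What is the SI unit of force?",
    ["Joule", "Newton", "Watt", "Pascal"],
))
print("accuracy:", accuracy(["B", "A"], ["B", "C"]))  # 0.5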

Apply to join over 400 Founders and Execs in the Turpentine Network: https://www.turpentinenetwork.co/

SPONSORS:

Shopify: Shopify is the world's leading e-commerce platform, offering a market-leading checkout system and exclusive AI apps like Quikly. Nobody does selling better than Shopify. Get a $1 per month trial at https://shopify.com/cognitive.

LMNT: LMNT is a zero-sugar electrolyte drink mix that's redefining hydration and performance. Ideal for those who fast or anyone looking to optimize their electrolyte intake. Support the show and get a free sample pack with any purchase at https://drinklmnt.com/tcr.

Notion: Notion offers powerful workflow and automation templates, perfect for streamlining processes and laying the groundwork for AI-driven automation. With Notion AI, you can search across thousands of documents from various platforms, generating highly relevant analysis and content tailored just for you. Try it for free at https://notion.com/cognitiverevolution

Oracle: Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds, offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive

CHAPTERS:

(00:00:00) Teaser

(00:00:48) About the Show

(00:02:17) About the Episode

(00:05:41) Intro

(00:07:19) GELU Activation Function

(00:10:48) Signal Filtering

(00:12:46) Scaling Maximalism

(00:18:35) Sponsors: Shopify | LMNT

(00:22:03) New Architectures

(00:25:41) AI as Complex System

(00:32:35) The Machiavelli Benchmark

(00:34:10) Sponsors: Notion | Oracle

(00:37:20) Understanding MMLU Scores

(00:45:23) Reasoning in Language Models

(00:49:18) Multimodal Reasoning

(00:54:53) World Modeling and Sora

(00:57:07) ARC Benchmark and Hypothesis

(01:01:06) Humanity's Last Exam

(01:08:46) Benchmarks and AI Ethics

(01:13:28) Robustness and Jailbreaking

(01:18:36) Representation Engineering

(01:30:08) Convergence of Approaches

(01:34:18) Circuit Breakers

(01:37:52) Tamper Resistance

(01:49:10) Interpretability vs. Robustness

(01:53:53) Open Source and AI Safety

(01:58:16) Computational Irreducibility

(02:06:28) Neglected Approaches

(02:12:47) Truth Maxing and xAI

(02:19:59) AI-Powered Forecasting

(02:24:53) Chip Bans and Geopolitics

(02:33:30) Working at CAIS

(02:35:03) Extinction Risk Statement

(02:37:24) Outro
