
Content provided by Turpentine, Erik Torenberg, and Nathan Labenz. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Turpentine, Erik Torenberg, and Nathan Labenz or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://th.player.fm/legal

Guaranteed Safe AI? World Models, Safety Specs, & Verifiers, with Nora Ammann & Ben Goldhaber

1:46:15
 
Manage episode 429297062 series 3452589

Nathan explores the Guaranteed Safe AI Framework with co-authors Ben Goldhaber and Nora Ammann. In this episode of The Cognitive Revolution, we discuss their groundbreaking position paper on ensuring robust and reliable AI systems. Join us for an in-depth conversation about the three-part system governing AI behavior and its potential impact on the future of AI safety.
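The three-part system discussed in the episode pairs a world model, a safety specification, and a verifier. As a rough illustration of how those pieces fit together, here is a minimal toy sketch; all names and the toy speed-limit example are illustrative assumptions, not taken from the position paper itself.

```python
# Toy sketch of the three GSAI components: a world model predicts the outcome
# of a proposed action, a safety specification says which outcomes are
# acceptable, and a verifier approves only actions whose predicted outcomes
# satisfy the spec. All names here are illustrative, not from the paper.

from dataclasses import dataclass
from typing import Callable

@dataclass
class WorldModel:
    # Maps (state, action) to a predicted next state.
    predict: Callable[[int, int], int]

@dataclass
class SafetySpec:
    # Returns True if a state is considered safe.
    is_safe: Callable[[int], bool]

def verifier(model: WorldModel, spec: SafetySpec, state: int, action: int) -> bool:
    """Approve the action only if the predicted outcome satisfies the spec."""
    return spec.is_safe(model.predict(state, action))

# Toy instantiation: state is a speed in km/h, an action changes the speed,
# and the spec forbids exceeding 100 km/h.
model = WorldModel(predict=lambda speed, delta: speed + delta)
spec = SafetySpec(is_safe=lambda speed: 0 <= speed <= 100)

print(verifier(model, spec, state=90, action=5))   # True: 95 km/h is within spec
print(verifier(model, spec, state=90, action=20))  # False: 110 km/h violates spec
```

The point of the sketch is the division of labor: the AI system proposing actions never needs to be trusted directly, because the verifier gates every action against the spec as evaluated under the world model.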

Apply to join over 400 founders and execs in the Turpentine Network: https://hmplogxqz0y.typeform.com/to/JCkphVqj

RECOMMENDED PODCAST:

🎙️ Second Opinion - A new podcast for health-tech insiders from Christina Farr of the Second Opinion newsletter. Join Christina Farr, Luba Greenwood, and Ash Zenooz every week as they challenge industry experts with tough questions about the best bets in health-tech.

Apple Podcasts: https://podcasts.apple.com/us/podcast/id1759267211

Spotify: https://open.spotify.com/show/0A8NwQE976s32zdBbZw6bv

-

🎙️ History 102 with WhatifAltHist

Every week, WhatifAltHist creator Rudyard Lynch and Erik Torenberg cover a major topic in history in depth -- in under an hour. This season will cover classical Greece, early America, the Vikings, medieval Islam, ancient China, the fall of the Roman Empire, and more.

Subscribe on Spotify: https://open.spotify.com/show/36Kqo3BMMUBGTDo1IEYihm

Apple: https://podcasts.apple.com/us/podcast/history-102-with-whatifalthists-rudyard-lynch-and/id1730633913

YouTube: https://www.youtube.com/@History102-qg5oj

SPONSORS:

Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds, offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive

The Brave Search API can be used to assemble a data set to train your AI models and to help with retrieval augmentation at inference time. Integrating the Brave Search API into your workflow translates to more ethical data sourcing and more human-representative data sets, all while remaining affordable with developer-first pricing. Try the Brave Search API for free for up to 2,000 queries per month at https://bit.ly/BraveTCR

Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off: https://www.omneky.com/

Head to Squad to access global engineering without the headache and at a fraction of the cost: visit https://choosesquad.com/ and mention "Turpentine" to skip the waitlist.

CHAPTERS:

(00:00:00) About the Show

(00:04:39) Introduction

(00:07:58) Convergence

(00:10:32) Safety guarantees

(00:14:35) World model (Part 1)

(00:22:22) Sponsors: Oracle | Brave

(00:24:31) World model (Part 2)

(00:26:55) AI boxing

(00:30:28) Verifier

(00:33:33) Sponsors: Omneky | Squad

(00:35:20) Example: Self-Driving Cars

(00:38:08) Moral Desiderata

(00:41:09) Trolley Problems

(00:47:24) How to approach the world model

(00:50:50) Deriving the world model

(00:55:13) How far should the world model extend?

(01:00:55) Safety through narrowness

(01:02:38) Safety specs

(01:08:26) Experiments

(01:11:25) How GSAI can help in the short term

(01:27:40) What would be the basis for the world model?

(01:31:23) Interpretability

(01:34:24) Competitive dynamics

(01:37:35) Regulation

(01:42:02) GSAI authors

(01:43:25) Outro


197 episodes
