“Nice-ish, smooth takeoff (with imperfect safeguards) probably kills most ‘classic humans’ in a few decades.” by Raemon
Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://th.player.fm/legal
I wrote my recent Accelerando post to mostly stand on its own as a takeoff scenario. But the reason it's on my mind is that, if I imagine being very optimistic about how a smooth AI takeoff goes, but where an early step wasn't "fully solve the unbounded alignment problem, and then end up with extremely robust safeguards[1]"...
...then my current guess is that Reasonably Nice Smooth Takeoff still results in all or at least most biological humans dying (or, "dying out", or at best, ambiguously-consensually-uploaded), like, 10-80 years later.
Slightly more specific about the assumptions I'm trying to inhabit here:
- It's politically intractable to get a global halt or a globally controlled takeoff.
- Superintelligence is moderately likely to be somewhat nice.
- We'll get to run lots of experiments on near-human AI that will be reasonably informative about how things will generalize to the somewhat-superhuman level.
- We get to ramp up [...]
Outline:
(03:50) There is no safe muddling through without perfect safeguards
(06:24) i. Factorio
(06:27) (or: It's really hard to not just take people's stuff, when they move as slowly as plants)
(10:15) Fictional vs Real Evidence
(11:35) Decades. Or: thousands of years of subjective time, evolution, and civilizational change.
(12:23) This is the Dream Time
(14:33) Is the resulting posthuman population morally valuable?
(16:51) The Hanson Counterpoint: So you're against ever changing?
(19:04) Can't superintelligent AIs/uploads coordinate to avoid this?
(21:18) How Confident Am I?
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
October 2nd, 2025
Source:
https://www.lesswrong.com/posts/v4rsqTxHqXp5tTwZh/nice-ish-smooth-takeoff-with-imperfect-safeguards-probably
---
Narrated by TYPE III AUDIO.