Seth Lazar: Normative Philosophy of Computing

1:50:17

Episode 124

You may think you’re doing a priori reasoning, but actually you’re just over-generalizing from your current experience of technology.

I spoke with Professor Seth Lazar about:

* Why managing near-term and long-term risks isn’t always zero-sum

* How to think through axioms and systems in political philosophy

* Coordination problems, economic incentives, and other difficulties in developing publicly beneficial AI

Seth is Professor of Philosophy at the Australian National University, an Australian Research Council (ARC) Future Fellow, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI. He has worked on the ethics of war, self-defense, and risk, and now leads the Machine Intelligence and Normative Theory (MINT) Lab, where he directs research projects on the moral and political philosophy of AI.

Reach me at editor@thegradient.pub for feedback, ideas, and guest suggestions.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (00:54) Ad read — MLOps conference

* (01:32) The allocation of attention — attention, moral skill, and algorithmic recommendation

* (03:53) Attention allocation as an independent good (or bad)

* (08:22) Axioms in political philosophy

* (11:55) Explaining judgments, multiplying entities, parsimony, intuitive disgust

* (15:05) AI safety / catastrophic risk concerns

* (22:10) Superintelligence arguments, reasoning about technology

* (28:42) Attacking current and future harms from AI systems — does one draw resources from the other?

* (35:55) GPT-2, model weights, related debates

* (39:11) Power and economics—coordination problems, company incentives

* (50:42) Morality tales, relationship between safety and capabilities

* (55:44) Feasibility horizons, prediction uncertainty, and doing moral philosophy

* (1:02:28) What is a feasibility horizon?

* (1:08:36) Safety guarantees, speed of improvements, the “Pause AI” letter

* (1:14:25) Sociotechnical lenses, narrowly technical solutions

* (1:19:47) Experiments for responsibly integrating AI systems into society

* (1:26:53) Helpful/honest/harmless and antagonistic AI systems

* (1:33:35) Managing incentives conducive to developing technology in the public interest

* (1:40:27) Interdisciplinary academic work, disciplinary purity, power in academia

* (1:46:54) How we can help legitimize and support interdisciplinary work

* (1:50:07) Outro

Links:

* Seth’s Linktree and Twitter

* Resources

* Attention, moral skill, and algorithmic recommendation

* Catastrophic AI Risk slides


Get full access to The Gradient at thegradientpub.substack.com/subscribe