Raphaël Millière on large language models

1:49:27
 

Ultimately, if you want more human-like systems that exhibit more human-like intelligence, you would want them to actually learn like humans do, by interacting with the world: interactive learning, not just passive learning. You want something more active, where the model actually tests out hypotheses and learns from the feedback it gets from the world about those hypotheses, the way children do; it should learn all the time. If you observe young babies and toddlers, they are constantly experimenting. They're like little scientists: you see babies grabbing their feet, testing whether that's part of their body or not, and gradually and very quickly learning all these things. Language models don't do that. They don't explore in this way. They don't have the capacity for interaction in this way.

  • Raphaël Millière

How do large language models work? What are the dangers of overclaiming and underclaiming the capabilities of large language models? What are some of the most important cognitive capacities to understand for large language models? Are large language models showing sparks of artificial general intelligence? Do language models really understand language?

Raphaël Millière is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society and a Lecturer in the Philosophy Department at Columbia University. He completed his DPhil (PhD) in philosophy at the University of Oxford, where he focused on self-consciousness. His interests lie primarily in the philosophy of artificial intelligence and cognitive science. He is particularly interested in assessing the capacities and limitations of deep artificial neural networks and establishing fair and meaningful comparisons with human cognition in various domains, including language understanding, reasoning, and planning.

Topics discussed in the episode:

  • Introduction (0:00)
  • How Raphaël came to work on AI (1:25)
  • How do large language models work? (5:50)
  • Deflationary and inflationary claims about large language models (19:25)
  • The dangers of overclaiming and underclaiming (25:20)
  • Summary of cognitive capacities large language models might have (33:20)
  • Intelligence (38:10)
  • Artificial general intelligence (53:30)
  • Consciousness and sentience (1:06:10)
  • Theory of mind (1:18:09)
  • Compositionality (1:24:15)
  • Language understanding and referential grounding (1:30:45)
  • Which cognitive capacities are most useful to understand for various purposes? (1:41:10)
  • Conclusion (1:47:23)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Support the show
