Content provided by Singularity.FM and Nikola Danaylov. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Singularity.FM and Nikola Danaylov or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process described here: https://th.player.fm/legal

ReWriting the Human Story - Chapter 11

8:58
 
Manage episode 299000462 series 1529385
"Computer Science is no more about computers than astronomy is about telescopes." Edsger Dijkstra

"When looms weave by themselves, man's slavery will end." Aristotle

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vernor Vinge, 1993

Today we are entirely dependent on machines. So much so that, if we were to turn off the machines invented since the Industrial Revolution, billions of people would die and civilization would collapse. Ours is already a civilization of machines and technology, because they have become indispensable. The question is: what is the outcome of that process? Is it freedom and transcendence, or slavery and extinction?

Our present situation is no surprise, for it was in the relatively low-tech 19th century that Samuel Butler wrote Darwin among the Machines. There he combined his observations of the rapid technological progress of the Industrial Revolution with Darwin's theory of evolution. That synthesis led Butler to conclude that intelligent machines are likely to be the next step in evolution:

"…it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race."

Butler developed his ideas further in Erewhon, published in 1872:

"There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A mollusk has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time."

Like Samuel Butler, Ted Kaczynski grounded his technophobia in the fear that:

"…the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide." The Unabomber Manifesto

As noted at the beginning of this chapter, humanity has already reached the machine dependence that Kaczynski worried about. Contemporary experts may disagree on when artificial intelligence will equal human intelligence, but most believe that in time it likely will. And there is no reason why AI would stop there. What happens next depends on both the human story and the AI story.

Read the rest here: https://www.singularityweblog.com/ai-story/

315 episodes
