From Tool to Sidekick - Human/Machine Teaming with Jamie Winterton
MP3 • Episode home page
Manage episode 294304951 series 2495524
Content provided by Security Voices. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Security Voices or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://th.player.fm/legal
We’ve conditioned ourselves to look at our technology the same way we look at a box of tools: as instruments that passively do what we make them do. When we think about the future of artificial intelligence, it’s tempting to leap straight to fully autonomous solutions: when exactly will that Tesla finally drive itself? In our interview with Jamie Winterton, we explore a future where AI is neither a passive tool nor a self-contained machine, but rather an active partner.
Human/machine teaming, an approach where AI works alongside a person as an integrated pair, has been advocated by the U.S. Department of Defense for several years now and is the focus of Jamie’s recent work at Arizona State University, where she is Director of Strategy for ASU’s Global Security Initiative and chairs the DARPA Working Group. From testing A.I.-assisted search and rescue scenarios in Minecraft to real wartime settings, Jamie takes us through the opportunities and the issues that arise when we make technology our sidekick instead of merely our instrument.
The central challenges of human/machine teaming? They’re awfully familiar. The same thorny matters of trust and communication that plague human interactions are still front and center. If we can’t understand how A.I. arrived at a recommendation, will we trust its advice? If it makes a mistake, are we willing to forgive it? And what about all those non-verbal cues that are so central to human communication and vary from person to person? Jamie recounts stories of sophisticated “nerd stuff” being disregarded by people in favor of simpler solutions they could more easily understand (e.g., Google Earth).
The future of human/machine teaming may be less about us slowly learning to trust our robot partners and hand over more control, and more about A.I. learning the soft skills that so frequently make our other interpersonal relationships work harmoniously. But what if the bad guys send fully autonomous weapons against us in the future? Will we be too slow to survive with an integrated approach? Jamie explains the prevailing thinking on speed and full autonomy versus an arguably slower but more effective teaming approach, and what it might mean for the battlefields of the future.
Note: Our conversation on human/machine teaming follows an introductory chat about data breaches, responsible disclosure, and how future breaches involving biometric data theft may require surgeries as part of the remediation. If you want to jump straight to the human/machine teaming conversation, it picks up around the 18-minute mark.
66 episodes