
🧐 Responsible AI is NOT the icing on the cake | irResponsible AI EP4S01

31:03
 
Content provided by Upol Ehsan and Shea Brown. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Upol Ehsan and Shea Brown or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://th.player.fm/legal

Got questions, comments, or topics you want us to cover? Text us!

In this episode filled with hot takes, Upol and Shea discuss three things:
✅ How the Gemini Scandal unfolded
✅ Is Responsible AI too woke? Or is there a hidden agenda?
✅ What companies can do to address such scandals
What can you do?
🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.
🎙️Who are your hosts and why should you even bother to listen?
Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI.
Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.
All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives.
Follow us for more Responsible AI and the occasional sh*tposting:
Upol: https://twitter.com/UpolEhsan
Shea: https://www.linkedin.com/in/shea-brown-26050465/
CHAPTERS:
0:00 - Introduction
1:25 - How the Gemini Scandal unfolded
5:30 - Selective outrage: hidden social justice warriors?
7:44 - Should we expect Generative AI to be historically accurate?
11:53 - Responsible AI is NOT the icing on the cake
14:58 - How Google and other companies should respond
16:46 - Immature Responsible AI leads to irresponsible AI
19:54 - Is Responsible AI too woke?
22:00 - Identity politics in Responsible AI
23:21 - What can tech companies do to solve this problem?
26:43 - Responsible AI is a process, not a product
28:54 - The key takeaways from the episode
#ResponsibleAI #ExplainableAI #podcasts #aiethics

Support the show

What can you do?
🎯 You have no idea how much it will annoy the wrong people if this series goes viral. So help the algorithm do the work for you!
Follow us for more Responsible AI:
Upol: https://twitter.com/UpolEhsan
Shea: https://www.linkedin.com/in/shea-brown-26050465/

