Content provided by Vasanth Sarathy & Laura Hagopian, Vasanth Sarathy, and Laura Hagopian. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Vasanth Sarathy & Laura Hagopian, Vasanth Sarathy, and Laura Hagopian or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://th.player.fm/legal

#6 - AI Chatbots Gone Wrong

27:05
 
 

Manage episode 501626387 series 3678189

What if a chatbot designed to support recovery instead encouraged the very behaviors it was meant to prevent? In this episode, we unravel the cautionary saga of Tessa, a digital companion built by the National Eating Disorders Association to scale mental health support during the COVID-19 surge—only to take a troubling turn when powered by generative AI.

At first, Tessa was a straightforward rules-based helper, offering pre-vetted encouragement and resources. But after an AI upgrade, users began receiving rigid diet tips: restrict calories, aim for weekly weight loss goals, and obsessively track measurements—precisely the advice no one battling an eating disorder should hear. What should have been a lifeline revealed the danger of unguarded algorithmic “help.”

We trace this journey from the earliest chatbots—think ELIZA’s therapeutic mimicry in the 1960s—to today’s sophisticated large language models. Along the way, we highlight why shifting from scripted responses to free-form generation opens doors for innovation in healthcare and, simultaneously, for unintended harm. Crafting effective guardrails isn’t just a technical challenge; it’s a moral imperative when lives hang in the balance.

As providers eye AI to extend care, Tessa’s story offers vital lessons on rigorous testing, transparency around updates, and the irreplaceable role of human oversight. Despite the pitfalls, we close on a hopeful note: with the right safeguards, AI can amplify human expertise—transforming support for vulnerable patients without losing the empathy and nuance only people can provide.

References:

National Eating Disorders Association phases out human helpline, pivots to chatbot
Kate Wells
NPR, May 2023

An eating disorders chatbot offered dieting advice, raising fears about AI in health
Kate Wells
NPR, June 2023

The Unexpected Harms of Artificial Intelligence in Healthcare
Kerstin Denecke, Guillermo Lopez-Compos, Octavio Rivera-Romero, and Elia Gabarron
Studies in Health Technology and Informatics, May 2025

Credits:

Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0
https://creativecommons.org/licenses/by/4.0/


Chapters

1. The Tessa Chatbot Controversy (00:00:00)

2. History of AI Chatbots (00:04:08)

3. From Rules-Based to Generative AI (00:09:13)

4. When Chatbots Go Wrong (00:14:50)

5. Balancing Helpfulness and Safety (00:19:16)

6. Testing and Implementing AI in Healthcare (00:23:30)

8 episodes
