
Vera Liao: AI Explainability and Transparency

1:37:03

In episode 101 of The Gradient Podcast, Daniel Bashir speaks to Vera Liao.

Vera is a Principal Researcher at Microsoft Research (MSR) Montréal where she is part of the FATE (Fairness, Accountability, Transparency, and Ethics) group. She is trained in human-computer interaction research and works on human-AI interaction, currently focusing on explainable AI and responsible AI. She aims to bridge emerging AI technologies and human-centered design practices, and use both qualitative and quantitative methods to generate recommendations for technology design. Before joining MSR, Vera worked at IBM TJ Watson Research Center, and her work contributed to IBM products such as AI Explainability 360, Uncertainty Quantification 360, and Watson Assistant.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (01:41) Vera’s background

* (07:15) The sociotechnical gap

* (09:00) UX design and toolkits for AI explainability

* (10:50) HCI, explainability, etc. as “separate concerns” from core AI research

* (15:07) Interfaces for explanation and model capabilities

* (16:55) Vera’s earlier studies of online social communities

* (22:10) Technologies and user behavior

* (23:45) Explainability vs. interpretability, transparency

* (26:25) Questioning the AI: Informing Design Practices for Explainable AI User Experiences

* (42:00) Expanding Explainability: Towards Social Transparency in AI Systems

* (50:00) Connecting Algorithmic Research and Usage Contexts

* (59:40) Pitfalls in existing explainability methods

* (1:05:35) Ideal and real users, seamful systems and slow algorithms

* (1:11:08) AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap

* (1:11:35) Vera’s earlier experiences with chatbots

* (1:13:00) Need to understand pitfalls and use cases for LLMs

* (1:13:45) Perspectives informing this paper

* (1:20:30) Transparency informing goals for LLM use

* (1:22:45) Empiricism and explainability

* (1:27:20) LLM faithfulness

* (1:32:15) Future challenges for HCI and AI

* (1:36:28) Outro

Links:

* Vera’s homepage and Twitter

* Research

  * Earlier work

    * Understanding Experts’ and Novices’ Expertise Judgment of Twitter Users
    * Beyond the Filter Bubble
    * Expert Voices in Echo Chambers

  * HCI / collaboration

    * Exploring AI Values and Ethics through Participatory Design Fictions
    * Ways of Knowing for AI: (Chat)bots as Interfaces for ML
    * Human-AI Collaboration: Towards Socially-Guided Machine Learning
    * Questioning the AI: Informing Design Practices for Explainable AI User Experiences
    * Rethinking Model Evaluation as Narrowing the Socio-Technical Gap
    * Human-Centered XAI: From Algorithms to User Experiences
    * AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap

  * Fairness and explainability

    * Questioning the AI: Informing Design Practices for Explainable AI User Experiences
    * Expanding Explainability: Towards Social Transparency in AI Systems
    * Connecting Algorithmic Research and Usage Contexts


Get full access to The Gradient at thegradientpub.substack.com/subscribe