Experiencing Data w/ Brian T. O’Neill (UX for AI Data Products, SAAS Analytics, Data Product Management)
Is the value of your enterprise analytics SaaS or AI product not obvious through its UI/UX? Got the data and ML models right... but user adoption of your dashboards and UI isn't what you hoped it would be? While it is easier than ever to create AI and analytics solutions from a technology perspective, do you find as a founder or product leader that getting users to use and buyers to buy seems harder than it should be? If you lead an internal enterprise data team, have you heard that a "data product" approach can help—but you're concerned it's all hype?

My name is Brian T. O'Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I share the stories of leaders who are leveraging product and UX design to make SaaS analytics, AI applications, and internal data products indispensable to their customers. After all, you can't create business value with data if the humans in the loop can't or won't use your solutions. Every two weeks, I release interviews with experts and impressive people I've met who are doing interesting work at the intersection of enterprise software product management, UX design, AI, and analytics—work that you need to hear about and from whom I hope you can borrow strategies. I also occasionally record solo episodes on applying UI/UX design strategies to data products—so you and your team can unlock financial value by making your users' and customers' lives better. Hashtag: #ExperiencingData.

JOIN MY INSIGHTS LIST FOR 1-PAGE EPISODE SUMMARIES, TRANSCRIPTS, AND FREE UX STRATEGY TIPS: https://designingforanalytics.com/ed
ABOUT THE HOST, BRIAN T. O'NEILL: https://designingforanalytics.com/bio/
113 episodes
All episodes
167 - AI Product Management and Design: How Natalia Andreyeva and Team at Infor Nexus Create B2B Data Products that Customers Value (37:34)
Today, I’m talking with Natalia Andreyeva from Infor about AI/ML product management and its application to supply chain software. Natalia is a Senior Director of Product Management for the Nexus AI/ML Solution Portfolio, and she walks us through what is new, and what is not, about designing AI capabilities in B2B software. We also got into why user experience is so critical in data-driven products, and the role of design in ensuring AI produces value. During our chat, Natalia hit on the importance of really nailing down customer needs through solid discovery, and the role of product leaders in this non-technical work. We also tackled some of the trickier aspects of designing for GenAI and digital assistants, the need to keep efforts strongly grounded in value creation for customers, and how even the best ML-based predictive analytics need to consider UX and the amount of evidence customers need before they believe the recommendations. During this episode, Natalia emphasizes a huge key to her work’s success: keeping customers and users in the loop throughout the product development lifecycle.

Highlights / Skip to:
- What Natalia does as a Senior Director of Product Management for Infor Nexus (1:13)
- Who uses Infor Nexus products, and what they accomplish when using them (2:51)
- Breaking down who makes up Natalia’s team (4:05)
- What role does AI play in Natalia’s work? (5:32)
- How do designers work with Natalia’s team? (7:17)
- The problem that made Natalia rethink the discovery process for AI and machine learning applications (10:28)
- Why Natalia isn’t worried about competitors catching up to her team’s design work (14:24)
- How Natalia works with Infor Nexus customers to help them understand the solutions her team is building (23:07)
- The biggest challenges Natalia faces when building GenAI and machine learning products (27:25)
- Natalia’s four steps to success in building AI products and capabilities (34:53)
- Where you can find more from Natalia (36:49)

Quotes from Today’s Episode:

“I always launch discovery with customers in the presence of the UX specialist [our designer]. We do the interviews together, and [regardless of who is facilitating] the goal is to understand the pain points of our customers by listening to how they do their jobs today. We do a series of these interviews and we distill them into the customer needs; the problems we need to really address for the customers. And then we start thinking about how to [address these needs]. Data products are a particular challenge because it’s not always that you can easily create a UX that would allow users to realize the value they’re searching for from the solution. And even if we can deliver it, consuming that is typically a challenge, too. So, this is where [design becomes really important]. [...] What I found through the years of experience is that it’s very difficult to explain to people around you what it is that you’re building when you’re dealing with a data-driven product. Is it a dashboard? Is it a workboard? They understand the word data, but that’s not what we are creating. We are creating the actual experience for the outcome that data will deliver to them indirectly, right? So, that’s typically how we work.” - Natalia Andreyeva (7:47)

“[When doing discovery for products without AI], we already have ideas for what we want to get out. We know that there is a space in the market for those solutions to come to life. We just have to understand where. For AI-driven products, it’s not only about [the user’s] understanding of the problem or the design; it is also about understanding if the data exists and if it’s feasible to build the solution to address [the user’s] problem. [Data] feasibility is an extremely important piece because it will drive the UX as well.” - Natalia Andreyeva (10:50)

“When [the team] discussed the problem, it sounded like a simple calculation that needed to be created [for users]. In reality, it was an entire process of thinking of multiple people in the chain [of command] to understand whether or not a medical product was safe to be consumed. That’s the outcome we needed to produce, and when we finally did, we actually celebrated with our customers and with our designers. It was one of the most difficult things that we had to design. So why did this problem actually get solved, and why were we the ones who solved it? It’s because we took the time to understand the current user experience through [our customer] interviews. We connected the dots and translated it all into a visual solution. We would never have been able to do that without the proper UX and design in place for the data.” - Natalia Andreyeva (13:16)

“Everybody is pressured to come up with a strategy [for AI] or explain how AI is being incorporated into their solutions and platform, but it is still essential for all of my peers in product management to focus on the value [we’re] creating for customers. You cannot bypass discovery. Discovery is the essential portion where you have to spend time with your customers, champions, advisors, and their leads, but especially users who are doing this [supply chain] job every single day—so we understand where the pain point really is for them, we solve that pain, and we solve it with our design team as a partner, so that solution can surface value.” - Natalia Andreyeva (22:08)

“GenAI is a new field and new technology. It’s evolving quickly, and nobody really knows how to properly adapt or drive the adoption of AI solutions. The speed of innovation [in the AI field] is a challenge for everybody. People who work on the frontlines (i.e., product and engineering teams) have to stay way ahead of the market. Meanwhile, customers who are going to be using these [AI] solutions are not going to trust the [initial] outcomes. It’s going to take some time for people to become comfortable with them. But it doesn’t mean that your solution is bad or didn’t find the market fit. It’s just not time for your [solution] yet. Educating our users on the value of the solution is also part of that challenge, and [designers] have to be very careful that solutions are accessible. Users do not adopt intimidating solutions.” - Natalia Andreyeva (27:41)

“First, discovery—where we search for the problems. From my experience, [discovery] works better if you’re very structured. I always provide [a customer] with an outline of what needs to happen so it’s not a secret. Then, do the prototyping phase and keep the customer engaged so they can see the quick outcomes of those prototypes. This is where you also have to really include the feasibility of the data if you’re building an AI solution, right? [Prototyping] can be short or long, but you need to keep the customer engaged throughout that phase so they see quick outcomes. Keep on validating this conceptually—on the napkin, in Figma, it doesn’t really matter; you have to keep them engaged. Then, once you validate it works and the customer likes it, then build. Don’t really go into the deep development work until you know [all of this]! When you do build, create a beta solution. It only has to work so much to prove the value. Then, run the pilot, and if it’s successful, build the MVP, then launch. It’s simple, but it is a lot of work, and you have to keep your customers really engaged through all of those phases. If something doesn’t work [along the way], try to pivot early enough so you still have a viable product at the end.” - Natalia Andreyeva (34:53)

Links:
- Natalia’s LinkedIn…
Today I am going to try to answer a fundamental question: how should you actually measure user experience, especially with data products—and tie this to business value? It’s easy to get lost in analytics and think we’re seeing the whole picture, but I argue that this is far from the truth. Product leaders need to understand the subjective experience of our users—and unfortunately, analytics does not tell us this. The map is not the territory. In this episode, I discuss why qualitative data and subjective experience are the data that will most help you make product decisions that lead to increased business value. If users aren’t getting value from your product(s), and their lives aren’t improving, business value will be extremely difficult to create. So today, I share my thoughts on how to move beyond thinking that analytics is the only way to track UX, and how this helps product leaders uncover opportunities to produce better organizational value. Ultimately, it’s about creating indispensable solutions and building trust, which is key for any product team looking to make a real impact. Hat tip to UX guru Jared Spool, who inspired several of the concepts I share with you today.

Highlights / Skip to:
- Don’t target adoption for adoption’s sake, because product usage can be a tax or a benefit (3:00)
- Why your analytical mind may bias you—and what changes you might have to make to do this type of product and user research work (7:31)
- How “making the user’s life better” translates to organizational value (10:17)
- Using Jared Spool’s roller coaster chart to measure your product’s user experience and find your opportunities and successes (13:05)
- How do you measure that you have done a good job with your UX? (17:28)
- Conclusions and final thoughts (21:06)

Quotes from Today’s Episode:

Usage doesn’t automatically equal value. Analytics on your analytics is not telling you useful things about user experience or satisfaction. Why? “The map is not the territory.” Analytics measure computer metrics, not feelings, and let’s face it, users aren’t always rational. To truly gauge user value, we need qualitative research—to talk to users—and to hear what their subjective experience is. Want *meaningful* adoption? Talk to and observe your users. That’s how you know you are actually making things better. When it’s better for them, the business value will follow. (3:12)

Make better things—where “better” is a measurement based on the subjective experience of the user, not analytics. Usable doesn’t mean they will necessarily want it. Sessions and page views don’t tell you how people *feel* about it. (7:39)

Think about the dreadful tools you and so many have been forced to use: the things that waste your time and don’t let you focus on what’s really important. Ever talked to a data scientist who is sick of doing data prep instead of building models, and wondering, “Why am I here? This isn’t what I went to school for.” Ignoring these personal frustrations and feelings and focusing only on your customers’ feature requests, JIRA tickets, stakeholder orders, requirements docs, and backlog items is why many teams end up building technically right, effectively wrong solutions. These end-user frustrations are where we find our opportunities to delight—and create products and UXs that matter. To improve their lives, we need to dig into their workflows, identify frustrations, and understand the context around our data product solutions. Product leaders need to fall in love with the problems and the frustrations—these are the magic keys to the value kingdom. However, to do this well, you probably need to be doing less delivery and more discovery. (10:27)

Imagine a line chart with a Y-axis that runs from “frustration” at the bottom to “delight” at the top. The X-axis is the user experience, taking place over time. As somebody uses your data product to do their job/task, you can plot their emotional journey: “get the data, format the data, include the data in a tool, derive some conclusion, challenge the data, share it, make a decision,” etc. As a product manager, you probably know what a use case looks like. Your first job is to plot their existing experience trying/doing that use case with your data product. Where are they frustrated? Where are they delighted? Celebrate your peaks/delighters, and fall in love with the valleys where satisfaction work needs to be done. Connect the dots between these valleys and business value. Address the valleys—especially the ones that impede business value—and you’ll be on your way to “showing the value of your data product.” Analytics on your data product won’t tell you this information; the map is not the territory. (13:22)

Analytics about your data product are lying to you. They give you the facts about the product, but not about the user. An example? “Time spent” doing a task. How long is too long? 5 minutes? 50? Analytics will tell you precisely how long it took. The problem is, it won’t tell you how long it FELT it took. And guess what? Your customers and users only care about how long it felt it took—vs. their expectation. Sure, at some point, analytics might eventually help—at scale—to understand how your data product is doing, but first you have to understand how people FEEL about it. Only then will you know whether 5 minutes or 50 minutes is telling you anything meaningful about what—if anything—needs to change. (16:17)…
165 - How to Accommodate Multiple User Types and Needs in B2B Analytics and AI Products When You Lack UX Resources (49:04)
A challenge I frequently hear about from subscribers to my insights mailing list is how to design B2B data products for multiple user types with differing needs. From dashboards to custom apps and commercial analytics/AI products, data product teams often struggle to create a single solution that meets the diverse needs of technical and business users in B2B settings. If you’re encountering this issue, you’re not alone! In this episode, I share my advice for tackling this challenge, including the gift of saying “no.” What are the patterns you should be looking out for in your customer research? How can you choose what to focus on with limited resources? What are the design choices you should avoid when trying to build these products? I’m hoping that by the end of this episode, you’ll have some strategies to help reduce the size of this challenge—particularly if you lack a dedicated UX team to help you sort through your various user/stakeholder demands.

Highlights / Skip to:
- The importance of proper user research and clustering “jobs to be done” around business importance vs. task frequency—ignoring the rest until your solution can show measurable value (4:29)
- What “level” of skill to design for, and why “as simple as possible” isn’t what I generally recommend (13:44)
- When it may be advantageous to use role- or feature-based permissions to hide/show/change certain aspects, UI elements, or features (19:50)
- Leveraging AI and LLMs in-product to allow learning about the user and progressive disclosure and customization of UIs (26:44)
- Leveraging the “old” solution of rapid prototyping—which is now faster than ever with AI, and can accelerate learning (capturing user feedback) (31:14)
- Five things I do not recommend doing when trying to satisfy multiple user types in your B2B AI or analytics product (34:14)

Quotes from Today’s Episode:

If you’re not talking to your users and stakeholders sufficiently, you’re going to have a really tough time building a successful data product for one user—let alone for multiple personas. Listen for repeating patterns in what your users are trying to achieve (tasks they are doing). Focus on the jobs and tasks they do most frequently or the ones that bring the most value to their business. Forget about the rest until you’ve proven that your solution delivers real value for those core needs. It’s more about understanding the problems and needs, not just the solutions. The solutions tend to be easier to design when the problem space is well understood. Users often suggest solutions, but it’s our job to focus on the core problem we’re trying to solve; simply entering any inbound request verbatim into JIRA and then “eating away” at the list is not usually a reliable strategy. (5:52)

I generally recommend not going for “as easy as possible” at the cost of shallow value. Instead, you’re going to want to design for some “mid-level” ability, understanding that this may make early user experiences with the product more difficult. Why? Oversimplification can mislead because data is complex, problems are multivariate, and data isn’t always ideal. There are also “n” number of “not-first” impressions users will have with your product—which also means there is only one “first impression.” As such, the idea conceptually is to design an amazing experience for the “n” experiences, but not to the point that users never realize value and give up on the product. While I’d prefer no friction, technical products sometimes have to have a little friction up front. However, don’t use this as an excuse for poor design. This is hard to get right, even when you have design resources, and it’s why UX design matters: thinking this through ends up determining, in part, whether users obtain the promise of value you made to them. (14:21)

As an alternative to rigid role- and feature-based permissions in B2B data products, you might consider leveraging AI and/or LLMs in your UI as a means of simplifying and customizing the UI for particular users. This approach allows users to potentially interrogate the product about the UI and customize it, and the product can even learn over time about the user’s questions (jobs to be done) such that it becomes organically customized to their needs. This is in contrast to the rigid buckets that role- and permission-based customization present. However, as discussed in my previous episode (164 - “The Hidden UX Taxes that AI and LLM Features Impose on B2B Customers Without Your Knowledge”), designing effective AI features and capabilities can also make things worse due to the probabilistic nature of the responses GenAI produces. As such, this approach may benefit from a UX designer or researcher familiar with designing data products. Understanding what “quality” means to the user, and how to measure it, is especially critical if you’re going to leverage AI and LLMs to make the product UX better. (20:13)

The old solution of rapid prototyping is even more valuable now—because it’s possible to prototype even faster. However, prototyping is not just about learning whether your solution is on track. Whether you use AI or pencil and paper, prototyping early in the product development process should be framed as a “prop to get users talking.” In other words, it is a prop to facilitate problem and need clarity—not solution clarity. Its purpose is to spark conversation and determine if you’re solving the right problem. As you iterate, your need to continually validate the problem should shrink, which will present itself in the form of consistent feedback from end users. This is the point where you know you can focus on the design of the solution. Innovation happens when we learn, so the goal is to increase your learning velocity. (31:35)

Have you ever been caught in the trap of prioritizing feature requests based on volume? I get it. It’s tempting to give the people what they think they want. For example, imagine ten users clamoring for control over specific parameters in your machine learning forecasting model. You could give them that control, thinking you’re solving the problem because, hey, that’s what they asked for! But did you stop to ask why they want that control? The reasons behind those requests could be wildly different. By simply handing over the keys to all the model parameters, you might be creating a whole new set of problems. Users now face a “usability tax,” trying to figure out which parameters to lock and which to let float. The key takeaway? Focus on how frequently the same problems occur across your users, not just how frequently a given tactic or “solution” method (i.e., “model,” “dashboard,” or “feature”) appears in a stakeholder or user request. Remember, problems are often disguised as solutions. We’ve got to dig deeper and uncover the real needs, not just address the symptoms. (36:19)…
164 - The Hidden UX Taxes that AI and LLM Features Impose on B2B Customers Without Your Knowledge (45:25)
Are you prepared for the hidden UX taxes that AI and LLM features might be imposing on your B2B customers—without your knowledge? Are you certain that your AI product or features are truly delivering value, or are there unseen taxes working against your users and your product/business? In this episode, I’m delving into some of the UX challenges that I think need to be addressed when implementing LLM and AI features in B2B products. While AI seems to offer the chance of significantly enhanced productivity, it also introduces a new layer of complexity for UX design. This complexity is not limited to the challenges of designing in a probabilistic medium (i.e., ML/AI); it also includes being able to define what “quality” means. When the product team does not have a shared understanding of what a measurably better UX outcome means, improved sales and user adoption are less likely to follow. I’ll also discuss aspects of designing for AI that may be invisible on the surface. How might AI-powered products change the work of B2B users? What are some of the traps I see startup clients and the founders I advise in MIT’s Sandbox venture fund fall into? If you’re a product leader in B2B/enterprise software and want to make sure your AI capabilities don’t end up creating more damage than value for users, this episode will help!

Highlights / Skip to:
- Improving your AI model accuracy improves outputs—but customers only care about outcomes (4:02)
- AI-driven productivity gains also put the customer’s “next problem” in their face sooner. Are you addressing the most urgent problem they now have—or the one they used to have? (7:35)
- Products that win will combine AI with tastefully designed deterministic software—because doing everything for everyone well is impossible, and most models alone aren’t products (12:55)
- Just because your AI app or LLM feature can do “X” doesn’t mean people will want it or change their behavior (16:26)
- AI agents sound great—but there is a human UX too, and it must enable trust and intervention at the right times (22:14)
- Not overheard from customers: “I would buy this/use this if it had AI” (26:52)
- Adaptive UIs sound like they’ll solve everything—but to reduce friction, they need to adapt to the person, not just the format of model outputs (30:20)
- Introducing AI introduces more states and scenarios that your product may need to support, which may not be obvious right away (37:56)

Quotes from Today’s Episode:

Product leaders have to decide how much effort and resources to put into model improvements versus improving the user’s experience. Obviously, model quality is important in certain contexts and regulated industries, but when GenAI errors and confabulations are lower risk to the user (i.e., they create minor friction or inconveniences), the broader user experience that you facilitate might be what actually determines the true value of your AI features or product. Model accuracy alone is not necessarily going to lead to happier users or increased adoption. ML models can be quantifiably tested for accuracy with structured tests, but the fact that they’re easier to test for quality than something like UX doesn’t mean users value these improvements more. The product will stand a better chance of creating business value when it clearly demonstrates that it is improving your users’ lives. (5:25)

When designing AI agents, there is still a human UX—a beneficiary—in the loop. They have an experience, whether you designed it with intention or not. How much transparency needs to be given to users when an agent does work for them? Should users be able to intervene when the AI is doing this type of work? Handling errors is something we do in all software, but what about retraining and learning so that the future user experience is better? Is the system learning anything while it’s going through this—and can I tell if it’s learning what I want/need it to learn? What about humans in the loop who might interact with or be affected by the work the agent is doing, even if they aren’t the agent’s owner or “user”? Whose outcomes matter here? At what cost? (22:51)

Customers primarily care about things like raising or changing their status, making more money, making their job easier, saving time, etc. In fact, I believe a product marketed with GenAI may eventually signal a negative or a burden to customers, thanks to the inflated and unmet expectations around AI that is poorly implemented in the product UX. Don’t think it’s going to be bought just because it uses AI in a novel way. Customers aren’t sitting around wishing for “disruption” from your product; quite the opposite. AI or not, you need to make the customer the hero. Your AI will shine when it delivers an outsized UX outcome for your users. (27:49)

What kind of UX are you delivering right out of the box when a customer tries your AI product or feature? Did you design it for tire kicking, playing around, and user stress testing? Or just an idealistic happy path? GenAI features inside B2B products should surface capabilities and constraints, particularly around where users can create value for themselves quickly. Natural hints and well-designed prompt nudges in LLMs, for example, are important to users and to your product team: you’re setting a more realistic expectation with customers of what’s possible and helping them get to an outcome sooner. You’re also teaching them how to use your solution to get the most value—without asking them to go read a manual. (38:21)…
163 - It’s Not a Math Problem: How to Quantify the Value of Your Enterprise Data Products or Your Data Product Management Function (41:41)
I keep hearing that data product, data strategy, and UX teams often struggle to quantify the value of their work. Whether it’s the team as a whole or a specific data product initiative, the underlying problem is the same: your contribution is indirect, so it’s harder to measure. Even worse, your stakeholders want to know if your work is creating impact and value, but because you can’t easily put numbers on it, valuation spirals into a messy problem. The messy part of this valuation problem is what today’s episode is all about—not math! Value is largely subjective, not objective, and I think this is partly why analytical teams may struggle with it. To get better at estimating the value of your data products, you need to leverage other skills—and stop approaching this as a math problem. As a consulting product designer, estimating value when it’s indirect is something I’ve dealt with my entire career. It’s not a skill learned overnight, and it’s one you will need to keep developing over time—but the basic concepts are simple. I hope you’ll find some value in applying these along with your other frameworks and tools.

Highlights / Skip to:
- Value is subjective, not objective (5:01)
- Measurability does not necessarily mean valuable (6:36)
- Businesses are made up of humans. Most B2B stakeholders aren’t spending their own money when making business decisions—what does that mean for your work? (9:30)
- Quantifying a data product’s value starts with understanding what is worth measuring in the eye of the beholder(s)—not math calculations (13:44)
- The more difficult it is to show the value of your product (or team) in numbers, the lower that value is to the stakeholder—initially (16:46)
- By simply helping a stakeholder think through how value should be calculated on a data product, you’re likely already providing additional value (18:02)
- Focus on expressing estimated value via a range versus a single number (19:36)
- Measurement of anything requires that we can observe the phenomenon first—but many stakeholders won’t be able to cite these phenomena without [your!] help (22:16)
- When you are measuring quantitative aspects of value, remember that measurement is not the same as accuracy (precision)—and the precision game can become a trap (25:37)
- How to measure anything—and why estimates often trump accuracy (31:19)
- Why you may need to steer the conversation away from ROI calculations in the short term (35:00)

Quotes from Today’s Episode:

Even when you can easily assign a dollar value to the data product you’re building, that does not necessarily reflect what your stakeholder actually feels about it—or about your team’s contribution. So, why do they keep asking you to quantify the value of your work? By actually understanding what a stakeholder needs to observe to know progress has been made on their initiative or data product, you will be positioned to deliver results they actually care about. While most of the time you should be able to show some obvious economic value in the work you’re doing, you may be getting hounded about this because you’re not meeting the often unstated qualitative goals. If you can surface the qualitative goals of your stakeholder, then the perception of the value of your team and its work goes up, and you’ll spend less time trying to measure an indirect contribution in quant terms that only has a subjectively right answer. (6:50)

The more difficult it is for you to show the monetary value of your data product (or team), the lower that value likely is to the stakeholder. This does not mean the value of your work is “low.” It means it’s perceived as low because it cannot be easily quantified in a way that is observable to the person whose judgment matters. By understanding the personal motivations and interests of your stakeholders, you can begin to collaboratively figure out what the correct success metrics should be—and how they’d be measured. By simply beginning to ask and uncover what they’re trying to measure, you can start to increase your contributions’ perceived value. (17:01)

Think about expressing “indirect value” as a range, not a precise single value. It’s much easier to refine your estimate (if necessary) once a range has been defined, and you only need to get precise enough for your stakeholder to make a decision with the information. How much time should you spend refining your measurement of the value? Potentially little to none—if the “better math” isn’t going to change anyone’s mind or decision. Spending more time measuring a data product’s value more accurately takes you away from doing actual product work—and if there isn’t much obvious value to the work, maybe the work—not the measurement of the work—needs to change. (19:49)

Smart leaders know that deriving a simple calculation of indirect contributions is complex—otherwise, the topic wouldn’t keep coming up. There is a “why” behind why they’re asking, and when you understand the “why,” you’ll be better positioned to deliver the value they actually seek, using valuation measurements that are “just enough” in their precision. What do you think it says to a stakeholder if you’re spending an inordinate amount of time simply trying to calculate and explain the value of your data product? (23:22)

Many organizations have for years invested in things that don’t always have a short-term ROI. They know that ROI takes time, and they can’t really measure what it’s worth along the way. Examples include investments in company culture, innovation, brand reputation, and many others. If you’re constantly playing defense and having to justify your existence or methods by quantifying the financial value of your data products (or data product management team, or UX team, or any other indirect contributor/contribution), then either your work truly does lack value, or you haven’t surfaced what the actual success metrics and outcomes are—in the eyes of the stakeholder. As such, the perceived value is “low” or opaque. They might be looking for a hard number to assign to it because they’re not seeing any of the other forms of value that they care about that would indicate positive progress. It’s easier to write [you] a large check for a big, innovative, unproven initiative if your stakeholders know what you and your team can accomplish with a small check. (35:16)

Links:
- Experiencing Data: Episode 80 with Doug Hubbard…
I’m doing things a bit differently for this episode of Experiencing Data. For the first time on the show, I’m hosting a panel discussion. I’m joined by Thomson Reuters’s Simon Landry, Sumo Logic’s Greg Nudelman, and Google’s Paz Perez to chat about how we design user experiences that improve people’s lives and create business impact when we expose LLM capabilities to our users. With the rise of AI, there are a lot of opportunities for innovation, but there are also many challenges—and frankly, my feeling is that a lot of these capabilities right now are making things worse for users, not better. We’re looking at a range of topics, such as the pros and cons of AI-first thinking, collaboration between UX designers and ML engineers, and the necessity of diversifying design teams when integrating AI and LLMs into B2B products.

Highlights / Skip to:
- Thoughts on the current state of LLM implementations and their impact on user experience (1:51)
- The problems that can come with the “AI-first” design philosophy (7:58)
- Should a company’s design resources go toward AI development? (17:20)
- How designers can navigate “fuzzy experiences” (21:28)
- Why you need to narrow and clearly define the problems you’re trying to solve when building LLM products (27:35)
- Why diversity matters in your design and research teams when building LLM products (31:56)
- Where you can find more from Paz, Greg, and Simon (40:43)

Quotes from Today’s Episode:

“[AI] will connect the dots. It will argue pro, it will argue against, it will create evidence supporting and refuting, so it’s really up to us to kind of drive this. If we understand the capabilities, then it is an almost limitless field of possibility. And these things are taught, and it’s a fundamentally different approach to how we build user interfaces. They’re no longer completely deterministic. They’re also extremely personalized, to the point where it’s ridiculous.” - Greg Nudelman (12:47)

“To put an LLM into a product means that there’s a non-zero chance your user is going to have a [negative] experience and no longer be your customer. That is a giant reputational risk, and there’s also a financial cost associated with running these models. I think we need to take more of a service design lens when it comes to [designing our products with AI] and ask what is the thing somebody wants to do… not on my website, but in their lives? What brings them to my [product]? How can I imagine a different world that leverages these capabilities to help them do their job? Because what [designers] are competing against is [a customer workflow] that probably worked well enough.” - Simon Landry (15:41)

“When we go general availability (GA) with a product, that traditionally means [designers] have done all the research, got everything perfect, and it’s all great, right? Today, GA is a starting gun. We don’t know [if the product is working] unless we [seek out user feedback]. A massive research method is needed. [We need qualitative research] like sitting down with the customer and watching them use the product to really understand what is happening[…] but you also need to collect data. What are they typing in? What are they getting back? Is somebody who’s typing in this type of question always having a short interaction? Let’s dig into it with rapid, iterative testing and evaluation, so that we can update our model and then move forward. Launching a product these days means the starting guns have been fired. Put the research to work to figure out the next step.” - Greg Nudelman (23:29)

“I think that having diversity on your design team (i.e., gender, level of experience, etc.) is critical. We’ve already seen some terrible outcomes. Multiple examples where an LLM is crafting horrendous emails, introductions, and so on. This is exactly why UXers need to get involved [with building LLMs]. This is why diversity in UX and on your tech team that deals with AI is so valuable. Number one piece of advice: get some researchers. Number two: make sure your team is diverse.” - Greg Nudelman (32:39)

“It’s extremely important to have UX talks with researchers, content designers, and data teams. It’s important to understand what a user is trying to do, the context [of their decisions], and the intention. [Designers] need to help [the data team] understand the types of data and prompts being used to train models. Those things are better when they’re written and thought of by [designers] who understand where the user is coming from. [Design teams working with data teams] are getting much better results than the [teams] that are working in a vacuum.” - Paz Perez (35:19)

Links:
- Milly Barker’s LinkedIn post
- Greg Nudelman’s Value Matrix Article
- Greg Nudelman’s website
- Paz Perez on Medium
- Paz Perez on LinkedIn
- Simon Landry’s LinkedIn…
With GenAI and LLMs comes great potential to delight and damage customer relationships—both during the sale and in the UI/UX. However, are B2B AI product teams actually producing real outcomes, on the business side and the UX side, such that customers find these products easy to buy, trustworthy, and indispensable? What is changing with customer problems as a result of LLM and GenAI technologies becoming more readily available to implement into B2B software? Anything? Is your current product or feature development being driven by the fact that you might now be able to solve it with AI? The “AI-first” team sounds like it’s cutting edge, but is that really determining what a customer will actually buy from you? Today I want to talk to you about the interplay of GenAI, customer trust (both user and buyer trust), and the role of UX in products using probabilistic technology. These thoughts are based on my own perceptions as a “user” of AI “solutions” (quotes intentional!), conversations with prospects and clients at my company (Designing for Analytics), as well as the bright minds I mentor over at the MIT Sandbox innovation fund. I also wrote an article about this subject if you’d rather read an abridged version of my thoughts.

Highlights / Skip to:
- AI and LLM-Powered Products Do Not Turn Customer Problems into “Now” and “Expensive” Problems (4:03)
- Trust and Transparency in the Sale and the Product UX: Handling LLM Hallucinations (Confabulations) and Designing for Model Interpretability (9:44)
- Selling AI Products to Customers Who Aren’t Users (13:28)
- How LLM Hallucinations and Model Interpretability Impact User Trust of Your Product (16:10)
- Probabilistic UIs and LLMs Don’t Negate the Need to Design for Outcomes (22:48)
- How AI Changes (or Doesn’t) Our Benchmark Use Cases and UX Outcomes (28:41)
- Closing Thoughts (32:36)

Quotes from Today’s Episode:

“Putting AI or GenAI into a product does not change the urgency or the depth of a particular customer problem; it just changes the solution space. Technology shifts in the last ten years have enabled founders to come up with all sorts of novel ways to leverage traditional machine learning, symbolic AI, and LLMs to create new products and disrupt established products; however, it would be foolish to ignore these developments as a product leader. All this technology does is change the possible solutions you can create. It does not change your customer’s situation, problem, or pain, either in depth, severity, or frequency. In fact, it might actually cause some new problems. I feel like most teams spend a lot more time living in the solution space than they do in the problem space. Fall in love with the problem, and love that problem regardless of how the solution space may continue to change.” (4:51)

“Narrowly targeted, specialized AI products are going to beat solutions trying to solve problems for multiple buyers and customers. If you’re building a narrow, specific product for a narrow, specific audience, one of the things you have on your side is a solution focused on a specific domain, used by people who have specific domain experience. You may not need a trillion-parameter LLM to provide significant value to your customer. AI products that have a more specific focus and address a very narrow ICP, I believe, are more likely to succeed than those trying to serve too many use cases—especially when GenAI is being leveraged to deliver the value. I think this can be true even for platform products as well. Narrowing the audience you want to serve also narrows the scope of the product, which in turn should increase the value that you bring to that audience—in part because you probably will have fewer trust, usability, and utility problems resulting from trying to leverage a model for a wide range of use cases.” (17:18)

“Probabilistic UIs and LLMs are going to create big problems for product teams, particularly if they lack a set of guiding benchmark use cases. I talk a lot about benchmark use cases as a core design principle in data-rich enterprise products. Why? Because a lot of B2B and enterprise products fall into the game of ‘adding more stuff over time.’ ‘Add it so you can sell it.’ As products and software companies begin to mature, you start having product owners and PMs attached to specific technologies or parts of a product. Figuring out how to improve the customer’s experience over time against the most critical problems and needs they have is a harder game to play than simply adding more stuff—especially if you have no benchmark use cases to hold you accountable. It’s hard to make the product indispensable if it’s trying to do 100 things for 100 people.” (22:48)

“Product is a hard game, and design and UX are by far not the only aspects of product that we need to get right. A lot of designers don’t understand this, and they think if they just nail design and UX, then everything else solves itself. The reason the design and experience part is hard is that it’s tied to behavior change—especially if you are ‘disrupting’ an industry, incumbent tool, application, or product. You are in the behavior-change game, and it’s really hard to get it right. But when you get it right, it can be really amazing and transformative.” (28:01)

“If your AI product is trying to do a wide variety of things for a wide variety of personas, it’s going to be harder to determine appropriate benchmarks and UX outcomes to measure and design against. Given LLM hallucinations, the increased problem of trust, model drift problems, etc., your AI product has to actually innovate in a way that is both meaningful and observable to the customer. It doesn’t matter what your AI is trying to ‘fix.’ If they can’t see what the benefit is to them personally, it doesn’t really matter if technically you’ve done something in a new and novel way. They’re just not going to care, because that question of ‘what’s in it for me?’ is always sitting in the back of their brain, whether it’s stated out loud or not.” (29:32)

Links:
- Designing for Analytics mailing list…
160 - Leading Product Through a Merger/Acquisition: Lessons from The Predictive Index’s CPO Adam Berke (42:10)
Today, I’m chatting with Adam Berke, the Chief Product Officer at The Predictive Index. For 70 years, The Predictive Index has helped customers hire the right employees, and after the merger with Charma, their products now nurture the employee/manager relationship. This is right up Adam’s alley, as he previously helped co-found Charma, the employee and workflow performance management software company, before the two organizations merged back in 2023. You’ll hear Adam talk about the first-time challenges (and successes) that come with integrating two products and two product teams, and why squashing out any ambiguity by overindexing on preparation (i.e., coming prepared with new org charts ASAP) is essential during the process. Integrating behavioral science into the world of data is what has allowed The Predictive Index to thrive since the 1950s. While this is the company’s main selling point, Adam explains how the science-forward approach can still create some disagreements—and learning opportunities—with The Predictive Index’s legacy customers.

Highlights / Skip to:
- What The Predictive Index is and how its product team conducts its work (1:24)
- Why Charma merged with The Predictive Index (5:11)
- The challenges Adam has faced as a CPO since the Charma/Predictive Index merger (9:21)
- How The Predictive Index has utilized behavioral science to remove the guesswork from hiring (14:22)
- The makeup of the product team that designs and delivers The Predictive Index’s products (20:24)
- Navigating the clashes between changing science and The Predictive Index’s legacy customers (22:37)
- How The Predictive Index analyzes the quality of its products with multiple user data metrics (27:21)
- What Adam would do differently if he had to redo the merger (37:52)
- Where you can find more from Adam and The Predictive Index (41:22)

Quotes from Today’s Episode:

“Acquisitions are complicated. Outside of a few select companies, there are very few that have mergers and acquisitions as a repeatable discipline. More often than not, neither [company in the merger] has an established playbook for how to do this. You’re [acquiring a company] because of its product, team, or maybe even one feature. You have different theories on how the integration might look, but experiencing it firsthand is a whole different thing. My initial role didn’t exist in [The Predictive Index] before. The rest of the whole PI organization knows how to get their work done before this, and now there’s this new executive. There are just tons of [questions and confusion] if you don’t go in assuming good faith and aren’t willing to work through the bumps. It’s going to get messy.” - Adam Berke (9:41)

“We integrated the teams and relaunched the product. Charma became [a part of the product called] PI Perform, and right away there was re-skinning, redesign, and some back-end architecture that needed to happen to make it its own module. From a product perspective, we’re trying to deliver [Charma’s] unique value prop. That’s when we can start [figuring out how to] infuse PI’s behavioral science into these workflows. We have this foundation. We got the thing organized. We got the teams organized. We were 12 people when we were acquired… and here we are a year later. 150+ new customers have been added to PI Perform because it’s accelerating now that we’re figuring out the product.” - Adam Berke (12:18)

“Our product team has the roles that you would expect—a PM, a researcher, UX design—and then one atypical role: a PhD behavioral scientist. [Our product already had] suggested topics and templates [for manager/IC one-on-one meetings], but now we want to make those templates and suggested topics more dynamic. There might be different questions to draw out a better discussion, and our behavioral scientists help us determine [those questions]… [Our behavioral scientists] look at the science and other research, and calibrate [the one-on-one questions] before we implement them into the product.” - Adam Berke (21:04)

“We’ve adapted the technology and science over time as they move forward. We want to update the product with the most recent science, but there are customers who have used this product in a certain way for decades in some cases. Our desire is to follow the science… but you can’t necessarily stop people from using the stuff in a way that they used it 20 years ago. We sometimes end up with disagreements [with customers over product changes based on scientific findings], and those are tricky conversations. But even in that debate… it comes down to all the best practices you would follow in product development in general: listening to your customers, asking that additional ‘why’ question, and trying to get to root causes.” - Adam Berke (23:36)

“We’re doing an upgrade to our platform right now, trying to figure out how to manage user permissions in the new version of the product. The way that we did it in the old version had a lot of problems associated [with it]… and we put out a survey. ‘Hey, do you use this to do X?’ We got hundreds of responses and found that half of them were not using it for the reason that we thought they were. At first, we thought thousands of people were going to have deep, deep sensitivities to tweaks in how this works, and now we realize that it might be half that, at best. A simple one-question survey, asked about the right problem in the right way, can help avoid a lot of unnecessary thrashing on a product problem that might not have even existed in the first place.” - Adam Berke (35:22)

Links Referenced:
- The Predictive Index: https://www.predictiveindex.com/
- LinkedIn: https://www.linkedin.com/in/adamberke/…
159 - Uncorking Customer Insights: How Data Products Revealed Hidden Gems in Liquor & Hospitality Retail (40:47)
Today, I’m talking to Andy Sutton, GM of Data and AI at Endeavour Group, Australia's largest liquor and hospitality company. In this episode, Andy—who is also a member of the Data Product Leadership Community (DPLC)—shares his journey from traditional, functional analytics to a product-led approach that drives their mission to leverage data and personalization to build the “Spotify for wines.” This shift has greatly transformed how Endeavour’s digital and data teams work together, and Andy explains how their advanced analytics work has paid off in terms of the company’s value and profitability. You’ll learn about the often overlooked importance of relationships in a data-driven world, and how Andy sees the importance of understanding how users do their job in the wild (with and without your product(s) in hand). Earlier this year, Andy also gave the DPLC community a deeper look at how they brew data products at EDG, and that recording is available to our members in the archive. We covered: What it was like at EDG before Andy started adopting a producty approach to data products and how things have now changed (1:52) The moment that caused Andy to change how his team was building analytics solutions (3:42) The amount of financial value that Andy's increased with his scaling team as a result of their data product work (5:19) How Andy and Endeavour use personalization to help build “the Spotify of wine” (9:15) What the team under Andy required in order to make the transition to being product-led (10:27) The successes seen by Endeavour through the digital and data teams’ working relationship (14:04) What data product management looks like for Andy’s team (18:45) How Andy and his team find solutions to bridging the adoption gap (20:53) The importance of exposure time to end users for the adoption of a data product (23:43) How talking to the pub staff at EDG’s bars and restaurants helps his team build better data products (27:04) What Andy loves about working for Endeavour Group (32:25) What Andy would change if he could rewind back to 2022 and do it all over (34:55) Final thoughts (38:25) Quotes from Today’s Episode “I think the biggest thing is the value we unlock in terms of incremental dollars, right? I’ve not worked in analytics team before where we’ve been able to deliver a measurable value…. So, we’re actually—in theory—we’re becoming a profit center for the organization, not just a cost center. And so, there’s kind of one key metric. The second one, we do measure the voice of the team and how engaged our team are, and that’s on an upward trend since we moved to the new operating model, too. We also measure [a type of] “voice of partner” score [and] get something like a 4.1 out of 5 on that scale. Those are probably the three biggest ones: we’re putting value in, and we’re delivering products, I guess, our internal team wants to use, and we are building an enthused team at the same time.” - Andy Sutton (16:18) “ You can put an [unfinished] product in front of an end customer, and they will give you quality feedback that you can then iterate on quickly. You can do that with an internal team, but you’ll lose credibility. Internal teams hold their analytics colleagues to a higher standard than the external customers. We’re trying to change how people do their roles. People feel very passionate about the roles they do, and how they do them, and what they bring to that role. We’re trying to build some of that into products. 
It requires probably more design consideration than I’d anticipated, and we’re still bringing in more designers to help us move closer to the start line.” - Andy Sutton (19:25) “[Customer research] is becoming critical in terms of the products we’re building. You’re building a product, a set of products, or a process for an operations team. In our context, an operations team can mean a team of people who run a pub. It’s not just about convincing me, my product managers, or my data scientists that you need research; we want to take some of the resources out of running that bar for a period of time because we want to spend time with [the pub staff] watching, understanding, and researching. We’ve learned some of these things along the way… we’ve earned the trust, we’ve earned that seat at the table, and so we can have those conversations. It’s not trivial to get people to say, ‘I’ll give you a day-long workshop, or give you my team off of running a restaurant and a bar for the day so that they can spend time with you, and so you can understand our processes.’” - Andy Sutton (24:42) “I think what is very particular to pubs is the importance of the interaction between the customer and the person serving the customer. [Pubs] are about the connections between the staff and the customer, and you don’t get any of that if you’re just looking at things from a pure data perspective… You don’t see the [relationships between pub staff and customer] in the [data], so how do you capture some of that in your product? It’s about understanding the context of the data, not just the data itself.” - Andy Sutton (28:15) “Every winery, every wine grower, every wine has got a story. These conversations [and relationships] are almost natural in our business. Our CEO started work on the shop floor in one of our stores 30 years ago. That kind of relationship stuff percolates through the organization. Having these conversations around the customer and internal stakeholders in the context of data feels a lot easier because storytelling and relationships are the way we get things done. An analytics team may get frustrated with people who can’t understand data, but it’s [the analytics team’s job] to help bridge that gap.” - Andy Sutton (32:34) Links Referenced LinkedIn: https://www.linkedin.com/in/andysutton/ Endeavour Group: https://www.endeavourgroup.com.au/ Data Product Leadership Community: https://designingforanalytics.com/community…

1 158 - From Resistance to Reliance: Designing Data Products for Non-Believers with Anna Jacobson of Operator Collective 43:41
After getting started in construction management, Anna Jacobson traded in the hard hat for the world of data products and operations at a VC company. Anna, who has a structural engineering undergrad and a master’s in data science, is also a Founding Member of the Data Product Leadership Community (DPLC). However, her work with data products is more “accidental” and is just part of her responsibility at Operator Collective. Nonetheless, Anna had a lot to share about building data products, dashboards, and insights for users—including resistant ones! That resistance is precisely what I wanted to talk to her about in this episode: how does Anna get somebody to adopt a data product to which they may be apathetic, if not completely resistant? At the end of the episode, Anna gives us a sneak peek at what she’s planning to talk about in our final 2024 live DPLC group discussion coming up on 12/18/2024. We covered: (1:17) Anna's background and how she got involved with data products (3:32) The ways Anna applied her experiences working in construction management to her current work with data products at a VC firm (5:32) Explaining one of the main data products she works on at Operator Collective (9:55) How Anna defines success for her data products (15:21) The process of designing data products for "non-believers" (21:08) How to think about "super users" and their feedback on a data product (27:11) How a company's cultural problems can be a blocker for product adoption (38:21) A preview of what you can expect from Anna's talk and live group discussion in the DPLC (40:24) Closing thoughts from Anna (42:54) Where you can find more from Anna Quotes from Today’s Episode “People working with data products are always thinking about how to [gain user adoption of their product]... I can’t think of a single one where [all users] were immediately on board. There’s a lot to unpack in what it takes to get non-believers on board, and it’s something that none of us ever get any training on. You just learn through experience, and it’s not something that most people took a class on in college. All of the social science around what we do gets really passed over for all the technical stuff. It takes thinking through and understanding where different [users] are coming from, and [understanding] that my perspective alone is not enough to make it happen.” - Anna Jacobson (16:00) “If you only bring together the super users and don’t try to get feedback from the average user, you are missing the perspective of the person who isn’t passionate about the product. A non-believer is someone who is just over capacity. They may be very hard-working, they may be very smart, but they just don’t have the bandwidth for new things. That’s something that has to be overcome when you’re putting a new product into place.” - Anna Jacobson (22:35) “If a company can’t find budget to support [a data product], that’s a cultural decision. It’s not a financial decision. They find the money for the things that they care about. Solving the technology challenge is pretty easy, but you have to have a company that’s motivated to do that. If you want to implement something new, be it a data product or any change in an organization, identifying the cultural barriers and figuring out how to bring [people in an organization] on board is the crux of it. The money and the technology can be found.” - Anna Jacobson (27:58) “I think people are actually very bad at explaining what they want, and asking people what they want is not helpful.
If you ask people what they want to do, then I think you have a shot at being able to build a product that does [what they want]. The executive sponsors typically have a very different perspective on what the product [should be] than the users do. If all of your information is getting filtered through the executive sponsor, you’re probably not getting the full picture.” - Anna Jacobson (31:45) “You want to define what the opportunity is, the problem, the solution, and you want to talk about costs and benefits. You want to align [the data product] with corporate strategy, and those things are fairly easy to map out. But as you get down to the user, what they want to know is, ‘How is this going to make my life easier? How is this going to make [my job] faster? How is it going to result in better outcomes?’ They may have an interest in how it aligns with corporate strategy, but that’s not what’s going to motivate them. It’s really just easier, faster, better.” - Anna Jacobson (35:00) Links Referenced LinkedIn: https://www.linkedin.com/in/anna-ching-jacobson/ DPLC (Data Product Leadership Community): https://designingforanalytics.com/community…

1 157 - How this materials science SAAS company brings PM+UX+data science together to help materials scientists accelerate R&D 34:58
R&D for materials-based products can be expensive because improving a product’s materials takes a lot of experimentation that historically has been slow to execute. In traditional labs, you might change one variable, re-run your experiment, and see if the data shows improvements in your desired attributes (e.g. strength, shininess, texture/feel, power retention, temperature, stability, etc.). However, today, there is a way to leverage machine learning and AI to reduce the number of experiments a materials scientist needs to run to gain the improvements they seek. Materials scientists spend a lot of time in the lab—away from a computer screen—so how do you design a desirable informatics SAAS that actually works and fits into the workflow of these end users? As the Chief Product Officer at MaterialsZone, Ori Yudilevich came on Experiencing Data with me to talk about this challenge and how his PM, UX, and data science teams work together to produce a SAAS product that makes the benefits of materials informatics so valuable that materials scientists depend on their solution to be time and cost-efficient with their R&D efforts. We covered: (0:45) Explaining what Ori does at MaterialsZone and who their product serves (2:28) How Ori and his team help make materials science testing more efficient through their SAAS product (9:37) How they design a UX that can work across various scientific domains (14:08) How “doing product” at MaterialsZone matured over the past five years (17:01) Explaining the "Wizard of Oz" product development technique (21:09) The importance of integrating UX designers into the "Wizard of Oz" (23:52) The challenges MaterialsZone faces when trying to get users to adopt their product (32:42) Advice Ori would've given himself five years ago (33:53) Where you can find more from MaterialsZone and Ori Quotes from Today’s Episode “The fascinating thing about materials science is that you have this variety of domains, but all of these things follow the same process. One of the problems [consumer goods companies] face is that they have to do lengthy testing of their products. This is something you can use machine learning to shorten. [Product research] is an iterative process that typically takes a long time. Using your data effectively and using machine learning to predict what can happen, what’s better to try out, and what will reduce costs can accelerate time to market.” - Ori Yudilevich (3:47) “The difference [in time spent testing a product] can be up to 70% [i.e. you can run 70% fewer experiments using ML.] That [also] means 70% less resources you’re using. Under the ‘old system’ of trial and error, you were just trying out a lot of things. The human mind cannot process a large number of parameters at once, so [a materials scientist] would just start playing only with [one parameter at a time]. You’ll have many experiments where you just try to optimize [for] one parameter, but then you might have 20, 30, or 100 more [to test]. Using machine learning, you can change a lot of parameters at once. The model can learn what has the most effect, what has a positive effect, and what has a negative effect. The differences can be really huge.” - Ori Yudilevich (5:50) “Once you go deeper into a use case, you see that there are a lot of differences. The types of raw materials, the data structure, the quantity of data, etc. For example, with batteries, you have lots of data because you can test hundreds all at once. Whereas with something like ceramics, you don’t try so many [experiments].
You just can’t. It’s much slower. You can’t do so many [experiments] in parallel. You have much less data. Your models are different, and your data structure is different. But there’s also quite a lot of commonality because you’re storing the data. In the end, you have each domain, some raw materials, formulations, tests that you’re doing, and different statistical plots that are very common.” - Ori Yudilevich (11:24) “We’ll typically do what we call the ‘Wizard of Oz’ technique. You simulate as if you have a feature, but you’re actually working for your client behind the scenes. You tell them [the simulated feature] is what you’re doing, but then measure [the client’s response] to understand if there’s any point in further developing that feature. Once you validate it, have enough data, and know where the feature is going, then you’ll start designing it and releasing it in incremental stages. We’ve made a lot of progress in how we discover opportunities and how we build something iteratively to make sure that we’re always going in the right direction.” - Ori Yudilevich (15:56) “The main problem we’re encountering is changing the mindset of users. Our users are not people who sit in front of a computer. These are researchers who work in [a materials science] lab. The challenge [we have] is getting people to use the platform more. To see it’s worth [their time] to look at some insights, and run the machine learning models. We’re always looking for ways to make that transition faster… and I think the key is making [the user experience] just fun, easy, and intuitive.” - Ori Yudilevich (24:17) “Even if you make [the user experience] extremely smooth, if [users] don’t see what they get out of it, they’re still not going to [adopt your product] just for the sake of doing it. What we find is if this [product] can actually make them work faster or develop better products—that gets them interested. If you’re adopting these advanced tools, it makes you a better researcher and worker. People who [adopt those tools] grow faster. They become leaders in their team, and they slowly drag the others in.” - Ori Yudilevich (26:55) “Some of [MaterialsZone’s] most valuable employees are the people who have been users. Our product manager is a materials scientist. I’m not a materials scientist, and it’s hard to imagine being that person in the lab. What I think is correct turns out to be completely wrong because I just don’t know what it’s like. Having [materials scientists] who’ve made the transition to software and data science? You can’t replace that.” - Ori Yudilevich (31:32) Links Referenced Website: https://www.materials.zone LinkedIn: https://www.linkedin.com/in/oriyudilevich/ Email: ori@materials.zone…

1 156 - The Challenges of Bringing UX Design and Data Science Together to Make Successful Pharma Data Products with Jeremy Forman 41:37
Jeremy Forman joins us to open up about the hurdles—and successes—that come with building data products for pharmaceutical companies. Although he’s new to Pfizer, Jeremy has years of experience leading data teams at organizations like Seagen and the Bill and Melinda Gates Foundation. He currently serves in a more specialized role in Pfizer’s R&D department, building AI and analytical data products for scientists and researchers. Jeremy gave us a good look at his team makeup, and in particular, how his data product analysts and UX designers work with pharmaceutical scientists and domain experts to build data-driven solutions. We talked a good deal about how and when UX design plays a role in Pfizer’s data products, including a GenAI-based application they recently launched internally. Highlights/ Skip to: (1:26) Jeremy's background in analytics and transition into working for Pfizer (2:42) Building an effective AI analytics and data team for pharma R&D (5:20) How Pfizer finds data product managers (8:03) Jeremy's philosophy behind building data products and how he adapts it to Pfizer (12:32) The moment Jeremy heard a Pfizer end-user use product management research language and why it mattered (13:55) How Jeremy's technical team members work with UX designers (18:00) The challenges that come with producing data products in the medical field (23:02) How to justify spending the budget on UX design for data products (24:59) The results we've seen having UX design work on AI / GenAI products (25:53) What Jeremy learned at the Bill & Melinda Gates Foundation with regards to UX and its impact on him now (28:22) Managing the "rough dance" between data science and UX (33:22) Breaking down Jeremy's GenAI application demo from CDIOQ (36:02) What would Jeremy prioritize right now if his team got additional funding (38:48) Advice Jeremy would have given himself 10 years ago (40:46) Where you can find more from Jeremy Quotes from Today’s Episode “We have stream-aligned squads focused on specific areas such as regulatory, safety and quality, or oncology research. That’s so we can create functional career pathing and limit context switching and fragmentation. They can become experts in their particular area and build a culture within that small team. It’s difficult to build good [pharma] data products. You need to understand the domain you’re supporting. You can’t take somebody with a financial background and put them in an Omics situation. It just doesn’t work. And we have a lot of the scars, and the failures to prove that.” - Jeremy Forman (4:12) “You have to have the product mindset to deliver the value and the promise of AI data analytics. I think small, independent, autonomous, empowered squads with a product leader is the only way that you can iterate fast enough with [pharma data products].” - Jeremy Forman (8:46) “The biggest challenge is when we say data products. It means a lot of different things to a lot of different people, and it’s difficult to articulate what a data product is. Is it a view in a database? Is it a table? Is it a query? We’re all talking about it in different terms, and nobody’s actually delivering data products.” - Jeremy Forman (10:53) “I think when we’re talking about [data products] there’s some type of data asset that has value to an end-user, versus a report or an algorithm. I think it’s even hard for UX people to really understand how to think about an actual data product. I think it’s hard for people to conceptualize, how do we do design around that?
It’s one of the areas I think I’ve seen the biggest challenges, and I think some of the areas we’ve learned the most. If you build a data product [and] it’s not accurate, and people are getting results that are incomplete… people will abandon it quickly.” - Jeremy Forman (15:56) “I think that UX design and AI development or data science work is a magical partnership, but they often don’t know how to work with each other. That’s been a challenge, but I think investing in that has been critical to us. Even though we’ve had struggles… I think we’ve also done a good job of understanding the [user] experience and impact that we want to have. The prototype we shared [at CDIOQ] is driven by user experience and trying to get information in the hands of the research organization to understand some portfolio types of decisions that have been made in the past. And it’s been really successful.” - Jeremy Forman (24:59) “If we’re having technology conversations with our business users and we’re only focused on the technology output, we’re just building reports. [After we adopted a human-centered design approach], it was talking [with end-users] about outcomes, value, and adoption. Having that resource transformed the conversation, and I felt like our quality went up. I felt like our output went down, but our impact went up. [End-users] loved the tools, and that wasn’t what was happening before… I credit a lot of that to the human-centered design team.” - Jeremy Forman (26:39) “When you’re thinking about automation through machine learning or building algorithms for [clinical trial analysis], it becomes a harder dance between data scientists and human-centered design. I think there’s a lack of appreciation and understanding of what UX can do. Human-centered design is an empathy-driven understanding of users’ experience, their work, their workflow, and the challenges they have. I don’t think there’s an appreciation of that skill set.” - Jeremy Forman (29:20) “Are people excited about it? Is there value? Are we hearing positive things? Do they want us to continue? That’s really how I’ve been judging success. Is it saving people time, and do they want to continue to use it? They want to continue to invest in it. They want to take their time as end-users, to help with testing, helping to refine it. Those are the indicators. We’re not generating revenue, so what does the adoption look like? Are people excited about it? Are they telling friends? Do they want more? When I hear that the ten people [who were initial users] are happy and that they think it should be rolled out to the whole broader audience, I think that’s a good sign.” - Jeremy Forman (35:19) Links Referenced LinkedIn: https://www.linkedin.com/in/jeremy-forman-6b982710/…

The relationship between AI and ethics is both developing and delicate. On one hand, the GenAI advancements to date are impressive. On the other, extreme care needs to be taken as this tech continues to quickly become more commonplace in our lives. In today’s episode, Ovetta Sampson and I examine the crossroads ahead for designing AI and GenAI user experiences. While professionals and the general public are eager to embrace new products, recent breakthroughs, etc., we still need to have some guard rails in place. If we don’t, data can easily get mishandled, and people could get hurt. Ovetta possesses firsthand experience working on these issues as they sprout up. We look at who should be on a team designing an AI UX, the risks associated with GenAI, ethics, and what we need to be thinking about going forward. Highlights/ Skip to: (1:48) Ovetta's background and what she brings to Google’s Core ML group (6:03) How Ovetta and her team work with data scientists and engineers deep in the stack (9:09) How AI is changing the front-end of applications (12:46) The type of people you should seek out to design your AI and LLM UXs (16:15) Explaining why we’re only at the very start of major GenAI breakthroughs (22:34) How GenAI tools will alter the roles and responsibilities of designers, developers, and product teams (31:11) The potential harms of carelessly deploying GenAI technology (42:09) Defining acceptable levels of risk when using GenAI in real-world applications (53:16) Closing thoughts from Ovetta and where you can find her Quotes from Today’s Episode “If artificial intelligence is just another technology, why would we build entire policies and frameworks around it? The reason why we do that is because we realize there are some real thorny ethical issues [surrounding AI]. Who owns that data? Where does it come from? Data is created by people, and all people create data. That’s why companies have strong legal, compliance, and regulatory policies around [AI], how it’s built, and how it engages with people. Think about having a toddler and then training the toddler on everything in the Library of Congress and on the internet. Do you release that toddler into the world without guardrails? Probably not.” - Ovetta Sampson (10:03) “[When building a team] you should look for a diverse thinker who focuses on the limitations of this technology, not its capability. You need someone who understands that the end destination of that technology is an engagement with a human being. You need somebody who understands how they engage with machines and digital products. You need that person to be passionate about testing various ways that relationships can evolve. When we go from execution on code to machine learning, we make a shift from [human] agency to a shared-agency relationship. The user and machine both have decision-making power. That’s the paradigm shift that [designers] need to understand. You want somebody who can keep that duality in their head as they’re testing product design.” - Ovetta Sampson (13:45) “We’re in for a huge taxonomy change. There are words that have very specific definitions today. Software engineer. Designer. Technically skilled. Digital. Art. Craft. AI is changing all that. It’s changing what it means to be a software engineer. Machine learning used to be the purview of data scientists only, but with GenAI, all of that is baked into Gemini. So, now you start at a checkpoint, and you’re like, all right, let’s go make an API, right?
So, the skills, the understanding, the knowledge, the taxonomy even, how we talk about these things—how do we talk about the machine who speaks to us, talks to us, who could create a podcast out of just voice memos?” - Ovetta Sampson (24:16) “We have to be very intentional [when building AI tools], and that’s the kind of folks you want on teams. [Designers] have to go and play scary scenarios. We have to do that. No designer wants to be “Negative Nancy,” but this technology has huge potential to harm. It has harmed. If we don’t have the skill sets to recognize, document, and minimize harm, that needs to be part of our skill set. If we’re not looking out for the humans, then who actually is?” - Ovetta Sampson (32:10) “[Research shows] things happen to our brain when we’re exposed to artificial intelligence… there are real human engagement risks that are an opportunity for design. When you’re designing a self-driving car, you can’t just let the person go to sleep unless the car is fully [automated] and every other car on the road is self-driving. If there are humans behind the wheel, you need to have a feedback loop system—something that’s going to happen [in case] the algorithm is wrong. If you don’t have that designed, there’s going to be a large human engagement risk that a car is going to run over somebody who’s [for example] pushing a bike up a hill [...] Why? The car could not calculate the right speed and pace of a person pushing their bike. It had the speed and pace of a person walking, the speed and pace of a person on a bike, but not the two together. Algorithms will be wrong, right?” - Ovetta Sampson (39:42) “Model goodness used to be the purview of companies and the data scientists. Think about the first search engines. Their model goodness was [about] 77%. That’s good, right? And then people started seeing photos of apes when [they] typed in ‘black people.’ Companies have to get used to going to their customers in a wide spectrum and asking them when their [models or apps] are right and wrong. They can’t take on that burden themselves anymore. Having ethically sourced data input and variables is hard work. If you’re going to use this technology, you need to put into place the governance that needs to be there.” - Ovetta Sampson (44:08)…

1 154 - 10 Things Founders of B2B SAAS Analytics and AI Startups Get Wrong About DIY Product and UI/UX Design 44:47
Sometimes DIY UI/UX design only gets you so far—and you know it’s time for outside help. One thing prospects from SAAS analytics and data-related product companies often ask me is what things are like in the other guy/gal’s backyard. They want to compare their situation to others like them. So, today, I want to share some of the common “themes” I see that usually are the root causes of what leads to a phone call with me. By the time I am on the phone with most prospects who already have a product in market, they’re usually having significant problems with one or more of the following: sales friction (product value is opaque); low adoption/renewal worries (user apathy); customer complaints about UI/UX being hard to use; velocity (the team is doing tons of work, but the leader isn’t seeing progress)—and the like. I’m hoping today’s episode will explain some of the root causes that may lead to these issues — so you can avoid them in your data product building work! Highlights/ Skip to: (10:47) Design != "front-end development" or analyst work (12:34) Liking doing UI/UX/viz design work vs. knowing (15:04) When a leader sees lots of work being done, but the UX/design isn’t progressing (17:31) Your product’s UX needs to convey some magic IP/special sauce…but it isn’t (20:25) Understanding the tradeoffs of using libraries, templates, and other solutions’ designs as a foundation for your own (25:28) The sunk cost bias associated with POCs and “we’ll iterate on it” (28:31) Relying on UI/UX "customization" to please all customers (31:26) The hidden costs of abstraction of system objects, UI components, etc. to make life easier for engineering and technical teams (32:32) Believing you’ll know the design is good “when you see it” (and what you don’t know you don’t know) (36:43) Believing that because the data science/AI/ML modeling under your solution was accurate, difficult, and/or expensive, it is automatically worth paying for Quotes from Today’s Episode The challenge is often not knowing what you don’t know about a project. We often end up focusing on building the tech [and rushing it out] so we can get some feedback on it… but product is not about getting it out there so we can get feedback. The goal of doing product well is to produce value, benefits, or outcomes. Learning is important, but that’s not what the objective is. The objective is benefits creation. (5:47) When we start doing design on a project that’s not design-actionable, we build debt and sometimes can hurt the process of design. If you start designing your product with an entire green space, no direction, and no constraints, the chance of you shipping a good v1 is small. Your product strategy needs to be design-actionable for the team to properly execute against it. (19:19) While you don’t need to always start at zero with your UI/UX design, what are the parts of your product or application that do make sense to borrow, “steal,” and cheat from? And when does it not? It takes skill to know when you should be breaking the rules or conventions. Shortcuts often don’t produce outsized results—unless you know what a good shortcut looks like. (22:28) A proof of concept is not a minimum valuable product. There’s a difference between proving the tech can work and making it into a product that’s so valuable, someone would exchange money for it because it’s so useful to them. Whatever that value is, these are two different things.
(26:40) Trying to do a little bit for everybody [through excessive customization] can often result in nobody understanding the value or utility of your solution. Customization can hide the fact the team has decided not to make difficult choices. If you’re coming into a crowded space… it’s likely not going to be a compelling reason to [convince customers to switch to your solution]. Customization can be a tax, not a benefit. (29:26) Watch for the sunk cost bias [in product development]. [Buyers] don’t care how the sausage was made. Many don’t understand how the AI stuff works, and they probably don’t need to understand how it works. They want the benefits downstream from technology wrapped up in something so invaluable they can’t live without it. Watch out for technically right, effectively wrong. (39:27)…

1 153 - What Impressed Me About How John Felushko Does Product and UX at the Analytics SAAS Company, LabStats 57:31
In today’s episode, I’m joined by John Felushko, a product manager at LabStats who impressed me after we recently had a 1x1 call together. John and his team have developed a successful product that helps universities track and optimize their software and hardware usage so schools make smart investments. However, John also shares how culture and value are very tied together—and why their product isn’t a fit for every school, or every country. John shares how important customer relationships are, how his team designs great analytics user experiences, how they do user research, and what he learned making high-end winter sports products that’s relevant to leading a SAAS analytics product. Combined with John’s background in history and the political economy of finance, John paints some very colorful stories about what they’re getting right—and how they’ve course-corrected over the years at LabStats. Highlights/ Skip to: (0:46) What is the LabStats product (2:59) Orienting analytics around customer value instead of IT/data (5:51) "Producer of Persistently Profitable Product Process" (11:22) How they make product adjustments based on previous failures (15:55) Why a lack of cultural understanding caused LabStats to fail internationally (18:43) Quantifying value beyond dollars and cents (25:23) How John is able to work so closely with his customers without barriers (30:24) Who makes up the LabStats product research team (35:04) How strong customer relationships help inform the UX design process (38:29) Getting senior management to accept that you can't regularly and accurately predict when you’ll be feature-complete and ship (43:51) Where John learned his skills as a successful product manager (47:20) Where you can go to cultivate the non-technical skills to help you become a better SAAS analytics product leader (51:00) What advice would John Felushko have given himself 10 years ago? (56:19) Where you can find more from John Felushko Quotes from Today’s Episode “The product process is [essentially] really nothing more than the scientific method applied to business. Every product is an experiment - it has a hypothesis about a problem it solves. At LabStats [we have a process] where we go out and clearly articulate the problem. We clearly identify who the customers are, and who are [people at other colleges] having that problem. Incrementally and as inexpensively as possible, [we] test our solutions against those specific customers. The success rate [of testing solutions by cross-referencing with other customers] has been extremely high.” - John Felushko (6:46) “One of the failures I see in Americans is that we don’t realize how much culture matters. Americans have this bias to believe that whatever is valuable in my culture is valuable in other cultures. Value is entirely culturally determined and subjective. Value isn’t a number on a spreadsheet. [LabStats positioned our product] as something that helps you save money and be financially efficient. In French government culture, financial efficiency is not a top priority. Spending government money on things like education is seen as a positive good. The more money you can spend on it, the better. So, the whole message of financial efficiency wasn’t going to work in that market.” - John Felushko (16:35) “What I’m really selling with data products is confidence. I’m selling assurance. I’m selling an emotion. Before I was a product manager, I spent about ten years in outdoor retail, selling backpacks and boots.
What I learned from that is you’re always selling emotion, at every level. If you can articulate the ROI, the real value is that the buyer has confidence they bought the right thing.” - John Felushko (20:29) “[LabStats] has three massive, multi-million dollar horror stories in our past where we [spent] millions of dollars in development work for no results. No ROI. Horror stories are what shape people’s values more than anything else. Negative outcomes are what people avoid more than anything else. [It’s important to] tell those stories and perpetuate those [lessons] through the culture of your organization. These are the times we screwed up, and this is what we learned from it—do you want to screw up like that again? Because we learned not to do that.” - John Felushko (38:45) “There’s an old description of a product manager, like, ‘Oh, they come across as the smartest person in the room.’ Well, how do you become that person? Expand your view, and expand the amount of information you consume as widely as possible. That’s so important to UX design and thinking about what went wrong. Why are some customers super happy and some customers not? What is the difference between those two groups of people? Is it culture? Is it time? Is it mental ability? Is it the size of the screen they’re looking at my product on? What variables can I define and rule out, and what data sources do I have to answer all those questions? It’s just the normal product manager thing—constant curiosity.” - John Felushko (48:04)…

1 137 - Immature Data, Immature Clients: When Are Data Products the Right Approach? feat. Data Product Architect, Karen Meppen 44:50
This week, I'm chatting with Karen Meppen, a founding member of the Data Product Leadership Community and a Data Product Architect and Client Services Director at Hakkoda. Today, we're tackling the difficult topic of developing data products in situations where a product-oriented culture and data infrastructures may still be emerging or “at odds” with a human-centered approach. Karen brings extensive experience and a strong belief in how to effectively negotiate the early stages of data maturity. Together we look at the major hurdles that businesses encounter when trying to properly exploit data products, as well as the necessity of leadership support and strategy alignment in these initiatives. Karen's insights offer a roadmap for those seeking to adopt a product and UX-driven methodology when significant tech or cultural hurdles may exist. Highlights/ Skip to: I introduce Karen Meppen and the challenges of dealing with data products in places where the data and tech aren't quite there yet (00:00) Karen shares her thoughts on what it's like working with "immature data" (02:27) Karen breaks down what a data product actually is (04:20) Karen and I discuss why having executive buy-in is crucial for moving forward with data products (07:48) The sometimes fuzzy definition of "data products" (12:09) Karen defines “shadow data teams” and explains how they sometimes conflict with tech teams (17:35) How Karen identifies the nature of each team to overcome common hurdles of connecting tech teams with business units (18:47) How she navigates conversations with tech leaders who think they already understand the requirements of business users (22:48) Using design prototypes and design reviews with different teams to make sure everyone is on the same page about UX (24:00) Karen shares stories from earlier in her career that led her to embrace human-centered design to ensure data products actually meet user needs (28:29) We reflect on our chat about UX, data products, and the “producty” approach to ML and analytics solutions (42:11) Quotes from Today’s Episode "It’s not really fair to get really excited about what we hear about or see on LinkedIn, at conferences, etc. We get excited about the shiny things, and then want to go straight to it when [our] organization [may not be] ready to do that, for a lot of reasons." - Karen Meppen (03:00) "If you do not have support from leadership and this is not something [they are] passionate about, you probably aren’t a great candidate for pursuing data products as a way of working." - Karen Meppen (08:30) "Requirements are just friendly lies." - Karen, quoting Brian about how data teams need to interpret stakeholder requests (13:27) "The greatest challenge that we have in technology is not technology, it’s the people, and understanding how we’re using the technology to meet our needs." - Karen Meppen (24:04) "You can’t automate something that you haven’t defined. For example, if you don’t have clarity on your tagging approach for your PII, or just the nature of all the metadata that you’re capturing for your data assets and what it means or how it’s handled—to make it good, then how could you possibly automate any of this that hasn’t been defined?" - Karen Meppen (38:35) "Nothing upsets an end-user more than lifting-and-shifting an existing report with the same problems it had into a new solution that they’ve never used before."
- Karen Meppen (40:13) “Early maturity may look different in many ways depending upon the nature of business you’re doing, the structure of your data team, and how it interacts with folks.” - Karen Meppen (42:46) Links Data Product Leadership Community: https://designingforanalytics.com/community/ Karen Meppen on LinkedIn: https://www.linkedin.com/in/karen--m/ Hakkōda, Karen's company, for more insights on data products and services: https://hakkoda.io/…

1 136 - Navigating the Politics of UX Research and Data Product Design with Caroline Zimmerman 44:16
This week I’m chatting with Caroline Zimmerman, Director of Data Products and Strategy at Profusion. Caroline shares her journey through the school of hard knocks that led to her discovery that incorporating more extensive UX research into the data product design process improves outcomes. We explore the complicated nature of discovering and building a better design process, how to engage end users so they actually make time for research, and why understanding how to navigate interdepartmental politics is necessary in the world of data and product design. Caroline reveals the pivotal moment that changed her approach to data product design, as well as her learnings from evolving data products with the users as their needs and business strategies change. Lastly, Caroline and I explore what the future of data product leadership looks like and Caroline shares why there's never been a better time to work in data. Highlights/ Skip to: Intros and Caroline describes how she learned crucial lessons on building data products the hard way (00:36) The fundamental moment that helped Caroline to realize that she needed to find a different way to uncover user needs (03:51) How working with great UX researchers influenced Caroline’s approach to building data products (08:31) Why Caroline feels that exploring the ‘why’ is foundational to designing a data product that gets adopted (10:25) Caroline’s experience building a data model for a client and what she learned from that experience when the client’s business model changed (14:34) How Caroline addresses the challenge of end users not making time for user research (18:00) A high-level overview of the UX research process when Caroline’s team starts working with a new client (22:28) The biggest challenges that Caroline faces as a Director of Data Products, and why data products require the ability to navigate company politics and interests (29:58) Caroline describes the nuances of working with different stakeholder personas (35:15) Why data teams need to embrace a more human-led approach to designing data products and focus less on metrics and the technical aspects (38:10) Caroline’s closing thoughts on what she’d like to share with other data leaders and how you can connect with her (40:48) Quotes from Today’s Episode “When I was first starting out, I thought that you could essentially take notes on what someone was asking for, go off and build it to their exact specs, and be successful. And it turns out that you can build something to exact specs and suffer from poor adoption and just not be solving problems because I did it as a wish fulfillment, laundry-list exercise rather than really thinking through user needs.” — Caroline Zimmerman (01:11) “People want a thing. They’re paying for a thing, right? And so, just really having that reflex to try to gently come back to that why and spending sufficient time exploring it before going into solution build, even when people are under a lot of deadline pressure and are paying you to deliver a thing [is the most important element of designing a data product].” – Caroline Zimmerman (11:53) “A data product evolves because user needs change, business models change, and business priorities change, and we need to evolve with it. It’s not like you got it right once, and then you’re good for life. 
At all.” – Caroline Zimmerman (17:48) “I continue to have lots to learn about stakeholder management and understanding the interplay between what the organization needs to be successful, but also, organizations are made up of people with personal interests, and you need to understand both.” – Caroline Zimmerman (30:18) “Data products are built in a political context. And just being aware of that context is important.” – Caroline Zimmerman (32:33) “I think that data, maybe more than any other function, is transversal. I think data brings up politics because, especially with larger organizations, there are those departmental and team silos. And the whole thing about data is it cuts through those because it touches all the different teams. It touches all the different processes. And so in order to build great data products, you have to be navigating that political context to understand how to get things done transversely in organizations where most stuff gets done vertically.” – Caroline Zimmerman (34:37) “Data leadership positions are data product expertise roles. And I think that often it’s been more technical people that have advanced into those roles. If you follow the LinkedIn-verse in data, it’s very much on every data leader’s mind at the moment: how do you articulate benefits to your CEO and your board and try to do that before it’s too late? So, I’d say that’s really the main thing and that there’s just never been a better time to be a data product person.” – Caroline Zimmerman (37:16) Links Profusion: https://profusion.com/ Caroline Zimmerman LinkedIn: https://www.linkedin.com/in/caroline-zimmerman-4a531640/ Nick Zervoudis LinkedIn: https://www.linkedin.com/in/nzervoudis/ Email: mailto:carolinez@profusion.com…

1 135 - “No Time for That:” Enabling Effective Data Product UX Research in Product-Immature Organizations 52:47
This week, I’m chatting with Steve Portigal, who is the Principal of Portigal Consulting and the Author of Interviewing Users. We discuss the changes that prompted him to release a second edition of his book 10 years after its initial release, and dive into the best practices that any team can implement to start unlocking the value of data product UX research. Steve explains that the key to making time for user research is knowing what business value you’re after, not simply having a list of research questions. We then role-play through some in-depth examples of real-life experiences we’ve seen from both end users and leadership when it comes to implementing a user research strategy. Throughout our conversation, we come back to the idea that even taking imperfect action towards doing user research can lead to increased data product adoption and business value. Highlights/ Skip to: I introduce Steve Portigal, Principal of Portigal Consulting and Author of Interviewing Users (00:38) What changes caused Steve to release a second edition of his book (00:58) Steve and I discuss the importance of understanding how to conduct effective user research (03:44) Steve explains why it’s crucial to understand that the business challenge and the research questions are two different things (08:16) Brian and Steve role-play a common scenario that comes up in user research, and Steve explains an optimal workflow for user research (11:50) The importance of provocation in performing user research (21:02) How Steve would handle a situation where a member of leadership is preventing research being done with end users (24:23) Why a consultative approach is valuable when getting buy-in for conducting user research (35:04) Steve shares some of the major benefits of taking imperfect action towards starting user research (36:59) The impact and value even easy wins in user research can have (42:54) Steve describes the exploratory nature of user research and how to maximize the chance of finding the most valuable insights (46:57) Where you can connect with Steve and get a copy of v2 of his book, Interviewing Users (49:35) Quotes from Today’s Episode “If you don’t know what you’re doing, and you don’t know what you should be investing effort-wise, that’s the inexperience in the approach. If you don’t know how to plan, what should we be trying to solve in this research? What are we trying to learn? What are we going to do with it in the organization? Who should we be talking to? How do we find them? What do we ask them? And then a really good one: how do we make sense of that information so that it has impact that we can take away?” — Steve Portigal (07:15) “What do people get [from user research]? I think the chance for a team to align around something that comes in from the outside.” – Steve Portigal (41:36) On the impact user research can have if teams embrace it: “They had a product that did a thing that no one [understood], and they had to change the product, but also change how they talked about it, change how they built it, and change how they packaged it. And that was a really dramatic turnaround. And it came out of our research, but [mostly] because they really leaned into making use of this stuff.” – Steve Portigal (42:35) "If we knew all the questions to ask, we would just write a survey, right? It’s a lower time commitment from the participant to do that. But we’re trying to get at what we don’t know that we don’t know. For some of us, that’s fun!"
– Steve Portigal (48:36) Links Interviewing Users (use code DATA20 to get 20% off the list price): https://rosenfeldmedia.com/books/interviewing-users-second-edition/ Personal website: https://portigal.com Publisher website: https://rosenfeldmedia.com LinkedIn: https://www.linkedin.com/in/steveportigal/…

In this episode, I’m chatting with former Gartner analyst Sanjeev Mohan who is the Co-Author of Data Products for Dummies. Throughout our conversation, Sanjeev shares his expertise on the evolution of data products, and what he’s seen as a result of implementing practices that prioritize solving for use cases and business value. Sanjeev also shares a new approach to structuring organizations to best implement ownership and accountability of data product outcomes. Sanjeev and I also explore the common challenges of product adoption and who is responsible for user experience. I purposefully had Sanjeev on the show because I think we have pretty different perspectives from which we see the data product space. Highlights/ Skip to: I introduce Sanjeev Mohan, co-author of Data Products for Dummies (00:39) Sanjeev expands more on the concept of writing a “for Dummies” book (00:53) Sanjeev shares his definition of a data product, including both a technical and a business definition (01:59) Why Sanjeev believes organizational changes and accountability are the keys to preventing the acceleration of shipping data products with little to no tangible value (05:45) How Sanjeev recommends getting buy-in for data product ownership from other departments in an organization (11:05) Sanjeev and I explore adoption challenges and the topic of user experience (13:23) Sanjeev explains what role is responsible for user experience and design (19:03) Who should be responsible for defining the metrics that determine business value (28:58) Sanjeev shares some case studies of companies who have adopted this approach to data products and their outcomes (30:29) Where companies are finding data product managers currently (34:19) Sanjeev expands on his perspective regarding the importance of prioritizing business value and use cases (40:52) Where listeners can get Data Products for Dummies, and learn more about Sanjeev’s work (44:33) Quotes from Today’s Episode “You may slap a label of data product on an existing artifact; it does not make it a data product because there’s no sense of accountability. In a data product, because they are following product management best practices, there must be a data product owner or a data product manager. There’s a single person [responsible for the result].” — Sanjeev Mohan (09:31) “I haven’t even mentioned the word data mesh because data mesh and data products, they don’t always have to go hand-in-hand. I can build data products, but I don’t need to go into the—do all of data mesh principles.” – Sanjeev Mohan (26:45) “We need to have the right organization, we need to have a set of processes, and then we need a simplified technology which is standardized across different teams. So, this way, we have the benefit of reusing the same technology. Maybe it is Snowflake for storage, DBT for modeling, and so on. And the idea is that different teams should have the ability to bring their own analytical engine.” – Sanjeev Mohan (27:58) “Generative AI, right now as we are recording, is still in a prototyping phase. Maybe in 2024, it’ll go heavy-duty production. We are not in prototyping phase for data products for a lot of companies. They’ve already been experimenting for a year or two, and now they’re actually using them in production. So, we’ve crossed that tipping point for data products.” – Sanjeev Mohan (33:15) “Low adoption is a problem that’s not just limited to data products. How long have we had data catalogs, but they have low adoption.
So, it’s a common problem.” – Sanjeev Mohan (39:10) “That emphasis on technology first is a wrong approach. I tell people that I’m sorry to burst your bubble, but there are no technology projects, there are only business projects. Technology is an enabler. You don’t do technology for the sake of technology; you have to serve a business cause, so let’s start with that and keep that front and center.” – Sanjeev Mohan (43:03) Links Data Products for Dummies: https://www.dataops.live/dataproductsfordummies “What Exactly is A Data Product” article: https://medium.com/data-mesh-learning/what-exactly-is-a-data-product-7f6935a17912 It Depends: https://www.youtube.com/@SanjeevMohan Chief Data Analytics and Product Officer of Equifax: https://www.youtube.com/watch?v=kFY7WGc-jFM SanjMo Consulting: https://www.sanjmo.com/ dataops.live: https://dataops.live dataops.live/dataproductsfordummies: https://dataops.live/dataproductsfordummies LinkedIn: https://www.linkedin.com/in/sanjmo/ Medium articles: https://sanjmo.medium.com…

Today I am sharing some highlights for 2023 from the podcast, and also letting you all know I’ll be taking a break from the podcast for the rest of December, but I’ll be back with a new episode on January 9th, 2024. I’ve also got two links to share with you—details inside! Transcript Greetings everyone - I’m taking a little break from Experiencing Data over December of 2023, but I’ll be back in January with more interviews and insights on leveraging UX design and product management to create indispensable data products, machine learning apps, and decision support tools. Experiencing Data turned five years old this past November, with over 130 episodes to date! I still can’t believe it’s been going that long and how far we’ve come. Some highlights for me in 2023 included launching the Data Product Leadership Community, finding out that the show is now in the top 2% of all podcasts worldwide according to ListenNotes, and most of all, hearing from you that the podcast, and my writing, and the guests that I have brought on are having an impact on your work, your careers, and hopefully the lives of your customers, users, and stakeholders as well! So, for now, I’ve got just two links for you: If you want to either support the show yourself with a really fast review on Apple Podcasts, record a quick audio question for me to answer on the show, or join my free Insights mailing list where I share my bi-weekly ideas and thoughts and 1-page episode summaries of all the show drops that I put out here on Experiencing Data… just head over to designingforanalytics.com/podcast and you’ll get links to all those things there. And secondly, if you need help increasing customer adoption, delight, the business value, or the usability of your analytics and machine learning applications in 2024, I invite you to set up a free discovery call with me 1 on 1. You bring the questions, I’ll bring my ears, and by the end of the call, I’ll give you my best advice on how to move forward with your situation – whether it’s working with me or not. To schedule one of those free discovery calls, visit designingforanalytics.com/go And finally, there will be some news coming out next year with the show, as well as my business, so I hope you’ll hop on the mailing list and stay tuned, that’s probably the best place to do that. And if you celebrate holidays in December and January, I hope they’re safe, enjoyable, and rejuvenating. Until 2024, stay tuned right here - and in the words of the great Arnold Schwarzenegger, I’ll be back.…

In this conversation with Klara Lindner, Service Designer at diconium data, we explore how behavioral science and UX can be used to increase adoption of data products. Klara describes how she went from having a highly technical career as an electrical engineer and being the founder of a solar startup to her current role in service design for data products. Klara shares powerful insights into the value of user research and human-centered design, including one which stopped me in my tracks during this episode: how the people making data products and evangelizing data-driven decision making aren’t actually following their own advice when it comes to designing their data products. Klara and I also explore some easy user research techniques that data professionals can use, and discuss who should ultimately be responsible for user adoption of data products. Lastly, Klara gives us a peek at her upcoming December 19th, 2023 webinar with the Data Product Leadership Community (DPLC) where she will be going deeper on two frameworks from psychology and behavioral science that teams can use to increase adoption of data products. Klara is also a founding member of the DPLC and was one of the first—if not the very first—design/UX professionals to join. Highlights/ Skip to: I introduce Klara, and she explains the role of Service Design to our audience (00:49) Klara explains how she realized she’s been doing design work longer than she thought by reflecting on the company she founded, Mobisol (02:09) How Klara balances the desire to design great dashboards with the mission of helping end users (06:15) Klara describes the psychology behind user research and her upcoming talk on December 19th at The Data Product Leadership Community (08:32) What data product teams can do as a starting point to begin implementing user research principles (10:52) Klara gives a powerful example of the type of insight and value even basic user research can provide (12:49) Klara and I discuss a key revelation when it comes to designing data products for users, which is the irony that even developers use intuition as well as quantitative data when building (16:43) What adjustments Klara had to make in her thinking when moving from a highly technical background to doing human-centered design (21:08) Klara describes the two frameworks for driving adoption that she’ll be sharing in her talk at the DPLC on December 19th (24:23) An example of how understanding and addressing adoption blockers is important for product and design teams (30:44) How Klara has seen her teams adopt a new way of thinking about product & service design (32:55) Klara gives her take on the Jobs to be Done framework, which she will also be sharing in her talk at the DPLC on December 19th (35:26) Klara’s advice to teams that are looking to build products around generative AI (39:28) Where listeners can connect with Klara to learn more (41:37) Links diconium data: http://www.diconium.com/ LinkedIn: https://www.linkedin.com/in/klaralindner/ Personal Website: https://magic-investigations.com/ Hear Klara speak on Dec 19, 2023 at 10am ET here: https://designingforanalytics.com/community/…

131 - 15 Ways to Increase User Adoption of Data Products (Without Handcuffs, Threats and Mandates) with Brian T. O’Neill 36:57
This week I’m covering Part 1 of the 15 Ways to Increase User Adoption of Data Products, which is based on an article I wrote for subscribers of my mailing list. Throughout this episode, I describe why focusing on empathy, outcomes, and user experience leads to not only better data products, but also better business outcomes. The focus of this episode is to show you that it’s completely possible to take a human-centered approach to data product development without mandating behavioral changes, and to show how this approach benefits not just end users, but also the businesses and employees creating these data products. Highlights/ Skip to: Design behavior change into the data product. (05:34) Establish a weekly habit of exposing technical and non-technical members of the data team directly to end users of solutions - no gatekeepers allowed. (08:12) Change funding models to fund problems, not specific solutions, so that your data product teams are invested in solving real problems. (13:30) Hold teams accountable for writing down and agreeing to the intended benefits and outcomes for both users and business stakeholders. Reject projects that have vague outcomes defined. (16:49) Approach the creation of data products as “user experiences” instead of a “thing” that is being built that has different quality attributes. (20:16) If the team is tasked with being “innovative,” leaders need to understand the innoficiency problem, shortened iterations, and the importance of generating a volume of ideas (bad and good) before committing to a final direction. (23:08) Co-design solutions with [not for!] end users in low, throw-away fidelity, refining success criteria for usability and utility as the solution evolves. Embrace the idea that research/design/build/test is not a linear process. (28:13) Test (validate) solutions with users early, before committing to releasing them, but with a pre-commitment to react to the insights you get back from the test. (31:50) Links: 15 Ways to Increase Adoption of Data Products : https://designingforanalytics.com/resources/15-ways-to-increase-adoption-of-data-products-using-techniques-from-ux-design-product-management-and-beyond/ Company website: https://designingforanalytics.com Episode 54: https://designingforanalytics.com/resources/episodes/054-jared-spool-on-designing-innovative-ml-ai-and-analytics-user-experiences/ Episode 106: https://designingforanalytics.com/resources/episodes/106-ideaflow-applying-the-practice-of-design-and-innovation-to-internal-data-products-w-jeremy-utley/ Ideaflow : https://www.amazon.com/Ideaflow-Only-Business-Metric-Matters/dp/0593420586/ Podcast website: https://designingforanalytics.com/podcast…

130 - Nick Zervoudis on Data Product Management, UX Design Training and Overcoming Imposter Syndrome 48:56
Today I’m joined by Nick Zervoudis, Data Product Manager at CKDelta. As we dive into his career and background, Nick shares insights into his approach when it comes to developing both internal and external data products. Nick explains why he feels that a software engineering approach is the best way to develop a product that could have multiple applications, as well as the unique way his team is structured to best handle the needs of both internal and external customers. He also talks about the UX design course he took, how that affected his data product work and research with users, and his thoughts on dashboard design. We discuss common themes he’s observed when data product teams get it wrong, and how he manages feelings of imposter syndrome in his career as a DPM. Highlights/ Skip to: I introduce Nick, who is a Data Product Manager at CKDelta (00:35) Nick’s mindset around data products and how his early career in consulting shaped his approach (01:30) How Nick defines a data product and why he focuses more on the process rather than the end product (03:59) The types of data products that Nick has helped design and his work on both internal and external projects at CKDelta (07:57) The similarities and differences of working with internal versus external stakeholders (12:37) Nick dives into the details of the data products he has built and how they feed into complex use cases (14:21) The role that Nick plays in the Delta Power SaaS application and how the CKDelta team is structured around that product (17:14) Where Nick sees data products going wrong and how he’s found value in filling those gaps (23:30) Nick’s view on how a digital-first mindset affects the scalability of data products (26:15) Why Nick is often heavily involved in the design element of data product development and the course he took that helped shape his design work (28:55) The imposter syndrome that Nick has experienced when implementing this new strategy to data product design (36:51) Why Nick feels that figuring things out yourself is an inherent part of the DPM role (44:53) Nick shares the origins and information on the London Data Product Management meetup (46:08) Quotes from Today’s Episode “What I’m always trying to do is see, how can we best balance the customer’s need to get exactly the data point or insight that they’re after to the business need. ... There’s that constant tug of war between customization and standardization that I have the joy of adjudicating. I think it’s quite fun.” — Nick Zervoudis (16:40) “I’ve had times where I was hired, told, 'You’re going to be the product manager for this data product that we have,' as if it’s already, to some extent built and maybe the challenge is scaling it or bringing it to more customers or improving it, and then within a couple of weeks of starting to peek under the hood, realizing that this thing that is being branded a product is actually a bunch of projects hiding under a trench coat.” — Nick Zervoudis (24:04) “If I just speak to five users because they’re the users, they’ll give me the insight I need. […] Even when you have a massive product with a huge user base, people face the same issues.” — Nick Zervoudis (33:49) “For me, it’s more about making sure that you’re bringing that more software engineering way of building things, but also, before you do that, knowing that your users' needs are going to [be varied]. 
So, it’s a combination of both, are we building the right thing—in other words, a product that’s flexible enough to meet the different needs of different users—but also, are we building it in the right way?” – Nick Zervoudis (27:51) “It’s not to say I’m the only person thinking about [UX design], but very often, I’m the one driving it.” – Nick Zervoudis (30:55) “You’re never going to be as good at the thing your colleague does because their job almost certainly is to be a specialist: they’re an architect, they’re a designer, they’re a developer, they’re a salesperson, whereas your job [as a DPM] is to just understand it enough that you can then pass information across other people.” – Nick Zervoudis (41:12) “Every time I feel like an imposter, good. I need to embrace that, because I need to be working with people that understand something better than me. If I’m not, then maybe something’s gone wrong there. That’s how I’ve actually embraced impostor syndrome.” – Nick Zervoudis (41:35) Links CKDelta: https://www.ckdelta.ie LinkedIn: https://www.linkedin.com/in/nzervoudis/…

129 - Why We Stopped, Deleted 18 Months of ML Work, and Shifted to a Data Product Mindset at Coolblue 35:21
Today I’m joined by Marnix van de Stolpe, Product Owner at Coolblue in the area of data science. Throughout our conversation, Marnix shares the story of how he joined a data science team that was developing a solution so focused on delivering a data-science metric that it was not on track to solve a clear customer problem. We discuss how Marnix came to the difficult decision to throw out 18 months of data science work, what it was like to switch to a human-centered, product approach, and the challenges that came with it. Marnix shares the impact this decision had on his team and the stakeholders involved, as well as the impact on his personal career and the advice he would give to others who find themselves in the same position. Marnix is also a Founding Member of the Data Product Leadership Community and will go much deeper into the details and his experience live on Zoom on November 16 @ 2pm ET for members. Highlights/ Skip to: I introduce Marnix, Product Owner at Coolblue and one of the original members of the Data Product Leadership Community (00:35) Marnix describes what Coolblue does and his role there (01:20) Why and how Marnix decided to throw away 18 months of machine learning work (02:51) How Marnix determined that the KPI (metric) being created wasn’t enough to deliver a valuable product (07:56) Marnix describes the conversation with his data science team on mapping the solution back to the desired outcome (11:57) What the culture is like at Coolblue now when developing data products (17:17) Marnix’s advice for data product managers who are coming into an environment where existing work is not tied to a desired outcome (18:43) Marnix and I discuss why data literacy is not the solution to making more impactful data products (21:00) The impact that Marnix’s human-centered approach to data product development has had on the stakeholders at Coolblue (24:54) Marnix shares the ultimate outcome of the product his team was developing to measure product returns (31:05) How you can get in touch with Marnix (33:45) Links Coolblue: https://www.coolblue.nl LinkedIn: https://www.linkedin.com/in/marnixvdstolpe/…

128 - Data Products for Dummies and The Importance of Data Product Management with Vishal Singh of Starburst 53:01
Today I’m joined by Vishal Singh, Head of Data Products at Starburst and co-author of the newly published e-book, Data Products for Dummies . Throughout our conversation, Vishal explains how the variations in definitions for a data product actually led to the creation of the e-book, and we discuss the differences between our two definitions. Vishal gives a detailed description of how he believes Data Product Managers should be conducting their discovery and gathering feedback from end users, and how his team evaluates whether their data products are truly successful and user-friendly. Highlights/ Skip to: I introduce Vishal, the Head of Data Products at Starburst and contributor of the e-book Data Products for Dummies (00:37) Vishal describes how his customers at Starburst all had a common problem, but differing definitions of a data product, which led to the creation of his e-book (01:15) Vishal shares his one-sentence definition of a data product (02:50) How Vishal’s definition of a data product differs from mine, and we both expand on the possibilities between the two (05:33) The tactics Vishal uses to gather useful feedback to ensure the data products he develops are valuable for end users (07:48) Why Vishal finds it difficult to get one-on-one feedback from users during the iteration phase of data product development (11:07) The danger of sunk cost bias in the iteration phase of data product development (13:10) Vishal describes how he views the role of a DPM when it comes to doing effective initial discovery (15:27) How Vishal structures his teams and their interactions with each other and their end users (21:34) Vishal’s thoughts on how design affects both data scientists and end users (24:16) How DPMs at Starburst evaluate if the data product design is user-friendly (28:45) Vishal’s views on where Designers are valuable in the data product development process (35:00) Vishal and I discuss the importance of ensuring your products truly solve your users’ problems (44:44) Where you can learn more about Vishal’s upcoming events and the e-book, Data Products for Dummies (49:48) Links Starburst: https://www.starburst.io/ Data Products for Dummies : https://www.starburst.io/info/data-products-for-dummies/ “How to Measure the Impact of Data Products with Doug Hubbard”: https://designingforanalytics.com/resources/episodes/080-how-to-measure-the-impact-of-data-productsand-anything-else-with-forecasting-and-measurement-expert-doug-hubbard/ Trino Summit: https://www.starburst.io/info/trinosummit2023/ Galaxy Platform: https://www.starburst.io/platform/starburst-galaxy/ Datanova Summit: https://www.starburst.io/datanova/ LinkedIn: https://www.linkedin.com/in/singhsvishal/ Twitter: https://twitter.com/vishal_singh…

127 - On the Road to Adopting a “Producty” Approach to Data Products at the UK’s Care Quality Commission with Jonathan Cairns-Terry 36:55
Today I’m joined by Jonathan Cairns-Terry, who is the Head of Insight Products at the Care Quality Commission. The Care Quality Commission is the regulator for health and social care in England, and Jonathan recently joined their data team and is working to transform their approach to be more product-led and user-centric. Throughout our conversation, Jonathan shares valuable insights into what the first year of that type of shift looks like, why it’s important to focus on outcomes, and how he measures progress. Jonathan and I explore the signals that told him it was time for his team to invest in a designer, the benefits he’s gotten from UX research on his team, and the recent successes that Jonathan’s team is seeing as a result of implementing this approach. Jonathan is also a Founding Member of the Data Product Leadership Community and we discuss his upcoming webinar for the group on Oct 12, 2023. Highlights/ Skip to: I introduce Jonathan, who is the Head of Insight Products at the Care Quality Commission in the UK (00:37) How Jonathan went from being a “maths person” to being a “product person” (01:02) Who uses the data products that Jonathan makes at the Care Quality Commission (02:44) Jonathan describes the recent transition towards a product focus (03:45) How Jonathan expresses and measures the benefit and purpose of a product-led orientation, and how the team has embraced the transformation (07:08) The nuance between evaluating outcomes and measuring outputs in a product-led approach, and how UX research has impacted Jonathan’s team (12:53) What signals Jonathan received that told him it’s time to hire a designer (17:05) How Jonathan’s team approaches shadowing users (21:20) Some of the recent successes of the product-led approach Jonathan is implementing on his team (25:28) What Jonathan would change if he had to start the process of moving to outcomes over outputs with his team all over again (30:04) Get the full scoop on the topics discussed in this episode on October 12, 2023 when Jonathan presents his deep-dive webinar to the Data Product Leadership Community. Available to members only. Apply today. Links Care Quality Commission: https://www.cqc.org.uk/ LinkedIn: https://www.linkedin.com/in/jcairnsterry…

Today I’m joined by Anthony Deighton, General Manager of Data Products at Tamr. Throughout our conversation, Anthony unpacks his definition of a data product and we discuss whether or not he feels that Tamr itself is actually a data product. Anthony shares his views on why it’s so critical to focus on solving for customer needs and not simply the newest and shiniest technology. We also discuss the challenges that come with building a product that’s designed to facilitate the creation of better internal data products, as well as where we are in this new wave of data product management, and the evolution of the role. Highlights/ Skip to: I introduce Anthony, General Manager of Data Products at Tamr, and the topics we’ll be discussing today (00:37) Anthony shares his observations on how BI analytics are an inch deep and a mile wide due to the data that’s being input (02:31) Tamr’s focus on data products and how that reflects in Anthony’s recent job change from Chief Product Officer to General Manager of Data Products (04:35) Anthony’s definition of a data product (07:42) Anthony and I explore whether he feels that decision support is necessary for a data product (13:48) Whether or not Anthony feels that Tamr qualifies as a data product (17:08) Anthony speaks to the importance of focusing on outcomes and benefits as opposed to endlessly knitting together features and products (19:42) The challenges Anthony sees with metrics like Propensity to Churn (21:56) How Anthony thinks about design in a product like Tamr (30:43) Anthony shares how data science at Tamr is a tool in his toolkit and not viewed as a “fourth” leg of the product triad/stool (36:01) Anthony’s views on where we are in the evolution of the DPM role (41:25) What Anthony would do differently if he could start over at Tamr knowing what he knows now (43:43) Links Tamr: https://www.tamr.com/ Innovating : https://www.amazon.com/Innovating-short-guide-making-things/dp/B0C8R79PVB The Mom Test : https://www.amazon.com/The-Mom-Test-Rob-Fitzpatrick-audiobook/dp/B07RJZKZ7F LinkedIn: https://www.linkedin.com/in/anthonydeighton/…

125 - Human-Centered XAI: Moving from Algorithms to Explainable ML UX with Microsoft Researcher Vera Liao 44:42
Today I’m joined by Vera Liao, Principal Researcher at Microsoft. Vera is a part of the FATE (Fairness, Accountability, Transparency, and Ethics of AI) group, and her research centers around the ethics, explainability, and interpretability of AI products. She is particularly focused on how designers design for explainability. Throughout our conversation, we focus on the importance of taking a human-centered approach to rendering model explainability within a UI, and why incorporating users during the design process informs the data science work and leads to better outcomes. Vera also shares some research on why example-based explanations tend to outperform [model] feature-based explanations, and why traditional XAI methods like LIME and SHAP aren’t the solution to every explainability problem a user may have. Highlights/ Skip to: I introduce Vera, who is Principal Researcher at Microsoft and whose research mainly focuses on the ethics, explainability, and interpretability of AI (00:35) Vera expands on her view that explainability should be at the core of ML applications (02:36) An example of the non-human approach to explainability that Vera is advocating against (05:35) Vera shares where practitioners can start the process of responsible AI (09:32) Why Vera advocates for doing qualitative research in tandem with model work in order to improve outcomes (13:51) I summarize the slides I saw in Vera’s deck on Human-Centered XAI and Vera expands on my understanding (16:06) Vera’s success criteria for explainability (19:45) The various applications of AI explainability that Vera has seen evolve over the years (21:52) Why Vera is a proponent of example-based explanations over model feature ones (26:15) Strategies Vera recommends for getting feedback from users to determine what the right explainability experience might be (32:07) The research trends Vera would most like to see technical practitioners apply to their work (36:47) Summary of the four-step process Vera outlines for Question-Driven XAI design (39:14) Links “Human-Centered XAI: From Algorithms to User Experiences” Presentation “Human-Centered XAI: From Algorithms to User Experiences” Slide Deck “Human-Centered AI Transparency in the Age of Large Language Models” MSR Microsoft Research Vera's Personal Website…

In this episode, I give an overview of my PiCAA Framework, which I shared at my keynote talk for Netguru’s annual conference, Burning Minds. This framework helps with brainstorming machine learning use cases or reverse engineering them, starting with the tactic. Throughout the episode, I give context to the preliminary types of work and preparation you and your team would want to do before implementing PiCAA, as well as the process and potential pitfalls you may run into, and the end results that make it a beneficial tool to experiment with. Highlights/ Skip to: Where/ how you might implement the PiCAA Framework (1:22) Focusing on the human part of your ideas (5:04) Keynote excerpt outlining the PiCAA Framework (7:28) Closing a PiCAA workshop by exploring what could go wrong (18:03) Links Experiencing Data Episode 106 with Jeremy Utley The Data Product Leadership Community Ask me a question (below the recent episodes)…

123 - Learnings From the CDOIQ Symposium and How Data Product Definitions are Evolving with Brian T. O’Neill 27:17
Today I’m wrapping up my observations from the CDOIQ Symposium and sharing what’s new in the world of data. I was only able to attend a handful of sessions, but they were primarily ones tied to the topic of data products, which, of course, brings us to “What’s a data product?” During this episode, I cover some of what I’ve been hearing about the definition of this word, and I also share my revised v2 definition. I also walk through some of the questions that CDOs and fellow attendees were asking at the sessions I went to and a few reactions to those questions. Finally, I announce an exciting development on the launch of the Data Product Leadership Community. Highlights/ Skip to: Brian introduces the topic for this episode, including his wrap-up of the CDOIQ Symposium (00:29) The general impressions Brian heard at the Symposium, including a focus on people & culture and an emphasis on data products (01:51) The three main areas the definition of a data product covers according to Brian’s observations (04:43) Brian describes how companies are looking for successful data product development models to follow and explores where new Data Product Managers are coming from (07:17) A methodology that Brian feels leads to a successful data product team (10:14) How Brian feels digital-native folks see the world of data products differently (11:29) The topic of Data Mesh and Human-Centered Design and how it came up in two presentations at the CDOIQ Symposium (13:24) The rarity of design and UX being talked about at data conferences, and why Brian feels that is the case (15:24) Brian’s current definition of a data product and how it’s evolved from his V1 definition (18:43) Brian lists the main questions that were being asked at CDOIQ sessions he attended around data products (22:19) Where to find answers to many of the questions being asked about data products and an update on the Data Product Leadership Community that he will launch in August 2023 (24:28) Quotes from Today’s Episode “I think generally what’s happening is the technology continues to evolve, I think it generally continues to get easier, and all of the people and cultural parts and the change management and all of that, that problem just persists no matter what. And so, I guess the question is, what are we going to do about it?” — Brian T. O’Neill (03:11) “The feeling I got from the questions [at the CDOIQ Symposium], … and particularly the ones that were talking about the role of data product management and the value of these things was, it’s like they’re looking for a recipe to follow.” — Brian T. O’Neill (07:17) “My guess is people are just kind of reading up about it, self-training a bit, and trying to learn how to do product on their own. I think that’s how you learn how to do stuff is largely through trial and error. You can read books, you can do all that stuff, but beginning to do it is part of it.” — Brian T. O’Neill (08:57) “I think the most important thing is that data is a raw ingredient here; it’s a foundation piece for the solution that we’re going to make that’s so good, someone might pay to use it or trade something of value to use it. And as long as that’s intact, I think you’re kind of checking the box as to whether it’s a data product.” — Brian T. O’Neill (12:13) “I also would say on the data mesh topic, the feeling I got from people who had been to this conference before was that was quite a hyped thing the last couple years. 
Now, it was not talked about as much, but I think now they’re actually seeing some examples of this working.” — Brian T. O’Neill (16:25) “My current v2 definition right now is, ‘A data product is a managed, end-to-end software solution that organizes, refines, or transforms data to solve a problem that’s so important customers would pay for it or exchange something of value to use it.’” — Brian T. O’Neill (19:47) “We know [the product is] of value because someone was willing to pay for it or exchange their time or switch from their old way of doing things to the new way because it has that inherent benefit baked in. That’s really the most important part here that I think any data product manager should fully be aligned with.” — Brian T. O’Neill (21:35) Links Episode 67 Episode 110 The Definition of Data Product The Data Product Leadership Community Ask me a question (below the recent episodes)…

152 - 10 Reasons Not to Get Professional UX Design Help for Your Enterprise AI or SAAS Analytics Product 53:00
In today’s episode, I’m going to perhaps work myself out of some consulting engagements, but hey, that’s ok! True consulting is about service—not PPT decks with strategies and tiers of people attached to rate cards. Specifically today, I decided to reframe a topic and approach it from the opposite/negative side. So, instead of telling you when the right time is to get UX design help for your enterprise SAAS analytics or AI product(s), today I’m going to tell you when you should NOT get help! Reframing this was really fun and made me think a lot as I recorded the episode. Some of these reasons aren’t necessarily representative of what I believe, but rather what I’ve heard from clients and prospects over 25 years—what they believe. For each of these, I’m also giving a counterargument, so hopefully, you get both sides of the coin. Finally, analytical thinkers, especially data product managers it seems, often want to quantify all forms of value they produce in hard monetary units—and so in this episode, I’m also going to talk about other forms of value that products can create that are worth paying for—and how mushy things like “feelings” might just come into play ;-) Ready? Highlights/ Skip to: (1:52) Going for short, easy wins (4:29) When you think you have good design sense/taste (7:09) The impending changes coming with GenAI (11:27) Concerns about "dumbing down" or oversimplifying technical analytics solutions that need to be powerful and flexible (15:36) Agile and process FTW? (18:59) UX design for and with platform products (21:14) The risk of involving designers who don’t understand data, analytics, AI, or your complex domain considerations (30:09) Designing after the ML models have been trained—and it’s too late to go back (34:59) Not tapping professional design help when your user base is small, and you have routine access and exposure to them (40:01) Explaining the value of UX design investments to your stakeholders when you don’t 100% control the budget or decisions Quotes from Today’s Episode “It is true that most impactful design often creates more product and engineering work because humans are messy. While there sometimes are these magic, small GUI-type changes that have big impact downstream, the big picture value of UX can be lost if you’re simply assigning low-level GUI improvement tasks and hoping to see a big product win. It always comes back to the game you’re playing inside your team: are you working to produce UX and business outcomes or shipping outputs on time?” (3:18) “If you’re building something that needs to generate revenue, there has to be a sense of trust and belief in the solution. We’ve all seen the challenges of this with LLMs. [when] you’re unable to get it to respond in a way that makes you feel confident that it understood the query to begin with. And then you start to have all these questions about, ‘Is the answer not in there,’ or ‘Am I not prompting it correctly?’ If you think that most of this is just a technical data science problem, then don’t bother to invest in UX design work… ” (9:52) “Design is about, at a minimum, making it useful and usable, if not delightful. In order to do that, we need to understand the people that are going to use it. What would an improvement to this person’s life look like? Simplifying and dumbing things down is not always the answer. There are tools and solutions that need to be complex, flexible, and/or provide a lot of power – especially in an enterprise context. 
Working with a designer who solely insists on simplifying everything at all costs regardless of your stated business outcome goals is a red flag—and a reason not to invest in UX design—at least with them!” (12:28) “I think what an analytics product manager [or] an AI product manager needs to accept is there are other ways to measure the value of UX design’s contribution to your product and to your organization. Let’s say that you have a mission-critical internal data product, it’s used by the most senior executives in the organization, and you and your team made their day, or their month, or their quarter. You saved their job. You made them feel like a hero. What is the value of giving them that experience and making them feel like those things… What is that worth when a key customer or colleague feels like you have their back with this solution you created? Ideas that spread, win, and if these people are spreading your idea, your product, or your solution… there’s a lot of value in that.” (43:33) “Let’s think about value in non-financial terms. Terms like feelings. We buy insurance all the time. We’re spending money on something that most likely will have zero economic value this year because we’re actually trying not to have to file claims. Yet this industry does very well because the feeling of security matters. That feeling is worth something to a lot of people. The value of feeling secure is something greater than whatever the cost of the insurance plan. If your solution can build feelings of confidence and security, what is that worth? Does “hard to measure precisely” necessarily mean “low value”?” (47:26)…

151 - Monetizing SAAS Analytics and The Challenges of Designing a Successful Embedded BI Product (Promoted Episode) 49:57
Due to a technical glitch that ended up unpublishing this episode right after it was originally released, Episode 151 is a replay of my conversation with Zalak Trivedi from this past March. Please enjoy our chat if you missed it the first time around! Thanks, Brian Links Original Episode: https://designingforanalytics.com/resources/episodes/139-monetizing-saas-analytics-and-the-challenges-of-designing-a-successful-embedded-bi-product-promoted-episode/ Sigma Computing: https://sigmacomputing.com Email: zalak@sigmacomputing.com LinkedIn: https://www.linkedin.com/in/trivedizalak/ Sigma Computing Embedded: https://sigmacomputing.com/embedded About Promoted Episodes on Experiencing Data: https://designingforanalytics.com/promoted…

150 - How Specialized LLMs Can Help Enterprises Deliver Better GenAI User Experiences with Mark Ramsey 52:22
“Last week was a great year in GenAI,” jokes Mark Ramsey—and it’s a great philosophy to have as LLM tools especially continue to evolve at such a rapid rate. This week, you’ll get to hear my fun and insightful chat with Mark from Ramsey International about the world of large language models (LLMs) and how we make useful UXs out of them in the enterprise. Mark shared some fascinating insights about using a company’s website information (data) as a place to pilot an LLM project, avoiding privacy landmines, and how re-ranking of models leads to better LLM response accuracy. We also talked about the importance of real human testing to ensure LLM chatbots and AI tools truly delight users. From amusing anecdotes about the spinning beach ball on macOS to envisioning a future where AI-driven chat interfaces outshine traditional BI tools, this episode is packed with forward-looking ideas and a touch of humor. Highlights/ Skip to: (0:50) Why is the world of GenAI evolving so fast? (4:20) How Mark thinks about UX in an LLM application (8:11) How Mark defines “Specialized GenAI?” (12:42) Mark’s consulting work with GenAI / LLMs these days (17:29) How GenAI can help the healthcare industry (30:23) Uncovering users’ true feelings about LLM applications (35:02) Are UIs moving backwards as models progress forward? (40:53) How will GenAI impact data and analytics teams? (44:51) Will LLMs be able to consistently leverage RAG and produce proper SQL? (51:04) Where you can find more from Mark and Ramsey International Quotes from Today’s Episode “With [GenAI], we have a solution that we’ve built to try to help organizations, and build workflows. We have a workflow that we can run and ask the same question [to a variety of GenAI models] and see how similar the answers are. Depending on the complexity of the question, you can see a lot of variability between the models… [and] we can also run the same question against the different versions of the model and see how it’s improved. Folks want a human-like experience interacting with these models… [and] if the model can start responding in just a few seconds, that gives you much more of a conversational type of experience.” - Mark Ramsey (2:38) “[People] don’t understand when you interact [with GenAI tools] and it brings tokens back in that streaming fashion, you’re actually seeing inside the brain of the model. Every token it produces is then displayed on the screen, and it gives you that typewriter experience back in the day. If someone has to wait, and all you’re seeing is a logo spinning, from a UX experience standpoint… people feel like the model is much faster if it just starts to produce those results in that streaming fashion. I think in a design, it’s extremely important to take advantage of that [...] as opposed to waiting to the end and delivering the results. Some models support that, and other models don’t.”- Mark Ramsey (4:35) "All of the data that’s on the website is public information. We’ve done work with several organizations on quickly taking the data that’s on their website, packaging it up into a vector database, and making that be the source for questions that their customers can ask. [Organizations] publish a lot of information on their websites, but people really struggle to get to it. We’ve seen a lot of interest in vectorizing website data, making it available, and having a chat interface for the customer. 
The customer can ask questions, and it will take them directly to the answer, and then they can use the website as the source information.” - Mark Ramsey (14:04) “I’m not skeptical at all. I’ve changed much of my [AI chatbot searches] to Perplexity, and I think it’s doing a pretty fantastic job overall in terms of quality. It’s returning an answer with citations, so you have a sense of where it’s sourcing the information from. I think it’s important from a user experience perspective. This is a replacement for broken search, as I really don’t want to read all the web pages and PDFs you have that *might* be about my chiropractic care query to answer my actual [healthcare] question.” - Brian O’Neill (19:22) “We’ve all had great experience with customer service, and we’ve all had situations where the customer service was quite poor, and we’re going to have that same thing as we begin to [release more] chatbots. We need to make sure we try to alleviate having those bad experiences, and have an exit. If someone is running into a situation where they’d rather talk to a live person, have that ability to route them to someone else. That’s why the robustness of the model is extremely important in the implementation… and right now, organizations like OpenAI and Anthropic are significantly better at that [human-like] experience.” - Mark Ramsey (23:46) "There’s two aspects of these models: the training aspect and then using the model to answer questions. I recommend to organizations to always augment their content and don’t just use the training data. You’ll still get that human-like experience that’s built into the model, but you’ll eliminate the hallucinations. If you have a model that has been set up correctly, you shouldn’t have to ask questions in a funky way to get answers.” - Mark Ramsey (39:11) “People need to understand GenAI is not a predictive algorithm. It is not able to run predictions, it struggles with some math, so that is not the focus for these models. What’s interesting is that you can use the model as a step to get you [the answers]. A lot of the models now support functions… when you ask a question about something that is in a database, it actually uses its knowledge about the schema of the database. It can build the query, run the query to get the data back, and then once it has the data, it can reformat the data into something that is a good response back." - Mark Ramsey (42:02) Links Mark on LinkedIn Ramsey International Email: mark [at] ramsey.international Ramsey International's YouTube Channel…

149 - What the Data Says About Why So Many Data Science and AI Initiatives Are Still Failing to Produce Value with Evan Shellshear 50:18
Guess what? Data science and AI initiatives are still failing here in 2024—despite widespread awareness. Is that news? Candidly, you’ll hear me share with Evan Shellshear—author of the new book Why Data Science Projects Fail: The Harsh Realities of Implementing AI and Analytics—about how much I actually didn’t want to talk about this story originally on my podcast—because it’s not news! However, what is news is what the data says behind Evan’s findings—and guess what? It’s not the technology. In our chat, Evan shares why he wanted to leverage a human approach to understand the root cause of multiple organizations’ failures and how this approach highlighted the disconnect between data scientists and decision-makers. He explains the human factors at play, such as poor problem surfacing and organizational culture challenges—and how these human-centered design skills are rarely taught or offered to data scientists. The conversation delves into why these failures are more prevalent in data science compared to other fields, attributing it to the complexity and scale of data-related problems. We also discuss how analytically mature companies can mitigate these issues through strategic approaches and stakeholder buy-in. Join us as we delve into these critical insights for improving data science project outcomes. Highlights/ Skip to: (4:45) Why are data science projects still failing? (9:17) Why is the disconnect between data scientists and decision-makers so pronounced relative to, say, engineering? (13:08) Why are data scientists not getting enough training for real-world problems? (16:18) What the data says about failure rates for mature data teams vs. immature data teams (19:39) How to change people’s opinions so they value data more (25:16) What happens at the stage where the beneficiaries of data don’t actually see the benefits? (31:09) What are the skills needed to prevent a repeating pattern of creating data products that customers ignore? (37:10) Where do more mature organizations find non-technical help to complement their data science and AI teams? (41:44) Are executives and directors aware of the skills needed to level up their data science and AI teams? Quotes from Today’s Episode “People know this stuff. It’s not news anymore. And so, the reason why we needed this was really to dig in. And exactly like you did, like, keeping that list of articles is brilliant, and knowing what’s causing the failures and what’s leading to these issues still arising is really important. But at some point, we need to approach this in a scientific fashion, and we need to unpack this, and we need to really delve into the details beyond just the headlines and the articles themselves. And start collating and analyzing this to properly figure out what’s going wrong, and what do we need to do about it to fix it once and for all so you can stop your endless collection, and the AI Incident Database that now has over 3500 entries. It can hang its hat and say, ‘I’ve done my job. It’s time to move on. We’re not failing as we used to.’” - Evan Shellshear (3:01) "What we did is we took a number of different studies, and we split companies into what we saw as being analytically mature—and this is a common, well-known thing; there are many maturity frameworks exist across data, across AI, across all different areas—and what we call analytically immature, so those companies that probably aren’t there yet. 
And what we wanted to draw a distinction is okay, we say 80% of projects fail, or whatever the exact number is, but for who? And for what stage and for what capability? And so, what we then went and did is we were able to take our data and look at which failures are common for analytically immature organizations, and which failures are common for analytically mature organizations, and then we’re able to understand, okay, in the market, how many organizations do we think are analytically mature versus analytically immature, and then we were able to take that 80% failure rate and establish it. For analytically mature companies, the failure rate is probably more like 40%. For analytically immature companies, it’s over 90%, right? And so, you’re exactly right: organizations can do something about it, and they can build capabilities in to mitigate this. So definitely, it can be reduced. Definitely, it can be brought down. You might say, 40% is still too high, but it proves that by bringing in these procedures, you’re completely correct, that it can be reduced.” - Evan Shellshear (14:28) "What happens with the data science person, however, is typically they’re seen as a cost center—typically, not always; nowadays, that dialog is changing—and what they need to do is find partners across the other parts of the business. So, they’re going to go into the supply chain team, they’ll go into the merchandising team, they’ll go into the banking team, they’ll go into the other teams, and they’re going to find their supporters and winners there, and they’re going to probably build out from there. So, the first step would likely be, if you’re a big enough organization that you’re not having that strategy the executive level is to find your friends—and there will be some of the organization who support this data strategy—and get some wins for them.” - Evan Shellshear (24:38) “It’s not like there’s this box you put one in the other in. Because, like success and failure, there’s a continuum. And companies as they move along that continuum, just like you said, this year, we failed on the lack of executive buy-in, so let’s fix that problem. Next year, we fail on not having the right resources, so we fix that problem. And you move along that continuum, and you build it up. And at some point as you’re going on, that failure rate is dropping, and you’re getting towards that end of the scale where you’ve got those really capable companies that live, eat, and breathe data science and analytics, and so have to have these to be able to survive, otherwise a simple company evolution would have wiped them out, and they wouldn’t exist if they didn’t have that capability, if that’s their core thing.” - Evan Shellshear (18:56) “Nothing else could be correct, right? This subjective intuition and all this stuff, it’s never going to be as good as the data. And so, what happens is, is you, often as a data scientist—and I’ve been subjected to this myself—come in with this arrogance, this kind of data-driven arrogance, right? And it’s not a good thing. It puts up barriers, it creates issues, it separates you from the people.” - Evan Shellshear (27:38) "Knowing that you’re going to have to go on that journey from day one, you can’t jump from level zero to level five. That’s what all these data maturity models are about, right? You can’t jump from level zero data maturity to level five overnight. You really need to take those steps and build it up.” - Evan Shellshear (45:21) "What we’re talking about, it’s not new. 
It’s just old wine in a new skin, and we’re just presenting it for the data science age." - Evan Shellshear (48:15) Links Why Data Science Projects Fail: The Harsh Realities of Implementing AI and Analytics, without the Hype : https://www.routledge.com/Why-Data-Science-Projects-Fail-the-Harsh-Realities-of-Implementing-AI-and-Analytics-without-the-Hype/Gray-Shellshear/p/book/9781032660301 LinkedIn: https://www.linkedin.com/in/eshellshear/ Get the Book: Get 20% off at Routledge.com w/ code dspf20 Get it at Amazon Why do we still teach people to calculate? (People I Mostly Admire podcast)…

Ready for more ideas about UX for AI and LLM applications in enterprise environments? In part 2 of my topic on UX considerations for LLMs, I explore how an LLM might be used for a fictitious use case at an insurance company—specifically, to help internal tools teams get rapid access to primary qualitative user research. (Yes, it’s a little “meta”, and I’m also trying to nudge you with this hypothetical example—no secret!) ;-) My goal with these episodes is to share questions you might want to ask yourself such that any use of an LLM is actually contributing to a positive UX outcome. Join me as I cover the implications for design, the importance of foundational data quality, the balance between creative inspiration and factual accuracy, and the never-ending discussion of how we might handle hallucinations and errors posing as “facts”—all with a UX angle. At the end, I also share a personal story where I used an LLM to help me do some shopping for my favorite product: TRIP INSURANCE! (NOT!) Highlights/ Skip to: (1:05) I introduce a hypothetical internal LLM tool and what the goal of the tool is for the team who would use it (5:31) Improving access to primary research findings for better UX (10:19) What “quality data” means in a UX context (12:18) When LLM accuracy maybe doesn’t matter as much (14:03) How AI and LLMs are opening the door for fresh visioning work (15:38) Brian’s overall take on LLMs inside enterprise software as of right now (18:56) Final thoughts on UX design for LLMs, particularly in the enterprise (20:25) My inspiration for these 2 episodes—and how I had to use ChatGPT to help me complete a purchase on a website that could have integrated this capability right into their website Quotes from Today’s Episode “If we accept that the goal of most product and user experience research is to accelerate the production of quality services, products, and experiences, the question is whether or not using an LLM for these types of questions is moving the needle in that direction at all. And secondly, are the potential downsides like hallucinations and occasional fabricated findings, is that all worth it? So, this is a design for AI problem.” - Brian T. O’Neill (8:09) “What’s in our data? Can the right people change it when the LLM is wrong? The data product managers and AI leaders reading this or listening know that the not-so-secret path to the best AI is in the foundational data that the models are trained on. But what does the word *quality* mean from a product standpoint and a risk reduction one, as seen from an end-users’ perspective? Somebody who’s trying to get work done? This is a different type of quality measurement.” - Brian T. O’Neill (10:40) “When we think about fact retrieval use cases in particular, how easily can product teams—internal or otherwise—and end-users understand the confidence of responses? When responses are wrong, how easily, if at all, can users and product teams update the model’s responses? Errors in large language models may be a significant design consideration when we design probabilistic solutions, and we no longer control what exactly our products and software are going to show to users. If bad UX can include leading people down the wrong path unknowingly, then AI is kind of like the team on the other side of the tug of war that we’re playing.” - Brian T. O’Neill (11:22)
“As somebody who writes a lot for my consulting business, and composes music in another, one of the hardest parts for creators can be the zero-to-one problem of getting started—the blank page—and this is a place where I think LLMs have great potential. But it also means we need to do the proper research to understand our audience, and when or where they’re doing truly generative or creative work—such that we can take a generative UX to the next level that goes beyond delivering banal and obviously derivative content.” - Brian T. O’Neill (13:31) “One thing I actually like about the hype, investment, and excitement around GenAI and LLMs in the enterprise is that there is an opportunity for organizations here to do some fresh visioning work. And this is a place that designers and user experience professionals can help data teams as we bring design into the AI space.” - Brian T. O’Neill (14:04) “If there was ever a time to do some new visioning work, I think now is one of those times. However, we need highly skilled design leaders to help facilitate this in order for this to be effective. Part of that skill is knowing who to include in exercises like this, and my perspective, one of those people, for sure, should be somebody who understands the data science side as well, not just the engineering perspective. And as I posited in my seminar that I teach, the AI and analytical data product teams probably need a fourth member. It’s a quartet and not a trio. And that quartet includes a data expert, as well as that engineering lead.” - Brian T. O’Neill (14:38) Links Perplexity.ai: https://perplexity.ai Ideaflow : https://www.amazon.com/Ideaflow-Only-Business-Metric-Matters/dp/0593420586 My article that inspired this episode…

Let’s talk about design for AI (which more and more, I’m agreeing means GenAI to those outside the data space). The hype around GenAI and LLMs—particularly as it relates to dropping these in as features into a software application or product—seems to me, at this time, to largely be driven by FOMO rather than real value. In this “part 1” episode, I look at the importance of solid user experience design and outcome-oriented thinking when deploying LLMs into enterprise products. Challenges with immature AI UIs, the role of context, the constant game of understanding what accuracy means (and how much this matters), and the potential impact on human workers are also examined. Through a hypothetical scenario, I illustrate the complexities of using LLMs in practical applications, stressing the need for careful consideration of benchmarks and the acceptance of GenAI's risks. I also want to note that LLMs are a very immature space in terms of UI/UX design—even if the foundation models continue to mature at a rapid pace. As such, this episode is more about the questions and mindset I would be considering when integrating LLMs into enterprise software more than a suggestion of “best practices.” Highlights/ Skip to: (1:15) Currently, many LLM feature initiatives seem to be mostly driven by FOMO (2:45) UX Considerations for LLM-enhanced enterprise applications (5:14) Challenges with LLM UIs / user interfaces (7:24) Measuring improvement in UX outcomes with LLMs (10:36) Accuracy in LLMs and its relevance in enterprise software (11:28) Illustrating key considerations for implementing an LLM-based feature (19:00) Leadership and context in AI deployment (19:27) Determining UX benchmarks for using LLMs (20:14) The dynamic nature of LLM hallucinations and how we design for the unknown (21:16) Closing thoughts on Part 1 of designing for AI and LLMs Quotes from Today’s Episode “While many product teams continue to race to deploy some sort of GenAI and especially LLMs into their products—particularly this is in the tech sector for commercial software companies—the general sense I’m getting is that this is still more about FOMO than anything else.” - Brian T. O’Neill (2:07) “No matter what the technology is, a good user experience design foundation starts with not doing any harm, and hopefully going beyond usable to be delightful. And adding LLM capabilities into a solution is really no different. So, we still need to have outcome-oriented thinking on both our product and design teams when deploying LLM capabilities into a solution. This is a cornerstone of good product work.” - Brian T. O’Neill (3:03) “So, challenges with LLM UIs and UXs, right, user interfaces and experiences, the most obvious challenge to me right now with large language model interfaces is that while we’ve given users tremendous flexibility in the form of a Google search-like interface, we’ve also in many cases, limited the UX of these interactions to a text conversation with a machine. We’re back to the CLI in some ways.” - Brian T. O’Neill (5:14) “Before and after we insert an LLM into a user’s workflow, we need to know what an improvement in their life or work actually means.”- Brian T. O’Neill (7:24) "If it would take the machine a few seconds to process a result versus what might take a day for a worker, what’s the role and purpose of that worker going forward? I think these are all considerations that need to be made, particularly if you’re concerned about adoption, which a lot of data product leaders are." - Brian T. O’Neill (10:17)
O’Neill (10:17) “So, there’s no right or wrong answer here. These are all range questions, and they’re leadership questions, and context really matters. They are important to ask, particularly when we have this risk of reacting to incorrect information that looks plausible and believable because of how these LLMs tend to respond to us with a positive sheen much of the time.” - Brian T. O’Neill (19:00) Links View Part 1 of my article on UI/UX design considerations for LLMs in enterprise applications: https://designingforanalytics.com/resources/ui-ux-design-for-enterprise-llms-use-cases-and-considerations-for-data-and-product-leaders-in-2024-part-1/…

146 - (Rebroadcast) Beyond Data Science - Why Human-Centered AI Needs Design with Ben Shneiderman (42:07)

Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI). Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence. I’m so excited to welcome this expert from the field of UX and design to today’s episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems. In our chat, we covered:
- Ben’s career studying human-computer interaction and computer science (0:30)
- ‘Building a culture of safety’: creating and designing ‘safe, reliable and trustworthy’ AI systems (3:55)
- ‘Like zoning boards’: why Ben thinks we need independent oversight of privately created AI (12:56)
- ‘There’s no such thing as an autonomous device’: designing human control into AI systems (18:16)
- A/B testing, usability testing, and controlled experiments: the power of research in designing good user experiences (21:08)
- Designing ‘comprehensible, predictable, and controllable’ user interfaces for explainable AI (XAI) systems and why XAI matters (30:34)
- Ben’s upcoming book on human-centered AI (35:55)

Resources and Links:
- People-Centered Internet: https://peoplecentered.net/
- Designing the User Interface (one of Ben’s earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X
- Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764
- Partnership on AI: https://www.partnershiponai.org/
- AI Incident Database: https://www.partnershiponai.org/aiincidentdatabase/
- University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/
- ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html
- Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/
- Ben on Twitter: https://twitter.com/benbendc

Quotes from Today’s Episode:
- “The world of AI has certainly grown and blossomed — it’s the hot topic everywhere you go. It’s the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they’re not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that’s where the action is. Of course, what we really want from AI is to make our world a better place, and that’s a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. We want to support individual goals, a person’s sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that’s where we want to go.” - Ben (2:05)
- “The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it’s not just programming, but it also involves the use of data that’s used for training. The key distinction is that the data that drives the AI has to be the appropriate data: it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let’s say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There’s been bias in facial recognition algorithms, which were less accurate with people of color. That’s led to some real problems in the real world. And that’s where we have to make sure we do a much better job, and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating.” - Ben (6:10)
- “Every company will tell you, ‘We do a really good job in checking out our AI systems.’ That’s great. We want every company to do a really good job. But we also want independent oversight of somebody who’s outside the company — someone who knows the field, who’s looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that’s where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable.” - Ben (15:04)
- “There’s no such thing as an autonomous device. Someone owns it; somebody’s responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it’s performing poorly. … Responsibility is a pretty key factor here. So, if there’s something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what’s happening? What’s it doing? What’s going wrong and what’s going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that’s hidden away and you never see it, because that’s just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what’s going on and make sure it gets better. Every quarter.” - Ben (19:41)
- “Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. They have UX, ML-UX people, UX for AI people; they’re at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they’re doing. But even these largest companies that have, probably, the biggest penetration into the most number of people out there are getting some of this really important stuff wrong.” - Brian (26:36)
- “Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what’s usually called post-hoc explanations, and the Shapley, and LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result, and you say, ‘What happened?’ Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I’m afraid I haven’t seen too many success stories of that working. … I’ve been diving through this for years now, and I’ve been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even DARPA’s XAI—Explainable AI—project, which has 11 projects within it, has not really grappled with this in a good way about designing what it’s going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let’s prevent the user from getting confused so they don’t have to request an explanation. We walk them along, let the user walk through the steps—this is like Amazon’s checkout process, a seven-step process—and you know what’s happened in each step, you can go back, you can explore, you can change things in each part of it. It’s also what TurboTax does so well, in really complicated situations, and walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step.” - Ben (31:13)

145 - Data Product Success: Adopting a Customer-Centric Approach with Malcolm Hawker, Head of Data Management at Profisee (53:09)

Wait, I’m talking to a head of data management at a tech company? Why!? Well, today I’m joined by Malcolm Hawker to get his perspective on data products and what he’s seeing out in the wild as Head of Data Management at Profisee. Why Malcolm? Malcolm was a head of product in prior roles, and for several years I’ve enjoyed his musings on LinkedIn about the value of a product-oriented approach to ML and analytics. We had a chance to meet at CDOIQ in 2023 as well, and he went on my “need to do an episode” list! According to Malcolm, empathy is the secret to addressing key UX questions that ensure adoption and business value. He also emphasizes the need for data experts to develop business skills so that they’re seen as equals by their customers. During our chat, Malcolm stresses the benefits of a product- and customer-centric approach to data products and what data professionals can learn by approaching problem-solving with a product orientation.

Highlights/ Skip to:
- Malcolm’s definition of a data product (2:10)
- Understanding your customers’ needs is the first step toward quantifying the benefits of your data product (6:34)
- How product makers can gain access to users to build more successful products (11:36)
- Answering the UX question to get past the adoption stage and provide business value (16:03)
- Data experts must develop business expertise if they want to be seen as equals by potential customers (20:07)
- What people really mean by “data culture” (23:02)
- Malcolm’s data product journey and his changing perspective (32:05)
- Using empathy to provide a better UX in design and data (39:24)
- Avoiding the death of data science by becoming more product-driven (46:23)
- Where the majority of data professionals currently land on their view of product management for data products (48:15)

Quotes from Today’s Episode:
- “My definition of a data product is something that is built by a data and analytics team that solves a specific customer problem that the customer would otherwise be willing to pay for. That’s it.” - Malcolm Hawker (3:42)
- “You need to observe how your customer uses data to make better decisions, optimize a business process, or to mitigate business risk. You need to know how your customers operate at a very, very intimate level, arguably, as well as they know how their business processes operate.” - Malcolm Hawker (7:36)
- “So, be a problem solver. Be collaborative. Be somebody who is eager to help make your customers’ lives easier. You hear ‘no’ when people think that you’re a burden. You start to hear more ‘yeses’ when people think that you are actually invested in helping make their lives easier.” - Malcolm Hawker (12:42)
- “We [data professionals] put data on a pedestal. We develop this mindset that the data matters more—as much or maybe even more than the business processes—and that is not true. We would not exist if it were not for the business. Hard stop.” - Malcolm Hawker (17:07)
- “I hate to say it, I think a lot of this data stuff should kind of feel invisible in that way, too. It’s like this invisible ally that you’re not thinking about the dashboard; you just access the information as part of your natural workflow when you need insights on making a decision, or a status check that you’re on track with whatever your goal was. You’re not really going out of mode.” - Brian O’Neill (24:59)
- “But you know, data people are basically librarians. We want to put things into classifications that are logical and work forwards and backwards, right? And in the product world, sometimes they just don’t, where you can have something be a product and be a material to a subsequent product.” - Malcolm Hawker (37:57)
- “So, the broader point here is just more of a mindset shift. And you know, maybe these things aren’t necessarily a bad thing, but how do we become a little more product- and customer-driven so that we avoid situations where everybody thinks what we’re doing is a time waster?” - Malcolm Hawker (48:00)

Links:
- Profisee: https://profisee.com/
- LinkedIn: https://www.linkedin.com/in/malhawker/
- CDO Matters: https://profisee.com/cdo-matters-live-with-malcolm-hawker/

144 - The Data Product Debate: Essential Tech or Excessive Effort? with Shashank Garg, CEO of Infocepts (Promoted Episode) (52:38)

Welcome to another curated, Promoted Episode of Experiencing Data! In episode 144, Shashank Garg, Co-Founder and CEO of Infocepts, joins me to explore whether all this discussion of data products out on the web actually has substance—and whether the perceived extra effort is worth it. Do we always need to take a product approach to ML and analytics initiatives? Shashank dives into how Infocepts approaches the creation of data solutions that are designed to be actionable within specific business workflows—and, as I often do, I started out by asking Shashank how he and Infocepts define the term “data product.” We discuss a few real-world applications Infocepts has built, the measurable impact of these data products, and some of the challenges they’ve faced that your team might face as well. Skill sets also came up: who does design? Who takes ownership of the product/value side? And of course, we touch a bit on GenAI.

Highlights/ Skip to:
- Shashank gives his definition of data products (01:24)
- We tackle the challenges of user adoption in data products (04:29)
- We discuss the crucial role of integrating actionable insights into data products for enhanced decision-making (05:47)
- Shashank shares insights on the evolution of data products from concept to practical integration (10:35)
- We explore the challenges and strategies in designing user-centric data products (12:30)
- I ask Shashank about typical environments and challenges when starting new data product consultations (15:57)
- Shashank explains how Infocepts incorporates AI into their data solutions (18:55)
- We discuss the importance of understanding user personas and engaging with actual users (25:06)
- Shashank describes the roles involved in the ideation and brainstorming stages of data product development (32:20)
- The issue of proxy users not truly representing end users in data product design is examined (35:47)
- We consider how organizations are adopting a product-oriented approach to their data strategies (39:48)
- Shashank and I delve into the implications of GenAI and other AI technologies on product orientation and user adoption (43:47)
- Closing thoughts (51:00)

Quotes from Today’s Episode:
- “Data products, at least to us at Infocepts, refers to a way of thinking about and organizing your data in a way so that it drives consumption, and most importantly, actions.” - Shashank Garg (1:44)
- “The way I see it is [that] the role of a DPM (data product manager)—whether they have the title or not—is benefits creation. You need to be responsible for benefits, not for outputs. The outputs have to create benefits or it doesn’t count. Game over.” - Brian O’Neill (10:07)
- “We talk about bridging the gap between the worlds of business and analytics... There’s a huge gap between the perception of users and the tech leaders who are producing it.” - Shashank Garg (17:37)
- “IT leaders often limit their roles to provisioning their secure data, and then they rely on businesses to be able to generate insights and take actions. Sometimes this handoff works, and sometimes it doesn’t because of quality governance.” - Shashank Garg (23:02)
- “Data is the kind of field where people can react very, very quickly to what’s wrong.” - Shashank Garg (29:44)
- “It’s much easier to get to a good prototype if we know what the inputs to a prototype are, which include data about the people who are going to use the solution, their usage scenarios, use cases, attitudes, beliefs…all these kinds of things.” - Brian O’Neill (31:49)
- “For data, you need a separate person, and then for designing, you need a separate person, and for analysis, you need a separate person. The more you can combine—I don’t think you can create super-humans who can do all three or four disciplines, but at least two disciplines, and can appreciate the third one—that makes it easier.” - Shashank Garg (39:20)
- “When we think of AI, we’re all talking about multiple different delivery methods here. I think AI is starting to become GenAI to a lot of non-data people. It’s like their—everything is GenAI.” - Brian O’Neill (43:48)

Links:
- Infocepts website: https://www.infocepts.ai/
- Shashank Garg on LinkedIn: https://www.linkedin.com/in/shashankgarg/
- Top 5 Data & AI initiatives for business success: https://www.infocepts.ai/downloads/top-5-data-and-ai-initiatives-to-drive-business-growth-in-2024-beyond/

143 - The (5) Top Reasons AI/ML and Analytics SAAS Product Leaders Come to Me for UI/UX Design Help (50:01)

Welcome back! In today’s solo episode, I share the top five struggles of enterprise SAAS leaders in the analytics/insight/decision-support space that most frequently lead them to think they have a UI/UX design problem that has to be addressed. A lot of today’s episode is about “slow creep”: unaddressed design problems that gradually build up over time and begin to impact both UX and your revenue negatively. I also share 20 UI and UX design problems I often see (even if clients do not!) that, when left unaddressed, may create sales friction, adoption problems, churn, or unhappy end users. If you work at a software company or are directly monetizing an ML or analytical data product, this episode is for you!

Highlights/ Skip to:
- How specific UI/UX design problems can significantly impact business performance (02:51)
- Five common reasons why enterprise software leaders typically reach out for help (04:39)
- The 20 common symptoms I’ve observed in client engagements that indicate the need for professional UI/UX intervention or training (13:22)
- The dangers of adding too many features or too much customization, and how they can overwhelm users (16:00)
- The issues with integrating AI into user interfaces and UXs without proper design thinking (30:08)
- I encourage listeners to apply the insights shared to improve their data products (48:02)

Quotes from Today’s Episode:
- “One of the problems with bad design is that some of it we can see and some of it we can’t — unless you know what you’re looking for.” - Brian O’Neill (02:23)
- “Design is usually not top of mind for an enterprise software product, especially one in the machine learning and analytics space. However, if you have human users, even enterprise ones, their tolerance for bad software is much lower today than in the past.” - Brian O’Neill (13:04)
- “Early on when you’re trying to get product-market fit, you can’t be everything for everyone. You need to be an A+ experience for the person you’re trying to satisfy.” - Brian O’Neill (15:39)
- “Often when I see customization, it is mostly used as a crutch for not making real product strategy and design decisions.” - Brian O’Neill (16:04)
- “Customization of data and dashboard products may be more of a tax than a benefit. In the marketing copy, customization sounds like a benefit...until you actually go in and try to do it. It puts the mental effort to design a good solution on the user.” - Brian O’Neill (16:26)
- “We need to think strategically when implementing GenAI—or just AI in general—into the product UX, because it won’t automatically help drive sales or increase business value.” - Brian O’Neill (20:50)
- “A lot of times our analytics and machine learning tools… are insight decision support products. They’re supposed to be rooted in facts and data, but when it comes to designing these products, there’s not a whole lot of data and facts that are actually informing the product design choices.” - Brian O’Neill (30:37)
- “If your IP is that special, but also complex, it needs the proper UI/UX design treatment so that the value can be surfaced in such a way someone is willing to pay for it, if not also find it indispensable and delightful.” - Brian O’Neill (45:02)

Links:
- The (5) big reasons AI/ML and analytics product leaders invest in UI/UX design help: https://designingforanalytics.com/resources/the-5-big-reasons-ai-ml-and-analytics-product-leaders-invest-in-ui-ux-design-help/
- Subscribe for free insights on designing useful, high-value enterprise ML and analytical data products: https://designingforanalytics.com/list
- Access my free frameworks, guides, and additional reading for SAAS leaders on designing high-value ML and analytical data products: https://designingforanalytics.com/resources
- Need help getting your product’s design/UX on track—so you can see more sales, less churn, and higher user adoption? Schedule a free 60-minute Discovery Call with me and I’ll give you my read on your situation and my recommendations to get ahead: https://designingforanalytics.com/services/

142 - Live Webinar Recording: My UI/UX Design Audit of a New Podcast Analytics Service w/ Chris Hill (CEO, Humblepod) (50:56)

Welcome to a special edition of Experiencing Data. This episode is the audio from a live Crowdcast video webinar I gave on April 26th, 2024, where I conducted a mini UI/UX design audit of a new podcast analytics service that Chris Hill, CEO of Humblepod, is working on to help podcast hosts grow their shows. Humblepod is also the team behind the scenes of Experiencing Data, and Chris had asked me to take a look at his new “Listener Lifecycle” tool to see if we could find ways to improve the UX and visualizations in the tool, how we might productize this MVP in the future, and how improving the tool’s design might help Chris show prospective podcast clients how their listener data could help them grow their listenership and “true fans.” On a personal note, it was fun to talk to Chris on the show given that we speak every week: Humblepod has been my trusted resource for audio mixing, transcription, and show-note summarizing for probably over 100 of the most recent episodes of Experiencing Data. It was also fun to do a “live recording” with an audience—and we did answer questions in the full video version. (If you missed the invite, join my Insights mailing list to get notified of future free webinars.) To watch the full audio and video recording on Crowdcast, free, head over to: https://www.crowdcast.io/c/podcast-analytics-ui-ux-design

Highlights/ Skip to:
- Chris talks about using data to improve podcasts and his approach to podcast numbers (03:06)
- Chris introduces the Listener Lifecycle model which informed the dashboard design (08:17)
- Chris and I discuss the importance of labeling and terminology in analytics UIs (11:00)
- We discuss designing analytics dashboards for practical use, so they provide actionable insights (17:05)
- We discuss the challenges podcast hosts face in understanding and utilizing data effectively, and how design might help (21:44)
- I discuss how my CED UX framework for advanced analytics applications helps to facilitate actionable insights (24:37)
- I highlight the importance of presenting data effectively and in a way that centers on user needs (28:50)
- I express challenges users may have with podcast rankings and the reliability of data sources (34:24)
- Chris and I discuss tailoring data reports to meet the specific needs of clients (37:14)

Quotes from Today’s Episode:
- “The irony for me as someone who has a podcast about machine learning and analytics and design is that I basically never look at my analytics.” - Brian O’Neill (01:14)
- “The problem that I have found in podcasting is that the number that everybody uses to gauge whether a podcast is good or not is the download number…But there’s a lot of other factors in a podcast that can tell you how successful it’s going to be…where you can pull levers to…grow your show, or engage more with an audience.” - Chris Hill (03:20)
- “I have a framework for user experience design for analytics called CED, which stands for Conclusions, Evidence, Data… The basic idea is really simple: lead your analytics service with conclusions.” - Brian O’Neill (24:37)
- “Where the eyes glaze over is when tools are mostly evidence generators, and we just give everybody the evidence, but there’s no actual analysis about how [this is] helping me improve my life or my business. It’s just evidence. I need someone to put that together.” - Brian O’Neill (25:23)
- “Sometimes the data doesn’t provide enough of a conclusion about what to do…This is where your opinion starts to matter.” - Brian O’Neill (26:07)
- “It sounds like a benefit, but drilling down for most people into analytics stuff is usually a tax unless you’re an analyst.” - Brian O’Neill (27:39)
- “Where’s the source of this data, and who decided what these numbers are? Because so much of this stuff…is not shared. As someone who’s in this space, it’s not even that it’s confusing. It’s more like, you’ve got to distill this down for me.” - Brian O’Neill (34:57)
- “Your clients are probably going to glaze over at this level of data because it’s not helping them make any decision about what to change.” - Brian O’Neill (37:53)

Links:
- Watch the original Crowdcast video recording of this episode
- Brian’s CED UX Framework for Advanced Analytics Solutions
- Join Brian’s Insights mailing list

In this week's episode of Experiencing Data, I'm joined by Duncan Milne, Director of Data Investment & Product Management at the Royal Bank of Canada (RBC). Today, Duncan (who is also a member of the DPLC) gives a preview of his upcoming webinar on April 24, 2024, entitled “Is that Data Product Worth Building? Estimating Economic Value…Before You Build It!” Duncan shares his experience implementing a product mindset within RBC's Chief Data Office, and he explains some of the challenges, successes, and insights gained along the way. He emphasizes the critical role of understanding user needs and evaluating the economic impact of data products—before they are built. Duncan was gracious enough to let us peek inside and see a transformation that is currently in progress, and I’m excited to check out his webinar this month!

Highlights/ Skip to:
- I introduce Duncan Milne from RBC (00:00)
- Duncan outlines the Chief Data Office's function at RBC (01:01)
- We discuss data products and how they are used to improve business processes (04:05)
- The genesis of RBC's move toward a product-centric approach to data, highlighting initial challenges and strategies for fostering a product mindset (07:26)
- Duncan discusses developing a framework to guide the lifecycle of data products at RBC (09:29)
- Duncan addresses initial resistance and adaptation strategies for engaging teams in a new product-centric methodology (12:04)
- The scaling challenges of applying a product mindset across a large organization like RBC (22:02)
- Insights into the framework for evaluating and prioritizing data product ideas based on their desirability, usability, feasibility, and viability (26:30)
- Measuring success and value in data product management (30:45)
- Duncan explores process-mapping challenges in banking (34:13)
- Duncan shares how RBC created specialized training for data product management (36:39)
- Duncan offers advice and closing thoughts on data product management (41:38)

Quotes from Today’s Episode:
- “We think about data products as anything that solves a problem using data... it's helping someone do something they already do or want to do faster and better using data.” - Duncan Milne (04:29)
- “The transition to data product management involves overcoming initial resistance by demonstrating the tangible value of this approach.” - Duncan Milne (08:38)
- “You have to want to show up and do this kind of work [adopting a product mindset in data product management]…even if you do a product the right way, it doesn’t always work, right? The thing you make may not be desirable, it may not be as usable as it needs to be. It can be technically right and still fail. It’s not a guarantee, it’s just a better way of working.” - Brian T. O’Neill (15:03)
- “[Product management]... it's like baking versus cooking. Baking is a science... cooking is much more flexible. It’s about... did we produce a benefit for users? Did we produce an economic benefit? ...It’s a multivariate problem... a lot of it is experimentation and figuring out what works.” - Brian T. O'Neill (23:03)
- “The easy thing to measure [in product management] is: did you follow the process or not? That is not the point of product management at all. It's about delivering benefits to the stakeholders and to the customer.” - Brian O'Neill (25:16)
- “Data product is not something that is set in stone... You can leverage learnings from a more traditional product approach, but don’t be afraid to improvise.” - Duncan Milne (41:38)
- “Data products are fundamentally different from digital products, so even the traditional approach to product management in that space doesn’t necessarily work within the data products construct.” - Duncan Milne (41:55)
- “There is no textbook for data product management; the field is still being developed…don’t be afraid to create your own answer if what exists out there doesn’t necessarily work within your context.” - Duncan Milne (42:17)

Links:
- Duncan’s LinkedIn: https://www.linkedin.com/in/duncanwmilne/?originalSubdomain=ca

140 - Why Data Visualization Alone Doesn’t Fix UI/UX Design Problems in Analytical Data Products with T from Data Rocks NZ (42:44)

This week on Experiencing Data, I chat with a new kindred spirit! Recently, I connected with Thabata Romanowski—better known as "T from Data Rocks NZ"—to discuss her experience applying UX design principles to modern analytical data products and dashboards. T walks us through her time as a data analyst in the mining sector, sharing how those experiences laid the foundation for her transition to data visualization. Now, she specializes in transforming complex, industry-specific data sets into intuitive, user-friendly visual representations, and addresses the challenges faced by the analytics teams she supports through her design business. T and I tackle common misconceptions about design in the analytics field, discuss how we communicate and educate non-designers on applying UX design principles to their dashboard and application design work, and address the problem with "pretty charts." We also explore some of the core ideas in T's Design Manifesto, including principles like being purposeful, context-sensitive, collaborative, and humanistic—all aimed at increasing user adoption and business value by improving UX.

Highlights/ Skip to:
- I welcome T from Data Rocks NZ onto the show (00:00)
- T's transition from mining to leading an information design and data visualization consultancy (01:43)
- T discusses the critical role of clear communication in data design solutions (03:39)
- We address the misconceptions around the role of design in data analytics (06:54)
- T explains the importance of journey mapping in understanding users' needs (15:25)
- We discuss the challenges of accurately capturing end-user needs (19:00)
- T and I discuss the importance of talking directly to end users when developing data products (25:56)
- T shares her “I like, I wish, I wonder” method for eliciting genuine user feedback (33:03)
- T discusses her Data Design Manifesto for creating purposeful, context-aware, collaborative, and human-centered design principles in data (36:37)
- We wrap up the conversation and share ways to connect with T (40:49)

Quotes from Today’s Episode:
- “It's not so much that people…don't know what design is, it's more that they understand it differently from what it can actually do...” - T from Data Rocks NZ (06:59)
- “I think [the misconception about design in technology] is rooted mainly in the fact that data has been very tied to IT teams, to technology teams, and they’re not always up to what design actually does.” - T from Data Rocks NZ (07:42)
- “If you strip design of function, it becomes art. So, it’s not art… it’s about being functional and being useful in helping people.” - T from Data Rocks NZ (09:06)
- “It’s not that people don’t know, really, that the word design exists, or that design applies to analytics and whatnot; it’s more that they have this misunderstanding that it’s about making things look a certain way, when in fact... it’s about function. It’s about helping people do stuff better.” - T from Data Rocks NZ (09:19)
- “Journey mapping means that you have to talk to people... Data is an inherently human thing. It is something that we create ourselves. So, it’s biased from the start. You can’t fully remove the human from the data.” - T from Data Rocks NZ (15:36)
- “The biggest part of your data product success…happens outside of your technology and outside of your actual analysis. It’s defining who your audience is, what the context of this audience is, and to which purpose they need that product.” - T from Data Rocks NZ (19:08)
- “[In UX research], a tight, empowered product team needs regular exposure to end customers; there’s nothing that can replace that.” - Brian O'Neill (25:58)
- “You have two sides [end users and the data team] that are frustrated with the same thing. The side who asked wasn’t really sure what to ask, and the data team gets frustrated because the users don’t know what they want…Nobody really understood what the problem is. There’s a lot of assumptions happening there. And this is one of the hardest things to let go.” - T from Data Rocks NZ (29:38)
- “No piece of data product exists in isolation, so understanding what people do with it… is really important.” - T from Data Rocks NZ (38:51)

Links:
- Design Matters Newsletter: https://buttondown.email/datarocksnz
- Website: https://www.datarocks.co.nz/
- LinkedIn: https://www.linkedin.com/company/datarocksnz/
- BlueSky: https://bsky.app/profile/datarocksnz.bsky.social
- Mastodon: https://me.dm/@datarocksnz

139 - Monetizing SAAS Analytics and the Challenges of Designing a Successful Embedded BI Product (Promoted Episode) (51:02)

This week on Experiencing Data, something new, as promised at the beginning of the year. Today, I’m exploring the world of embedded analytics with Zalak Trivedi from Sigma Computing—and this is also the first approved Promoted Episode on the podcast. In today’s episode, Zalak shares his journey as the product lead for Sigma’s embedded analytics and reporting solution, which seeks to accelerate and simplify how SAAS companies deploy decision-support dashboards to their customers. Right there, we have the first challenge that Zalak was willing to dig into with me: designing a platform UX when we have multiple stakeholder and user types. In Sigma’s case, this means Sigma’s buyers, the developers who work at these SAAS companies to integrate Sigma into their products, and then the actual customers of these SAAS companies who will be the final end users of the resulting dashboards. We also discuss the challenges of creating products that serve both beginners and experts, and how AI is being used in the BI industry.

Highlights/ Skip to:
- I introduce Zalak Trivedi from Sigma Computing onto the show (03:15)
- Zalak shares his journey leading the vision for embedded analytics at Sigma and explains what Sigma looks like when implemented into a customer’s SAAS product (03:54)
- Zalak and I discuss the challenge of integrating Sigma's analytics into various companies' software, since they need to account for a variety of stakeholders (09:53)
- We explore Sigma's team approach to user experience with product management, design, and technical writing (15:14)
- Zalak reveals how Sigma leverages telemetry to understand and improve user interactions with their products (19:54)
- Zalak outlines why Sigma is a faster and more supportive alternative to building your own analytics (27:21)
- We cover data monetization, specifically looking at how SAAS companies can monetize analytics and insights (32:05)
- Zalak highlights how Sigma is integrating AI into their BI solution (36:15)
- Zalak shares his customers' current pain points and interests (40:25)
- We wrap up with final thoughts and ways to connect with Zalak and learn more about Sigma (49:41)

Quotes from Today’s Episode:
- “Something I’m really excited about personally that we are working on is [moving] beyond analytics to help customers build entire data applications within Sigma. This is something we are really excited about as a company, and marching towards [achieving] this year.” - Zalak Trivedi (04:04)
- “The whole point of an embedded analytics application is that it should look and feel exactly like the application it’s embedded in, and the workflow should be seamless.” - Zalak Trivedi (09:29)
- “We [at Sigma] had to switch the way that we were thinking about personas. It was not just about the analysts or the data teams; it was more about how do we give the right tools to the [SAAS] product managers and developers to embed Sigma into their product.” - Zalak Trivedi (11:30)
- “You can’t not have a design, and you can’t not have a user experience. There’s always an experience with every tool, solution, product that we use, whether it emerged organically as a byproduct, or it was intentionally created through knowledge [and] data... it was intentional.” - Brian O’Neill (14:52)
- “If we find that [in] certain user experiences, people are tripping up and they’re not able to complete an entire workflow, we flag that, and then we work with the product managers, or [with] our customers essentially, and figure out how we can actually simplify these experiences.” - Zalak Trivedi (20:54)
- “We were able to convince many small to medium businesses and startups to sign up with Sigma. The success they experienced after embedding Sigma was tremendous. Many of our customers managed to monetize their existing data within weeks, or at most a couple of months, with lean development teams of two to three developers and a few business-side personnel, generating seven-figure income streams from that.” - Zalak Trivedi (32:05)
- “At Sigma, our stance is, let’s not just add AI for the sake of adding AI. Let’s really identify [where] in the entire user journey the intelligence really lies, and where the different friction points are, and let’s enhance those experiences.” - Zalak Trivedi (37:38)
- “Every time [we at Sigma Computing] think about a new feature or functionality, we have to ensure it works for both the first-degree persona and the second-degree persona, and consider how it will be viewed by these different personas, because that is not the primary persona for which the foundation of the product was built.” - Zalak Trivedi (48:08)

Links:
- Sigma Computing: https://sigmacomputing.com
- Email: zalak@sigmacomputing.com
- LinkedIn: https://www.linkedin.com/in/trivedizalak/
- Sigma Computing Embedded: https://sigmacomputing.com/embedded
- About Promoted Episodes on Experiencing Data: https://designingforanalytics.com/promoted

138 - VC Spotlight: The Impact of AI on SAAS and Data/Developer Products in 2024 w/ Ellen Chisa of Boldstart Ventures (33:05)

In this episode of Experiencing Data, I speak with Ellen Chisa, Partner at Boldstart Ventures, about what she’s seeing in the venture capital space around AI-driven products and companies—particularly with all the new GenAI capabilities that have emerged in the last year. Ellen and I first met when we were both engaged in travel-tech startups in Boston over a decade ago, so it was great to get her current perspective from the “other side” of products and companies, working as a VC. Ellen draws on her experience in product management and design to discuss how AI could democratize software creation and streamline backend coding, design integration, and analytics. We also delve into her work at Dark and the future prospects for developer tools and SaaS platforms. Given Ellen’s background in product management, human-centered design, and now VC, I thought she would have a lot to share—and she did!

Highlights/ Skip to:
- I introduce the show and my guest, Ellen Chisa (00:00)
- Ellen discusses her transition from product and design to venture capital with Boldstart Ventures (01:15)
- Ellen notes a shift from initial AI prototypes to more refined products, focusing on building and testing with minimal data (03:22)
- Ellen mentions Boldstart Ventures' focus on early-stage companies providing developer and data tooling for businesses (07:00)
- Ellen discusses what she learned from her time at Dark and Lola about narrowing target user groups for technology products (11:54)
- Ellen's insights into the importance of user experience in product design, and the process venture capitalists use to make sure it meets user needs (15:50)
- Ellen gives us her take on the impact of AI on creating new opportunities for data tools and engineering solutions (20:00)
- Ellen and I explore the future of user interfaces, and how AI tools could enhance UI/UX for end users (25:28)
- Closing remarks and the best way to find Ellen online (32:07)

Quotes from Today’s Episode:
- “It's a really interesting time in the venture market because on top of the GenAI wave, we obviously had the macroeconomic shift. And so we've seen a lot of people are saying the companies that come out now are going to be great companies because they're a little bit more capital-constrained from the beginning, typically, and they'll grow more thoughtfully and really be thinking about how do they build an efficient business.” - Ellen Chisa (03:22)
- “We have this big technological shift around AI-enabled companies, and I think one of the things I’ve seen is, if you think back to a year ago, we saw a lot of early prototyping, and so there were a couple of use cases that came up again and again.” - Ellen Chisa (03:42)
- “I don't think I've heard many pitches from founders who consider themselves data scientists first. We definitely get some from ML engineers and people who think about data architecture, for sure.” - Ellen Chisa (05:06)
- “I still prefer GUI interfaces to voice or text usually, but I think that might be an uncanny valley sort of thing where if you think of people who didn’t have technology growing up, they’re more comfortable with the more human interaction, and then you get, like, a chunk of people who are digital natives who prefer it.” - Ellen Chisa (24:51)
- [Citing some excellent Boston-area restaurants!] “The Arc browser just shipped a bunch of new functionality, where instead of opening a bunch of tabs, you can say, ‘Open the recipe pages for Oleana and Sarma,’ and it just opens both of them, and so it’s like multiple search queries at once.” - Ellen Chisa (27:22)
- “The AI wave of technology biases towards people who already have products [in the market] and have existing datasets, and so I think everyone [at tech companies] is getting this big, top-down mandate from their executive team, like, ‘Oh, hey, you have to do something with AI now.’” - Ellen Chisa (28:37)
- “I think it’s hard to really grasp what an LLM is until you do a fair amount of experimentation on your own. The experience of asking ChatGPT a simple search question compared to the experience of trying to train it to do something specific for you are quite different experiences. Even beyond that, there’s a tool called superwhisper that I like that you can take audio content and end up with transcripts, but you can give it prompts to change your transcripts as you’re going. So, you can record something, and it will give you a different output if you say you’re recording an email compared to [if] you’re recording a journal entry compared to [if] you’re recording the transcript for a podcast.” - Ellen Chisa (30:11)

Links:
- Boldstart Ventures: https://boldstart.vc/
- LinkedIn: https://www.linkedin.com/in/ellenchisa/
- Personal website: https://ellenchisa.com
- Email: ellen@boldstart.vc