
Content provided by Ross Dawson. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Ross Dawson or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://th.player.fm/legal

Marek Kowalkiewicz on the economy of algorithms, armies of chatbots, LLMs for scenarios, and becoming minion masters (AC Ep34)

32:34
 

We humans need to be the ones who work with algorithms to make sure that they don’t take the wrong path, don’t deteriorate, don’t misunderstand our intentions, and don’t create outcomes that we don’t want.

– Marek Kowalkiewicz

About Marek Kowalkiewicz

Professor Marek Kowalkiewicz is founding director of the Centre for the Digital Economy at Queensland University of Technology, where he leads a portfolio of research projects into AI and the digital economy. He was recently named in the Top 100 Global Thought Leaders in AI by Thinkers360. Marek’s new book, The Economy of Algorithms, is out today as we launch this episode.

What you will learn

  • Exploring the economy of algorithms
  • The role of software and people in shaping digital futures
  • Navigating the balance between automation and human oversight
  • The metaphor of digital minions in the algorithmic world
  • Engaging with generative AI and chatbots for practical tasks
  • Implementing AI responsibly in academic and commercial research
  • The importance of human agency in the age of automated decision-making

Episode Resources

Transcript

Ross Dawson: Marek, it’s awesome to have you on the show.

Marek Kowalkiewicz: Thanks so much for having me, Ross.

Ross: So I love the work that you do and I think it’s a very interesting frame. You’ve just got a new book coming out called The Economy of Algorithms, which we want to hear more about.

Marek: Very proud of this; I spent a couple of years working on this book, so it’s a labor of love. It’s very much a book about this new world that’s emerging. That, strangely, is called the economy of algorithms. But it’s not just algorithms; it’s software, it’s people, it’s corporations. I wanted to write about this world where increasingly we’re giving more and more agency to software agents, right, or algorithms. We’ll let them buy things, we’ll let them sell things, and we’ll let them deliver services on our behalf or to us. And so in a way, they’re starting to behave a bit like humans, or like other organizations in the economy. So I thought, you know, we need to capture it somehow. And that was the spin in The Economy of Algorithms.

Ross: There are some things that I’d like to dig into. One is about the architecture. You have these smart people with PhDs working in big tech and AI who are designing the architecture of these algorithms. And I think there is also an element in the everyday, where each of us interacting with AI plays a role in shaping that human-AI relationship. How much of this happens at the level of the macro design of the AI architecture, and how much of this is shaped by each of us who uses it?

Marek: So there is this, I think, interesting trajectory, a process of learning about what’s happening in the world. The more I thought about automation and giving autonomy to technologies, the more I realized how important the role of humans is in the entire process. So, while I will not question the fact that algorithms are taking over individual tasks, and that algorithms are performing entire jobs that used to be done by humans, I realized, through seeing a lot of examples of how it played out in reality, that we humans need to be the ones who work with algorithms to make sure that they don’t take the wrong path, don’t deteriorate, don’t misunderstand our intentions, and don’t create outcomes that we don’t want. In fact, when you look at the subtitle of the book, it’s AI and the Rise of Digital Minions. I specifically wanted to refer to those minions that some of your listeners might know from the movie Minions, right? Those yellow creatures that want to be helpful, but if you let them do whatever they want, they will create all sorts of disasters. And I thought, that’s a perfect metaphor for a lot of algorithms in our economy. They’re a bit like digital minions. They want to be helpful. They want to work all the time for us. But sometimes they’re not very smart, and, you know, even though they think they’re being helpful, they’re creating a lot of disasters.

Ross: So, I want to get to some of the broader points around the skills that we can develop, which you address in the later sections of your book, but I just wanted to start with some of the specific things that people can do. There are many tactics and approaches, and people describe some of these things as prompt engineering, but what are some of the things that people can take away and apply to make themselves more productive or more effective in using these tools in their work and enhancing their thinking?

Marek: In short, there’s something that has happened in the last year or so that has really changed the answer I give to such questions. In the past, I would say one of the most important things for anyone is to learn at least a tiny bit of code, to understand how machines work, how algorithms follow processes, get to outcomes, and produce outputs. But these days, given how easy it is to start working with generative artificial intelligence chatbots and the like, my answer would be: well, just start playing with them. Which of those available chatbots you choose really doesn’t matter. Go to Google’s Bard or, you know, OpenAI’s ChatGPT, and just start interacting with them. Take maybe a task that you do at work, and see how those chatbots would perform.

Be careful, though: you need to be aware of a number of things. Perhaps you don’t want to share confidential information with those chatbots unless you know what happens with the information that you submit to them. Be mindful of the fact that the responses they give you might not necessarily be correct, so you will have to verify whether they are or not. But really, in short, just start playing with those technologies. And once you have done it, you will get to a point where you ask yourself: how could I get more out of those interactions? How could I interact with those systems in such a way that the outputs I’m getting are better? That’s where you will ask questions such as, what is prompt engineering, which you just mentioned, Ross. And you might even ask those very same systems to explain to you what prompt engineering is or how to work with them. So in short, it’s all about just getting started interacting with those systems. When I work with businesses, I tell them exactly the same thing; I just use slightly different terminology. I use a term called the sidecar approach, which I can elaborate on in a second. But really, just getting started is the main message.

Ross: So, I think many of our listeners are pretty well down the track on a lot of that. What are, for example, some of the other techniques or approaches you use with these generative AI tools in your work that you have found particularly effective? Any approaches or frames of mind that are particularly helpful?

Marek: I use relatively simple approaches. I maintain a small army of chatbots, if you will, and each of them is a long-living chatbot. Some of those chats I have been maintaining for half a year or so, and they get increasingly more sophisticated the more information I give them. My personal approach is to give individual chats a name. So I have a Simon. Simon is a chat which helps me with editing some of my Substack posts: Simon reads my Substack posts and provides me feedback on them. I ask Simon, what is maybe inconsistent in a post? Or, what else should I consider to make it interesting for my readers? I don’t always trust Simon’s choices or suggestions, but I use them as really good input. Now, the interesting part is that Simon has seen practically every Substack post that I have written, both the drafts and the final versions, so it can sort of see my evolution. And so the suggestions Simon gives me right now, and I have to keep reiterating this when I’m asking for them, really follow the style that I have. So I have a chatbot for my Substack, I have a chatbot for general brainstorming around my business activities, and I have another chatbot that helps me with some more specific work-related issues. That’s the strategy I have been using for a while.
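Marek doesn’t describe the tooling behind his long-lived chatbots, but the pattern he outlines, a named chat whose full history is kept and replayed into every new request, can be sketched roughly like this. The class and method names are illustrative, and `complete` stands in for whatever chat-completion API you use:

```python
import json
from pathlib import Path

class NamedChatbot:
    """A long-lived, named chatbot like 'Simon': its entire message
    history is persisted to disk, so every new session starts with the
    accumulated context. The LLM call itself is left abstract; the
    `complete` callable is a stand-in for a real chat API."""

    def __init__(self, name: str, system_prompt: str, store_dir: str = "."):
        self.name = name
        self.path = Path(store_dir) / f"{name}.json"
        if self.path.exists():
            # Resume an existing chat with all its history.
            self.messages = json.loads(self.path.read_text())
        else:
            # Start fresh with only the system prompt.
            self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, text: str, complete) -> str:
        # Append the user turn, call the model on the full history,
        # record the reply, and persist everything back to disk.
        self.messages.append({"role": "user", "content": text})
        reply = complete(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        self.path.write_text(json.dumps(self.messages, indent=2))
        return reply
```

A "Simon" would then be something like `NamedChatbot("simon", "You edit my Substack drafts and point out inconsistencies.")`, gaining more context with every post it reads.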

Ross: Are these built on off-the-shelf models such as GPT?

Marek: Correct, I’m using off-the-shelf models. I have experimented quite a bit with Llama, running it on my machine and playing with modifying those models and their behavior, and creating sort of custom applications for those models. But I found that the effort I have to put into it, to get results that I’m satisfied with, does not really justify it. It’s simply easier for me to play with those existing off-the-shelf models.

Ross: So if you’re interacting with GPT in a single chat thread, my understanding is that its ability to use the prior information is based on its context window, which is still not that large. So how is it that it’s able to take in all of your interactions over an extended period in a single thread?

Marek: Yeah, that’s a very good point. And, to be very frank, I’m not sure how large that context window is, right? Practically, when I’m working within the chat, what this means is that I occasionally have to remind my chatbots of past interactions from the chat. So, great question. I’m not sure what the answer is, but in practice I do find myself having to remind the chatbot about aspects of what I believe it should know. Well, I just learned something valuable from you here.
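The reminding Marek describes can also be done mechanically: before each request, keep the system prompt and only as many recent turns as fit within the model’s context window. A rough sketch, budgeting by characters as a crude stand-in for real token counting:

```python
def fit_to_context(messages, max_chars=8000):
    """Crude context-window management: always keep the system prompt,
    then keep as many of the most recent turns as fit in the budget.
    Real APIs count tokens, not characters; this is only a sketch."""
    system, rest = messages[0], messages[1:]
    kept, used = [], len(system["content"])
    for msg in reversed(rest):  # walk from the newest turn backward
        if used + len(msg["content"]) > max_chars:
            break               # everything older than this is dropped
        kept.append(msg)
        used += len(msg["content"])
    # Restore chronological order, with the system prompt first.
    return [system] + list(reversed(kept))
```

In practice you would count tokens with the model’s tokenizer rather than characters, and possibly summarize the dropped turns into a reminder message instead of discarding them outright.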

Ross: Well, it’s a really interesting approach, though. I sometimes just go back to chats I’ve had before, where I’ve been in a particular conversation about a particular thing, and I go, oh, I’ll resume where I left off, because it has a bit of context already and I don’t need to tell it again. It’s a lot less structured than yours.

Marek: That’s right. And look, when GPTs were initially released by OpenAI, I was very curious, because I realized the approach was pretty much the same. It was more like an accelerated development of a GPT, right? Here are some documents, look at this page, look at that page. And with that knowledge, I create an instance, or actually an object that I can instantiate later on. So in a way, this is exactly what I’ve been doing with my long-standing chats.

Ross: So, how do you introduce these ideas in a business context? Could you talk about that a little bit more?

Marek: Absolutely. Just for a tiny bit of context: I work at Queensland University of Technology, where I run a lot of commercial research projects. In those projects, we often have to analyze data, write reports, and create all sorts of summaries of large documents. Now, you need to be careful applying everything that I’m going to say from now on, simply because of reasons such as data privacy or concerns about where the data is being stored. Especially at a university, we’re very much constrained by our data management processes and protocols. So what I’m going to say doesn’t work for every single aspect of our projects, but it certainly works for some parts.

So, for instance, when we run workshops with our project participants, we run so-called futures thinking sessions or scenario planning sessions, and we found that large language models are excellent for generating scenarios of the future based on research that we have performed. In short, we run workshops or do background research where we analyze potential directions of how the future might unfold; think about it as hundreds of post-it notes. Then we upload those post-it notes to a large language model. We’ve been working with GPT-4, not ChatGPT but GPT-4 via the API, and we get GPT-4 to basically convert those hundreds of post-it notes into sort of compelling one-page descriptions of the future. Then we give those descriptions of the future to our participants, and they’re used as provocations for our workshops. Absolutely excellent. In the past, it would take us a couple of days to create enough descriptions of the future for our workshops; currently it just takes a few minutes. And we can basically create unique descriptions of the future for every participant, but the background data is always the same, right? So they might sound different, but what drives them is the same. We use large language models in this way a lot. We don’t care about hallucinations; in fact, we love hallucinations, because they create those additional visions that we need for this particular type of exercise.
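Marek doesn’t share the team’s prompts, so the wording below is purely illustrative, but the core of the pipeline is simple: fold the post-it notes into a single instruction and ask the model for several divergent one-page scenarios. The function name and the prompt phrasing are assumptions:

```python
def scenario_prompt(notes, n_scenarios=4):
    """Fold workshop post-it notes into one instruction for an LLM.
    Marek's team used the GPT-4 API; the actual API call is omitted
    here, and the instruction wording is an assumption, not theirs."""
    bullet_list = "\n".join(f"- {note.strip()}" for note in notes)
    return (
        f"Below are {len(notes)} research notes about possible future directions.\n"
        f"Write {n_scenarios} distinct, compelling one-page scenarios of the future. "
        "Each scenario must be grounded in the notes but may extrapolate freely.\n\n"
        "Notes:\n" + bullet_list
    )
```

The returned string would then be sent to the GPT-4 API; running it several times, or with a higher temperature, yields the deliberately varied, even hallucinated, futures that the workshops rely on.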

Ross: So what you described the post-it notes, are those human-generated?

Marek: That’s right. There’s a workflow here, a bigger process, and there is a stage in which humans either analyze documents or perhaps perform interviews, where we then do qualitative analysis of the data and derive concepts, and then we send them to the large language model. Now, having said all of that, I could also imagine how we could try to automate those currently human-managed parts of the process. However, we do it this way because, in our projects, it is also important that our humans get to know the knowledge that we later work with. So it’s not just labor for the humans; it’s also a learning process for all of our project members.

Ross: I suppose one of the big questions is generative AI in the enterprise. Obviously, technology is well embedded in organizations, and usually AI is already there in various forms, but generative AI is a bit of a different beast. So when you’re working with leaders on bringing generative AI into the enterprise, to add to the things which you’ve already said, how do you help frame that journey, the roles they can find, or how this changes the nature of the organization?

Marek: There are a number of aspects to consider here, and I’ll talk about two of them. One is the fact that the leaders of organizations are acutely aware that, regardless of what they do, generative AI is already spreading through the organization. I call it shadow artificial intelligence: just like we had shadow automation, we now have shadow generative AI. It’s so easy to access and use those systems these days that it’s fair to assume a large proportion of employees are already toying with generative AI as part of their work. And so a big challenge for a lot of executives and leaders is to surface that use of generative AI, because there are massive governance-related questions around it. It’s not about stopping or blocking people from using generative AI; it’s all about making sure that the organization is aware of it. So if an employee who’s extremely efficient, perhaps because they’re using generative AI, leaves the organization, the organization is not stuck trying to replace them with someone who doesn’t have that skill. So that’s one aspect I think is very, very important: shadow AI, governance of that shadow AI, and surfacing it to the top.

The other aspect of generative AI is really fighting the hype. There’s this massive hype around generative AI at the moment, and, you know, I work with every possible industry that you could imagine. Think about, say, steel fabrication: what is the role of generative AI in steel fabrication, right? It’s probably not a large language model, although I could imagine some use cases for chatbots and so on for customers. It’s probably more around generative design and exploring completely new forms of design, especially for those highly automated steel fabricators. That’s a really interesting area. So that’s the second aspect: the business value of generative AI, finding those use-case scenarios where organizations can make money on generative AI. It’s actually a bit trickier than we think it is, mostly because there is so much hype and not enough substance just yet.

Ross Dawson: So, moving to, I suppose, the broader issue of how we do well in a world like this. You have a lovely chapter title in your book: Be the Minion Master. So tell me, how do I become a good minion master?

Marek: Well, I do have to come back to one of the points I made earlier, which is: you just need to start interacting with those minions, right? The worst thing anyone could do is to just keep preparing and preparing and preparing, and at some stage realize: what, I’ve spent too much time preparing and haven’t done anything with them. So gradually introduce them into everything that you do. What I suggest in the book is, for every activity, every task that you are performing, to ask yourself: what if I could get a small artificial intelligence algorithm to perform that particular task? And then just try doing it. So if it’s an email that you have to respond to, ask yourself, could I try that with ChatGPT? Or maybe I have a Grammarly extension, where I could just press those two buttons and see what the output is. It might turn out that you don’t like it; that’s fine, you’ve just tested it, and you know that this is not a good scenario. But in many cases, you’ll find out that it’s really helpful, and then you start using those generative AI systems. That’s how I’ve been introducing generative AI into every aspect of our projects at work, and now I’m doing the same with my personal life and activities that I do online. In the majority of cases, I’m not satisfied with how well those algorithms do. But there are individual cases that work very, very well for me, and I’ve described a few of them to you, like those generative AIs reading my posts and providing me with feedback; that’s really good. I tried in the past to get some of them to write posts for me. I hated that; they weren’t able to use my language, and I didn’t really like what they were thinking. So I’m not using them to write posts, but I love the feedback I’m getting from them when they’re reading my posts. That saves me a lot of time.

Ross: And so in The Economy of Algorithms, you also have a couple of chapters around our attitudes: the fundamental attitudes that we need to have in a world where value is increasingly created or driven by algorithms, or where algorithms play their role.

Marek: One of the biggest issues that I see with this world of increased automation, and something that I’m honestly very worried about, is the fact that people relinquish their own agency. They sort of give it all to algorithms, and they trust them a bit too early. And I think generative AI is partially to blame: those algorithms have been trained to create outputs that, as OpenAI would call it, are “median human,” right? Something that an average person would create, or an average expert in that space, whether it’s images or business models and so on. But the reality is, those systems are pretty good at those early stages, they create average content, but every now and then they come up with outputs that are absolutely shockingly bad. And the problem is, they typically start doing it after we have given them enough trust, after we’ve trusted them so much that we’re happy with them doing things by themselves, right? That’s what I’m worried about: that a lot of people will look at the outputs of those generative AI systems and think, that’s great, I can happily just give them all the tasks and let them do it. And then disasters happen. So that’s something that I’m very worried about.

Ross: I just want to say, the greatest risk is over-reliance. I don’t think we’re there yet, but when they start getting really good, that’s where we begin to say, oh, well, why do I need to check it anymore? That’s not going to be pretty.

Marek: That’s correct. And, you know, many of us don’t fully understand this shift in the world just yet. The world prior to generative AI was a world where we had databases and workflow systems: a database was a single source of truth, and the workflow system would tell me exactly what step I needed to perform after the previous step. And there would be confidence, but also correct data, behind it. Whereas with generative AI, there are no databases; there’s, say, a sort of multi-dimensional representation of concepts and the relations between those concepts. Those systems are pretty good at effectively replicating databases or replicating workflow systems, but because they are not databases, and because they are not workflow systems, they will occasionally introduce all sorts of kinks into their outputs. And this is what we need to be on the lookout for, right?

So look, if we love systems that are a bit creative, a bit random sometimes, those are absolutely fantastic systems. That’s why I use them for some parts of my processes. But if we want to run, say, an enterprise, we probably shouldn’t be using generative AI to manage our employees, right? We should be using some more traditional systems for that. But a lot of people don’t see that difference between those traditional, very predictable systems and the stochastic generative AI systems that we work with.

Ross: So, to round out: for any of us, in the next 5, 10, 20 years, who do we need to become as humans? What’s the role of humans going to be? How do we make ourselves relevant, the people who are creating value, in this economy of algorithms?

Marek: Yeah, look, you quoted the name of the chapter from the book, becoming minion masters. I think we need to understand that our role will gradually shift toward those who orchestrate a lot of computer systems, a lot of algorithms, and those who supervise those systems. But in order to be able to orchestrate and supervise them well, we need to understand what their strengths and weaknesses are. And so over the next 5 to 10 years, those are basically the skills that we need to start developing, right?

We used to say that soft skills are skills for working with people: understanding how humans behave, what to expect from them, how to influence them, and so on. We will certainly need to develop soft skills to work with generative AI and systems like that. These are not hardcore engineering skills, like those we used to need to develop a piece of software in a language like ABAP for enterprise systems like SAP. We still call it prompt engineering, as if this was engineering, a very hardcore skill, but soon we will realize that it’s more of a soft skill: how do we work with those systems, to influence them to perform particular actions, but also to predict and expect any unexpected behaviors? So that will be an interesting aspect. And due to the emergent nature of those systems, this is not something we can learn by reading a book; it will be a skill that we’ll have to keep developing and honing as we work with those systems over time.

Ross: Yeah, I think that’s fabulous. We interact with these systems through language; we talk to them in our own language. But the way they respond is not just something we find out by talking to them, as we do with anybody; it’s also changing over time. So sometimes you get one response one day and a different one the next day, or the next week, or next month, as the models evolve. And so we need to keep on learning the language of machines. So where can people find your book and your work? Where should they discover more?

Marek: If anyone’s interested in this sort of intersection of humanity and algorithms, well, you can find my book on Amazon and in your favorite bookstore; just look for The Economy of Algorithms. I also maintain a Substack newsletter, which I try to send once a week; it’s also called The Economy of Algorithms, and you can find it by typing in marekkowal.substack.com. These are the two places where you can find me. In every weekly newsletter, I also say where I’m going to be in person, if someone wants to catch up with me in person.

Ross: Fabulous. Well, I subscribe to your newsletter and find it very useful, insightful, and entertaining; you’re a very good communicator. So thank you so much, not just for spending the time with us today, but for all of your work. I think it’s really important to bring these insights and perspectives to bear in this extraordinarily interesting world.

Marek: Thanks for having me Ross.

The post Marek Kowalkiewicz on the economy of algorithms, armies of chatbots, LLMs for scenarios, and becoming minion masters (AC Ep34) appeared first on amplifyingcognition.

  continue reading

100 ตอน

Artwork
iconแบ่งปัน
 
Manage episode 404910444 series 3510795
เนื้อหาจัดทำโดย Ross Dawson เนื้อหาพอดแคสต์ทั้งหมด รวมถึงตอน กราฟิก และคำอธิบายพอดแคสต์ได้รับการอัปโหลดและจัดหาให้โดยตรงจาก Ross Dawson หรือพันธมิตรแพลตฟอร์มพอดแคสต์ของพวกเขา หากคุณเชื่อว่ามีบุคคลอื่นใช้งานที่มีลิขสิทธิ์ของคุณโดยไม่ได้รับอนุญาต คุณสามารถปฏิบัติตามขั้นตอนที่แสดงไว้ที่นี่ https://th.player.fm/legal

We humans need to be the ones who work with algorithms to make sure that they don’t take the wrong path, don’t deteriorate, don’t misunderstand our intentions, and don’t create outcomes that we don’t want.

– Marek Kowalkiewicz

Robert Scoble
About Marek Kowalkiewicz

Professor Marek Kowalkiewicz is founding director of the Centre for the Digital Economy at Queensland University of Technology, where he leads a portfolio of research projects into AI and the digital economy. He was recently named in the Top 100 Global Thought Leaders in AI by thinkers360. Marek’s new book, The Economy of Algorithms is out today as we launch this episode.

What you will learn

  • Exploring the economy of algorithms
  • The role of software and people in shaping digital futures
  • Navigating the balance between automation and human oversight
  • The metaphor of digital minions in the algorithmic world
  • Engaging with generative AI and chatbots for practical tasks
  • Implementing AI responsibly in academic and commercial research
  • The importance of human agency in the age of automated decision-making

Episode Resources

Transcript

Ross Dawson: Marek, it’s awesome to have you on the show.

Marek Kowalkiewicz: Thanks so much for having me, Ross.

Ross: So I love the work that you do and I think it’s a very interesting frame. You just got a new book coming out called the Economy of Algorithms, which we want to hear more about.

Marek: Very proud of this, I spent a couple of years working on this book. So it’s a labor of love. It’s very much a book about this new world that’s emerging. That, strangely, it’s called the Economy of Algorithms. But it’s not just algorithms, it’s software, it’s people, it’s corporations. I wanted to write about this world where increasingly, we’re giving more and more agency to software agents, right, or algorithms. We’ll let them buy things, we’ll let them sell things, and we’ll let them deliver services on our behalf or to us. And so in a way, they’re starting to behave a bit like humans, or like other organizations in the economy. So I thought, you know, we need to capture it somehow. And that, that was a spin in the Economy of Algorithms.

Ross: There are some things that I’d like to dig into. One is about the architecture. You have these smart people with PhDs working in big tech and AI algorithms who are designing the architecture of these algorithms. And I think there is also an element in the every day where all of us are interacting with AI — plays a role in shaping that human-AI relationship. How much of this happens at the level of the macro design of the AI architecture; and how much of this is shaped by each of us that uses it?

Marek: So there is this, I think it’s an interesting trajectory or a process of learning about what’s happening in the world. The more I thought about automation and giving autonomy to technologies, the more I realized how important the role of humans is in the entire process. So, while I will not question the fact that algorithms are taking over individual tasks, and that algorithms are performing entire jobs that used to be done by humans, I realized, through seeing a lot of examples of how it played out in reality, that, in fact, we need to. We humans need to be the ones that work with algorithms to make sure that they don’t take the wrong path. They don’t deteriorate, they don’t misunderstand our intentions. And create outcomes that we don’t want to create. In fact, when you look at the subtitle of the book, it’s AI and the rise of digital minions. So I specifically wanted to refer to those minions that some of your listeners might know from the movie the minions, right. So, lots of, you know, there’s those yellow creatures that want to be helpful. But if you let them do whatever they want, they will create all sorts of disasters. And I thought, that’s a perfect metaphor for a lot of algorithms in our economy. They’re a bit like digital minions. They want to be helpful. They want to work all the time for us. But sometimes they’re not very smart ,and you know, even though they think they’re being helpful, they’re creating a lot of disasters.

Ross: So, I want to get to some of the broader points around the skills that we can develop, which you addressed in the later sections of your book, but just wanted to start down by one of the some of the specific things that people can do. So there’s many tactics and approaches, and people describe some of these things as prompt engineering, but what are some of the things that people can take away and apply and be able to make themselves more productive or more effective in being able to use these tools in their work and to enhance their thinking?

Marek: So, in short, there’s something that has happened in the last year or so that has really changed the answer that I gave to such questions. So in the past, I would say, one of the most important aspects for anyone to think about is to learn at least a tiny bit of code, to understand how machines work, how algorithms, follow processes, or get to outcomes and produce outputs. But these days, I would say, given how easy it is to start working with generative artificial intelligence chatbots, and the likes, my answer would be, well, just start playing with them. Choosing any of those chatbots available really doesn’t matter. Go to Google’s Bard, or, you know, Open AI, ChatGPT. And just start interacting with them. And take maybe a task that you do at work, and see how those chatbots would perform.

Be careful: you need to be aware of a number of things. Perhaps you don’t want to share confidential information with those chatbots unless you know what happens with the information you submit to them. Be mindful of the fact that the responses they give you might not necessarily be correct, so you will have to verify whether they are. But really, in short, just start playing with those technologies. Once you have done that, you will get to a point where you ask yourself: how could I get more out of these interactions? How could I interact with these systems in such a way that the outputs I’m getting are better? And that’s where you will ask questions such as what prompt engineering is, which you just mentioned, Ross. You might even ask those very same systems to explain to you what prompt engineering is or how to work with them. So in short, it’s all about just getting started. When I work with businesses, I tell them exactly the same thing, I just use slightly different terminology: I call it a sidecar approach, which I can elaborate on in a second. But really, just getting started is the main message.

Ross: I think many of our listeners are pretty well down the track on a lot of that. What are, for example, some of the other techniques or approaches you use as you apply these generative AI tools in your work that you have found particularly effective? Any approaches or frames of mind that are particularly helpful?

Marek: I use relatively simple approaches. I maintain a small army of chatbots, if you will, and each of them is a long-lived chatbot. Some of those chats I have been maintaining for half a year or so, and they get increasingly sophisticated the more information I give them. My personal approach is to give individual chats a name. So I have a Simon. Simon is a chat which helps me with editing some of my Substack posts. Simon reads my Substack posts and provides me feedback on them. I ask Simon: what is maybe inconsistent in a post? Or what else should I consider to make it interesting for my readers? I don’t always trust Simon’s choices or suggestions, but I use them as really good input. Now, the interesting part is that Simon has seen practically every Substack post that I have written, both the drafts and the final versions, so it can see my evolution. So whatever suggestions Simon gives me right now, and I have to keep reiterating this when I’m asking for them, really follow the style that I have. So I have a chatbot for my Substack, I have a chatbot for general brainstorming around my business activities, and I have another chatbot that helps me with some more specific work-related issues. That’s the strategy I have been using for a while.

Ross: Are these built on off-the-shelf models such as GPT?

Marek: Correct, I’m using off-the-shelf models. I have experimented quite a bit with Llama, running it on my machine, modifying those models and their behavior, and creating custom applications for them. But I found that the effort I have to put in, to get results that I’m satisfied with, does not really justify it. It’s simply easier for me to work with the existing off-the-shelf models.

Ross: If you’re interacting with GPT in a single chat thread, my understanding is that its ability to use the prior information is based on its context window, which is still not that large. So how is it that it’s able to take in all of your interactions over an extended period in a single thread?

Marek: Yeah, that’s a very good point. And to be very frank, I’m not sure how large that context window is when I’m working within the chat. What this means is that I occasionally have to remind my chatbots of past interactions from the chat. So, great question. I’m not sure what the answer is, but in practice, I do find myself having to remind the chatbot about the aspects of what I believe it should know. So I just learned something valuable from you here.
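As an aside, Marek’s habit of manually reminding a long-lived chat maps onto a common programmatic pattern: keep a standing "reminder" message pinned at the start of the conversation and trim older turns once the history grows. The sketch below is a hypothetical illustration of that pattern; the class name and limits are my own assumptions, not details of Marek’s actual setup.

```python
class LongLivedChat:
    """A conversation history with a pinned reminder and a rolling window.

    The pinned system message plays the role of Marek's manual reminders:
    it is always sent, even after older turns have been trimmed away.
    """

    def __init__(self, reminder, max_turns=20):
        self.reminder = reminder      # standing context, e.g. style notes
        self.max_turns = max_turns    # how many recent messages to keep
        self.history = []             # list of {"role", "content"} dicts

    def add(self, role, content):
        self.history.append({"role": role, "content": content})
        # Trim the oldest turns once the window is exceeded.
        if len(self.history) > self.max_turns:
            self.history = self.history[-self.max_turns:]

    def messages(self):
        # The reminder is re-sent with every request, so it survives trimming.
        return [{"role": "system", "content": self.reminder}] + self.history


chat = LongLivedChat("You edit my Substack posts; match my established style.",
                     max_turns=4)
for i in range(6):
    chat.add("user", f"draft {i}")
msgs = chat.messages()
```

Passing `msgs` to any chat-completion API keeps the style instructions intact while only the four most recent turns ride along, which is the trade-off Marek is negotiating by hand.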

Ross: Well, it’s a really interesting approach. I sometimes just go back to chats I’ve had before, where I’ve been in a particular conversation about a particular thing, and resume where I left off, because it has a bit of context already and I don’t need to tell it again. It’s a lot less structured than yours.

Marek: That’s right. And look, when GPTs were initially released by OpenAI, I was very curious, because I realized the approach was pretty much the same. It was more like an accelerated development of a GPT: here are some documents, look at this page, look at that page, and with that knowledge I create an instance, or actually an object that I can instantiate later on. So in a way, this is exactly what I’ve been doing with my long-standing chats.

Ross: So, how do you introduce these ideas in a business context? So could you talk about that a little bit more?

Marek: Absolutely. For a tiny bit of context: I work at Queensland University of Technology, where I run a lot of commercial research projects. In those projects we often have to analyze data, write reports, and create all sorts of summaries of large documents. Now, with everything I’m going to say from now on, you need to be careful applying it, simply because of reasons such as data privacy or concerns about where the data is being stored. Especially at a university, we’re very much constrained by our data management processes and protocols. So what I’m going to say doesn’t work for every single aspect of our projects, but it certainly works for some parts.

For instance, quite often when we run workshops with our project participants, we run so-called futures thinking sessions or scenario planning sessions, and we have found that large language models are excellent for generating scenarios of the future based on research that we have performed. In short, we run workshops or do background research where we analyze potential directions of how the future might unfold; think of it as hundreds of post-it notes. Then we upload those post-it notes to a large language model. We have been working with GPT-4, not ChatGPT but GPT-4 via the API, and we get GPT-4 to basically convert those hundreds of post-it notes into compelling one-page descriptions of the future. We then give those descriptions of the future to our participants, to be used as provocations for our workshops. Absolutely excellent. In the past, it would take us a couple of days to create enough descriptions of the future to use in our workshops. Currently, it takes just a few minutes. And we could create unique descriptions of the future for every participant, but the background data is always the same. They might sound different, but what drives them is the same. So we use large language models in this way a lot. We don’t care about hallucinations; in fact, we love hallucinations, because they create those additional visions that we need for this particular type of exercise.
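A pipeline like the one Marek describes might look roughly like the sketch below, using the OpenAI Python SDK. The prompt wording, function names, and model choice are my own illustrative assumptions, not details from his actual project.

```python
def build_scenario_prompt(notes):
    """Fold workshop post-it notes into a single scenario-writing prompt."""
    signals = "\n".join(f"- {note}" for note in notes)
    return (
        "You are supporting a futures-thinking workshop. Drawing on the "
        "signals below, write a vivid one-page description of one plausible "
        "future.\n\nSignals:\n" + signals
    )


def generate_scenario(notes, model="gpt-4"):
    """Call the model once per scenario. A high temperature is deliberate:
    creative variation (the 'hallucination' Marek welcomes) is wanted here.
    Requires the `openai` package and an OPENAI_API_KEY environment variable.
    """
    from openai import OpenAI  # imported lazily so the prompt code runs offline
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        temperature=1.0,
        messages=[{"role": "user", "content": build_scenario_prompt(notes)}],
    )
    return response.choices[0].message.content


# The same notes can drive many distinct narratives:
prompt = build_scenario_prompt(["cheap rooftop solar", "ageing workforce"])
```

Calling `generate_scenario` once per participant yields different-sounding stories from identical background notes, the "same data, different stories" effect described above.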

Ross: So the post-it notes you described, are those human-generated?

Marek: That’s right. There’s a workflow here, a bigger process, and there is a stage in which humans either analyze documents or perhaps perform interviews. We then do qualitative analysis of the data, we derive concepts, and then we send them to the large language model. Now, having said all of that, I could imagine how we could try to automate those currently human-managed parts of the process. However, we do it this way because, in our projects, it is also important that our humans get to know the knowledge that we’ll be working with later on. So it’s not just the labor of humans; it’s also a learning process for all of our project members.

Ross: I suppose one of the big questions is generative AI in the enterprise. Obviously technology is well embedded in organizations, and AI is usually already there in various forms, but generative AI is a bit of a different beast. So when you’re working with leaders on bringing generative AI into the enterprise, to add to the things you’ve already said, how do you help frame that journey, the roles you can find, or how this changes the nature of the organization?

Marek: There are a number of aspects to consider here, and I’ll talk about two of them. One is the fact that the leaders of organizations are acutely aware that, regardless of what they do, generative AI is already spreading through the organization. I call it shadow artificial intelligence: just as we had shadow automation, we now have shadow generative AI. It’s so easy to access and use these systems these days that it’s fair to assume a large proportion of employees are already toying with generative AI as part of their work. So a big challenge for a lot of executives and leaders is to surface that use of generative AI, because there are massive governance-related questions around it. It’s not about stopping or blocking people from using generative AI; it’s about making sure the organization is aware of it. So if an employee who’s extremely efficient, perhaps because they’re using generative AI, leaves the organization, the organization is not stuck trying to replace them with someone who doesn’t have that skill. So that’s one aspect of generative AI that I think is very, very important: shadow AI, governance of that shadow AI, and surfacing it to the top.

The other aspect of generative AI is really fighting the hype. There’s this massive hype around generative AI at the moment, and I work with every possible industry you could imagine. Think about, say, steel fabrication: what is the role of generative AI in steel fabrication? It’s probably not a large language model, although I could imagine some use cases for chatbots and so on for customers. It’s probably more around generative design and exploring completely new forms of design, especially for those highly automated steel fabricators. That’s a really interesting area. So that’s the second aspect: the business value of generative AI, finding those use cases and scenarios where organizations can make money on generative AI. It’s actually a bit trickier than we think it is, mostly because there is so much hype and not enough substance just yet.

Ross: So, moving to the broader issue of how we do well in a world like this. You have a lovely chapter title in your book: Be the Minion Master. So tell me, how do I become a good minion master?

Marek: Well, I have to come back to one of the points I made earlier, which is that you just need to start interacting with those minions. The worst thing anyone could do is to just keep preparing and preparing, and at some stage realize: wait, I’ve spent too much time preparing and haven’t done anything with them. So gradually introduce them into everything that you do. What I suggest in the book is, for every activity, every task that you are performing, to ask yourself: what if I could get a small artificial intelligence algorithm to perform that particular task? And then just try doing it. If it’s an email that you have to respond to, ask yourself: could I try that with ChatGPT? Or maybe I have a Grammarly extension where I could just press those two buttons and see what the output is. It might turn out that you don’t like it. That’s fine, you’ve just tested it, and you know that this is not a good scenario. But in many cases, you’ll find out that it’s really helpful, and then you start using those generative AI systems. That’s how I’ve been introducing generative AI into every aspect of our projects at work, and now I’m doing the same with my personal life and the activities I’m doing online. In the majority of cases I’m not satisfied with how well those algorithms do, but there are individual cases that work very, very well for me, and I’ve described a few of them to you, like the generative AI reading my posts and providing me with feedback. That’s really good. I tried in the past to get some of them to write posts for me. I hated that: they weren’t able to use my language, and I didn’t really like what they were thinking. So I’m not using them to write posts, but I love the feedback I get from them when they read my posts. That saves me a lot of time.

Ross: In The Economy of Algorithms, you also have a couple of chapters around our attitudes, the fundamental attitudes that we need to have in a world where value is increasingly created and driven by algorithms.

Marek: One of the biggest issues that I see with this world of increased automation, and something that I’m honestly very worried about, is the fact that people relinquish their own agency. They give it all to algorithms, and they trust them a bit too early. I think generative AI is partially to blame. Those algorithms have been trained to create outputs that, as OpenAI would call it, are "median human": something that an average person would create, or an average expert in that space, whether it’s images or business models and so on. The reality is, those systems are pretty good at those early stages, they create average content, but every now and then they come up with outputs that are absolutely, shockingly bad. And the problem is, they typically start doing it after we have given them enough trust, after we’ve trusted them so much that we’re happy with them doing things by themselves. That’s what worries me: there are a lot of people who will look at the outputs of those generative AI systems and say, that’s great, I can happily just give them all the tasks and let them do it. And then disasters happen. So that’s something that I’m very worried about.

Ross: I just want to say, the greatest risk is over-reliance. I don’t think we’re there yet, but when they start getting really good, that’s when we begin to say: why do I need to check it anymore? That’s not going to be pretty.

Marek: That’s correct. And many of us don’t fully understand this shift in the world just yet. The world prior to generative AI was a world of databases and workflow systems. A database was a single source of truth, and the workflow system would tell me exactly what step I needed to perform after the previous step. There was confidence, but also correct data, behind it. Whereas with generative AI, there are no databases; there is, instead, a sort of multi-dimensional representation of concepts and the relations between those concepts. Those systems are pretty good at effectively replicating databases or replicating workflow systems, but because they are not databases, and because they are not workflow systems, they will occasionally introduce all sorts of kinks into their outputs. And this is what we need to be on the lookout for.

Look, if we love systems that are a bit creative, a bit random sometimes, those are absolutely fantastic systems, and that’s why I use them for some parts of my processes. But if we want to run, say, an enterprise, we probably shouldn’t be using generative AI to manage our employees. We should be using some more traditional systems for that. A lot of people don’t see the difference between those traditional, very predictable systems and the generative AI stochastic systems that we work with.

Ross: To round out: for any of us, in the next 5, 10, 20 years, who do we need to become as humans? What is the role of humans going to be? How do we make ourselves relevant, the people who are creating value in this economy of algorithms?

Marek: You quoted the name of the chapter from the book, becoming minion masters. I think we need to understand that our role will gradually shift toward those who orchestrate a lot of computer systems, a lot of algorithms, those who supervise those systems. But in order to orchestrate and supervise them well, we need to understand what their strengths and weaknesses are. So over the next 5 to 10 years, those are basically the skills we need to start developing.

We used to say that soft skills are the skills of working with people: understanding how humans behave, what to expect from them, how to influence them, and so on. We will certainly need to develop soft skills for working with generative AI and systems like that. These are not hardcore engineering skills, like we once needed to develop a piece of software in ABAP, the language for enterprise systems like SAP. We still call it prompt engineering, as if it were engineering, a very hardcore skill, but soon we will realize that it’s more of a soft skill: how do we work with those systems to influence them to perform particular actions, but also to predict and expect any unexpected behaviors? That will be an interesting aspect. And due to the emergent nature of those systems, this is not something we can learn by reading a book. It will be a skill that we’ll have to keep developing and honing as we work with those systems over time.

Ross: Yeah, I think that’s fabulous. The bots we interact with: we talk to them in our own language, but the way they respond isn’t just something we find out by talking to them, as we do with anybody. It’s also changing over time, so the same prompt might get a different response the next day, or the next week, or next month, as the models evolve. So we need to keep continuing to learn the language of machines. Where can people find your book and your work? Where should they discover more?

Marek: If anyone’s interested in this intersection of humanity and algorithms, you can find my book on Amazon and in your favorite bookstore; just look for The Economy of Algorithms. I also maintain a Substack newsletter, which I try to send once a week, also called The Economy of Algorithms. You can find it by typing in marekkowal.substack.com. These are the two places where you can find me. In every weekly newsletter, I also say where I’m going to be in person, if someone wants to catch up with me in person.

Ross: Fabulous. Well, I subscribe to your newsletter and find it very useful, insightful, and entertaining. You’re a very good communicator. So thank you, not just for spending the time with us today, but for all of your work. I think it’s really important to bring these insights and perspectives to bear in this extraordinarily interesting world.

Marek: Thanks for having me Ross.

The post Marek Kowalkiewicz on the economy of algorithms, armies of chatbots, LLMs for scenarios, and becoming minion masters (AC Ep34) appeared first on amplifyingcognition.
