Artificial Intelligence for marketers: Your top questions answered by the experts

CPD Eligible
Published: 02 October 2025

The CIM Marketing Podcast is back! In our brand-new season we’ll be bringing you six episodes focusing on how emerging technology is reshaping marketing, including a special live recording at the CIM Business Centre this November. 

In this episode, we’ll be looking at AI and what it means for marketers and marketing as a whole. Our expert panel features CIM course director and AI copywriting specialist Kerry Harrison, who brings deep insights into tactical AI implementation, and Duncan Smith, renowned for his expertise in best practice and compliance. 
 
Together, they’ll demystify AI for marketers and share practical strategies and insights into best practice to help you harness AI’s potential responsibly.
 
Whether you’re just starting out or looking to upgrade your AI approach, this episode is packed full of insights to help you get the most out of this technology.
 
This podcast will:

  • Provide guidance on practical AI implementation strategies
  • Cover the key ethical concerns and compliance requirements when using AI in marketing
  • Offer actionable tips to make your AI use more effective and aligned with best practices

Ben Walker  00:15
Hello everybody, and welcome to the CIM Marketing Podcast. Today we're starting the season with AI question time. Delighted to say we are joined by two expert guests: Kerry Harrison, who is an AI trainer and AI consultant and, fun fact, founder of the world's first AI gin. Hello, Kerry, how are you?

Kerry Harrison
I'm good, thank you. Very good. Excited to be here.

Ben Walker  00:38
Great to have you on the show.

Ben Walker  00:40
And we're also joined by Duncan Smith, who is a Fellow of the CIM and a data protection and compliance expert. He's been busy this morning with a data protection and compliance issue, about which I'm sure he can say nothing at all. But he's with us, I'm thankful to say, here in Borough this morning to join us for the show. He's founder of Icompli, a data protection and compliance consultancy. Duncan, how are you?

Duncan Smith  00:58
Very well. I can say a little about the data breach, because it's probably very interesting: don't put everybody's email in the To box. I can say nothing else.

Ben Walker  01:18
Oh, okay. We'll leave that there; it's a lesson learned for another day. Well, we're going to learn many lessons today about how to use AI in marketing, which I think has become something of an obsession for marketers. It was not there, and then suddenly it was there, in every aspect and every facet of life, and we're not quite sure how to use it to our best advantage. We're going to find out a little bit today about how to do that. And we'll start where AI itself usually starts, for most of the purposes we use it for on a day-to-day basis as marketers: with the prompt. One thing I've always wondered, Kerry, is: does the nature of the prompt really affect the output? To what extent does it affect the output?

Kerry Harrison  02:03
I think prompts really do have a big impact on the output. Prompting is one of the things I teach in all of my courses, and it's actually one of the things people ask for most: how do I get the most out of these tools? So I think it does make a huge difference. There's a really great prompting methodology called GCSE, which I think came out of Microsoft. It talks about the importance of having a Goal; some Context; some Sources, so giving it some kind of example; and then E, which is Expectations, setting out what you want the model to deliver for you. That's a really good structure for any kind of prompt you might create, and if you've got those four elements in it, you're in a really good place. So the structure does make a big difference. Obviously, one of the great things with these tools is that you can just rock up and give them a one-sentence prompt and they will deliver something for you. But what we get with that is a very vanilla, quite mediocre output. Taking the time over prompting makes that output much more valuable.
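[A minimal sketch, in Python, of how the GCSE structure Kerry describes might be assembled into a single prompt. The function and field names here are illustrative, not part of any official methodology.]

```python
def build_gcse_prompt(goal: str, context: str, sources: str, expectations: str) -> str:
    """Assemble a prompt from the four GCSE elements: Goal, Context, Sources, Expectations."""
    return (
        f"Goal: {goal}\n\n"
        f"Context: {context}\n\n"
        f"Sources / examples to draw on:\n{sources}\n\n"
        f"Expectations for the output:\n{expectations}"
    )

# Example usage: a customised brief rather than a one-sentence prompt.
prompt = build_gcse_prompt(
    goal="Write a 600-word blog post on workplace wellbeing for HR managers.",
    context="We are a B2B software brand; the tone is warm, plain-spoken and jargon-free.",
    sources="Example post we like: <paste an on-brand article here>.",
    expectations="UK English, short paragraphs, one practical takeaway per section.",
)
print(prompt)
```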

Ben Walker  03:01
The difference between a bog-standard AI gin and an award-winning AI gin?

Kerry Harrison  03:05
Maybe so, maybe so.

Ben Walker
So the context matters, Duncan, and what we actually say, Kerry, does make a difference. But as far as we can see, in most daily usage it's still keeping you in its own environment. You ask a question of ChatGPT, and your answer is then bounded, limited, by ChatGPT. In the future, what role will websites play? So instead of you going into a bespoke AI app, you simply put your question to a website or a marketing tool or what have you. Will that lead to greater diversity of answers?

Kerry Harrison
In terms of websites, I think what will change will probably be the greater use of AI agents. If you think about ChatGPT now, it has an agent tool built in, so you literally just set it a task and off it goes and does its thing. It basically visits those websites on your behalf, so you don't even have to go to them. And I wonder whether, over time, we'll have to rethink the way we currently create websites. At the moment, we create websites for people: we expect people to come onto our sites, look around them and make decisions. As agents become more commonplace, we're going to have to think about how to make our websites relevant not just to people, but also how to ensure that the agents people send out on their behalf get the information we want them to get. So I think websites will have to take on a kind of hybrid, dual purpose which, at the moment, they don't really have.

Ben Walker  04:37
One thing that ChatGPT, as the most common tool, has been criticised for, probably fairly, is what people call hallucinations, which is what every other normal person on the planet calls errors.

Duncan Smith  04:52
I call it lying.

Ben Walker  04:54
 You call it lying?

Duncan Smith  04:55
I have a particular issue when you're dealing with legal or regulatory matters. And again, this is something that marketers in that space may be using some sort of GPT engine to look for answers to. I have a folder now on ChatGPT of all the times I've had an argument with it, and I have some quite in-depth arguments; if you were to look at my chat history, I don't hold back. Quite regularly I ask it for answers to questions, and it will come up with quotes, quotes from regulatory bodies. In my case it's often the Information Commissioner's Office, and I don't recognise the quote; I know the guidance fairly well. And I say, hey, ChatGPT, can you give me the citation for that? Give me the source document so I can go and check it. And it says, sure, no problem, Duncan. Then it comes back and says, ah, now look, I haven't been able to find the exact citation. And I go, you're lying again, aren't you? And it says, you're right to challenge me, Duncan.

Ben Walker  05:54
 Yeah. 

Duncan Smith  05:54
And we have this conversation. I have many examples where, ultimately, it says: you got me.

Ben Walker  06:02
It's a very polite liar. 

Duncan Smith  06:03
It says, you got me. I made it up. You're right. And that really comes back to that E at the end, the expectation. So I've now trained the persona, as you can. I have several personas on there, and I've asked it whether I've successfully trained it or not: don't make stuff up.

Ben Walker  06:24
Yeah

Duncan Smith  06:25
If you're going to quote, quote.
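[Duncan's "don't make stuff up" persona can be approximated with a standing system instruction. A minimal sketch, assuming the official OpenAI Python SDK with an API key in the OPENAI_API_KEY environment variable; the model name and the wording of the instruction are illustrative, and no instruction eliminates hallucination, so quoted sources still need checking by hand.]

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

# A standing instruction that asks the model to cite or abstain rather than invent.
CITE_OR_ABSTAIN = (
    "You are a compliance research assistant. Never invent quotations or citations. "
    "Quote a source only if you can name the exact document it comes from. "
    "If you cannot verify a source, say 'I can't verify this' instead of answering."
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": CITE_OR_ABSTAIN},
        {"role": "user", "content": "What does ICO guidance say about consent for email marketing?"},
    ],
)
# Still check any quoted guidance against the regulator's own site.
print(response.choices[0].message.content)
```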

Ben Walker  06:27
Where is this stuff coming from? I mean, I've encountered it; journalists encounter it. I actually use it less now than I did when it first came out, because of the reliability concern.

Duncan Smith  06:37
Ultimately, it wants to give you an answer. So it uses its text prediction; it's an LLM, and it uses that to provide an answer. I don't think it's capable of saying: oh, that's a tough one, Duncan. You know, I don't know.

Kerry Harrison  06:52
It's very, very rare, but I've had it on NotebookLM from Google, where it actually said: I can't answer that question. I was so excited, because they never, ever say, no, I can't do that. As you say, they're trained to be helpful. And actually, they're more likely to hallucinate when you put them into a corner. I read a research paper recently about what happens when you ask a model to be really concise in its response, which we do a lot. If you ask it to be really concise but the topic is massive, well, if you asked a human to do that, it would be pretty impossible.

Duncan Smith  07:29
Yeah

Kerry Harrison  07:30
And then it's not going to say no, because it doesn't say no, but it also can't actually turn something like a 12-page Word document into a single sentence, and so it just makes it up.

Duncan Smith  07:41
What's really challenging for anybody looking at the response, when it makes those things up, is the fact that it puts it in a quotation block and cites the source. I'm sorry, but give me an opinion: "I think it could be this, Duncan, but I'm not sure." When it quotes a regulatory source and says the regulator says it's okay to do that, that is just plain dangerous.

Ben Walker  08:06
It is. I would have been fired 20 years ago if I'd ever done that in my profession. Are any tools more reliable than others in this regard? ChatGPT gets the brickbats, probably because it's the most high-profile. Google now, when you enter a search, automatically pings up an AI answer. Are those overviews any more reliable than what we get from ChatGPT?

Kerry Harrison  08:33
There's been some peer-reviewed research from universities checking these things. We all have anecdotal evidence; I know from my own prompts what comes back. There's been some research saying that ChatGPT and other models, Gemini for example, don't fare very well, but I don't use Gemini on a regular basis; I tend to use OpenAI's tools, and I haven't used Claude as much as ChatGPT. So I think you need to be careful about personal experience versus rigorous, peer-reviewed scientific research. Some of that suggests models can hallucinate up to 75% of the time, though I think that's getting less. With GPT-5, one of the things celebrated in the release was the lowering of hallucination rates. And in the research papers I'd seen before GPT-5, Claude was coming out slightly better, on top. But the fact that it can hallucinate at all is the issue, isn't it? Whether it's 70% of the time or 10% of the time, we have to be aware that it's a possibility, and therefore we have to be on guard whenever we use these tools.

Duncan Smith  09:42
And I think that leads neatly into the question: is the use of AI actually valuable to us if, having made time savings, we then have to spend them checking everything that comes out? I don't think we'd have to check everything. There's some great stuff that comes out without potential for harm, but the stuff that has potential for harm needs checking.

Ben Walker  10:03
A mean analysis of that is that it's a bit like having an untrained intern in the office and asking them to do a professional task, legal compliance work, journalistic pieces for publications, where very often, if you're marking their homework, if you're checking their work, it is quicker and more reliable to do it from scratch yourself. And I think that is what a lot of marketers are finding, sometimes in a not very nice way, about AI technology today.

Duncan Smith  10:40
Yeah. When you're looking at the use of AI, particularly generative content, it's a fabulous tool for generating lots of content, but it's all about the context. What is the content about? If we're talking about sustainability, for example, and we say, let's write some podcasts or some content for that, obviously there's consumer protection legislation around those things, which means we have to check, as we always would check, the content of something that matters. If you're in finance and you ask it to create the copy for a finance advert, we know there are rules the FCA puts around us as to what needs to be said: your APRs and things have to be in there. And if ChatGPT or whatever doesn't put those in, somebody has to check them. So as we go forward, that's something we really have to work hard on.

Kerry Harrison  11:32
I think it also depends on what you're creating with it, though. For me, because I work mainly in copy and content, you don't necessarily have to fact-check the things it's generating, because it might be some ideas or a starting point, or it might be some social media posts with no stats or facts in there; it hasn't necessarily gone off to look at research papers. So I always just say: if it's anything statistical or factual, or it's bringing back, as you say, something from some paper somewhere, then yes, you need to fact-check it. But there are often times in marketing when you wouldn't need to go through that process, which, the way you describe it, sounds quite laborious. If you just want 20 ideas really quickly, just to get your own brain working, it's great.

Ben Walker  12:20
That's right. It's horses for courses, isn't it? As an ideas-listing tool, it can be very, very useful. It takes out that initial brain fog: I've got to get in and try to find a load of examples to start me off.

Duncan Smith  12:37
It's a fabulous springboard. We always talk about how you springboard ideas in meetings: everybody sits around and goes, right, we need to come up with an idea for a white paper, an article or something. Recently I did an article similarly, and it just said, have you thought about science fiction and AI, and what happened with replicants and Philip K Dick's writing? And immediately I thought: you know, I hadn't really thought of that. But when you go away and think about it, it suddenly becomes an article. So it's a great surfacing tool for ideas. As you said, though, if there are stats... My editor called me out on what Yul Brynner actually did. Was he in Westworld, or wasn't he? The editor caught me out on that. That was a human in the loop who said: ha, Duncan, you've made a mistake there. Actually it was this movie, and then later the TV adaptation.

Ben Walker  13:29
Yeah 

Duncan Smith  13:29
And that's a brilliant role for the editor, to spot that. The danger, I think, is that you take that content as read, you run with it, and you don't fact-check it. That's why having a copywriter or an editor who really knows their stuff is vital.

Ben Walker  13:47
Is it about knowing one's own limits and the limits of the technology? The way the technology has been presented, or certainly the way some of its greatest advocates present it, is that it is 42, the answer to life, the universe and everything. If you treat it in that way, you're going to come unstuck very quickly. If you use it within the limits of what it can do, as an idea-generation tool, as a springboard, it can be incredibly powerful. But you've got to know the limits of it, and of yourself.

Kerry Harrison  14:19
Yeah, I totally agree with that. I think it's important to know when not to use it. For example, for highly conceptual creative work, I just don't think it can do it, and believe me, I've tried everything. If you're doing above-the-line advertising, where you need something really quite off the wall, real lateral thinking, it just can't do it. It's a generative engine. It's brilliant for starting points, but if you want something off the wall, you're going to have to do that yourself. I've tried it for lots of things, especially conceptual ideas, and what I realised after spending hours and hours trying to make it think laterally, even giving it things like Edward de Bono's tools around lateral thinking, is that I have to remember it's a generative tool. So now I just don't go there. I sit down with notepaper and a pen, which is what I've done for the last 20 years for any kind of conceptual work, because I know there's no point. It's also quite helpful to know what it can't do; otherwise you spend hours and hours trying to make it do something it actually can't, and just waste loads of time, and that's not the point of AI. It's super helpful to know where it works well. And in terms of our own limitations, I guess there are certain things AI can do better. You can get 100 ideas from it in a matter of seconds, starting content ideas, for example. I'm not going to come up with 100 ideas in two seconds; I can't possibly do that. So it's being aware of that, but also knowing where you as a human can make a really big difference: conceptually, strategically, anything super creative. And something you just mentioned as well, around the deep expertise that's necessary. I'm a copywriter by trade, 23 years of copywriting, and I feel that's a real benefit, having those skills, because when I look at AI output, I know what good looks like. I can look at it and go: okay, this isn't working for me for this reason; this is a bit repetitive, so I'm going to shift that round. Leaning into your expertise in your field is really important with AI, and that combination of deep human expertise and AI is a really brilliant combination.

Ben Walker  16:22
You've hit on something really interesting there. We've been a bit down on AI at the start of this show, but there's no need to be, because, as we've said, it can be a very powerful and positive tool. One of the criticisms we've covered is the hallucinations, the errors, the lying, as Duncan puts it. Another flaw that marketers have encountered very quickly is this idea of homogenisation: that if you devolve your creative work to an AI, it will generate for you pretty much what it's going to generate for every other marketer in the world. And you've hit on it there, Kerry: that's not the way to use it. If you want to do something conceptual and interesting, that's got to come from inside here.

Kerry Harrison  17:05
And it's hard work. Of course, but it's no harder work than it was before.

Ben Walker  17:08
It's just that people, maybe... Do you think it's fair to say, Duncan, that too many people started using it as a crutch, because the crutch is there?

Duncan Smith  17:17
It's a question of scale as well. I'm fortunate in that I don't have to deal with scale; I don't have to write several hundred pieces for social media a week. I can spend time crafting an article and thinking about the technical aspects. But if I've got a task where volume of content is important to me or the organisation, of course I'm going to use AI to generate a lot of that content, and I'm going to have a team of editors looking at that content specifically; I'm going to employ professionals to check brand voice and those sorts of things. But boy, can that thing put content out. And on homogenisation, there is that very philosophical point that at some time in the not-too-distant future, AI is training itself on its own output. If we allow that to happen, then content is going to become pretty dull and fairly inaccurate, I would say.

Ben Walker  18:16
Dull and inaccurate and inauthentic. In theory, we're not going to have that brand authenticity, which is what we all crave as marketers, if we do as Duncan describes, not as Duncan recommends, and devolve this sort of process to an AI.

Kerry Harrison  18:33
Yeah, and that's something I've been concerned about for quite a long time with AI. It's something I get asked a lot in the courses, because I run the AI copywriting course, and it's one of the key questions from copywriters: how can I use these tools and still stay on brand? How can I still stay authentic? So I have a methodology called the AI sandwich, which I created at the beginning of 2024. It's the idea that to get the best, most authentic content out of AI, we need a three-part process: human, AI, and then human. First, the human before we use the AI. As tempting as it is to just turn up at ChatGPT with a one-sentence prompt, "create me a blog on workplace wellbeing", the prompting we give it, the time we take before we even touch the tools, is really important: having an objective, knowing what the brand is, putting any research we've got into our prompting, thinking about any criteria or ideas of our own, so that the prompts we create are as customised as they can be. Then we get the AI to do its thing, which is amazing. And then afterwards we've got the human again. This is where we sense-check it, we fact-check it, and we add our own deep expertise of our brands, our markets, our customers, the world we live in, into the copy or the output. For me, that's how we get the best work out of it: Human - AI - Human. If you do that, you can maintain a level of authenticity we wouldn't get if we just turned up with a single sentence and didn't look at it afterwards. One of the biggest pleas I constantly make in my course is: please don't copy and paste output and just use it. Please add something of yourself into it, because it's so important for that authenticity side of things.
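[As a rough sketch, Kerry's AI sandwich can be pictured as a three-stage pipeline in which the first and last stages are deliberately human. The function names and the review checklist below are illustrative, not a formal part of her methodology.]

```python
def human_before(objective: str, brand_notes: str, research: str) -> str:
    # Stage 1 (human): invest in the brief before touching the tools.
    return f"{objective}\n\nBrand voice:\n{brand_notes}\n\nResearch to draw on:\n{research}"

def ai_draft(prompt: str) -> str:
    # Stage 2 (AI): the model does its thing. Stubbed here; any text model would do.
    return f"<model output for: {prompt[:40]}...>"

def human_after(draft: str) -> str:
    # Stage 3 (human): the pass no draft should skip. Stubbed as a checklist;
    # in real use this is a person editing, not code.
    for step in ("sense-check", "fact-check", "add your own brand and customer expertise"):
        print(f"review step: {step}")
    return draft

copy = human_after(ai_draft(human_before(
    "Write a blog on workplace wellbeing.",
    "Warm, plain-spoken, no jargon.",
    "Notes from our staff survey.",
)))
```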

Ben Walker  20:26
That's a really interesting tool, that three-word tool: Human - AI - Human. If you hold that in your mind, Duncan, you know that the last thing in the process, before anything goes out there, is a human and not a machine.

Duncan Smith  20:41
From a marketer's perspective, if you start engaging with AI and really getting to grips with the tool, you have to think about where it fits into the marketing workflow. How do we use that tool? There are a lot of analogies you can use, but in food manufacturing there was always something called HACCP: hazard analysis and critical control points. The last thing you want is a blue Elastoplast in your donut, so there's something that checks for that before it goes out. The food industry knows it needs to check; there's high risk involved. What we need to do as marketers is think: where in that human sandwich, where in the process, does that check happen? Say I have a process that says I need to get copy out on Thursday afternoon and email it to an editor, and then it's gone. Where in that process do I put the check? We call it pulling the domino. If you set dominoes up in a room and somebody walks in and trips over them, all the dominoes go, pfffff. What you really want is somebody holding the 10th domino, who can say: you shouldn't have kicked that first one off, should you? I hold the 10th domino, which means I decide when it's going to run. I put it in, and then you go: okay, now flick the dominoes. People talk a lot about human in the loop, but what does it actually mean? It means I have to think about who presses send on a particular article, who does the checking, and who has checked that the check took place. We can learn a lot from hazardous environments, hospital theatres, commercial airlines; we can learn how those people keep us safe, and take a little of that learning into our own workflows.
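[Duncan's "10th domino" is essentially a hard gate in the publishing workflow: nothing runs until a named human releases it, and the release is recorded so you can later check that the check took place. A minimal sketch; the class and field names are illustrative, and it assumes Python 3.10+ for the `X | None` annotations.]

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ContentItem:
    body: str
    approved_by: str | None = None     # who held the 10th domino
    approved_at: datetime | None = None

    def approve(self, reviewer: str) -> None:
        # The human check, recorded so we can later verify the check took place.
        self.approved_by = reviewer
        self.approved_at = datetime.now(timezone.utc)

def publish(item: ContentItem) -> None:
    if item.approved_by is None:
        raise RuntimeError("Blocked: no human has released this content.")
    print(f"Published (approved by {item.approved_by} at {item.approved_at:%Y-%m-%d %H:%M})")

draft = ContentItem(body="AI-drafted copy for Thursday's send.")
# publish(draft)  # would raise: the dominoes cannot be flicked yet
draft.approve("duncan")
publish(draft)
```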

Ben Walker  22:26
It's great advice. How common is it that people, companies, marketers are following it? My anecdotal impression is that there's a lot more bland around now than there was two years ago.

Kerry Harrison  22:37
Probably. Because I'm really obsessed with copy, even just today on the tube I was looking at all the ads and thinking: I wonder if that's AI-generated. And I saw this really lovely one, actually, almost like a story, on a board in the tube this morning. I read it thinking, I bet this is going to be AI-generated, it will just be really generic. And it wasn't at all. It was really thought out; it was just so lovely, and I was so glad it wasn't AI-generated. I just knew it wasn't, because there was no way AI could be that creative. So although I think we will see more of that homogeny, I also feel that if you can be creative, and you can actually be bothered to do the hard work, you will really stand out. Maybe it's just me taking joy in seeing something I know AI couldn't do, but I think there'll be a real divide between the stuff that's very generic, which we just pass over, and the stuff that's really beautiful, and the beautiful stuff is where we'll get the most from.

Duncan Smith  23:37
And that's a serious problem as well, when we talk about adoption of AI and being on brand: human values. Machines are struggling with human values. Why is it that one human can recognise another? What is it in a text that makes you go: ah, thankfully that's not AI? We're very capable of spotting it. So we need to be very careful when we start putting the AI engine in place for our chatbots, for example, because that insincerity, that lack of empathy, that lack of humanness, spotting the replicant as Blade Runner would have us do: we're very good at it. We might test its emotional empathy and go: that's not written by a human. When we get responses back to customer complaints, for example, we read them and go: oh, that is so written by a bot. I accused somebody at Microsoft the other day of being a bot. We were having a bit of an argument about something, and I said: please could I have something other than a bot answer me? And it came back and said: I'm not a bot. I said: yeah, but...

Ben Walker  24:48
You would say that.

Duncan Smith  24:50
They're using AI, as Kerry said, cutting and pasting answers into my response. And it's so obvious. I mean, even the em dash: it's obvious where it's come from. So that's one of the things we have to be careful about, particularly if your brand is about authenticity and being in touch with people.

Ben Walker  25:11
There has been a sort of AI-ese, hasn't there, a language of AI that has become a bit too easy to spot, certainly from the off-the-shelf AIs. If we're companies, marketing departments or whatever, and we're trying to design our own AI agent, create our own AI tool, is there a way we can make it harder to spot, make it more convincing, make it more human?

Duncan Smith  25:33
Tell your persona never to use an em dash, never to end with five bullet points; those things are just so AI. We can do that. And Kerry's going to have a lot more examples.

Kerry Harrison  25:46
Yeah, there are lots of ways I cover in the course around preserving tone of voice. I think it's about breaking down your own processes, and again, this is why I go back to the real importance of expertise. I've created custom GPTs, for example, that write LinkedIn posts for me, and in order to do that I had to almost go back and ask: when I write LinkedIn posts, what do I do? What process do I move through? What does it mean to create a hook, what does that look like, and how do I write it? It's being able to really break down your own processes, and to think about what it takes to write great copy or great content or a great image, or whatever it might be. That is really, really helpful, along with a deep understanding of your tone of voice and how it differs from conventional ChatGPT output. But a lot of it comes down to the prompting. It's the difference between turning up with a one-sentence prompt, "write me a blog on...", versus: write me a blog; here's my information; this is the tone of voice I'd like you to adopt; here are some examples of the tone or style I'd like you to follow. So again, it goes back to that GCSE side of things. Giving examples of what great copy looks like, of what on-brand content looks like, can make a really big difference as well.
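[One way to read Kerry's point about custom GPTs is as a standing instruction set plus worked examples of your own voice, in effect a few-shot prompt. A sketch under those assumptions; the voice rules and the sample post are invented placeholders.]

```python
# Standing instructions a custom GPT (or Claude Project) might hold: the broken-down
# writing process plus a concrete example of the brand's own voice.
TONE_OF_VOICE_INSTRUCTIONS = """
You write LinkedIn posts in our voice.
Process: open with a one-line hook, tell one short story, end with a question.
Voice rules: first person, UK English, no em dashes, no closing bullet lists.

Example of a post in our voice:
---
I nearly deleted my best idea this morning.
<rest of an on-brand example post goes here>
---
"""

def make_post_prompt(topic: str) -> list[dict]:
    # Messages ready to send to any chat-style model.
    return [
        {"role": "system", "content": TONE_OF_VOICE_INSTRUCTIONS},
        {"role": "user", "content": f"Write a LinkedIn post about: {topic}"},
    ]

messages = make_post_prompt("what a copywriter learned from training an AI")
print(messages[0]["content"][:80])
```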

Duncan Smith  27:03
So, I've not tried it; I don't know whether you've tried that. Somebody advised me the other day, talking about prompts and improving prompts, because it's so important, to take the prompt from one engine and put it in another. So you write a prompt and it comes back, and then you say: oh, Claude, could you take a look at this prompt that came out of ChatGPT? Can you improve it? Actually bouncing the prompts around all the different engines. Does that work?

Kerry Harrison  27:24
Yeah, I have done that quite a few times, where I'll say: what do you think about this? I love Claude, so I tend to stick there, but I use ChatGPT as well, and I'll get them to assess each other's work. The only thing is, going back to what we talked about earlier, it's never going to say no; it will always find an improvement. You can also ask the model to find problems with its own answer, so you can do that as well. But it never says: oh, there's no problem here. I did it literally a couple of days ago, where I wondered, if I change everything it said and put it back in, will it still find something? It's like an ongoing thing: it will find more and more and more, because it's never going to go, no, there are no problems here.

Duncan Smith  28:03
I always make sure I tell the other one where it came from. So it's like: okay, this came from ChatGPT; come on, Claude, what can you do? And maybe that improves the output. Ultimately, someone's going to create, or it's already been created, I'm sure, the agent that literally bounces it around as many of those as possible and iterates until we get the right one.
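[A sketch of the prompt-bouncing Duncan describes, assuming both the official OpenAI and Anthropic Python SDKs with API keys in the environment; the model names are illustrative. As Kerry notes, the critic will always find something, so treat its suggestions as options rather than verdicts.]

```python
from openai import OpenAI   # assumes OPENAI_API_KEY is set
import anthropic            # assumes ANTHROPIC_API_KEY is set

openai_client = OpenAI()
claude_client = anthropic.Anthropic()

prompt = "Write me a blog on workplace wellbeing for HR managers."

# First engine drafts a response to the prompt.
draft = openai_client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# Second engine critiques the prompt and the draft, and is told where they came from.
critique = claude_client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": f"This prompt and draft came from ChatGPT.\nPrompt: {prompt}\n"
                   f"Draft: {draft}\nHow could the prompt be improved?",
    }],
).content[0].text

print(critique)  # it will always propose improvements, even to a good prompt
```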

Ben Walker  28:29
It does sound like the start of the prologue of a Robert Harris novel, where these AIs end up prompting each other and there are no humans in the loop. But it's good advice. Are the premium versions any better? We've been reasonably critical today, with good reason: we need to know the limitations. Are the premium versions of the main off-the-shelf platforms any better? Less homogenous, more accurate, more authentic? Is it worth it?

Kerry Harrison  29:04
I think the reason to upgrade is more around usage limits. If you stay in the free model and then move up to the first paid level of ChatGPT, you just have more usage, so you don't run out. It's really frustrating if you're doing work and then run out of credits, or have to wait. Sometimes with Claude, even though I'm on the paid version, and I use it quite a lot, it will say: right, now you've got to wait three hours until you can carry on. So frustrating. So that would be a reason to go. You often get better models, so maybe you get a slightly more nuanced output. And if you're paying, you also have custom GPTs with ChatGPT (in fact they all have something similar; in Claude it's called Projects), where you create something more customised, which you don't get at the free level. So there are certain reasons to upgrade. I don't think it necessarily stops hallucinations, or gets over some of the issues we've been talking about, but you'll definitely avoid the frustration of the limits, and you can definitely customise, which can be very helpful in getting something that's not a generic, boring, vanilla, like-everyone-else output. One of the great things about a custom GPT is that you can give it instructions and give it knowledge, so what it generates is not what everyone else is generating. For me, also because I'm training, obviously, I pay for all of them, and yes, I do think it's worth it for that.

Duncan Smith  30:29
And it's worth it from another perspective as well. In larger organisations we're often discussing with compliance, with legal, with marketing, for a budget: can we put this in? One of the ways we can move forward faster, if adoption is being blocked because somebody's saying we'll go slowly, we're not sure it's safe yet, is to think about the compliance issue. The enterprise versions, the paid-for versions, give you the possibility of on-premises deployment. They'll certainly give you a ring-fence around it, so you can have a no-train setting, meaning whatever you put in there doesn't go into the public training model. You can encrypt content, and you can actually specify geographically where the data is stored. Which is super dull, but it's the world I live in, and it's what makes us buy the product, so the marketers can then use it.

Ben Walker  31:20
I don't think it's super dull at all. I think that's one of the biggest stumbling blocks I've had to using AIs, for that exact reason. If I'm using the standard off-the-shelf ChatGPT, I'm working on stuff that's not in the public domain: it's yet to be published, and it may be to some degree confidential prior to publication, as most written work is. If I took it into ChatGPT, for example, and said, here's what we've got, can you give us some ideas for three other case studies on this theme? I don't do that, because I think I've just given you all of that information, which you're now going to disseminate to the world and her wife.

Duncan Smith  32:06
It is super important to think in context. Some organisations are not going to have to worry about this at all. Whereas if I'm in a law firm looking at AI and thinking about loading up case files, it's: no, please don't do that until we've ring-fenced the whole thing and we're satisfied we're not going to breach somebody's confidentiality. So there are also those kinds of enterprise risks, which can be mitigated by buying the enterprise version. And it sends a very strong message on what we call shadow AI, which is a huge problem in terms of compliance: if you don't implement AI, half the team have got it on their phones anyway, and they're using the free version, of course, to do what you would like them to do on the paid version. So it's a really important step to take.

Ben Walker  32:55
That strikes me as one of those problems organisations and companies think they control and have absolutely no chance of controlling. I remember the days when mobile phones first came out: you weren't allowed to have a mobile phone at your desk, in theory, so everyone put it in their pocket or their handbag, and nobody knew. The same is now happening with shadow AI. It doesn't matter how many rules you have in an organisation; people can go on a lunch break and use a shadow AI, and there's no way of stopping it. So better to embrace it, we think, and probably worthwhile getting an enterprise version. It certainly addresses the quantity issue, which is the first barrier.

Kerry Harrison  33:35
Sorry to jump in around the enterprise version, but I feel that if you're a large business, that is a perfect answer. I work a lot with small businesses, though, and I'm an independent consultant who can't access the enterprise version. If you get to Teams you get a certain level, don't you, but nowhere near the enterprise level. So in that case, the idea of "don't put anything private, confidential or proprietary into the models" is something you just have to have in mind all the time, because not everyone can access enterprise; smaller businesses wouldn't be able to. It would be amazing; I'd love to be able to access it and just tell it all the things I want to tell it. But otherwise, I have to be quite careful.

Duncan Smith  34:21
You do. People bandy around, "Oh, let's put it on premises." Well, that means you've got to have a massive GPU engine sitting somewhere. So the reality check is that for the vast majority of people that you and I train, it's just not going to happen. We're going to be taking commercial off-the-shelf software; we're going to be looking at what we can literally access straight from the web. So it's ChatGPT, and I'm probably going to pay for the cheapest version I can get that gets me the most attractive model. And that's where we should be training. That's why we're talking about it today, because we should be talking to marketers about the reality check: we're not living in the world of enterprise, million-pound budgets.

Ben Walker  35:04
That's really interesting. There is, technically, theoretically, a solution, a sort of locked-up enterprise solution, which addresses all of the issues we've talked about this morning. But the reality is, it just ain't going to happen for most of the public. And it's not going to happen for most marketers, because most marketers don't work for a multi-billion-pound blue chip that can afford to have its own data centre on site and create this thing. So again, Human - AI - Human: we've got to be the guardians; we've got to be in charge of what we're doing ourselves. It comes down to personal responsibility.

Kerry Harrison  35:42
I think so. And also transparency with clients. I've got copywriting clients who are very happy for me to use AI, and I always ask them: are you happy for me to use AI with this? Some are just like, yeah, put everything in, I don't care, put all these reports in; others say, actually, this is still embargoed, or this hasn't gone out to the public yet. So it's about having those conversations. And yes, you have to take personal responsibility for that; otherwise people are just uploading anything, really.

Duncan Smith  36:12
It's a really tricky balance as well, because you're talking about getting authenticity into all the content, which means you need to feed in your corporate brand voice, your documents. There are all sorts of things you should do to get the best content out of AI. But as that's happening, you need someone to grab that 10th domino, or whatever it is, and go: sorry, what's that you're putting in?

Kerry Harrison  36:30
Yeah.

Duncan Smith  36:31
So, let's have a pile of documents or PDFs. Those? Yeah, fine, those are good. Those? Not sure. Legal, can you check those before we just upload everything? Training that persona involves putting all that information in, so just be a little bit careful: the human being careful.
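[The "check before you upload" step Duncan describes can itself be semi-automated: a crude client-side guard that refuses to send material carrying confidentiality markers to an external model. A sketch only; the marker list is illustrative, and a keyword scan is no substitute for a legal review.]

```python
# Words and phrases that should stop a document going to an external model.
CONFIDENTIALITY_MARKERS = ("confidential", "embargoed", "internal only", "privileged")

def safe_to_upload(text: str) -> bool:
    """Crude pre-upload check: block anything carrying an obvious confidentiality marker."""
    lowered = text.lower()
    return not any(marker in lowered for marker in CONFIDENTIALITY_MARKERS)

documents = {
    "brand_voice.pdf": "Public tone-of-voice guidelines...",
    "q3_results.docx": "CONFIDENTIAL: embargoed until 9am Thursday...",
}

for name, text in documents.items():
    if safe_to_upload(text):
        print(f"{name}: fine to add to the model's knowledge")
    else:
        print(f"{name}: hold back and ask legal before uploading")
```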

Ben Walker  36:51
Some organisations are so worried about contravening copyright or other regulation that they're actually making suppliers sign to say they have not used AI in the work they've submitted. Do you think that will become more common?

Duncan Smith  37:07
We're way, way behind in terms of copyright legislation; it's a can that's being kicked down the road at the moment, very definitely. But I think that's Canute and the tide, or any other analogy you like. AI is going to be a very valuable tool for content creation. It does segue, though, into this issue of transparency and provenance, and whether we should be watermarking content; that's something that's coming up. I don't know whether you're copywriting anything that says "joint-authored", or whether we're teaching people now that they should watermark content made with AI.

Ben Walker  37:43
So my byline would become not Ben Walker, but Ben Walker and AI.

Duncan Smith  37:48
Ben Walker and my mate the AI engine. 

Ben Walker  37:51
Yeah. For the record, I haven't used AI... so far.

Duncan Smith  37:55
There's legislation coming forward already; it's already in EU codes of practice. In terms of the provenance of content, it's so important that the rest of us understand where it came from. So we're going to have to think about that.

Kerry Harrison  38:10
Yeah, and especially with things like images. I write a newsletter, and I sometimes use AI-generated images in it, but I always mark them as AI-generated: I'll say "generated on Midjourney" or "generated with Google", or whatever it might be. I don't ever want someone to come to my newsletter and wonder: is that really Kerry on a merry-go-round in Oxford, which I was, or is that an AI-generated image of Kerry? I never want that to be a question. I want people to come to my newsletter and trust that what they see is what they see, so I always mark it. Again, it comes down to that personal responsibility. And the same with my copywriting clients. I've worked in AI for six years now, so if a client comes to me for copy, they kind of know AI is on the agenda. But regardless, I always have the conversation: where are you happy for me to use it, and where are you not? Are you okay with me using it for research and initial thoughts? Do you want me to write with it? Some people who want a very quick turnaround are like: we're happy for you to do it as quickly as you can and maybe just give it a light edit. Others don't want me to use it at all, and that's absolutely fine. But having those conversations is really important. I also know a lot of copywriters who feel stuck between a rock and a hard place: I kind of need to use the tools, but I worry that if I tell clients I'm using AI, they won't trust me any more. We do have to have those awkward conversations, because trust is such an important part of the relationship, and with AI, if you lose trust with long-standing clients, it could be super detrimental.

Ben Walker  39:43
You've hit on something really important there, which is that the discussions you're having with your clients are not binary. It's not "have you used AI, yes or no?", probably because in the profession you're in, the job you do, people expect you to use AI to some extent.

Kerry Harrison  39:58
Yeah, yeah. 

Ben Walker  39:59
The discussion you have is about the extent to which you've used it, and the extent to which it has contributed to your final output. So you're transparent about it, and you're having a conversation about the grey areas in the middle, the matter of degree. Is that common? Are people having those conversations enough?

Kerry Harrison  40:19
From the people I've spoken to, it's not an easy conversation to have. In a way, I feel quite lucky, because I've been working in AI since 2018 and I'm very much known in the AI space, so if someone comes to me for copy, they already know it's probably going to be in the equation somewhere. I think it's more difficult for a copywriter who's always been 100% human, who hasn't really played in that space before, to have those conversations. But I just think it's so important, because if you lose trust with your client, you could potentially lose the client, however hard the conversation is. I also run my own courses outside of the CIM, where I go into organisations, and I went into a copywriting agency recently where we reached this idea of actually putting it in the brief: having the conversation with the client at brief stage and saying, where are we going to use AI for this? Do you want to use it here, or here, or not at all? I thought that was really nice, because the briefing process is the point where we speak to the client anyway, so it makes sense to do it there. And it felt like less of an awkward conversation, because the conversations about process and what the work will look like happen there anyway, so why not talk about AI? I should imagine clients are probably quite relieved that the conversation is happening, because it's probably awkward for them too: have they used AI or haven't they? Should I still be paying them the same amount if they've used it? Having that conversation at brief stage means everyone's in the clear: we all know what we're doing, and we've been very transparent.

Ben Walker  41:43
There's a danger that the client feels it's sort of impolite to ask. Potentially, yes. Are we telling this supplier that we suspect him or her of the work not being their own?

Duncan Smith  41:55
But you're paying for it as a client; you're paying at the end of the day. So there's a value you attach, and what is it you attach the value to? Is it authentic content, or is it content that does a job? If the content does the job and it was written by AI, do I have a problem? I paid some money and it did the job. Or am I paying for a real person to give me that empathetic view, or whatever it is? So that conversation up front is, I think, so important. Just get it out of the way.

Ben Walker  42:26
Do you think enough people have that conversation up front at the moment?

Duncan Smith  42:29
No, it's just not happening yet. And people are starting to run your content through AI checkers, and I'm sure they read the result and go: well, it's come up as 90% AI; and you're thinking: no, I definitely wrote that. When you read something that's written by a person, though, it's fairly obvious it's been written by a person. And depending on your audience, the colloquialisms you can add as a native speaker of a particular language are something you can genuinely put into articles. But they're also a huge risk from AI when you start to use those localisation tools. You write some content in one language, and then you go: well, that's great, we'll just use AI to reach into the Spanish market or the Chinese market. You only have to Google the mistakes AI makes in localisation, and you'll very soon find the Spanish one about the diarrhoea pills; Coors Light will never forget that one. So that, I think, is an important part of understanding what part AI plays in authoring content, because if you've used the machine translation tools and not had them checked by a native Spanish speaker or native English speaker, you are short-changing the client big time.

Kerry Harrison  43:56
And how reliable are the tools that analyse whether something is AI-generated or not? They're often really unreliable. I know copywriters whose clients have run checks and been told the copy is clearly AI-generated when they wrote it from scratch. And yes, I'm using em dashes: copywriters love an em dash. It's really upsetting that the chatty machine now loves an em dash too, because loads of copywriters are debating whether to stop using it. No! It's a great piece of punctuation.

Duncan Smith  44:23
I think that's where we start dangling the kitten in front of the copywriter and seeing if they pass the empathy test, the replicant check. You sound like AI, but you said you weren't... wait a second.

Kerry Harrison  44:32
Yeah, I know, it's really hard. Those tools aren't that reliable. I remember OpenAI created one originally, didn't they, and then shut it down because they couldn't make it work. If OpenAI can't make a detection tool reliable enough for them to run it, then who can? And it's a worry, isn't it, when universities and teachers are marking your work and they say it's plagiarised or AI-written, and they're wrong: I wrote it! Give me my GCSE.

Ben Walker  44:58
Well, history is littered with examples of people being accused of plagiarism when they're not plagiarists. So even the detection tools aren't up to scratch. But it's interesting: this conversation has made me more positive about one of the big, hot issues around AI for marketers at the moment, which is that the machines are going to take our jobs. Because what I've heard today is that if you devolve your work to machines, there's a high risk it will be homogenous and inauthentic, littered with hallucinations, slash errors, slash lies, and the authenticity of your brand and product is likely to suffer. And the answer to this is Human - AI - Human: a human, the marketer, at the start and the end of the process. And yet, if you speak to lots of marketers, particularly those earlier in their career or mid-career, not at the top of departments, they have feared for their jobs since the advent of these tools. Is it a rational fear, Kerry?

Kerry Harrison  46:18
Oh, it's such a difficult question. For young people, or people at an early stage of their career, I feel it's really a good idea just to get on top of AI and see how it can help you. In a way, if you're further into your career, you have the deep expertise we've mentioned a few times. I'm in a really privileged position to have had 20 years of experience, to have been trained up as a graduate trainee copywriter, to have learned from people above me at every single stage of my career. I'm so lucky to have that and to have those skills, and I'm not sure whether young people today will have those opportunities. If a lot of the more basic roles can happen with AI, will we need as many juniors? But I do think, as employers, we'll have to think about how we make sure we keep training the young people, how we make sure we're developing expertise alongside developing our AI. Because at some point, when the older people like me leave the profession, who's going to train up the people below? We need to think longer term. We're very good at thinking short term, about the immediate impact of AI and how it can improve our bottom line, but we also need to think about the other side: we can't just get AI to do all the basic jobs; we need to be training junior members of the team as well. I don't know exactly how we do that, but that's what I'm thinking.

Duncan Smith  47:49
I'm going to be blunt and say some are out of a job.

Ben Walker  47:52
Okay

Duncan Smith  47:54
I think the writing's on the wall for anything that involves large volumes of statistical data input and analysis, programmatic marketing, those kinds of things. For some of the agencies involved in programmatic, and some of the jobs currently done by people, the machine does it better. So I think those jobs are going. But that doesn't necessarily mean the marketer has lost a job; the marketer has to retrain, rethink. We need to understand what the tool does, and then how to use it; I need to be the person controlling the tool. So yes, at the top of the tree there's the strategist, and the senior marketers are very comfortable, because we need those people to think about how to use the tool strategically. The real risk, as Kerry said, is the seedlings. The real risk is: how do we maintain a flow of marketers new to the profession, coming through and becoming valuable enough to survive the cut? And the responsibility there is on the employer to recognise that you have to bring those people through; otherwise there isn't anybody in place. Yeah, exactly.

Ben Walker  49:05
Interesting. So the total number of jobs will probably be similar, maybe even greater. But the simple truth is that on the bottom rung, in a lot of cases the tasks those people do could be done by machines. Yet in order for organisations to protect their pipeline, to protect their future and make sure they've got talent flowing into the business, they're going to have to tolerate the fact that at entry level, sometimes a task could be done by machines, and let human beings do it anyway.

Kerry Harrison  49:40
Yeah, or work alongside AI to do them, so you are still the conductor of the orchestra, as it were. This is why I think it's really helpful for young people to understand AI and AI tools before they arrive in the job market. There's a stat I heard recently, again out of Microsoft, I think; I can't remember exactly, but something like 72 or 73% of employers said they would hire someone with AI skills and less experience over someone with more experience without them. I thought that was a really interesting indication of where we're at now: people really value these AI skills. So yes, it's about having a junior who you can train up in the real expertise of marketing, but also helping them to use AI tools to support the process.

Duncan Smith  50:25
Even if a junior listened to this podcast, sat down and said, I'm about to go for an interview, what would I say to a question like, "What do you think about the use of AI?" If they said something along the lines of, "What's really important is the human-in-the-loop sandwich," or, "What's really important is the ability to think critically about the output; actually, when I did my degree, I did something on critical thinking, and that really helped me understand how AI is used in marketing," well, bingo.

Ben Walker  50:57
You're hired, Duncan. You're hired.

Duncan Smith  51:00
That's a marketer who doesn't know all the skills yet, but has awareness of the impact of the tools, and that's somebody you can train. Employers have a real responsibility here to think about nurturing those people with training on ethics. What marketer has had any training on ethics, other than perhaps watching Hot Fuzz and realising that was all about ethics? That idea of "the greater good", or is it? Am I just following rules? If you're taught about those things and you then see a CRM system that is binning people because they don't come from the right postcode, you go: wait a second, that looks a little biased; I remember learning about ethics, and we should be thinking a bit more deeply about this. That's a marketer who is looking at a CRM tool and actually thinking, there are some really important things here I need to consider. That's great. The tool isn't a marketing tool, but that's somebody who is very useful in an organisation.

Ben Walker  52:04
It's fantastic advice, certainly fantastic advice for the applicant. But there's got to be a bit of internal education too, because marketing already has an issue in that it's seen as a cost centre: a department in a business that spends money rather than makes it. Of course, we all know that's a load of rubbish. Some FDs get it; some FDs don't. I think if you spoke to most marketers, they would say too few, too low a proportion, of finance departments get it. And this makes it even more challenging, doesn't it? You're saying to finance: we're going to continue to bring young people into the profession on the lower rungs of the ladder. Yes, some of the tasks we're going to ask them to do could be done by machines, but they bring a whole bunch of less tangible things, like ethics and morals and critical thinking, into the business, which we're going to need when they're promoted in one or two or three years' time, if not immediately. That's not an easy language for finance people to understand, is it?

Duncan Smith  53:04
There's also the problem that you let somebody else do that training and then poach them from the other organisation: let one of the larger blue chips train them up and then take those people on. It's a hard ask to spend money on that kind of training, because the return on investment is not immediate, for sure.

Ben Walker  53:21
Is it doable? Do you think organisations can be convinced, Kerry?

Kerry Harrison  53:26
In terms of ROI? I think so, though it's difficult. The first place I would start is time saved. If you're implementing any tools and you want to get someone to side with you, look at how long the job took before, how long it takes now, and what the cost of that difference is. That's an easy measurement in terms of ROI. But then, as you say, we also need to consider things that don't have such an obvious ROI attached. If we use AI tools and free up more time for people, what's the value of the strategic or creative thinking they now get to do that they didn't have time for before? That's obviously much more difficult to attach an ROI label to. So I always start with the time saved, but the rest is hard to measure, isn't it?
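
To make Kerry's time-saved calculation concrete, here is a minimal sketch in Python. Every figure in it is an illustrative assumption, not a number from the episode, and the function name is invented for this example.

```python
# Illustrative sketch of the "time saved" ROI calculation Kerry describes.
# All numbers are hypothetical assumptions, not figures from the episode.

def time_saved_roi(hours_before: float, hours_after: float,
                   hourly_cost: float, runs_per_year: int,
                   tool_cost_per_year: float) -> float:
    """Annual ROI (%) of an AI tool, based purely on staff time saved."""
    hours_saved = hours_before - hours_after
    gross_saving = hours_saved * hourly_cost * runs_per_year
    net_saving = gross_saving - tool_cost_per_year
    return 100 * net_saving / tool_cost_per_year

# Example: a report that took 6 hours now takes 2, produced weekly,
# at a fully loaded staff cost of £40/hour, with a £1,200/year tool.
print(f"ROI: {time_saved_roi(6, 2, 40, 52, 1200):.0f}%")  # -> ROI: 593%
```

As Kerry notes, this deliberately ignores the harder-to-price upside, such as the strategic or creative work the freed-up hours make possible.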

Duncan Smith  54:21
It is. But if I'm talking to the finance director about this, I'm going to say to him or her: expand out a little. It's the same when you're spending money on cyber risk and cyber security. How much does it cost to put endpoint security on a laptop? A lot of money. So it's important to get your head round the risk equation. We're doing all of this to increase productivity, to get greater throughput and all the rest of it, but part of the ROI of spending on training all of these people to think about the tool is that we're training them to spot some of the risks. Take Air Canada, one of those stories everybody cites, where their chatbot mishandled a grieving customer who had lost a relative. The reputational cost, the legal cost: had we trained somebody to think about that tool properly, we could have avoided all of those costs, which you haven't factored into buying the tool. I think that's important.

Ben Walker  55:23
I think that's a great example, and I think examples generally are what we need, because this stuff is so young that we haven't got enough case studies of where things have gone wrong. Air Canada is one; Marks and Spencer's recent cyber attack is another, and other high street retail brands are, of course, available. Cyber security projects are things that can easily be put off (I'm not saying M&S did this, for the record) to the next FY, and the next FY, because unless there's an immediate threat, or you can see something going wrong, it's seen as just a big cost centre. But if you're trying to make the case for that preventative stuff, you can say: look at Air Canada, look at M&S. With case studies of where things have gone wrong, it's much easier to make a medium-term and long-term case, isn't it?

Duncan Smith  56:20
And finance directors are already very familiar with the concept of risk; that's part of their job. So if they know what you do as an organisation, you can basically ask: what's the worst thing that could happen to this organisation, never mind AI? Well, we tell a client some incorrect information and they lose a massive case, or we give them the worst financial advice ever. Now ask yourself: is AI going to increase the likelihood of that, or reduce it? The answer, often, is that it increases it, unless we have a human check in the process. So there's a really simple risk equation, which your FD will get instantly: risk is severity times likelihood. The severity hasn't changed; we still gave bad advice, whether the human gave it or the machine gave it. The only factor that's changing is the likelihood part of the equation. Does AI increase the likelihood? Yes. At that point, most rational people go: OK, we see our risk going up; we need to start training people a little more, and feeding more money into this process, so we can use AI but manage that risk as part of the ROI.
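
Duncan's risk equation is easy to put in front of a finance director as numbers. A minimal sketch, with invented figures for illustration only: severity stays fixed, AI moves the likelihood, and training spend is justified if it pulls the expected loss back down.

```python
# Duncan's equation in code: risk (expected loss) = severity x likelihood.
# Severity of "we gave bad advice" is unchanged; only the likelihood moves.
# All figures are invented for illustration.

severity = 250_000       # cost of one bad-advice incident (£), assumed
p_human = 0.010          # annual likelihood, human-only process, assumed
p_ai_unchecked = 0.040   # likelihood with unchecked AI output, assumed
p_ai_trained = 0.008     # likelihood with AI plus a trained human check, assumed
training_cost = 5_000    # annual training spend, assumed

def expected_loss(likelihood: float) -> float:
    return severity * likelihood

print(expected_loss(p_human))                       # £2,500 baseline
print(expected_loss(p_ai_unchecked))                # £10,000: risk has gone up
print(expected_loss(p_ai_trained) + training_cost)  # £7,000 all-in with training
```

On these made-up numbers, £5,000 of training removes £8,000 of expected loss versus the unchecked case, which is exactly the trade Duncan is asking the FD to weigh.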

Ben Walker  57:39
It helps us rapidly scale, Kerry. It increases our ability to do things at volume, but it also increases risk. And if we make that case, then suddenly the argument for human checks, for humans in the loop, becomes a lot more powerful when we're talking to finance departments.

Kerry Harrison  57:57
Yeah, and I was just thinking of high-profile cases: the whole Klarna story, where, I can't remember exactly when, they got rid of 700 staff, saying, it's OK, AI is going to do the job for us; we've got an AI that can do the work of 700 customer service advisers, I think it was. And now, in 2025, they're slowly bringing those people back in, because the AI just isn't nuanced enough, isn't human enough. To be fair to them, they've said, yes, we made a mistake, although I suppose they've had to. I read something recently that called it "the Klarna effect", which I thought was quite interesting: the brand is now being used as a kind of "look what happened, how embarrassing" scenario.

Ben Walker  58:43
Did they devolve too much of the customer service to a bot and alienate the customers, effectively?

Kerry Harrison  58:48
Yeah, I think the customers just didn't enjoy interacting with it, for the same reasons we talked about earlier.

Duncan Smith  58:53
Yeah. If you start introducing AI into things like customer service chatbots, the one thing you have to do, almost like balancing salt and sugar in a recipe, is offset it, perhaps with more sentiment analysis. Start thinking about social media scanning and sentiment analysis: are people having a go at you online because they can't get through, because the bot won't let them do what they need? Introducing that technology means you also need to be thinking about any harm that comes out of it. Klarna obviously didn't, and as a consequence they missed that golden opportunity to listen to the customer and go: I think we got this wrong. Air Canada slightly missed it too.

Ben Walker  59:31
Well, we're glad they did, in a way; it gives us a useful case study about human beings, risk and harm factors. Now, we can't conclude this conversation without talking about the big elephant in the room, which is the environmental impact of AIs and LLMs, which are, at least at the moment, incredibly power hungry.

Duncan Smith  59:52
I don't recall exactly where I heard it; I think it was a Californian or Texan university that did some research on water impact and sustainability. The thing that struck home when I first heard it was that a prompt costs about a 500 ml bottle of water. I only heard it recently, I think on a CIM panel we were discussing this on. And you suddenly realise: preparing for today's podcast, I probably prompted ChatGPT a couple of dozen times; that's several litres of water required, from the energy consumption perspective. Something like that just means I do need to start thinking about that process.

Kerry Harrison  1:00:34
I think that number was from a relatively old piece of research, though. I read something literally this week, I think from Ethan Mollick, who has a really great Substack that I'd recommend; he was saying that the now broadly agreed figure for one prompt is 0.0003 kilowatt hours, which, really interestingly, is the equivalent of eight to ten seconds of streaming on Netflix. Eight to ten seconds. And I think this is the thing, and I won't go on forever, but it's quite interesting that we're demonising the use of AI. A lot of people, and I know this fits my narrative, are nervous about using ChatGPT because they think their prompts will be really bad for the environment. But one prompt is the equivalent of eight to ten seconds of watching Netflix, and no one thinks about the impact of watching Netflix. So I feel the environmental conversation sometimes focuses on the wrong things and makes people feel guilty about using AI, instead of having the wider conversations we need to have. I'm not talking about the massive AI algorithms that underpin things like social media networks; those are huge, and I'm not denying in any way, shape or form that AI impacts the environment, because it does. But when we worry about one prompt to ChatGPT, remember that watching a series on Netflix is a hell of a lot worse for the environment.
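
Kerry's Netflix comparison is easy to sanity-check. A rough sketch: the per-prompt figure is the one cited above, while the streaming figure is an assumption, since published estimates vary widely.

```python
# Sanity-checking the prompt-vs-streaming comparison.
kwh_per_prompt = 0.0003        # kWh per chatbot prompt, as cited above
kwh_per_streaming_hour = 0.1   # assumed kWh per hour of video streaming

seconds_equivalent = kwh_per_prompt / kwh_per_streaming_hour * 3600
print(f"One prompt is roughly {seconds_equivalent:.0f} seconds of streaming")
# -> roughly 11 seconds, in line with the eight-to-ten-second figure quoted
```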

Ben Walker  1:02:07
Yes, no one's minimising it, but we are in danger of another aviation paradox. Aviation is the bête noire because it does have a big carbon impact; around 3% of the world's carbon output is due to aviation. Construction accounts for something like 40%, and yet aviation is the bête noire while construction is barely mentioned in this debate. We should have a more holistic conversation about environmental impact, as you say. Who thinks about it when streaming their favourite show on Netflix?

Kerry Harrison  1:02:38
Yeah, there's no guilt attached to that.

Duncan Smith  1:02:42
But then also, I think AI was an important part of coming up with a new antibiotic recently, a novel antibiotic. If you balance that up, frankly, if we've saved the world by creating a novel antibiotic using AI, that holistic view is actually quite important. We need to look at the benefits as well as the costs.

Ben Walker  1:03:02
As always, we need to look at these things in a holistic way. Nevertheless, is it likely to become more efficient? Is there a sustainable future for this technology that doesn't have as much of an environmental impact?

Kerry Harrison  1:03:16
I think so. As we've just discussed, and in fact I've used that 500 millilitre stat quite a lot myself, it came out of research from quite a few years ago. From what I've seen recently in the research papers, the models are getting more efficient than they were then. On the water, I can't remember the exact figure, but I think it's now more like a shot glass rather than half a litre. So I feel they're already going in that direction. And as you say about antibiotics, we have to put it into context: yes, AI can be really detrimental to the environment, but there are also a lot of people saying that AI could actually help us solve the climate crisis.

Duncan Smith  1:04:03
I remember a professor who was researching a paper; I'll have to dig the quote out, I can't find it right now. He prompted a GPT and it came up with an answer that was novel thinking. He said: I haven't published any of my work, none of my work is exposed to any of the AI crawlers, and yet what it came back with was my research. So Google it, have a look, see if you can find it; I read it recently. What that suggests is that AI, while obviously not sentient, not general AI, not the kind of bot that's going to rule the world, is capable of coming up with new and novel ideas. So asking it to come up with a solution for its own problems is not beyond the realms of possibility. We're talking about energy and water here, so why can't it raise the question of renewable energy? Yes, we use a lot of energy, but let's make it renewable, then.

Ben Walker  1:05:08
Yeah, it is able, Kerry, to understand a series of facts and perhaps piece together a solution that human beings hadn't previously thought of. So for the big challenges of the day, like sustainability, it's not beyond the realms of possibility that it can, at the very least, aid humans in coming up with the great technological advances that will help us crack some of these problems.

Kerry Harrison  1:05:38
Yeah, I hope that's the case; that would be great. There is a lot of positivity around AI and what it might do for us from a sustainability point of view. As we've both said in this session, it's still early days and the technology is still really new, so I guess it's a matter of seeing how things play out.

Ben Walker  1:05:57
What about the social side? Do you think there's a chance it could become more inclusive and a bit more socially adept than its forerunners?

Duncan Smith  1:06:08
Actually, this is very interesting, because I do use AI to create pictures quite a lot; I'm a little cheap when it comes to hiring graphic designers. I recently did a slide presentation for a client that involved a medical company, and, bizarrely, don't ask why, I asked it to produce an image of doctors in a jam jar. It's an analogy for CRM systems and looking after people. Anyway, I said: I'd like a jam jar, and I'd like lots of doctors in it. Guess what?

Ben Walker  1:06:35
All male

Kerry Harrison  1:06:37
All white

Duncan Smith  1:06:39
All male, all white, yeah. And I went back and just said: do better. It came back again, but even then, the women in the image were in theatre scrubs; they weren't in white coats with stethoscopes, they weren't consultants. There's a whole other podcast, another debate, on bias in algorithms and our inability to see it, because it's a black-box algorithm and we're unable to get inside to see why the bias is there. But it is absolutely there. It was a smack in the face when that image came back.

Ben Walker  1:07:21
If it's going to repeat its own information, and this is probably a question for a whole new show when you come on again, one of the things about AI, the black box, is that it looks at other AI output, including what it creates itself. Is that bias, that social exclusivity rather than inclusivity, ever going to improve?

Duncan Smith  1:07:43
The human sandwich 

Kerry Harrison  1:07:44
The AI sandwich

Duncan Smith  1:07:45
You have got to catch it at the output level, as I did. That image came up and I thought: oh, let's cut and paste that into my slide. Not so. By checking the output against my own internal bias compass, I was able not to reproduce that image. That's the important thing about having a moral, ethical, bias compass. And if marketers want to look after their jobs, that's where you need to be checking; you need to be spotting those things.

Ben Walker  1:08:02
That's a fourth thing, then, that the human in the loop can do. We know they can catch inaccuracies, they can combat inauthenticity, and they can combat homogeneity. They can also combat bias in the system.
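
Those four checks read almost like a pre-publication gate. As a purely illustrative sketch (the checklist wording and function are invented here, not something the panel prescribes), a content workflow could make the human sign-off explicit:

```python
# Purely illustrative: a manual sign-off gate encoding the four
# human-in-the-loop checks listed above. Wording invented for this sketch.
CHECKS = (
    "Accuracy: are all facts, figures and claims verified?",
    "Authenticity: does this sound like our brand, not generic AI output?",
    "Homogeneity: is it distinctive rather than vanilla?",
    "Bias: who is represented here, and who is missing?",
)

def human_review(draft: str) -> bool:
    """Block publication until a human answers 'y' to every check."""
    print(draft)
    print("-" * 40)
    return all(input(f"{check} [y/n] ").strip().lower() == "y"
               for check in CHECKS)

if __name__ == "__main__":
    if human_review("Draft campaign copy goes here..."):
        print("Approved for publication.")
    else:
        print("Sent back for revision.")
```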

Kerry Harrison  1:08:32
Yeah, absolutely. There was a really nice marketing campaign that Dove did, actually, a couple of years ago, around exactly the image-generation problems we've just talked about. It was all about how to prompt for diversity: how to make sure that women had gaps in their teeth and freckles on their faces. It was really lovely, and they even created a prompt guidebook as part of the campaign. Sometimes in my sessions I say, I know it's weird that I'm asking you to go to a soap brand for prompting advice, but download that workbook; it's really, really good, and I'd say it's probably still relevant now. It fits so well with their brand, doesn't it, the whole "real women" campaign kind of thing? But yes, they did a whole prompting guide on it.

Duncan Smith  1:09:18
There's a nice one in cosmetics as well. There was a web-based tool, a third-party product used by a well-known skincare brand, that let you take a selfie and would then recommend skincare products. I thought: OK, I'll have a go. So I took a selfie, and up came a big splash screen with a whirring wheel that said: comparing your image with thousands of other women. All right, quick screen grab, that's going...

Ben Walker  1:09:50
Did it recommend the good products?

Duncan Smith  1:09:52
I suspect it recommended the products they were having trouble shifting. But it did; it certainly spotted my crow's feet and a few other things. And I immediately thought: I wasn't offended, but there's a really simple piece of bias that somebody coded. Somebody wrote that web page; it was just a splash screen to say "look how good this tool is". But they forgot that men have skincare routines too. A simple catch. Somebody could have caught it, but they didn't.

Ben Walker  1:10:23
They really didn't. But if we get humans in the loop, good marketers in the loop, they should catch it. So are we going to do it? Is industry going to listen to this amazing sage advice we've had today? Are our departments, our finance departments, going to listen quickly enough, Kerry?

Kerry Harrison  1:10:42
Great question. I guess we can only hope. With AI, if you're just starting out, start with a small, low-risk pilot project and see how it goes. If you do something small to begin with, these issues come up and you can say: actually, we really need to fact-check; we really need to check for bias. Do it on a very small, low-risk scale first, and once you know all those things are possible, you're more likely to be able to say: it's really important that we do this. And then training, and obviously this suits my narrative really well, but train staff, not just the C-suite but right down to the junior members, so that people are aware of these things, because you might simply not be aware. And not just the how-to-use-it side but the ethics side, which I think is so important, because those are the things that are easily missed and could be really detrimental to a brand's business and reputation.

Ben Walker  1:11:29
Do we need a few more high-profile blunders, Duncan?

Duncan Smith  1:11:42
You read my mind. I was just thinking that we learn from others' mistakes. Here we are on a podcast, and when we're delivering training, one of the best things we can have is somebody else tripping up so we can say: "don't do this, do something different". There will be more high-profile blunders, I'm sure. What worries me is that those blunders aren't just humorous. On bias, for instance, put it into ChatGPT: ask, what evidence is there that bias in AI causes real human harm, and just read the output of that simple prompt; you'll be shocked. So it will happen. But if we can convince people to do training, even just very basic awareness training, to catch that risk before it happens, then job done. And if you're worried about this, here's a top tip: stick it into AI and ask, what are the risks of implementing AI in our organisation? It will do a really good job of surfacing the headlines for a PowerPoint presentation: have we covered this, have we covered that? The engine already knows the answer to that question. Take that prompt and apply it to your own business. That's what I would suggest people do. And then, obviously, come on a training course with CIM.

Ben Walker  1:13:17
And on that naked plug, we'll finish. Thank you very much indeed, Duncan, and to you as well, Kerry; thank you for joining us on the show today. AI: it's powerful, it enables us to do things at scale, but it's not a free lunch. Be vigilant and get humans in the loop. Thank you to all of our listeners.

Karen Barnett  1:13:40
Be sure to subscribe to the CIM Marketing podcast. Please leave us a rating and review. We'd love to hear your feedback. 

CIM Team
CIM
Ben Walker
Host, CIM Marketing Podcast
Kerry Harrison
Course Director, CIM
Duncan Smith
Director, iCompli