Episode 159

Deepfakes & Liability: Protect Your Brand From Malicious AI

Chris Harihar
EVP of Public Relations at Mod Op


"We're really excited about this capability that we're launching called AI Risk Intelligence."

Chris Harihar

AI is evolving fast—and so are the risks that come with it.

In this episode of Leader Generation, Tessa Burg talks with Mod Op’s EVP of PR, Chris Harihar, to unpack a growing issue most brands aren’t fully prepared for: AI-driven brand misrepresentation. From deepfakes to manipulated logos and inappropriate brand placements, the conversation explores how generative AI tools are creating new reputational threats in ways that feel chaotic, fast-moving and hard to control.


“Every brand will be at some degree of risk for malicious generative AI to occur.”


Chris introduces Mod Op’s new AI Risk Intelligence capability, designed to help brands proactively identify and address harmful AI-generated content before it spirals. They dig into real examples—including manipulated executive deepfakes and brand misuse across platforms like Sora and Grok—and explain why this isn’t just a cybersecurity issue, but a reputational one that belongs squarely in the PR and communications world.

Highlights:

  • The rise of AI Risk Intelligence and why it matters now
  • AI-generated brand misrepresentation and reputational harm
  • “AI slop” and brand safety in digital advertising
  • The “neutral input trap” and logos treated like public domain
  • Deepfakes of executives and manipulated branded content
  • The Grok scandal and brand misuse on ad-supported platforms
  • Differences in risk between B2C and B2B brands
  • Reactive vs. proactive brand protection strategies
  • Mechanizing and automating AI risk detection over time
  • The role of PR in managing AI-related reputational threats
  • Why brands must apply pressure early to influence platform safeguards
  • Balancing AI innovation with brand control

Watch the Live Recording

[00:00:00] Tessa Burg: Hello and welcome to another episode of Leader Generation, brought to you by Mod Op. I’m your host, Tessa Burg, and today I’m once again joined by our EVP of PR and Strategic Communications, Chris Harihar. Chris, thanks so much for joining us.

[00:00:16] Chris Harihar: Of course. Happy to be here.

[00:00:18] Tessa Burg: So this year has been packed with headlines. Some are exciting, but a lot are, I would say, overwhelmingly driving nervousness or anxiety.

[00:00:26] Tessa Burg: People are thinking about the future of work; companies are wondering if the investments they’ve made in their technology and AI are gonna pay off. And in strategic communications, brand and creative, we’re concerned about the impact of generative AI, where our brands show up, and whether that’s reflective of our company’s and our brand’s values.

[00:00:55] Tessa Burg: And that’s what we’re gonna talk about today, and I’m really excited to have you here. Not only do you lead strategic communications here, but a couple of our clients have deep expertise in this arena. Can you set the stage for us? Why is AI Risk Intelligence so important to talk about right now, at the beginning of 2026?

[00:01:19] Chris Harihar: Yeah. We’re really excited about this capability that we’re launching called AI Risk Intelligence, which is really focused on human-enabled, AI-supported audits of AI-created content, and in some cases AI-manipulated content, where we can better understand, on behalf of brands, how they’re being featured or presented in some of this content.

[00:01:47] Chris Harihar: Everyone is talking about AI slop and how there has been this explosion in low-quality, uncanny-valley-esque content online, both in walled gardens and on the open web. And a lot of the focus has been on how we prevent our ads, marketing materials and campaigns from running alongside some of this content, because low quality generally means the content won’t allow the campaign to perform.

[00:02:18] Chris Harihar: And if you look at the media verification space, players like DoubleVerify, who’s a client of ours, do great work in this category to ensure that advertisers, marketers and brands don’t have their content appearing alongside that type of slop. Right? But there’s also another danger and harm here that I think is less discussed, and that is this explosion in new tools that are really incredible, whether it’s Higgsfield or Grok, or ChatGPT or Gemini.

[00:02:50] Chris Harihar: What if these tools are actually used to create harmful content regarding brands, featuring logos, characters, executives? There’s no shortage of ways in which this content can be harmful or threatening from a branding perspective. So with the AI Risk Intelligence capability, we’re using our expertise in online forensic research, and our expertise in PR and communications, to understand what could be a potential threat.

[00:03:23] Chris Harihar: Using our understanding of marketing channels and media channels, and where this content might live, we are working to help brands identify these brand-damaging and brand-threatening moments and surface them, so that they can understand how to reactively address them and proactively plan for them, because there’s only gonna be more of this moving forward.

[00:03:45] Chris Harihar: And so that’s what we’ve really done with AI Risk Intelligence. We’re excited about the launch, because I don’t know that there is anything like this out there that sits within an independent agency like Mod Op, and specifically within the PR and communications group. You might see this with cybersecurity firms, maybe as

[00:04:07] Chris Harihar: sort of ad hoc work here or there. It’s reactive, driven by brand need: if a brand is featured somewhere, they find out and then have to call attention to it. Sometimes it sits more with a legal function, related to copyright. But at its core this is a reputational issue, and so it should sit with PR and communications.

[00:04:25] Chris Harihar: And we’re excited to work on more of these projects and help enterprise brands do this work, so that they can understand how they’re being featured, and in some cases misrepresented.

[00:04:38] Tessa Burg: Yeah. And I think when I talk to clients, sometimes this all feels like a little hocus-pocus: well, there’s a lot of stuff outside of my control.

[00:04:50] Tessa Burg: But it was interesting: the Allianz Risk Barometer ranked AI as the second greatest concern for companies worldwide. Just last year it was ranked number 10, and now it’s at number two. And in the post that you wrote about this release, you mentioned the neutral input trap, and in it you say CMOs are seeing their logos treated like public domain.

[00:05:24] Tessa Burg: And you have that conversation with CMOs, that part of that AI risk is sort of this treatment by LLMs. And how is AI Risk Intelligence more proactive than reactive? So two questions.

[00:05:39] Chris Harihar: Yeah, for sure.

[00:05:40] Tessa Burg: One, making CMOs and brands aware of the very real risk and that neutral input threat. And then two, how are we getting ahead of it?

[00:05:52] Chris Harihar: No, great questions. And I think showing it helps make the case for why this is such a powerful, dangerous and important topic for the marketing space and for brands generally speaking. This actually stems from some work that we helped another client with, called Copyleaks, an AI manipulation detection and governance platform.

[00:06:17] Chris Harihar: And we worked with them on research related to Sora, the generative AI video platform from OpenAI, where we were able to identify instances of executives like Sam Altman or Mark Cuban being deepfaked on the platform to say racial slurs, et cetera.

[00:06:43] Chris Harihar: Essentially, the bad actors were finding ways to game the prompt so that these prominent executives were saying things that weren’t exactly the racial slur, necessarily, but sounded phonetically like it. And this is a good example of how bad actors will always be very creative in thinking through how to game prompts and get around the safeguards and protections that are in place across these platforms.

[00:07:17] Chris Harihar: And we did that analysis. What was fascinating was that all the videos we saw were part of a broader meme, where every one of the execs was wearing a Burger King crown. So we have some of the most well-known executives in the world wearing a Burger King crown, shouting what sounded like racial slurs.

[00:07:39] Chris Harihar: And that was shocking to me. First of all, why would Burger King as an input be allowed alongside some of the other context in the prompt? But also, why is Burger King in the unfortunate position of having the crown appear in any way, shape or form on Sora, with no protections in place,

[00:08:07] Chris Harihar: it seemed, to ensure that the crown they value is used in a way that’s brand safe and appropriate for who they are and their values? From that, we started thinking more about how this is problematic beyond that specific instance. And in researching this on Sora and other platforms, we saw so many other examples where this is quite common, actually: brands being featured in inappropriate ways, ways that are entirely counter to who they are.

[00:08:38] Chris Harihar: And we think there is an opportunity for brands to know this and reactively address it, but also proactively prepare, because there’s just gonna be more of this moving forward as the tools get better. Ultimately, protections don’t move as quickly as innovation, unfortunately.

[00:08:58] Chris Harihar: And so every brand will be at some degree of risk for this to occur.

[00:09:05] Tessa Burg: Yeah. So every brand will be at some degree of risk. But are there some industries that you think should be prioritizing this, or moving faster than others, that are at more risk right now?

[00:09:16] Chris Harihar: Yeah, certainly. I think consumer-facing brands that have well-known characters and well-known logos should be thinking about this. Whether you’re Disney, Burger King, McDonald’s, anyone with a well-known logo

[00:09:33] Chris Harihar: is at risk for this to occur. We saw instances of that, for example, with Grok on X, and this is part of the analysis that we did in conjunction with the launch of AI Risk Intelligence. Of course, everybody knows that about a month ago Grok, the built-in AI platform on X,

[00:09:55] Chris Harihar: was actually non-consensually undressing users on the platform, men and women. And we found in our research over a hundred examples of Grok being asked to put people in branded bikinis. People were asking Grok to put women, non-consensually, in a McDonald’s bikini, or to put undressed women in a McDonald’s,

[00:10:29] Chris Harihar: in an actual restaurant. To some degree, it’s kind of shocking to me that an ad-supported platform like X would allow this to occur. But I think it just goes to show that there are so many different scenarios and use cases that we don’t generally think about, but bad actors are thinking about, that all of us have to prepare for. And I think brands need to know that this is occurring.

[00:10:58] Chris Harihar: They need to see examples of it. When it does occur in the wild, we need to surface those for them so that they can understand how damaging it can be. And there also has to be that level of education and knowledge so that the platforms themselves can work to install better safeguards, to ensure that brands are, to some degree, treated like people in terms of how you can use their likeness on these platforms and in these ways.

[00:11:24] Chris Harihar: And so I think that’s part of the push here: by driving education at the top of the funnel, hopefully that will create pressure that moves further down the funnel, where we’ll see more safeguards put in place at the platform level.

[00:11:41] Tessa Burg: That Grok scandal I thought was like completely nuts.

[00:11:46] Tessa Burg: And it certainly shows that gap in prioritizing what they call creative freedom, or spicy mode, over protecting the reputation of the brands that advertise on their platform. Are you seeing brands rethink their mix of spending as one of the ways that they’re being proactive about safety, and

[00:12:11] Chris Harihar: for sure

[00:12:12] Tessa Burg: in practice.

[00:12:13] Tessa Burg: Other than just looking at where your brand can surface, how are you moving from those project-based assessments to a full system? And I don’t know if this is possible, but would you be able to detect that a Grok-like scandal is likely to occur somewhere else, so that you can proactively instruct our clients and our brands: hey, based on this type of activity, here are some other spaces we might wanna start avoiding?

[00:12:42] Chris Harihar: Of course, all great questions. On the spend side, X has already fallen victim to some suitability concerns in the past, which has led to a number of advertisers, it seems, leaving the platform entirely or cutting back on spend. And I’m sure the Grok issue, which continues to this day to some degree, though

[00:13:07] Chris Harihar: not at the extent that we saw in January, will always give an advertiser pause. Especially because on Grok there were also instances of CSAM, where children, unfortunately, were being undressed in some of these images. And so that has become more of a regulatory nightmare for X, in a way that no brand wants to be

[00:13:28] Chris Harihar: wrapped up in. Right? And so I do think we’re seeing pressure from brands, where dollars are leaving these platforms when this does occur. But we’re in this interesting moment where OpenAI, for example, is now set to feature advertising in-platform. This week they launched ads within ChatGPT Go; I think that’s the test cohort for it.

[00:13:49] Chris Harihar: And I think that creates more pressure: when you start taking advertiser dollars to support revenue, then you have to make sure that brands are protected in these platforms so that they keep spending with you. But also, in order to be a good industry partner, you wanna make sure that the brands you work with, and the ones you don’t, feel like they can safely be featured in your platforms, whether it’s ChatGPT or Sora.

[00:14:18] Chris Harihar: And Sora is a good example: it doesn’t feature advertising yet, though I do think at some point it will, maybe even this year as a test. But we’ve already seen Disney come on as an investor and a partner with OpenAI, where they will do a lot of work in the Sora platform to feature characters from Disney.

[00:14:40] Chris Harihar: Now, I can go into Sora right now and show you five different channels that are wildly inappropriate, that feature brands in horrible ways. So as brands start to do more with these tools and these platforms, and we see more of a relationship cultivated between brands and the gen AI tools, I think there’s just gonna be more of a need for solutions like what we’re offering.

[00:15:12] Chris Harihar: At the same time, what we’re offering is early stage. This is gonna be human-led research, essentially, with AI tools supporting it. But over time the goal is to mechanize this so it can become more automated, because the problem is only gonna grow in scale. And if it’s growing in scale, because the tools are getting better and bad actors are getting smarter, we need to figure out how to mechanize it and make it more automated, to the point where, in real time or

[00:15:45] Chris Harihar: near real time, the goal will be to help surface and pinpoint when these instances occur. And then, down the line, there may be an opportunity to be more predictive. I think a predictive solution would have to be based off of platform type; the propensity for this to occur on a specific platform can vary.

[00:16:02] Chris Harihar: Because different platforms have different protections and safeguards. But then also brand type: if you’re a B2B SaaS company, you’re probably less at risk for this to occur than if you were McDonald’s or Taco Bell, right? So I do think there will be an opportunity to create more of a predictive engine down the road.

[00:16:23] Chris Harihar: But right now we’re just sort of scratching the surface, getting in at the ground level to try to help brands understand how much of a problem this is right now, while pointing to the future, because it’s gonna be even more of a problem five months from now.

[00:16:38] Tessa Burg: Yeah, and I love that approach.

[00:16:39] Tessa Burg: I mean, it’s very similar to how we started building our internal tool for social media, Mod Heat. It starts with pattern identification and getting the right kind of implicit feedback, just monitoring behavior and engagement. And when you start to see the variances in that, you put a circle around it and say, let’s go a little bit deeper.

[00:17:03] Tessa Burg: What’s triggering this? What do we need to understand better? Then that will solidly lead to better predictive models that will help us get ahead and warn clients: yeah, this might not be the best place for our placement; this might need another type of human intervention to remediate if a brand has been exposed in an inappropriate way.

[00:17:28] Tessa Burg: So I think this is a great start, and even with

[00:17:32] Chris Harihar: Sure,

[00:17:32] Tessa Burg: it being human-led, just the fact that we’re thinking about it now and collecting the data means we’ll be able to get to that predictive state in a pretty efficient manner and learn across industries. ’Cause like you said, it is gonna be different, but if we’re tracking the highest risk first, that actually benefits everyone.

[00:17:54] Tessa Burg: So even if we are picking up patterns for more B2C, consumer-facing brands, then we can assume that the guardrails within that platform are more lax than maybe we’re comfortable with. And then all of our clients can benefit from that information.

[00:18:12] Chris Harihar: 100%. I would describe the moment we’re in right now as early days, and chaotic.

[00:18:19] Chris Harihar: And so working to just try to create a baseline, I think, is really important. On the early-days side, for example: Sora has been out for not even a year, at least the latest version that’s incredibly high-powered and very high quality. And trying to understand what the safeguards are for brands has been a bit of a mess.

[00:18:44] Chris Harihar: When it launched, there were very lax safeguards; I think that was to drive users and make sure people could understand the value of the technology. And then they installed more safeguards, primarily around people, making sure that likenesses of celebrities, et cetera, were protected.

[00:19:05] Chris Harihar: But I haven’t really seen the same rigor or thought put in regarding brands. A good example of that would be the testing that we’ve done, and we will share this with the market, of course. I think we tried roughly 20 different branded prompts on Sora that were putting brands in damaging, high-risk situations.

[00:19:28] Chris Harihar: Like Sam Altman, for example, drinking a Diet Coke and then fainting, or him applying Vaseline to his face and screaming. And it ranged to him being on a brand-name airline and the engine catching on fire with the logo behind it. Right? We put all of these prompts in, and I mean, there are 20 of them.

[00:19:53] Chris Harihar: It’s not a huge sample size, but I think virtually all of them were allowed. The videos are really high quality, and they’re concerning on Sora, for sure. Though viewership on Sora is fairly low, and active users on Sora, I believe, are fairly low to this point. But if you can download these videos and then bring them to other channels, that becomes even more problematic.

[00:20:20] Chris Harihar: That’s what we saw with the Copyleaks Burger King example that I cited earlier, where Sora engagement was low, maybe a couple of likes for one of these videos, but on TikTok there are dedicated channels specifically around videos like that, where they have hundreds of thousands of views and tens of thousands of likes and shares.

[00:20:39] Chris Harihar: So I think there’s so much early-stage stuff happening here, where we’re all learning how bad actors are gaming the systems, what safeguards are in place, and what the user behavior is after you make a video like this. Does it live on Sora? Do you screen-record it and then bring it to another platform?

[00:20:59] Chris Harihar: All of that we’re kind of figuring out now, and I think it’s important to raise all of these questions so that brands can have a partner like us to help them figure that out. We also want to go a step further: when there is a moment where a brand is featured in a way that could be brand damaging or threatening, we want to flag it to the platforms.

[00:21:19] Chris Harihar: We want to help flag it to the platforms, possibly in coordination with the legal teams supporting these brand partners, to make sure that the content is taken down and that there is more thought put into how the platforms can protect our brands, at the prompt level to some degree, and at the LLM level, I should say, because I think this should all be baked in in a deeper way, and it doesn’t appear that it is right now.

[00:21:48] Chris Harihar: And so we’ll see ultimately how things go over the next several months. But I feel like if you’re not putting pressure on at this early stage, it’s gonna be way harder to do it down the road, especially after so much potentially harmful content will have been created by that point.

[00:22:06] Tessa Burg: Yeah, and I think your last point, reaching out to the platforms is so important.

[00:22:11] Tessa Burg: Just this week we had a workshop where about 200 people in our company learned how to build their own agents. And as a part of that training, everyone learned how to put in their own guardrails. So we know that the platforms have the intelligence, and they have the tools, to do more to protect brands.

[00:22:34] Tessa Burg: And it reminds me of the early days of Google, when AdWords first launched and they let anyone bid on stinking anything. And there were some damaging messages that would show up if someone Googled your brand: a competitor, or just someone hosting a link farm with a terrible experience, would show up.

[00:22:59] Tessa Burg: They could have acted faster, but they didn’t, because they were making money off of what were, at the time, affiliate marketers. And brands need to recognize that their dollar does mean something. If you are advertising on a platform that isn’t hearing your concern about your brand, and they do have the control to do more, then

[00:23:28] Tessa Burg: we can help coach you through the right next step to take. But

[00:23:33] Chris Harihar: for sure,

[00:23:33] Tessa Burg: I think one of the downsides of what was happening in Google, or how we had structured things at the time at some of the companies I was working with, is that they were too dependent on Google. And today, I think brands and companies are in a different position, where we don’t have to be too dependent on where we’re getting our traffic from.

[00:23:55] Tessa Burg: There are more channels than ever, and AI not only allows you to personalize the timing of the message, the experience, and how it gets delivered; it allows you to reimagine the types of experiences, and where your brand can come to life, in both physical and digital ways. And that’s something that brands should also lean into.

[00:24:16] Tessa Burg: Where are they going to have the most control, so that they can have that flexibility and not be too dependent on a few very large, very influential platforms? And even back in those early days of Google AdWords, the lesson I always took was: okay, I never wanna be, or have my clients be, in a situation

[00:24:38] Tessa Burg: where, because we can’t run on AdWords, we’re gonna not meet our sales goals.

[00:24:42] Chris Harihar: Sure. And what’s funny is, I think there are still issues when it comes to brand suitability as it relates to keyword hijacking and all these different things that still occur. And that goes to show that you need to apply pressure early in order to effect change.

[00:25:02] Chris Harihar: And that is what we aim to do here. What’s fascinating is that a lot of people are very focused on the awesome potential of these tools, and there is incredible potential; we love these tools. These are great services, great platforms. At the same time, I do think it is difficult to think of all the harmful ways

[00:25:23] Chris Harihar: in which somebody might do something and feature a brand, or come up with a prompt. Clearly, bad actors are always creative and always thinking ahead in terms of how they might come up with a prompt that gets them the output they want, regardless of how harmful it is.

[00:25:45] Chris Harihar: And you need a partner, like a PR person, who is literally tasked with thinking about harmful situations and scenarios, so that we can help clients prepare, as in traditional PR. We’re bringing that same thinking to prompt harm and prompt suitability, if you will, and AI suitability, to think about how brands can be featured in these harmful ways.

[00:26:11] Chris Harihar: And then we’ll find them for you. We won’t create them, obviously; that’s not what we do. In the test case I outlined before about Sora, we didn’t publish those videos. We recorded them and then just left them in drafts; they don’t live anywhere. But we are thinking about the ways in which bad actors

[00:26:30] Chris Harihar: can go about creating this content, and thinking about it creatively, because that’s what is required in order to find it. And that’s the type of thinking we just wanna bring to brands as they think about the value of these platforms. It’s very high, but at the same time, whether you have a customer who had a bad service experience, or you just have

[00:26:53] Chris Harihar: people online who are trying to have fun with brand imagery, there are so many different scenarios that can occur here, ranging from low-level to way more nefarious, and we just wanna help identify those.

[00:27:05] Tessa Burg: Yep. Well, I love it. Chris, you’ve always been ahead of the curve when it comes to AI.

[00:27:12] Chris Harihar: My brain doesn’t turn off when it comes to thinking about how bad things can happen for the clients we work with. And I think that’s the type of sad-but-helpful thinking that we’re trying to bring here with AI Risk Intelligence.

[00:27:28] Tessa Burg: Yes. It’s great, and this is a creative solution.

[00:27:32] Tessa Burg: It’s one that’s gonna continue to grow and scale and unlock even more opportunities. So thank you for joining us today. Clients are now walking away knowing what to think about, how to take those proactive next steps with their PR team, and where they have the power, the influence and the control to strike a balance in what feels right now like a very crazy world.

[00:27:59] Chris Harihar: For sure, 100%.

[00:28:02] Tessa Burg: If people wanna find out more information or reach you, where can they find you?

[00:28:07] Chris Harihar: Yeah, they can reach me at [email protected], or you can find me on X; my handle is Chris Harihar. And we’ll be publishing a blog post; we’ve published research in relation to this launch that you’ll be able to access and see.

[00:28:27] Chris Harihar: But we have a number of other things that we plan on sharing with the market soon that just highlight how much of an issue this is, unfortunately. And if you are a brand with a well-known logo or well-known characters, you are at risk, and we’re here to help.

[00:28:47] Tessa Burg: Yeah, and I’ll call out that if you visit modop.com and go to The Van Guardian.

[00:28:55] Tessa Burg: There are links to all of our podcast episodes. We’ve talked about AI slop, and we’ve interviewed clients from Copyleaks and DoubleVerify, so a lot of good information there. And the research will be on the blog, which is the other link. You can also learn more about digital twins, the CMO priorities for 2026, and overall future-proofing your marketing team for 2026.

[00:29:22] Tessa Burg: So check out modop.com, and click on Van Guardian or /magazine. And until next time, thanks so much for joining us.

[00:29:32] Chris Harihar: Thank you all.

Chris Harihar

EVP of Public Relations at Mod Op

With deep expertise in business and tech media relations, Chris counsels clients at a high level while maintaining hands-on involvement in media relations and content strategy. He has developed and run highly successful programs for leading B2B and tech brands, from Verizon Media/Yahoo and DoubleVerify to Signal AI, IDG (now Foundry) and WeTransfer. Chris can be reached on LinkedIn or at [email protected].
