Beyond Prompts: Better AI Outputs With Context Engineering
Aaron Grando
VP of Creative Innovation at Mod Op

“Context engineering is the art and science of being able to assemble context together for AI.”
Tessa Burg and Aaron Grando break down one of the most talked-about topics in AI right now: context engineering. If you’ve been using AI tools and wondering why your outputs sometimes sound generic or miss the mark, this episode is for you.
“There’s more risk to having a badly loaded context than there is to having none at all.”
Aaron explains how context engineering helps AI better understand what you want by feeding it the right information—from your brand voice to real-time data—before you even type your prompt. Aaron and Tessa also explore how context engineering plays a key role in reducing hallucinations, maintaining brand integrity and building automated agents that can actually take meaningful action.
Highlights:
- What context and context engineering really mean
- How context improves AI output quality and personalization
- Where and how users can insert context into LLM prompts
- The difference between training data and contextual input
- Prompt engineering as a basic form of context engineering
- Examples of using branded and real-time data as context
- How context reduces hallucinations and increases accuracy
- Safe and responsible use of proprietary data in AI systems
- The risk of context poisoning and how to avoid it
- How context enables automation and agent action
- Importance of cross-departmental collaboration
- Emerging roles like solution architect in AI workflows
Watch the Live Recording
[00:00:00] Tessa Burg: Hello, and welcome to another episode of Leader Generation, brought to you by Mod Op. I’m your host, Tessa Burg, and today I’m joined by a familiar voice on the podcast, Aaron Grando. He leads our creative and AI innovation, and he’s the perfect person to have a conversation with about context engineering: why it’s getting a lot of buzz all of a sudden, what the value is, and some easy tips that people can start to apply to how they use LLMs to get more value out of them. And we’re getting into a lot, so I’ll stop there. Aaron, thanks so much for joining us today.
[00:00:39] Aaron Grando: Yeah, nice to see you again. Uh, happy Thursday. I’m really happy that this context engineering conversation has started happening online. Um, I think a theme of this conversation is gonna be a little bit of, hey, we’ve been doing this for a long time at this point now.
[00:00:59] Aaron Grando: Um, but I think it really signals like kind of greater awareness of, you know, some fundamental building blocks of these systems that are really helpful for everyone outside of the technology group. Um, you know, of course, you know, across the overall org to understand, kind of wrap their head around. And this is really a fundamental piece of like any AI-based solution.
[00:01:20] Aaron Grando: So I’m excited to get into it.
[00:01:23] Tessa Burg: Awesome. Well, let’s start with the definition. What is context engineering?
[00:01:30] Aaron Grando: Yeah, so I would just back up and say, what is context first? Um, for everyone that’s not totally familiar with the word context, it’s been thrown around a lot. You’ll see it first show up in, you know, kind of overall communication as the context window of an LLM.
[00:01:47] Aaron Grando: Basically the amount of information it can kind of hold in its brain at a given time. Thinking about it from a classical computing standpoint, the context is like the RAM that your computer uses. It’s kind of a short-term memory: what is informing its response at the time that it is responding.
[00:02:04] Aaron Grando: It’s not what is trained into the model, but it’s kind of what you give it on top of what it’s trained on to be able to help you answer a question. So context engineering is the art and science of being able to assemble that context together and produce the right information for the task that the user has at hand, at the right time.
[00:02:26] Aaron Grando: So, as an example, um, if I have an agent that wants to help me write a newsletter, I need some context about my organization, about maybe some past newsletters that have examples of the tone of voice and the format of the newsletter. Um, and also some information about like what I actually want the newsletter to be about.
[00:02:46] Aaron Grando: It could also have maybe some real-time information, like stats from a user dashboard that you include in the email, or news items that are relevant to the newsletter that get automatically pulled in and used in the generation of that response to your user prompt.
[00:03:02] Aaron Grando: So it’s that process before a user even asks, uh, an agent a question of saying, okay, when a user does ask a prompt to an AI agent, what are the pieces of info that are flowing into its short-term memory that help it answer essentially?
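Aaron's newsletter example can be sketched in a few lines of code. This is a rough illustration, not any particular framework; every function name and context field here is an invented assumption:

```python
# Sketch: assembling context for a hypothetical newsletter agent.
# All field names and example data are illustrative, not a real API.

def assemble_context(brand_voice, past_newsletters, topic, live_stats):
    """Combine static, example, and real-time context into one block
    that gets prepended to the user's prompt."""
    parts = [
        "## Brand voice\n" + brand_voice,
        "## Example newsletters (match this tone and format)\n"
        + "\n---\n".join(past_newsletters),
        "## This issue's topic\n" + topic,
        "## Live stats to include\n"
        + "\n".join(f"- {k}: {v}" for k, v in live_stats.items()),
    ]
    return "\n\n".join(parts)

context = assemble_context(
    brand_voice="Friendly, concise, no jargon.",
    past_newsletters=["Issue #41: ...", "Issue #42: ..."],
    topic="Q3 product updates",
    live_stats={"active users": 1204, "new signups": 87},
)
prompt = context + "\n\n## Request\nDraft this week's newsletter."
```

The point is the shape of the workflow: all of this flows into the model's short-term memory before the user's request does.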
[00:03:20] Tessa Burg: That is a great explanation. I think it’s very easy to understand.
[00:03:24] Tessa Burg: One of the things though, when we talk about data flowing in and it’s ingested above and beyond what the training data was, where does the context, like if I’m just a normal, everyday user and I wanted to add context, where does it live and, and how could I do that?
[00:03:42] Aaron Grando: Yeah, so this is a great question. This starts to speak to like full-on AI solutions.
[00:03:48] Aaron Grando: If you’re a user of, like, ChatGPT or any of the off-the-shelf AI solutions, you can think of context as the memory that those programs offer. That’s one form of context engineering that those user interfaces do for you. In ChatGPT, memory automatically condenses past conversations and includes them in context for your further conversations.
[00:04:13] Aaron Grando: If you’re trying to actually insert very specific context into a conversation with an agent, or maybe you want that context to be always accessible when you’re talking to that agent, you can create, like, a GPT and give it some files that it can look up.
[00:04:28] Aaron Grando: Generally speaking, we call that file search. A lot of different types of agents have this capability; our brand agents have this capability. You’ll hear it called RAG, or retrieval augmented generation. That’s essentially the technical way of describing this workflow. But generally speaking, what that means is you have an agent, you have the ability to attach a file to it, and then that agent in conversation has the ability to go out and grab that file, read it, and include it in a response if it’s relevant to what you ask it.
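The file-search / RAG workflow Aaron describes can be sketched in miniature. Real systems use embeddings and a vector store; plain word overlap stands in for that here, and the documents are invented:

```python
# Sketch of retrieval augmented generation: score stored documents
# against the query and prepend the best match to the prompt.
import re

def words(text):
    # Lowercase word set; a crude stand-in for embedding similarity.
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, documents, top_k=1):
    # Rank documents by how many query words they share.
    scored = sorted(documents,
                    key=lambda d: len(words(query) & words(d)),
                    reverse=True)
    return scored[:top_k]

docs = [
    "Refund policy: customers may request a refund within 30 days.",
    "Shipping policy: orders ship within 2 business days.",
]
question = "How many days do customers have to request a refund?"
relevant = retrieve(question, docs)
augmented_prompt = ("Answer using only this context:\n"
                    + "\n".join(relevant)
                    + "\n\nQuestion: " + question)
```

The model then answers from `augmented_prompt` rather than from its training data alone; swapping the overlap score for vector similarity is what production RAG systems do.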
[00:05:00] Aaron Grando: But then on top of that, there are other ways that you can do it, like live data integrations with stuff like MCPs, which we talked about in a previous episode. Those are ways that you can augment context into your generations. That would help pull in data from an external data source and feed it into a given prompt and response.
[00:05:23] Aaron Grando: And then there’s the other way that I like, and let’s call this our first tip: a great way to practice context engineering is just prompt engineering. Prompt engineering is kind of just a basic form of context engineering. The way that I’d recommend folks try this out is, you know, you have a prompt, you ask it a specific thing that you want it to help you produce.
[00:05:47] Aaron Grando: Uh, and then, you know, hit enter twice and then just say in a heading, additional information colon, and then just paste like whatever context information you might wanna attach to it. Maybe that’s a set of bios about people that are in your org, or it’s like a couple emails from people, um, about the project that you’re working on.
[00:06:05] Aaron Grando: That’ll just help the agent know that that information is not part of your main request, and it allows it to pull that information up as it’s generating. That’s a really basic form of context engineering, but it’s essentially very similar to what we do when we’re thinking about context engineering as a product practice.
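That tip can be sketched as plain string assembly, with no particular API assumed; the sample bios and emails below are invented:

```python
# Sketch of the "Additional information:" prompt tip: keep the request
# first, then a labeled section of reference material, so the model
# treats the pasted context as background rather than part of the ask.

def prompt_with_context(request, extra_materials):
    sections = [request, "", "Additional information:"]
    for item in extra_materials:
        sections.append("")
        sections.append(item)
    return "\n".join(sections)

prompt = prompt_with_context(
    "Draft a kickoff email for the Henderson project.",
    [
        "Bio: Dana Lee, project lead, 10 years in UX research.",
        "Email excerpt: 'Client wants the first milestone by May 15.'",
    ],
)
```

Pasting the same material inline, without the heading, is what tends to blur the request and the reference material together.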
[00:06:27] Tessa Burg: Yeah. So when people are not doing what you just described and not giving the LLM context for their prompts, it’s almost like in real life when stuff is taken outta context. You’re just relying on the engine to interpret it through the lens, or experience, or the data it was trained on.
[00:06:49] Tessa Burg: But with context, you’re actually setting a scene. You’re giving it a specific arena in which to think and operate. And this was the nexus of our first agent, the brand agent. I’m sure a lot of people have had this experience: the more you use ChatGPT, if you don’t do any context engineering or pull in data from external sources, the more things start to sound the same.
[00:07:16] Tessa Burg: And it’s, you know, we know that the models are very powerful, but context adds that layer that helps you get that more specific value. Uh, is there any other value to context engineering or having those additional data sources be a part of the set that go above and beyond just the higher quality and more specific answers?
[00:07:39] Aaron Grando: Yeah, I think the higher quality that you just mentioned can be measured in a few dimensions. One, there’s the quality of an expressive response. That’s kind of what you were just saying: all those answers sound the same from ChatGPT. They all use the same set of words; they all have the same tone of voice.
[00:07:55] Aaron Grando: Context is a really great way, context engineering is a really great way, to give an AI model an example of how you want it to sound. In tech circles, these are called few-shot examples. So what we do is we get really good examples of whatever we’re trying to produce, and we just provide that to the AI agent and say, hey, this is a really good example of what we’re asking you to do.
[00:08:24] Aaron Grando: You can refer to this, and you can mimic its tone of voice, the formatting, the overall structure and layout of what’s in it. And that really helps bring exactly what you’re asking the agent to do forward from the agent, as opposed to having to massage it into the format that you want after the fact. Giving it that info upfront, or even better, in the background through context engineering, is really helpful.
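The few-shot technique Aaron mentions can be sketched as a chat-style message list. The role/content schema mirrors common chat APIs but is an assumption here, not a specific SDK, and the example blurbs are invented:

```python
# Sketch: packing few-shot examples into a message list so the model
# sees "good" input/output pairs before the real request.

def build_few_shot_messages(system, examples, user_request):
    messages = [{"role": "system", "content": system}]
    for example_input, example_output in examples:
        # Each prior pair demonstrates the tone and format to mimic.
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    messages.append({"role": "user", "content": user_request})
    return messages

messages = build_few_shot_messages(
    system="You write product blurbs in our brand voice.",
    examples=[
        ("Blurb for running shoes",
         "Light. Fast. Out the door before your coffee cools."),
    ],
    user_request="Blurb for a rain jacket",
)
```

The model treats the worked examples as part of its short-term context, which is usually far more effective than describing the desired tone in the abstract.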
[00:08:53] Aaron Grando: And then I think the other dimension is actual accuracy: facts and factual recall. Obviously, hallucinations are a big thing that everyone is concerned about with these AI systems.
[00:09:06] Aaron Grando: For good reason. Context is one of the biggest tools we have to mitigate the risk of hallucination in a system that we’re building. Essentially, you can look at hallucination as a lack of context on a specific thing. If an agent doesn’t have a piece of information, it’s prone to making something up or trying to fill in a blank that it doesn’t have. The context that you give it
[00:09:32] Aaron Grando: allows it to, one, reference that info and probably pull it up to help answer a query that you have, but it also gives you control over the answer. Like, if it’s answering something in a suboptimal way, it gives you some knob that you can turn to say, hey, instead of saying it like this, you should say this.
[00:09:51] Aaron Grando: It gives you a little bit of a control surface. So yeah, both on the expressive axis and on the accuracy axis of quality, context engineering really helps in those cases. And then overall, when you put all that together: compared to one of the off-the-shelf systems like ChatGPT, a proprietary system, if you’re just wrapping ChatGPT, gets most of its proprietary value right now from the context that you give it.
[00:10:21] Aaron Grando: So if you as a company for the past 15 or 20 years have been collecting data, collecting background information, you know, with this idea that big data is gonna be really valuable to you in the future: one, that’s partly true. But you also have a leg up, because you have context that other people do not have.
[00:10:40] Aaron Grando: And then you have an interface now with the AI agents that allows people to access it in a way that they weren’t able to do before. So context is kind of the means by which you can gain value from that data.
[00:10:51] Tessa Burg: Yeah, I love that example, because I think a lot of people, a lot of marketers, especially on the B2B side, might have this definition of data being leads, being who I’ve contacted.
[00:11:02] Tessa Burg: But really, data goes well beyond that. It’s your expertise, it’s your knowledge store. And when you start to expose owned tools that can be built off of the existing models but are leveraging your context, your expertise, your strengths, your brand value in a very different way, the possibilities are endless on how a brand can start to show up and give really accurate and highly personalized experiences to consumers and buyers.
[00:11:35] Tessa Burg: So, you know, it’s funny, when this started trending, I was really excited too, ’cause I’m like, I feel like now the dots are connecting.
[00:11:43] Aaron Grando: Yeah.
[00:11:44] Tessa Burg: You know, to what the real value of AI is, in how we connect and communicate, and not just in, like, oh my God, am I getting this done faster? You know?
[00:11:52] Aaron Grando: Yeah. I think it’s starting to illuminate really where you as a user can control the AI, and can get some specific value out of it that not everyone else can get.
[00:12:05] Aaron Grando: It lets you do the thing where you can kind of control the AI. It lets you do the Westworld thing, where you pull up your laptop and you turn the knob and adjust how the agent responds to a specific request, or, you know, adjust their attitude up the scale all the way to 10 so they’re really saucy.
[00:12:23] Aaron Grando: But yeah, I think overall the conversation happening is super positive, and it’s also happening outside of technical circles, which I think is great. We’ve talked about this already, but one of the things that we’re working on right now is agents that help us do work internally, and we’re really finding that we have to make this an org-wide responsibility.
[00:12:50] Aaron Grando: One, so that people have real ownership over the agents that help them. But two, you don’t want to ask your engineers to go out and do the context engineering for every other role in the company. So having this conversation exist outside of just the engineering org, I think, is extremely productive.
[00:13:11] Aaron Grando: And hopefully this is something that we see start to manifest in even further places. I would love to see a brand context start to become something that a brand toolkit includes. When we ship a brand style guide or something for one of our B2C clients and provide a logo and a set of guidelines, I could imagine a manifesto about how an agent should behave, or some brand context about how we want a brand’s agent to live out the brand. Because I think a lot of brands will have an agent at some point that does some kind of customer interaction, and having the ability to articulate that is, I think, going to be something in the future.
[00:14:01] Aaron Grando: And yeah, I could see this evolving from, you know, task-based, process-based stuff to embodying a brand, embodying a culture, embodying the way that the different organizations that agents interact with wanna do things.
[00:14:20] Tessa Burg: Yeah. And that also points to how important it is to collaborate across departments, and how the quality of that output
[00:14:30] Tessa Burg: and of that brand experience is really dependent on the data that’s going in. So, you mentioned we did a previous episode on MCP. I have no idea what episode number it was, so if people are interested, you just gotta go to modop.com, click Leader Generation, and look for Aaron’s headshot on another episode.
[00:14:50] Tessa Burg: That is when we talked about MCP, which is Model Context Protocol, and that allows you to pull in data from different sources. For us, and I think for a lot of companies moving this way, those different sources exist internally, but they are also third-party licensed sources and client sources, and there is an art in marketing collaborating with engineering to determine what is the best input.
[00:15:22] Tessa Burg: We don’t want all of the data. We want the best data, the data that’s going to come in and provide the relevant context for a specific agent based on the agent’s objectives. And anytime we talk about pulling in data and making sure it’s high-quality data, I think a lot of guardrails go up.
[00:15:38] Aaron Grando: Yeah.
[00:15:39] Tessa Burg: What are some of the ways that we’re making sure we’re doing this in a safe and responsible way, that we’re not exposing any of our clients’ or even our own proprietary data to the outside world?
[00:15:50] Aaron Grando: Yeah. So in our context specifically, because we are a client-service business, we don’t want one client’s data mixing with another client’s data, or one agent to maybe grab facts from one and leak them out into another. So what we do is we have isolated buckets, essentially, for all of our client data.
[00:16:09] Aaron Grando: We keep everything fully siloed, so that when we are doing RAG, retrieval augmented generation, for, like, a brand agent, we’re never searching multiple different vector stores, which is kind of the technical backend for this, to pull that information out. Everything is always verticalized inside of the context of the organization that we’re working for.
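The siloing Aaron describes can be sketched as retrieval that is scoped to one client's bucket by construction, so a cross-client lookup is impossible rather than merely discouraged. The client names and documents here are made up:

```python
# Sketch: per-client isolated stores. search() can only ever see the
# one bucket named in the call; there is no "search everything" path.

class SiloedStores:
    def __init__(self):
        self._stores = {}  # client name -> list of documents

    def add(self, client, document):
        self._stores.setdefault(client, []).append(document)

    def search(self, client, query):
        # Only this client's bucket is ever searched.
        if client not in self._stores:
            raise KeyError(f"no store for client {client!r}")
        q = set(query.lower().split())
        return [d for d in self._stores[client]
                if q & set(d.lower().split())]

stores = SiloedStores()
stores.add("acme", "acme launch date is March 3")
stores.add("globex", "globex launch date is June 9")
hits = stores.search("acme", "launch date")
```

A real deployment would put a vector store behind each bucket, but the design choice is the same: isolation enforced by the data layout, not by a filter that could be forgotten.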
[00:16:32] Aaron Grando: And of course we’re using this in an enterprise-tier hosting and server environment, using safe APIs from an AI standpoint that won’t train on data and otherwise respect our privacy, and our clients’ privacy especially. And there was one thing that you mentioned earlier that I really did want to hang on to.
[00:16:54] Aaron Grando: The quality of the data that we’re putting into these systems is super important. The difference between a poorly loaded agent and a well-loaded agent is everything. There’s more risk to having a badly loaded context than there is to just having no agent at all.
[00:17:20] Aaron Grando: What can happen if you have, say, an incorrect fact in a piece of context that you load into an agent? A couple of users go in there, they develop a couple documents that wind up pulling that incorrect fact into their workflow, that work goes out. Ouch. We don’t want that to happen.
[00:17:38] Aaron Grando: Obviously that’s bad, but then maybe that work gets fed back into the agent, and now all of a sudden you don’t have bad context in one place. You have it in multiple places, and from there it can just spread. This is something that people call context poisoning. You really work hard to try and avoid this as much as possible.
[00:17:57] Aaron Grando: That’s really one thing that you have to work out. And then the other thing that’s important, when we’re talking about enterprise datasets and full customer databases and stuff like that, is that we can’t overload the context. We have real technical limitations, but also just practical limitations, on how much data is actually useful for the agent to use to generate.
[00:18:20] Aaron Grando: It’s very possible to give it way more information than it actually needs, very possible to confuse it, very possible to mix up the state of the facts it’s drawing on. You could think about a situation where maybe you’re working on a brand, and they had a campaign last year and a campaign this year, and you want it to evoke the latest campaign headlines.
[00:18:52] Aaron Grando: So if you ask that question, there’s a chance, because you’re asking for campaign headlines, that it’s gonna turn up last year’s headlines, and obviously those are out of date. So there’s risk. This is kind of where the engineering part of it comes into play. It’s not just grabbing everything that you can possibly put your hands around from your dataset, putting it into the agent and saying, good luck.
[00:19:16] Aaron Grando: It’s this kind of art of assembling the right context for the right ask at the right time.
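The selection problem Aaron describes, preferring this year's campaign over last year's and not overloading the context window, can be sketched as a ranking step with a budget. The scoring rule and the character budget here are arbitrary stand-ins for whatever a real pipeline would use:

```python
# Sketch: pick the most relevant, most recent context snippets and stop
# at a budget instead of loading everything into the agent.

def select_context(query, snippets, budget_chars=60):
    """snippets: list of (year, text). Prefer relevant, then newer."""
    q = set(query.lower().split())
    def score(item):
        year, text = item
        relevance = len(q & set(text.lower().split()))
        return (relevance, year)  # relevance first, recency breaks ties
    chosen, used = [], 0
    for year, text in sorted(snippets, key=score, reverse=True):
        if used + len(text) > budget_chars:
            break  # context budget exhausted; leave the rest out
        chosen.append(text)
        used += len(text)
    return chosen

snippets = [
    (2023, "campaign headline: Taste the Thunder"),
    (2024, "campaign headline: Quiet Power, Loud Results"),
    (2024, "office address and parking instructions"),
]
picked = select_context("latest campaign headline", snippets)
```

With recency as a tiebreaker, the 2024 headline wins over the 2023 one, which is exactly the "out-of-date campaign" failure the ranking is meant to prevent.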
[00:19:24] Tessa Burg: We’ve definitely covered a lot, so I’m gonna recap, because my next question’s kind of big and I wanna ground ourselves in where we started. We know that when context engineering occurs at any level, it’s going to increase the value of the output; it’s gonna be more accurate.
[00:19:40] Tessa Burg: And you can do that through prompting with context, attaching files, following templates that set the LLM into a role. And then you can also do that by partnering with engineers or with data architects to understand: how do we add the right kind of high-quality data, either through a combination of external resources using MCP, or well-labeled, well-structured files on the back end of an agent that is built off of a core model?
[00:20:12] Tessa Burg: And that will get you very far. And even in the description of that, you can hear a few different roles. As we said earlier in the episode, this conversation has begun to unlock what’s possible, and not just what can be done faster. We’re getting into quality. We’re getting into opportunities for companies to really leverage their own data and expertise to monetize differently, to show up differently, to create different experiences that are more personalized.
[00:20:44] Tessa Burg: But it’s also starting to help us move into the next stage, which is automation. So if this is higher-quality output, then how do we start to take the next step and have agents actually do something and work on our behalf?
[00:21:05] Aaron Grando: Yeah. So one element of context that we haven’t really talked about yet is tool context. I think this is becoming more and more relevant as the agents that we’re interacting with and building have a lot more capabilities. Part of the context that we give an agent is instructions on how to do the things that we want it to do. You could think about those as kind of like SOPs:
[00:21:28] Aaron Grando: okay, I want to break down my task on a step-by-step basis and understand the ins and outs and what I expect at the end. But then there are also the literal tech tools that we give the agents: MCPs, other agent tools like image generation, the ability to go and make a request out to an API that performs an action, like sending an email or reading a calendar, all of these different tasks.
[00:21:58] Aaron Grando: Eventually, all of these tasks are going to have some kind of hook that allows you to connect an AI agent to interact with these systems. We’re in a real in-between period right now, where we have the agents and we have all these functionalities, but we don’t have the connective tissue for them.
[00:22:14] Aaron Grando: But yeah, this ability to have agents do things, and the ability to automate those decisions: good context makes sure that, one, the agents are making good decisions, or the best decisions they can make if they are automating things, and two, that they actually have the tools to go out and perform actions on your behalf.
[00:22:35] Aaron Grando: And part of the context engineering is instructing agents on: what are the tools you have? How do you use them? What are the formats of the data that they expect? If the user hasn’t provided the right data, what are some of the right questions that you need to ask the user to get the right data?
[00:22:53] Aaron Grando: So there’s kind of a funky, almost social intelligence being developed right now, both on the side of the AI and on the side of all of us, where we’re learning to interact with the agents, but they’re also learning how to interact with us, through context engineering and through how we tell them to work with us.
[00:23:15] Aaron Grando: So yeah, when we’re talking about automation, there’s a lot of context engineering work going into tool development and tool instructions.
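Tool context like Aaron describes can be sketched as a registry that renders tool descriptions into the agent's context and generates follow-up questions for missing inputs. The tool names and fields below are invented for illustration; real agents use a provider's function-calling schema rather than free text:

```python
# Sketch: tool context as (1) a rendered list of available tools and
# (2) a check that asks the user for any missing required inputs.

TOOLS = {
    "send_email": {
        "description": "Send an email on the user's behalf.",
        "required": ["to", "subject", "body"],
    },
    "read_calendar": {
        "description": "List events for a given date (YYYY-MM-DD).",
        "required": ["date"],
    },
}

def tool_context():
    """Render the tool list as text an agent could be given."""
    lines = ["You can use these tools:"]
    for name, spec in TOOLS.items():
        lines.append(f"- {name}: {spec['description']} "
                     f"(requires: {', '.join(spec['required'])})")
    return "\n".join(lines)

def check_call(tool, args):
    """Return questions to ask the user for missing required fields."""
    missing = [f for f in TOOLS[tool]["required"] if f not in args]
    return [f"What should the {f!r} be for {tool}?" for f in missing]

questions = check_call("send_email", {"to": "[email protected]"})
```

The two halves map onto Aaron's point: `tool_context()` tells the agent what it can do, and `check_call()` encodes "if the user hasn't provided the right data, ask for it" instead of guessing.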
[00:23:24] Tessa Burg: I am so excited, as we continue to leverage these agents, not just to do work on our behalf, but to give us different types of insights and little ahas that, because we certainly can’t consume all this context at once, we
[00:23:43] Tessa Burg: wouldn’t have realized without them. And I think that as we continue to move to where agents can become those points of inspiration, become those points of being an assistant, we’re in a better place with AI than just continuously looking at what does this replace, who does this replace. I keep thinking, this is where we really have the opportunity to say, let’s show up differently.
[00:24:06] Tessa Burg: Let’s be more productive, and higher quality, allowing us to do things we weren’t able to do before.
[00:24:15] Aaron Grando: Yeah. On that note of thinking, hey, I don’t want this to replace me, obviously that’s a concern with all of this technology that is coming out right now.
[00:24:27] Aaron Grando: My point of view is, grab it by the horns. Take it. Build your own context library. This is something that I’ve started to do. It’s super powerful, and it’s pretty easy these days with GPTs. And we just deployed customizable agents on our internal tool that
[00:24:45] Aaron Grando: let us build a library of context, like personal context for agents, that knows about my goals, my personal priorities, the way that I like to format things, the way that I like to work. It knows about the things that I do on a day-to-day basis, so that I can say, hey, I have this thing in front of me today, help me organize my day.
[00:25:07] Aaron Grando: But I have found, and I would definitely recommend to anybody that’s thinking about how they grab this by the horns, to go and start to think about articulating clearly what you do every day when you go to work, and how you think about things. One, I think that’s just a good practice for the AI. But two, as a human, it’s really nice to take a step back and look at
[00:25:37] Aaron Grando: the entire process of what you do, and actually where you as a human are an indispensable piece of that. Because you can look at an automation workflow and be like, man, there is no room for a human in this process. But if you actually take the time to go through what you do,
[00:25:59] Aaron Grando: you will find that there are things, there are decisions that are made, there are elements that need you to infuse a bit of humanity into them, that right now AI can absolutely not replace. And putting your context together is a really good way to go through the exercise of discovering that.
[00:26:13] Tessa Burg: Yeah. And I think it opens that door to testing all the different ways to do that.
[00:26:18] Tessa Burg: That gives you the experience and expertise you’ll need to be, I don’t know what the title will be, but a solution architect.
[00:26:25] Aaron Grando: Yeah,
[00:26:25] Tessa Burg: And everything you just described, and how we talked about collaboration, that requires a person, a human, who has some product management skills, who knows enough about how the models work, who knows enough about data and data sources. But most importantly,
[00:26:43] Tessa Burg: what you need to be an expert in is the people you’re serving and the problem you’re solving, and then how you can take the skills you’ve developed, from testing different ways of generating high-quality output via context engineering and connecting different data sources, to serve that audience and solve those problems better.
[00:27:02] Tessa Burg: Yeah, but that’s human creativity and yeah, sure. You can use ChatGPT to give you ideas. But it’s a person who’s ultimately gonna have to evaluate the tech, evaluate the data, align it under the process, bring the people along to help them learn, uh, how am I gonna make the most out of this valuable output?
[00:27:20] Tessa Burg: Is it solving a real challenge and problem? I mean, you and I live this every day, and we’re hiring more people for it. I was just telling my husband, it’s crazy: we’re totally overloaded with creating agents, rolling the agents out, meeting with clients, helping them create their agents. We don’t have enough people on that side, being those solution architects, and some folks haven’t even started their journey of learning what context engineering really means and how to get the most value outta LLMs.
[00:28:00] Tessa Burg: But it really is that simple. I loved your example: the more you start doing it for yourself and testing different things, and take a stab at creating a GPT, the more you’ll start to connect the dots. Because most marketers, I wanna say all, but I guess you’re not supposed to speak in alls and hundred-percents,
[00:28:18] Tessa Burg: especially since I set some of our goals at a hundred percent and later realized it was impossible. But most, like 99.9% of marketers, are already experts in their customers and clients and the people they serve, and that is what can be leveraged when you start to pair it with real skills for building AI-enabled solutions.
[00:28:38] Aaron Grando: Yeah, absolutely.
[00:28:40] Tessa Burg: So we’re at time, but I hope that for the audience, this has clarified why context engineering is getting so much buzz. I hope it’s given you some little anecdotes and questions that you can ask of anyone you’re working with who’s creating an agent, so you can be a part of that process.
[00:28:57] Tessa Burg: Bring your skills, bring your expertise, help them pick the right data to solve real problems, and I hope that you practice it yourself. So, Aaron, thank you so much for joining us again. I know you’ll be on more episodes. If people want to find you and ask you more questions, what’s the best way to reach you?
[00:29:15] Aaron Grando: Yeah, look me up on LinkedIn. I’m Aaron Grando on LinkedIn. I work at Mod Op, so you should be able to find me fairly easily. Um, that’s the best place, um, or you can reach out via email. Um, so I’m at [email protected].
[00:29:29] Tessa Burg: Awesome. And until next time, if you wanna hear more episodes of Leader Generation, you can find ’em at modop.com or anywhere where you listen to podcasts.
[00:29:40] Tessa Burg: And that is it. We’re recording this right before the 4th of July, so I hope y’all had a great 4th of July, and we’ll talk to you again soon.
[00:29:49] Aaron Grando: Have a great fourth of July.
Aaron Grando
VP of Creative Innovation at Mod Op

Aaron Grando, VP, Creative Innovation on Mod Op’s Innovation team, is a seasoned technologist with over 15 years of experience at creative agencies. With a background in strategy, design, engineering, and marketing, Aaron has worked extensively in industries like media, entertainment, gaming, food & beverage, fashion, and technology. At Mod Op, Aaron leads efforts to integrate AI into creative processes, creating tools that connect creatives and clients with insights, spark ideas, and enable new brand experiences. Projects include collaborations with companies like NBCUniversal, Bethesda Softworks, Under Armour, Planet Fitness, Dietz & Watson, and more, focusing on infusing creative strategies with innovative technology to create cutting-edge brand experiences. Aaron can be reached on LinkedIn or at [email protected].