Episode 144

The Future of Work: AI Challenges & Opportunities

Joseph Miller
Chief AI Officer and Co-Founder of Vivun

“As soon as you start using AI to answer important high-stakes questions, it's not enough for it just to be right. If it's wrong, it becomes a liability.”

Joseph Miller

AI isn't just about writing emails faster; it's about capturing and scaling the expert knowledge that actually wins deals.

In this conversation, Vivun's Chief AI Officer Joseph Miller explains why most "just add a chatbot" projects stall, and how building a real "world model" of your business (your terms, processes and judgments) unlocks meaningful results. He breaks down the difference between low-stakes productivity and high-stakes outcomes, and what it takes to get from New York to San Francisco, not just Newark.

You’ll hear practical guidance for marketers, sales leaders and operators: how to codify tribal knowledge, ground AI in your company’s definitions and design iterative experiments that deliver time-to-value.

Highlights:

  • Why dashboards and early SaaS fell short—and what changed with LLMs
  • “World models” vs. generic AI: grounding terms, roles, and processes
  • Tribal/expert knowledge and how to codify it for teams
  • Failure modes of LLMs (ambiguity, confounders, multi-hop reasoning)
  • Low-stakes productivity vs. high-stakes outcomes
  • Labor disruption and hiring implications of AI adoption
  • Scientific method for AI projects: call the shot, test, iterate
  • How marketers and sales can start small and prioritize time-to-value
  • Leading teams empathetically through AI-driven change

Watch the Live Recording

[00:00:00] Tessa Burg: Hello, and welcome to another episode of Leader Generation, brought to you by Mod Op. Today I am joined by Joseph Miller. He's the Chief AI Officer and co-founder of Vivun. We're really excited to get into this conversation and explore the role of expert knowledge in the age of AI. We know that a lot of leaders are exploring, or really struggling with:

[00:00:25] Tessa Burg: What is their role as AI proliferates through their job, through the roles and skills of their teams, and changes the way they take their products and services to market? How do we reimagine how we reconnect with our target accounts, with lead generation and all of the other tools that we have at our disposal, to start to bring

[00:00:49] Tessa Burg: our knowledge to bear, to bring value to those that we serve. Joseph, thank you so much for joining us today. Very excited to hear about your journey and your insight. So let's get into it.

[00:01:02] Joseph Miller: All right. Thanks for having me.

[00:01:04] Tessa Burg: So you are a serial entrepreneur, you have founded many different types of companies, and now you’re at Vivun.

[00:01:11] Tessa Burg: Tell us a little bit about your journey and your role at Vivun today.

[00:01:15] Joseph Miller: Thank you. So I got brought into Vivun through John Bruce, who's another one of the co-founders. John and I actually have a pretty rich history of trying to create companies ourselves, going all the way back to the mid-2000s, really.

[00:01:32] Joseph Miller: We were both out in Berkeley at that time. I'm actually a physicist by training, so I was studying physics and economics there, and then I was working as a nuclear physicist at the national lab, building particle simulations and trying to do predictive analytics and these types of things.

[00:01:50] Joseph Miller: And we were both into tech, so we were trying to build out different things throughout really our entire adult careers. And Vivun came along, and it was one of the projects that actually ended up working out for us. So it was exciting, because when John called me, it was really in his domain.

[00:02:08] Joseph Miller: We started out in presales, focusing in that space, and Matt Darrow, the CEO and co-founder, also has a rich history in presales. By the time that came around, I was pretty deep into building expert knowledge systems and trying to build AIs that represent the processes and ways of thinking of managers and executives and scientists in different places, and all kinds of stuff like that.

[00:02:37] Joseph Miller: I didn't really have a lot of experience in presales, but it was interesting to me for exactly that reason. I had been a musician, I had been a scientist, I'd been a business leader in different places.

[00:02:49] Joseph Miller: And so I had seen that the way you represent expert knowledge, the way you think about codifying best practices and building systems to deliver that value, was really abstract from the domain. So when I saw presales, I said, hey, I don't know anything about this.

[00:03:08] Joseph Miller: That excites me. And we knew what we wanted to do here. We wanted to take Matt's and John's expert knowledge in that domain and codify it into a system that we could deliver as a product. That's how we got going on Vivun initially. And I think it was interesting because in the early days, through most of the early and mid-2000s, there was this proliferation of dashboards. That's what SaaS really was: effectively a bunch of dashboards and integrations and such.

[00:03:44] Joseph Miller: And if you thought about why software was developing that way: a dashboard is a form of representing expert knowledge. You have to know a priori what you are going to show the users, what you are going to deliver, how you are going to frame these charts, what the narratives and the pages are, such that you are actually allowing them to go explore their own data and answer the questions they're trying to make decisions on.

[00:04:10] Joseph Miller: And what was interesting was what happened around the same time we founded Vivun. I think it was founded in 2017, maybe 2016, by Matt Darrow; then John joined a year later, I joined a year after that, and we raised our seed round. While that was happening, I was actually at Bridgewater Associates, working on a team that reported to Ray.

[00:04:35] Joseph Miller: Ray Dalio himself, building an expert system for him. So I was really interested in this way of trying to do exactly what we were trying to do at Vivun, but in a different domain, in finance. And so when we got together, that was basically what happened. That's how we got it kicked off, with a bunch of the team from Bridgewater.

[00:04:51] Joseph Miller: I was very fortunate that a lot of the engineers who worked for me there followed me over into Vivun, and that's sort of how we got it kicked off. But it was always rooted in this idea of expert knowledge systems: trying to represent the best domain knowledge, the best ways of thinking about working through a deal and all of that, and then deliver that up to the user.

[00:05:12] Joseph Miller: And in those days, in 2018 or whatever it was, we didn't have good LLMs. We were actually using some LLMs in our early products, and we have patents on using LLMs from those early days, before they were any good really. So Vivun has always been on the edge of that, but you couldn't engage with that information naturally.

[00:05:34] Joseph Miller: And so the LLM revolution, if you will, in 2020 and 2021, was the big aha moment for Vivun. We had laid the foundation of expert knowledge, and now we finally had a way to engage with it. And that, to me, is really the pivot that not only Vivun but I think every company is going through at this moment.

[00:05:54] Tessa Burg: Yeah. And you said a couple of things that are really big unlocks, especially when we think about the pre-sales change in behavior that's happening right now. We work with Forrester, and I feel like they say that all the time. One of the studies they released shows that the buying cycle and the buying process is evolving from seven to eight people involved in a buying decision at an enterprise level to buying networks.

[00:06:26] Tessa Burg: And there are influencers in those networks, and there are different people, and it's really no longer the recipe where you have a single salesperson and everything revolves around that specific salesperson's knowledge of the product, their contacts, their Rolodex. The role of the salesperson is still important, but they can't be on 24/7, and there is no way humans can have

[00:06:57] Tessa Burg: a hundred percent of all the knowledge that the company has, that the engineers have, that the product people have behind it. The amount of expertise that is in any company has almost this opportunity to get out into the world in a very different way. How does Vivun play a role with that salesperson today in getting the most value out of the company's expertise and the other sales leaders and marketing leaders at the company?

[00:07:27] Joseph Miller: Yeah, it's a good question. There's something very fundamental about expert knowledge. Sometimes people call it tribal knowledge in a company: what the company, almost like an organism, has learned that no one person knows all the pieces of. But then there are also business processes, what we call procedural knowledge.

[00:07:50] Joseph Miller: How do you work through a deal? What is the way that the company has established that works for that company's sales cycle, or whatever goal or decision they're trying to make? It may be a sales thing, it may be a hiring thing; it could be any number of things where companies have come up with saying, this is how we go about doing this work.

[00:08:09] Joseph Miller: And there are lots of kinds of knowledge like that. I think it would help if we go back a little bit and just understand that concept right there: there are different kinds of knowledge being held. People tend to think that the only knowledge is the Library of Congress kind of knowledge, like, you know, ChatGPT knows all the books in the Library of Congress.

[00:08:33] Joseph Miller: And so it's very smart, and you're like, that's not really what we mean when we think of intelligence. That's not really what we mean when we think about expert knowledge. It's not just declarative facts: this person is working that deal, or this is the pain point on that project. It's what you do with all of that information.

[00:08:50] Joseph Miller: It's, you know, what does it imply about what you ought to go do? And that word ought is a very, very heavy word. It suggests that there is some world model that pre-exists the declarative knowledge that you go find out when you're working through a deal. And that world model doesn't exist in an LLM.

[00:09:11] Joseph Miller: It just doesn't emerge. People debate whether or not there are emergent properties in LLMs, but expert knowledge as we define it is a world model: you're saying what the things are that are important in this company. It might be in the sales cycle, like what are the concepts that are important about working through a deal, and how are they related to each other?

[00:09:29] Joseph Miller: And what's interesting about companies is that if you force-ranked all of your salespeople in a line, more often than not your best salespeople have a richer world model of what the things are that are important, how they are interestingly connected, and what that implies about working a deal.

[00:09:49] Joseph Miller: They pivot faster. They understand the right questions to ask. When you think about what those types of actions are, it sort of betrays the need for this thing. Knowing what you don't know is a very, very powerful skill. And that is what that word ought means. It means that I ought to know this.

[00:10:09] Joseph Miller: If I want to answer this question excellently, I must know these five things. If I do not know one of them, I can't do it. Now, if you ask an LLM, of course it will just do its best with the four things that it does know, because it has no world model. It doesn't know that, at least in your company or in your business process,

[00:10:27] Joseph Miller: that fifth thing is a necessary condition for you to be able to answer the question excellently or run it through the business process. So think about what companies have: this tribal knowledge, this expert knowledge across their product, their product managers, their engineers, their salespeople.

[00:10:44] Joseph Miller: How do you aggregate all of that and make it accessible to each human in any given role? That is the power that I think AI is really bringing out. If you establish a world model of what's important, you can ask the LLM to go organize the unstructured data of the world and put it into this world model.

[00:11:02] Joseph Miller: Now this can converse with any different role in your company, and that’s really the powerful revolution that I think is happening right now.
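
To make the "ought to know" idea concrete, here is a minimal sketch in Python. The task name, concept list and function names are assumptions for illustration only, not Vivun's schema or product: a small world model lists the concepts a question requires, and any gap becomes a question for the user rather than a guess.

```python
# Minimal sketch of the "knowing what you don't know" check described above.
# REQUIRED_CONCEPTS and the field names are illustrative assumptions.

REQUIRED_CONCEPTS = {
    "qualify_deal": ["buyer", "user", "pain_point", "budget", "decision_process"],
}

def missing_concepts(task: str, known_facts: dict) -> list:
    """Return the required concepts we have no information about."""
    required = REQUIRED_CONCEPTS.get(task, [])
    return [c for c in required if not known_facts.get(c)]

def next_action(task: str, known_facts: dict) -> dict:
    gaps = missing_concepts(task, known_facts)
    if gaps:
        # The agent asks instead of improvising with partial information.
        return {"action": "ask_user",
                "questions": [f"What do we know about the {g.replace('_', ' ')}?" for g in gaps]}
    return {"action": "answer", "context": known_facts}

facts = {"buyer": "VP of Security", "pain_point": "audit overhead"}
print(next_action("qualify_deal", facts))
# -> asks about the user, the budget and the decision process before answering
```

The point is not the code itself but the shape: the completeness check lives outside the LLM, so a missing fact surfaces as a question instead of a confident guess.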

[00:11:11] Tessa Burg: Yeah, I agree. And I think you're hitting on something that is a determinant of success when using AI, as opposed to trying to build an AI model or solution or tool internally just to save time or make things more productive. We hear a lot about the roles of AI today, and a lot of it is efficiency: we let go of X number of people, or we started a new revenue line because we productized something that's using AI. But we were talking, before we started recording, about the study done by MIT. People go back and forth on that study, because it was 54 respondents, but it got a lot of buzz.

[00:11:57] Joseph Miller: Yeah.

[00:11:57] Tessa Burg: And there's no doubt that we're in a cycle of a ton of news media giving a lot of attention to headlines where things are failing and people are getting fired. What you've just given, I think, is much more empowering: look at the knowledge you have. That is completely within the power I have as a marketer,

[00:12:24] Tessa Burg: or in any role I'm playing at the company. What is that special sauce? But

[00:12:30] Joseph Miller: Right.

[00:12:30] Tessa Burg: how can we break through the hype and focus on that? And is this different from hype cycles in the past, with other technologies that have launched?

[00:12:44] Joseph Miller: Yeah. This paper has made a ton of rounds.

[00:12:49] Joseph Miller: And I think the reason it's catching on is that there's this deep suspicion amongst business leaders that this is like the dot-com bust, that there is a bubble happening. And I think that's probably collinear with a lot of market hype that is happening as well.

[00:13:06] Joseph Miller: So there are confounding factors boiling in our minds, and we're asking ourselves, is this thing real? Is AI actually going to do labor substitution? Are we actually going to be able to replace people with it? My answer to that is: absolutely, it is already happening.

[00:13:25] Joseph Miller: The first thing in the labor disruption is that it's a little bit like population collapse. There's a lot of population collapse demographically across the world, and people tend to think, oh, disaster means there's going to be a mass death event, and that's not the way populations collapse.

[00:13:42] Joseph Miller: That's not the way the labor market is going to collapse either. It actually collapses through a giant decrease in births. In the business world, that would be akin to: we are not going to hire as many people. Already, I know a lot of business executives doing that. They have plans that say, I've already reduced my hiring budget over the next two to five years by 50%.

[00:14:06] Joseph Miller: Well, that's 50% less labor that is expected to come online from humans over the next five years. That is how it's actually going to happen. It's going to happen through this attrition-and-not-replacement mechanism. And that will also give the AI time to get more practical and overcome the, I think,

[00:14:30] Joseph Miller: rightful critiques of the MIT study. And on that point, I think the best way to understand why businesses are failing to deliver practical value out of AI is not that the hype is not real. I think the hype is very real.

[00:14:51] Joseph Miller: This is actually happening this time. This time is different. For people that have worked in AI for their careers, like I have, it is very apparent that this time is different. The tools that we have available to us are just enormously more competent. Now, it's not AGI, right?

[00:15:12] Joseph Miller: That's what people are expecting: that I can just throw a wrapper around ChatGPT and ChatGPT can go sell my product. And you're like, yeah, that's not going to happen. But if you understand why that doesn't happen, then you actually understand that all the technology exists for it to happen.

[00:15:29] Joseph Miller: You're not actually waiting for another innovation. You just have to architect it differently. And to leaders asking, well, what is the right way to architect it, I would say: look at the failure modes of these projects, the 95% of projects that are failing, and within your own company or your own efforts, look at why they are struggling.

[00:15:50] Joseph Miller: Look at the failure modes of those, and they will give you the clue of what's going on. Typical failure modes of LLMs would be things like ambiguity in language. Hidden confounders: something causes both things, and the LLM doesn't actually have a world model that can reason causally, so it doesn't know that there's a third option going on.

[00:16:14] Joseph Miller: Multi-hop reasoning: why can't it do that? Why does it seem like it's thinking, producing a chain of thought that looks legitimate, but then can arrive at very bizarre or very absurd conclusions? When you break that down, this takes a little bit more philosophy, but to bring everybody back to their Logic 101 courses:

[00:16:38] Joseph Miller: There are basically three things you need in order to accept a conclusion. The first is that you have to have very clear terms; you have to know what you mean when you say a particular word. You combine those terms to form premises, and your premises have to be true, right? So there's some truthiness to a premise. And then you can combine multiple premises together in a valid way.

[00:17:00] Joseph Miller: This is where fallacies come about, right? But if you have a valid construction of your premises, then you have to accept the conclusion. So that's how you get to truth. So now when you look at LLMs and ask, well, how come they arrive at all of these crazy things? You can point to one of those three things.

[00:17:16] Joseph Miller: Ah, it thinks the word means something different. This happens in sales and domain knowledge all the time. For example, I always use this one: if you ask ChatGPT-5 who the user of my product is and who the buyer is, it will often confuse those two things. It thinks they're the same thing. Well, of course it does, because in most cases the buyer is the user of the product.

[00:17:40] Joseph Miller: But in sales, and B2B sales especially, those are two different people. They're very, very often in two different departments. So we mean these things differently. What is a saboteur when you're running a deal? Who is your champion? These are terms that have domain-specific meaning and jargon to them, which the LLM will often get wrong, because that's not the overall representation of that term's use in the training data.
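
As a concrete illustration of grounding terms, here is a minimal sketch. The glossary entries and function names are made up for illustration, not Vivun's implementation or any specific vendor API: the company's own definitions are injected ahead of the question so the model cannot fall back on the generic usage of "buyer" and "user".

```python
# Minimal sketch of term grounding; the glossary entries are illustrative.

GLOSSARY = {
    "buyer": "The economic decision-maker who signs the contract, often in procurement or finance.",
    "user": "The person who operates the product day to day, often in a different department than the buyer.",
    "champion": "An internal advocate who actively sells on your behalf inside the account.",
    "saboteur": "A stakeholder working against the deal, openly or quietly.",
}

def grounded_prompt(question: str, terms: list) -> str:
    """Prepend the domain-specific definitions the question depends on."""
    definitions = "\n".join(f"- {t}: {GLOSSARY[t]}" for t in terms if t in GLOSSARY)
    return ("Use these definitions exactly as written; do not substitute general usage.\n"
            f"{definitions}\n\nQuestion: {question}")

print(grounded_prompt("Who should receive the pricing proposal?", ["buyer", "user"]))
```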

[00:18:08] Joseph Miller: So your terms can get very ambiguous. That's one reason it can fail. The second is that premises can be false, because again, you have domain-specific things that are about your little niche. Security is a good example: a lot of times people will have a product that does monitoring, like security monitoring, and then a separate product that will do prevention.

[00:18:37] Joseph Miller: These are different things, but if you ask an LLM, it will think, well, obviously if you're monitoring risks, you're trying to prevent them, and so it thinks these are the same product. And you're like, they're not the same product. These are different things. They're different SKUs, different line items. They sell to different people in different ways.

[00:18:53] Joseph Miller: So LLMs get confused about that. Then of course there's this multi-hop reasoning problem, which has to do with the nature of the way LLMs quote-unquote think, which is autoregressive. It's next-token prediction, and sure, there are non-linearities, there's all kinds of stuff about the context of these things, but the reality is that it's forming structures.

[00:19:14] Joseph Miller: It's forming arguments, and if it gets one of the arguments wrong, that argument is an input into the next argument. So I always use an example: it's like if you leave New York on a flight to San Francisco and you're a degree off. Well, if you're only actually flying to Newark, then yeah, you'll probably still make it.

[00:19:34] Joseph Miller: It'll be fine. But if you're flying to San Francisco, you're going to land in Alaska. You're going to be way off by the time you get there. So the longer the reasoning chain is, the worse this thing will do. And often, in the case of very human endeavors, sales is a very human-nature thing. It's a very ambiguous, what we call high-entropy, domain.

[00:19:53] Joseph Miller: There are lots of little edge cases; everything is basically an edge case. You're going to have to do a lot of chained reasoning, and if you just ask the LLM to do it for you, you're going to find yourself in Alaska. These sorts of things kind of betray what is actually going on in the language model and why it's not actually reasoning.
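
The New York-to-Alaska picture can be put in rough numbers. The figures below are my own illustration, not from the episode: if each hop in a reasoning chain is independently right with some probability, end-to-end reliability decays geometrically with chain length.

```python
# Back-of-the-envelope illustration of compounding error over reasoning hops.
# The 95% per-hop figure is an assumed number for illustration only.

def chain_success(p_per_hop: float, hops: int) -> float:
    """Probability the whole chain is right if each hop is independent."""
    return p_per_hop ** hops

for hops in (2, 5, 10, 20):
    print(f"{hops:>2} hops at 95% per hop -> {chain_success(0.95, hops):.0%} end to end")
# A short trip (a couple of hops) mostly lands on target; a long one mostly does not.
```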

[00:20:16] Joseph Miller: It's not what we mean when we say reasoning. We mean, you know, from a causal framework, or from some sort of framework where this implies this and that implies that. There's no such thing in an LLM. There is no world model to imply the next thing.

[00:20:33] Joseph Miller: It's just a probabilistic model. So when you realize that, you say, ah, the solution then is to build that model, to have this world model outside of the LLM, and then have the LLM integrate with that world model. We call it grounding: we ground the LLM in the world model.

[00:20:54] Joseph Miller: And then the world model handles the reasoning. So if something changes in this little node that's connected to this other node, a change here implies a change there. Or if we have information about this node and we don't have information about that one, then it implies you have a question.

[00:21:11] Joseph Miller: The LLM, or the agent, should have a question if it knows that it needs to know something and that it doesn't know it. That's a good definition of a question. So now your agent can be like, ah, I need to ask the user this thing. And now you're getting very close to what a real human would do.

[00:21:27] Joseph Miller: This is the way we think, this is the way we operate in businesses. So now you're getting very close to that idea of: this is how labor substitution will eventually happen.
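
Here is one way to picture that grounding architecture, as a minimal sketch with invented node names (not Vivun's data model): the world model is a small dependency graph outside the LLM; new information flags connected nodes for review, and any node we need but have no value for becomes a question.

```python
# Minimal sketch of a world model as a dependency graph outside the LLM.
# Node names and structure are illustrative assumptions.
from collections import defaultdict

class WorldModel:
    def __init__(self):
        self.values = {}                  # node -> known value (or None)
        self.requires = defaultdict(set)  # node -> nodes it depends on

    def add_dependency(self, node, prerequisite):
        self.values.setdefault(node, None)
        self.values.setdefault(prerequisite, None)
        self.requires[node].add(prerequisite)

    def set_value(self, node, value):
        """Record new information; return downstream nodes to re-check."""
        self.values[node] = value
        return [n for n, deps in self.requires.items() if node in deps]

    def open_questions(self):
        """A node we need but have no value for is, by definition, a question."""
        return [n for n, v in self.values.items() if v is None]

wm = WorldModel()
wm.add_dependency("recommended_next_step", "champion")
wm.add_dependency("recommended_next_step", "decision_timeline")
print(wm.set_value("champion", "Director of Sales Engineering"))
# -> ['recommended_next_step']  (a change here implies a change there)
print(wm.open_questions())
# -> ['recommended_next_step', 'decision_timeline']  (what the agent should go ask about)
```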

[00:21:36] Tessa Burg: Yeah, so I love this answer. I'm going to break it apart a little bit for the audience, sure, because I know the majority of us are marketers. But if I were to pull out some really important nuggets: you highlighted three different areas of challenges and problems.

[00:21:55] Tessa Burg: New challenges and problems we haven't had to solve in the past are now present when we look at how we go to market. So if we want to use an AI tool, it doesn't mean give all the salespeople ChatGPT, and that will help them with writing emails and cleaning up their pitches, and maybe some other gen AI that will do a PowerPoint.

[00:22:19] Tessa Burg: We are talking about: if we want to get the most value, and we don't want things to fail, we have to figure out what that world model is, which is logic and computation against knowledge, that we need to pair with the AI. And I think what we've seen is a lot of people trying to measure the productivity gains that they're getting

[00:22:42] Tessa Burg: from issuing Copilot or ChatGPT, and being like, oh my gosh. Because when you first use it, you go through this bell curve of excitement. You're like, I don't really know if I'm using this right or if I'm getting any value. And then you do a little prompt engineering.

[00:23:00] Tessa Burg: You're like, holy crap, it used to take me an hour to write these emails and now I'm doing it in minutes and just copying and pasting, and you get really excited. And then someone asks you to take on more work, or you are in a department where people are being eliminated, mostly prematurely, because they're just looking at Excel sheets and trying to cut costs. And you're like, well, crap.

[00:23:24] Tessa Burg: Now I have more work, more things to do. Did I really save time? And then we get stuck in this arena. Companies need to look at what the new problems and new challenges are that we need to solve for, and bring on the right people, because that's where we also fall down. I think a lot of the listeners honestly might not have understood

[00:23:51] Tessa Burg: very much of what you just said, or been able to follow it technically. But I think the important takeaway is that that's why you need to partner, and when you stand up durable teams working against an initiative, they include people from different disciplines and different skills, but for sure include scientists and data scientists and data folks who maybe haven't had that frontline experience, as well as your frontline team members who have that knowledge, who are going to be a big part of what you want to codify.

[00:24:27] Tessa Burg: And yeah, I absolutely love that, because of the phrase you said earlier: knowing what you don't know. I mean, that sinks so many ships, when you don't acknowledge that there's a lot you don't know.

[00:24:45] Joseph Miller: I mean, people are using ChatGPT and they're getting lift out of it helping them write emails, or rewrite an outline, or just get going. Like every author says, the first word is the hardest, right?

[00:24:55] Joseph Miller: So just getting going, I think there's a lot of help that GPT, or whatever LLM you want to use, can give you. But that's all low-stakes work. That's the point. Yeah, it's getting you from New York to Newark, but it can't get you to San Francisco.

[00:25:14] Joseph Miller: And the reason that you're hired is not for the low-stakes work. You're hired to do high-stakes work, and that requires a much higher bar of knowledge, of technical feasibility, all that type of stuff. You've got to know what you're trying to do. You have to have hypotheses that you're going to go out and test.

[00:25:31] Joseph Miller: You need to execute in a particular way, because you might have a great idea and then the execution is bad. There are many, many more failure modes once you start doing work that matters. And I think that's the point that is coming around: LLMs can't really help you

[00:25:50] Joseph Miller: get from New York to San Francisco. It's too high-stakes, and there are just too many ways that it will fail to help you along that way. And then when you start to think, okay, well, what is the gap in between there? You're like, ah, that's the expert knowledge. That is what the humans are good for.

[00:26:05] Joseph Miller: Being able to say what the right questions are to be asking, not just producing simulated reasoning responses that look like they're pretty good, but then when you look at them pretty closely, you're like, oh man, this is a mess. This isn't accurate, or this isn't actually all that helpful for doing anything meaningful.

[00:26:25] Joseph Miller: So yeah, I think that's where all AI work is going. People are realizing this, and it feels like a hype bubble popping, because AIs have not done well at the practicality of doing meaningful, high-stakes work. But what I'm here to say is that they can do it.

[00:26:46] Joseph Miller: It's just that the way I'm discussing it now has not been the popular approach to building these systems. But people are coming around to it, because that's how you get it to work. So that is happening, and then your CMO will be able to ask, of his agent,

[00:27:02] Joseph Miller: any deep technical question about the software it's selling. Like, we have Jarod, our CMO at Vivun, and he's excellent with this product, right? And you can ask these things, like, is this an accurate way of representing the value prop? And the agent can give him a response that we feel is good and not a liability, right?

[00:27:23] Joseph Miller: Because that's the other thing: as soon as you start using AI to answer these important, high-stakes questions, it's not enough for it to just be right. If it's wrong, it becomes a liability. That becomes a real risk for businesses. So you've got to know that this thing is grounded in the expert knowledge, in the right way of thinking about the world, and in your domain specifically.

[00:27:46] Tessa Burg: Yeah, I really like that example. I was just trying to think of where marketers can start in partnering and making sure that they're setting up AI initiatives that are going to work. And I went back to something you said earlier about the dashboards, because everybody wants to see the data. They want to see what's working and what's not.

[00:28:11] Tessa Burg: But the problem that has always existed with dashboards is that it's hindsight: hindsight is how we measure these outcomes. And if you start to look at what didn't work, layer on knowledge plus hindsight results, and ask the questions of what happened. I always say, look at the 80%. So 20% seems pretty good from a pipeline conversion standpoint.

[00:28:42] Tessa Burg: But what happened with the 80%, and where are there opportunities? Get qualitative and quantitative data. Ask people: what happened with this 80%? What knowledge can we codify out of our losses, and where could we spend some more time trying to uncover what we don't know and bring in the right people who might have the answers or might have a different perspective?

[00:29:11] Joseph Miller: Yeah.

[00:29:12] Tessa Burg: What are some ways that you help people kind of get off the ground and prioritize? Like, where do we start? Who do we need in the room?

[00:29:19] Joseph Miller: This is why I think everybody should be like a scientist. There's a really good, just basic process, the scientific method, that applies universally to the way we should be running businesses as well.

[00:29:34] Joseph Miller: And we do the same thing at Vivun, and at any company I've ever been part of. It's just to say: when you start out on a thing, you've got to call a shot. We call it a hypothesis in science, but it's like, you've just got to call a shot. You can't just go out and do something random.

[00:29:50] Joseph Miller: You have to say, this is what our intention is, this is what we think will happen, this is what we're going to go do. And then, to borrow some lessons that I learned from Bridgewater: you start out with your goal, and then you go into what we call a machine, what Ray used to call a machine. There's a process that you're going to run, and there are people that are going to run that process, and you just have to ask, did I get the process right?

[00:30:13] Joseph Miller: Do I have the right people in this machine? That machine turns and produces an outcome, and you can just compare it to what you thought you'd have. And the things that you can change are the people or the process. When you're really rigorous about that, that's just the scientific method as well.

[00:30:26] Joseph Miller: So it's not like Ray invented it, but it is a useful way of thinking about how you should be running these projects, especially in AI, because there's a lot of black-box nature to things here. If you can't do that, you have to ask yourself, what are you even doing?

[00:30:41] Joseph Miller: How can you improve if you can't go through that process, that step, and then compare your outcomes and then do something about it to affect it deterministically? What can you do? I see a lot of people just using ChatGPT, and you're like, you got it wrong. What are you going to do next?

[00:31:00] Joseph Miller: You don't know why it's wrong. You can't control the LLM. You can prompt it differently, but the prompt is only one very small piece of this giant machine that is the LLM. So what are you going to do? I think it's about making sure that people understand, especially in this world where AI is really starting to be a big part of a lot of the machines that we run in business,

[00:31:28] Joseph Miller: that you've really got to be careful to break it down and ask: do I understand exactly what's happening here? Can I make a change that will increase the probability that my outcome is the shot that I called? Is this a deterministic function, or am I just actually kind of out here being random?

[00:31:48] Joseph Miller: Because more often than not, the thing that I see the most is that people just haven't spent the time to really define this process. And so, as a result, I think it's really tough for them to learn.
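
To make "calling the shot" tangible for an AI project, here is a minimal sketch; the field names and example values are invented placeholders, not real results: write the hypothesis, the process (the "machine"), and the expected outcome down before the run, then record what actually happened and what you will change, the people or the process.

```python
# Minimal sketch of a called shot for an AI experiment; all values are
# illustrative placeholders, not real results.
from dataclasses import dataclass, field

@dataclass
class CalledShot:
    hypothesis: str      # the shot you are calling
    machine: str         # the process and the people who will run it
    expected: str        # what you think will happen
    observed: str = ""   # filled in after the run
    next_changes: list = field(default_factory=list)  # change the people or the process

    def record(self, observed, next_changes):
        self.observed = observed
        self.next_changes = list(next_changes)

shot = CalledShot(
    hypothesis="Grounding the agent in our glossary cuts buyer/user mix-ups",
    machine="Sales ops reviews 50 transcripts; agent answers with and without the glossary",
    expected="Mix-up rate drops by half",
)
shot.record("Mix-ups dropped, but less than expected",
            ["tighten the 'buyer' definition", "re-run the same 50 transcripts"])
print(shot)
```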

[00:32:01] Tessa Burg: Yeah, I agree. And you're hitting on the importance of being iterative, even starting small.

[00:32:07] Joseph Miller: Yep.

[00:32:08] Tessa Burg: So when you're looking at data and seeing all these big opportunities, prioritizing based on time to value, on what you understand, and on what you can impact is really important. Because I don't think people take the time to get into those steps in the process. They're like, I did think about it.

[00:32:28] Tessa Burg: I looked at this report, I heard this feedback. It actually reminds me a lot of even some very good salespeople: when I was in product, they'd come back and they're like, if we just added these two features, everyone's going to say yes. I can sell this like hotcakes. You've just got to do these two features.

[00:32:44] Joseph Miller: I've heard those same talks.

[00:32:46] Tessa Burg: Yeah. And as a product person, you're like, okay, what's the big challenge that's solving? How would they define success in the sales rep role? Sometimes we'll race ahead and make some assumptions. But what you really have to do as someone who is leading an initiative, leading product development or using AI, is make sure you're very clear on the core problem or challenge, and that you've gotten it down small enough that you can start to test and iterate in a short amount of time to prove out the value.

[00:33:19] Joseph Miller: Yeah. And as a last point on this, I guess: when I talk about the importance of building this world model outside of the LLM, it is the same thing we're talking about. There's a called shot, and it is: what is the reasoning that I have codified into this world model?

[00:33:38] Joseph Miller: So when I say buyer versus user, I have defined those terms in this graph. And then I can do something about that. That gives me an enormous amount of power, because then, if the agent makes the mistake that confuses the two, I can go back and say, ah, why did it confuse them? Oh, my definition of these two things is too similar.

[00:34:00] Joseph Miller: Let me be more precise about what I mean. This is the same thing that happens amongst humans. We get in fights with our partners or whatever, and people always like to throw this term out: oh, it's just semantics. And you're like, the irony of that is unbelievable.

[00:34:16] Joseph Miller: Yeah, of course it's just semantics. Semantics is the meaning of words. It is the foundation of our thought. So if you are not being clear about your semantics, yeah, you're going to fight, you're going to be confused, you're going to talk past each other. So it's really, really important to be able to

[00:34:31] Joseph Miller: define these things, so that when you get it wrong, you can go back and say, ah, it's because this is murky. Let me define these better and separate them more, so they're more clear about what I mean, or my intentionality is more clear. That will help the LLM, and it also, you know, helps your personal relationships.

[00:34:52] Tessa Burg: I agree. So, we're at time. That flew; we went over 30 minutes, but it was good time. And I hope that everybody's walking away understanding a few really key things. One, the hype is real. AI is disrupting the way that we do work, but that doesn't mean there aren't challenges to solve. And I hope that, coming out of this conversation,

[00:35:15] Tessa Burg: you feel empowered to be bold, but take the time to do those definitions, to get deep into the process, pull in the right people, and, I love that phrase, call the shot. Be clear, so that you are leading efforts that not only give people the opportunity to build new skills and solve different types of problems, but empower sales teams and marketing teams in ways that have never been possible before. Joseph, before we end, is there anything else you want to add or close out with?

[00:35:46] Joseph Miller: I would just say that I think there's a lot of anxiety around the future of what AI is going to be, how it's going to disrupt our companies, how it already is disrupting our companies. And I think that's a valid thing to be

[00:35:58] Joseph Miller: concerned about. And I think that especially for the business leaders that are listening to this podcast, it is a thing that we should be having in the front of our minds, and we should be actively thinking: how do we be empathetic through this process? But the process is going to happen. This is real.

[00:36:14] Joseph Miller: It is really happening. I am seeing it in every domain that I've been blessed enough to touch. It is really, really dramatic and very exciting. But it will require some human leadership, I think, to navigate companies empathetically through this process. So I'd encourage everybody to stay up on the reading, understand where the human values are, and help us shape these things so that the AIs come out more human, and it's less them versus us and just more of us, you know?

[00:36:48] Tessa Burg: Yes.

[00:36:49] Joseph Miller: Oh, and thanks to everybody for listening.

[00:36:52] Tessa Burg: And if you wanna hear more episodes of Leader Generation, you can find them at modop.com. That’s modop.com. And until next time, have a great week, Joseph.

[00:37:02] Joseph Miller: See you guys.

Joseph Miller

Chief AI Officer and Co-Founder of Vivun

As the Chief AI Officer and Co-Founder of Vivun, Joseph Miller, PhD, leverages his expertise in AI/ML and causal inference to build complex agentic systems. He is nationally recognized for his work in AI labor disruption and algorithmic strategies, and has appeared on platforms such as Bloomberg and Nasdaq. Joseph also guest lectures on AI, entrepreneurship and quantitative finance.
