Delve into William Blair’s latest generative AI report, “Navigating the Boom: Confronting GenAI’s Most Pressing Questions.” Featuring insights from technology analysts Jason Ader, Arjun Bhatia, and Sebastien Naji, this episode explores the current state of AI, industry trends, and the future of AI development.
Podcast Transcript
00:21
Chris
Hi, everybody. Welcome back to William Blair Thinking Presents. Today is March 6th, 2025, and we are focused on William Blair's latest gen AI report. It's titled “Navigating the Boom: Confronting GenAI's Most Pressing Questions.” It's a follow-up to the 2023 GenAI primer, which we've actually done a podcast on in the past, with the intention of addressing common questions about GenAI that our research team has heard from investors over the past 18 months.
And so joining me are Jason Ader and Arjun Bhatia, co-group heads of our technology, media, and communications research group, and Sebastien Naji, our equity research analyst on the tech team who focuses on semiconductors and infrastructure systems. So, guys, welcome. Appreciate you being here. Let's kick things off. I figured we'd do maybe a brief overview of what the report is about, then maybe a refresher on what gen AI is and how it differs from other types of AI, and from there take it a bit deeper.
01:28
Jason
Thanks, Chris. This is Jason. I'll start out and then pass it off to the others. In terms of the conception of this report, we really wanted to, as you said, address the common investor questions that we've been getting over the last year and a half or so. It's just been a whirlwind.
Within this space there's been tremendous progress, kind of across the board, on the technology and on the applications. Certainly a lot of money is being invested by the major cloud providers. It's hard to keep up. So we wanted to provide a bit of an update on how we view the space, where the value is, what the risks are, and what we should be looking out for around this generative AI theme. And the technology is very groundbreaking, from the standpoint of its ability to generate new content on its own after being trained on lots of data.
Whereas traditional artificial intelligence was really focused on pattern detection, generative AI is much more powerful than that. It can create new text, new software code, new images, new songs, etc. So it really mimics the creative capabilities of the human brain. And now we're moving into the next phase, I would say, which is more about reasoning or chain of thought, where you think about the sciences and math, where there's some ability to verify a fundamental truth.
These reasoning models will be incredibly powerful in terms of their application in some of those more STEM-type disciplines. So there's a lot going on, and this was an opportunity for us to address some of these overarching questions and, really, to help investors navigate through some of the noise.
03:59
Chris
Thanks, Jason. Maybe we start with a question that comes up first in the report, one that you say has dominated conversations with investors more than any other, which is, of course: are we currently in an AI bubble? Are we? I guess is my question to you. Or is the investment in AI justified by the returns?
04:18
Jason
I'm going to answer that real quick, just in terms of the comparison to dot-com, and then maybe Arjun, if you want to chime in. One thing we looked at as part of this report was comparisons to other periods, and the main one we came up with was the dotcom era, the late '90s. Obviously you had the bubble burst.
One of the fascinating things we discovered in the process of that research is just how much value was created during the dotcom decade, even though we all think about the crash. Some of the most prominent and highly valued technology companies were founded in that '95-to-2005 window we call the dotcom decade.
Something like $8 trillion of market value. And I'm not going to mention any names, but there are some obvious companies created then that are household names today.
The other thing with dotcom, I would say, is that valuations were wild. For some of the public companies, it just didn't make any sense; I would call those irrational valuations. And we've seen much more rational valuations for some of the big players within generative AI. So I think that's a big difference.
I think there are also some similarities. Some of the private company valuations have been a little bit bubbly. Whenever you have a new wave like this, there's just a ton of money being thrown at companies, and that's not necessarily healthy. So I would say it rhymes a little bit with dotcom. But we also think it has probably even more disruptive potential than the internet, just because of the cognitive capabilities and its ability to really tackle problems in areas like science and healthcare.
06:26
Arjun
I'll add on there. It's Arjun here. I think with any new tech paradigm like gen AI, there's always going to be a phase early on where you're building all the infrastructure, the costs are high, and the revenue obviously hasn't scaled. So what ends up happening is that investors are making bets on what the world is going to look like in five and ten years. In some cases those multiples look high, and you start making comparisons to bubbles.
But if we think gen AI is actually going to be as transformative as we expect, you can easily make the argument that even at high multiples, these early private companies we're seeing are maybe not in bubble territory; maybe they're just ahead of themselves.
That doesn't mean the revenue can't scale eventually to generate positive returns for investors. I think that's the private side of it. It's always hard to tell whether there's a bubble or not; I think that's kind of what makes a bubble.
But when you look at least at the public companies that are out there, and you compare their multiples from 2019, before even the pandemic started, and certainly before gen AI became mainstream, to today, the multiples haven't actually changed that much.
Maybe we're up 5% or 10%. It's small in terms of the level of multiple expansion. So for the public companies participating in the gen AI boom, the returns we've seen are not pure speculative multiple expansion. It's actually growth in earnings and growth in revenues that has driven the equity returns. Which, again, is telling you there's something fundamental behind it. It's not just speculation. It's not just a bubble.
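Arjun's distinction between multiple expansion and fundamentals can be sketched with a back-of-envelope decomposition. The numbers below are hypothetical, made up purely for illustration and not taken from the report: price is modeled as a P/E multiple times earnings per share, so the total return splits into a re-rating piece and an earnings-growth piece.

```python
# Hypothetical return decomposition: price = P/E multiple x earnings
# per share. All numbers are illustrative, not from the report.

def price(pe_multiple: float, eps: float) -> float:
    """Stock price as valuation multiple times earnings per share."""
    return pe_multiple * eps

p_2019 = price(25.0, 4.00)   # 25x multiple on $4 EPS -> $100
p_today = price(26.0, 8.00)  # multiple up ~4%, EPS doubled -> $208

total_return = p_today / p_2019 - 1   # overall equity return
from_multiple = 26.0 / 25.0 - 1       # small re-rating component
from_earnings = 8.00 / 4.00 - 1       # dominant earnings-growth component
print(total_return, from_multiple, from_earnings)
```

In this sketch the stock more than doubles while the multiple barely moves, which is the shape of the argument Arjun is making: the return is driven by earnings growth, not speculative re-rating.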
08:35
Chris
How is the value in the AI space being distributed? And, you know, without naming companies, where do you see the most untapped opportunities for investors?
08:43
Arjun
So, yeah, it's kind of similar to what I was saying just now, right? We're still in this build phase of the AI cycle. So right now, or over the last two or three years, a lot of the value has still been in the picks and shovels, still a lot of the infrastructure. Because there's so much data center capacity and compute required to power gen AI, you're really seeing it in semis, you're seeing it in networking, you're seeing it in anything that's required for the build-out of data center capacity and compute resources.
You're seeing some of it for the providers also. But I think we're not quite at the point yet where AI is deployed at scale. You see experiments, you see pilots, you see consumers trying out some of the chat-based AI tools. But broad-based usage is not there yet.
So it's not necessarily usage; the value is being created at the infrastructure layer thus far. I think as we fast forward, that will change. Eventually we'll go from this build phase to the deployment phase, when we're all using gen AI-powered apps a lot more. And at that point, the value creation should start to shift a little bit, and you should see the application-layer companies, the companies that are actually building the use cases of gen AI, start to benefit as well.
I would argue we're not quite there yet; it's still this infrastructure build-out that's the main topic of conversation and the main value driver with gen AI. But as time passes, we're getting closer to this kind of shift, where the applications start to contribute value as well to the AI equation.
10:53
Jason
I would just jump in and call out the two, what I would call, horizontal use cases, thinking more about the enterprise side, the application of AI to the enterprise. The two most popular use cases, I would say, would be software development and customer service. I wouldn't say they're fully proven out, but they're progressing rapidly.
So those are some early proof points that this stuff, first of all, works, and second, that it can really make a big difference in your business: if you can develop software much faster, if you can really streamline your customer service and reduce time to resolution of customer service issues, things like that.
So call it the early, early days. In a year or two, we're going to be talking about a lot more use cases, but you've got to start somewhere, and those are two areas where I would say there's a lot of excitement.
And so we are starting to see some value accruing beyond just the core infrastructure, but it's going to take some time. And think about some of the gating factors we saw on the enterprise side. There's just not a lot of skill sets yet.
People need to develop the skills to build this stuff from an application-layer standpoint. So that's still in the nascent phase. You also have a lot of privacy and security question marks, and companies are working through those. Data readiness: people have a lot of data, but how do you prepare that data for AI?
And a lot of that's going to be data that's proprietary to your business. So there are a lot of challenges, I would say, in rolling this stuff out, and that's what we're seeing. It's just taking time. It also requires some process change and behavioral change on the part of users and companies.
So I think we're just seeing the green shoots. This will play out over years.
13:29
Chris
All right, so let's talk a bit about AI scaling laws. Can you first define what they are? And then, are the scaling laws for AI models still holding, or are we reaching diminishing returns with larger models?
13:40
Sebastien
Yeah, this is Sebastien. I can answer that one. Definitionally, AI scaling laws are really an empirical law. They've been a way to describe the very fast progress we've seen in model performance over the last couple of years. And the basic concept is that the more data you throw at the problem, the larger the model is and the more parameters it has, and the larger the compute cluster that's used to train that model, the better the performance will be.
In the early days of gen AI, the first year or year and a half, a lot of the focus in scaling was on the pre-training side. So, training models on larger and larger data sets, using larger and larger clusters to train those models. And what we started to see in the middle of last year was that that pre-training approach was starting to see diminishing returns.
So doubling the compute or the data did not necessarily double the performance. And that, I think, raised questions: are scaling laws starting to diminish, and are we starting to hit a wall? I think the answer that has emerged is that there are two new vectors of growth, or of scale.
One is post-training. Once the model has been built, you can then refine it with things like reinforcement learning from human feedback or, increasingly, reinforcement learning from AI feedback. And that's actually been an area of very strong progress over the last couple of months.
And then the third vector that's emerged is test-time compute, or inference reasoning. That's what models like OpenAI's o1 and o3 are able to do: rather than just spitting out an immediate probabilistic answer, the model spends some time thinking about the problem and refining what might be the best answer to that prompt. And that inference reasoning, or test-time compute, approach has been shown to drive the need for compute 5 to 10 times higher than on the training side, with some vendors talking about 100 times more compute required to run these test-time compute models.
So I think, generally, the answer is that while certain aspects of scaling laws are seeing diminishing returns, as a whole we've found new approaches and new ways that continue to drive that performance improvement and push that scaling law curve forward.
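To make the diminishing-returns point concrete, here is a minimal sketch of an empirical power-law loss curve of the kind Sebastien describes: loss falls as a power law in parameters and training data, on top of an irreducible floor. The functional form is the standard empirical shape, but the coefficients below are hypothetical placeholders, not taken from the report or any particular paper; the point is only that each successive doubling of model size buys a smaller drop in loss.

```python
# Illustrative power-law scaling curve. Coefficients (E, A, alpha,
# B, beta) are made-up placeholders for illustration only.

def scaling_loss(n_params: float, n_tokens: float) -> float:
    """Modeled loss: irreducible floor plus power-law terms in
    parameter count and training-token count."""
    E = 1.69            # irreducible loss (hypothetical)
    A, alpha = 406.4, 0.34   # parameter-count term (hypothetical)
    B, beta = 410.7, 0.28    # data-size term (hypothetical)
    return E + A / n_params**alpha + B / n_tokens**beta

# Double model size repeatedly at a fixed data budget:
sizes = [1e9 * 2**k for k in range(4)]          # 1B, 2B, 4B, 8B params
losses = [scaling_loss(n, 1e12) for n in sizes]  # fixed 1T tokens
gains = [losses[i] - losses[i + 1] for i in range(len(losses) - 1)]
print(gains)  # each doubling still helps, but by less and less
```

Each entry in `gains` is positive but smaller than the last, which is the "diminishing returns from pre-training scale" pattern that motivated the shift toward post-training and test-time compute.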
16:09
Chris
Got it. Thanks, Sebastien. That's good insight into how a large language model's performance evolves with its key attributes, but let's take a step back and talk about these large language models themselves. Specifically, how do open-source LLMs compare to closed-source LLMs, and what are the implications for the future of AI development?
16:23
Jason
The way to think about it right now is that with open source, just like other open-source software, the code is basically freely available to download, and it's effectively not proprietary and controlled by one company.
And you've seen a bunch of different open-source models that have risen up the league tables and are seeing widespread usage, including the DeepSeek model coming out of China. So I think the way we view open source versus closed source is that there's a role for both. It's not a zero-sum game; I don't think it's all going to be open source or all closed source. It's different strokes for different folks, depending on what you're looking for. But clearly, open source is going to have a major role to play. And it's a good thing, especially for startups, because if you're working with open source, you're not paying the tax that you need to pay to the closed-source models, which could make it difficult to scale your business, since on the cost side the model is going to be a big part of the cost for AI startups.
So I would say the startup community is thrilled with what's happening on the open-source side, and we expect pretty rapid development on both open source and closed source. The one factor to consider with open source, relative to closed source, is that there are some voices out there that believe that, from an overall safety and national security standpoint, open-source models could create some challenges, and that with closed source there's just more control, basically, over how the models are created and how they're used. So that's, I would say, something to consider over the next few years. But there's just been tremendous progress on the open-source side.
And DeepSeek has really illustrated what you can do on a tight budget. What I would say, though, is that the closed-source models are still somewhat ahead from a frontier perspective, and we expect that to probably continue. But we see a role for both. Arjun, do you want to talk commoditization?
19:31
Arjun
Yeah. I also think, just to make a point on how these models are being used, it's worth considering how applications are being built. If you build an AI application, most of the time you're not going to be using one model to power it. Depending on the use case and what feature set you're using, most of these applications that are being built are actually multi-model powered, meaning you can have one company building an application for software development, for example, that might be using dozens of different models depending on which part of the development lifecycle we're talking about.
So just like today, where most software applications are multi-cloud, meaning they're hosted on different hyperscalers, AI applications are also going to be multi-model, which is something to consider. And it feeds into this idea that it's going to be harder and harder to differentiate the LLMs themselves, because they're looking more and more similar over time.
It is certainly hard to build a large language model, just because the compute cost is very high, though, as we talked about with DeepSeek, that's going down; the cost curve is getting lower and lower. But still, there's a lot of data required and a lot of compute required, which makes it hard to build. For those companies that have the funding and have built these models, they try to differentiate across speed, accuracy, cost, and reasoning capability. But in general, I think these models will look very similar, and where you differentiate is what you build on top of the models, not the models themselves. So it ends up being more about workflow, and it ends up being about context. Over the medium term, we think the models will look pretty close together in terms of the services they offer and their performance, but it's really what's being built on top of them that will be differentiated.
And each model will have its own pros and cons. I think it'll be an interesting question for some of these companies that started as LLM providers. You're already starting to see them move up the stack, so to speak, and build their own application capabilities, their own AI agents. You're seeing this already, right? They're already trying to differentiate, moving from just providing the core LLM to providing a lot more.
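The multi-model application pattern Arjun describes can be sketched as a simple routing layer: the application sends each request to whichever model best fits the task, so the differentiation lives in the workflow and routing logic rather than in any single model. The model names and the routing table below are hypothetical, for illustration only.

```python
# Minimal sketch of a multi-model routing layer. Task types and
# model names are hypothetical examples, not real products.

ROUTING_TABLE = {
    "code_completion": "fast-small-model",  # latency-sensitive step
    "code_review": "reasoning-model",       # accuracy-sensitive step
    "docstring": "cheap-general-model",     # cost-sensitive step
}

def route(task_type: str) -> str:
    """Pick a model for a task, falling back to a general default."""
    return ROUTING_TABLE.get(task_type, "cheap-general-model")

print(route("code_review"))   # reasoning-model
print(route("unknown_task"))  # cheap-general-model
```

A real system would layer cost tracking, retries, and quality evaluation on top, but the core design point stands: swapping one model for another is a one-line change in the table, which is why the models themselves commoditize while the workflow built around them differentiates.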
22:26
Chris
Got it. Let’s stay on this topic of commoditization for a moment. As you said, we’re seeing these LLM providers moving up the stack, building new applications to help differentiate themselves. But how is this playing out in the enterprise world? Are there any barriers to adoption of AI? And what would you say are some of the most promising enterprise use cases?
22:49
Arjun
I'd say probably the biggest barrier thus far is that this is just so new for everybody. And there are two different lenses to look through here: the enterprise lens and the consumer lens. If we think about enterprise adoption, enterprises are generally risk-averse, and this is a new technology. So they're still trying to figure out: how do we use this? What's the best application for AI? Where do we get the most ROI? As Jason was alluding to earlier, is our data in order, so we can actually power the applications that we want? Because gen AI is only going to be as good as the data that powers it. Do we have budget for this? These are blocking-and-tackling items that are big barriers to adoption. I think there's also the other side of the house, where the tech providers, the tech vendors, are building the products themselves, and they're early in figuring out what they're good at and what they're not good at.
Just like with any product development cycle, how do we get this into the hands of users, and how do we iterate on it? Some of that just takes time.
So there are certainly barriers right now. None of them, I think, are insurmountable; with time, a lot of these barriers will get addressed, definitely on the enterprise side. But the same applies on the consumer side. The applications will get better, users will get more familiar with gen AI, and that's all going to be a positive as we get more familiar with the technology.
24:35
Jason
And I would just jump in and say enterprises will not accept, say, 80% reliability from the answers the models are spitting out, right? Enterprises need something like five-nines reliability. So they're not going to really adopt this stuff until the reliability and accuracy of the responses gets to a certain threshold.
On the consumer side, I think people are more willing to accept less reliability and less accuracy, because it doesn't affect revenue and brand reputation. So I do think we'll probably see more adoption on the consumer side, and I think that's typical with new technologies. Think about consumers at the front end of the adoption curve, with enterprise lagging. But as Arjun said, none of these barriers, we think, are really going to be major roadblocks. I think they'll just be little bumps, and customers and organizations will figure out how to get past them. The whole industry is focused on this reliability challenge right now; we're seeing a lot of startups and a lot of new technologies coming along to help deal with it.
26:08
Chris
The report also covers some of the main regulatory challenges facing the AI industry and how they might impact the development and deployment of AI technologies. Can you guys maybe break that down a little bit for us?
26:21
Arjun
I think the government, the public sector, is still trying to figure out exactly how, and if, to regulate AI. I don't think there's a clear answer yet on the best way to go about this, and I think it's a tricky balancing act, because you want to facilitate innovation, and in a lot of cases that means not standing in the way. But there are obviously social implications that come out of adopting gen AI. I think one of the bigger ones is: what does it mean for jobs? That's a common question we're fielding, certainly on the enterprise side.
But I think that's probably one of the areas where government is trying to figure out: how do we make sure this is not a shock transformation, where jobs are being impacted and everything is being replaced by AI? And how do we make sure people are getting reskilled for the new labor economy?
I think that's one big aspect of it. Then there are certainly cybersecurity and national security risks, in terms of hallucinations and deepfakes, and I think that's also an area where we're seeing heightened scrutiny, especially around election cycles, like the one we just went through. But nothing is really concrete at this point. Some regulations have been introduced, but it seems very much to be a moving target, just given how early it is. Every country certainly wants to be at the forefront of gen AI adoption, and it's a tricky act to make sure you're innovating and progressing while still keeping some safeguards in place for societal risks, geopolitics, misinformation, and so on. So there's quite a bit that's still being figured out.
28:47
Chris
And then the last question: what's the next frontier in AI development, and how will it shift the future of technology and society?
28:54
Sebastien
That's the trillion-dollar question. As we've talked about, a lot of the initial value so far has been captured by the picks-and-shovels names, the infrastructure that has to be built. And I think that part of the stack is going to continue to grow rapidly.
But as we move forward, as Arjun and Jason have talked about, it's really going to be at the application and even the data layer where a lot of the innovation happens: the ability to use things like agentic AI, and whatever comes after agentic AI, to automate the rote tasks that humans were doing in the past, so they can focus on more value-added parts of the business, or just move faster, deliver applications faster, deliver better customer service.
I think that's going to be the trend we continue to see as we move through 2025 and 2026. And beyond that, I really buy into the thesis that AI will just become a core part of every application and every piece of infrastructure, whether enterprises, consumers, or governments are using it.
So it's definitely an exciting time right now. Things are changing incredibly quickly. But I do think we're going to continue to see AI infiltrate more and more of every aspect of our society and our workflows.
30:22
Chris
All right. Well, for those interested in reading the report, once again, it's titled “Navigating the Boom.” You can request a copy by reaching out to one of us or at WilliamBlair.com/contact-us. Thanks for taking the time to be with us today.