Duncan Davidson, Founding Partner, Bullpen Capital
Bullpen Capital is the pioneer of post-seed investing. Previously, Duncan was part of two bubble-era IPOs (InterTrust and Covad) and two other startups (SkyPilot and Xumii, the first mobile cloud app). Being the pitch person on one of them at the peak of the bubble, he learned firsthand that venture capital is a cyclical business. During a bubble, speed matters; during a downturn, two strategies succeed: long-term deep tech, and short-term value investing in oversold but otherwise solid companies. Bullpen was crafted for both bull and bear markets. The key: help the companies succeed, don't focus on the deal.
Danny Brown, Partner, MaC Venture Capital
Prior to MaC, Danny was Chief of Staff at Atom Factory, an LA-based talent management firm and angel fund founded by Troy Carter, where he focused on leveraging its unique position at the intersection of media and technology on behalf of its portfolio. Simultaneously, Danny joined the Cross Culture Ventures team, where he cut his teeth in early-stage venture capital while identifying the next generation of culturally impactful technologists. Danny has a wide scope of focus, geeking out on the future of digital infrastructure, emerging tech ecosystems, and creating opportunity and access to venture capital.
AI and Investments
Duncan: Well, I think this is the type of transition we saw when PCs were hot and then they weren't. Then the internet got hot, and then it wasn't. Then we had cloud and mobile, and then they weren't. We're at one of those moments. So when you look at those moments in history, if you're a VC still investing in PC deals during the internet craze, you're crazy, because nobody cared. The LPs didn't care and the stock market didn't care. The same thing has occurred here: all the SaaS deals are in the toilet in terms of multiples. Why? Because the market wants AI. These aren't mere trends. The PC thing was a complete revolution based on the microprocessor, right? The internet, a complete revolution! Is AI another one? We just don't know yet.
Duncan: Well, at Bullpen, we have a certain model, which is called post-seed. So we don't actually look at deals when they first get started; we look at them about a year to a year and a half later, when they need the next round. And we're watching the really interesting AI deals begin to hit us. About a year ago, we saw what I would call the wrong AI deals, the so-called simple wrappers. They sit on top of ChatGPT, there's not a lot of tech to them, they're rushed, they go out, and then there are 20 competitors. Now we're beginning to see the next generation, which is much more serious technology. Very interesting stuff. So we're actually quite excited right now.
Danny: Yeah, I definitely have a similar take to Duncan in terms of the emergence of AI and the attention we're paying to it. It reminds me of the market's initial reactions to things like blockchain and crypto: a lot of attention for a new dynamic, a new layer of infrastructure, a new set of protocols, et cetera. I think the interesting thing about all of these topics, including AI, is that we treat them up front as their own industries, when in reality we likely need to be talking about the application of AI, or any of these other technologies, within specific verticals and industries. So when I think about AI, and when my team thinks about AI, we think about it in terms of its application: AI in healthcare, AI in fintech, AI in media, et cetera. It goes back to the whole Web 2.0 era, right? We don't necessarily talk about that as its own vertical. That was the emergence of e-commerce and social media, the emergence of new ways of doing things, new dynamics, new sub-industries. And I think we need to be looking at AI in a similar light.
Symbionts and Sovereigns: How People Relate to AI
Symbionts form a mental partnership with AI, whereas Sovereigns keep careful boundaries around their own thinking and use AI selectively and deliberately.
Duncan: I think the biggest risk we have with AI is not that it does something, it's just a machine, but that we give too much credence to it. There used to be this saying in the data world: garbage in, garbage out. What I learned was garbage in, gospel out. You believe the machine. So we don't want to just believe AI; we have to be very skeptical. They're improving it. The problem with the guardrail argument, very simply put, is that the technology and the people in the industry are improving things so rapidly there's no way the government is going to catch up. Anything a government tries to do right now to put guardrails or safety on it would just screw the pooch, so to speak, in aerospace terms. It would destroy the potential. The people inside the industry, although they have mixed motives, are trying to solve this problem. And it's not so much AI itself. Let me put it this way: there's an anthropomorphizing of AI which I think is completely wrong. It's just a machine. All these movies like Terminator make you afraid of it, but AI has no motivation. It's just a machine, a response. It's not motivated; it has no consciousness, so to speak. The fear is that we give too much credence to it and don't maintain skepticism over the results and manage them. In other words, I think the fear is vastly overblown. The AI systems are not reliable yet, but they're getting there. They're getting really good.
Danny: Yeah, I'm definitely of the same mind. Government and social perception of this is going to take a little while to adapt, right? There is definitely a lot of panic and a lot of hype, going back to the Terminator joke, or the worry of Skynet, or just the general weaponization of AI. People love to catastrophize, right? So we're going to jump to those conclusions the second we see anything unexpected. That said, I do think that, from a policy perspective, we need to remain open-minded. However, we do need to be intentional with how we develop AI and the use cases with which we refine it. We've already seen a couple of hiccups, issues around social dynamics and how you train a model, which changes how it interprets the world around it. We want to give AI as good a foundation as possible to continue refining itself, as well as to give us something better to work with long term.
AI Scraping the Internet
Duncan: Well, I think there was an early fight with people like the New York Times trying to protect content behind a paywall, but they're working those things out. They're basically working out deals where you get paid for it. That was an issue a year ago; I don't think it's where people are really focused right now. The creators care, of course, but I just think that's an ongoing negotiation. We end up with crawlers on the internet, like Google's. And the breakthrough is going to be something people aren't discussing: Google early on crawled the web, but it took a week. Then, with the way they structured it, they got it down to almost real time. These ChatGPT bots take months before they're done, so they're months out of date. They're not up to speed, but they're getting better and better. I think a real breakthrough is going to occur when they're basically real time, like search. That'll be very interesting when it happens.
Danny: Yeah, going back to how AI interacts with data and where it gets its information, I don't necessarily see it as a huge fundamental issue. The point you brought up about creators is actually rather pertinent, though, since that's going to color a lot of pop culture's and society's perception of AI, right? As long as deals are being worked out and AI is able to rightfully and ethically collect data, I don't see an issue with what it gets access to or how it processes it. But we do need to think about how AI plays a role in certain categories, like media, for example. That way we can work something out with creators across the web. There's a fear of displacement by AI in general, in a lot of different sectors, by a lot of different populations. And I think that the point of technology and automation is to make things easier, and that often does mean replacing labor or shortcutting processes that exist. But which processes get replaced is almost subjective and dependent on human values, right? Human-to-human value. So what we might see as an issue isn't necessarily a problem for others, and vice versa. And I think that's what we're seeing play out.
Our Human Capacity to Adapt
Danny: Yeah, I definitely think this is one of those areas where AI is a game changer, right? It really is a new entrant to our ecosystem. You can think about it like a food chain: we need to see where in the stack AI plays. Everyone below it needs to be wary that there's a new predator, and everyone above it, maybe those of us with the Sovereign mentality, has to figure out: can this help us? Does this change our game? Is this a tool for us, or something we can ignore? It really depends on what your primary life focuses are and how technology already plays a role in them. Because as with anything, as technology advances, we find new ways to use it. So if you're not used to using it to begin with, jumping straight into AI, as opposed to just adopting basic processes right now, is going to be a big leap in the learning curve, and you're not going to be able to keep up.
Technology Overriding Established Institutions like Government
Duncan: Well, I think there's a different issue that relates to that, which is that the rules-based structures a lot of agencies work under are actually not good. They make discretion much harder, and so people get trapped in rules. You could build a computerized rules-based agent system, but it would still have the same problem of lacking flexibility. I think there's a deeper issue we have to fix in our agencies, and maybe DOGE is gonna fix it?
But I think the big issue behind all this is what's about to hit this year: AI agents. Right now, AI provides information and humans decide. Soon, AI agents will decide. They act; they do things. There's a view that in five years, many companies will have more agents than employees. Already I've seen deals where they're not selling software anymore; they're just selling the answer. Take recruiting: I'm not going to give you recruiting software. Here's a recruit, go hire this person. And the person has already been vetted, interviewed, and contracted for. Everything's done by the agent, and the recruiting department is largely taken out of the picture. This is going to be the big change that makes this technology more like the Industrial Revolution and less like the internet or the PC, because it's going to radically transform work. And I think governments worried about guardrails on AI are looking at the wrong issue. They should be worried about this one: How do you retrain the workforce? How do you re-educate the workforce to work on top of AI agents and not be displaced by them?
Can AI Agents Solve Problems when you hit a Snag?
Duncan: No, you're raising a profound point about the limits of AI compared to humans. The self-driving cars trucking around San Francisco have human assistance in the background because sometimes they don't know what to do. There are some funny stories where they all honk at each other. Well, that's it: at some point, in the edge cases, you still need a smarter entity, a human being, to fix things. There will be other jobs. The fallacy from the Luddites is that automation, in their case steam-powered milling, displaces workers. And the answer is it does, but it creates more jobs on top. The Luddites were right in that they lost their jobs, but ten times more jobs were created because of the productivity gains, and AI will do the same thing! But do we have a workforce trained and prepared to take the new jobs? That is a government question, and that's what they should be focused on. Instead, they seem to be focused right now on stopping AI.