Everyone said AI would let small teams take on the giants. After a year of watching that play out, I think the opposite happened.
When we first talked about starting Super Mega Lab last year, the ideas on the whiteboard were small SaaS products. Nothing groundbreaking. Tools that solved real problems but didn't have much depth to them. At the time that felt like a reasonable place to start, a way to get our feet wet and ship something.
That window closed faster than we expected. The models improved, the tooling improved, and the landscape shifted underneath us in a matter of months. The small product ideas we'd been considering were suddenly either already done, getting absorbed into platforms, or trivial enough that anyone could spin one up in a weekend. We ended up pivoting away from what everyone else was already building to carve out our own lane entirely.
That experience is why I think the conclusion most people drew from the AI-makes-building-easy narrative was wrong. AI is a multiplier, not an equalizer. It made everyone faster, including the companies you're trying to replace. The moat between challengers and incumbents got deeper, not shallower, and most people building right now haven't figured out how much the game changed.
I'm not writing this to discourage anyone from building. I'm writing it because we lived through the exact adjustment I think a lot of people are in the middle of right now, and I'd rather you hear it and course-correct than burn six months on a strategy that stopped working.
Back when we were kicking around those first ideas, AI models were getting meaningfully better at handling larger, more complex tasks. They could scaffold entire systems. Suddenly it felt feasible for a single person to build things that previously required a team.
The first wave was micro SaaS. Small, focused tools, most of them AI wrappers. Take an LLM API, put a nice interface on it, charge $20/month. Some of these were genuinely useful. A few found real niches. But almost none of them had a moat.
The market flooded with hundreds of "AI-powered" tools that were essentially the same product with different branding, and then the platforms caught up. Features that people were building entire companies around got absorbed natively into Claude, ChatGPT, and Gemini. If your entire product is a wrapper around someone else's API, you're one platform update away from irrelevance.
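To make "wrapper" concrete, here's roughly what the entire backend of many of those first-wave products amounted to. This is an illustrative sketch, not any specific product's code; the prompt, function names, and the stubbed-out `llm_call` are all hypothetical stand-ins for a hosted model API.

```python
# A first-wave "micro SaaS," reduced to its essence: one prompt
# template wrapped around someone else's model API.

PROMPT_TEMPLATE = (
    "You are a professional email assistant. "
    "Rewrite the following draft to be more polite:\n\n{draft}"
)

def rewrite_email(draft: str, llm_call) -> str:
    """The whole product: format a prompt, call the model, return the text.

    `llm_call` stands in for a hosted LLM API client. Everything
    defensible lives on the other side of that one function.
    """
    return llm_call(PROMPT_TEMPLATE.format(draft=draft))

# With a stub in place of the real API, the "product" is one line deep.
fake_llm = lambda prompt: "[model output for] " + prompt.splitlines()[-1]
print(rewrite_email("send me the report now", fake_llm))
```

When the platform ships "rewrite this email" as a built-in button, there's nothing left on this side of the API call to charge for.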
Most of those micro SaaS products are dead now, or might as well be.
Models and tooling kept improving. Ambitions grew with them. People stopped thinking about wrappers and started thinking about real products. Full-featured competitors to established players. "I can build a project management tool that's better than Jira." "I can build a CRM that actually makes sense." "I can replace this bloated enterprise software with something clean and modern."
Honestly? The building part delivered. You can go from zero to a working MVP faster than ever. Features that would've taken a team weeks can be prototyped in days. The technical barrier to entry genuinely dropped.
Some of these MVPs turned into early funding. Some picked up their first users. The "I built X in a weekend" posts blew up on Twitter. The narrative felt validated.
There's a problem with this narrative though, and it's the part that doesn't make it into the Twitter threads.
While you were building your MVP, the incumbents were using the same AI tools with larger teams, existing revenue, established integrations, and years of brand trust behind them. They're shipping faster than ever too.
The baseline for what constitutes a competitive product moved dramatically. Your MVP from six months ago would've been impressive. Today it's table stakes. The features you spent a month building got added by the incumbent as a sidebar update last Tuesday, along with three other things you haven't even thought of yet.
The multiplier effect works in their favor. A 200-person team with AI ships dramatically faster. A solo dev with AI ships dramatically faster. The ratio didn't change. Now, large organizations have their own friction. Coordination overhead, compliance reviews, internal politics. AI doesn't eliminate any of that. But when it comes to raw feature output, they can run ten AI-assisted workstreams in parallel while you're focused on one. The net effect still favors them in a straight feature race.
Users got used to polished, feature-rich products. An MVP with 20% of the features isn't compelling when the incumbent just shipped your whole roadmap as a minor release. The bar for "good enough to make someone switch" keeps rising. And distribution is still the real moat. You can build the product faster than ever. You still can't build the user base, the integrations, and the trust faster. The hard part was never the code.
If you're building in this space, you're surrounded by an endless stream of people shipping impressive things. New tools, new products, new "I built X in 24 hours" posts. Every time you open Twitter someone just launched something that looks like it took a team of ten.
This creates a distorted sense of what matters. The temptation is to see someone ship a slick feature and immediately add it to your backlog. To pivot because something is trending. To try matching the incumbent feature-for-feature because that's what "competing" looks like.
That's how you end up with a product that does twenty things and none of them well. Competing on feature parity with teams that will always out-feature you. Running on a treadmill that speeds up every time you match their pace.
If you're trying to beat the incumbent at their own game, you've already lost. Change the game.
So if you can't win by building more features faster, what do you actually do?
I've been thinking about this a lot, and I keep coming back to three approaches that I think are genuinely viable. They're not mutually exclusive, but they are fundamentally different from the "build a better version of X" playbook that most people are running.
The first approach is to find a structural flaw in the incumbent's product. Not a missing feature. A broken assumption. Something the incumbent can't fix without rebuilding from the ground up because it's baked into their architecture, their business model, or how they think about the problem.
Linear is a good example of this. Jira's fundamental assumption is that project management needs to be infinitely configurable because every team works differently. That assumption led to a product so complex that most teams use maybe 10% of it and resent the other 90%. Linear bet that most software teams actually want the same thing: a tool that's fast, opinionated, and out of the way. Atlassian can't just simplify Jira because their enterprise customers depend on that complexity. The flaw is structural.
Perplexity found something similar with search. Google's entire business model is built on showing you a list of links because that's where the ads go. When people started wanting direct answers instead of ten blue links, Google couldn't fully commit to that shift without undermining the thing that makes them money. Perplexity didn't have that constraint.
The pattern is the same in both cases. The flaw isn't that the incumbent is bad at what they do. It's that what they do well is the wrong thing, and they can't pivot away from it without breaking something fundamental about their product or their business.
How do you find these flaws in practice? Talk to people who use the incumbent product every day. Don't ask them what features they want. Ask them what workarounds they've built. Ask them what they complain about to their coworkers but would never bother submitting as a feature request. The workarounds are where the structural flaws live, because if the product could solve the problem, people wouldn't be working around it.
The approach above assumes you're going after territory that an incumbent already occupies. The second option, which I think is underrated right now, is to build for workflows that didn't exist eighteen months ago.
AI is creating entirely new categories of work. Agent orchestration, prompt engineering pipelines, synthetic data generation, AI-assisted code review, knowledge management for AI-augmented teams. These are real workflows that real people are doing every day, and most of them have no established tooling. There's no moat to contend with because there's no castle yet.
This is where small teams have a genuine structural advantage. You're closer to the bleeding edge. You're using these tools yourself. You can feel what's missing in a way that a large company running a three-month product planning cycle can't. By the time they recognize the category exists, you can already have users and opinions and a product that reflects actual usage.
I'm biased here because this is essentially what we're doing at Super Mega Lab. We build AI development tooling because we use it every day and we keep running into problems that nobody has solved well yet, or at least not the way we think they should be solved. The tooling we wanted didn't exist, so we started building it. That's a very different starting position than "let me build a Jira competitor."
The third approach starts from a different premise: not everyone needs to build the next category-defining product. There's a whole class of sustainable businesses built on serving niches that incumbents can't or won't serve because the market is too small to matter at their scale.
A vertical SaaS tool for a specific industry. A simpler version of an enterprise product for teams that can't afford or don't need the full thing. A product that does one thing really well for a specific persona instead of everything adequately for everyone.
This isn't the flashy path. It probably won't get you a TechCrunch article. But it's how a lot of durable small software companies actually get built, and AI makes it more viable than ever because you can build and maintain a niche product with a much smaller team than you used to need.
The key is being honest with yourself about which game you're playing. If you're building a lifestyle business that serves a niche, great. Own that. The problems start when you're building a niche product but telling yourself you're going to take on Salesforce.
The execution bottleneck is gone. A year ago the hard part was building the thing. Now the hard part is knowing what to build and why anyone should care.
I catch myself falling into the same traps I just described. Seeing someone ship something cool and wanting to chase it. Thinking about feature parity with tools that have ten times our resources. The pull toward building more instead of building different is constant, and it takes real discipline to resist.
What keeps me grounded is talking to the people who actually use what we build. Not Twitter, not Product Hunt, not the highlight reel. The people doing the work, running into the problems, building the workarounds. That's where the real signal is.
If you're building something right now, try this: talk to twenty people who use your competitor's product and ask them what workarounds they've built. Don't ask what features they want. Ask what problems they've given up on the product ever solving. That's where the real opportunities are, not in matching the feature list, but in the gaps the incumbent has trained their users to accept.