Software Engineers Are Becoming Pilots
We've seen this panic before. Here's why engineering will be fine.
The software industry is on edge right now. Teams are getting gutted, and a handful of engineers armed with LLMs are matching the output of what used to require entire departments. LinkedIn is flooded with layoff posts and hot takes about the death of coding, and honestly, I get why people are unsettled. But I also think a lot of it is overreaction.
We've Been Here Before
When's the last time you saw a punch card operator? How many professional assembly programmers do you know? They're about as common as snow in the desert.
But software didn't die when those roles disappeared, and neither did programmers. We harnessed new tools like compilers, high-level languages, and IDEs, and we built entirely new processes around them. Waterfall gave way to Agile, monoliths became microservices, and the job evolved.
Every major technological shift in our industry has followed the same pattern:
- Initial panic
- Adaptation
- Growth
The tools change but the fundamentals remain. Will AI be different? Maybe, but I'm betting on the pattern holding.
The Pilot Analogy
Think about commercial airline pilots. They're masters of their aircraft, spending thousands of hours training on systems, aerodynamics, and emergency procedures before they ever touch a commercial flight. And yet modern autopilot is incredibly sophisticated. Planes practically fly themselves, and a pilot rarely needs to take manual control during a routine flight.
So why do we still need pilots?
Because when things go sideways, when there's an engine failure or severe turbulence or a system malfunction, you need someone who deeply understands the machine. Someone who can take over, make split-second decisions, and land the plane safely. Autopilot can't handle the unexpected, but humans can.
This is where software engineering is heading. We're becoming pilots.
The LLM is your autopilot. It can write boilerplate, implement features, refactor code, and handle the routine stuff with impressive competence. But you still need to understand your craft at a deep level: design patterns, system architecture, performance implications, security considerations. The stuff that matters when the autopilot can't figure it out.
What "Pilot Training" Looks Like
For engineers, this means studying the fundamentals that LLMs can't reliably reason about:
- Distributed systems
- Database internals
- Concurrency patterns
- Memory management
It means building things from scratch occasionally just to understand what's happening under the hood, and doing code review with a critical eye instead of blind acceptance.
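As a concrete (and purely illustrative) version of that "from scratch" exercise, here's a minimal LRU cache built by hand instead of reaching for `functools.lru_cache`. The code itself isn't the point; writing it is how you internalize why an ordered structure makes both lookup and eviction O(1):

```python
from collections import OrderedDict


class LRUCache:
    """A least-recently-used cache built on OrderedDict.

    Moving a key to the end on every access keeps the dict ordered
    from least- to most-recently used, so eviction is just popping
    the front -- both operations are O(1).
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used
```

Once you've written this once, `lru_cache`'s behavior stops being magic, and you can reason about when generated code uses caching correctly.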
Takeoff and Landing
Here's something interesting about pilots: they're most engaged during takeoff and landing because those are the highest-risk, highest-complexity phases of flight. Once you're cruising at 35,000 feet, autopilot handles most of it.
Software engineering works the same way.
Takeoff = Planning
This is where you figure out what you're actually building. Requirements gathering, architecture decisions, API design, edge case identification. This phase requires human judgment because you need to understand business context, anticipate problems, and make tradeoffs. LLMs can help brainstorm, but they can't make these decisions for you.
Autopilot = Implementation
Once you've charted the course, let the LLM fly. Generate the CRUD endpoints, write the unit tests, build out the UI components. This is where AI shines.
Landing = Verification
You're back in the captain's seat for manual testing, integration testing, security review, performance profiling, and edge case validation. This is where things break, where subtle bugs hide, where the LLM's confident-but-wrong code gets caught.
I recently architected a complex feature involving real-time data synchronization across multiple services. The planning phase took days while I worked out consistency guarantees, failure modes, and rollback strategies. The implementation? The LLM cranked out most of it in hours. But the landing took longer than the planning: catching race conditions, fixing edge cases, and verifying behavior under load. It required every bit of engineering knowledge I had.
That's the job now. Heavy on takeoff and landing, with autopilot in between.
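The race conditions that show up during landing are rarely exotic. Here's a simplified, hypothetical sketch (not the actual system) of the classic unsynchronized read-modify-write that confident-looking generated code often ships, alongside the locked version that survives load:

```python
import threading


class Counter:
    """The bug: += on an attribute is load, add, store -- three steps
    that threads can interleave, silently losing updates."""

    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1  # racy under concurrent callers


class SafeCounter:
    """The fix: a lock serializes the whole read-modify-write."""

    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self.value += 1
```

The unsafe version passes every unit test the LLM writes for it, because single-threaded tests never interleave. It only fails when you actually drive it with concurrent load, which is exactly the verification work the landing phase exists for.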
The Short-Term Trap
What worries me about some companies' responses to AI is that they're chasing short-term cost savings without thinking about what happens next.
Yeah, you can replace a team of ten with three engineers and some LLMs, and output might stay the same for a while. But what happens when you hit a genuinely hard problem? When the architecture needs rethinking? When there's a production incident at 2 AM and someone needs to actually understand the system?
You need pilots, not just autopilot. Companies optimizing purely for cost reduction will learn this the hard way; the ones that invest in skilled engineers who can leverage AI effectively will pull ahead.
Don't Get Complacent
My advice is simple: don't stop learning.
It's tempting to let the AI do everything, to accept its output without scrutiny, to forget how to write code by hand. But you shouldn't.
- Learn design patterns
- Study system architecture
- Do hands-on coding where you're thinking through every line
- Build mental models of how things work under the hood
You never know when you'll need to turn off the autopilot.
The future belongs to engineers who can do both: leverage AI for productivity while maintaining the deep expertise to handle what AI can't. Be the pilot, not just a passenger.