Hiring in the Age of AI: Why I Have Always Hired for Problem-Solving Over Code
For years, my hiring approach drew skepticism from peers across the industry. While most interview panels grilled candidates on algorithms and obsessed over coding challenges, I took a different path: I hired brilliant engineers by barely asking them to code at all.
My Hiring Philosophy: Conversations Over Code
Across my career building tech teams, I developed what many considered a counterintuitive methodology. I minimized technical questions. I rarely asked candidates to recite syntax or solve algorithm puzzles. Those skills can be Googled in seconds or now generated by AI in milliseconds.
Instead, I focused on assessing soft skills. Communication mattered more than code. Curiosity revealed more than credentials. Adaptability told me more about future performance than any past project list.
I let candidates talk extensively about their past work through deep project discussions. Not the surface details of what they built, but the deeper questions of why they made certain choices. I listened for the level of detail they could master. I paid attention to the challenges they articulated and I observed how they described navigating complexity.
Then came the deliberately impossible scenarios. I’d present problems completely outside their domain expertise. The scenarios were intentionally ambiguous and unsolvable with their current knowledge. The exercise revealed everything I needed to know:
How do they ask clarifying questions?
Can they break a massive problem into manageable pieces?
Do they embrace feedback or become defensive?
How do they handle uncertainty?
If candidates wanted to show code samples, fine. Optional code reviews had their place. But their thinking process mattered far more than their syntax preferences. The approach delivered results. I built teams of exceptionally capable engineers, and across dozens of hires I struggle to recall a single mistake made with this method. Those engineers went on to build remarkable things.
The Industry Catches Up
Now, in early 2026, the technology industry has caught up to what seemed like an outlier philosophy. Artificial intelligence transforms software development at unprecedented speed. The skills I always prioritized have become the consensus standard for what separates valuable engineers from replaceable ones. Problem-solving leads that list and adaptability follows close behind. Architectural thinking rounds out the essential trinity.
The rise of vibe coding accelerated the shift. Vlad Balazs, who oversees engineering at Intuit, acknowledged his company is redesigning interview processes around this reality. The new assessments present more complex problems with the explicit expectation that candidates will use AI tools to complete them, he explained, because that mirrors how they’ll actually work once hired.
The commoditization of coding knowledge has exposed a truth about skill hierarchies that researchers at Harvard Business School recently quantified. Their analysis found that nearly 80 percent of the wage premium commanded by advanced technical skills depends on underlying foundational abilities: communication, critical thinking, problem-solving. These capabilities, increasingly termed “durable skills” to distinguish them from perishable technical knowledge, demonstrate markedly different longevity. Technology-specific skills carry a half-life under 2.5 years, while problem-solving and decision-making skills persist beyond 7.5 years, according to workforce development firm Guild’s analysis of labor market data.
What CTOs Actually Want Now
CTOs describe the transformation in remarkably consistent terms. Engineering leaders echo the same themes. A survey of technology executives revealed unanimous emphasis on critical evaluation over code generation. One chief technology officer insisted the focus had shifted “from pure coding ability to evaluating deep problem-solving acumen, architectural foresight and that uniquely human ability to question, reason effectively and adapt swiftly.” Another observed the inherent irony: “AI was supposed to make coding easier, but it’s actually making the thinking parts of development more valuable than ever.”
This dynamic extends beyond individual contributor roles into architectural work, where the stakes prove even higher. AI excels at generating functions but struggles (at the moment) with system-level design. Engineers in this domain must balance competing constraints. They must anticipate failure modes. They must make judgment calls about infrastructure choices that AI cannot make. System design capabilities are becoming expected of engineers at all levels, argued one analysis of hiring trends. As AI handles repetitive tasks, even junior developers must think at architectural scale. They need to guide AI agents effectively. They must understand how components interact within larger systems.
The New Demands of Architecture
Meanwhile, the nature of architectural work itself has evolved. Modern software architects must navigate challenges their predecessors never confronted. They design systems where AI agents interact with traditional code. They create feedback loops that improve model outputs. They understand constraints like token budgets and devise new strategies for managing semantic drift. The fundamental question remains unchanged: making decisions AI cannot make, noted an O’Reilly analysis in summer 2025. An AI can explain how to implement Kubernetes, but it lacks the contextual judgment to determine whether that complexity serves a particular organization’s needs.
Research into workplace skills reinforces these patterns. Deloitte’s 2025 survey of young professionals found that while nearly two-thirds focus on building AI capabilities, more than 85 percent identified communication as more vital to long-term success. Empathy ranked similarly high, and leadership completed the essential triad. AI excels at pattern recognition and data processing, but inspiring teams remains exclusively human. Understanding human impact requires emotional intelligence, and creative problem-solving in unprecedented situations still defies automation.
How Hiring Processes Are Adapting
Smart companies have begun redesigning their hiring processes accordingly. Leading firms now replace traditional coding tests with “problem-solving simulations” using real client scenarios. These assessments test interpretation and requirement-gathering skills. Some deliberately introduce errors into problem statements, while others add ambiguities by design. Evaluators monitor whether candidates ask clarifying questions rather than charging ahead with assumptions, and they evaluate code for risks, because functional correctness alone no longer suffices.
Zhi Sun, a startup founder who has interviewed hundreds of engineers, captured the transformation succinctly: “AI tools haven’t just changed how we code. They’ve changed what we should look for in engineers. The real challenge is knowing what to build, how to shape it, and how to ship it quickly.”
Marcel Weekes, Figma’s Vice President of Software Engineering, described how his teams leverage AI for pre-reviewing pull requests. The system catches redundancies before human reviewers see the code. It identifies inefficiencies automatically. The strongest developers learn to break down problems into smaller chunks for multiple AI agents to work on simultaneously, then synthesize the results. “One key skill going forward is spending time on documentation,” Weekes noted. “Providing additional context to LLMs matters more than ever, almost like you would help an intern ramp up on a problem.”
What This Means for Engineers
The implications ripple through career development strategies. Engineers who invested heavily in memorizing algorithms now face a landscape where those skills deliver diminishing returns, and syntax knowledge proves similarly devalued. The capacity to work with AI as a collaborative tool matters more than viewing it as a threat. Communication skills enable engineers to translate technical concepts across teams; explaining architecture to non-technical stakeholders requires capabilities AI cannot replicate. Engineers must also identify edge cases and spot the “confidently wrong” answers that large language models occasionally produce.
For hiring managers clinging to traditional assessment methods, the message carries urgency. Job descriptions emphasizing specific frameworks optimize for skills becoming less relevant by the month. Programming language requirements follow the same trajectory. On the other side, behavioral questions revealing how candidates handle ambiguity deliver more predictive value than algorithm memorization tests. Questions about learning from failure matter more than past successes. Collaboration with diverse teams predicts future performance better than solo coding prowess.
Looking back, the philosophy wasn’t unconventional at all; it was simply running on a different clock than the rest of the industry. It rested on a belief that technical skills were trainable given the right foundation. Problem-solving ability either developed over years of deliberate practice or proved largely innate. Curiosity could not be packaged into a boot camp curriculum. Adaptability didn’t come with a certification. As AI accelerates the commoditization of coding knowledge, what once felt like instinct has quietly become industry consensus.
The engineers built to thrive in this landscape are not the ones who code fastest or memorize the most frameworks. They are the ones who think deepest, ask sharper questions, and notice patterns others overlook. They bring clarity to messy problems and build systems that hold up under pressure. Most critically, they know when to trust AI output and when to push back on it. Those were the engineers I sought out then. They remain the ones every forward-thinking organization should be competing to hire right now.