
Last year you overhauled your team's tooling. Cursor, Copilot, Claude Code. Your engineers build differently now. Shipping velocity is up. The workflows changed. Good.
But your interview loop didn't change with them. Your scorecards still reward solo code output. Your job descriptions read like the work happens without AI. And the gap between how your team actually builds and how you evaluate new hires gets more expensive every quarter it stays open.
I keep seeing the same pattern. Engineering leaders who moved fast on adoption, then froze on hiring. They know the old interview is broken. They just keep running 2019 evaluations in a 2026 environment and wonder why new hires take three months to become useful.
I think you should demand four things now, and waiting is a worse bet than it looks.
The baseline moved. Your hiring bar should too.
The old baseline for a strong senior hire was clear fundamentals, decent systems knowledge, and the ability to produce code under pressure. In 2026, that gets you someone who can type but may not be able to build.
Four things should be non-negotiable on your scorecard now:
AI fluency, for real. Not "has tried Copilot." The engineer should know when to hand a task to an agent, when to decompose it first, when to switch models, and when to stop using AI entirely. I call this harness engineering. I defined the term in [the first post in this series], and OpenAI recently published on it under the same name. The term is sticking because it describes something real. Over half of tech job postings now list AI or ML skills as requirements, roughly double the rate from a year ago (Robert Half, 2026). If your scorecards still treat AI fluency as a nice-to-have, you are behind the market your own job ads compete in. Olivia Moore at a16z calls what we need "safe hands": people who take responsibility reliably when AI is doing the heavy lifting. That is what harness engineering looks like from the hiring side.
Judgment, not speed. AI made code production cheap. It did not make code decisions cheap. AI-assisted code carries 1.7x more major issues when the engineer does not validate carefully (Second Talent, 2025). In a randomized trial, experienced developers using AI were actually 19% slower on average; bad AI use adds overhead instead of removing it (METR.org, 2025). The engineer you want notices the subtle bug, rejects the wrong abstraction, and writes the missing test.
Systems thinking over line-by-line coding. The best engineers already spend less time writing code than you would expect. They shape interfaces. They decide where automation helps and where human review stays mandatory. A harness engineer treats agents like junior teammates: scoped tasks, intermediate checkpoints, clean context. Not a giant prompt and a prayer. The sketch after this list shows the shape of that pattern.
Process discipline as infrastructure. Tests as contracts. Documentation that matches the code. Consistent naming. Predictable patterns. Agents perform better in legible codebases. They fail harder in codebases full of hidden assumptions and stale docs. Someone who has never maintained that discipline will struggle in an AI-heavy environment regardless of how fluent they are with prompts.
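To make that concrete, here is a minimal sketch of the pattern in Python. Every name in it (ScopedTask, Agent, human_approves) is illustrative scaffolding I made up, not any agent framework's real API. The shape is what matters: narrow goals, fresh context per task, and a mandatory review gate between steps.

```python
# Minimal sketch of "agents as junior teammates". Every name here
# (ScopedTask, Agent, human_approves) is invented for illustration,
# not a real agent framework's API.
from dataclasses import dataclass, field


@dataclass
class ScopedTask:
    goal: str                # one narrow deliverable, not the whole feature
    context: list[str] = field(default_factory=list)  # only the files this task needs
    checkpoint: str = ""     # what a human verifies before the next task starts


class Agent:
    """Stand-in for whatever coding agent you run; only the shape matters."""

    def run(self, goal: str, context: list[str]) -> str:
        return f"<draft diff for: {goal} ({len(context)} files in context)>"


def human_approves(draft: str, checkpoint: str) -> bool:
    # In practice this is a code review against the checkpoint;
    # here it just logs and passes.
    print(f"review: {checkpoint} -> {draft}")
    return True


def run_with_checkpoints(agent: Agent, tasks: list[ScopedTask]) -> None:
    for task in tasks:
        draft = agent.run(task.goal, task.context)      # fresh, narrow context per task
        if not human_approves(draft, task.checkpoint):  # mandatory review gate
            raise RuntimeError(f"checkpoint failed: {task.checkpoint}")


run_with_checkpoints(Agent(), [
    ScopedTask("add bounded retries to the payments client",
               context=["payments/client.py"],
               checkpoint="retries are bounded and idempotent"),
    ScopedTask("cover the retry path with a test",
               context=["payments/client.py", "tests/test_client.py"],
               checkpoint="the new test fails without the fix"),
])
```

The candidate who works this way can name each checkpoint before the agent starts. The one who can't is doing prompt-and-pray with extra steps.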
The industry is moving. So is the junior crisis.
Meta now allows candidates to use AI during coding interviews. Google went the opposite direction, back to in-person interviews to make AI cheating harder. I think Meta has it right and Google has it backward, but the fact that this is even a debate tells you how hard the cheating problem hit. Cluely went viral promising to beat any coding screen, and the cheating tools are only getting better. If your process still assumes solo, unaided code output predicts job performance, that signal is already gone.
Meanwhile, new grad hiring is collapsing. Computer programmers are the #1 at-risk occupation according to federal workforce data, and college grads are 4x more exposed to AI displacement than workers without degrees (Burning Glass Institute, 2025). That makes harness engineering skills more important for junior candidates, not less. A junior who validates carefully is more valuable than a senior who delegates blindly. If your interview loop cannot distinguish between those two, you are filtering on the wrong signal at every level.
What you can reasonably expect from candidates today
None of this is aspirational. This is where the bar is now.
Evidence of AI-augmented work. Real projects where AI was part of the process. What did they delegate? What did they verify? What broke?
A describable orchestration workflow. One agent or several? How do they split tasks? How do they handle retries, state, review? If they can describe their control system, they understand the work.
Codebase stewardship. Can they maintain a codebase where humans and agents both operate effectively? Tests that constrain behavior (see the example after this list), docs aligned with implementation, patterns that make future changes safer.
A learning system, not just current knowledge. What did they stop using recently, and why? That question is the most revealing. Anyone can list tools. Fewer people can explain what they rejected and on what grounds.
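"Tests that constrain behavior" deserves one concrete example. The snippet below is hypothetical (retry_delays and MAX_BACKOFF_SECONDS are invented for illustration): the test pins invariants, not implementation details, so an agent refactoring the backoff code hits a hard failure the moment it breaks the contract.

```python
# Hypothetical example: retry_delays and MAX_BACKOFF_SECONDS are invented
# for illustration. The test encodes invariants (a hard cap, no shrinking
# delays) rather than mirroring the current implementation.
MAX_BACKOFF_SECONDS = 30


def retry_delays(attempts: int) -> list[float]:
    """Exponential backoff, capped so retries never hammer or stall."""
    return [min(2 ** n, MAX_BACKOFF_SECONDS) for n in range(attempts)]


def test_backoff_is_capped_and_monotonic():
    delays = retry_delays(10)
    assert all(d <= MAX_BACKOFF_SECONDS for d in delays)  # contract: hard cap
    assert delays == sorted(delays)                       # contract: never shrinks


test_backoff_is_capped_and_monotonic()
```

An agent can swap the backoff formula entirely and still pass, as long as the cap and the monotonicity hold. That is a test acting as a contract instead of a snapshot.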
Why this cannot wait
A bad senior hire costs upward of $240K when you account for salary, onboarding, lost productivity, and backfill (Protingent, 2025). These problems compound. You test for the wrong skills, hire the wrong person, burn the quarter replacing them, and start over with the same broken loop.
Every month you keep a 2019 interview loop in place, you are training your company to hire for a job that no longer exists.
Fairground is an interview platform built for exactly this problem. Async AI coding screeners that capture the whole process, not just final output. Live collaborative canvas for onsite rounds with AI tools available. Structured scorecards that measure AI judgment and process signals, so your hiring decisions are based on evidence, not vibes.
Start with one change. Allow AI in your next interview round. Then pay attention to what happens. The differences will tell you more about engineering judgment than any whiteboard problem ever did.
If you want to see what that looks like in practice, Fairground gives you 100 free credits to run your own interviews. No credit card. No sales call.



