
91% of US hiring managers have encountered or suspected AI-generated interview answers (Greenhouse, 2025). Your reverse-a-linked-list question now takes 4 seconds with Claude. You are not testing engineering anymore. You are testing typing speed. Call it the interview signal paradox: the better your AI tools get, the less your coding test tells you. Andrew Chen frames it well: the most important AI startups won't replace humans. They will change what "qualified" means.
Two camps are forming. One tries to block AI: more proctoring, redesigned puzzles, a permanent game of whack-a-mole against the latest cheating overlay. The other allows AI and measures whether the candidate uses it with judgment. I think the second camp wins, and I don't think it's close. Meta now allows AI in coding interviews. Google went the opposite direction, reportedly pushing candidates back to in-person interviews to make cheating harder. I think that's a losing bet, but it shows how seriously companies are taking this.
So the real question isn't "how do we stop candidates from using AI?" It's "what are we actually trying to measure?"
The proxy broke
Traditional coding interviews tested pattern recognition, recall, fluency under time pressure, some abstraction, some debugging. Never perfect, but directionally useful because the candidate had to produce the work themselves. That assumption is gone. Completely.
Cluely, formerly Interview Coder, went viral in 2025 by promising an undetectable overlay for live coding screens. The founder got expelled from Columbia over it and still raised $5.3M. That's the market you are hiring into. Cheating rates doubled in the back half of 2025.
Even honest candidates can now solve standard algorithm questions with AI assistance. The candidate who understands tradeoffs and the candidate who copy-pastes blindly both arrive at a correct answer. Output converges. Process diverges. Score output and you miss the only difference that matters.
The pattern is predictable and I keep seeing it: candidates who ace AI-assisted interviews, then can't ship in an AI-heavy codebase. The interview worked. The signal didn't.
What to test instead
If the work changed, the interview has to change with it.
Real engineering is not a single function in a blank editor. It's a multi-file codebase with ambiguous requirements and constant tradeoffs. It's done with tools, docs, tests, and now AI. The assessment should look like that. Give candidates a real problem, a scoped feature or a bug hunt across multiple files. Let them use AI tools openly. Then score how they use them.
That last part is the whole game. Here's what I'd actually score: Does the candidate decompose the problem before prompting, or paste the entire requirement and hope? When AI output comes back, do they read it critically or accept it wholesale? Do they write tests? Do they refactor the obvious garbage, or ship the first draft? Can they articulate why they accepted one approach over another?
In a randomized controlled trial, experienced developers were actually 19% slower with AI, largely because validation and correction overhead ate the productivity gains (METR, 2025). AI does not erase engineering skill. It amplifies whatever judgment the engineer already has. Good or bad.
Three interview formats worth using
Once you accept that AI use belongs inside the interview, the format options get better. Here are three I've seen work.
AI-augmented coding, async. Give the candidate a real coding task in a full IDE with AI tools available. Capture the process: prompts, iterations, edits, test runs, validation steps. Not just the final submission. If you just grade the final code, you have recreated the take-home problem with shinier tooling. The whole point is watching how someone works through the problem.
AI judgment evaluation. Show the candidate AI-generated code with subtle bugs and ask them to review it. Give them an AI-proposed architecture with hidden scaling issues and ask them to critique it. This isolates judgment directly: no scaffolding, no warmup, just "here's what the machine produced, what's wrong with it?" Underused, and in my experience the most revealing format.
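To make the format concrete, here is a sketch of the kind of artifact that works: a plausible, AI-style draft with one subtle flaw. Everything below is a hypothetical illustration (the function and names are invented for this example, not Fairground material); the planted flaw is that the merge aliases the caller's dict instead of copying it, so the "base" config gets silently mutated.

```python
def merge_configs(base, overrides):
    """Merge override settings into a base config. (AI-generated draft)"""
    merged = base  # subtle bug: this aliases base rather than copying it,
                   # so every override also mutates the caller's input dict
    for key, value in overrides.items():
        merged[key] = value
    return merged


# The happy path looks fine, which is exactly why the bug is a good probe:
base = {"timeout": 30}
merged = merge_configs(base, {"retries": 3})
# merged is correct, but base has been silently changed too.
```

A strong candidate spots the aliasing quickly and asks whether callers depend on the base config staying untouched; a weak one confirms the happy-path output and moves on. That gap is the judgment signal the format is designed to surface.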
The process interview. Ask the candidate to walk through the last thing they shipped with AI assistance. Not "have you used Copilot?" I mean: What did you delegate to AI? What did you keep manual, and why? Where did the model mislead you? This gives you evidence of actual working style, and it's hard to fake in detail. Someone who's genuinely working this way lights up talking about it. Someone performing can't go three follow-ups deep without contradicting themselves. Frank Dilo's thread on X about this got massive engagement for a reason.
How Fairground fits
Most teams now agree AI should be allowed in interviews. The harder problem is evaluating it consistently. Capturing what someone typed into an AI sidebar isn't evaluation. It's logging.
Fairground is an interview platform built for how engineers actually work now: with AI. Our AI Coding Screener gives candidates a full IDE with AI tools, runs async around the clock, and captures the whole process. It scores how someone works, not just what they submit. Our Canvas puts code editor, docs, drawing, and screenshare in one place for the live round. Across all of it, we generate structured scoring on AI judgment, process quality, and confidence indicators.
If you don't update what you are measuring, the signal will keep decaying and you won't notice until the hires start failing. Your candidates will still produce answers. They just won't produce the signal you need. Here is the filter for your next interview cycle: can your process tell you how someone built the answer, or only that they built it?
100 free credits. No credit card. No sales call.

Get started with Fairground in just a few minutes.
Plug and Play. Works well with your existing ATS.


