
AI interviews are quickly becoming a standard first step in hiring. Greenhouse reports that 63% of U.S. job seekers, nearly two-thirds, have already been interviewed by AI, a sharp jump from six months earlier. The format has already moved into the mainstream.
Yet trust among candidates has not kept up with adoption. In Greenhouse’s 2026 Candidate AI Interview Report, candidates object less to AI itself than to its use without transparency. Among those who experienced AI evaluation, 70% said it was not clearly disclosed before the interview, and 21% only found out once the interview started.
Candidates are also opting out of AI’s involvement in hiring, with 38% saying they had already withdrawn from a hiring process because it included an AI interview. For job seekers, that means the prep problem is no longer just how to answer well. It is how to perform clearly inside a process that may be automated, monitored, and only partly explained.
Greenhouse’s core finding is simple: AI is spreading faster than candidate confidence. Only 21% of U.S. candidates said they believe most employers are using AI responsibly and transparently. Just 18% said most employers have explicit, clear AI policies.
That gap shows up in behavior: candidates were most likely to walk away from processes where AI’s role was hidden or unexplained.
But the trust problem does not stop at transparency. It is also about perceived fairness. As explored in a previous article about tech candidates’ use of AI tools, candidates are already adapting to these systems: optimizing resumes for keyword filters, rehearsing answers for AI scoring patterns, and in some cases strategically shaping responses to “fit” what they believe the model rewards.
The more candidates try to reverse-engineer AI evaluation, the less signal employers can trust at the top of the funnel. In response, companies add more layers, such as monitoring, structured prompts, or additional rounds, which further erodes candidate trust.
What happens after the interview deepens that gap. Among candidates who completed AI interviews, 51% report never receiving an outcome: 38% never heard back at all, and 13% are still waiting.
At the same time, candidates are not rejecting AI outright. Only 19% say they want less AI involved. Most are open to it, but with clear disclosure, defined evaluation criteria, and visible human review.
The international signal points in the same direction. People Management, reporting on the same Greenhouse study in the UK, said 47% of job seekers had already been interviewed by AI there. Of those candidates, 82% said they were not told beforehand, and 24% only realized AI was involved after the interview had already started.
That helps explain why the strongest candidate reaction is not anti-technology in the abstract. It is anti-opacity. When a candidate cannot tell whether a recorded answer is being reviewed by a recruiter, scored by a model, or filtered by both, the interview stops feeling like evaluation and starts feeling like surveillance.
But disclosure alone is not enough. Greenhouse found that 39% of candidates want a clear explanation of what AI is measuring, and 38% want confirmation that a human reviews AI output before decisions are made.
The Interview Query signal suggests this shift has already reached the mechanics of real interview loops. Across recent interview reports, we keep seeing candidates encounter tool-mediated screening before they build any rapport with a hiring team.
One recent writeup from a candidate interviewing for a data analyst role at a major bank described a CodeSignal assessment that relied on CSV and Excel-style analysis under camera, mic, and screen-sharing requirements. Another described Stripe’s data scientist interview process as including an AI assistant alongside a human interviewer across all rounds, though what the tool is used for was not disclosed.
Recent transcripts show the same pattern. Candidates describe loops that start with recruiter screens, move into SQL or technical filters, then expand into take-homes, case presentations, or multi-round onsite formats that test communication as much as correctness. While that doesn’t mean every company is using AI to score interviews, more candidates are meeting structured systems before they meet a decision-maker.
As interviews become more structured and more mediated by software, the bar shifts away from raw recall and toward visible reasoning. Candidates need to show how they think, how they prioritize, and how they explain a tradeoff when the process gives them less room to build personal context.
That is one reason the Greenhouse data matters beyond recruiting ops. If 70% of candidates were not told AI was involved, many will walk into these interviews without knowing what kind of clarity the format rewards. Candidates who want to practice that kind of structured response therefore benefit more from rehearsing with an AI interviewer, which recreates a similar high-pressure environment, than from passive review alone.
The same logic applies to later rounds. A take-home, a case presentation, and a behavioral interview all reward calm communication under constraint. Candidates who struggle more with delivery than with fundamentals usually improve faster through coaching, especially when the weak spot is framing a recommendation instead of finding an answer.
Rather than panicking about AI interviews, candidates should prepare for less forgiving formats, because many employers are introducing automation without much explanation.
That makes realistic rehearsal more important. 4 Corner Resources, summarizing fresh ZipRecruiter job-search research, found that first-time job seekers were much more likely than experienced candidates to have already faced a fully automated interview (32% versus 13%) or an AI-analyzed video screening (40% versus 19%). While you should not “game” the system, you should assume that clarity, consistency, and well-structured answers are more likely to perform well in both human and AI review.
Practically, that means layering your preparation rather than relying on a single mode of practice.
You should also expect more work-simulation formats such as timed assessments, monitored screens, and async video responses. Instructions may be minimal, so part of the evaluation is how you handle ambiguity.
For full-loop rehearsal, especially when a process mixes technical screens, presentation rounds, and behavioral judgment, mock interviews are still one of the fastest ways to find where a polished answer breaks down in real time.
Greenhouse’s report does not suggest that AI interviews are going away. It suggests they are arriving faster than employers have earned trust for them. The adoption story is real, but the more important signal is that candidates will tolerate automation only when the process stays legible and human.
What IQ’s interview signal adds is a ground-level view of how that feels in practice. Candidates are already moving through monitored assessments, take-homes, structured screens, and work-simulation loops before they get much human context at all. In that environment, the strongest prep edge is showing clear judgment inside a system that may not explain itself very well.