The reach of artificial intelligence extends past tech interviews; it’s changing consulting interviews, too.
For years, the consulting case interview followed a familiar script: structured thinking, mental math, and logic tested on a whiteboard. But as Interview Query’s State of Interviewing report has pointed out, AI is already altering how candidates are assessed across industries, and consulting is no exception.
Based on recent reporting by Business Insider, firms like McKinsey and BCG have begun incorporating AI chatbots and AI-assisted case formats into parts of their interview process. Human interviewers aren’t being removed from the loop, but the definition of “strong problem-solving” expected of consulting candidates is shifting.
As a result, consulting interviews are now testing how candidates think not just by themselves, but with the help of AI.

These firms are reportedly experimenting with AI in controlled but meaningful ways. Candidates may encounter AI chatbots that simulate case prompts, provide data, or respond dynamically as the case evolves. BCG, for example, has publicly discussed internal AI platforms like GAMMA and X that support consultants’ work, while McKinsey has rolled out tools under its Lilli AI initiative.
In interviews, AI isn’t solving the case for candidates. Instead, it acts more like a live input source, offering information, generating scenarios, or responding to follow-up questions. In McKinsey case interviews, for example, candidates are expected to prompt the AI tool and refine its output into clear, collaborative reasoning. They’re evaluated not on technical AI expertise, but on how they review and interpret AI output, decide what to trust, and structure those insights into a coherent recommendation.
Firms are testing this now for a simple reason: their clients are already using AI. Consultants who can’t work effectively with imperfect AI tools will struggle on real engagements.
It’s worth noting that no one is asking candidates to explain transformer architectures or use AI jargon. The focus is on collaboration and decision-making under ambiguity, the same skills consulting has always valued, now exercised in an AI-augmented environment.
Historically, strong interview performance meant clean frameworks, sharp math, and confident synthesis under time pressure. And while that still matters, it’s no longer sufficient.
Now, firms are looking for human judgment layered on top of AI-assisted reasoning. Can you spot when an AI-generated insight is directionally useful but flawed? Can you challenge an output without dismissing it outright? Can you explain why you used, ignored, or refined a tool’s suggestion?
In other words, the integration of AI in consulting interviews raises the bar. Candidates must demonstrate analytical rigor, communication skills, and basic tool literacy all at once. The expectation isn’t blind trust, but thoughtful engagement with AI as a “productive thinking partner”.
This mirrors what’s already happening inside consulting firms. As Interview Query has previously reported in its coverage of IBM’s consulting arm, AI platforms are increasingly embedded in how advice is delivered, not just in how slides and decks get prepared. Interviews are simply catching up to day-to-day reality, where AI already supports client-facing services like analysis and scenario modeling.

It’s not hard to see why candidates are uneasy. One concern is uneven access: not everyone has had hands-on exposure to AI tools at work. Another is fairness: consulting interviews were designed to be standardized, yet AI introduces a variable that can feel opaque.
There’s also anxiety about being judged on tooling instead of thinking. Business Insider reports that firms are aware of this tension and are actively trying to set boundaries. AI “cheating” is still a hard no. Some firms have even stopped reviewing cover letters altogether because they’re so easy to generate with AI.
This nuance matters. Firms say they aren’t testing whether candidates rely on AI; they’re testing how candidates reason in environments where AI exists. That’s closer to real client work, where consultants are expected to use tools responsibly, not outsource judgment.
Whether you’re a tech worker considering consulting or a candidate on the traditional path, it’s becoming increasingly evident that classic case prep alone may no longer be enough.
AI literacy is becoming an imperative. But that doesn’t mean mastering every tool or explaining complex technical concepts; it means understanding where AI supports analysis, where it fails, and how to clearly explain decisions made with imperfect inputs.
Candidates who perform best will likely be those who can narrate their thinking: why they accepted one AI-generated insight, why they rejected another, and how they arrived at a final recommendation. Outcomes matter, but reasoning matters more.
Overall, this isn’t about beating AI in interviews or leaning on it for every answer. It’s about working alongside it: collaboratively, critically, and transparently.
As AI continues to reshape both tech and consulting, it’s important to remember that AI skills are no longer optional. Interviews are just the first place many candidates are feeling that shift, ahead of day-to-day workflows that AI is already transforming.