AI tools used to promise a simple bargain: automate the boring parts, give workers their time back, and let them focus on more complex tasks. But for many knowledge workers, the opposite seems to be happening. While they’ve become faster at accomplishing tasks, their workdays feel longer and denser.
A multi-month field study by UC Berkeley researchers followed knowledge workers using AI in real workflows and found that AI often doesn’t reduce total work. Instead of measuring just output speed, the researchers examined changes in task scope, pacing, and perceived workload, concluding that AI often intensifies work rather than relieving it.
Essentially, AI reduces friction per task, but expands the number of tasks and expectations.
The Berkeley study identifies a pattern that many workers already suspect: productivity gains don’t disappear, but get absorbed. Here’s a breakdown of the so-called AI productivity trap:
When something becomes easier to start, workers start more of it. As tasks like drafting, prototyping, analyzing, and documenting become more accessible and low-friction, workers initiate projects that previously felt too time-consuming. As a result, scope quietly expands.
When managers see faster turnaround, they adjust expectations, often beyond the original workload. For example, if a report now takes two hours instead of four, it doesn’t mean you leave early. It often means you produce more reports.
AI enables juggling multiple streams at once: code in one tab, documentation in another, analysis in a third. But parallelization increases cognitive switching costs. Instead of finishing sooner, many workers multitask more intensively. Some workers even report that this context-switching raises expectations for speed and increases pressure—the same pressure that AI and automation were originally meant to reduce.
Overall, this is what we might call workload creep: the gradual expansion of responsibilities that happens because efficiency makes more possible.
Engineers, analysts, and other knowledge workers are often the earliest adopters of AI tools, and thus the first to experience their side effects.
AI is embedded directly into tasks like code generation, documentation, data analysis, debugging, and writing.
However, a faster first draft doesn’t eliminate iteration; it increases it. Those who can prototype in minutes are expected to test more variations. And if debugging and analysis speed up, engineers and analysts are simply assigned more features to build and more dashboards to maintain.

This is the key insight from the Berkeley study: even as knowledge workers reported increased efficiency, the intensified pacing and broader role boundaries raised expectations in turn.
It’s worth noting that this happens beyond tech. Similar patterns are emerging in marketing, consulting, operations, and customer support. Anywhere AI reduces friction, scope tends to expand.
Efficiency, as it turns out, doesn’t automatically mean fewer hours. Often, it means more ambitious targets.
The productivity trap essentially means this: even though AI makes tasks faster, workers don’t feel relief because the saved time is simply reallocated.
First, there’s verification. A recent Interview Query analysis examining “vibe coding” trends found that nearly 1 in 3 developers report having to fix AI-generated code in ways that offset the time savings. In other words, AI can accelerate output, but not necessarily quality. Reviewing, debugging, and rewriting become part of the workflow.
Second, AI enables more simultaneous work streams. Instead of one major project, you now manage three. Small tasks that would have been skipped before now get added because “it only takes five minutes.”
Third, coordination overhead increases. More output means more meetings, more approvals, more integration work.
Over time, that intensity can contribute to burnout. Separate workplace research has already linked constant digital acceleration and cognitive overload to rising burnout levels among knowledge workers. AI may amplify that trend if expectations scale faster than capacity.
Such findings suggest that concerns about AI shouldn’t begin and end with whether it replaces work. It is increasingly acting as a labor multiplier: work expands to fill the efficiency gains, expectations reset upward, and output, not quality, becomes the new baseline.
Thus, for tech workers, the question isn’t whether to use AI, but how to use it strategically. Without guardrails, productivity gains will simply translate into more assigned work, creating risks for job satisfaction and overall employee well-being.
For employers, there’s an increasing need to implement an effective AI strategy with intentional constraints: defining clear scope boundaries, protecting focus time, and building regular check-ins that prioritize human judgment over raw output. Teams may need explicit conversations about where productivity gains go, deciding whether they are reinvested in innovation, in reduced burnout, or simply in more volume.
If AI-powered efficiency increases, someone captures the upside. The long-term challenge is no longer AI adoption itself. It’s preventing workload creep from quietly becoming the new normal.