
Uber Data Scientist interviews typically run 4–6 rounds: recruiter screen, 1–2 technical screens, and a final loop with a hiring manager, bar raiser, and technical rounds. The process spans roughly 4–6 weeks and is distinguished by its heavy emphasis on causal inference and experimentation design.
$120K
Avg. Base Comp
$254K
Avg. Total Comp
5–7
Typical Rounds
4–6 weeks
Process Length
What stands out most across the Uber data scientist experiences we've collected is how consistently the process tests reasoning under ambiguity rather than technical recall. Multiple candidates noted that the coding itself — SQL, Python, basic algorithms — was not the hard part. The hard part was being asked to frame a vague problem, defend a metric choice, and explain experimental tradeoffs in real time. One candidate described an A/B testing question about ETA display that was deliberately open-ended, and another was pushed hard on endogeneity in a price elasticity problem where a naive regression answer would have been a red flag. Uber isn't just checking whether you know causal inference — they want to see you think through when each method breaks down.
A recurring theme is the depth expected around experimentation, particularly for marketplace-specific designs. Switchback testing came up explicitly, and one recruiter flagged it as a deliberate focus area precisely because most candidates don't have hands-on experience with it — meaning conceptual fluency is what separates candidates here. The econometrics case study round, focused on surge pricing and price elasticity, is where we've seen the most candidates feel underprepared. It's not enough to name instrumental variables or diff-in-diff; interviewers probe whether you understand why a method is appropriate for a two-sided marketplace with dynamic pricing.
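To make the endogeneity point concrete, here is a minimal simulation (all coefficients and the "concert" story invented, numpy only) of why a naive regression of log quantity on log price is a red flag in a surge-pricing setting, and how an instrument recovers the true elasticity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_elasticity = -1.2

# An unobserved demand shock (e.g., a concert) pushes up BOTH surge price and
# quantity, so a naive OLS of log quantity on log price is confounded.
demand_shock = rng.normal(size=n)        # unobserved by the analyst
cost_shifter = rng.normal(size=n)        # instrument: shifts price, not demand
log_price = 0.5 * demand_shock + 0.8 * cost_shifter + rng.normal(size=n)
log_qty = true_elasticity * log_price + demand_shock + rng.normal(size=n)

def slope(x, y):
    """OLS slope of y on x."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

naive_ols = slope(log_price, log_qty)    # biased toward zero by the shock
# With a single instrument and regressor, 2SLS reduces to the Wald ratio:
iv_estimate = slope(cost_shifter, log_qty) / slope(cost_shifter, log_price)

print(round(naive_ols, 2), round(iv_estimate, 2))
```

The naive estimate lands noticeably above (closer to zero than) the truth, while the IV estimate recovers it; being able to narrate exactly this mechanism is what the elasticity round probes.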
The bar raiser round adds another layer that catches people off guard. Uber's stated values — boldness, doing the right thing, building globally — aren't just behavioral window dressing. Candidates who wove those values into conflict resolution and planning stories fared better than those who saved them for direct value questions. The process is genuinely selective, and the feedback loop after rejection is often thin, so going in over-prepared on causal inference and marketplace experimentation is the right call.
Synthesized from 6 candidate reports by our editorial team.
Real interview reports from people who went through the Uber process.
The process for me started with a recruiter reaching out, and then I went through two technical screening rounds before the final loop. The first screen mixed coding and a case study, and the second was similar in spirit, with a more interactive coding question that felt LeetCode-like but not exactly a standard whiteboard problem. After that, I had three interviews in the final round: one with the hiring manager, one with a bar raiser, and one technical round focused on experimentation. The overall process was pretty structured, and most interviewers were friendly and clear, though one round felt a bit less crisp when I asked clarifying questions.
The technical questions were a mix of Python/SQL manipulation, a simple Python function, and case-style product thinking. In the early screen, I was asked about causal inference concepts like when to use synthetic control, A/B testing, and difference-in-differences. Another case centered on Uber service expansion and metric formation for data science and analytics, and there was also an optimization-style problem around Uber Eats order or driver assignment, where the emphasis was on defining objectives and constraints rather than just jumping to code. In the final technical round, I got a vague A/B testing question about how I would test whether showing driver ETA one minute less actually changes behavior or outcomes. The behavioral parts were standard but still important, with resume walkthroughs and questions about conflict with a manager and working with multiple stakeholders.
Difficulty-wise, this was more demanding than a typical coding screen because it combined algorithmic problem-solving with product sense, experimentation design, and real-world case reasoning. The coding itself was not the hardest part; the tougher part was being able to frame the problem well, explain tradeoffs, and defend the metrics or experimental design under time pressure. I did not get an offer, and the process felt fairly selective overall.
If you’re preparing, I’d focus on causal inference basics, especially synthetic control versus A/B testing versus DiD, and practice explaining how you’d set up experiments for product changes like ETA display. It also helps to rehearse optimization-style case studies where you clearly state objectives, constraints, and what a better algorithm would actually change.
Prep tip from this candidate
Focus on causal inference tradeoffs like synthetic control vs A/B testing vs DiD, because that came up directly. Also practice explaining optimization cases by stating the objective, constraints, and how you’d evaluate whether a better algorithm actually changes the product outcome.
Sourced from candidate reports and verified by our team.
Topics based on recent interview experiences.
Featured question at Uber
Write a query to select the top 3 departments with at least ten employees and rank them according to the percentage of their employees making over 100K in salary.
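The expected answer is SQL (a `HAVING` filter plus a window-function rank), but the logic can be sketched in pandas against a made-up `employees` table; the column names, headcounts, and salaries below are illustrative assumptions, not the real question schema:

```python
import pandas as pd

def dept(name, n, n_over_100k):
    """Build n toy employees, n_over_100k of them paid above $100K."""
    return pd.DataFrame({
        "department": name,
        "salary": [120_000] * n_over_100k + [80_000] * (n - n_over_100k),
    })

# Hypothetical employees table; all numbers are invented.
employees = pd.concat([
    dept("eng", 12, 9),      # 75% over 100K
    dept("ops", 10, 5),      # 50%
    dept("sales", 11, 2),    # ~18%
    dept("hr", 4, 4),        # fewer than 10 employees -> filtered out
], ignore_index=True)

stats = (
    employees.assign(over_100k=employees["salary"] > 100_000)
             .groupby("department")
             .agg(headcount=("salary", "size"),
                  pct_over_100k=("over_100k", "mean"))
)
# Keep departments with at least 10 employees, rank by the percentage.
top3 = (stats[stats["headcount"] >= 10]
        .sort_values("pct_over_100k", ascending=False)
        .head(3))
print(top3)
```

The two steps to call out in an interview are the order of operations (filter on headcount *before* ranking) and the ranking semantics (dense rank vs. row number when percentages tie).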
| Question |
|---|
| User Experience Percentage |
| Distance Traveled |
| Experiment Validity |
| Download Facts |
| Weighted Keys |
| Maximum Profit |
| Third Purchase |
| Sum to N |
| Bank Fraud Model |
| Christmas Dinner Ingredient Optimization |
| P-value to a Layman |
| Random Weighted Driver |
| Uber User Journey |
| Cancellation Fees |
| Assumptions of Linear Regression |
| Testing Price Increase |
| Dice Rolls From Continuous Uniform |
| Encoding Categorical Features |
| Drawing Balls From Bin |
| Random Forest Explanation |
| Type I and II Errors |
| Max Width |
| Uniform Car Maker |
| Data Preparation for Imbalanced Data |
| Demand Metrics |
| Type-ahead Search |
| MLE vs MAP |
| Uber Eats Customer Experience |
| Ride Requests Model |
Synthesized from candidate reports. Individual experiences may vary.
Initial conversation with a recruiter covering your background, resume, and role fit. Uber typically shares materials ahead of time outlining the types of interviews to expect. Part of this round may also touch on Uber's cultural values.
A combined round mixing SQL and Python coding with a stats or product case study. SQL questions cover window functions, joins, CTEs, lag/lead, and dense rank. Stats questions often involve probability and expected value problems, and the case study tests experiment design and metric definition for a marketplace or logistics feature.
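If window functions are rusty, it helps to rehearse the SQL patterns on a toy table first. The sketch below uses a hypothetical `trips` table (schema and values invented) and maps `LAG(...) OVER (PARTITION BY ... ORDER BY ...)` and `DENSE_RANK()` to their pandas equivalents:

```python
import pandas as pd

# Toy trips table (hypothetical schema) for practicing window-function logic.
trips = pd.DataFrame({
    "rider_id": [1, 1, 1, 2, 2],
    "trip_ts":  pd.to_datetime(["2024-01-01", "2024-01-03", "2024-01-10",
                                "2024-01-02", "2024-01-02"]),
    "fare":     [12.0, 8.0, 20.0, 15.0, 15.0],
})

trips = trips.sort_values(["rider_id", "trip_ts"])
# SQL: LAG(trip_ts) OVER (PARTITION BY rider_id ORDER BY trip_ts)
trips["prev_ts"] = trips.groupby("rider_id")["trip_ts"].shift(1)
trips["days_since_prev"] = (trips["trip_ts"] - trips["prev_ts"]).dt.days
# SQL: DENSE_RANK() OVER (PARTITION BY rider_id ORDER BY fare DESC)
trips["fare_rank"] = trips.groupby("rider_id")["fare"] \
                          .rank(method="dense", ascending=False)
print(trips)
```

Tied fares (rider 2) both get dense rank 1, which is exactly the tie-handling distinction interviewers probe when they ask `RANK` vs `DENSE_RANK` vs `ROW_NUMBER`.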
A second technical screen focused on A/B testing, causal inference, and analytics reasoning. Expect questions on confidence intervals, sample size, experiment duration, and how to handle edge cases or unexpected results. May also include an interactive coding question.
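For the sample-size and duration questions, a back-of-the-envelope calculator for a two-proportion test is worth being able to reproduce. This sketch uses the standard pooled-variance planning approximation; the 10%→11% effect and the ~2K-riders-per-day traffic figure are made-up assumptions for illustration:

```python
from statistics import NormalDist

def sample_size_per_arm(baseline, mde, alpha=0.05, power=0.80):
    """Per-arm n for a two-sided two-proportion z-test (planning approximation)."""
    z = NormalDist().inv_cdf
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    num = (z(1 - alpha / 2) * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z(power) * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(num / mde ** 2) + 1

# E.g., detect an absolute 10% -> 11% shift in ride-completion rate:
n = sample_size_per_arm(0.10, 0.01)
days = n * 2 / 2_000      # duration at ~2K eligible riders/day (assumed traffic)
print(n, round(days, 1))
```

Walking from minimum detectable effect to required duration, and then discussing what to do when the duration is commercially unacceptable, is the shape of answer this round rewards.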
A deep-dive technical round focused on experiment design and causal inference methods such as switchback testing, difference-in-differences, synthetic control, and instrumental variables. Questions are often framed around real Uber product scenarios like surge pricing, ETA display, or order batching.
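A quick way to internalize two of these designs is to compute each estimator on toy data. The sketch below simulates a switchback (alternating the treatment in short time blocks, since riders and drivers share one marketplace and can't be cleanly split) and a four-cell difference-in-differences; every number is invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Switchback: flip the treatment on/off in 2-hour blocks within one city.
hours = 24 * 14                                  # two weeks of hourly observations
treated = (np.arange(hours) // 2) % 2 == 0       # alternate every 2 hours
diurnal = 100 + 10 * np.sin(np.arange(hours) * 2 * np.pi / 24)  # demand cycle
trips = diurnal + 5 * treated + rng.normal(0, 8, hours)         # true effect: +5

# Alternation balances the diurnal cycle across arms, so a mean difference works.
switchback_estimate = trips[treated].mean() - trips[~treated].mean()

# --- Diff-in-differences: four cell means, made-up city-level daily trips.
pre_treated, post_treated = 1000, 1150           # city with the pricing change
pre_control, post_control = 900, 980             # comparison city
# Valid only under parallel trends: absent treatment, both cities move together.
did = (post_treated - pre_treated) - (post_control - pre_control)

print(round(switchback_estimate, 1), did)
```

The follow-up questions tend to target the failure modes: carryover between switchback blocks (drivers repositioning) and parallel-trends violations for DiD, so rehearse those alongside the mechanics.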
A case study round applying data science and econometric thinking to a real business problem, such as estimating the price elasticity of Uber Eats delivery fees or evaluating marketplace efficiency. Emphasis is on structuring the problem, identifying endogeneity, and selecting the right causal method.
A behavioral and resume walkthrough round with the hiring manager. Expect questions about working with multiple stakeholders, conflict resolution, and how you've approached ambiguous data science problems in past roles.
A behavioral round conducted by a designated bar raiser who evaluates whether you embody Uber's cultural values across all dimensions. Stories should reflect boldness, business and customer impact, and sound judgment — not just technical competence.