Zoom Video Communications, Inc. is an American communications technology company founded in 2011 and headquartered in San Jose, California. The company offers video conferencing and online chat services through a cloud-based software platform built for real-time collaboration. Unlike many competitors, Zoom offers a unified, 3-in-1 meeting experience combining HD video conferencing, mobility, and web meetings.
At Zoom, data plays an integral role in understanding and improving user experiences, and the data science team lies at the foundation of the company’s success. Recently, Zoom announced a “Smart Meeting” feature that offers automatic transcription for its video conferencing services, saving users the time of taking and sharing meeting notes.
To achieve this feat, Zoom employs voice-to-text transcription: a voice-based AI technology that uses machine learning to identify the voice pattern of each participant in a meeting. As this feature shows, Zoom is constantly doing ground-breaking work that gives both new and seasoned data scientists a platform on which to grow.
The Data Scientist Role at Zoom
Zoom is a rapidly growing company that relies primarily on data science to make decisions that affect growth, drive innovation, and improve customer experiences. Data scientists, as well as data engineers, data architects, data analysts, and database engineers, play an integral role in maintaining this standard.
Data scientists at Zoom leverage data and data technologies to identify and understand business trends, improve new and existing products, and raise end-user satisfaction. Even though the company has a central data science team, individual roles and functions may differ slightly and can be tailored to specific teams and assigned products or projects. As such, the necessary qualifications range from standard data analytics and visualization knowledge to heavy machine learning and deep learning skills.
While Zoom provides a large platform and ecosystem for new data scientists to grow, it also attracts highly skilled and experienced data scientists who join professionals already making an impact at global scale. On average, Zoom hires experts with at least four years (6+ for senior level) of industry experience using data to facilitate decisions.
Other relevant requirements include:
- Hands-on experience with classical machine learning, deep learning, reinforcement learning/control systems, probability theory, statistics, causal inference, time series forecasting, optimization, and dynamic programming.
- Bachelor’s/Master's/PhD in Computer Science, Statistics, Economics, Mathematics, Physics, Operations Research, or other quantitative fields.
- Strong programming skills, especially in Python, Scala, and Java.
- Extensive experience with analytics and visualization tools (e.g., Tableau).
- Experience with SQL, Python libraries, and R, with the ability to build and run complex models.
- Experience with building data pipelines, efficient ETL design, implementation, and maintenance.
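The pipeline-building requirement above can be sketched as a minimal extract-transform-load flow. This is an illustrative example only, not Zoom's actual infrastructure; the file paths and columns (`account_id`, `event_date`, `minutes`) are hypothetical.

```python
# Minimal ETL sketch in pandas. All names are hypothetical placeholders,
# not Zoom's real schema.
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    """Read raw usage events from a CSV export."""
    return pd.read_csv(path, parse_dates=["event_date"])

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Clean the events and aggregate to one row per account per day."""
    clean = raw.dropna(subset=["account_id"]).copy()
    clean["minutes"] = clean["minutes"].clip(lower=0)  # guard against bad rows
    return (clean.groupby(["account_id", clean["event_date"].dt.date])
                 ["minutes"].sum().reset_index(name="daily_minutes"))

def load(table: pd.DataFrame, path: str) -> None:
    """Write the aggregate out; a warehouse writer would replace this in production."""
    table.to_csv(path, index=False)
```

In an interview setting, the design choice worth articulating is the separation of the three stages, which keeps each step independently testable.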
Data Scientist Teams at Zoom
The term “data science” at Zoom covers a wide scope of domain expertise, including data scientists, data engineers, and data architects. Although there is a dedicated data science team, data scientists can also be assigned to other internal teams or collaborate cross-functionally to achieve desired goals. Teams are constantly expanding across the organization, and although general roles may sometimes overlap, primary responsibilities depend heavily on the assigned team.
Below are some of the data science teams at Zoom and their general responsibilities.
- Data Science Team: Analyzing current data designs to optimize them and provide structural improvements that handle business growth, and providing insights and new data processes to support predictive modeling. Other duties include collaborating cross-functionally with product managers, data engineers, systems architects, and sales/marketing on critical projects.
- Trust and Safety Engineering: Leveraging data to identify trends, conducting root cause analysis, and discovering opportunities for improvement. Setting, monitoring, and maintaining key metrics to evaluate the effectiveness of the safety programs. Writing complex SQL queries and R or Python code to build data models that fuel analytical frameworks and dashboards. Leveraging advanced machine learning, computer vision, and data mining technologies to develop highly scalable tools, models, and algorithms that predict growth. As a member of this team, you also work cross-functionally with internal teams, including product engineers, product managers, data engineers, operations, legal, compliance, and marketing managers, on critical projects.
- Sales Analytics: Writing Python or R code and SQL queries to build models that sustain analytical frameworks and dashboards. Ensuring data hygiene and integrity and building predictive models based on product usage and billing data. Duties also include building and maintaining data visualizations to effectively communicate key sales metrics and KPIs across multi-level organizations.
- Data Science (Machine Learning): Researching, designing, and developing new machine learning solutions and algorithms to build new features and improve existing products. Working extensively on data clustering, segmentation, filtering, estimation, automation, and predictive modeling to unlock new growth opportunities. Collaborating with engineers, data analysts, and product managers to develop and improve growth models.
If you're looking for more info on data science adjacent fields, be sure to check our new article on research scientists and their role in tech!
The Interview Process
The interview process follows the standard tech hiring process. It starts with an initial HR interview over a Zoom Meeting, followed by a take-home challenge emailed to you. If you pass the take-home challenge, you will be invited onsite for a series of interviews with the Head of Business, the Head of HR, and the Data Science Team Lead.
Initial HR Interview
This is a standard exploratory discussion with an HR representative over Zoom. Over the course of the interview, the interviewer will tell you more about the role, the team, and the company as a whole. You will also get to discuss your skills and past projects you’ve worked on.
- Walk me through a machine learning project you've done in the past.
- What would you do if you have conflicts with your teammates?
- What did you learn from your current role?
Online Data Challenge
After the initial Zoom Meeting, you will be emailed a take-home data challenge that consists mostly of machine learning, data mining, and data structure questions that broadly test your skills. There will be up to 15 questions, with one and a half hours allotted for completion.
Onsite Interview
The onsite interview is the last stage in the process. It comprises three back-to-back rounds with the Head of Business, the Head of HR, and the Data Science Team Lead. Zoom’s data science onsite is a mixture of behavioral and technical interviews, designed to test candidates across the breadth of data science as well as their product sense.
Questions are usually case-study based, with data sets similar to real-life Zoom cases, spanning basic statistical concepts, machine learning, and design.
Try this question from Interview Query to practice.
Notes and Tips
The Zoom data scientist interview follows the standard tech interview process. Questions follow familiar patterns but are tailored to the requirements of individual roles. They mix statistics, case studies, coding, behavioral questions, and product sense.
Looking for more data science interview guides? Take a look at our articles on Facebook, Google, and Twitch data scientist interviews.
Zoom Data Scientist Interview Questions
1) Scan through the following steps and import the needed Python libraries – don’t worry, you can import them later if you forget one.
2) Load the data from the CSV file
3) Perform basic commands to understand the data
4) Bin the following features:
a) 'currentterm' into [0 to 11], [11 and more]
b) 'mrr_entry' into [0 to 14.99], [14.99 to 500], [500 to 5K], [5K and more]
c) 'account_age' into [0 to 90], [90 to 180], [180 to 360], [360 and more]
d) 'days_left_in_term' into [0 to 30], [30 to 360], [360 and more]
5) Set 'churn_next_90' as your target column
6) Set 'zoom_account_no' as an ID column, this should not be a feature
7) Set 'ahs_date' as a date column, this should not be a feature
8) Treat the binned features from step (4) and the following features as categorical features:
9) Perform feature selection using your preferred method and ML algorithm. Choose 10 features and continue to step (10).
10) Divide the new data frame (with 10 features) into train and test subsets
11) Use a different algorithm from step (9) and perform cross-validation for parameter tuning. Print out the results.
12) Based on results from (11), fit your model on the train subset
13) Test your fitted model using the test subset
14) Print feature importance, accuracy score (roc_auc_score), and confusion matrix (crosstab) from step (13)
15) Save your trained model using pickle
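Assuming a pandas/scikit-learn workflow (the prompt does not mandate specific libraries), the steps above could be sketched roughly as follows. A synthetic DataFrame stands in for the CSV; only the column names come from the prompt, and the model and hyperparameter choices are illustrative, not the expected solution.

```python
# Hedged sketch of the take-home steps, using synthetic stand-in data.
# Column names come from the prompt; values and model choices are illustrative.
import pickle
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "zoom_account_no": np.arange(n),                     # step 6: ID, not a feature
    "ahs_date": pd.date_range("2020-01-01", periods=n),  # step 7: date, not a feature
    "currentterm": rng.integers(0, 36, n),
    "mrr_entry": rng.uniform(0, 10_000, n),
    "account_age": rng.integers(0, 720, n),
    "days_left_in_term": rng.integers(0, 720, n),
    "churn_next_90": rng.integers(0, 2, n),              # step 5: target
})

# Step 4: bin the numeric features into the ranges given in the prompt.
bins = {
    "currentterm": [0, 11, np.inf],
    "mrr_entry": [0, 14.99, 500, 5_000, np.inf],
    "account_age": [0, 90, 180, 360, np.inf],
    "days_left_in_term": [0, 30, 360, np.inf],
}
for col, edges in bins.items():
    df[col + "_bin"] = pd.cut(df[col], bins=edges, include_lowest=True)

# Steps 5-8: separate the target, drop ID/date, one-hot encode the bins.
y = df["churn_next_90"]
X = pd.get_dummies(df[[c + "_bin" for c in bins]])

# Step 9: feature selection with one algorithm (random-forest importances here).
selector = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top10 = X.columns[np.argsort(selector.feature_importances_)[::-1][:10]]

# Step 10: train/test split on the reduced frame.
X_train, X_test, y_train, y_test = train_test_split(X[top10], y, random_state=0)

# Step 11: tune a *different* algorithm with cross-validation.
grid = GridSearchCV(LogisticRegression(max_iter=1000), {"C": [0.1, 1, 10]}, cv=5)
grid.fit(X_train, y_train)   # step 12: fit on the train subset

# Steps 13-14: score on the test subset and report.
pred = grid.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, pred))
print(pd.crosstab(y_test, grid.predict(X_test)))

# Step 15: persist the trained model with pickle.
with open("churn_model.pkl", "wb") as f:
    pickle.dump(grid.best_estimator_, f)
```

Because the target here is random noise, the printed AUC will hover near 0.5; on the real churn data, the same pipeline would reveal whether the binned features carry signal.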
Thanks for reading! Visit interviewquery.com for standard Zoom data scientist interview questions and join the community of thousands of data scientists and research scientists practicing for their next interview.