Researching People in Organizations

Meta — Content Moderator Performance

As part of the Global Operations Organization at Meta, my team supported the company’s Trust & Safety mission. I researched, and set the strategy for, the performance and training of Meta’s 40,000 professional content moderators worldwide, who removed harmful text, images, and videos across all Meta platforms.

  • What I Measured: (1) time-to-proficiency, (2) accuracy (i.e., correct decisions vs. false positives/negatives), and (3) well-being of the content moderator population (n = 40,000).

  • Mixed Data Collection: Repeated-measures surveys of content moderator performance and well-being, comparison tests of accuracy data, and interviews with a targeted sample (e.g., low performers, voluntary leavers, high performers) of both content moderators and managers.

  • Data Analysis: Repeated-measures ANOVA and Wilcoxon rank-sum tests of the survey and accuracy data; thematic analysis of the interviews; linear regression for drawing inferences.

  • Impact: My research identified several areas for improvement that reduced content moderators’ time-to-proficiency and increased their accuracy. Additionally, I set the strategy for the entire organization (~100 people) on knowledge management, electronic performance support systems, and learning localization.
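The Wilcoxon rank-sum comparison of accuracy data mentioned above can be sketched in a few lines of Python. This is a minimal, standard-library-only illustration with synthetic data; the cohort sizes, accuracy rates, and seed are hypothetical placeholders, not the actual moderator data.

```python
# Hedged sketch of a two-sided Wilcoxon rank-sum test (normal
# approximation, average ranks for ties, no continuity correction).
# All numbers here are synthetic placeholders, not real moderator data.
import math
import random
from statistics import NormalDist

def rank_sum_test(sample_a, sample_b):
    """Return (z, p) for a two-sided Wilcoxon rank-sum test."""
    n1, n2 = len(sample_a), len(sample_b)
    pooled = sorted((value, idx) for idx, value in enumerate(sample_a + sample_b))
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j + 1) / 2  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[pooled[k][1]] = avg_rank
        i = j
    w = sum(ranks[:n1])  # rank sum of sample_a
    mean_w = n1 * (n1 + n2 + 1) / 2
    sd_w = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mean_w) / sd_w
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Hypothetical accuracy rates (share of correct decisions) for two
# cohorts, e.g. before vs. after a training change.
random.seed(7)
baseline = [random.gauss(0.88, 0.04) for _ in range(200)]
post_training = [random.gauss(0.91, 0.04) for _ in range(200)]

z, p = rank_sum_test(post_training, baseline)
print(f"z = {z:.2f}, p = {p:.3g}")
```

A rank-based test like this is a reasonable choice when accuracy scores are skewed or bounded, since it makes no normality assumption about the underlying distribution.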

Adobe — Technical Talent Development

The Technical Talent population at Adobe consisted of nearly 7,000 individuals globally. My role on the Global Talent Development Team was to research all aspects of technical skill for a target audience of 6,500 software engineers, computer scientists, and data scientists.

  • What I Measured: (1) current machine learning skill (before the training intervention), (2) learning outcomes of the training program participants, and (3) learning experience of the training program (n = 1,500 participants + 100 managers).

  • Mixed Data Collection: Surveys of and focus groups with participants; interviews and surveys of the participants’ managers.

  • Data Analysis: One learning-outcome measure was the change in the number of new projects or product features that managers assigned to participants in the months after they completed the training program.

  • Impact: Over a three-year period, I consistently measured and reported the research results to the CTO and the Senior VP responsible for all machine learning at Adobe. Based on my recommendations, the program evolved into a bootcamp, and the contract with the SME vendor was extended by another three years.
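The learning-outcome calculation above, the pre/post change in manager-assigned projects per participant, amounts to a simple difference score. A minimal sketch, with hypothetical participant IDs and counts rather than Adobe data:

```python
# Hedged sketch of the learning-outcome difference calculation:
# change in new projects/features assigned to each participant
# before vs. after the program. All names and counts are hypothetical.
from statistics import mean

# (participant, projects assigned before training, projects assigned after)
assignments = [
    ("eng_01", 1, 3),
    ("eng_02", 2, 2),
    ("eng_03", 0, 2),
    ("eng_04", 1, 4),
]

deltas = [after - before for _, before, after in assignments]
avg_gain = mean(deltas)
print(f"average gain in assigned projects: {avg_gain:.2f}")
```

In practice a measure like this would be paired with the survey and manager-interview data, since assignment counts alone can be confounded by team workload.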

The Church of Jesus Christ of Latter-day Saints — Servant Leadership Practice

I examined how a representative group of servant leaders developed their behavioral style and attitudinal approach.

  • What I Measured: (1) current leadership knowledge, behaviors/practices, and attitudes; (2) the development path of each individual (n = 25).

  • Qualitative Data Collection: Surveys (using a previously validated instrument) to establish each leader’s level of servant leadership practice, interviews, direct observation of a targeted sample of training programs, and review of training documents.

  • Data Analysis: Thematic analysis with four levels of coding (first-order codes, sub-categories, categories, and aggregate dimensions).

  • Impact: My work highlighted the cumulative effect that training and trigger experiences have on a servant leader’s development over an extended period. I gave particular attention to events under the organization’s control: exposure to models of servant leadership, intra-organizational experience (i.e., job rotation), formal training, and spiritual learning. This research earned an award from the Greenleaf Center for Servant Leadership.
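The four-level coding scheme used in the analysis can be pictured as a nested tree, first-order codes rolling up into sub-categories, categories, and aggregate dimensions. The sketch below uses hypothetical placeholder labels, not the study’s actual codes:

```python
# Illustration of a 4-level coding structure (first-order codes ->
# sub-categories -> categories -> aggregate dimensions). The labels
# are hypothetical placeholders, not the study's actual codes.
coding_tree = {
    "aggregate dimension: developmental influences": {
        "category: organizational experiences": {
            "sub-category: formal training": [
                "first-order: attended leadership workshop",
                "first-order: completed mentoring course",
            ],
            "sub-category: job rotation": [
                "first-order: served in multiple departments",
            ],
        },
    },
}

def count_first_order_codes(node):
    """Recursively count the leaf (first-order) codes in the tree."""
    if isinstance(node, list):
        return len(node)
    return sum(count_first_order_codes(child) for child in node.values())

print(count_first_order_codes(coding_tree))
```

Representing the codebook as nested data like this makes it easy to audit how many first-order codes support each higher-level theme.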

Stanford University GSB — Redesigned the Data & Decisions MBA Course

Background: As an internal consultant, I partnered with Dr. Lanier Benkard to completely redesign the MBA core course Data & Decisions, a master’s-level data analytics course required of all first-year MBA students. We redesigned the course sequence, reworked how R was taught, and adopted a “flipped” delivery model built around a series of high-fidelity instructional videos.

Impact: We first piloted the redesigned course with a single section of students (n = 70). The pilot was so successful that the Dean of the GSB chose to have all future students (n = 480 annually) take the redesigned version thereafter.