Assessing tutor applicants' level of Uncertainty Tolerance, a metric that predicts a student-tutor pair's success.
The Tutor Onboarding Form is used to gather information about a prospective tutor's background and skills. Prospective tutors must also complete additional forms detailing their availability and their subject and grade-level preferences.
If a tutor is accepted, this information is used to pair them with a compatible student.
One of our key findings from the Spring semester was that a major cause of a student-tutor pair failing is low Uncertainty Tolerance, a term we coined to describe "a tutor's ability to adapt to a student's varying communication levels, needs, and schedule." Closely related to Uncertainty Tolerance is the amount of compassion and empathy the tutor displays toward the student.
While the Tutor Onboarding Form collects background data on an applicant, we felt it did not accurately evaluate a tutor's empathy or Uncertainty Tolerance.
We set a goal to develop a tool that could measure and predict the level of Uncertainty Tolerance in tutors during their onboarding process.
We decided to create a forced-choice assessment, which requires respondents to commit to a specific answer. The format "forces" the respondent to choose between seemingly similar or equally correct answers, which minimizes the chance that participants fake responses to make the best impression.
Our particular assessment involved two scenarios that placed the applicant in realistic situations they might face with a student. We hypothesized that the applicant's answer choices could be used to measure their Uncertainty Tolerance level.
We used Qualtrics to present each scenario to test participants, tracking their responses to each question and their feelings about each scenario (captured through written responses).
Participants: college-educated adults
Sample size: 58 volunteers
We chose to give each response a weighted score, depending on how empathetic the response was. More empathetic responses received higher scores than their counterparts.
We then developed a standard scoring metric that took these considerations into account. Some answers were conditional on previous responses, so we weighted answers depending on the path a participant took through our scenario.
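As a rough illustration, here is a minimal Python sketch of that kind of path-dependent scoring. The response IDs and weight values below are hypothetical stand-ins, not our actual scoring metric.

```python
# Hypothetical base weights: more empathetic choices earn higher scores.
BASE_WEIGHTS = {
    "q1_reach_out_again": 3,   # proactive, empathetic
    "q1_wait_for_student": 1,  # passive
    "q2_ask_about_needs": 3,
    "q2_restate_rules": 1,
}

# Conditional weights: a follow-up answer is scored differently
# depending on the earlier choice (the "path") that led to it.
PATH_WEIGHTS = {
    ("q1_reach_out_again", "q2_ask_about_needs"): 4,  # consistent empathy
}

def score_responses(responses: list[str]) -> int:
    """Sum weighted scores over the path a participant took."""
    total = 0
    for prev, curr in zip([None] + responses[:-1], responses):
        total += PATH_WEIGHTS.get((prev, curr), BASE_WEIGHTS.get(curr, 0))
    return total

print(score_responses(["q1_reach_out_again", "q2_ask_about_needs"]))  # -> 7
```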
We analyzed the correlation between the two scenario scores. If volunteers who received high Uncertainty Tolerance scores on Scenario 1 also tended to score high on Scenario 2 (and low scorers scored low on both), we could conclude that the two scenarios were reliably measuring the same attribute.
Excitingly, participants' scores on the two scenarios had a Pearson's r correlation of 0.38, which was significant at p < .05 (p = .0038). This suggested that our two scenarios were likely measuring the same attribute in respondents!
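For readers who want to reproduce this kind of reliability check, the computation is a few lines with SciPy. The score arrays below are placeholder values, not our data.

```python
from scipy.stats import pearsonr

scenario1_scores = [12, 8, 15, 9, 11]   # one entry per participant (hypothetical)
scenario2_scores = [10, 7, 14, 10, 12]

# pearsonr returns the correlation coefficient and its two-sided p-value.
r, p = pearsonr(scenario1_scores, scenario2_scores)
print(f"Pearson's r = {r:.2f}, p = {p:.4f}")
```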
We tested a second iteration of the assessment with tutors at Pandemic Professors. We hoped to discover whether their qualitative responses would provide further insight into how tutors differ in their approach to unreliable students, especially tutors with a history of re-matching. We hypothesized that these tutors would receive lower Uncertainty Tolerance scores on the assessment.
We used Qualtrics to present a revised version of the assessment to actual Pandemic Professors tutors. As before, we tracked their responses to each question and their feelings about each scenario.
Participants: Pandemic Professors tutors
Sample size: 50 tutors
Once again we received qualitative responses that were rich in depth and context. However, it was difficult to test our hypothesis that tutors with a history of re-matching would receive lower Uncertainty Tolerance scores on our assessment: only a few respondents met that criterion, so our sample was too small to yield statistically significant results.
Although the hypothesis test was inconclusive, we found that current tutors completed the assessment differently from prospective tutors. Specifically, they referenced their own experiences and Pandemic Professors' guidelines when responding to scenarios.
For this reason, we decided that any future rounds of testing should involve tutors who have applied for a position but have not yet been paired.
To analyze the written responses, we developed an algorithm that identifies keywords tutors commonly used and calculates a score based on each keyword's polarity (i.e., how positive or negative it is).
The keyword score can be used in conjunction with the quantitative scores to provide a more holistic analysis of a tutor's Uncertainty Tolerance level.
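As a sketch of how such a keyword score might be computed, the snippet below averages polarity values from a small hand-built lexicon. The keywords and polarity values here are hypothetical; our actual keyword list was derived from tutors' written responses.

```python
import re

# Hypothetical polarity lexicon, with values in [-1, 1].
POLARITY = {
    "understand": 0.8, "patient": 0.7, "support": 0.6,
    "frustrated": -0.6, "unreliable": -0.5, "quit": -0.9,
}

def keyword_score(text: str) -> float:
    """Average the polarity of known keywords found in a written response."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = [POLARITY[w] for w in words if w in POLARITY]
    return sum(hits) / len(hits) if hits else 0.0

print(keyword_score("I try to stay patient and understand why a student is unreliable."))
# -> 0.33 (two positive keywords, one negative)
```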