New class content beta tested with users prior to launch.
Class content, metrics, comparisons, and final recommendations before the full launch to paying customers.
Students were not completing an advanced introductory class, and even those who finished lacked comprehension. This impacted several of our KPIs, including churn, Net Promoter Score (NPS), and students’ ability to land a job. We decided to fully rewrite the class, but we had already tried that approach three times in the past. Determined to get a different outcome, we set out to beta test the content before releasing it to all customers.
Progress data told us that students were starting the class, but few were finishing. Those who did finish gave the class low NPS ratings and left numerous comments saying they didn’t understand the material or feel confident. This was a big blocker to our ultimate goal: students landing jobs. We needed to solve multiple problems: low completion, unhappiness with the class content, and low NPS ratings.
Our routine approach to curriculum development wasn’t working with a more complex topic. We needed to completely rethink how we could deliver the content on time and on budget, while still meeting student needs.
As the project progressed, we also started to see a problem with our data collection. Progress was no longer a clear metric for comprehension and success. Students would complete the lessons to mark them done, but weren’t able to solve simple logic problems on their own. We decided to track skill confidence at the start and end of the class. We also tracked the number of students who landed a job after successfully completing the class.
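The two metrics above can be sketched in a few lines. This is a hypothetical illustration only: the field names, the confidence scale, and the survey schema are assumptions, not the actual data model we used.

```python
# Hypothetical sketch of the pre/post confidence metric and job-placement
# rate described above. Field names and the 1-5 scale are assumptions.

def confidence_lift(responses):
    """Average change in self-reported skill confidence (end minus start)."""
    deltas = [r["end"] - r["start"] for r in responses]
    return sum(deltas) / len(deltas)

def placement_rate(students):
    """Share of students who completed the class and then landed a job."""
    completed = [s for s in students if s["completed"]]
    return sum(1 for s in completed if s["landed_job"]) / len(completed)

# Example survey responses on an assumed 1-5 confidence scale.
responses = [
    {"start": 2, "end": 4},
    {"start": 1, "end": 4},
    {"start": 3, "end": 4},
]
print(confidence_lift(responses))  # 2.0

students = [
    {"completed": True, "landed_job": True},
    {"completed": True, "landed_job": False},
    {"completed": False, "landed_job": False},
]
print(placement_rate(students))  # 0.5
```

Tying the subjective confidence number to the placement rate is what lets a self-reported score carry weight as an outcome metric.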
To begin the test, we planned to work closely with a small set of beta testers during class production so we could quickly iterate on real feedback.
We released two lessons at a time to beta testers and followed their progress closely. Each lesson included a feedback form where we gathered impressions of the content and asked questions about delivery methods (video, written, or challenge). Testers also met with the class instructor in 1:1 sessions to ask questions and give feedback. We conducted these as user tests (asking no leading questions) combined with general Q&As.
After testers completed each batch of two lessons, we compiled the feedback and reviewed it with the Instructional Designer and SME (subject matter expert). We iterated on lesson content quickly before moving to the next concept. This allowed us to tweak the class outline as needed, spending more time on complex topics.
We also implemented Happiness Tracking Surveys (HaTS) for a brand-new lesson type. This allowed us to gather first impressions of the look and feel of a challenge before rolling it out to other classes.
The test took place over the holidays, so we lost some momentum just as the concepts were getting more difficult. This affected beta testers’ comprehension, and we needed to run some refreshers to get things back on track. Keeping testers active and engaged over a span of several months was difficult, so we incentivized completing the test to combat that issue.
Also, because this was our first actual beta test, deciding which feedback to implement versus which reflected the experience of only a few people was difficult. We needed to think long and hard about what was essential knowledge and what was “nice to know but not necessary.”
Finally, confidence proved to be a difficult metric. Confidence is subjective, so how can you accurately measure it? We tied it to successful outcomes, which helped give it more weight.
The beta test was conducted over three months, roughly five months before we released the class to paying customers. A number of improvements came out of beta test feedback: