How Comet Redefines Learning Measurement

Investing in workplace learning is more than allocating time, resources, or funding—it’s about ensuring these investments lead to measurable impacts on employee development and business outcomes. Traditional learning metrics often lack the rigor or consistency needed to demonstrate true value. That’s where Comet is different. Through our Impact Funnel, Comet measures engagement, learning, and behavior change, yielding meaningful metrics and actionable insights to help our clients understand impact and ROI. 

This post shares the framework we use to measure impact, and the specific metrics we use within it. Keep in mind that the first step of every Comet client engagement is a kickoff meeting that includes aligning on a definition of success and a measurement plan. So think of this as a menu of data sources and metrics, with each client ordering the meal that best addresses their business objectives.

The Comet Impact Funnel 

The Kirkpatrick Model serves as the foundation of Comet’s approach. Kirkpatrick’s four levels—Reaction (aka Engagement), Learning, Behavior, and Results—act as stages in Comet’s Impact Funnel, allowing us to track the user journey from initial engagement to measurable behavior change. For this post, we’ll leave Level 4 (Results) out of the framework—measuring business results requires metrics specific to each client’s situation, and typically some shared measurement responsibility between us and our clients. Below is a breakdown of each level, including the metrics, strategies, and data points we capture within Comet.

Level 1: Engagement

The top of the funnel is Engagement. How much of the target population (all training participants or event attendees) engaged with the “long tail” experience? How much learning did each user consume? How much did they engage in social discussions and other modes of learning?

In traditional settings, tracking stops at surface-level metrics like attendance or completion rates. However, Comet takes engagement tracking further, capturing both the quantity and quality of user interactions with learning content. Here’s how:

  • Metric Breakdown:

    • Progress & Completion: We cover the basics, such as tracking microlesson completions. We also break down user activity into different learning “moments”, such as engaging in group discussions, responding to quizzes, and completing actions.

    • Time Invested: Beyond completion, Comet measures the time users actively invest in learning modules, discussions, and actions in the flow of work. The Comet experience is self-guided and asynchronous, so it is meaningful when users choose to engage!

    • User-Generated Content: Comet includes both metrics and qualitative analyses of in-app social discussions and personal reflections, providing a richer picture of user involvement.

    • Reaction Polls: We embed NPS and sentiment surveys in every Comet experience (see the sketch below).
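To make a couple of these concrete, here’s a minimal sketch of how two of the metrics above, completion rate and NPS, might be computed from a raw event log. The event schema and sample data are hypothetical, not Comet’s actual data model:

```python
# Minimal sketch: completion rate and NPS from a raw event log.
# The schema (user_id, event_type) and the sample data are hypothetical.

def completion_rate(events, target_population):
    """Share of the target population that completed at least one microlesson."""
    completers = {e["user_id"] for e in events
                  if e["event_type"] == "microlesson_completed"}
    return len(completers) / len(target_population)

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

events = [
    {"user_id": "u1", "event_type": "microlesson_completed"},
    {"user_id": "u2", "event_type": "discussion_post"},
    {"user_id": "u2", "event_type": "microlesson_completed"},
]
print(f"Completion rate: {completion_rate(events, {'u1', 'u2', 'u3', 'u4'}):.0%}")  # 50%
print(f"NPS: {nps([10, 9, 7, 6, 3]):+.0f}")  # 2 promoters, 2 detractors -> +0
```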

Here are a few examples of how these metrics come through in the reports we provide to clients:

Comet’s engagement data lets organizations identify which aspects of learning resonate with employees, supporting refinements that boost participation and, ultimately, retention.

Level 2: Learning

When it comes to learning, we think about both knowledge acquisition and shifts in mindsets and attitudes. Our measurement approach is therefore twofold, capturing both hard knowledge (via quizzes) and softer attitudes (via Likert-scale surveys) for a complete picture of learning impact.

  • Knowledge Metrics:

    • Embedded Quizzes: Knowledge is gauged through quizzes within microlessons, creating a game-like experience that enhances retention.

    • Pre/Post Testing: By comparing knowledge levels before and after lessons, we assess both retention of key lessons from the live training and the impact of the comet itself.

  • Mindset & Attitude Tracking:

    • Likert Scale Surveys: Conducted at four stages—before training, at the start of each Comet section, at the end of Comet, and 60 days post-intervention—these surveys reveal changes in learner mindsets and attitudes over time (see the sketch below).
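As an illustration of how these surveys can be read, here’s a minimal sketch that computes the average response per stage and the shift versus the pre-training baseline. The stage labels mirror the four measurement points above; the data layout is invented for the example:

```python
# Sketch: average Likert response per survey stage, and the shift vs. baseline.
# Responses are (stage, question_id, score on a 1-5 scale); data is invented.

import statistics

responses = [
    ("pre_training", "q1", 2), ("pre_training", "q1", 3),
    ("start_of_comet", "q1", 4), ("start_of_comet", "q1", 4),
    ("end_of_comet", "q1", 5), ("end_of_comet", "q1", 4),
    ("60_days_post", "q1", 4), ("60_days_post", "q1", 5),
]

def stage_mean(stage, question):
    return statistics.mean(r for s, q, r in responses if s == stage and q == question)

baseline = stage_mean("pre_training", "q1")
for stage in ("pre_training", "start_of_comet", "end_of_comet", "60_days_post"):
    mean = stage_mean(stage, "q1")
    print(f"{stage:>14}: mean {mean:.1f}, shift vs. pre {mean - baseline:+.1f}")
```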

Like any great company that’s passionate about both data and cosmological metaphors, we keep our learning impact measurement focused on a North Star: in this case, bending the Forgetting Curve. The Forgetting Curve is well known to learning practitioners and has been confirmed experimentally several times since Hermann Ebbinghaus first described it in the 1880s: people tend to forget the vast majority of what they learn, to the tune of 70% within the first 24 hours and 90% within a week.

But with consistent reinforcement, or spaced repetition, it’s possible to reverse this decline in retention, or bend the Forgetting Curve back upward:
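To make the shape of that rebound concrete, here’s an illustrative simulation. It assumes the textbook exponential form of the curve (retention = e^(−t/S), where S is memory stability) and a made-up stability boost per review; it’s a toy model, not Comet’s internal algorithm:

```python
# Toy model of the Forgetting Curve (retention = e^(-t/S)) and how spaced
# repetition bends it. The stability boost per review is a made-up parameter.

import math

def retention(days_since_review, stability):
    """Fraction of material retained some days after the last review."""
    return math.exp(-days_since_review / stability)

def simulate(review_days, horizon=28, stability=1.0, boost=2.5):
    """Daily retention; each review resets recall and strengthens stability.

    Resetting to 100% on a review day is a simplification for illustration."""
    last_review, results = 0, []
    for day in range(horizon + 1):
        if day in review_days:
            stability *= boost  # each spaced rep strengthens the memory
            last_review = day
        results.append(retention(day - last_review, stability))
    return results

no_reps = simulate(review_days=set())
spaced = simulate(review_days={1, 3, 7, 14})
for day in (1, 7, 28):
    print(f"day {day:>2}: no reinforcement {no_reps[day]:.0%}, spaced {spaced[day]:.0%}")
```

In this toy model, the unreinforced learner retains essentially nothing after a week, while four spaced reps leave them holding roughly 70% a month out.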

Together, these data points allow clients to evaluate the effectiveness of both the initial training and Comet’s reinforcement, driving continuous learning for our users and real insights into what’s working (and what’s not) for our clients.

Level 3: Behavior Change

Behavior change is a more complex level to measure, as it requires evidence that learning is being applied. Importantly, Comet includes not just prompts to take action in the flow of work, but also a low-friction To-Do List to track the actions users commit to, opportunities to reflect on each new behavior with their cohort, and a habit builder that helps them repeat the new behaviors that matter to them, in a consistent context, until they become automatic. These popular features generate rich behavioral data we can observe and analyze (see the sketch after this list), with metrics including:

  • Actions committed: When a user adds an action to their to-do list, they express an intent to try that new behavior on the job. They can also set a reminder on their calendar.

  • Actions completed: Actions users have checked off their to-do list or completed during a microlesson, showing follow-through and application.

  • Actions completed with reflection: Every time a user completes an action, they get a customized reflection prompt such as “What impact do you expect from taking this action?” or “What advice do you have for your peers when taking this action?” These reflections become part of the group discussion feed in the Community, and they indicate a deeper level of commitment and “learning by doing.”

  • Habits committed: When a user commits to building a new habit, they define the cue that will trigger their repeatable behavior, set reminders on their calendar, and write a commitment statement visible to their cohort in the Community.

  • Habit reps completed: The frequency of repetitions or “reps” varies based on the behavior and context, but users typically commit to 3–5 reps per week as they build new habits.

  • New habits built: At the end of each week, users are prompted to either confirm that their new habit has become automatic, or re-commit to continuing to build that habit for the next week.
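To ground these metrics, here’s a small, hypothetical sketch of the records behind them and how they might roll up into a summary. The class and field names are ours for illustration, not Comet’s actual schema:

```python
# Hypothetical records behind the behavior metrics above, plus a roll-up.
# Class and field names are illustrative, not Comet's actual schema.

from dataclasses import dataclass

@dataclass
class Action:
    title: str
    committed: bool = False        # added to the user's to-do list
    completed: bool = False        # checked off, or done in a microlesson
    reflection: str | None = None  # reflection prompt answered on completion

@dataclass
class Habit:
    cue: str                    # the trigger for the repeatable behavior
    commitment: str             # statement visible to the cohort
    reps_completed: int = 0
    is_automatic: bool = False  # user confirmed the habit "stuck"

def behavior_summary(actions, habits):
    return {
        "actions_committed": sum(a.committed for a in actions),
        "actions_completed": sum(a.completed for a in actions),
        "completed_with_reflection": sum(
            a.completed and a.reflection is not None for a in actions),
        "habit_reps_completed": sum(h.reps_completed for h in habits),
        "new_habits_built": sum(h.is_automatic for h in habits),
    }

actions = [Action("Give feedback in next 1:1", committed=True, completed=True,
                  reflection="Went better than expected")]
habits = [Habit(cue="After standup", commitment="Ask one open question a day",
                reps_completed=4)]
print(behavior_summary(actions, habits))
```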

When we think about behavior change that we can’t observe directly in-app, there are three ways (short of techno-dystopian observational systems) to measure it: (1) in-app surveys, (2) separate user surveys delivered outside the app, e.g. via email and SMS, and (3) observations of behavior from other stakeholders. While observed behavior is considered the gold standard for data quality, it’s also really hard to get responses from stakeholders not directly involved in the event or learning experience, and difficult to know how a smaller sample skews the results.

Roughly, this is how we think about the trade-off between data gathering methods:

When using self-reported behavior metrics, we try to craft questions with objective, measurable answers to improve the precision of responses (e.g., “How many times did you give feedback during your last shift?”). This data enables organizations to quantify shifts in workplace behaviors, connecting learning to performance.

With behavior, just as with learning measurement, we focus on changes over time. Below is an example of survey response data from a recent comet. Each line represents a different question—in this case, eight questions that captured the most important learning and behavior outcomes from a manager training—and each point represents the average response to that question during the pre-program survey, the start of the comet, or the end of the comet.

In this case, the “Start of Comet” surveys were taken 0–3 days after the live training event, with “End of Comet” surveys 30–45 days later. Our clients noticed:

  • A massive retention boost. Think about the classic Forgetting Curve relative to these results over 4–6 weeks.

  • A “catch-up” effect. Some concepts and skills that saw less impact from the live training event saw more impact in the long tail, as users had the chance to learn by doing.

  • A few “hot spots” to focus on. Where we saw relatively lower response scores or users backsliding in the weeks following the training, the client prioritized follow-on interventions (a simple way to flag these is sketched below).
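For the curious, here’s a sketch of how a hot-spot check like that might work: flag any question whose average response slipped between the start and end of the comet. The numbers are invented, not the client data described above:

```python
# Sketch: flag "hot spots" where average responses slipped during the comet.
# question -> (pre-program, start-of-comet, end-of-comet) means; data invented.

means = {
    "q1_feedback":   (2.4, 3.8, 4.2),
    "q2_delegation": (2.9, 3.6, 3.3),  # backslides after the live event
    "q3_coaching":   (2.1, 2.6, 3.9),  # catches up in the long tail
}

for question, (pre, start, end) in means.items():
    if end < start:
        print(f"hot spot: {question} slipped {start - end:.1f} points since the training")
    else:
        print(f"{question}: {end - pre:+.1f} vs. pre-program baseline")
```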

A Transparent, Data-Driven Experience

Data is only useful when it’s transparent and accessible. Comet offers real-time data dashboards, weekly reports, and comprehensive post-experience assessments to keep clients fully informed on learner progress.

  • Dashboard Access: Visual, real-time data on engagement, knowledge, mindset, and behavior change, allowing clients to monitor their cohorts’ progress on demand.

  • Weekly Insights and Recommendations: Regular updates offer actionable advice based on data trends and insights, enabling clients to address learning gaps or optimize engagement strategies mid-course.

  • Post-Experience Report: A full analysis of engagement, learning, and behavioral impact across the cohort, summarizing key takeaways and lessons learned.

This transparency is at the core of how Comet empowers our clients to take ownership of their learning outcomes, from initiation to impact.

Conclusion

With the Impact Funnel, Comet is redefining learning measurement. We’ve articulated concrete metrics to define impact at the levels of Engagement, Learning, and Behavior Change. Importantly, these metrics are generally applicable and comparable across cohorts, industries, and work environments. And because we align upfront with each client on what we’ll measure and how, learning investments become measurable, meaningful, and strategically impactful.
