Talking about engagement in digital health is a good thing. As Chrissy Farr noted in her recent column, largely gone are the days of paying PMPM fees for solutions that no one uses, and that's GREAT for the sector.
The intent behind the movement toward engagement metrics and models that pay per engaged (or per activated) user is absolutely good. But the movement also misses key aspects of health behavior science and decades of evidence-based behavior change.
My students sometimes groan a little when I use examples more than a decade old in class (side note: nothing makes you feel older than a bunch of wickedly smart students thinking something that feels like yesterday to you was so long ago it's irrelevant. But I regularly remind them that you have to look back to learn how to choose the best path forward), but I think Omada's early go-to-market strategy was brilliant "way back then" and remains a great source of learning today.
The employer health market was firmly in the PMPM zone, with general health platforms providing a bunch of mediocre "check the box" solutions for the concerns of self-insured employers. Did each platform have a weight loss program? Yes. Did most of them work well? No. Did each platform have physical activity promotion and heart disease education content? Yes. Did it work well at changing employee behavior and meaningfully lowering disease risks? Nah. This created a great opportunity for a company with a novel, evidence-based solution to come in. But Omada's program included a fitness tracker and connected scale (which were considered pricey incentive gifts, not standard program materials), which would have put the PMPM price way above "market." Omada priced based on enrollment and then went a step further, offering to go at risk on the outcome: they wouldn't get paid if an enrolled member failed to lose weight.
Omada could take that risk and succeed for a few reasons:
1) They built their product around an existing solution that had been well tested. This meant they had the data on what the usual outcomes looked like.
2) They built their own outcomes evaluation engine around it, hiring some fantastic scientists to support their ongoing evaluation, publication, and connection back to product execution.
3) The outcome their solution proposed to deliver was measurable, easily so in fact, and deliverable within the customer buying cycle. Furthermore, the outcome was a thing (% weight loss). Not the avoidance of a thing (a disease diagnosis).
Omada was also well positioned to talk about their engagement.
The DPP (and the studies on which the DPP was built - NIH doesn't put millions of dollars into a large multi-site, multi-year clinical trial without LOTS of foundational work de-risking the endeavor!) had data showing that participants who attended more sessions lost more weight - in population health science this is called a dose-response effect. It is why "engagement" would theoretically matter in a program/intervention. When we study drugs, we look at the dose of medication received and evaluate whether more is better. We also look for thresholds above which more doesn't help (or potentially even harms). Engagement is a good leading indicator in digital health models where a dose-response relationship exists. Far too few programs consider whether a threshold effect exists. And almost none consider models where the digital health solution is like a Z-pak: do these things for X period of time and you are good to go.
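To make the dose-response and threshold idea concrete, here is a toy sketch in Python. The numbers are made up for illustration (not data from the DPP or any real program): bucket participants by the "dose" of sessions attended, look at the average outcome in each bucket, and watch for where the gains flatten out.

```python
# A minimal sketch with made-up data, not any real program's analysis.
# Each tuple is (sessions_attended, pct_weight_loss) for one hypothetical participant.
from statistics import mean

participants = [
    (2, 1.0), (3, 1.5), (4, 2.2), (6, 3.8), (8, 4.9),
    (10, 5.6), (12, 6.1), (14, 6.2), (16, 6.3), (18, 6.2),
]

# Bucket participants by "dose" (groups of 4 sessions) and average the outcome per bucket.
buckets = {}
for sessions, pct_loss in participants:
    buckets.setdefault(sessions // 4, []).append(pct_loss)

print("sessions bucket -> mean % weight loss")
prev = None
for bucket in sorted(buckets):
    avg = mean(buckets[bucket])
    label = f"{bucket * 4}-{bucket * 4 + 3} sessions"
    # A dose-response effect shows up as the mean rising with dose; a threshold
    # (plateau) shows up where the change between adjacent buckets flattens toward zero.
    delta = "" if prev is None else f" (change vs. previous bucket: {avg - prev:+.1f})"
    print(f"{label}: {avg:.1f}%{delta}")
    prev = avg
```

In this fake cohort the per-bucket gains shrink from roughly +2 points to near zero, which is what a threshold looks like: past a certain dose, pushing for more engagement buys you very little additional outcome.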
I suspect thresholds and limited-dose models don't happen because they don't align with recurring revenue business models (the tail is wagging the dog). The DPP is not an evergreen program. It is a 6-month active intervention program with a 6-month reduced-dose maintenance phase. In addition to content providing information about nutrition, activity, and weight loss, the DPP focuses on behavior change techniques: goals and planning (e.g., goal setting, action planning, problem solving), feedback and monitoring (e.g., self-monitoring of behavior, feedback on behavior), social support, knowledge shaping (e.g., instruction on how to perform the behavior), natural consequences (e.g., information about health consequences), comparison of the behavior (e.g., demonstration of the behavior), repetition and substitution (e.g., behavior substitution), antecedents (e.g., restructuring the environment, avoiding cues for undesired behaviors), and self-belief (e.g., self-talk). It does this via a structured protocol.

In the original trial, participants were taught about the value of self-monitoring, how to self-monitor, and how to use that self-monitoring data for feedback and insights that further supported behavior change (either on their own or via the trained facilitator/coach). The DPP was designed for decreased engagement with the program over time. In reality, it assumed that participants were applying the skills learned and leveraging the many behavior change techniques. They might need a booster shot, but there was never an expectation that the dose delivered in the first month would still be needed months later.
At ScaleDown, we leveraged the existing weight loss science, which told us that users who failed to lose weight in the first two weeks were highly unlikely to meet the 6% weight loss goal. That made the first two weeks of engagement a lot more important to our clinical outcome than any other period. We knew from our foundational clinical trial that individuals who weighed in 7 days/week lost more weight than those who weighed in 5 days/week. That meant our product put a lot of emphasis on hitting 7 days/week engagement, especially in the early weeks. I suspect similar concepts hold for many other solutions focused on health behavior change.
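For illustration only, here is a sketch of what treating early engagement as a leading indicator could look like in code. The thresholds, field names, and data are all hypothetical, not ScaleDown's actual product logic: flag participants whose first two weeks (weigh-in frequency and early weight change) suggest they are unlikely to reach the program goal, so the product can intervene while it still matters.

```python
# A minimal sketch with assumed thresholds and field names (not any real product's rules):
# use first-two-week engagement and weight change as a leading indicator of the outcome.
from dataclasses import dataclass

@dataclass
class EarlyWindow:
    participant_id: str
    weigh_in_days_per_week: float   # average weigh-in days/week over weeks 1-2
    pct_weight_change: float        # weight change over weeks 1-2; negative = loss

def needs_outreach(window: EarlyWindow) -> bool:
    """Flag participants who look off track in the first two weeks.

    Assumed rules for this sketch: no weight loss yet, or weighing in well short
    of the 7 days/week target the program emphasizes early on.
    """
    return window.pct_weight_change >= 0.0 or window.weigh_in_days_per_week < 5.0

cohort = [
    EarlyWindow("p-001", 7.0, -1.2),   # on track: daily weigh-ins, losing weight
    EarlyWindow("p-002", 3.5, 0.3),    # off track: low engagement, no loss yet
    EarlyWindow("p-003", 6.0, -0.1),   # borderline but losing: not flagged
]

for window in cohort:
    if needs_outreach(window):
        print(f"{window.participant_id}: prioritize outreach in week 3")
```

The point isn't the specific cutoffs; it's that the engagement metric being tracked (weigh-in days in the first two weeks) is tied to a known predictor of the clinical outcome, rather than engagement for its own sake.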
We need to think more about what the necessary dose is in digital health and align the engagement metrics we use to evaluate solutions to that dose. As I mentioned last week, an effective loneliness solution should see engagement go to zero.