Talking CAHPS: Member identification

  • Written by Dan Ready

In this series of interviews, we look at the three key elements of a CAHPS improvement strategy. I sat down with Lisette Roman, Decision Point’s Director of Analytics, to learn how to improve CAHPS performance through data insights.

Excelling on the CAHPS Health Plan Survey has incredible benefits: It leads to improved Medicare Star scores, increased revenue, and increased enrollment for a health plan organization. But having the right information to improve performance on CAHPS can be challenging. Here at Decision Point, we have developed a data-driven solution to boost survey performance. In this series of interviews, we look at three key elements of a CAHPS improvement strategy. For this second edition, we focus on identifying the right members to target.

Last time, we discussed personas and how they can be used to improve plan performance on CAHPS. You began to walk us through how personas and predictive analytics work together to help an organization identify the right members to reach out to. Can you elaborate on that?

Sure. Last time we looked at two examples of a persona: “Victor” and “Angel”. A plan with 40,000 members may have 5,000 Victors, but not all of those Victors may be dissatisfied with your plan. Which of those Victors are dissatisfied? Which are dissatisfied to the point that they’ll respond negatively on CAHPS? Who’s likely to respond to a survey — should they even be targeted to complete the survey? In other words, which Victors do I need to reach out to?

We can answer these questions with predictive analytics, and we do, using technologies such as artificial intelligence and machine learning. The insights we gather from the output of these technologies enable us to help organizations build tailored outreaches to the members most likely to respond negatively to a CAHPS survey item. We can also identify the members most likely to respond positively to a CAHPS survey item, which is useful in other interesting ways. Where outreach previously felt like throwing darts in the dark, our health plan partners are now able to focus their resources on the members who will actually move CAHPS performance.

What if an organization is doing well on CAHPS overall but has just a couple of pain points – say on Rating of the Drug Plan and Getting Needed Prescriptions?

That’s a pretty typical scenario. What I haven’t explained yet is that the insights we’re generating are at the survey item level. We’re not just identifying members who are dissatisfied with the plan and likely to rate it poorly in general; we’re predicting the likely rating on each question of the survey for each member. Because the survey touches on so many different facets of the member experience, our predictions can reach that level of precision as well. At the end of the day, we’re helping our partners improve performance on specific CAHPS measures – identifying the subset of the population a plan should focus on to move two measures from 3 stars to 4 stars, for example.
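To make the item-level targeting idea concrete, here is a minimal sketch of how per-member, per-item predictions might be used to pick a focus population for a specific measure. The member IDs, item names, and probability scores are all invented for illustration; Decision Point’s actual models and data are not public.

```python
# Hypothetical sketch: given each member's predicted probability of giving a
# low rating on each CAHPS survey item, select the members to prioritize for
# outreach on one specific measure. All values below are invented.

def target_members(predictions, item, top_n):
    """Return the top_n member IDs most likely to rate `item` low."""
    scored = [(m, items[item]) for m, items in predictions.items() if item in items]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [member for member, _ in scored[:top_n]]

# member_id -> {survey_item: predicted probability of a low rating}
predictions = {
    "M001": {"rating_of_drug_plan": 0.82, "getting_needed_rx": 0.40},
    "M002": {"rating_of_drug_plan": 0.15, "getting_needed_rx": 0.91},
    "M003": {"rating_of_drug_plan": 0.67, "getting_needed_rx": 0.22},
}

print(target_members(predictions, "rating_of_drug_plan", 2))  # → ['M001', 'M003']
```

The same scoring can be repeated per measure, so a plan with pain points on two items gets two (possibly overlapping) outreach lists rather than one generic "dissatisfied members" list.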

From what I understand, predictive analytics requires historical data on actual past outcomes (such as a response to a survey item) to predict future outcomes. But CMS survey results are anonymous – so where are you getting the historical data? How do you actually do this?

That’s a great question. It’s true, it’s impossible to link actual CAHPS survey results from the official survey back to members to understand the types of members that are rating the plan low. In other words, what we provide is information that’s not otherwise available. We do that by using survey results from off-cycle (sometimes called “mock”) surveys that our health plan partners conduct for this purpose. Actual survey responses, just not from the official survey.

These mock survey results are identifiable rather than anonymous, so we can link respondents to the vast data sources I described last time (claims, pharmacy fills, census data, consumer data, etc.). This way, we know a lot about the members who rate certain CAHPS survey items low and certain CAHPS survey items high, and we train our predictive models on that data.
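The training step described here can be sketched very roughly as follows. This is a minimal frequency-based stand-in, assuming a single persona label as the only member attribute; a real pipeline would join far richer features (claims, pharmacy fills, census data) and fit an actual machine-learning model. All names and data below are invented.

```python
# Hypothetical sketch: link identified mock-survey responses to member
# attributes (here, just a persona label) and estimate the rate of low
# ratings per (persona, survey item). This frequency table stands in for
# a trained predictive model. All data is invented for illustration.

from collections import defaultdict

def train(mock_responses, personas):
    """mock_responses: (member_id, item, rating 1-10); personas: member_id -> persona."""
    counts = defaultdict(lambda: [0, 0])  # (persona, item) -> [low_count, total]
    for member_id, item, rating in mock_responses:
        key = (personas[member_id], item)
        counts[key][1] += 1
        if rating <= 6:  # treat 6 or below as a "low" rating (an assumption)
            counts[key][0] += 1
    return {key: low / total for key, (low, total) in counts.items()}

mock_responses = [
    ("M001", "rating_of_drug_plan", 4),
    ("M002", "rating_of_drug_plan", 9),
    ("M003", "rating_of_drug_plan", 5),
]
personas = {"M001": "Victor", "M002": "Angel", "M003": "Victor"}

model = train(mock_responses, personas)
print(model[("Victor", "rating_of_drug_plan")])  # → 1.0
```

The key point the sketch captures is the linkage: because mock-survey respondents are identifiable, their answers can be joined to member-level attributes, which is exactly what the anonymous official survey cannot provide.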

Once you have these insights into your population – personas and probabilities of survey response behavior – how do you leverage those insights into changing the predicted behavior?

This is about engaging the members that we’ve identified, in the right way at the right time. How do you time the outreach? What method are you using to contact the member – is this a phone call? Is it a text? Is it a mailer? Who should make that phone call? What should the text say? What’s the best design for that piece of mail? What if I don’t reach the member the first time? These are the details we need to get right.

In next week’s edition, Lisette and I cover a critical piece of the CAHPS pie: changing member perception and behavior.