I’m Rajaie Batniji, Chief Health Officer at Collective Health, and I’m here with Sanjay Basu, Director of Research and Analytics. Today we’re talking about machine learning. We’re getting used to hearing the term “machine learning” thrown around everywhere, but it’s hard to separate what’s real from the hype. Now, Sanjay, you work quite a bit on our machine learning efforts here at Collective Health. Why should employers care?

Well, I think there’s a lot of rhetoric in the field and there’s a lot of hype. And unfortunately we’re seeing now that a lot of the hype is not materializing in the right way. For example, there was a recent machine learning article about pneumonia: could you predict whether people with asthma need more care support than people without asthma? The machine learning algorithm actually said people with asthma need less support, which was very counterintuitive. It turns out that the people building the algorithm, while great computer scientists, didn’t have the expertise in the system to realize that the hospitals they were studying automatically triaged people with asthma to the intensive care unit. So they were essentially codifying a triage protocol and drawing a wrong inference from it. We’re seeing that a lot in the machine learning literature.

You work quite a bit on how we’re building machine learning into our core products here at Collective Health. Can you talk a little bit about how we’re using the technology?

Yeah. We backed up a little bit and said, “Let’s look at what the actual problem is,” rather than looking at a new technology and just trying to find something to stick it on. We instead asked, “What are the core problems our clients and members are facing?” One of the big ones right now is actually privacy, since we deal with healthcare data. If we were just to aggregate it all and run all these algorithms on it, that would pose considerable risks. So we turned the problem on its head.
Can we, instead of using machine learning just to mine people’s data, use a machine learner to point out where people’s vulnerabilities are in terms of their privacy? Can we use it as a kind of sniffer, a bloodhound, to look at all the different players in the healthcare system, from the hospitals to the doctor’s office and so on, and find where we can strengthen the privacy barriers, using the algorithm for increased protection?

The second thing we’re doing is acting as a kind of objective third-party reviewer. On one hand, there are the clients and members. On the other hand, there are these program partners offering behavioral health services, fertility, onsite medical clinics, and so on. They all make various claims, and it’s hard to validate those claims, but we’re the ones who actually get to see the claims data and can determine whether or not they’re doing a proper return-on-investment calculation. If you’re looking at a fertility program, it’s pretty easy to conclude that fertility costs more money. But what you actually want to know, after you add in your healthcare expertise, knowing that these people might be older or might have comorbid conditions, is whether the program is really providing a cost-effective solution and using the best protocols, not just slicing and dicing the data and saying people with fertility services cost more.

Thank you, Sanjay. And thanks for making machine learning a lot more than just a buzzword. The latest episode and article of The Breakdown is on our website at collectivehealth.com/insights. We hope you’ll tune in and learn more.