The MATTER Health Podcast

Advancing Health Equity: Artificial Intelligence

MATTER Season 4 Episode 18

In this episode of Advancing Health Equity, Dr. Maia Hightower, CEO of Equality AI and former Chief Digital Transformation Officer at the University of Chicago Medicine, joins Steven Collens, CEO of MATTER, to explore the transformative role of AI in advancing health equity. With over 20 years of experience in medicine and healthcare leadership, Dr. Hightower shares her expert insights on how artificial intelligence can address disparities and improve patient outcomes. From her time practicing medicine to shaping digital health strategies, she provides a compelling perspective on leveraging technology to create a more equitable healthcare system. Don’t miss this thought-provoking discussion on innovation and inclusion.

About Advancing Health Equity
MATTER’s Advancing Health Equity podcast series focuses on unpacking the complexities of health inequities impacting the healthcare system and the health and well-being of individuals and their communities. These 20-30 minute interview-style sessions are meant to take quick dives into critical areas of health equity and answer questions like:

  • What does health equity mean today?
  • Where do current gaps exist in the various areas of healthcare?
  • Where do we see intersections in care?
  • How can technology and innovation be leveraged strategically to drive positive change?


For more information, visit matter.health and follow us on social:

LinkedIn @MATTER
Twitter @MATTERhealth
Instagram @matterhealth

Hello and welcome to MATTER’s Advancing Health Equity podcast. My name is Steven Collens. I'm the CEO of MATTER, a healthcare technology incubator and innovation hub with a mission to accelerate the pace of change of healthcare. One of the ways we do that is by giving a voice to extraordinary individuals who are on the forefront of improving health and healthcare. For this episode, I'm joined by Dr. Maia Hightower, the CEO of Equality AI and the former Chief Digital Transformation Officer of the University of Chicago Medicine. She is a true expert when it comes to AI and health equity, a perspective she has developed over 20 years in the healthcare industry, practicing medicine and helping to run health systems. I spoke with Maia in front of a virtual audience. We had a terrific conversation that I know you're going to enjoy.

With that, Maia, thank you so much for joining us today. I am looking forward to our conversation:

So happy to be here today and thank you for having me. I'm always excited to share how we can all work together to build an equitable future of healthcare that's AI enabled.

At least everyone in my world, and presumably in your world, is talking about artificial intelligence today. You've been in the healthcare arena for a number of years. You've practiced medicine, you've held a variety of C-level roles in different health systems, most recently, as I mentioned, at University of Chicago Medicine, but it's a relatively recent phenomenon. And just wondering, when did you first start hearing about and really thinking about AI in the context of healthcare?
Yeah, so within the context of healthcare, I would say my first awareness was probably in the 2015 to ‘18 timeframe. That really was when I was Chief Medical Information Officer at the University of Iowa. Probably the earliest example was natural language processing and the use of tools like Dragon, early iterations of Dragon, definitely AI enabled, although very clunky. There were certain accents Dragon didn't pick up; certain providers saw incredible productivity gains while others did not. The University of Iowa was also one of the first to adopt an AI-driven diabetic retinopathy solution. It actually was one of the faculty members at the University of Iowa that developed the technology behind it, one of the first solutions available. So those are probably the two early examples. And then the third example was a risk prediction model embedded within a population health platform, again in sort of the 2019 timeframe.

So we're talking five to eight years, really, and now it seems like it's everywhere. It seems like it's not really a fad. It seems like this is a thing, and healthcare leaders and entrepreneurs and investors and everybody else need to continue to figure it out as the technology itself continues to evolve fairly rapidly. So you've had these senior roles at Iowa, and you mentioned University of Utah and University of Chicago. I'm wondering how each of these systems approached it, and I'm not looking for naming names. What I'm looking for is, just as with most everything in healthcare, there seems to be a lot of variability across different systems in how they approach it. How did the approaches differ? And is that entirely based on the stage of the industry and the fact that it was progressing rapidly? Or did you see significant differences in the way leadership was approaching novel technologies that might yield interesting insights?

As some of the people in the audience are either with health systems just continuing to try to figure out how to think about and test and ingest new technologies. Or on the other side of the coin, they're entrepreneurs, they're innovators, and they're trying to figure out what's the best way to position and get in front of health system leaders.

Yeah, I would say that the pace of change definitely has taken off in the last two years, especially since ChatGPT came to the market. But when you think about the history of AI-driven solutions in healthcare, I like to think of them almost like a matrix. One axis is type: first, predictive AI, your classic machine learning or predictive models; second, generative AI, which is like ChatGPT; and third, computer vision, which has been pretty common in radiology. So those are three different types. And then on the other axis, you have what was internally developed by, say, your own IT team versus what is vendor driven. So when I think about the arc of AI in healthcare, early examples were really a combination of vendor-driven solutions and our academic faculty developing models and wanting to implement them within the electronic medical record.

When you think about those that were vendor driven, Epic definitely was, and continues to be, a big proponent of AI. They were the first from a vendor perspective to have a catalog of AI models that you can pick and choose from, well-known examples being their sepsis model, their falls model, their model for predicting no-shows. There was, and continues to be, a pretty healthy catalog of models, so that if you're a healthcare system that uses the Epic electronic medical record, you could start thinking about how you would evaluate the technology and then implement it. Epic also provided, and continues to provide, some basic tools where a health system can actually evaluate, or run in silent mode, whether or not a model works well with their local population. So I think that timeframe really has allowed many health systems to develop a process, although immature, around AI governance and around how to approach novel technology in general.
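To make the "run in silent mode" idea concrete, here is a minimal sketch of local, silent validation, assuming a pandas DataFrame of silently logged risk scores joined to later-observed outcomes. The column names and the 0.5 threshold are illustrative assumptions, not Epic's actual tooling.

```python
import pandas as pd
from sklearn.metrics import precision_score, recall_score, roc_auc_score

def silent_mode_report(df: pd.DataFrame) -> dict:
    """Summarize how a silently-run model performed on the local population.

    Assumes `risk_score` (the model's 0-1 output, logged without being shown
    to clinicians) and `outcome` (0/1, e.g. sepsis within 48 hours, observed
    after the fact). Both column names are illustrative.
    """
    preds = (df["risk_score"] >= 0.5).astype(int)  # assumed decision threshold
    return {
        "n_patients": len(df),
        "auroc": roc_auc_score(df["outcome"], df["risk_score"]),
        "precision": precision_score(df["outcome"], preds),
        "recall": recall_score(df["outcome"], preds),
    }

# Compare this local report against the vendor's published performance before
# deciding to surface the model's predictions in clinical workflows.
```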

Health systems generally have pretty good governance around, say, applications: how to decide whether to buy an application, build an application, implement an application, which stakeholders to involve, what the ROI is, how it aligns with strategic priorities. And generally there are pretty good rubrics out there on how to manage applications. I think one of the challenges with AI is that the technology is a little different than your traditional software, and some of the risks associated with AI are different than those of, say, implementing a telehealth solution. The challenges are a little bit different. And so that capability of identifying specifically AI risk, which you have to evaluate in addition to the standard risks associated with any application or new solution, is a challenge; health systems will continue to need to evolve their sophistication in managing AI.

But I would say, like you said, the pace has accelerated, but we actually have had some practice, whether it's been with Epic or with AI embedded within radiology systems. Philips and other major radiology vendors have been deploying AI embedded within radiology for quite some time, working with faculty members across the country on how to do that within imaging systems. Similarly, new startups often come from academic roots with, say, a sepsis model, kind of like your Bayesian type of model that started off at Hopkins, and now want to scale that across healthcare with their learnings on a sepsis model or different types of models. So we've had some practice, like I said, over the last seven, eight years. It's just that now, with generative AI, not all the lessons translate as they did for predictive models. We just need to continue to mature to be able to address generative AI.

So you mentioned governance, and I'm glad you did. The governance structure within health systems, and how they make decisions about any sort of new technology, seems very complicated from an outsider perspective, very challenging to navigate. And obviously they're not all the same, but there are certain commonalities: there's a committee that does this, and you need a champion over here, and you need the CIO to do blah, blah, blah. Has governance changed when it comes to artificial intelligence applications as compared to any new software application? Is there anything different about it, or is this just the next step in software, or technology generally, that can be used in the healthcare setting?

Yeah, so because of the unique risks associated with AI, AI governance structures have changed. And those governance structures are generally driven at a high level by, say, ISO, the international standards organization, which has an AI management framework; by NIST, with the NIST AI Risk Management Framework; and, within healthcare, by standards around AI lifecycle management that have been proposed by CHAI, the Coalition for Health AI, as well as by health systems that have been iterating on their AI governance structures for quite some time, including the University of Chicago. And so what's been recognized is that AI risk management, or AI lifecycle management, does take additional competencies above and beyond what typical application governance or security governance or tech governance has required in the past. You still need to do all those other things. And there are a lot of frameworks for, say, application governance or technology governance, and generally those frameworks include making sure solutions strategically align, HIPAA compliance, and so on.

And generally there's this checklist, even, like I said, rubrics, that help health systems wade through all of the information on whether or not to invest in a new digital technology. Most health systems, like you said, have some sort of committee one way or the other and have done that for many years. Unfortunately, they're not consistent across health systems, but within a health system they tend to be pretty standardized. How UChicago does it may look different than how the University of Utah does it, but once you understand how the University of Utah does it, you'll be pretty successful at the University of Utah, and similarly at the University of Chicago. So internally, they tend to have pretty consistent processes. The challenge now with AI is that these AI lifecycle management standards are new capabilities that you need above and beyond your application governance.

And it requires managing the new risks associated with AI: concerns about health inequity or algorithmic bias, and additional privacy and intellectual property concerns that didn't exist prior to generative AI, which machine learning models didn't have to deal with. There are some of the same concerns around translation and impact on the end user and stakeholder engagement, and more concerns around how you ensure that your stakeholders actually understand the technology well enough to be an appropriate copilot, or an appropriate stakeholder who's going to provide beneficial insights in decision making. So there's a whole AI knowledge gap, similar to digital literacy, but we can now call it AI literacy, that we haven't fully built competencies around.

Yeah, so thank you for that. I don't want to turn this into a discourse on governance and all the things, but I do think it's really important for the innovators who are listening, because it just comes up over and over and over again: how do I penetrate? How do I get in? So your insights are really interesting about the evolution of governance from software generally to incorporating artificial intelligence. Maybe to transition a little bit: the Gartner hype cycle is a thing used to describe the evolution of different types of technology. For those who aren't familiar with it, it starts with a trigger, the introduction of a new technology, and then everybody gets really excited about it and it hits this peak of inflated expectations. Then people sort of realize it's not ready yet, or there isn't anything there, or a variety of things, and the excitement level plunges into the trough of disillusionment. Then it gradually recovers and you get to a point where there's incorporation and productivity and utilization. Where do you think we are in that process when it comes to AI and healthcare?

Yeah, I think it depends on which type. If you think about predictive models like sepsis or falls, some of these pretty straightforward machine learning models, or definitely NLP and voice recognition and voice transcription, I think we're in a plateau where we're getting some return on investment. We actually can measure the improvement in quality; we can measure the productivity gains from voice to text. So certain AI technologies are in a place where they're providing some good return on investment. Others, though, are a little bit more challenging: how do we apply automated dictation, or voice to text that automatically generates a note? Some health systems have moved forward with solutions like Abridge, or Nuance's next iteration of OpenAI-enabled, ChatGPT-enabled voice to text embedded within the system. But it's not just voice to text, it's voice to text to note, to full-on note generation, or replies to patient emails, MyChart messages.

And I think that as we move forward, we're realizing the challenges of translation, AI translation at the point of user adoption. Definitely there's a lot of promise when it comes to generative AI specifically and computer vision specifically, and even for predictive models. But at the same time, there are so many challenges to translating that into meaningful return on investment. That is the trough of disillusionment, right? It's not as easy as picking up your iPhone and expecting it to do what you want it to do. There's this complexity in a healthcare system, the way that we deliver care, the way that we deliver products to our providers, to our care teams. That translation makes it so much more challenging in healthcare than in other industries or direct-to-consumer, where the consumer experience can be so slick, but the healthcare experience not so slick. So there is some disillusionment, and then of course sticker shock, because none of these are cheap.

What do you think has the most potential? If artificial intelligence achieves everything in the next decade that it could, what do you think the most important transformational impact will be on healthcare? And is that patient facing? And if so, when do you think patients will really start to see the day-to-day change and impact of AI in their own healthcare and medical experience?

Yeah, so unfortunately for patients, the first areas of opportunity really are probably on the provider and care team side, as well as operational use cases. And that's mostly in order to protect the safety of patients, because clinical care is high risk, whereas clinical documentation, coding and billing are lower risk. Healthcare being naturally hesitant, naturally risk averse, we're going to move forward more rapidly on these lower-risk use cases that have very immediate return on investment. And when you think about the amount of administrative burden that physicians and nurses and care teams face every day, healthcare is a very administratively burdened industry. Generally speaking, physicians did not go into healthcare in order to write notes. It's not like, ooh, the highlight of my day is to fill out a billing sheet and write all these notes and talk to the insurance company and try to demand that my patient get the care that they deserve and need because of my clinical opinion.

We did not go into healthcare for that reason. And yet healthcare is full of these administrative burdens, these administrative obstacles. So that likely is the first area of opportunity: to return the joy of medicine, the joy of practicing, the joy of a nurse, that one-on-one connection between a nurse and a patient. And patients may get that sense of, wow, my care providers are far less taxed by these administrative tasks. Maybe they can actually make eye contact instead of typing away at a computer. So that's actually a win-win for everyone, if we can alleviate some of that administrative burden and return that joy of practicing medicine, creating connection at the human level between a caregiver and individual patients. That already is sort of the first wave. Patients may have noticed that their providers are asking them, or letting them know, that conversations are going to be recorded, but that's in order to help with documentation so that they can be fully present in a care encounter. So that's probably going to be the first early benefit of AI. On the back end, the billing and coding compliance piece is huge when it comes to measuring quality of care. That measurement is very tedious. To be able to use AI to help extract what the quality of care actually is and what's appropriate for billing, and then automate some of that, is another area of opportunity that patients may not recognize, but again, it can decrease some of the overarching administrative burden of healthcare.

Now, what do you think is going to be the most meaningful application of AI 10 years out?

Yeah, I mean ultimately, 10 years out really is getting closer to personalized precision medicine. How quickly can we get to truly personalized precision medicine, where you as an individual, your data points, really are driving the AI behind the scenes that may be helping navigate your care? And I'm optimistic that there are research scientists really looking at how we apply AI to specific clinical use cases and trying to understand all the different variations that we as humans represent, in order to make sure that those models are precisely fine-tuned to each individual. Now, that's not an easy task, because we are so varied. Right now, the way that AI models are developed, there's a training dataset, and certain populations are underrepresented in that training dataset. Certain populations may not be well represented in the clinical question that's being asked. And so the risk right now is that instead of providing really well-defined precision models that work at the individual level, they may work at large population levels, in populations that are better represented in the training dataset, which may not reflect you as an individual. That I think is the biggest challenge. And my hope is that over the next 10 years, we are able to overcome that challenge and really develop models that work for everyone.

Yeah, so let's talk more about that. People have been talking for a while about how AI has the potential to exacerbate health disparities. And people have been saying that about software generally, and before that about every new technology: that there is not necessarily a risk, but maybe even a predisposition, that newer technologies are inherently going to be more available to people with means, to the wealthier health systems and providers, and that there's something almost structural about our healthcare system such that new technologies will be least available to underserved populations. First of all, do you agree with that? And assuming you do, is that the same narrative that applies to artificial intelligence? Or does AI, and you were just talking about training models, inject a different dimension, almost another thing that people really need to be cognizant of and thinking about if they're ultimately trying to build solutions that improve equity and not exacerbate inequity?

Yes. So I was presenting the flip version, right? The optimistic version, where we move forward with responsible AI, and we really do recognize the limitations of bias that may be part of the AI modeling process, the AI lifecycle, and we address that bias throughout and ultimately build more effective models for everyone. So yes, I was presenting that more optimistic view. But we really are at this crossroads where we can take existing AI technology and existing methods for bias detection and mitigation and go in one of two directions. We could ignore these methods and, just through ignorance, through the path of least resistance, exacerbate health inequities at scale across large swaths of populations. Or we can use what we have already learned and really address these biases, these inequities, and create better models that work for everyone. And given that crossroads, when I think about AI bias, and this isn't just Maia, this is actually in a lot of these standards like ISO and NIST, it's really about recognizing the AI lifecycle, and the AI lifecycle really starts with problem formulation.

Who gets to decide which problems are worthy of being addressed using AI? Is it the community? Is it patient groups? Is it rare disease groups? Or is it healthcare systems or large funding agencies that drive the agenda? So already, structurally, who drives the agenda is embedded with bias. Ideally we'd have representation of all stakeholders, so that there's a balance, so that various stakeholders are determining what's important: communities deciding what's important for their community, as well as healthcare systems deciding what's important for their strategic success, driving margin and mission, and similarly with funders and private industry. But right now there's an imbalance, right? It's not like patients or community groups are well represented in even deciding what problems are important to solve. And that's just the beginning. Then you have the training dataset: where does the data come from?

How is it collected? Healthcare data is a reflection of real-world data. We know that real-world data is messy; real-world data is fraught with the same biases that exist in the real world, in who's represented and who's underrepresented in a dataset. And what data is missing is not random, right? There are structural components that drive what's missing in a dataset. But there are ways that we can actually address that from a technical perspective: at least be transparent on who's in a training dataset, and then be able to adjust or make modifications based on, say, a training dataset that is incomplete. Can we augment that data in some way? Can we join, say, a UChicago, which has a demographic group that looks one way, with a Mayo that has a demographic group that looks a different way, so that complementing those two different populations makes a more generalizable dataset?

We don't know; there are ways that we can test that. So from the data to the actual modeling: within the modeling process, there are hundreds of decisions that are made by data scientists. Sometimes they're informed by clinical experts, and sometimes they aren't. And so we should have those clinical experts, those lived-experience experts, as part of the modeling process. They don't need to actually be the data scientists who model, but the data scientists should be working in these teams, so that broader knowledge is brought into the data science, the actual model development process. And then once you've built a model, there's evaluating that model. During model evaluation, you're checking: how does that model perform across different populations? Does it work for old people, for young people? Does it work for rural or urban populations? Does it work across payer types, across race, ethnicity, gender?

Say it's completely unbiased, it works for everyone. You then release it within a workflow, right? Is it put into an effective part of a workflow? It's almost like a last-mile kind of question. Does the person who is supposed to be engaging with this model have a clear pathway on what to do with that information, with that prediction, as well as training? And then you're monitoring for the outcomes, the intended outcome for that model. Let's say you've adjusted for all of these, so your model is performing just as you expected, and the people around it who are co-piloting or using the model are perfectly attuned to making good decisions, better decisions than they would alone without the model. You're monitoring for those outcomes, but at some point there's going to be degradation of the model. When do you retire that model? When do you decide that the model is no longer effective or needs to be retrained?

Demographics change, populations change. COVID showed us that there were a number of models that worked really well pre-COVID, but during COVID the data was just so very different. There was this huge data shift, and hence this need for ongoing monitoring, and then even retirement when a model no longer performs as expected. So across that whole lifecycle, there are biases that can be introduced throughout, and they can also be mitigated throughout, with both technical and social methods. There are a lot of publications out there that support this. And then as well, like I said, all the way at the very end, measure: is it closing health inequities? Is it working for everyone, or is it widening them?
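As a concrete illustration of two of the lifecycle checks just described, stratified evaluation ("does it work for everyone?") and monitoring for degradation, here is a minimal sketch in Python. The column names, group labels, and the 0.05 tolerance are illustrative assumptions, not a prescribed standard.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def stratified_auroc(df: pd.DataFrame, group_col: str) -> pd.Series:
    """AUROC per subgroup, e.g. group_col='payer' or 'age_band'.
    Assumes 0/1 labels in `outcome` and model scores in `risk_score`."""
    return pd.Series({
        group: roc_auc_score(g["outcome"], g["risk_score"])
        for group, g in df.groupby(group_col)
    })

def degradation_flag(baseline_auroc: float, recent: pd.DataFrame,
                     tolerance: float = 0.05) -> bool:
    """True if AUROC on recent data (say, the last quarter) fell more than
    `tolerance` below the validated baseline, a cue to retrain or retire."""
    recent_auroc = roc_auc_score(recent["outcome"], recent["risk_score"])
    return (baseline_auroc - recent_auroc) > tolerance

# One simple "works for everyone" check: the spread between the best- and
# worst-served subgroup.
# by_group = stratified_auroc(scored_df, "payer")
# gap = by_group.max() - by_group.min()
```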
Where does the organization you started, Equality AI, fit into that? You just laid out a really comprehensive view, so where do you fit?
Yes. So Equality AI, we're a quality assurance and compliance solution for healthcare. Specifically, we identify those main risk points for AI translation and help health systems navigate those risks through AI governance. Using a lot of these standard frameworks, whether it's NIST or, like I said, CHAI, there are a number of frameworks out there: are you appropriately managing the risk? Is the health system adopting these new capabilities to address AI-specific risk in addition to all the other risks of technology? So that's the first step. Then we help health systems measure and evaluate AI models, both predictive and generative AI, and we're working on computer vision, to actually measure or audit the effectiveness of those models across populations, often in alignment with emerging regulations. There are a number of emerging regulations that different HHS agencies have released, including ONC.

ONC, the Office of the National Coordinator for Health IT, was probably the first, specifically targeting electronic medical records: models deployed within electronic medical records have to have a certain amount of AI transparency. And again, it's often this measuring of the demographic representativeness of the training dataset, measuring the performance of a model stratified by demographic groups, measuring fairness as well, and then potentially mitigating bias that's detected. So that's required for ONC. And then there are slight variations of those, mostly aimed at non-discrimination, that CMS has released, and the FDA continues to evolve its approach as well.

Do most health systems do what you just described internally or do most of them have a partner like you on the outside that helps them navigate it?
Like all of healthcare, there are the haves and have-nots, right? If you look at a UChicago or a Stanford or a Mayo, you have these robust faculty departments in computational biomedicine. The talent is there, and often the talent is able to help the health system develop some pretty robust mechanisms for AI monitoring, for AI lifecycle management. Then you have your regular health system, your community health system, that does not have PhD data scientists on faculty. You have even well-funded community health systems that are generally not-for-profit and generally create a margin, maybe 4%, 6% every year. They're very reliable, they give good care, four stars, five stars CMS rated. But when you look at their actual technical capabilities, their IT workforce, they can't afford a PhD data scientist or AI research scientist. If you look at their typical IT team, they may have somebody with a master's degree, or an analyst who is just trying to understand how to build these capabilities.

And so for them, the gap is actually quite large. Even AI governance, and identifying what's important to measure, can be challenging to navigate. Then you go even further downstream to your FQHC, your federally qualified health center. Right now your FQHC is just trying to keep seeing patients; the IT team is one or two people, or maybe they're part of a connect program with their local academic medical center, but their ability to manage their IT and infrastructure is pretty limited, and their resources incredibly strapped. So I'd say, if we were to measure, maybe the top 5% are actually able to do some kind of full-on AI lifecycle management, and the rest are just trying to figure it out.
So I guess just to button that up: that layer with the PhDs is presumably a pretty thin layer of the healthcare system, and most organizations don't have those resources and capabilities.

That's right, most do not have them. And even in that thin layer, they are motivated not because they are employed by the healthcare system. These are faculty members who are motivated by scientific advancement in AI research. So even that alignment can be challenging, because these faculty members, just like any other scientists, are motivated not necessarily by the success of the healthcare system, but by advancing the cutting edge of AI research. And they too are sometimes underpaid compared to what they can get in the commercial environment. So a faculty member who is a seasoned AI research scientist probably has OpenAI and these really high-paying industry leaders knocking on their door very often, siphoning off the talent that we do have at that more altruistic faculty level. It's a complex interplay, and generally academia has more talent, but it can be a revolving door as they get lured from academia to industry.

You started this a year ago?

So I went full-time about a year and a half ago, but started it about two years ago.

Okay. Have the types of questions you're getting from health systems evolved? I guess what I really want to understand is: part of what you're doing is governance and structure, like infrastructure. Part of it, and you went deep there from my perspective, is training models and how you ensure that your models are not biased, or as unbiased as they can be given the data that you have to work with. And then there's another category that we haven't really touched on, which is outside solutions, whether they're from large companies like Epic or whether they're startups that are coming in. Especially a lot of the smaller companies are training the models themselves. So are you also playing some sort of role, either working directly with them, or working with the health systems who are trying to evaluate whether a solution is something they should move forward with or not?

So currently we predominantly work with health systems as well as pharma; for pharma specifically, it's use-case specific, around clinical trial enrollment and ensuring that that process is streamlined and equitable. The challenge with healthcare startups, and I'm always happy to speak with startups and help advise them, is that often they're resource constrained as well, and how much they are able to address this systematically, when there's a learning curve, I've found to be almost like an infinite void. All your time can be sucked up by startups that are trying to figure it out. The better use of our time as a company is to focus on healthcare systems, so that they're able to interrogate a potential partner, like a development partner, in a meaningful way, and then through that to guide the startup in an appropriate direction, versus us doing it directly. Of course, if we had infinite resources, we would love to address that market as well, but we're early on and we need to be very selective about who our ideal customer profile is. For right now, like I said, there's just too much variation with startups and the challenges that they face, and we haven't been able to scale our solution in an automated way. That is our goal in the future, so that it becomes more of a self-service model that startups can use.
But when it comes to those larger entities, enterprises, that's generally where we're focusing: a billion in revenue and above.

You just said something that I didn't realize: that your goal is to develop an automated product rather than more of a consulting services solution. Say more about that. What does that look like? What does that mean?

Yeah, so even though each health system or each entity is unique, the challenges they face are not. For any kind of technologist, the ideal is to take what is systematic and capture it within a technical solution, and then, for what is variable by organization, help them through their individual variation. Our goal, even when we're working with individual healthcare systems, is understanding what is a unique challenge for that healthcare system, and parsing that from what is a challenge that's replicable across healthcare systems, what is replicable across all, say, HHS agencies, and what is replicable across all of ISO. What is replicable, you put into the tool and automate. Eventually you get to, and this is our hypothesis or thesis, a core set of capabilities that all organizations are going to need.

If you are one of these three players: if you are a healthcare AI user, there's going to be a core set of responsibilities that you're going to need to address that's standard across all users. If you're an AI producer, there's a core set of capabilities you're going to need to have. And then if you're an AI distributor, and this is like the EU kind of language, there's a whole other set of responsibilities that you're going to need to handle, and that will be standard across distributors. That's our hypothesis. But of course, right now we've got to work within the variation, this is all very new, and then identify what's bespoke and what is scalable.

So interesting. So going back to startups: you very correctly observed that they're also cash strapped, almost all of them, almost by definition. What should they be doing if they want to build their solution and make it a positive force for reducing health inequities? What are your recommendations for how they should think about that, especially given that, almost by definition, they're resource constrained?

Yeah, so one is the FDA. They have a number of really good frameworks that can be applied, and that are transparent, currently covering medical devices as well as software external to medical devices. They have some good practices, and adopting those good practices will get you in the ballpark. Generally, those good practices are going to be: make sure you have a diverse dataset, and make sure you can actually produce, or be transparent about, the representativeness of your training dataset. The second is being able to show stratified performance of a model across demographic groups. Now, every system is going to want a different demographic group. A common standard actually is payer. Who would've thought, payer? But often, when we're thinking about quality measures in healthcare delivery, a lot of those health equity goals are payer stratified, not necessarily race or gender.

And so that actually is a pretty straightforward way to start: can you stratify by payer group? Or can you stratify by gender or protected classes? But I would say an easy one to start with is payer, and it's pretty agnostic, right? It's like, how is your Medicare population doing versus your Medicaid versus your commercial? So I think that if you have that within your training dataset, that can be a good place to start on stratified performance. And then the third is just being familiar with, and playing around with, what are called fairness metrics. So that's a good place to start, and again, ONC provides great guidance, and the FDA, definitely for one and two, training dataset representativeness as well as stratified performance, is looking for that information. From a navigating-healthcare-systems perspective: if you're selling to healthcare systems one at a time, understand that process. Like you mentioned, each has a committee, each has a way, and it's standardized within that institution but not across institutions. In navigating, at least with one health system, that champion ought to be able to help you navigate to what the decision-making process is.

Now, unfortunately, not all champions understand either. Within healthcare systems, there are plenty of champions who have no clue how decisions are made, but they may love your product. The closer a champion is to the executive suite, the more in the know they're going to be on how to navigate their own internal decision-making process, their own internal governance process. And of course, if it's a technology solution, IT is going to know best. Your CIO is going to have a pretty good idea of what is within discretion, what has to go through IT governance of some sort, and what that rubric is. Always start by asking a health system for their privacy and security standard. Most have it written down, and it's amazing how many healthcare startups haven't even gone through a SOC 2 audit, don't have a privacy policy, haven't even begun to think about HIPAA.

And in order to be somewhat successful in healthcare, that's all predictable. Every health system is going to have these standards, and this actually applies externally too: any large enterprise is going to have some sort of privacy and security policy. So the sooner you understand that, even if it's just one health system, getting that document and using it as a template can be incredibly helpful to understand where to spend your resources. Set aside 10, 20% of your resources to make sure you're becoming compliant and actually marketable with your MVP.
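Picking up the three suggestions from a couple of answers back, training-set representativeness, payer-stratified performance, and fairness metrics, here is a minimal sketch of what that reporting could look like. Column names and the two metrics chosen (selection rate for demographic parity, true-positive rate for equal opportunity) are illustrative assumptions, not regulatory requirements.

```python
import pandas as pd

def representativeness(train: pd.DataFrame, col: str = "payer") -> pd.Series:
    """Share of the training set per group: the transparency piece."""
    return train[col].value_counts(normalize=True)

def fairness_report(df: pd.DataFrame, group_col: str = "payer") -> pd.DataFrame:
    """Per-group selection rate and true-positive rate for binary predictions
    in `pred` against ground truth in `outcome` (both 0/1)."""
    rows = {}
    for group, g in df.groupby(group_col):
        positives = g[g["outcome"] == 1]
        rows[group] = {
            "selection_rate": g["pred"].mean(),  # demographic parity view
            "tpr": positives["pred"].mean(),     # equal opportunity view
        }
    return pd.DataFrame.from_dict(rows, orient="index")

# A simple fairness gap is the max-min spread of each column, e.g. across
# Medicare, Medicaid, and commercial payer groups:
# report = fairness_report(scored_df)
# gaps = report.max() - report.min()
```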

Yeah.

That was a lot.

I mean, I think what you just said is: do your homework. There are publicly available and very easily accessible places to go, but it doesn't just happen. There's a lot of work that has to be done by every single company that's doing this, and it's kind of the reality of what—

The reality of what it is. And building technology is fun, but that's always the fun part, right? It's so easy to build technology, it really is, but that's not the hard part. The hard part is really in that product-market fit: making sure that your product actually is in demand, but also that you can navigate a successful sales cycle.

Any tips for, again, resource-constrained startups as they're going through this, doing their homework? Ways to do it so that it's auditable when they go into a customer, it's just boom, boom, boom, and they don't have to reinvent the story every time as to how they should think about it or approach it?
Yeah, so I think, again, even if it's your first customer, with that first customer, try to figure out what their security and privacy policy is. That actually is a good place to start with some of the automated solutions, because there are a number of automated compliance solutions out there, and the challenge with those is that their library or catalog is so big that if you're not selective in which compliances to pursue, you can end up pursuing a number of compliance pathways that have nothing to do with your current customer. So definitely, on demand, customer by customer, identify what they require and then build out that compliance pathway. And then that rubric, like I said: sometimes you've passed compliance, but now you go through this rubric that's strategic alignment, and for how you score, the insights of your champion can definitely help you prepare for that type of rubric scoring mechanism. But generally it's going to be driven by strategic alignment.

There are certain parts of strategic alignment that are the same across healthcare systems: quality, cost savings, driving revenue. But certain solutions are very much targeted toward what's strategically important for a particular healthcare system. So you may not score on the rubric exactly the same for system A, where they're really concerned about driving, say, a new market, and hence growth is going to be heavily weighted in their rubric, whereas another may be like, oh my gosh, quality is concerning, and then quality may be the most heavily weighted item in their rubric. So only your champions are going to know what's driving a rubric, even though, quality, cost, growth, the rubric itself is pretty mission aligned.

Got it. Where can people go to learn more about Equality AI?

Well, of course, equalityai.com.

There it is. Maia, thank you so much for joining us, for sharing your perspective, your wisdom when it comes to these issues. Really appreciate it.

Absolutely. Well, thank you so much for having me. This has been absolute fun, and I look forward to the next time.

Thank you all for listening to my conversation with Dr. Hightower. This podcast is a MATTER production. Learn more about how MATTER is accelerating the pace of change of healthcare at matter.health. If you enjoyed the conversation, you can find past episodes of Advancing Health Equity and all of MATTER’s programs on our podcast, wherever you subscribe. I'm Steven Collens. Thanks for listening.