
How AI is Reshaping Healthcare with Empathy | Emily Olson of Braided Health | Brainiac Blueprint Podcast

  • Writer: Kyle Lambert
  • Aug 15
  • 30 min read

On this episode of The Brainiac Blueprint Podcast by Left Brain AI, we sit down with Emily Olson, Head of Clinical Operations at Braided Health, to explore how AI is enhancing—not replacing—human connection in healthcare.

With deep experience in care model design and behavioral health, Emily shares how her team is solving complex healthcare challenges by aligning technology, empathy, and operational excellence. From reducing emotional labor to improving care coordination, Emily discusses how AI can unlock scalable, compassionate care in ways most organizations aren’t thinking about yet.

Whether you’re in health tech, a startup founder, or a care provider looking for a fresh perspective—this one’s for you.


Full transcript below.


🎧 Watch or listen to The Brainiac Blueprint Podcast 

Apple Podcasts: [coming soon]


In this episode, we discuss: 

00:00 - Intro 

01:20 - What is Braided Health? 

03:30 - AI as a tool for systemic change 

05:00 - How to win over stakeholders like CFOs 

07:30 - Ego, emotion & resistance to change in healthcare 

10:00 - Using AI to power empathy & personalization 

13:20 - Finding joy in healthcare chaos 

18:00 - The “invisible work” that breaks healthcare 

23:30 - Automating workflows to prevent burnout 

28:00 - Real-time AI in urgent clinical scenarios 

31:30 - Cultural sensitivity vs cultural assumptions 

35:00 - How to reduce stigma in care 

39:15 - The 3 essential patient questions 

44:25 - Emily’s dream AI assistant 

45:10 - Where healthcare + AI is going next 

46:15 - The role of AI in mental health 

48:00 - Final thoughts & how to connect




📲 Connect with Left Brain AI

Instagram: left.brain.ai

X (Twitter): left_brain_ai

LinkedIn: left-brain-ai



📣 Subscribe & Share If this episode inspired you, taught you something new, or gave you a different lens on AI in healthcare: share it, leave a comment, or tag us.


Let’s help more people stay brilliant.


Know someone who would make a great guest?



Episode Full Transcript:


Kyle: Welcome, everyone, to the Brainiac Blueprint. Today's podcast is part of our health care series. We are being joined by Emily Olson. Emily, how are you today?


Emily: Doing great. Thanks. How are you?


Kyle: Doing great. Ready to get this conversation started.


Kyle: So just so everybody knows, you are the head of clinical operations at Braided Health. Would you mind telling us a little bit about Braided Health and what the head of clinical operations does there? 


Emily: Sure. So Braided Health is a tech-enabled services company. For those who aren't in the industry, what that means is that we're combining knowledgeable people and experts with innovative technology and AI to create a better experience for everyone, all stakeholders. That means the insurers, the patients, as well as the people who are caring for those patients. There's a really beautiful place where all of that can intersect, and it's a triple-win type of situation.


Kyle: I love that. I think it's easy to focus on just one of those, you know, making sure the patient's okay or like the hospital or your system, whatever it is, is okay. But it's nice to kind of think about that, that full picture there.


Emily: Yeah. And not only is it nice, but I think it's actually necessary, because what we find is that although these types of things can look great on paper when they're sort of unidirectional, you often hit a wall when the team can't use the technology or the patients aren't interested or getting what they need. So while it can be very enticing to focus on that little slice, what we quickly realized is that that's the origination of a lot of the problems that exist today. So it's really necessary, although it does take longer to accomplish, for sure.


Kyle: 100%. But I feel like it's going to be more sustainable, right? If you're thinking about all the angles, you'll maybe have little obstacles to overcome versus some major existential thing like, "Oh, God, we're not taking care of this whole part over here."


Emily: Exactly. It's a long-term play for sure. 


Kyle: Awesome. Awesome. I love that. Well, I love that you're thinking about AI and that you guys are tech-enabled and everything; I think that leads perfectly into our podcast and our conversation today. So, very open-ended question, Emily: if you could just say, "I think AI is..." and then fill in the blank for me.


Emily: Sure. I think AI is going to be an essential part of making amends for some of the wrongs that have been occurring in our health system for a long time. And I say that very carefully because I certainly don't believe that that means we should have AI taking the places of nurses and doctors and social workers, et cetera. But I do think there's a lot of really wonderful things that AI can help us do and do faster and with less emotional labor than it would otherwise take. And so I think it's a key ingredient, but I do not think it's the whole recipe. 


Kyle: I love that. My whole mantra for all of my AI work is human empowerment versus human replacement. So I think that, you know, I think we're kind of on the same wavelength there. It's an appropriate mantra to have, I think.


Emily: Absolutely. Love that.


Kyle: All right. So before we jump into a lot of AI-specific stuff, you and I exchanged some emails, and I think some of the information in there was very interesting. I liked a lot of the wording that you used, so I want to jump into some of those topics; I think they'll lead the conversation nicely into AI and healthcare. One of the things you had mentioned was being a professional translator of "this makes sense clinically" into "yes, the CFO will sign off on it." As a marketer and a service provider, I feel like I have to, I think the kids are calling it "code switch," a little bit, depending on who you're talking to, so that you can speak their language and really get your message across. So I was curious if you have a secret sauce specifically for getting that CFO to say yes, or even just to not say no immediately.


Emily: Yeah. Interestingly, I have a very specific approach that I use when thinking about designing a care model: who are the stakeholders, and what are their inherent interests? Instead of trying to convince someone about the things I care about, either as a clinician or as someone who's pitching to, say, the CFO in this example, I think it's really important to think about what they already care about and how the things I want to accomplish overlap with that. In the psychiatric world, which is where I come from, that's called motivational interviewing, and I use it in my daily life and across my work, both in care model design and when I'm pitching to an audience. So, standing in the CFO's shoes, what's important to them is usually budget and making money, or saving money; those can be opposite sides of the same coin. Then I think about what's important to me and important to a member or a patient, and where the intersection is. I find that's the easiest approach.

Early in my career, I would often try to change the script more dramatically, but that kind of backed me into a corner, because what if you're talking to an audience that has all of those types of people, or writing a website that people with many different perspectives are reading? So finding something that's true and has legs for each of those different stakeholders has really been the approach that I've taken. It's not perfect, because sometimes I use language that isn't immediately accurate or recognizable to everyone. But, going back to that idea of sustainability, it gives me principles so that I don't have to worry about who exactly is in the audience; I know there's a piece in there for everyone and something that's going to be relatable.


Kyle: That makes perfect sense. And I think, like you mentioned, you've got to find that sweet spot, who they are, what they care about, and what you're going to be doing to help with their goals as well. As someone who's worked in health care, just from a marketing standpoint, I certainly don't know the ins and outs like someone like you might. But I know that with some health care, there's a lot of ego that can be involved, whether that's because people have been very successful in treating and caring, or they've been building their business and everything. So I was wondering how you deal with the emotion of it all. Because again, you can go to a CFO and say, "Hey, we're going to make you X dollars," and that speaks, right? That's very clear. But there's also an emotional level to this all, and I'm wondering how you might deal with that a little bit.


Emily: Yeah, so I think there's emotion on both sides. I am very invested in what we're doing, and I think it's a really important piece of making moves and change in the health care system, again, to this idea where everyone wins. So it's hard at times not to take it personally. But I think the other thing is when you are invested emotionally in something that requires a lot of change, that can be really difficult for those that have to change for a number of reasons. A, anytime we're saying what's happening is not working, not only are we talking systematically, but we're also talking about everyone who's participating in that system.

Even when people recognize— and I saw this a lot in my work in value-based care— that it makes sense to align finances with outcomes, it also means a very material shift in paradigm and how people think about the work they're doing and their place in it, and maybe realizing something wasn't the best. I often saw that the issue wasn't ego in the sense of "I think I'm best," but ego as in "Who am I? Who am I as a core person?" Especially in caregiving professions, people usually enter because they want a better experience for others. They want to do something better for humankind. So coming at them with good intention still requires a lot of inward reflection and willingness to critically look at what they did before they had this information and what they could do differently now that they have it.

Then there's the second piece, which is habit. Someone may have been doing something for 35 years, commended for it the whole time, and now they have to change it. That is very difficult as well. I think those are probably the two biggest places I see that concept of ego come into play.

Kyle: I love that. And I think habit, change, and the whole paradigm shift are so important in a lot of ways. Especially when you're bringing in something like AI, which is obviously a huge paradigm shift but comes with a pre-built reputation, right? People have an idea of what this means. Whether it's true or not, they just have this idea. So I'm wondering if, regardless of the stakeholder, when you talk to them about AI or some new initiative, you've ever been laughed out of the room or gotten an obnoxious reaction from anybody in the past.


Emily: Not laughed out of the room. I think the most interesting response I got was, “Well, this is too good. We don't want all this fancy stuff. We just want to do the most basic thing,” which I thought was really interesting. I think there are a couple of pieces specifically related to AI and the idea that if we let a little bit of this in, is it going to come for our jobs and are we going to be replaced? That’s a real and understandable fear because there’s been a lot of projection about that.

It’s really important for people who are using AI to understand and develop a firm set of principles about how they’re going to use it, what the limits are, and where they stand. You mentioned this earlier, but something we say a lot is, “We want to use AI to enhance, not replace, the human connection.”

When we think about AI, we often think about data and analysis, which is great, but we’re also using it to say, “Hey, remember, Miss Jones has four cats. Those cats are like children to her. The most important thing in her life is staying home so she can take care of them.” Or, “Remember two weeks ago when someone from our team had an interaction with her, she talked about her grandson’s birthday party and how important it was for her to attend because she was in the hospital last year.”

Not only do those things make people feel good, but they also allow us to tap into the intrinsic motivation I mentioned before. The CFO may want Miss Jones to stay out of the hospital because it’s incredibly expensive. Miss Jones wants to stay out of the hospital because she wants to stay home and take care of her fur babies. If you take a step back, you realize everyone wants Miss Jones to stay at home. That’s the exciting part, and I think AI is a novel way we’re approaching it outside of the typical way we think about it.

Kyle: I love that. You know, it's such a simple concept, having some empathy and saying, "I know you want to spend time with your cat, so this is how we're going to help you." That resonates so much. It shouldn't be that complicated, right? But...


Emily: Right. And it's not. But it's just, again, the opposite of how we're all trained. As healthcare providers, we say what the goals are. We're supposed to know the answers. We're trained to address symptoms, not necessarily people, all the time. So it makes a lot of sense as to why it's so hard, even though in concept it's very simple.


Kyle: Absolutely. Absolutely. Well, I think that's all a really nice segue into the next core concept I wanted to discuss with you. During our exchange, you had mentioned that you find some joy in untangling the delightful chaos, and I found that saying so interesting. It really stuck out to me. In general, I like a lot of the words that you use; they feel very purposeful. To me, it felt like there's a bit of joy in the seriousness and the craziness of it all. So I'm wondering if this is an intentional mindset that you have, whether you have ways to control the emotion that comes with chaos in healthcare, and how you handle all the nuttiness that might be there.


Emily: Yeah. So I think it's a little of column A and a little of column B. I think by personality, I am a dopamine chaser. There's a part of me that’s drawn to that. I enjoy things like escape rooms, puzzles, or doing arts and crafts with my hands. I literally like looking at a pile of yarn, for instance, and being able to create a scarf out of it, or figuring out the ins and outs of the escape room.

I think that actually translates really well. I talk a lot about the startup world because people like me tend to be drawn to it. There's a huge realm of possibility. There’s no guidebook for it, and it’s where innovation can occur. But to your point, on the other side of that, it’s unpredictable for sure. You have to be incredibly agile and dynamic, and also know when to hold tight and not pivot because you don’t yet have enough information.

There’s no single way to deal with that stress. Having a supportive social circle of family and friends, and working with people who are just as devoted and maybe as intense as you are, is really important. You tend to find those types of people in the startup space. It’s important to keep that mindset and also have deep-rooted conviction about what you’re doing. That can be a blessing and a curse because hits fall harder when you’re that convicted, but you have to have that anchoring to help you ride the waves that occur in this type of space and in health in general, frankly.



Kyle: What is your conviction? What is your why? Do you have a specific... Are you just drawn to helping people? Or is there...


Emily: No, I mean, I probably should say that, right? I'm a nurse. That should be my answer. I do love helping people, that’s true, but I think I like the challenge of it and knowing there is a better way—and that I’ve worked in spaces where it has worked financially too. Having that history, this is my third serial startup. I worked in a very successful one right out of my first experience and saw that it is possible to create a space where everyone wins.

We don’t like to talk about money in mission-driven work, but at the end of the day, it’s how we can do the work, right? None of this happens for free. You have to flip the script and understand how to make money in a way that doesn’t feel wrong when you go to bed at night. I think that’s really crucial as well.

Kyle: I always tell my clients, whether we're talking about marketing or AI—especially with marketing—that you can't go to the bank with a bag full of leads, right? You need to be able to treat these people and actually have dollars to continue helping them. It’s an uncomfortable conversation, but it’s a necessary evil to be able to do the good you want to do.


Emily: Absolutely. 


Kyle: Cool. That's some great stuff. I love that we are talking about AI, but we’re talking about people a lot and making sure that we’re staying focused there. I think it's a great theme. You had originally mentioned, I think you called it the invisible work.


You know, the emotional labor, the systems, and some of the unpaid roles involved in keeping health care together. What’s some of the behind-the-scenes work that a regular individual like myself wouldn’t know about, that you want to shine a light on, which helps you and the patients or the business, whatever it may be?

Emily: Yeah, so I originally came across this concept in an article that talked about how women often take on a lot of emotional labor. Not necessarily doing things, but planning and being aware, like when kids need a doctor's appointment or when bills are due. These are things that are not outwardly tangible or visible but take a toll on stress. They might be the things causing insomnia or anxiety, making you irritable.

I loved that concept because in my own marriage, if you looked at the chores done in the house, it would be easy to say, “Oh, my husband does all of that.” But then I thought, “What am I doing?” Reflecting on that really came into play when I thought about the teams we’re supporting now and have in the past.

It’s not just, “I have to take care of this patient.” That’s what most care providers know how to do. It’s also, “I have to remember what insurance they have, what benefits they have, how much they’ve spent toward a threshold if there is one, what their personal goals and philosophies are, or what policy or workflow changed last week.” At the end of the day, it’s not possible to hold all that in your head.

That’s where we saw a unique opportunity to take that burden, because that burden isn’t actually helping. AI can be much more effective in addressing those things by being a virtual memory vault, automatically redirecting workflows when they change, and freeing the care team member to focus on what is most valuable: building relationships, making decisions, and placing value on different outcomes. None of those are things AI is the best at today.

This is another place where we create a win-win and give people the ability to leave work at the end of the day and enjoy other parts of their lives. That’s something we’re all chasing, but this is a step toward making it a reality. I no longer have to keep 100,000 sticky notes because the system will alert me when something’s falling through the cracks and even alert me to people I didn’t know were falling through the cracks by identifying patterns in data I would never have the time or skill set as a nurse to analyze over 10 years.

That’s why this has been a hallmark for me, something really important to me both as a woman and as a former case manager. It resonated very strongly with me.
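To make the "virtual memory vault" idea concrete, here is a minimal sketch of what a falling-through-the-cracks alert might look like. It is purely illustrative, not Braided Health's platform; the fields, the 14-day threshold, and the Ms. Jones example are assumptions for demonstration.

```python
# Illustrative sketch only -- not Braided Health's actual system.
# Flags patients who have gone quiet or have unresolved work, and
# carries the human context (cats, birthdays) alongside the alert.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class PatientRecord:
    name: str
    last_contact: date
    open_tasks: int
    context: str  # the "sticky note" details a care team wants surfaced

def falling_through_cracks(records, max_gap_days=14):
    """Return alert strings for anyone overdue for contact or with open tasks."""
    cutoff = date.today() - timedelta(days=max_gap_days)
    return [
        f"Check in with {r.name}: last contact {r.last_contact}, "
        f"{r.open_tasks} open task(s). Context: {r.context}"
        for r in records
        if r.last_contact < cutoff or r.open_tasks > 0
    ]

if __name__ == "__main__":
    roster = [PatientRecord("Ms. Jones", date(2025, 6, 1), 2,
                            "four cats are like children; goal is staying home")]
    for alert in falling_through_cracks(roster):
        print(alert)
```

A real system would learn these patterns from years of data rather than hard-coding a 14-day rule, which is exactly the analysis Emily says no individual nurse has the time or skill set to do.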

 


Kyle: I mean, nobody is successful without other people, right? Everybody needs to be working together in lockstep, building each other up. I love that you're thinking about that. I'm curious if you've implemented any models or workflows in your day-to-day that help you accomplish that or at least mitigate some of the issues that might come.


Emily: Yeah. So just things like having an AI note taker that will ping me about follow-ups so I can really be more present in a conversation, yet not have to worry like, did I miss something? Am I going to do what I said I was going to do? And that actually became sort of the model for a lot of the work that we're doing in our platform as well. It is saying, okay, if you're already doing the work, can't we just create a system that captures that without you having to go back and then remember what you did, appropriately document, make sure you're meeting all the criteria of the different credentialing bodies or regulatory bodies, etc.

So that's a very simple example, I think, of using an AI note taker. And then I do a lot of bullet journaling as well, just lists and things that release the need for me to try to keep everything in my mind at once.

Kyle: That's great. I love that. Not to plug Google here any more than they need to be plugged, but if you haven't tried NotebookLM, give that a shot. It's a great way to throw in all of your notes, and it'll digest things for you. It's very cool.


Emily: Yeah, I have not. I've used—I'll just plug it because I have no association with them—but Read AI is something that I was introduced to through Google Meets. That was great and planted the seed that I've run with and our product team has run with.


Kyle: I have Read hooked up to my Google Meet as well. Yeah, we're on Zoom, so we don't get it today, but yeah.

Cool. So I want to take a step back from your day-to-day in terms of normal operations and focus a little on care and the coordination of care. I think everybody always has an immediate reaction, you know, that healthcare sucks or that it helped them out some; there's an immediate reaction in a lot of ways. But I think everybody can agree that everything can always be better in some way, shape, or form. So I'm curious if you have any tangible examples of how you think coordination of care might be improved in the near future, especially by implementing some of the concepts we've talked about with AI and automation.

Emily: Yeah, I think the biggest barrier to that today is siloed data. Coordination by nature requires that you understand what's happening on the other side of the fence. In my opinion, our biggest barrier today is no longer that we don't have the data we need; it's that if you don't have the data next to the other data, next to the other data, you're not getting the full picture. We often end up trying to manage someone while seeing only half the picture, or seeing it very blurry. You have a general idea that there's a human here, but what makes that human unique, and therefore how we need to address whatever issues they're dealing with, is where we need to get to. So the biggest thing is: how do we get all that data in one place? And then how do we make that data meaningful, meaning once we have it, how do we make sure that however we're presenting it is actually actionable?

So just being aware of the problem is not enough to fix it. I can use an example: we know someone went into the hospital for fluid overload. Right. But there are ten different ways that fluid overload can happen. It could be poor diet, or not managing hydration adequately. It could be disease progression, for which there are a million other reasons. It could be not taking your medication in the right way. And what we do about that really depends on understanding the "how come" behind it. So that's where connecting all those pieces comes in: having the data, using AI and analytics to create the "what's happening," "how come," and "what now" type answers, and then surfacing that to the care team. There needs to be a single thread that connects all of that. And most of the time you don't have all of those players in the same space working on the same platform.

So that interoperability is not only that we're exchanging information, but that we're exchanging the right information: the recipient is getting the right amount of information, specifically the information they need to impact care within their licensure. A lot of times we will fax 40 pages of narrative notes to the PCP and check the box in terms of care coordination. But just like everyone else, PCPs are overwhelmed today, and if they have to go hunting for 25 minutes, which they don't have, to find that one nugget, it's not worth their time. So we need to say: here's half a page that's digestible in 30 seconds, and it's going to tell you exactly what you need to know, like they're not taking their medication because of a side effect, or that this may be a conversation you want to have.

And again, with all of this, we are really focused on surfacing information so humans can make decisions, versus making decisions for the humans. So it's more "Hey, did you notice this?", "Were you aware of this?", or "Did you forget this?" types of nudges that we're using in our system to really connect that and make sure that everyone can see the full picture and, more specifically, knows what their role in meeting the outcome is.
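As a thought experiment on the "half a page in 30 seconds" hand-off Emily describes, here's one way a team might structure the summarization request. The prompt wording is our own assumption, and the model call is left abstract; nothing here is a specific vendor API or Braided Health's implementation.

```python
# Hypothetical prompt builder for a PCP hand-off note. The caller sends
# the returned string to whatever LLM their stack uses; we don't assume one.

HANDOFF_PROMPT = """You are preparing a hand-off note for a primary care
physician who has 30 seconds to read it. From the notes below, extract only:
1. The single most important clinical issue right now.
2. The likely "how come" behind it (cost, side effects, access, etc.).
3. One suggested conversation or action for the PCP.
Keep the whole note under half a page.

Notes:
{notes}
"""

def build_pcp_handoff(narrative_notes: str) -> str:
    """Turn pages of narrative notes into a summarization request."""
    return HANDOFF_PROMPT.format(notes=narrative_notes)

# Example usage with invented notes:
prompt = build_pcp_handoff(
    "Pt readmitted for fluid overload. Reports skipping furosemide due to "
    "urinary urgency at work; copay also increased last month."
)
```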

Kyle: That's great. That's great. I can already see how it can all play together and make things work a little bit better. One of my immediate thoughts, though, and I'm curious, this is a bit of a specific segment, I guess, in health care. You know, it's one thing to have a platform or an app, whatever it is, that has this information and is sent to the provider and is empowering them throughout the process of care. I'm curious how you think things could be improved from a high-stress, in-the-moment situation. I'm thinking like surgery or ER, ICU type of moment. Have you seen anything that might be included in that when it's just like, you know, I need to make this decision now? I have five seconds or 10 seconds. How can AI help me to make that quicker? I'm wondering if you have any thoughts or instances around that.


Emily: Yeah, so I'm having a little connectivity issue. I think I heard the question, but please redirect me if that's an error. What I heard is: a lot of times we don't have time to pore over things, or there are decisions that need to be made in real time. I think the biggest step forward that I see right now is the ability to take a real-time conversation, digitize it, send it through an LLM, and get back responses in 30 seconds or less.

So, you know, if someone's mentioning some different symptoms of depression, that might cue, “Hey, it might be a good idea to do a depression screening.” Or if they mentioned that they tripped and fell outside MoneyGram, that might be an opportunity to think about, “Do they have some financial instability?” Even further than that, AI can help us think outside the box. Maybe helping someone get a bank account isn't within the normal realm of interventions. However, it could allow them to take the fixed income they were siphoning a portion of to pay for check cashing and instead use it to afford the medications they weren’t taking.

All of these have important effects, to your point, that need to happen in real time. There’s also the piece of behind-the-scenes prompts, where we know when this action happens, another should follow. It can prompt in that way as well, which is not dependent on transcription but can be rules-based.

There are all sorts of ways people can be helped in the moment. I think the biggest thing is that there are a million different prompts that could happen, so keeping it narrow enough that we're not flooding people with a thousand prompts per minute that just get lost in the ether is key. We need to think about the main things we want to pull out, bring people's attention to them, and direct the model so that it's digestible and not distracting, which would otherwise defeat the purpose of these types of tools. But I guess that's my short answer.
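For a flavor of the rules-based side of this, here is a toy sketch of a nudge layer that maps phrases in a live transcript to a small number of prompts. The triggers, suggestions, and the two-nudge cap are invented for illustration; they are not clinical guidance or Braided Health's rule set.

```python
# Toy rules-based nudge engine: a deliberately narrow trigger list and a
# per-segment cap so clinicians aren't flooded with prompts.

NUDGE_RULES = {
    "feeling down": "Consider a depression screening.",
    "can't sleep": "Consider a depression screening.",
    "check cashing": "Possible financial instability; consider a social-work referral.",
    "tripped and fell": "Consider a fall-risk assessment.",
}

MAX_NUDGES_PER_SEGMENT = 2  # keep it digestible, not distracting

def nudges_for_segment(transcript_segment: str) -> list[str]:
    """Return a short, de-duplicated list of nudges for one transcript chunk."""
    text = transcript_segment.lower()
    hits: list[str] = []
    for trigger, suggestion in NUDGE_RULES.items():
        if trigger in text and suggestion not in hits:
            hits.append(suggestion)
    return hits[:MAX_NUDGES_PER_SEGMENT]

print(nudges_for_segment(
    "She said she tripped and fell outside the check cashing place, "
    "and she's been feeling down since."
))
```

In Emily's framing, an LLM pass over the same transcript would catch the open-ended signals a keyword list misses; the rules layer handles the predictable "when this happens, do that" cases.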

Kyle: I mean, that's a perfect answer. And honestly, I got a little giddy listening to you there. Just thinking about a real-time conversation going to an LLM and having it help the situation is so interesting. Even 10 years ago, I think we would have been like, "What are you, living on Mars?" So that's so interesting to think about, and it's going to be very crazy to see in action. And even, like you said, the bank accounts and all the stuff you don't necessarily think about with healthcare, it's crazy to see it all come together, especially hearing it from an expert like yourself. That's awesome.

Cool. So I wanted to jump in talking about patients and everybody's individual situation and how AI can help you learn a lot about that. When you and I were going back and forth, you had talked about cultural competence and how we can be competent in someone else's culture or not be competent in someone else's culture and what to do. I was hoping you might be able to elaborate on that a little bit. Are you speaking specifically to differences in health care related to their race, where they live, genetics, and things like that? Or is it something else that maybe I missed the point on?

Emily: Yeah, really, it's all of that. Anyone who's followed me at all on LinkedIn knows this is kind of my soapbox topic, but I don't think the idea that you can be competent in someone else's culture holds up. What I do think is that you can be culturally sensitive and culturally curious. The biggest reason I'm so opposed to this idea is that instead of just saying, "Maybe we don't know; let's ask the person what their preferences are," we pretend that by knowing someone's race or religion or culture, we can make assumptions based on that. And that just feels pretty dangerous to me. It feels like the long way around to what we really need to know, which is, more important than their culture, what pieces might they take from it or not take from it? Everyone interprets culture, religion, race, all these different factors, differently. So the easiest way to do this and to get it right is to ask the questions and allow them to give you the answers. Then we're not making assumptions, we're not guessing, and we're going to the expert, the human we're talking to, for the answers. I think for so long we've been so afraid to ask those questions, and that has prevented us from doing the right thing, which is also the easiest thing.

And, you know, you can really damage a relationship by making incorrect assumptions, and that can be so hard to come back from. But if the person receiving the treatment is telling you how they want to be communicated with, what procedures may or may not align with their religious or other beliefs, and what their goals are, those are questions that are very simple to ask. It prevents you from having to go in and make repairs. That's not to say you'll get it perfect, but it's a lot better than trying to guess based on some curriculum you were required to do six hours of training in, one that maybe highlights one racial or ethnic group, these types of things.

Kyle: I mean, we all have our different biases just from our own life experience. So you want to eliminate that as much as possible. I think that ties in nicely with the conversation we had earlier about talking to your CFO. You can speak their language because you've already talked to them. You've had questions with them. You know what their goals are and things like that, so you can speak their language.

Similarly, you can help diagnose and treat somebody by asking these questions instead of making assumptions. I think that is a perfect way to approach it, coming from a non-healthcare person, as a marketer and AI person. I love that you think about that kind of stuff, and I hope it becomes more common in general.

I think a similar concept that occurs in healthcare a lot is stigma, specifically condition-based stigma. So I'm curious if you see similar use cases or how you might approach something with a touchy subject to help treat that patient and also empower the conversation or the treatment with some AI.

Emily: Yeah, I think what's interesting is, being a psych nurse, stigma is something you think about all the time, because especially when I started 20 years ago, mental health was even more intensely stigmatized than it is now. Reaching out for help was not necessarily seen as admirable or brave; it was seen as weak, that kind of thing. But I think stigma in healthcare goes far, far beyond that. What's really important, and where we have made some strides, is to stop labeling people as their diseases or their attributes.

This is where I think whole-person health really comes into play, because someone is not just depressed, or a schizophrenic, or a diabetic, right? They are someone who didn't have access to quality food when they were a kid, and because of that they became overweight, which hurt their kidneys, and they ended up with diabetes, et cetera. When we stop looking at someone as their disease state and start understanding the factors behind it, it automatically reduces stigma, because you're thinking of someone as a human versus a disease. But second, it really goes back to something we've already talked about, which is understanding what to do about it.

If you understand that they are overweight because they don't have access to food versus thinking they’re lazy, you can then understand how to approach that to actually drive results. I think about that piece a lot. Over the years I've sat in interdisciplinary team meetings where we used to say, “Oh, this is a non-adherent patient.” And when you dig a little deeper under the surface, you find out they're not taking their meds because they can't afford them. And guess what? Any one of us, if we couldn’t afford it, probably wouldn’t take our meds either.

One concept that I learned as a psych nurse was if you think about someone doing something you don’t like, think about whether it’s because they don’t have the skill to do it. I’ve morphed that slightly to say, “What is preventing them from doing it?” Then you very clearly have an answer of what you're going to work on. A lot of times it comes back to these Maslow's hierarchy needs—they can’t afford it, they can’t access it, they don’t understand it. These types of things are, in some cases, pretty solvable. But when we just go in with the same tactic over and over, everyone feels like they're bashing their head against the wall, and that fuels the stigma too, making it easier to make someone the bad guy.

I think the easier approach is, just like we talked about, working with them and understanding them as a human being. What are those triggers? Is it a fear of going in public so they’re not picking up their medications? These are very real things that we see on a daily basis.

Kyle: You know, I think everything, whether we're talking about healthcare, sales, or marketing, you're always more successful when you have empathy, right? And if you can understand where they're coming from and you actually listen to them, I think it's going to make things way more effective, right? You're not going to be beating around the bush. You're not going to be failing because of assumptions and all that kind of stuff.

So, whenever it comes to implementing a system or marketing for healthcare, I try to put empathy into the core of that. I have three questions that are the starting point of my strategy, and you kind of mentioned it earlier, so I think you're going to be on the same page here. I think about: What is wrong with me? How are you going to treat me? And then, how am I going to pay for it? Right? Those are three questions at the core of any strategy.

So I'm curious if you have a framework similar to that, how you strategize about taking care of a patient, reviewing the data, or whatever it is, and how you make sure that you're actually listening to the situation and providing the best solution.

Emily: Well, you don't know this, but you totally just set me up perfectly because one thing that we did as we are building our platform is we actually built this into the required workflow. So a portion of every interaction is going to be really dependent on who is the person, where are they, and how are we progressing towards goals. That changes from interaction to interaction. But we have one piece that's called the conversation starter. And there's a compliance piece to this that might talk about HIPAA identification verification or whether we have a release of information if we're not talking to the person themselves. But then there are three questions, and these are the key that I think are so important: What's on your mind today? What are you worried about? And what are you looking forward to in the next three to six months?

There are a couple of different reasons we do this, but number one is this is the start of the call, and we're demonstrating that you are going to lead this effort because whatever is on your mind is going to be the first thing we talk about. Instead of me diving into the list of to-dos I have, we're starting with you. The second is, if we don't address those things, the likelihood that the person is going to be present and capable of interacting with us is going to be almost zero. And then the third, again, it gives us the fodder for that aligned goal making.

So what are the things that are important to them? How do we use that as motivation for them to continue working on their own health journey, etc.? I think these three things, again, are very helpful in the patient or member space, but they're also really helpful in day-to-day life. What might that feel like if someone asked you that? You would feel cared for. You might get some questions answered, right?

You might ask or share something that you probably wouldn't have unless you were specifically prompted to do so.
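To illustrate how a required workflow like this might be encoded, here's a minimal sketch. The step names and structure are our assumptions, not Braided Health's actual schema; only the three questions come from the conversation.

```python
# Illustrative encoding of a required "conversation starter" workflow:
# compliance steps run first, then the three opening questions, in order.

CONVERSATION_STARTER = {
    "compliance": [
        "Verify identity (HIPAA)",
        "Confirm release of information if not speaking with the member",
    ],
    "opening_questions": [
        "What's on your mind today?",
        "What are you worried about?",
        "What are you most looking forward to in the next three to six months?",
    ],
}

def run_conversation_starter(ask):
    """Walk the required steps in order; `ask` is any prompt->response callable."""
    for step in CONVERSATION_STARTER["compliance"]:
        ask(f"[compliance] {step}")
    return {q: ask(q) for q in CONVERSATION_STARTER["opening_questions"]}

# Example usage in a console: answers = run_conversation_starter(input)
```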

Kyle: That is so cool. I love that you have those questions and that framework. Can you do me a favor and repeat the three questions again, just so we're all clear? 


Emily: What's on your mind? What are you worried about? And what are you most looking forward to in the next three to six months?


Kyle: Interesting. Interesting. Okay. And has this been iterated on? Was there a different question or a different way of saying it in the past that you've now changed to get to these three questions? I think we're back.


Emily: Okay, we are. I'm so sorry. 


Kyle: No, it's all good. It could be me. It's part of the game, unfortunately. 


Emily: It's not. I've been having internet issues this week pretty badly. 


Kyle: Oh, that's fun. 


Emily: Did you get them?


Kyle: What's that? Oh, I got them. Yes, yes. It was perfect. My follow-up question was, I'm curious if this is, you know, like a version three of your three questions or something like that, if you've iterated off of this at all. Or is this just like you guys all got together, came up with this, and were like, we like this, it works, let's run with it?


Emily: Yeah. I first came up with the questions in a previous job. They had a different iteration there, but those were more health-specific, and I found people sometimes had trouble answering them. So I wanted them to be things that are at the forefront of people's minds, that they don't have to think about, and that they can just be honest about. I didn't want them to be constrained by health, because that dilutes the intention behind it: understanding them as a human and not just a patient or a set of health goals.

Part of this was driven by the idea that I would look in a chart and see, “Patient says that they want to decrease their A1C.” And I would think, did they really say that? Maybe one out of 50 would say that. But most of the time, what people are thinking about is, here’s my life goal and here are things. Then it's our job to translate that and say, this is how we think about your health and support that in order to make those goals more achievable.

Kyle: Awesome. I love that. I love that. Cool. All right. So we're getting towards the end here. So I have two kind of just like large open-ended questions for you. We'll do one at a time to make it easy. But I'm curious if you could snap your fingers right now and have one really detailed workflow or AI system implemented into your day-to-day to overcome some kind of issue or obstacle, what would that be?


Emily: Oh man, I think it would be just having an AI assistant with me at all times so I can remember conversations and... actually, I don't know if I'd like that. It sounds good now, but maybe it would be horrible. I'm not sure.


Kyle: Maybe if you had like a trigger word, you can tell it to start listening or something like that. 


Emily: Yeah. Yeah.


Kyle:  Awesome. Awesome. All right. And then again, very open ended and broad here. Do you have any predictions for health care and the integration with tech and AI? And what do you think is going to happen in the next year, five years, 10 years, whatever it may be? 


Emily: Yeah, I think that the tech is going to continue to emerge. And I think it's going to be a lot longer than we think before this stuff is regularly and consistently used in healthcare. I think going back to the fear we talked about, a lot of decision makers are fearful. They worry about the accuracy of AI, which is a fair concern.

I think that, as is typical for healthcare, we are going to be slow adopters, and there's going to be a lot of push and pull, maybe some spurts forward and then tons of regulations getting put in place, reacting to unintended consequences or things that weren't anticipated. But I don't think it's going to be limited by the ability or expanse of AI in tech. I do think it's going to be limited by people's egos, fears, and anxiety around how it might impact their own livelihoods. Those types of things, I think, are going to slow it down drastically.

Kyle: That makes total sense. Absolutely. I wanted to give you an opportunity. Is there anything that you are passionate about or excited about maybe that we haven't discussed at all that you wanted to kind of just share with our listeners? 


Emily: Yeah, I think, you know, obviously mental health and behavioral health are close to my heart. And I think there's some really exciting opportunity with AI in that space to bring up threads that maybe the therapist isn't hearing or hasn't noticed and just provide that, you know, almost like a Jiminy Cricket on your shoulder—not necessarily talking to you about morals, but just saying, “Hey, did you notice this? Have you considered this? Is this a possibility?” I think that could be really, really interesting.

Given the sensitivity of a lot of that work, I think that's a place where it may be slower to be utilized. But I think there's some opportunity to think about how you can implement those types of things at less risk. For instance, not necessarily saving transcripts and just using them in real time is one option. There are also other ways that you can de-identify it and get the benefit while mitigating some of the risks that a lot of folks have.

Kyle: Well, I know mental health was kind of discarded for a while. You know, they focused mainly on the body. So I think we've gotten better at that, but hopefully AI and this wave can help to catch that up even more and really support the mental health initiative. That's great.

Well, cool. So again, I know that you're at Braided Health. People can check you guys out at braidedhealth.com. I know that you are on LinkedIn at Emily Olson. Do you have any other social handles you want to plug or anything like that?

Emily: Nope. You know, reach out if you love to talk about this stuff. I'm happy to nerd out about it with anyone, for sure. Definitely love talking about it.


Kyle: Awesome. Great stuff. Well, Emily, I really appreciate you coming on the Brainiac Blueprint. This has been a great conversation. Hopefully other people will find it exciting. And thank you all you Brainiacs out there listening. Stay brilliant. Talk soon. 


Emily: Thanks, Kyle. 



Alright Brainiacs! Let’s Build Your AI-Powered Business

Ready to transform your business with AI? Let’s talk.
Contact us today to schedule a consultation!
