Welcome to The Edge of Healthcare, your premier destination for insightful discussions and actionable takeaways. In each episode, we dive deep into conversations with industry leaders, exploring the dynamic landscape of healthcare. From overcoming hurdles to embracing breakthroughs, join us as we discover firsthand the strategies and experiences of healthcare trailblazers. Whether it’s payer and health system leaders or innovative solutions, we’re here to empower you with knowledge that drives real change in the industry. Don’t just listen—be part of the transformation.

About This Episode

What if we treated AI like a new team member instead of just another tool?

In this episode, Shahid Shah, founder and CEO of Netspective Communications, shares insights from a recent AI conference in Dubai, emphasizing how the region fosters innovation by encouraging experimentation and reducing regulatory fear, unlike the more cautious U.S. healthcare system. He stresses the importance of treating AI not as just another software tool but as a new team member that must be onboarded, supervised, and trained to align with an organization’s culture and ethics. Shahid highlights ethical AI as a shared human responsibility, requiring oversight similar to how we mentor medical interns. He points out hospitals’ blind spots: falsely believing they can control or fully understand AI, when in fact, staff are already using it independently. Lastly, he argues that healthcare is uniquely suited to manage AI risks, given its experience handling human life risks, and urges leaders to embed AI into existing risk and decision-making frameworks.

Tune in for powerful insights on how we can responsibly and effectively integrate AI into healthcare systems without stifling innovation!

Read the transcript below and subscribe to The Edge of Healthcare on YouTube.

Martin Cody: Welcome to The Edge of Healthcare, where the pulse of innovation meets the heartbeat of leadership. I’m Martin Cody, your guide through riveting conversations with the trailblazers of healthcare. Tune in to gain exclusive access to strategies, experiences, and groundbreaking solutions from influential payer and health system leaders. This isn’t just a podcast, it’s your VIP ticket to the minds shaping the future of healthcare right now. Buckle up, subscribe, and get ready to ride to The Edge of Healthcare, where lessons from leaders are ready for you to use today.

Martin Cody: Hello again, everybody, and welcome to another episode of The Edge of Healthcare: Lessons from Leaders to Use Today. My name is Martin Cody, SVP of Sales and Marketing for Madaket Health. And with me today is Shahid Shah, the CEO and founder of a company that I did not know existed until about a month ago, but I think, just like me, you’re going to want to know what he’s doing and how he’s doing it. His experience is truly par excellence in the areas of technology, healthcare, and, last but not least, the ever-present AI. Shahid, welcome to the program.

Shahid Shah: Thank you so much. And, Martin, it’s really nice of you to invite me here. I’m looking forward to having a conversation with both the young and the old in your audience.

Martin Cody: Excellent. Because we do have that demographic covered, and especially for some of us older folks, of which I am now firmly entrenched in that category and cohort. I want to get to the AI in a second, but I do want to also talk a little bit about the last 20, 30 years of your career. You’re coming to us from Washington, DC, and you’ve spent a bit of time working for the government, if you will. Walk us through, from college and post-college, what got you to where you are today, and why.

Shahid Shah: Yeah, it’s a great little story, because I think many of your audience members who are thinking about whether or not they should get into healthcare might find it useful. I graduated in 1990 with a computer science degree. Regular nerd, propellerhead, however you want to think of me. I got a really great, almost dream job at the US Navy. So, I was a federal official at the US Navy, on the civilian side, not the military side. And what I loved about it was that it was just a bunch of R&D shops, right? Things like war games and simulations, working on code for missiles, really exciting stuff that a kid out of college with a computer science degree would find super useful. What happened at that time is similar to what’s going on today in the Trump administration: a bit of cleanup and reduction in force in government was occurring in the 1995 time frame. In that case, it was called BRAC, or the Base Realignment and Closures. And the base that I was working at just up and moved 200 miles south, and they asked, would you like to come with us? And I thought, I’m in Washington, DC, a very vibrant city, moving to a farm town; maybe that’s not the best move for my career. And when I looked at that, I said, you know, I’m doing something important, I’m doing it with impact. When I go in the morning and when I get home at night, I never feel like, oh my goodness, I wasted my day, because you’re working on tools and technologies for soldiers and sailors, etc. So, I asked, what could I possibly do that might have the same level of impact and still be super heavily involved in IT? What I found was a really cool architecture job at the American Red Cross. The American Red Cross, as pretty much everybody knows, has two major divisions. One is disaster recovery: they show up whenever anything goes wrong, and you see the American Red Cross trying to help out. And then there’s another unit for blood collections. Anytime you give blood, it’s usually for some purpose, and that purpose is to save someone’s life or give them additional help in their time of need. What was really cool is that I just happened to know somebody and got that job. And this is normally how it works, right, Martin? It’s not like you have some plan that says, oh, I’m going to go do this big thing; it’s basically pure luck, being at the right place at the right time with the right skills. At the American Red Cross, from 1995 through about 2002, we were taking individual departments and individual units in each city and creating a unified, fully computerized system. So, that was my entry into healthcare, and it couldn’t have been a better entry. We were creating a nationwide system. It was an electronic health record. It was unified. It had hundreds of integrations, thousands of little shops doing blood collection, tens of thousands of users. So, it doesn’t get more enterprise, from an EHR perspective, than that. And when I got out of there, I was kind of an EHR expert, and that’s how I got into healthcare.

Martin Cody: A couple of questions on that. And that’s a fascinating story. Where was undergrad? You graduated with a computer science degree from Penn State? So, you’re up at Penn State, and then you get a job in DC working for the Navy, which I find fascinating. And then the transition out of the Navy due to a “yeah, I’m not going to move 200 miles south” type of thing. And you said you knew someone at the Red Cross. Who was that? You don’t have to name names. But how did you know that person?

Shahid Shah: I had worked for this particular young lady at another location. She happened to move to the Red Cross and told me, hey, these guys are doing some really important stuff, we could use your skills. So, it was one of those things where, not knowing anything about healthcare, I could have said, well, I don’t know anything about healthcare, what do I do? Instead, one of my rules, leadership lesson number one from this podcast, and one you’ve probably talked about before, Martin, is that when you’re young, you say yes to everything.

Martin Cody: It’s that, and that’s where I was going, because I was curious where that relationship came from. But I think leadership lesson number two, or one point A, would also be that healthcare is a seemingly massive industry, but it’s actually a microcosm of many industries. And the lesson here is, don’t burn a bridge. You don’t know where that person is going to surface and how much you’re going to need them later on in life. All these relationships have tentacles and reach, and if you’re helping someone and giving value, then they will probably leverage your skills later on. So, I totally agree with you on that standpoint: say yes to everything, take every conversation, you don’t know where it’s going to lead. So, congratulations on that. And so, that was your entry into the wonderful, wide world of electronic medical records and integration, and arguably interoperability, at that point in time.

Shahid Shah: That’s right. And in addition to that, the world of medical devices, because in this particular environment at the American Red Cross, the entire computer system was what we call a Class III regulated medical device. So, it was not under the purview of simpler regulations, like those of the Office of the National Coordinator and others; it was under the purview of the FDA. And it makes sense, right? If you’re taking blood out of somebody’s body and then giving it to somebody else, you can literally harm them, if not kill them, if you don’t do things right. And that’s really the lesson here: when you’re doing something super important, you’re going to be impacting people’s lives. And you have to start thinking and say, well, do I want to be on the IT side, where I’m working with information about patients, or do I want to be on the care side, still in technology, but in medical devices: a pacemaker, an implantable in your brain, an MRI, etc.? What you want to think about, from a career perspective, is how much impact you want. And do you want that 2:00 in the morning phone call where something is so wrong that it’s harming patients and you need to get to work? And so, when you’re young, like I said, you say yes to everything.

Shahid Shah: I haven’t had that particular position for 25 years, but I can still remember. If a particular phone call rings or a particular pager goes off, it still reminds me of those dozens of times when I did have to go in at 4:00 or 2:00 in the morning, because when life matters, there is no time. You don’t have work-life balance and that kind of stuff; you’re in emergency mode pretty much routinely, so you shouldn’t do it for too long. I did it for about five years, and they were really hard. But everything that I know today came from those five years, almost like a cauldron, as it were. And I would recommend it: unless you really need that family-life balance, do a hard job that matters to a lot of people for a little while, so that you get practice for everything else you’re going to do later.

Martin Cody: No, I think that’s incredible wisdom, and I like the way you articulated it: do a very hard job, but do it for a short period of time. Otherwise, you’ll burn out and then be of very little use to anybody. So, that was early in your career. And now, within the past seven days, you’ve just returned from a pretty robust AI conference in Dubai. And I don’t want to minimize everything that you accomplished in the gap between there. But I do want to tie this up in a bow with regards to some of the medical device aspects, where healthcare is going from a digital health perspective, and then obviously the explosion and importance of AI, and I would stress the responsible use of AI. Walk us through a little bit of what this conference in Dubai was about. And then part B of that question: compare and contrast it a little bit with AI from an industry perspective in the United States.

Shahid Shah: Yeah. So, this particular conference was one part of what they were referring to as AI Week in Dubai. Dubai is a pretty dynamic environment. It’s a very small area, maybe 4 million people or so, but they’re pretty advanced in fintech, health tech, and AI. They’re trying to get their tentacles into as many things as they can, drawing people to Dubai, creating it as a financial center and an AI center, etc., moving themselves away from oil as their primary source of income into a more diversified economy. And of course, AI is a nice umbrella as well, because it hits all the different industries. So, at this particular event, there were many different types of people speaking. There were people from energy; of course, we’re in the Middle East, so energy was a big thing. Climate was a big thing. But easily a third of the speakers there, maybe even more than that, were in healthcare-ish technologies. They covered the entire spectrum of talks, from “hey, we don’t have enough doctors, what the heck are we going to do?” all the way through to “could AI sit on the edge, inside the pacemaker, and do a better job?” Now, what was really interesting is that there are lots and lots of people who think that AI could do practical day-to-day work, but in the United States they’re fearful: if I put this in, who’s going to hurt me from a regulatory, legal, etc., perspective? What they’ve done pretty well in the Middle East in general, and I was in both Riyadh and Dubai, is that the government is trying to force people to think outside the box and say, look, it’s inevitable that something is going to go wrong, but you’re not going to get in trouble for it. And that’s the slight difference here. The United States has a fantastic history of trying new things, except we do them in academia, right? That’s why you run clinical trials and things like that in academic medical environments. But in general, if you look at normal treatment areas, everybody’s afraid. We’re like, oh, I’m not doing that, and I’m not doing that. And what they’ve done really well over there is open the shackles and say, look, just try everything. Tell us how it goes. Come back to us. And they’re funding these experiments, saying these three hospitals try this here, these two hospitals try this there. Now again, it’s small; you can’t even compare the scale, because we’re at 330 million people and they’re at 4 million. But when you take the entirety of the Gulf Cooperation Council, there are about 70 to 75 million people, so it’s not like you couldn’t do some of those things. And they have the benefit of not having 75 years of legacy to bring along. So, what I loved was hearing what the art of the possible was, understanding what is possible in terms of either ethical AI or reproducible AI, and then bringing that back here. Two quick lessons that I learned: one, everything that is triable should be tried. And part two of the lesson is that the only things we can’t move fast and break things on are those that are literally FDA cleared. Up until that point, if it’s not under an FDA-cleared requirement, try it out.
And even with the FDA, what we’ve found these days is that, if you look at what the FDA is accelerating with what we call software as a medical device, they’re saying, look, even in these areas that are FDA cleared, try stuff out. And that’s what I think we’re lacking here: we’re not as bold as we used to be 20, 30, 40 years ago, because we’re so afraid. We have real tort reform to do in this country; we are afraid for good reasons. But there were lots of cool things that I saw over there.

Martin Cody: Yeah, it’s an interesting balance with regards to pushing the envelope, but also, from a safety perspective, not pushing it so far that you bring about harm. And I liked what you just said about those two experiences, those two outcomes you’re witnessing, but the phrase you uttered that really caught my attention was ethical AI, and I’m curious who is determining that.

Shahid Shah: Yeah, I think the way to think about it is that ethical AI is generally going to be determined by the person who is going to use that AI. At the moment, there is no way to teach or train ethics directly into AI and thereby prevent it from doing anything unethical. What you can do, though, is look at AI in the same way that we would look at a young medical student who just finished school and is an intern or a resident somewhere in a hospital. Who teaches them their ethics, beyond their parents teaching them to be good people? Medical ethics is taught by the medical school, the medical standards, and, most importantly, their training physicians. And so, we have to look at it and say, if I prompt my AI with a bunch of unethical stuff all the time, it is going to give me unethical answers over time. And that’s the problem right now. Let’s just say, not to go too far out, that over the next five years we’re not going to have scenarios in healthcare where we say it’s fully unsupervised, agent-driven, with no oversight by humans. That, I think, is a bridge too far at the moment. Not just because it’s technically difficult; it’s that there are lots of problems we can’t even fathom that the AI wouldn’t be able to handle on its own. So, I think if we keep this idea that we are doing supervised AI agents, and the supervision is done the exact same way that medical doctors, nurses, etc., supervise their students, that’s a great way to think about how the process should go.

Martin Cody: It’s interesting, because part of me is still a little scared about that. And the reason is that history teaches us a lot of lessons. One of the lessons that I found fascinating, certainly about the investment industry and the stock market as a whole, was when Alan Greenspan was Fed chair. During the implosion in the 1990s, when the market cratered, he said the single most surprising thing to him was that the behavior of the CEOs was all self-preservation, not shareholder protection. And so, when you juxtapose that with who’s going to teach AI the ethical use of AI, it’s going to be the medical institution, the fellow, or the training physician, stuff like that. Well, if you pull a decent surgical supervisor to the side and ask him or her, is that doctor pretty good, they’ll give you the honest truth. So, it’s a challenge, and I think some people are uneasy with it: do we want that person, who all of us would agree, off the record, is not necessarily the most ethical, teaching ethics to AI? But it also lends itself to a further question. You gave a talk in 2020 on how investors can sift through AI BS, and that was five years ago, so it was very prescient; congratulations on that, by the way. So, teaching the investment community how to filter through AI from an investment portfolio perspective, should we invest in this company, I get it. But I also think that same question applies today. Take out investors: how do you teach hospitals, how do you teach payers, how do you teach patients to filter through the AI BS when it comes to adoption of digital health, physician selection, care modalities, medical devices, and stuff like that? Huge question, Shahid, but I think AI is here to stay. And so, I want to help health systems understand how to better utilize it, and help the payer and insurance industries use it without continuing to put up roadblocks through prior authorization and everything else preventing the provider from getting paid. And then the consumer, the patient, the member. Healthcare is the only industry that all of us will touch, at the beginning of our lives or at the end. So, how do we protect the consumer and leverage AI so that all of it is beneficial to those three entities?

Shahid Shah: Yeah, I love the framing of that question, because it is complicated, but, in an ironic way, simple as well. If you treat AI and modern capabilities as an IT project, you’ve already shot yourself in the foot, because unlike an IT project, which has to be operated by humans, AI, in theory, could operate on its own, as we just discussed a few minutes ago. So, how should you treat it? Treat it more like an incoming human being that you then have to take full responsibility for. It doesn’t matter that a physician is fully credentialed when they walk into a hospital, health system, or practice; you onboard them: here’s your email, here’s your office desk, here’s the kind of scribe that you will use. You generally train them even though they are trained. The same goes when you bring in the AI. Suppose Shahid’s …, the most brilliant AI that could potentially replace an entire doctor, was created. The way to shoot yourself in the foot is to assume that Shahid’s AI company, …, did a fantastic job of training, and that I don’t have to touch that training when it arrives at my organization. That would be like hiring somebody from the outside, from a … firm, or a new doctor who’s just been credentialed, and saying, hey, just figure out how things work around here, no need to talk to me or anybody else. That is almost a recipe for disaster. And you’ll see that at the best organizations, onboarding of a new human being is treated seriously, because we know that we have to inculcate them into the culture at Mayo Clinic or UPMC, or wherever. And the docs that work at UPMC versus the docs that work at the Mayo Clinic, two world-class organizations, by the way, are not interchangeable. Why is that? It’s just medicine, right? It’s because each organization operates under slightly different ethical decision-making, slightly different workflows, slightly different everything. And those slight differences are what create the unique culture of each organization. So, this is the leadership lesson for AI, and it’s very easy: don’t treat the thing as a new piece of software. Treat it as if it were another human being, and then figure out what you would do with a human being based on what you were asking that human to do. If they’re going to be doing claims reimbursement, what would you do to train them on claims reimbursement? If they were doing surgeries, how would you train them to do surgeries in your environment? And this is the one thing, since you talk to a lot of leaders, you should ask them about: how should we treat incoming software? When it’s AI, treat it as a human. When it’s not, treat it as software, because the other humans are handling the work.

Martin Cody: That’s a profound answer, very valuable, so I appreciate that insight. And it’s interesting, too, because there’s nuance and context there, as you touched on. If you’re treating AI at Mayo, at Hopkins, at the Clinic, none of which, by the way, are paid sponsors of the program, if you’re treating it as a human, yes, it’s AI, but the AI is going to respond differently in each one of those cultures, each one of those environments, even though it could theoretically be the same AI. So, there’s that nuance, or shades of gray, if you will, that is completely unique to each institution, whether it’s Cedars-Sinai or a community hospital in the middle of Kansas. The culture is different, so the AI is going to respond differently, and you have to frame it from that aspect: treat it like a human that you’re onboarding. Love that. Now take that, and here’s one of the questions I’m interested in, because we do speak to a lot of digital health experts in that capacity: where do hospitals have a blind spot as it relates to AI adoption in their environments?

Shahid Shah: Yeah, the biggest blind spot is control. Look, I’ve been a CIO and a CTO at a number of different organizations. In the old days, I used to be able to say, I choose the software at the top, I make my selections, I bring them in, I organize them. In the old days, we had some semblance of control. But then came this idea of do-it-yourself technology: the first time we saw it was with mobile phones; the second time was with consumer-grade SaaS applications. In the end, if you are a hospital that believes that because you’ve told people not to use AI, they won’t use AI in your environment, then either you’re delusional or you’re lying to yourself. One of the two, or both. Either way, it’s a problem. And this is an ethical dilemma in itself. Take, for example, our children; we all run into this problem with our kids. We ask the kid to do something that they cannot do, and then we blame them for not doing it. That’s what we’re doing with AI. So, you have to assume that people are using AI; it’s on their mobile phones, there are apps, etc. You have to train them on how to use it properly, in the same way that we would train each other to be nice to one another instead of fighting, and on how to interact with the humans inside your organization. If you go back to that advice of treating it as a human, know that these little minions called AI are running around the hospital. Everybody knows they’re running around the hospital. Every person at the hospital has a minion on their shoulder, inside their phone, etc., and, unless given some alternative, they will probably use it wrong. There’s no scenario in which they don’t use it, but there will be scenarios in which they use it wrong. That’s how you have to treat it. So, the blind spot is believing you can keep the technology from coming into your shop. What do you do instead? Assume it’s already in your shop. Treat everybody like adults. This is the one complaint I make to most CIOs, CEOs, and board members sitting in a boardroom: you pay all of this money for these human beings. They’re well trained. Doctors are some of the smartest people in the world. Nurses are some of the smartest and most caring people in the world. You bring them in, and then you say, don’t do this and don’t do that. Why? You are trusting them with the lives of your patients, for God’s sake. How can you say, I trust that you won’t kill my patients when they come into the hospital, but God forbid you use AI, we don’t know what you’re going to do? That means you’re treating them like children. Treat people like adults. Train them. Let them understand. What are you afraid of? Now, the other blind spot, Martin, is this: one part is they feel they can control the technology; the second part is they think they know how to best use it. Both of them are false. One, you can’t control it from coming in. And two, no one knows what they’re doing. Anybody who comes in and tells you that they are an AI expert should be summarily shot, because that is no way to operate in a world where nobody knew this particular kind of technology five years ago.
Even though I’m classically trained in AI, I’m not trained in what we have today; nobody can say that they know this environment. So, the first blind spot is the easy one. The second one is a little bit harder, because they say, okay, we’re going to bring in these experts. Experts in what, really? We don’t have any psychologists who know how to operate LLMs yet, and those would be the only people you could bring in to say, here’s how LLMs operate. For everybody else, you have to get them to try things and understand that things are going to break, they’re going to do some things wrong, things are going to happen accidentally. And you set up an environment for friendly failure: not killing somebody, not harming somebody, etc., but friendly failure, meaning not everything is going to work out the way you want it to. And then that second blind spot is easy to cover as well.

Martin Cody: Well, it’s interesting. I like the phrase friendly failure, and I understand it. The thing that I think might be an unintended consequence here is that there are, in fact, no AI experts; I agree with you there. And by the way, if you say you’re an expert today, you’re out of date tomorrow, because that’s how fast it’s changing. But I think there’s also a tendency to introduce something that the healthcare industry does not need, and that would be paralysis by analysis. The leaders that I speak to have a consistent theme with regards to what they attribute their success to. A lot of it is risk-taking. A lot of it is leaning in and trying things. It’s okay, …, you talked about friendly failures, it’s okay if we don’t get the intended outcome; what we’re going to get regardless is an education, and then we can iterate upon that education to make a better-informed decision in two months’ time. How does a healthcare organization, an insurance company, a payer, anybody like that, adopt that framework and then make certain that they aren’t just analyzing till the cows come home and never actually making decisions?

Shahid Shah: Yeah. In those areas, it’s similar to how they’re already familiar with making a clinical decision. There are things that are very routine: a broken leg doesn’t need a super long analysis; you do your normal clinical process to fix that problem. But a complex multi-organ tumor? Hell, you’ve got to do an enormous amount of analysis to figure out how we’re going to do this, what the teams are going to look like, etc. So, if you treat every AI problem the same, you’re going to have that problem; but again, you can treat the AI as a human, a thing that is adaptable and changeable. And I want to be completely fair to my colleagues in the places where these decisions have to be made. It’s a lot easier for those of us sitting on the outside, not having to live with the consequences of our talk. As a CIO, a CTO, or a CEO, if I make a decision, I have to live with those consequences; I can actually be fined or, in some extreme circumstances, go to jail. Putting that aside, it is a cultural problem. And this is why I say that healthcare is uniquely positioned to accept AI risks: because we take far greater risks with human life. If you walk into a hospital, the chance that you can die of an infection goes up the moment you walk in. People know how to handle risk inside a hospital; it’s just what we do all day long. Now, if we say, well, AI risk is somehow unique to IT, that’s where we’ve shot ourselves in the foot. But if you treat AI as a human, then you know how to manage human risk. You hire the wrong nurse, she’s going to hurt somebody. You hire the wrong PA, he or she might harm somebody. What do you do? You put process into place. You separate and say, this type of process, this type of outcome, needs to have these workflows. The double mastectomy decision is very different from the broken leg decision. But if you put everything in the same bucket, you’ve got a problem. And I think that’s really the best way: treat it as the normal risk that you take with humans every day. We in healthcare take risk and manage it all day long. So, how do we take the AI risk and patch it into what we’re already doing? That’s a great framework for thinking. It’s not easy, not trivial by any means, but if you think about it the wrong way, you’re going to get the wrong output.

Martin Cody: And do you think that ideology is prevalent at the top of the United States healthcare system?

Shahid Shah: It is in certain environments. In academic medical centers, for example, people know that their risks are different than in a suburban clinic. However, what they don’t have is guys like us elucidating. All we’re doing is surfacing things to say, hey, none of us really know what we’re doing, but last week I found out that if I treated my AI as a human, it made my decision-making easier. That kind of thing we need to interpolate, understand, and put out there. There’s a little talk that I did, a TEDx talk a few years ago, where we explained how to understand the world of medical education in a way that you can apply to healthcare AI. Before the 1850s, by the way, there was no formal medical education; you learned what you did from sundry places. The first medical schools with formal education had to figure out how to formally educate humans to do things over and over again in a scientific manner. We are now in that same, almost hazy world, where we don’t yet have the first medical schools designed to teach AI. So, we’re all doing it together; we’re learning how that goes. I think 15 years from now, AI will probably go through the same med schools in the same way. It’s going to have to interact with us in the same way: humans will be learning with the AI, the AI learns with the humans, all in school, and then, after training, you put them into an inference-based world within a hospital. That, to me, seems like a reasonable long-term trajectory, because if we treat it like a human, it will interact with us like a human, and then we’re working as a team, as opposed to it being just a piece of software.

Martin Cody: And for some of us of a certain age, we’re already envisioning HAL 9000 talking to us, and when we’re treating it like a human, we get a little scared. But that’s okay; it’s a different topic of conversation. I want to switch gears. I could spend a lot more time on this, because I think it’s fascinating, and I think the audience also recognizes that we are literally on the precipice of step one into AI. It’s a long journey, so it’s important that we get this right, and I want to stress the ethical component of that, which I think is another fascinating topic. But I am going to switch gears here to the speed round: I’ll give you a sentence or a phrase, and you tell me the first thing that pops into your head. You ready?

Shahid Shah: Yep. Let’s go for it.

Martin Cody: I love your background as it relates to the CIO and CTO area. Technical skills versus in-the-trenches skills: which is more important?

Shahid Shah: In the modern era, in-the-trenches skills, tied with prompting, if we’re talking about AI, or with adapting existing technology to your current environment. The biggest mistake is treating all technology for all hospitals the same. The in-the-trenches part tells you what my culture is and how my workflow operates, and then how I adapt known technologies to that local environment. That’s the essence of most CIO work today.

Martin Cody: Awesome, I like that. We talked a little bit about blind spots in the healthcare ecosystem as they relate to AI and to decision-making in digital health. Another component of your experience and expertise, and I’ve talked about it on the program a lot: digital health is another catchphrase that gets bandied about all the time, and I think there’s no true, shared definition of what it is. At, we’ll say, a hospital: what do you think are the two most important attributes a hospital CEO needs to focus on to be successful in digital health?

Shahid Shah: Yeah. So, first would be: what is going to improve my lines of business in a way that either attracts more patients, allows the care that I’m giving my patients to be of higher quality, or allows consistency of care across multiple patients? You notice that a good CIO, a good CEO, a good CFO first thinks about the business that they are in. We pretend and say, I’m patient-centric. No, you’re not. You’re an institution; don’t try to fool yourself. You are an institution, and you’re operating certain lines of business in that institution. Now you have to see which technologies, of which there are a plethora, fit. There is no need to worry about the invention of new things; just start applying them to the lines of business that you are in, and things work out a little bit better than if you say, well, if I brought this new technology in, I could be in a new line of business. That doesn’t make any sense as your first order of business; you know what you’re doing. Find the tech that hits all of your requirements: revenue capture, making sure your margins are set, ensuring that patient safety is maintained, all of those kinds of things. That’s your first and primary goal. Now, the second goal is to keep your institution in business. In that particular case, you can start to ask: do I acquire technologies on their own and then build out personnel to use them, or could I just buy a small practice or a small hospital that is already using the technology that I want, and then embrace it and build it into my business? Hospitals take a long time to change workflows and people and everything else; sometimes it’s easier to just go buy the thing that you want and incorporate it. And of course, we see this in the regular business world. No big pharma company is in the true R&D business, right? The R&D happens in small companies and small clinical trials, and then pharma companies buy them, take them through the clinical trial process, and go and build brands. Hospitals have been acquiring other hospitals and clinics, etc., but usually not for anything other than accretive revenue or spreading the risk out, those kinds of things. So, we need to go back to our business roots and say that the technical part just needs to support the business decision, not the other way around. Twenty-five years ago, pure tech made sense, right? Nobody knew anything. But unlike AI, we know a lot about digital health, we know a lot about how to do integrations, etc. So, you’ve got to have your business hat on this time.

Martin Cody: I like it, and I’m going to switch gears on that, because, we’ll just say, that’s the clinical side of the health system. Now let’s do the regulatory side. You are in control of CMS, and you have two days to enact policy, and none of the policies you enact can be reversed for five years. What two things are you putting in place?

Shahid Shah: Number one, I would establish the idea that continuous care at home is more important than big-box care within a hospital. Almost all regulatory burdens today are framed around the patient coming into the office: here’s everything that I do with them. CMS has this huge blind spot that says everything about healthcare happens in an institution. You and I know that’s not true. Everybody knows that’s not true, but we haven’t established our regulatory framework around it. So, the number one thing would be: how can we rethink regulations so that, in a quote-unquote first-principles way, we assume that digital is in place, AI is in place, and care is continuous in nature, at home, at the office, etc., and that the acute environment we come into is a one-off? Obviously, it has to be regulated as well, but it is part of the longer-term journey. That would be part number one. Part number two is: what could we do to our regulations to move away from organ-based care to whole-body-based care, whether that’s a functional medicine view of the world or an integrative medicine view of the world? Now, I’m not talking about foo-foo medicine; I’m saying there are well-known, well-established clinical trials that have described ways of treating the whole body through integrative medicine. For example, we’ve learned in the past 25 years how the gut and the brain work together, right? We have almost two brains now that have to work in an integrated manner. We know that pure organ-based care works well when we can identify that a particular organ is bad, but often we don’t know how to fix it by itself. Take fatty liver disease, for example: the name makes you think the problem is in the liver. But no, fatty liver is caused by a whole bunch of other things: diet, exercise, food, etc. And so, we are currently assuming that our organ-based care is going to take us into the future, and it certainly won’t.

Martin Cody: I like that distinction. And from a holistic standpoint, I also love the gut biome analysis. We do have two brains, and there’s a lot of data out there that says the brain in our gut is a little bit more important, because it’s going to influence everything, so treat it with nutrients rather than some of the stuff we’re putting in our stomachs. All right, last question. You are stranded in an airport with a long delay, and you’re pondering your last 30 years in healthcare, and you’re thinking to yourself, boy, I would love to sit down with this individual and bend their ear. This person can be living or deceased, within healthcare. Who are you sitting down with for an extended conversation about all the things you want answers to, and what are you drinking?

Shahid Shah: Wow, that’s a great question. I would say one of my favorite healthcare characters throughout history is Avicenna, or Averroes; he just happened to be an Arab doctor, and he was one of the first who actually embodied this idea that you can’t just observe. You’ve got to be able to make a thesis, gather a bunch of observations, and figure out what’s going on. A lot of what we know as the scientific method, though not formalized in his time, came out of that idea: pure observation alone isn’t enough, pure theory alone isn’t enough, and you have to go back and forth between the two. What I’d be drinking is tea, because I don’t drink coffee; it would probably be Turkish tea, which happens to be my favorite. And the conversation would be around: everybody else was doing it that way, so what the heck made you think about it this way? Especially because, obviously, in that time the internet wasn’t around, phones weren’t around, so you had to do a lot more thinking on your own. You weren’t talking to 500 people simultaneously and getting all these ideas. What really fascinates me is how an Einstein by himself, or an Avicenna by himself, or a Newton by himself, without a whole lot of conversations with everyone else, was able to come up with these unique, literally world-changing views. So, I would love to go back in time and have that conversation.

Martin Cody: I love it. And you’re right, they didn’t have the tens of thousands of inputs with regards to consideration and evaluation, and yet these ideas germinated inside their heads. How did that happen, and how did they leverage and pursue them? I love that concept. Shahid, thank you so much. You’re the CEO of Netspective; if our audience is interested in further dialogue, or even in exploring some of these ideas formally with you, how do they get ahold of you?

Shahid Shah: Yeah. So, you can go to Netspective.com, and you can also go to ShahidShah.com, which has my profile and the things that I do. If you want to see those TEDx talks, just go to YouTube and search for Shahid Shah TEDx; there are two TEDx talks that I’ve done. One of them is about what I talked about at the very top of this conversation: how leaders should say yes to things. In healthcare, our normal philosophy is, nah, don’t need this, etc. So, that was pre-AI, even, talking about good ways to find ways to say yes. And the other one is called Medical Science 3.0, which basically explains how we’ll know that we’ve reached a nirvana of medical research, and I’d love to come back to your podcast someday and talk more about how we could do medical research and maybe cure something every once in a while.

Martin Cody: That would be awesome on both counts, curing and having you back. I really appreciate the time and certainly you sharing some of your wisdom. There are a ton of lessons that people can use today, especially around AI. So, thank you so much for sharing and for the audience. Thank you so much for tuning in. Can’t do this without you. Enjoy the rest of your day, and we’ll be talking very soon.

Shahid Shah: Thank you so much.

Martin Cody: Thanks for diving into The Edge of Healthcare with us today. I hope these insights will fuel your journey in healthcare leadership. For more details, show notes, and ways to stay plugged in to the conversation, head over to MadaketHealth.com. Until next time, stay ahead of the curve with The Edge of Healthcare, where lessons from leaders are always within reach. Take care of yourselves, and keep pushing the boundaries of healthcare innovation.
