Episode 10 – Mental Health Tech Ethical Dilemmas

January 16, 2023

#FuturePsychiatryPodcast discusses novel technology and new ideas in the field of mental health. New episodes are released every Monday on YouTube, Apple Podcasts, etc.

Summary

Emily Carroll, a fourth-year medical student at Western Michigan University School of Medicine and an aspiring psychiatrist, spoke to us about common ethical concerns in regard to mental health tech. We are quick to agree to terms of service, but what are we actually agreeing to? Not only do we not read terms of service, but they are hard to understand. This could be why only 41% of people in the US trust devices that collect health data. We may not realize that we agreed to allow the company to sell “deidentified” data. Why is “deidentified” in quotes? Because it’s surprisingly easy to analyze data and identify you! Laws clearly can’t keep up with the rapid pace of technology growth and innovation. When tech grows too quickly, patients can find themselves relying on technology for something while their situation becomes far worse than when they originally began. Who will be monitoring their progression? How would tech developers manage urgent or emergent situations?

Chapters / Key Moments

00:00 Preview

02:52 Equitable Access to Telehealth Care

07:32 Ethical Telehealth Research

09:30 How do you make a recommendation based on limited knowledge?

12:27 Patients make the best teachers

16:02 Is anonymous data really anonymous?

17:37 Trust in Technology

21:02 Data Brokerage

23:29 Terms and Conditions

26:27 How do you act on the data?

27:41 What does an app do upon discovery of a high-risk situation?

35:09 Telehealth Education in Medical School

Transcript

[00:00:00] Emily Carroll: What do you do when your model returns what you consider a high risk of an adverse event? There is discussion about what it would mean for a response to be automatically triggered without a patient ever talking to a clinician. Is that something that we want in our healthcare? What benefits can that bring versus what harms could that do? And we need to decide where the risk threshold is. We need to decide, is the risk threshold just high risk, or does it need to be high risk and imminent? What are we deciding is the threshold for intervention in a situation where this patient is not seeing a clinician? I don’t have a solution to that one, because I feel like we don’t have a standardized system in place, even for in-person care.

[00:00:47] Bruce Bassi: Hello and welcome to the Future Psychiatry Podcast, where we explore novel technology and new innovations in mental health. I’m your host, Dr. Bassi, an addiction physician and biomedical engineer. If you are joining for the first time, I would greatly appreciate it if you subscribe and share the show with your friend network on social media. Additional resources and a full transcript can be found on our website, telepsychhealth.com; then click Podcast in the top right corner. Thank you so much for joining us, and I hope you enjoy the discussion today.

So welcome. Today we have Emily Carroll. She’s a fourth-year medical student at Western Michigan University. She’s going into psychiatry, and she has a very strong interest in ethics in relation to technology use and psychiatry. Welcome, Emily.

Emily Carroll:

Thank you for having me.

Bruce Bassi:

So we’ve had a pretty good discussion so far, and I think there’s a lot to dive into. Where should we start in relation to the ethics of how to use technology appropriately in psychiatry? There are so many big concerns, between privacy and access to care. What do you think is the main topic?

[00:01:49] Emily Carroll: I do think both of those are important. I think that something especially important to talk about is access to care, because that’s often pushed forward as the reason for pursuing these models. Virtual models of care, technology, and machine learning can improve access to care, especially in rural areas. But one of the concerns is that it doesn’t always improve access to care equitably. So there are going to be populations that are left out of these advances: people in the disability community, if these apps or models are not accessible to them; people who do not have access to reliable internet.

Or perhaps they don’t have the mobile devices necessary to use these kinds of models, so they’re going to be entirely left out of these advances. So I think that’s definitely something important to consider when we’re discussing these technological models: we need to be sure that we’re bringing along the populations that may need them most instead of leaving them out of this progress.

[00:02:52] Equitable Access to Telehealth Care

[00:02:52] Bruce Bassi: Yeah, let’s talk about access to care in regard to telehealth first.

[00:02:56] Emily Carroll: Sure. So, telehealth. I’m going to school in Michigan, and the Upper Peninsula has some pretty significant lack of access to in-person care, especially in those much more rural areas where you’re very far from medical centers. Up there, telehealth has been extremely useful, not only in mental health but in other types of healthcare, such as virtual stroke teams that are able to respond virtually. And so it’s been really helpful there.

It’s more profitable to get a license in populated states. So some of those states that have lower populations and are primarily rural are going to have less equitable access to virtual healthcare and telehealth. They’re not going to reap the same benefits that rural populations in more populous states, which may be more desirable to telehealth practitioners, are going to get.

[00:03:48] Bruce Bassi: Right. One thing I noticed when I was starting my practice was that most of the patients found the practice through their insurance directory, and if I had a location entered in there that was in a metropolitan area, I would get a lot more callers, a lot more people interested in the practice, because there are more people who live in that area.

But then practitioners are incentivized to list an office or mailbox that’s in a metropolitan area so that they’re in the directory under the city and they’ll grow faster. Whereas if you try to have a little bit more equitable access to care across the board and include individuals who are in rural areas, the practice is going to grow much more slowly. So one of the great advantages of telehealth, that ability to reach people who don’t have the means to drive into an office setting, people for whom the transportation barrier exists, gets thrown right out the window, because clinicians, given the way the insurance directory is structured, are trying to get into metropolitan areas to increase their visibility in the directory.

[00:05:14] Emily Carroll: Absolutely. And I do think there is something to be said for increasing access to care, even in more populated areas as well. There are going to be patients that struggle with finding childcare or transportation. So I don’t think it’s necessarily all negative for people in a more populated area to also have that access to telehealth, but you’re absolutely right. If we are kind of focusing on those more populous areas because there are incentives to do so, and it’s a secondary effect that, people in the rural areas surrounding those metropolitan areas are able to get increased access to care. We’re not pursuing that primary goal of improving access to healthcare in those rural communities that really need it the most.

[00:05:58] Bruce Bassi: Yeah. We sometimes take for granted, we just assume, that everyone has a phone. I have some patients in transitional housing or “halfway houses” who don’t have the ability to use a camera phone; it may be banned in the house. And I have some people who are in extremely rural areas where the internet service or cellular data service really depends on the weather and other variables, or it’s just very spotty. Maybe they have it at work but not at the house. So it’s not necessarily one size fits all, as in, just give them access to telehealth and you’re going to meet needs across the board. Even in those rural areas there is quite a lot of variability in terms of access to the internet across the country.

[00:06:49] Emily Carroll: Absolutely. And I also think that this can lead a little bit into a discussion of privacy. A lot of patients who are unhoused may not have a private space with a good internet connection to have their healthcare visits. They may be forced to have those visits from a public space where there is free wifi available, if they even have the mobile devices to do so. So that raises a problem where they’re not getting the same privacy for their care that patients who have better access to resources are going to get.

[00:07:22] Bruce Bassi: Right. Yeah. I can’t tell you how many visits I’ve done with a patient in their car in a parking lot. That’s just a comfortable place for a lot of people.

[00:07:32] Ethical Telehealth Research

[00:07:32] Bruce Bassi: When people do research in a lab setting, and this kind of tails off of access and whether or not you have an appropriate sample of the entire population, one main criticism of research that’s done in a laboratory setting is that you’re only getting the people who have the time to volunteer for the study. Maybe there are other exclusion criteria for them to be in the study. And so one potential solution is that you can run the study in their native environment, at home perhaps. There was a study of machine learning healthcare devices that were approved by the FDA.

Investigators found that most of the 130 approved tools did not report whether they were evaluated at more than one site. So even though there are these really good tools available that fall under the realm of digital therapeutics and machine learning, many of them aren’t taking a full snapshot of diverse populations.

[00:08:45] Emily Carroll: And I think that’s one of the same pitfalls that traditional research falls into: whatever study population they’re looking at, their results and their interventions may not generalize to more diverse populations. And in a lot of these cases, these are the populations that are being most adversely affected by mental health concerns.

So, I definitely think that’s a concern, especially as someone going into psychiatry: knowing which of these resources I should be recommending to patients, how much is it going to help them versus could it possibly harm them? It’s very difficult to figure out, among the wide variety of tools available, what to recommend for patients to improve their mental health.

[00:09:30] How do you make a recommendation based on limited knowledge?

[00:09:30] Bruce Bassi: Yeah, we talked before the show a little bit about how to recommend an app. When a physician recommends something to a patient in a clinical setting, there is this element that the physician should know it, or have gotten a decent amount of feedback on it, or it’s been vetted perhaps. That might not always be the case, because there are just so many apps out there. I was talking to Scott Burwell, who’s been on the show, and one of the main reasons they want to get the Neurotype device FDA approved is that it adds this layer of scientific validity and acceptance: it’s been vetted by a number of scientists, and now a doctor can feel more comfortable recommending and prescribing it. So it’s no longer in the realm of the many online apps for mindfulness, et cetera, that aren’t FDA approved. I see kind of a binary distinction there, FDA approval versus not, even though there’s this non-enforcement policy that happened with COVID, where they’re actually not really enforcing the whole digital therapeutic kind of terminology during the COVID health emergency, which I think is interesting and another topic to talk about. What are your thoughts about the whole suggestion versus prescription thing for digital therapeutics, from doctor to patient?

[00:11:07] Emily Carroll: Sure. Even as a medical student, patients are constantly bringing in things that they’ve found online or heard about and asking, do you think this is going to help me? And a lot of times you need to make some relatively quick decisions about that. As a medical student, I’m a bit more shielded from that, because I can go staff the patient with my attending physician and talk about that resource, and maybe we can have a few minutes to do a search of the scientific literature to see what kind of data is available. But moving forward into a career as a psychiatrist, you’re not going to have that buffer.

You’re going to need to make those decisions a little more quickly. I think that it is much easier to tell a patient, I don’t think this would harm you, rather than endorse, I think this is going to help you.

So whether it’s a suggestion or a prescription, you need some pretty significant evidence to actually tell a patient, “I think this is going to help you.” It’s much easier to say, “I don’t think this is going to hurt. Why don’t you try it and see if it helps.” So similar to that FDA approval, it requires a lot for a physician to be able to say with confidence, “I think you should devote your time and energy to this,” because it’s important to balance your suggestions to a patient against what is probably their very limited time, energy, and resources.

[00:12:27] Patients make the best teachers

[00:12:27] Bruce Bassi: One principle that I fall back on is that my patients are the best teachers, and I think their experience in using these apps is actually very valuable. It’s a real-world, real-case example that other people can learn from. So what I started doing was actually keeping a list, like our practice’s database of suggestions from other patients. I’ll just jot down their comments and how they liked it, and I’ll get a little bit of a snapshot, a real-world review, so to speak, of an app. And so I have a list of mood tracking apps and substance sobriety apps and meditation apps, some book suggestions. It’s been pretty helpful because we share it among the clinic, and so other people add to it.

It also gives a sense of popularity when more than one person recommends something. So I make a recommendation with the caveat that this is a suggestion from another patient: make what you will of their experience with it. Although if I say it like that, there might not be as much commitment to it; the more uncertainty I add, the more hesitation they seem to go in with. But I think there’s a word-of-mouth recommendation component to it that’s filtered through a physician. It’s not coming from me necessarily, but I’m taking the input from other people, filtering out what their experience was, obviously making it de-identified, and then passing it on to another person without any conflict of interest. I have no stake in the game; I don’t have any ownership in any of these apps. So it’s fun. It’s part of what I’m interested in and something that’s helping other people as well.

[00:14:22] Emily Carroll: And I think there are so many resources available, and it’s interesting which ones patients will bring in and which ones will click for different patients. One of the things I’ve enjoyed is engaging in that shared decision making with patients, going over resources and then maybe seeing them again in a few weeks and asking them how it went. Did it help? How often were you able to actually use it? I know with meditation apps, for some patients they’re really effective and they do it every day or multiple times a day, and for some patients it doesn’t work as well. So it is interesting to see what works for which patients.

I’m excited to see that this is a large area of research and development in psychiatry.

[00:15:04] Bruce Bassi: Yeah. You mentioned earlier that there’s little harm to be experienced from using an app. And then that kind of got me thinking a little bit about the privacy concerns.

[00:15:16] Emily Carroll: Hmm,

[00:15:16] Bruce Bassi: And we can talk about privacy issues surrounding the apps themselves: what those regulations are, whether or not the data is brokered out, and if it is, how easy is it to re-identify, even though it’s de-identified? Can people re-identify it?

[00:15:33] Emily Carroll: Sure.

[00:15:34] Bruce Bassi: So what are your thoughts on that topic before we get into the nitty gritty?

[00:15:38] Emily Carroll: Sure. So that, like a lot of things, is going to vary by where you are and what the regulations are. I know that there’s much stricter consumer data protection legislation in place in, for example, the EU or in California. But for the rest of the US, I think our data is not as well protected as we think it is, and it’s not as well protected as we treat it when we’re giving our data away.

[00:16:02] Is anonymous data really anonymous?

[00:16:02] Emily Carroll: And actually there is research that shows that it is fairly easy to re-identify data if you’re providing a lot of data to a company. It may be de-identified, but if someone knows what they’re doing, it’s fairly easy to re-identify you. And there are actual harms that can be done using this kind of data collected from mental health apps, in terms of employment opportunities, or in terms of insurance rates going up in certain areas if they’re able to identify locations where rates of certain illnesses are higher. There are non-medical harms that could be done to patients who are using these apps thinking that they are anonymous and that their data is well protected. So I think that is important to consider, especially because some of these apps or medical devices are collecting so much data. They’re collecting vitals, they’re collecting movement data, some of them are collecting text message data and speed of typing on keyboards, and some are able to go through the camera rolls of devices. There’s a lot of data that’s being collected.

And something we did talk about is that a lot of the terms and conditions that, people are accepting for these apps are incredibly dense and difficult to get through. I mean, very few people are actually reading the entire terms and conditions. It’s unreasonable to expect the average consumer to be able to consume a legal document, fully understand it, and provide fully informed consent for their data being collected.

[00:17:37] Trust in Technology

[00:17:37] Bruce Bassi: Yeah, totally. I want to talk about the level of trust that people have in technology, because I think it relates to the privacy concerns. More and more lately, there’s been this theme in our society about scams, scammers pulling off these really amazing digital heists in the crypto space or in the technology space.

And over the past decade, trust in technology has declined 24% according to one study, and just 41% of people in the US find devices that collect health data to be either somewhat or very trustworthy. So less than half. That also parallels another study that looked at individuals in different countries, where the US actually ranked the lowest in trusting technology businesses to do what is right.

Only 54% of people in the US trust technology businesses to do what is right. And so I think that’s part of the reason why these word-of-mouth recommendations are so important: the trust isn’t there by default. It needs to be built, either from somebody who knows the company or from some sort of easy-to-relate-to mission that they’re on. This is especially difficult and salient for psychiatry apps, because in order for people to engage most appropriately with the app, they need to have that level of trust there so they can open up if they’re asked to do journaling, for instance. And what’s the point of journaling if you’re mentally filtering yourself because you’re worried about what the company is going to see or whether or not your data is going to get out there?

It kind of defeats the whole purpose of it. Nobody’s going to continue to journal well and in a fully engaged manner if they feel that they can’t trust the app or the device itself.

[00:19:44] Emily Carroll: Absolutely. And I think that’s similar to all engagement with psychiatry: it’s built on trust. If the trust isn’t there, you’re not going to be able to provide optimal care. One of the things I was thinking about while you were talking is that with some of these apps or machine learning programs, it’s thought that some users will actually filter themselves intentionally, either to get a response or to not get a response, if they’re worried that they can be identified and that emergency services could be contacted on their behalf by whatever telehealth service they’re using.

And that’s a privacy concern as well, and it affects the care that’s being given. If a patient is not going to report suicidal ideation while using a mental health app, and that’s really what they’re struggling with, is the care being provided there targeting the most important aspects of their mental health or not? Are you missing a large component of what you need if patients are worried that their privacy may be violated? And like you were saying, the trust is pretty low. So that’s an interesting thing to think about: how patients are going to censor themselves while using some of these programs.

[00:21:02] Data Brokerage

[00:21:02] Bruce Bassi: Mm-hmm. There was a study that looked at over 80,000 health related websites, and they found that more than 90% of those websites actually sent information to third parties with 70% including specifics on symptoms, treatment, and diseases.

So the data brokerage industry is actually very strong and very powerful, and it adds another revenue stream for these companies, because they can sell the data they’ve collected to a third party while staying in compliance by keeping it de-identified. But the more granular the data is, for example an IP address or geo tracking or other cookies that they’re tracking, the easier it is to re-identify it later on and get more specific detail about that individual, even though it wasn’t directly disclosed by that original company.

That’s pretty concerning, and I think that’s part of the reason why people don’t necessarily trust these companies by default: they’re operating under a different set of standards and guidelines than a physician’s office would be. It’s not really the same regulation. HIPAA doesn’t apply to them, which is kind of interesting.

[00:22:17] Emily Carroll: Yeah, absolutely. And it’s interesting because I feel like a lot of the same harms that can be done exist there. In medicine, HIPAA is there to try to prevent some of those harms. And I did see an Australian study where it was possible to re-identify a lot of data that had been successfully de-identified. It’s not very difficult, and it is so profitable to sell this data to third parties that, especially in areas with less consumer data protection, I think this is a valid concern for patients and for users of these kinds of services.

So if we’re going to continue using and improving these models of care, and hopefully improving access to healthcare, and mental healthcare in particular, because we simply don’t have the number of professionals we need to serve the population with a mental health condition,

I think that we need to look pretty carefully at what’s being done with the data and whether we need additional protections for the users of these kinds of apps, machine learning programs, or healthcare devices.

[00:23:29] Terms and Conditions

[00:23:29] Bruce Bassi: Yeah, absolutely. I’m guilty as charged. When I sign up for a new service, I look for the checkbox so I can click it and then start using it. And there were researchers who showed that if you were to actually read the privacy policies and terms of service for all of the services that you come across over a year, it would take you about 76 work days, reading straight through from nine to five, eight hours a day.

It would take you 76 work days each year to read all of those privacy policies. The median length of a privacy policy was about 2,500 words, which is roughly 400 to 500 words per page, I would say. So that’s about five pages before you even get through it, and 75% of people skip privacy policies and just click the button instead. They calculated, based on the average adult reading speed of 250 to 280 words per minute, that it would take about 30 minutes to read a privacy policy and about 20 minutes to read the average terms of service. Can you imagine spending roughly an hour of your day, probably a couple of times a week, if you’re signing up for a different company here and there?

And that’s just reading, with no guarantee that you’ll understand it. So it’s not really like informed consent, because in informed consent you want to ensure that the person has the capacity to understand and that they do understand what they just agreed to. I wouldn’t even call it consent, really.

Well, you are clicking a button and consenting, but there’s really nothing behind it. It’s a misleading word, a misnomer. I wish we could come up with a better word for it.

[00:25:28] Emily Carroll: Yeah. And in the reading that I was doing, that was also a pretty significant concern with a lot of the data collection: even in scientific studies where they are doing an informed consent process the same way you would for traditional research, there are concerns that patients do not fully understand the implications of what can be done with the vast amounts of data being collected. It’s really hard. Even the researchers may not know what is possible to do with data about exactly where a person is located all day, what they’re doing, what their heart rate is, what their typing speed is, things like that. It’s very difficult to say that you’ve done a responsible informed consent process when a lot of the participants, and even the researchers, may not fully comprehend what can be done with the data. So I agree it’s hard to call it a fully informed consent in those kinds of cases.

[00:26:27] How do you act on the data?

[00:26:27] Bruce Bassi: Mm-hmm. Yeah, that’s a good point. What do you do with the data itself after you get it? So say you’re a physician and you recommended an app, and the app says this person has a 54% chance of developing suicidal ideation in the next 30 days.

What do you do with that? First of all, do you disclose it to the patient? Second of all, when you do disclose it, what do you say about it, and what are we going to do about it? Third of all, how do you even explain where it came from when the machine learning algorithm

[00:27:06] Emily Carroll: Yes.

[00:27:07] Bruce Bassi: Has developed a model behind it that is really unexplainable. Like you don’t, so you, for, for Point 3 you would have to really to them what machine learning is, how it, how it works, and why we don’t necessarily know why it works or what factors are that they’re taking into consideration.

[00:27:26] Emily Carroll: Yeah. And I think that lack of transparency is one of the big problems here, especially when a lot of these algorithms are proprietary, so the general public and researchers may not actually have access to them. But I think that’s another issue.

[00:27:41] What does an app do upon discovery of a high-risk situation?

[00:27:41] Emily Carroll: In terms of what you do when your model returns what you consider a high risk of an adverse event, there is discussion about what it would mean for a response to be automatically triggered without a patient ever talking to a clinician. Is that something that we want in our healthcare? What benefits can that bring versus what harms could that do? And we need to decide where the risk threshold is, which I feel is different for a lot of practices, a lot of physicians. We need to decide, is the risk threshold just high risk, or does it need to be high risk and imminent? What are we deciding is the threshold for intervention in a situation where this patient is not seeing a clinician, they’re just interacting with a virtual health model? So that’s a difficult one. I don’t have a solution to it, because I feel like we don’t have a standardized system in place, even for in-person care in that situation.

[00:28:45] Bruce Bassi: Mm-hmm. Yeah. That article that came out in 2020 by Nicholas Jacobson, titled “Ethical Dilemmas Posed by Mobile Health and Machine Learning in Psychiatry Research,” was very good because it raised something similar to what you said. It asked, is that intervention going to be triggered by passively collected data, or is it only going to be triggered by actively collected data such as what the user is physically inputting into the device? Google can predict quite a lot about human behavior, and that’s a mixture of both passively and actively collected data.

So say there are some factors related to where you’re moving throughout the day and how that changes over the course of a week, how often you’re getting out of bed, or certain types of photos that you save to your camera roll. These are examples of passively collected data that might have some bearing on what your future behavior is going to be. The authors brought up the point: who’s even responsible?

Is it the company, the person who recommended it, or the person themselves? Who should be responsible for taking action

[00:30:13] Emily Carroll: Yeah.

[00:30:13] Bruce Bassi: when the model shows that there’s a high risk ahead for that individual based on that data?

[00:30:21] Emily Carroll: Yeah. And that’s a tough one. Like I said before, I don’t think there are easy answers to that. It does affect the research sometimes, because researchers don’t want to collect data that can put them in that kind of ethical dilemma. But then I think we’re missing out on an important dimension of research as well, because those are some of the highest acuity situations that we’re dealing with in mental health, and I think some of the highest need is there. So if we are shying away from things like looking for suicidal ideation in patients in these studies, we’re missing a very important dimension of mental health. And it’s difficult to determine, one, what our threshold for intervention is and whether we should be intervening based off of this data at all, and two, who should be doing that. And it is, in a way, a violation of patient autonomy if we’re taking the step to contact emergency services, if that’s going to be the intervention.

And I know that is done in other telehealth models, for example Crisis Text Line, but anyone who contacts Crisis Text Line is fully aware that that may be happening. Confidentiality is discussed, and it is made clear that confidentiality is of utmost importance and will only be violated if there is significant and imminent risk of harm to the texter who’s reaching out.

It’s interesting, then, to think back on enrollment into these studies. How full is a participant’s understanding that real-world emergency services may show up at their house depending on what data is collected from them? And if it’s passive data, patients need to be aware that that may be a consequence of participating in such studies or using these kinds of apps or virtual models.

[00:32:29] Bruce Bassi: Yeah, and in timeframe too. At least the advantage with a research study is that they’re enrolled in a study. You know the onset, when they’re going to be coming into the lab, and when the study is going to be terminating with that individual. So there’s a clear distinction there.

Whereas with an app, the person and the data are going to be flowing and engaging at all hours of the day, maybe for months or years, depending on the person and the app itself. So there’s a real question there of feasibility. How do you plan an intervention at two in the morning when the risk crosses that threshold criteria? What do you do in those off hours?

So my prediction is that there’s going to be some case law pertaining to these technology companies and whether or not they are taking responsibility for the outcomes of the patient. It might be a little bit harder to illustrate direct causality between the harm and the app itself, but there are certainly instances accumulating now where AI is biting off a little bit more than it can chew, getting into these chats with patients who are in a highly acute state, and it makes recommendations or says things that are not pertinent to the person’s situation at hand.

So the technology companies want the accolades and the notoriety and the rewards for all the benefits that they’re providing for patients. But on the flip side, I think if they want to venture into this space, then they also need to take professional responsibility for harms that could potentially take place while the person is using the app, or outside of it. It just seems like it would make sense. If you’re using or recommending an app or providing a service, a disclaimer shouldn’t just get you out of the potential shortcomings or the harms that occur from it.

[00:34:47] Emily Carroll: Yeah. And I think, if they’re going to be providing healthcare and, like you said, taking responsibility for the benefits of that healthcare, then, as with all healthcare, especially in those high acuity situations, there is a large potential for harm. And if you’re going to claim one side of it, you need to claim the other as well. That seems like a reasonable prediction to me of where that’s going.

[00:35:09] Telehealth Education in Medical School

[00:35:09] Emily Carroll: I also wanted to talk about predictions for how this is going to be included in medical education, because it’s becoming a much larger part of the practice of medicine. Especially after the pandemic, telehealth has become a really important modality of care.

So we have had more education in medical school regarding telehealth. That’s been developed pretty quickly over the past couple of years. And I think especially in psychiatry, as I’ve been interviewing for residency programs, a lot of them have been discussing the opportunities for telepsychiatry at their institutions.

So, I mean, it’s here to stay, and it is very different from what we do in medical school and clinical rotations. So I wonder if there are going to be more opportunities for telehealth at ever lower levels of training, to get trainees comfortable with telehealth earlier.

 Even on some of my rotations this year, I’ve been doing telehealth visits with patients that don’t need a physical exam. So it’s interesting to watch it bleed down the levels of healthcare and try to get trainees proficient in it earlier because it’s a very useful tool, like we talked about.

[00:36:25] Emily Carroll: There are some drawbacks, but you know, there’s a lot of potential for good to be done as well.

[00:36:30] Bruce Bassi: What kind of changes have you seen in your program at Western Michigan lately that coincided with what you’re talking about?

[00:36:37] Emily Carroll: Sure. So during the pandemic there were a lot more opportunities for virtual learning for medical students as well, and a lot of care switched to telehealth. So even during my first year of core rotations, I was having telehealth visits with patients in family medicine and in pediatrics.

And a lot of patients really like the telehealth visits. They don’t have to travel if they don’t need a physical exam, so they prefer them to an in-person visit. That’s something we have had to get proficient with. And there have been discussions, usually in clinic, about the ethics of a telehealth visit: you need to hold it in a patient room that’s private, and the same obligations are there to protect the patient’s privacy and confidentiality.

We really do need to treat it with the same respect and ethical rigor with which we approach in-person care, even though it sometimes seems like it might be easier or less formal. And that’s one thing that I think might change in the medical school curriculum. We do have ethics courses where we discuss a lot of the rights of patients and our responsibilities to patients, and I think that discussing some of the differences in how we need to treat virtual visits or virtual data hopefully will make it into the medical school curriculum, especially in terms of psychiatry.

Because so much of psychiatry now is able to be done through telehealth, and I think there are some unique concerns regarding privacy and data protection, especially privacy in a patient’s home. I noticed a lot of concerns with treating children and adolescents, making sure that they actually did have privacy and a safe space to do their virtual visits; that was an interesting concern. You don’t have to worry about that when they’re in the office, because if they’re the only one in the room, you’re guaranteed privacy. So these are things that we touched on when they came up in clinic, but I could definitely see them becoming an important part of the didactic curriculum through rotations, especially in psych, where so much of your care is able to be done virtually.

[00:38:54] Bruce Bassi: Yeah. Very good insights. I really appreciate you taking the time to talk to me, and thank you so much for the work you are doing on understanding the ethical dilemmas within digital therapeutics and digital health in psychiatry, and for using that to educate our own patients so they’re more empowered in their understanding of how their data is going to be used in these apps.

[00:39:22] Emily Carroll: Thank you so much for having me. Really interesting discussion.

[00:39:25] Bruce Bassi: As a reminder, if you’d like to support this show, one way you can help us is by subscribing to the channel on YouTube and leaving a comment if you’d like.

It’d also mean the world to me if you could share it with your social media network; maybe there’s somebody out there who might be interested in the podcast. Hope to see you next week, next Monday. New episodes are released every Monday morning. Thanks a lot. Take care.

Resources

Ethical perspectives on recommending digital technology for patients with mental illness

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5293713/

Ethical dilemmas posed by mobile health and machine learning in psychiatry research

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7133483/

Navigating the Ethical Landscape of Digital Therapeutics

https://www.psychiatrictimes.com/view/navigating-the-ethical-landscape-of-digital-therapeutics

 

 

Are You a Journalist Writing About This Topic?

If you are a journalist writing about this subject, do get in touch – we can provide a comment for you.
