As Artificial Intelligence (AI) is being adopted in just about every context and application imaginable, we at Person Centered Tech are understandably focused on the legal, ethical, and clinical standard of care considerations – and on identifying the resulting risk management, privacy/security, and clinical care outcome implications – so that we can help equip you to navigate those considerations in a way that meets your practice’s needs on a holistic level. That’s the PCT Way.
We know that your center is your clients; your focus is on how you can provide clinically effective and supportive care while managing all of the components that care actually consists of – which requires much more than “just” the training, skills, and experience that inform and provide the basis for the care you provide, from the regulatory and technological aspects to the more logistical pieces of running an operational practice. Our center is you… what do you, as the helpers, need to be helped with in order to provide the help that you do?!
The professional ethics codes have yet to explicitly address AI, and the legal and regulatory landscape is also playing catch-up to this advent of technological power. We (mere mortals/humans) are all still trying to understand its potential, power, and dangers – and will be for a long time to come. And it is powerful *and* dangerous!
In a nutshell, AI presents tremendous opportunity and potential benefit that can be harnessed in amazingly positive ways – directly benefiting client care outcomes and practice operational efficiency, and freeing up capacity and resources from the more tedious and rote tasks of practice management to be channeled into client care, or into that elusive work/life balance and equilibrium we’re all seeking. AI also has the potential to create real, tangible harm in the context of mental health care: from AI’s perpetuation of biases, to security and privacy issues that don’t “just” violate ethics codes and HIPAA compliance requirements and responsibilities, but also violate client trust and present actual, material risk of client harm in myriad ways.
It is a lot to navigate. So PCT is currently channeling a lot of our resources into providing the information, support, and resources that we know you need. From trainings that address these considerations, to a rubric for the legal-ethical and responsible integration of AI into your practice, to the Policies & Procedures you need in place, to what you need to train your workforce on (if you have workforce members or helpers in your practice), we’re busy creating and curating resources for you so that you don’t have to filter through all the information, decipher what it means, and figure out how to apply it in practice on your own (or based on listserv and social media discussions).
To that end, we are pleased to be providing two supportive and topical trainings:
- Modern Progress Notes: Considerations for Teletherapy, Insurance Audits, and Artificial Intelligence (AI) presented by the fantastic Dr. Maelisa McCaffrey (Hall), PsyD of QA Prep
- The Evolving Legal-Ethical Standard of Care for the Clinical Use of Artificial Intelligence in Mental Health (live presentation on June 15th), presented by Eric Ström, JD, PhD, LMHC, and myself (Liath, the Director of PCT)
We’re also in the process of authoring a HIPAA Security Policy & Procedure insert that specifically addresses the criteria and processes for responsibly implementing AI in your practice (and a Workforce Manual section as well, for those of you with group practices). We’ve got you.
I know this is already long – though PCT isn’t necessarily renowned for our brevity ;-) – but I wanted to share a recent news story about the replacement of the human support-line staff at the National Eating Disorders Association (NEDA) with an AI chatbot, Tessa, which, in my reading, centers and illustrates something really important:
AI, while it might have amazing capacity and can do *some* things “better” and faster than we humans can, cannot build rapport, cannot express *genuine* empathy, and therefore cannot create an effective therapeutic alliance in any way that resembles what you can, and do, provide. You are not replaceable. And when there is a hasty attempt to replace that element of what you as a provider actually provide, it can be really harmful.
From the news story linked to above:
WELLS: Professor Marzyeh Ghassemi studies machine learning and health at MIT, and she is skeptical about this chatbot idea. She worries that it could actually be damaging.
MARZYEH GHASSEMI: I think it’s very alienating to have an interactive system present you with irrelevant or what can feel like tangential information.
WELLS: What the research shows people actually want, she says, is for their vulnerability to be met with understanding.
GHASSEMI: If I’m disclosing to you that I have an eating disorder; I’m not sure how I can get through lunch tomorrow, I don’t think most of the people who would be disclosing that would want to get a generic link. Click here for tips on how to rethink food.
WELLS: Often, the people who come to the NEDA helpline have never talked about their eating disorder before. Helpline staffer Abbie Harper says that is why people often ask the volunteers and the staff, are you a real person, or are you a robot?
HARPER: And no one’s like, oh, shoot. You’re a person. Well, bye. It’s not the same. And there’s something very special about being able to share that kind of lived experience with another person.