ChatGPT and the Impact of AI on Healthcare
ChatGPT is generating quite a storm of late, and I want to give my own thoughts on the impact of this type of technology. Is it really as transformative as some people are saying? Is AI going to destroy the role of pharmacists? Are robots going to take our jobs and make us irrelevant?! Let's dig in.
Do Robots Dream of Electric Drugs?
I have always loved sci-fi, and the clash between technology and humanity has fueled some great plot lines. Whether it's Blade Runner, Snow Crash, Cyberpunk, or Ghost in the Shell, the concept of merging technology into our world is omnipresent. Conversely, you have worlds like Terminator, Dune, or Warhammer 40k, where AI is regarded as a dangerous being that is out to destroy us all. All praise to the Omnissiah, indeed.
In any event, one thing I've picked up on is that healthcare remains a problem in these futures, and that medical providers are often portrayed as a blend of technology and human care. I love a good plot point targeting what it means to be human. But you know what I've noticed?
There are no pharmacists in these future stories.
Nursing still exists, and so do doctors, though they're reimagined as ripperdocs or technicians wielding incredibly advanced technology, but whatever. The point stands. No one thinks of pharmacy.
So wait, what does this have to do with ChatGPT, you ask? Well, let's start with the hard stuff then.
ChatGPT & Panic
I'm a child of the AOL AIM days, and SmarterChild was the chatbot we all liked to pick on. Who knew we'd reach a point where its great-great-great-grandchild would tell us we must rethink our lives?
The prospect of AI has mostly been something we've seen on TV and rarely get to engage with in our daily lives, at least in an active, participatory way. Yes, IBM's Deep Blue crushed people at chess, and DeepMind's AlphaGo did the same at Go. We've seen Tesla and related AI autopilots as well. And yes, AI plays a role in almost all the apps and algorithms we touch daily.
But this past week? We got the chance to log into OpenAI's world of GPT-3 and take a spin with ChatGPT. And that blew our minds, right?
Let's ask a question, a silly one, like 'write me a poem or a rap song about XYZ.' Oh, cool, it gave a silly answer. Here's an example of an 'Ode to Aspirin.'
Ode to aspirin, wondrous pill
Relief from pain, our hearts you thrill
From headaches to heartaches, you are our guide
A blessing in medicine, a friend by our side

Your tiny white form belies your great power
To reduce fever and inflammation in our hour
Of need. You are a true lifesaver
And for that, we are forever your debtor

O aspirin, we raise a glass to you
For all the good that you do
In the world of human medicine
You are a true and trusted ally, so keen

To keep us healthy and free from pain
Long may you reign, aspirin, our dear friend
In the world of health and healing, you are a star
A shining beacon, shining bright from afar.
Simple enough. But eventually, we start with the harder questions. Like, write me a term paper due in the next 20 minutes; here's the prompt. And bam, ChatGPT has become a universal tool for creating a hodgepodge answer for all college students' needs and wants.
Then you have the programmers finding out it can help with coding. The writers are looking to generate outlines. The lawyers are drafting up documents. And others are putting it through its paces.
It does a good job, and it has really driven home for many people what the capabilities of AI could mean, because this is the first time many non-specialists have had a chance to use one. It also opens the philosophical question of what the end of the decade looks like: all those people talking about automation and AI taking our jobs, were they right?
My Personal Experience
I've seen some other AIs in the past, asked them questions, and been delightfully driven to existential ennui. Thankfully, I recovered and started asking what this meant for my career and profession. So I get the shock many feel the first time they play with ChatGPT. But playing with it has always left a tinge of trepidation in the back of my brain that I do not think will ever go away.
So what have I done with ChatGPT? Here's a list:
Tested the limits of my classroom assignments. Yep, I went there. The first step I took when I got access was to see if my students could use ChatGPT to do their classroom work. And, yeah, they could. I developed some prompts that could do the rudimentary, low-key assignments that make up anywhere between 10 and 20% of a class grade, such as weekly reflections or simple essays on a topic. And if you make the prompt address the key points I use on a rubric, it gets them pretty much right. A good student would make sure all issues are addressed before submitting. The fact that the AI creates different answers each time, and that you can further refine the prompt, leads me to believe that plagiarism software such as SafeAssign or Turnitin couldn't spot it. Unless the makers of GPT embed some means of detecting it, I doubt our internal tools would at this point. Have fun, colleagues!
What about harder assignments? I tried other items, like breaking down medical literature, a mock journal club, and creating good drug consults. And this is where I came across the core issue of ChatGPT: it's an elegant bullshit generator. That's why it could do the previous work so well, but it struggled when I asked it the core scientific stuff. That could also be a function of its training and its access to medical literature. It seems to make up references every now and then that I can't find. I even had an instance where I was asking it questions and realized it didn't know about certain medications or recent drug approvals, which limited its ability to generate the right answer. Which leads into the following issue.
I tried to make exam questions. Two can play at this game, right? If students want to use ChatGPT to do their work, I can use it to write my questions, bwahaha. After all, if you do online testing, such as with ExamSoft, students don't have access to the internet, so I can create as many questions as I want! But it was a mixed bag. Sometimes it made questions without the most updated information (such as recent medication approvals or indications) or contrary to standard guidelines. Is this also due to the platform not having seen these topics before? As I started feeding it examples of questions I had used, it got better. The bottom line is that I had to proof the questions and see what had to be changed, which is fine. I expect the system to be flawed, but the fact that it can do the initial legwork for me is impressive. A rough sketch of what scripting this kind of drafting could look like follows this list.
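For the technically curious, here is a minimal, hypothetical sketch of that workflow: send a prompt to a language-model API, get back a draft question, and hold it for human proofing. It assumes access to OpenAI's chat completions endpoint; the model name, prompt wording, and helper function are illustrative, not anything I actually deployed.

```python
# Hypothetical sketch only: draft an exam question with a language model,
# then hold it for manual review. Model name and prompt are illustrative.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]

def draft_exam_question(topic: str, rubric_points: list[str]) -> str:
    """Ask the model for one draft multiple-choice question on a topic."""
    prompt = (
        f"Write one multiple-choice exam question on {topic}. "
        f"Cover these rubric points: {', '.join(rubric_points)}. "
        "Give four answer choices and indicate the correct one."
    )
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",  # assumed model name
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.7,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    draft = draft_exam_question(
        "warfarin dosing and INR monitoring",
        ["target INR range", "common interactions", "reversal options"],
    )
    # Every draft still needs a faculty member or pharmacist to proof it
    # before it goes anywhere near an exam.
    print(draft)
```

Even in this toy form, the human review step is the whole point; the model just does the first-pass legwork.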
What's next then?
Considering my initial reactions, I want to test this further 'in the field,' so to speak. I plan on trialing it with some of my experiential pharmacy students. I want to investigate the following:
Can ChatGPT handle curbside consults?
How does it do with drug-related questions? It does fine for some things, but could we tailor the answer to the target audience (patient, HCPs, etc.)?
Can it generate patient education forms and new monographs?
Can it create patient instructions?
Can it create HCP newsletters and related media posts?
How does it write documentation?
Again, I don't expect this to be pretty or perfect. I stand by the view that because it exists, it demonstrates what may be feasible. It's the same rationale for why I didn't deride the Apple Watch when it added AFib detection; instead, I asked what the following decade would hold, and now we're in a remote patient monitoring (RPM) renaissance.
Overarching Thoughts for Pharmacy
Because this may be the first AI tool many pharmacists come across, we'll see more chatter about the implications of AI in pharmacy. Radiology and other health professions are having the same conversation. Whether it's image-processing AI or the ability to process patient data for sepsis triaging, a lot of this isn't new.
Nonetheless, I invite you to consider the following implications:
Does ChatGPT invite others to consider using AI as a means of conducting patient education? It's scalable, and I have no doubt there are teams out there asking themselves whether this is a tool to roll out in clinics and pharmacies to take the brunt of patients' questions about their health. If it's a hard question, push it to the pharmacist. But low tier? Automate. (A toy sketch of that triage idea follows this list.)
Imaging AI and verification. Same idea, and we are seeing it now. An eRx or a handwritten script comes in, AI auto-populates it and conducts the first-pass review. Refill? Have a tech sign off and let the automation handle filling and verifying it's the right medication. New medication? Do the first pass, double-check for possible interactions and other issues, then push to a remote pharmacist for final verification, and let the automation handle the rest.
Even getting away from directly dispensing a product, what about cognitive services? Do we start seeing AI handle dosing algorithms (like renal or antibiotic dosing) or route changes (IV to PO)? How about P&T and MUEs, etc.? Can AI or similar tools take that on? I think it will, definitely.
I think drug information products will incorporate things like smart chatbots to handle a lot of these issues. Imagine asking Lexicomp or UpToDate a question instead of searching for a topic and reading through the entry. 'Lexi, what's the dosing for a patient with XYZ?' Bam, you get an actual response. I think it will be fascinating.
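To make the 'low tier vs. pharmacist' idea above a bit more concrete, here is a toy sketch of how a triage rule might route incoming patient questions. The keyword list and escalation logic are invented purely for illustration; any real deployment would need clinical validation, far more nuance, and a pharmacist in the loop.

```python
# Toy sketch of the "low tier vs. pharmacist" triage idea.
# Keywords and rules are invented for illustration only.

ESCALATE_KEYWORDS = {
    "overdose", "pregnant", "pregnancy", "chest pain",
    "interaction", "allergic", "bleeding",
}

def triage_question(question: str) -> str:
    """Route a patient question to an automated answer or a pharmacist."""
    lowered = question.lower()
    if any(word in lowered for word in ESCALATE_KEYWORDS):
        return "pharmacist"  # hard or high-risk: a human handles it
    return "chatbot"         # low tier: draft an automated answer

if __name__ == "__main__":
    print(triage_question("Can I take aspirin with food?"))    # -> chatbot
    print(triage_question("Is ibuprofen safe in pregnancy?"))  # -> pharmacist
```

Obviously, a real system would lean on the model itself or a clinical ruleset rather than a keyword list, but the routing idea is the same: automate the easy questions, escalate the rest.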
So yes, AI poses an interesting conundrum from many angles. On the one hand, it challenges what and how we teach future learners and healthcare professionals. I can't see us refusing to introduce AI as a tool in education. I was told long ago that I wouldn't always have a calculator with me, and yet the smartphone and the internet are now ubiquitous in our lives; AI will similarly be embedded in many of the tools we currently use. We need to adjust to it.
We need to see our pharmacy organizations and related bodies take this on, whether it's AACP/ACPE adjusting how we teach or conduct training. I expect 'the great retraining' of the pharmacy workforce to occur this decade. Others need to get invested in regulations and standards for how these tools should be used. Pharmacists need to be at the table discussing the implications and utilization of these tools. Otherwise, opening Pandora's box will prove to be more hectic than we ever hoped for.