I’m talking with Michael Rowe on Thursday about ChatGPT and its impact on healthcare and education. (I’ll post the conversation on ParaDoxa afterward.)
Michael is an Associate Professor in Digital Innovation in Health & Social Care at Lincoln Uni in the UK and a long-time friend of the CPN.
What sort of post-conventional questions should I ask him when we talk about what large language models and AI have in store for healthcare education?
We have just started trialling AI digital clinical twin prediction models at the hospital I am affiliated with. The digital twin aims to collate thousands of presentations so that clinicians can give a treatment to the twin and understand the likely patient response before it is actually administered. The twin will also predict the date a patient will be medically fit enough to leave hospital, which will then inform rehabilitation and social care actions and timescales. What will this mean for developing "clinical reasoning" in the next generation of professionals? Conversely, what is the potential impingement on teaching person-centred care when the algorithm says no? Looking forward to the conversation. BW, Meri