What might your research/study/teaching/work look like soon?
Late last week, Greg Brockman from OpenAI showed what might be coming to a computer near you soon.
An aside…
There will continue to be a lot of thought given to the ethics of generative AI, but I wonder whether there’s also a more quotidian shift taking place here that will have an equally profound effect on our everyday work/life?
Tools like ChatGPT break down barriers between users and technology (so that you can now get a vast catalogue of new things done without needing the skills of a computer coder), while also relying on a coterie of highly skilled coders continuing to invent and develop the tools that AI can then assemble.
So the assumption is that we will still need highly skilled (human) coders to do the high-end development work. And this is reassuring because what it implies is that experts and specialists will still be needed even when AI takes the grunt work away.
But, what will happen to the structure of all work if Large Language Models (LLMs) are able to build the code as well?
Recently, I’ve been playing with tools like this, which allow you to ask an app to generate other apps to answer specific questions.
I asked it to build an app for the ISIH conference in 2024 that showed people cafes and restaurants within five kilometres of where they were staying, ranked by consumer reviews in Google Maps. It did it in seconds, and it’s amazing.
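To give a sense of the kind of logic an app like that would contain, here is a minimal sketch in Python. This is not the generated app itself or the Google Maps API — the place data, coordinates, and ratings below are hypothetical stand-ins for live map results:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on Earth, in kilometres
    r = 6371.0  # mean Earth radius (km)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical sample data standing in for live Google Maps results
places = [
    {"name": "Cafe A", "lat": 51.505, "lon": -0.090, "rating": 4.6},
    {"name": "Bistro B", "lat": 51.530, "lon": -0.120, "rating": 4.8},
    {"name": "Diner C", "lat": 51.600, "lon": -0.200, "rating": 4.9},  # outside the radius
]

def nearby_by_rating(places, lat, lon, radius_km=5.0):
    # Keep only places within the radius, best-reviewed first
    close = [p for p in places
             if haversine_km(lat, lon, p["lat"], p["lon"]) <= radius_km]
    return sorted(close, key=lambda p: p["rating"], reverse=True)

hotel = (51.5074, -0.1278)  # hypothetical conference hotel location
for p in nearby_by_rating(places, *hotel):
    print(p["name"], p["rating"])
```

The point of the anecdote stands either way: a filter-and-sort over geocoded data like this is exactly the sort of routine task that once needed a programmer and can now be conjured in seconds.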
Although it’s a small example, this might actually be how a lot of the problems of professional ‘hollowing-out’ will be resolved.
Hollowing out your profession
Hollowing out is the idea that as the mundane and everyday parts of your work increasingly shift to people and machines that are much cheaper to train and employ than you are, it becomes harder to become an expert.
Conventional wisdom held that the 10,000 hours of low-level, routine, and instrumental work were a necessary foundation for the specialised work that followed.
So how does one become a specialist and an expert in a field when those 10,000 hours of grunt work are performed by off-the-shelf apps, low-paid assistants, machines, and robots?
The hollowing-out problem, then, is not so much one of professional deskilling but rather a break in the traditional pathway to specialisation.
LLMs are causing enormous angst in the coding community because many programmers now feel that their work has been demystified: people like you and me can just ‘talk’ to our computers and ask them to build whatever we want.
I think that a lot of people believe that healthcare and education will be largely immune to this kind of disruption.
But I’m not convinced that there is much in the field of healthcare or health professional education that humans will still control in 20 years’ time. I’m not one of those who believe that haptic dexterity or empathy work are sovereign human domains.
I really think that LLMs will have an enormous impact on the way we teach, talk, and touch in healthcare in the future.