Updated with new material 14 April 2023
Before he became many other things, Woody Allen was a stand-up comedian, and he once joked that he’d been on a speed-reading course where they’d read Tolstoy’s epic novel War and Peace: “It’s about Russia.”
In today’s infolopolis people use TL;DR (too long; didn’t read) for pretty much the same purpose. Just give me the bullet points, they say.
Well, thankfully, we now have a tool — or perhaps more accurately, a cornucopia of tools multiplying before our very eyes like rabbits — to do almost anything for us that once required even a modicum of human intellectual labour.
Evil would be very pleased with this day’s work.
But don’t misunderstand me, I’m not some kind of Luddite curmudgeon who sees the emergence of Large Language Models (LLMs) like ChatGPT (Chat Generative Pre-trained Transformer) as equivalent to the fall of man. Quite the opposite, in fact. So, let me say from the outset that I think these tools are completely awesome.
Beguiled as I am by shiny objects, the sheer scale of possibilities of these tools makes me wonder what might be possible next week, never mind next year.
But I think, like most people, I’ve been a little overwhelmed by the speed at which this has come and what it might all mean.
So I turned to my good friend Michael Rowe, Associate Professor of Digital Innovation in Health & Social Care at the University of Lincoln in the UK, for some advice and clarification.
Michael has a deep and profound appreciation for digital technology and its potential for health education and professional practice. So I thought I would share some of those thoughts and ideas here.
Firstly, here’s a one-hour talk Michael posted on YouTube explaining what LLMs and ChatGPT are and what they might mean for future health professional education [Highly recommended].
Michael’s interview
And here’s an audio recording of the one-hour conversation Michael and I had around LLMs and ChatGPT and some of the post-critical implications of these developments.
The headline
‘We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT…Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system’ Link
Some tools
Elicit (summarises research articles)
Litmaps (AI-generated maps of research literature)
Cohere (similar to ChatGPT)
Galileo (design help)
Claude (rival to ChatGPT)
Midjourney and Stable Diffusion (image generators)
How to turn voice memos into first-draft essays
Get a modern neural network to auto-complete your thoughts
Summary of 12 new tools, with these two examples 1 & 2
Sari Azout’s personal ask me file, explained here, and related to this
ChatGPT will destroy Google searches in 2 years and this
Different kinds of information ChatGPT can give you
A tool to help with email marketing
Always appear like you’re looking at the camera
Quora trying out Poe
ChatGPT for students
A practical guide to ethical use of ChatGPT in essay writing
Dangers of sentient AI
‘The reason to regulate AI is not because the technology is out of control, but because human imagination is out of proportion’ Link
AI and the Transformation of the Human Spirit
How the humanities can disrupt AI
Henry Oliver arguing that ChatGPT is magic; ‘Arthur C. Clarke once said that any sufficiently advanced technology is indistinguishable from magic’.
The real thing or just a simulacrum?
Noam Chomsky on ChatGPT: It’s “Basically High-Tech Plagiarism” and “a Way of Avoiding Learning”
BuzzFeed to use ChatGPT, not journalists, to write content
‘Tangentially, AI, especially ChatGPT, seems related. In theory it could be one of those tools, providing clearer answers, but it’s actually fed on that same overflow of unvalidated information and provides the same incomplete or plain wrong answers. It’s also, perhaps, the opposite of what we need – for the above unease, anyway. The culture of productivity, capitalism’s demands, and these flows of information don’t leave us enough time to reflect, to slow down. It’s doable, but it’s like swimming against the current. And now we should remove another one of our tools for thinking? We should hand off to an AI our opportunities to think through writing? We need to speed that up too? Automate it? Optimize it? What happens to our time for thinking and reflection then? I love trying those things out, and they will be good for some things, but automating writing seems like another step in the wrong direction’ Link
ChatGPT is a complete con but a feature of a post-truth world
How ChatGPT robs students of motivation to write and think for themselves
The Expanding Dark Forest and Generative AI
ChatGPT, DALL-E 2 and the collapse of the creative process
Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend. ChatGPT’s melancholy role is that it is destined to imitate and can never have an authentic human experience, no matter how devalued and inconsequential the human experience may in time become.
This is what we humble humans can offer, that AI can only mimic, the transcendent journey of the artist that forever grapples with his or her own shortcomings. This is where human genius resides, deeply embedded within, yet reaching beyond, those limitations.
From an email on the ISCHP group:
“I am writing to seek advice on (1) how to differentiate survey responses completed by actual humans from those produced by AI bots; and (2) what can be done to block AI responses in the future.
My graduate advisee conducted a survey study using Qualtrics. They used the features Qualtrics offers to avoid AI responses, and yet their survey was still deeply affected by them.
Do you have any advice on how we can differentiate which responses are actual and valid and which are not? They have eliminated responses that were completed within an unreasonable amount of time or submitted from countries outside of the US, given this is a survey intended for US participants.
Also, if you conduct online survey research, what do you do to block such responses from AI?
Any advice is appreciated.”
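Neither rule can prove a response is human, but the two filters mentioned in the email — implausibly fast completion and out-of-country submission — are easy to apply as a first screening pass. Here is a minimal sketch; the field names (`duration_seconds`, `country`) and the two-minute threshold are my assumptions for illustration, not anything Qualtrics itself provides:

```python
# A rough sketch of the screening heuristics described in the email above:
# flag survey responses that were completed implausibly fast or submitted
# from outside the target country. Field names and threshold are hypothetical.

MIN_PLAUSIBLE_SECONDS = 120  # assumed minimum time for a genuine attempt
ALLOWED_COUNTRIES = {"US"}   # survey was intended for US participants

def is_suspect(response: dict) -> bool:
    """Return True if a response trips either screening rule."""
    too_fast = response["duration_seconds"] < MIN_PLAUSIBLE_SECONDS
    wrong_country = response["country"] not in ALLOWED_COUNTRIES
    return too_fast or wrong_country

responses = [
    {"id": 1, "duration_seconds": 45,  "country": "US"},   # too fast
    {"id": 2, "duration_seconds": 600, "country": "US"},   # plausible
    {"id": 3, "duration_seconds": 300, "country": "GB"},   # wrong country
]

kept = [r for r in responses if not is_suspect(r)]
print([r["id"] for r in kept])  # → [2]
```

Of course, this only removes the crudest cases; a careful bot can pace itself and route through a US address, which is exactly why the question in the email remains open.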
Positive possibilities
I love this project - Moby Dick and AI
“without any specialized prompt crafting, exceeds the passing score on United States Medical Licensing Examination (USMLE), by over 20 points” Link
ChatGPT Made Me Cry and Other Adventures in AI Land
ChatGPT is great you’re just using it wrong
Stable Diffusion could solve a gap in medical imaging data
‘We are on the verge of a seemingly Cambrian explosion of AI tools. ChatGPT was the starting pistol for an already teeming cluster of technologies to be released upon us. Humata.ai can read and discuss PDFs. Consensus can summarise entire fields of academic research. AIs can work in law firms. You can give Dreamix a video and a set of instructions and it will make you an entirely new video. The Poe app gives you rapid responses to any questions. And of course, Bing’s chat-bot search engine Sydney told a New York Times reporter that she loved him and that he was in an unhappy marriage’
DALL-E 2 and Midjourney can be a boon for industrial designers
AI, Foucault, and the disappearance of the human
Teaching Philosophy in a World with ChatGPT
The ChatGPT bot is causing panic now – but it’ll soon be as mundane a tool as Excel
A New AI Solution to Predict and Prevent Unnecessary Surgeries (from Paul Lagerman)
On the humanism of Anti-AI
‘None of this is to say that AI art can’t be interesting or useful. It just won’t be beautiful, or meaningful, or any of the other deep qualities people are drawn to in great art.
And that what we could get instead is a world full of creation that merely passes for beautiful - where things can be cool or interesting for a moment, but nothing has any enduring value. A world that looks good, and sounds good, and seems good, but just isn’t quite right in some hard-to-place way…’ from ‘Art and Proxies’ by Jamie Ryan
This is about AI art, but it's also about the hollow nature of so much culture and how AI could (will?) feed into that. Already Hollywood churns out sequels, prequels, and reboots at (seemingly) a higher rate than original works, reiterating and recombining the existing instead of bothering to try to create the risky new. AI does the same, not for risk-averse financial reasons, but because that's all it can do.
It ends on a weird note about the irreplaceable nature of human creative energy, but weird or not, I'm receptive to it. Link
Commentary on the Commentary
Uncertainty, Evidence, and the Integration of Machine Learning into Medical Practice
Releasing ChatGPT was a last resort
Can everyone kindly shut the fuck up about AI
Norms for philosophical publishing with AI
From John Luttig:
There’s nothing missionary built into the technology (AI) itself, unlike blockchains, which are trustless and thus anti-institutional. This could be a good thing: AI is an ideological blank canvas, and can be shaped to match human will.
From Ian Bogost:
Their creators haven’t helped, perhaps partly because they don’t know what these things are for either. OpenAI offers no framing for ChatGPT, presenting it as an experiment to help “make AI systems more natural to interact with,” a worthwhile but deeply unambitious goal. Absent further structure, it’s no surprise that ChatGPT’s users frame their own creations as either existential threats or perfected accomplishments.
I found this interview with David Holz, the founder of Midjourney to be really good:
Right now, people totally misunderstand what AI is. They see it as a tiger. A tiger is dangerous. It might eat me. It’s an adversary. And there’s danger in water, too — you can drown in it — but the danger of a flowing river of water is very different to the danger of a tiger. Water is dangerous, yes, but you can also swim in it, you can make boats, you can dam it and make electricity. Water is dangerous, but it’s also a driver of civilization, and we are better off as humans who know how to live with and work with water. It’s an opportunity. It has no will, it has no spite, and yes, you can drown in it, but that doesn’t mean we should ban water.
And when you discover a new source of water, it’s a really good thing. I think we, collectively as a species, have discovered a new source of water, and what Midjourney is trying to figure out is, okay, how do we use this for people? How do we teach people to swim? How do we make boats? How do we dam it up? How do we go from people who are scared of drowning to kids in the future who are surfing the wave?
Just published: The socio-economic argument for the human right to internet access
Abstract: This paper argues that Internet access should be recognised as a human right because it has become practically indispensable for having adequate opportunities to realise our socio-economic human rights. This argument is significant for a philosophically informed public understanding of the Internet and because it provides the basis for creating new duties. For instance, accepting a human right to Internet access minimally requires guaranteeing access for everyone and protecting Internet access and use from certain objectionable interferences (e.g. surveillance, censorship, online abuse). Realising this right thus requires creating an Internet that is crucially different from the one we currently have. The argument thus has wide-ranging implications. https://tinyurl.com/56jp4hra
Enjoyed this publication and discussion with Michael. There is much to discuss - as you both stated - regarding generative AI and its implications for healthcare. I still feel both the ethical and moral implications of AI have not been considered enough. Also, the socio-political aspect - which Dave touched upon - needs to be discussed much more. Who will be in control of the workings and the data created and collected by the software? Furthermore, capitalism - Silicon Valley being at the heart of it - certainly lends itself to potential corruption and exploitation in pursuit of the dollar.
So much potential for AI, if only we could ensure its development and implementation is for the betterment of society, the impoverished, our environment, and our future.