Hi Dave. Thanks for the links. I was especially interested in no. 7, about AI search citation being a problem. I was surprised because I generally find that citation works well for me. Will need to have a closer look at the article. Also, I apparently write like Arthur Clarke. I wonder if you and I are biased towards science fiction, or if the analyser is :-)
Hi Michael. You'll know better than me, but there does seem to be a lot of wildly contrasting opinion about AI. I read your piece about the costs of AI processing not being as great as people think, and then another piece worrying mightily about the environmental costs of so much energy use. I too find a lot of the citation stuff works well, and I think LLMs are a great analytical tool for comparing ideas, but others hate them. Perhaps the problem is more that some will accept things uncritically? But that's always been the case.
I've found that, typically, when someone has a poor outcome with AI, it's because they've used a naive prompt without providing enough context. But that doesn't seem to be the case with the study you shared. That's one of the challenges with AI: people have different experiences with the same models. This isn't unlike the kerfuffle when Google started personalising search results (https://en.wikipedia.org/wiki/Google_Personalized_Search#Reception); people were concerned that the same search terms returned different results, depending on what Google knew about you. And now personalised search is pretty standard.