Editorial
How would you like ChatGPT to respond?
By Toh Hsien Min
One person I have known professionally was an early enthusiast of GenAI. He used it for as many applications as he could find; in particular, he used it to write papers for work, and was proud of how much time he saved by doing so. The problem was that these papers ended up being judged banal. They looked the part but did not meet the need. There was no sophistication and no appreciation for nuance. No matter how much prompt over-engineering was put in, they could never precisely address the issues that his organisation faced. They were about as informative as LinkedIn posts. Ultimately - and this was both technically and essentially true - there was no thinking. When his career took a turn, I offered him a smidgen of advice. Even if you continue to use AI, I said, don't broadcast how much you use it. Because from a line manager's perspective, the prompt that engineers is "why don't I cut out the middleman?"

I've tried ChatGPT and its brethren before, of course. Just on Monday, I wondered if ChatGPT could find me an obscure fact about financial statements better than regular Google could (it couldn't). What I learned instead was that the last time I'd used ChatGPT was months ago. It just isn't that useful. Feeding large amounts of text into a Large Language Model (or LLM, the underlying technology of ChatGPT) means almost by design that its output will be slap-bang average.

Which is why my feelings about the recent kerfuffle over the Infocomm Media Development Authority (IMDA) exploring the possibility of using Singapore writing to train a Large Language Model were of bemusement. To recap, the IMDA thought it might be a good idea to run a survey among Singapore writers to gauge the community's feelings about this, in the spirit of consultation. Unfortunately, there were responses that took a contrary spirit, choosing instead to criticise how the survey did not include reams of small print about intellectual property and did not commit to paying writers.
It's a survey! Indeed, one option is simply to reply to the survey with these opinions. Flogging a survey for not containing one's own specific set of opinions seems to be spectacularly missing the point, unless of course one's agenda is to stifle consultation. And there is a degree to which preciousness about copyright and prohibitions on use in research (universities, anyone?) seems to conveniently ignore the very nature of human creativity, which is ineluctably influenced by all the material that an artist has ever seen. It's only a problem if someone else does it, huh? One experiment I would love to see, though, is for this possible LLM project to include a reverse auction for copyright holders to submit their work with their own price quotations. I imagine the outcome of that would be even more interesting than anything the LLM could produce.

QLRS doesn't have a policy around GenAI. (Which, given the above, is probably a good thing.) But we do look for work that is well above average, containing real surprise and improbable innovation. This doesn't preclude the possibility that someday someone will submit something written purely by machine, but it is hard to imagine that work with the compelling interpersonal shades and hues of Natalie Wang's 'A HDB Of One's Own', or Theophilus Kwek's wonderfully defamiliarising poem, or indeed Kristina Tom's attentive analysis of Jee Leong Koh's ambitious latest collection, could have been created by probability models. Instead of taking down artificial intelligence, we should be celebrating true creativity.

QLRS Vol. 23 No. 2 Apr 2024
Copyright © 2001-2024 The Authors