There's an open letter circulating this week entitled "Stop the Uncritical Adoption of AI Technologies in Academia", initiated by academics at Radboud. I agree with quite a lot of it, but its demands are sweeping, and as a result I can't sign it. My reasons, and my response, are in this post. I'd be interested in your reaction too.
Here's my comment:
One of my big concerns is that they don't specify what it is they want to ban. We all know that "AI" is a widely-used term which is sometimes taken broadly and sometimes narrowly. I believe that the aim of the letter is to ban big-industry generative AI from the classroom (judging by the motivations they express). I sympathise with that. However, the authors have chosen to simplify this to the term "AI" without explanation, and that turns their demands into quite extreme ones.
The closest we get to a definition in this letter is AI "...such as chatbots, large language models, and related products." So is it only text generation they want to ban, ignoring image generation? Maybe, but that's probably too narrow. Do they want to ban all use of machine learning, even the teaching of machine learning? I very much doubt it, but it's easy to read the demands that way, since "AI" is understood by many people to include all of that.
(The title of the letter seems to exhibit nuance: "Stop the Uncritical Adoption of AI" is much better than "Stop the Adoption of AI". But the letter's demands go further.)
For myself, I'd like to
while I also want to
For me, the open letter's demand to "ban AI use in the classroom for student assignments" accounts for (a,b,c) but fails at (d) and (e).
I've avoided LLMs so far, but I don't believe I can achieve (d) without making some nuanced, tactical alterations to the course that I teach. I might use EduGenAI, or possibly an offline local LLM, since that helps with points (a) and (c) (though not completely).
So, from my own personal perspective: I don't agree with the open letter because it "throws the baby out with the bathwater": the "baby" being ML tools in the classroom, the "bathwater" being Big GenAI and LLM-induced de-skilling. I would prefer a strategy that deliberately guards against both of those without banning all "AI".
I also have in mind the fatalistic voices who will comment: "Students will use ChatGPT anyway" and "But ChatGPT is better than ____". I work at Tilburg University, whose motto is "Understanding Society". Surely that now includes understanding the societal context and implications of using LLMs, including the societal position of one LLM versus another. For me, tools like GPT-NL or EduGenAI should help to make this case. (Or offline LLMs?) We can disentangle LLMs as a tool from Big GenAI as an industry in the messaging we give to students.
I'm grateful to the letter authors for taking a stand, and for providing good food for thought.