ChatGPT and education

I started this post as somewhere to curate and make notes on what I’m reading about the OpenAI chat tool, ChatGPT, and its impact on education, assessment, and students. I’m not expecting any traffic here; it’s just something I’m doing to keep my thoughts in one place, with the list (hopefully) growing over time.

Since its release, GPT-3 (Generative Pre-trained Transformer 3) has … been used to create chatbots that can hold conversations with users and answer questions, demonstrating its ability to understand and respond to natural language inputs. It has also attracted significant attention and controversy due to its ability to generate realistic and coherent text, raising concerns about the potential uses and impacts of AI in the field of language processing.

Another application of GPT-3 is in content generation. … with some users reporting that the generated text is difficult to distinguish from text written by humans … This has led to concerns about the potential for GPT-3 to be used to create “fake news” or to manipulate public opinion.

Whatever happens on the technology side, this should serve as a wake-up call to university staff to think very carefully about the design of their assessments and ways to ensure that academic dishonesty is clearly explained to students and minimised.

Cotton, D., Cotton, P., & Shipway, J. R. (2023, January 10). Chatting and Cheating: Ensuring academic integrity in the era of ChatGPT.

If universities want to stay true to their missions of equity, inclusion and access, then we need to keep and develop these alternative assessments. The task now is to design assessment that incorporates AI-generated text. Not least because upon graduation, students will be using this technology in the workplace.

The real work for educators is to develop new rubrics that stay true to course learning objectives. GPT-3.5 can create a rubric in less than a minute. But instructors will need training, and time to develop their understanding and skills and to modify their teaching materials. Many already teach this way. We just have another reason to double down now.

Librarians, writing centres and centres for teaching and learning are higher education’s frontline workers … need training in recognising the use of AI-generated papers alongside workshops, tutorials and space for dialogue on how to integrate this software in the classroom.

Dec 9, 2022 – ‘ChatGPT and the rise of AI writers: how should higher education respond?’ by Nancy Gleason

What these doomsayers fail to acknowledge is that programs and services for solving math problems and writing college essays and research papers already exist—and have for some time.

Once we move past the fear-mongering, it’s not difficult to imagine how LLM software can be used to enhance higher education … LLMs (large language models) are surely not going to replace college or magazines or middle managers. But they do offer those and other domains a new instrument—that’s really the right word for it—with which to play with an unfathomable quantity of textual material.

Jan 10, 2023 – ‘ChatGPT: A Threat To Higher Education?’ by Dr Jason Wingard

That ChatGPT would blow apart university assessment reflects marking processes that are increasingly weighted toward coursework in place of more demanding and stressful exams. Many universities offer students several resubmission opportunities should they fail. That is to say nothing of the increasing reliance on multiple choice quizzes that are already marked automatically, or the mania in the humanities and social sciences for alternative forms of assessment.

Dec 9, 2022 – ‘ChatGPT: a morbid symptom of our declining universities’ by Philip Cunliffe

In London, one academic tested it against a 2022 exam question and said the AI’s answer was “coherent, comprehensive and sticks to the points, something students often fail to do”, adding he would have to “set a different kind of exam” or deprive students of internet access for future exams.

The University of Sydney’s latest academic integrity policy now specifically mentions “generating content using artificial intelligence” as a form of cheating.

Jan 10, 2023 – ‘Australian universities to return to ‘pen and paper’ exams after students caught using AI to write essays’ by Caitlin Cassidy

Like it or not, AI-powered computation tools for written content, image generation and coding are here to stay. Aspects of them will soon be integrated into apps like Microsoft Office. The key is to understand their shortcomings and weak points as well as their strengths. We should all be aware, for example, that ChatGPT’s output can be poorly argued, out of date and factually inaccurate.

We don’t need to revert to in-person exams: this is a great opportunity for the sector to explore new assessment techniques that measure learners on critical thinking, problem-solving and reasoning skills rather than essay-writing abilities. Factual knowledge can be assessed during the learning process, while the application of that knowledge could be tested in project work.

Jan 10, 2023 – ‘Does ChatGPT mean the end of the essay as an assessment tool?’ by JISC

Putting on my pointy hat of pessimism, here’s how I think it will pan out. The machines will come for much academic work first – essays, PhDs, boring scholarly texts (unsurprisingly it can churn these out right now). Fanfic is instantly doomed, as are self-published novels. Next will be low-level journalism, copywriting, marketing, legalese, tech writing; then high-level journalism will go, along with genre fiction, history, biography, screenplays, TV drama, drama, until eventually a computer will be able to write something like Ulysses, only better. The only prompt will be ‘write a long amazing novel on whatever’.

Jan 10, 2023 – ‘AI is the end of writing’ by Sean Thomas

One conclusion we can take from the current debates is that, despite not being perfect, tools like ChatGPT are improving and are here to stay. Moreover, this is only a small step further than the AI we already expect students to use in their essays, such as spelling and grammar checkers in Microsoft Word, or apps like Grammarly.

The growth of AI-generated original content could mean that the student essay will not be a reliable way of assessing learning for much longer. The impact of this will vary from field to field, but might have particular relevance for degrees that include professional qualifications, like the qualifying law degree (QLD). Professional and regulatory bodies might do well to work with higher education institutions to (re)consider what it means to learn and understand in a world where AI can generate the content for us, and for our students. HEIs might also want to urgently consider the boundaries and borderlands of academic misconduct in an age of AI. If a student uses AI to generate an (original) essay outline and then fleshes out their answer, does this constitute academic misconduct? How much AI is too much?

Jan 12, 2023 – ‘Hype, or the future of teaching and learning? 3 Limits to AI’s ability to write student essays’ by Claire Williams

To harness the potential and avert the risks of OpenAI’s new chat bot, academics should think a few years out, invite students into the conversation and—most of all—experiment, not panic.

Jan 12, 2023 – ‘ChatGPT Advice Academics Can Use Now’ by Susan D’Agostino

Though ChatGPT marks a huge step forward in the evolution of AI text generation, it is not infallible. “It may sound very plausible, but the more detail or facts you need as part of your question, the more likely it is that the algorithm will produce something that looks good, but is completely wrong,” said Michael Draper, professor in legal education at the University of Swansea and an expert on academic integrity and cheating.

He said universities could try to ban it, as they ban other forms of cheating like essay mills, which students pay to produce an original piece of work that they then submit. Draper said: “The other alternative is that you allow students to use it. If we’re preparing students for the outside world of work and if in the workplace this sort of technology is given to us, then I think we need to embrace it rather than ban it.”

Jan 13, 2023 – ‘Lecturers urged to review assessments in UK amid concerns over new AI tool’ by Sally Weale

[This practical guide has been put together to ‘increase efficiency & effectiveness for educators’ and their use and understanding of the rise of ChatGPT]

Jan 15, 2023 – ‘ChatGPT for Educators: A practical guide’ by Dr Philippa Hardman

This is a technology that is already two steps ahead of our attempts to contain it in some form of meaningful, solid assessment strategy. In writing this piece I have avoided the possible option that we ‘do nothing’ in response to ChatGPT. Considering how slow Universities can be to implement change this is a possibility, but it is not a viable or sustainable choice. Unless we confront the implications of AI for teaching and learning, and embrace it as a part of our policies and pedagogies to develop critical thinking in an AI world, then we really will start to lose the value of a University education.

Jan 16, 2023 – ‘ChatGPT and the Future of University Assessment’ by Kate Lindsay

Of course, it may be easier and cheaper to use ChatGPT than other ways of getting around academic integrity, but the fact remains that students who are determined to cheat will find ways to do so. Nothing has really changed; academic integrity is difficult to police and always has been.

For as long as there has been high-stakes assessment in education, there has been cheating. ChatGPT may make it a little easier for students to cheat, and a little harder for us to catch them if they do, but it doesn’t fundamentally change the integrity dynamics in higher education. The best ways of thwarting cheating have never been focused on policing and enforcement; they have been about integrity training, creating a healthy campus culture and reducing incentives to cheat. There is no need to panic about ChatGPT; instead we can use this as an opportunity to modernise our thinking about academic integrity and ensure we’re using best practices in combating dishonesty in the classroom.

Jan 17, 2023 – ‘ChatGPT has arrived – and nothing has changed’ by Danny Oppenheimer

AI21 says it developed “grounding and attribution” algorithms to search for relevant sources to base Wordtune Spices’ responses on and present the source links alongside info. The tool can help write a thesis statement and main ideas, including explanations and counterarguments, as well as provide analogies and creative expressions like jokes and quotes.

Users can choose from different cues (e.g. “legal,” “health care”) to prompt Wordtune Spices to suggest rewrites appropriate for particular professional documents. But Goshen says that Spices was designed to address a wide array of use cases, from writing essays and working on blog posts to drafting financial reports.

Jan 17, 2023 – ‘AI21 Labs intros an AI writing assistant that cites its sources’ by Kyle Wiggers

…the embedding of AI (notably ChatGPT and its inevitable successors) in our teaching and assessment will erode student autonomy over their learning experiences, disenfranchise independent thinking and ultimately nullify freedom of expression.

What I suggest ought to be assessed (and which helps us navigate some of the issues posed by ChatGPT) is a record of the student’s personal, but academically justified, reflections, arguments, philosophising and negotiations. Student work would then be a genuine, warts-and-all record of the process of learning, rather than the submission of a “performative” product or “right argument”, as one of the students in my research so aptly put it. This would enable our students to become better thinkers.

Such an approach would be inclusive, personalised, flexible (its adaptability would make it perfect for aligning with Universal Design for Learning) and encourage students to become critical and reflective thinkers while fostering the “human” skills we often tout as being key for employability. Reflective, dialogic writing offers a way forward for working with and around AI-generated writing while placing agency and learning back where it belongs – in the hands of our students.

Jan 18, 2023 – ‘ChatGPT and AI writers: a threat to student agency and free will?’ by Adrian Wallbank

[The students] may doubt that their work or effort is being taken at face value as their own effort. Secondly, taken to the next logical level, they may doubt that any personalised feedback and grades they seemingly receive from a human educator may in fact have been generated by AI. This ‘weaponisation’ of AI can be by both sides looking for efficiency, or simply a crutch to prop up a lingering doubt that their own work is really any better than AI (yes, academics have imposter syndrome as much as students).

Jan 18, 2023 – ‘Artificial intelligence inserting doubt into the relationship between educators and learners’ By Louise Drumm

On the flip side, this technology could usher in a higher standard, the same way expectations differ for a two-hour written exam vs. an essay you have two months to write.

“traditional pen and paper exams do not represent the type of contexts and work that graduates will work in. Indeed, many students may be required to use AI or similar technologies in their future careers”.

Jan 18, 2023 – ‘AI Writing Tools Like ChatGPT Are the Future of Learning & No, It’s Not Cheating’ by Aleksandra Bliszczyk

So why shouldn’t students use AI tools such as ChatGPT to help them with take-home exams? Each teacher should make their own decision, but I think we need to work with this technology, just as we have learned to work with others over the years. In the 1970s my teachers wouldn’t let us have calculators because we would forget how to use slide rules and log tables – if you are under 55 this is a meaningless statement, because calculators saved us some tedious work and got rid of those previous techniques. If you want an assignment task which fits the description of take-home exams above, then you should keep this format, but adapt the parameters of the task.

Jan 18, 2023 – ‘Take-home exams’ by Rachel Forsyth

But GPTZero isn’t only an app for teachers to catch students using ChatGPT to write their essays. It is also meant to encourage users to write with creativity, personality, and originality, which [Edward Tian] argues an AI can’t do, according to NPR.

Jan 19, 2023 – ‘The Princeton student who built an app to detect ChatGPT plagiarism opposes banning the chatbot in schools’ by Aaron Mok
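As an aside on how detectors like GPTZero work: Tian has described scoring text on “perplexity” and “burstiness” — roughly, how predictable the text is to a language model, and how much sentence structure varies across a passage. Human writing tends to mix short and long sentences; machine output is often more uniform. As a toy illustration only (my own sketch, not GPTZero’s actual method or code), a burstiness-style measure can be approximated as the spread of sentence lengths:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.

    A crude proxy for 'burstiness': human prose tends to vary
    sentence length more than machine-generated text does.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

human = ("It rained. The storm that followed tore every last shingle "
         "from the roof and scattered them across three gardens. Silence.")
uniform = ("The cat sat on the mat today. The dog ran in the park today. "
           "The bird flew over the house today.")

# Varied sentence lengths score higher than uniform ones.
print(burstiness(human) > burstiness(uniform))  # prints True
```

Real detectors combine this kind of signal with model-based perplexity scores, and even then they produce false positives — which is worth remembering before treating any detector’s verdict as proof of misconduct.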

While there will always be a need for essays and written assignments – especially in the humanities, where they are essential to help students develop a critical voice – do we really need all students to be writing the same essays and responding to the same questions? Could we instead give them autonomy and agency and in doing so, help to make their assessments more interesting, inclusive and ultimately authentic?

As educators, we can even use ChatGPT directly to help us develop such assessments. So, rather than posing the question that generated the start of this article, I could instead present students with ChatGPT’s response alongside some marking instructions, and ask them to provide a critique on what grade the automated response deserves and why.

Such an assessment would be much more difficult to plagiarise. It would also invite the students to develop their critical thinking and feedback skills, both of which are essential when they graduate into the workforce, no matter what their profession. Alternatively, ChatGPT could be used to generate scenario-based tasks that require students to analyse and solve problems they may encounter in their future careers.

Jan 19, 2023 – ‘ChatGPT: students could use AI to cheat, but it’s a chance to rethink assessment altogether’ by Sam Illingworth

“The education system should adapt [to ChatGPT’s presence] by focusing more on understanding and creativity and using more expensive oral-based evaluations, like oral exams, or exams without permission to use technology,” Bengio said, adding that oral exams need not be done often. “When we get to that point where we can’t detect if a text is written by a machine or not, those machines should also be good enough to run the [oral] exams themselves, at least for the more frequent evaluations within a school term.”

Jan 20, 2023 – ‘AI Writing Detection: A Losing Battle Worth Fighting’ by Susan D’Agostino

In the context of education we are often torn between maintaining an historical connection with our subject area whilst at the same time researching the latest innovations within our disciplines. The historical connection often manifests itself as “key concepts” or “founding principles” of a subject area, and sometimes these might be holding us back from making real progress. In my experience of working with mathematics educators on curriculum design activities, they have generally managed to find a good balance here between developing the fundamental principles of maths education whilst making use of technology to answer increasingly challenging and complex problems. What this means in practice is that human effort is not wasted on tasks which can easily be completed by a computer-based system (calculator), but that the calculator (in its various forms of complexity) is designed into the education and learning so as to make use of it and increase the value and impact of the human effort.

Jan 21, 2023 – ‘The shifting value of human-centred effort. (Yes this is also about ChatGPT)!’ by Simon Thomson

There is no doubt this tool is a disruptor and will require us to pivot in much the same way as when Google Translate came on board. I note with interest that the NSW Department of Education has just blocked ChatGPT in schools until such time as a review can be undertaken. There is justifiable concern around cheating and plagiarism amongst students; however, I think the ban (and the hype it’s brought with it) may have had the unintended side effect of making the platform more popular than ever! As a temporary measure, school-based firewalls might help through the day, but there is nothing preventing students from accessing the platform on their phones or at home.

Jan 22, 2023 – ‘Getting on board with ChatGPT’ by Alison Dean

If the outcome of ChatGPT is a greater, deeper discussion about the ways in which we approach academic writing in the classroom, the ways in which we discuss what good writing does and how to assess it, then we should welcome it with open arms. We should be telling our undergraduates that good writing isn’t just about subject-verb agreement or avoiding grammatical errors—not even good academic writing. Good writing reminds us of our humanity, the humanity of others and all the ugly, beautiful ways in which we exist in the world.

Jan 23, 2023 – ‘Worried About ChatGPT? Don’t Be’ by Hetal Thaker

OpenAI’s artificial intelligence chatbot has passed the final exam of an MBA programme designed for Pennsylvania’s Wharton School, according to a new study. Professor Christian Terwiesch, who authored the study, noted that educators should be concerned that their students might be cheating on homework assignments and final exams using such AI chatbots.

The study noted that the AI displayed a “remarkable ability to automate some of the skills of highly compensated knowledge workers in general and specifically the knowledge workers in the jobs held by MBA graduates including analysts, managers and consultants”.

Jan 24, 2023 – ‘Concerns mount as ChatGPT passes MBA exam given by Wharton professor’ by Vishwam Sankaran

So, as the AI grows, helping teachers peer into its fog of technology will have increasing value. And because being able to detect AI-content is a strong deterrent to using it inappropriately, helping schools avoid massive academic fraud could become its own billion dollar business. Demand for tools to spot ChatGPT won’t be as big as the demand for ChatGPT itself. But there is no reason to doubt it will be pretty big nonetheless.

Based on what we’re being told and what we’ve already seen, the question is not if AI detection systems will exist, but what they’ll look like and when they’ll be in common use. And, of course, whether education providers can risk going without them. In most cases, they likely cannot.

Jan 24, 2023 – ‘The Big, Profitable Education Race To Detect ChatGPT’ by Derek Newton

While it’s understandable that the emergence of ChatGPT has sparked such speculation, it’s important to remember that technology is not inherently good or evil. ChatGPT itself is a neutral tool, and how it is used depends on the intentions of those who use it.

As educators, our fear of a small minority of students misusing technology should not stop us from embracing its productive potential for the majority of learners. To ensure that our intrepid and well-meaning students are not misapplying ChatGPT, our aim should be to educate them on the appropriate times, reasons, and methods for using it, such as aiding in a generative brainstorm or outlining an essay. Alongside considerations of proper use, students should be taught skills—like how to corroborate information and accurately cite sources—that relate to the ethical considerations and implications associated with using ChatGPT.

Jan 24, 2023 – ‘Leveraging ChatGPT: Practical Ideas for Educators’ by Zak Cohen

Calculators changed how mathematics were taught. Before calculators, often all that mattered was the end result: the solution. But, when calculators came, it became important to show how you had solved the problem — your method. Some experts have suggested that a similar thing could happen with academic essays, where they are no longer only evaluated on what they say but also on how students edit and improve a text generated by an AI — their method.

So, as with other AI technologies, humans are still required to review and correct AI-generated texts. That editing is often complicated and requires real knowledge of a subject, and that could be graded at universities in the future.

Jan 25, 2023 – ‘ChatGPT is Changing Education, AI Experts Say — But How?’ by Lukas Stock

Promoting a culture of ethics as it relates to AI could be an effective strategy to address bias within the AI systems that are used in the workplace. This could include updating the performance evaluation process in the workplace to intentionally introduce more ethical AI practices, as well as greater transparency and more discussions around the pitfalls of AI systems. It is no surprise that an AI chat bot like ChatGPT can generate biased responses. But AI technology only mirrors what has been programmed and what it has been trained on. We may be far from the point where we can truly rely on the responses generated from any AI system. While the possibilities of AI systems are limitless, they must be used with a grain of salt and an understanding of their potential limitations.

Jan 28, 2023 – ‘The Dark Side Of ChatGPT’ by Janice Gassam Asare

As we add to the corpus of knowledge, as we find better ways to get good information out there, ChatGPT will get better, and it could become a useful tool in our armouries to help students and graduates get the best information that we can. And it’s not bad at churning out a few paragraphs if you’re up against a deadline, as long as you’re not too fussy about it having any actual content. You can consider that an endorsement from me, Wouter Zweers, an actual expert, as described by ChatGPT itself.

Jan 30, 2023 – ‘Exploring the UK Graduate Labour Market with ChatGPT: An AI-Powered Analysis’ by Charlie Ball

Amid the negative headlines, what is seldom discussed is the opportunities educators have to use this AI technology to their advantage. If educators recognize ChatGPT’s potential, then use it to accelerate learning outcomes, and establish the necessary safeguards and boundaries for student use, they’ll begin to reap the benefits of the revolutionary technology. Here are the “dos and don’ts” educators should keep top of mind when exploring AI.

Educators and policymakers alike have an obligation to ensure students can equally access education technology tools, both in the classroom and at home, and ChatGPT is no exception. At a time when 21% of college students struggle to afford laptops and other technology essentials, implementing additional technologies like ChatGPT into course requirements represents a significant financial burden for many.

Educators themselves, not students, may actually have the most to gain from understanding and adopting AI tools like ChatGPT. However, seizing this opportunity will require educators not jumping to conclusions, driving open and honest dialogue, expanding the scope of their instruction, and establishing the necessary ethical safeguards.

Jan 31, 2023 – ‘ChatGPT Enters Education: The Dos & Don’ts for Educators’ by Jim Chilton

I fear that ChatGPT is the type of tech that students will get and use before a majority of educators know what it is, let alone use and teach it effectively. Then it will be labeled as an evil cheating app. What a waste until educators catch up. Every educator should download the free app and play with it for a while to get somewhat familiar with its capabilities. This should be done before it is thrust upon us by the powers that be in their infinite wisdom and flawless leadership.

It is rather arrogant to think any educator can disallow students from using a technology that every student has access to. We have gone through these thoughtless obstacles with almost every innovation. People are told to innovate and then their creations are blocked because they cause discomfort from the “tried and true”. Think about the debate over cell phones and students.

Feb 2, 2023 – ‘ChatGPT: Kill It, or Use It?’ by Tom Whitby

The norms around what constitutes plagiarism and cheating in regard to ChatGPT and other AI tools are still being debated. Students should be a part of that debate. These conversations will also help educators learn how best to use ChatGPT and other AI technology.

Feb 6, 2023 – ‘How to Prevent ChatGPT Cheating’ by Erik Ofgang

OpenAI did not pay for the data it scraped from the internet. The individuals, website owners and companies that produced it were not compensated. This is particularly noteworthy considering OpenAI was recently valued at US$29 billion, more than double its value in 2021.

Beyond this, OpenAI gathers a broad scope of other user information. According to the company’s privacy policy, it collects users’ IP address, browser type and settings, and data on users’ interactions with the site – including the type of content users engage with, features they use and actions they take.

Feb 8, 2023 – ‘ChatGPT is a data privacy nightmare. If you’ve ever posted online, you ought to be concerned’ by Uri Gal

Unsuspecting users who’ve been conditioned on Siri and Alexa assume that the smooth talking ChatGPT is somehow tapping into reliable sources of knowledge, but it can only draw on the (admittedly vast) proportion of the internet it ingested at training time.

Instead of reactionary solutionism, let us ask where the technologies are that people really need. Let us reclaim the idea of socially useful production, of technological developments that start from community needs. The post-Covid ‘new normal’ has turned out to involve both the normalisation of neural networks and a rise in necropolitics.

Feb 9, 2023 – ‘ChatGPT Is a Bullshit Generator Waging Class War’ by Dan McQuillan

Still, let’s be clear: Some of what you hear about AI is hype. People have been too quick to attribute human thinking to the current state of the art. The truth is that, for all their abilities, most of these AI products need more work. Case in point: When I searched with Microsoft’s new Bing chat, it sometimes gave me wrong information. I even caught it hallucinating about something that doesn’t exist.

But it’s improving rapidly. The current wave of AI products is built on a technical breakthrough called generative AI. It allows a computer to create images or words that look like they might have been made by a human. It does this by studying zillions of pictures and text samples, often scraped from the web. To use them, you feed them an English-language phrase telling them what you want to do. Figuring out the right prompt is becoming an art of its own.

Feb 10, 2023 – ‘Catch me up: How to try the new AI tech everyone is talking about’ by Geoffrey Fowler

Please feel free to add any links or comments in the comment section below (yes, I’ve opened up comments, again).

Photo by Jason Leung on Unsplash