ChatGPT and Education:
OpenAI has published a lengthy and detailed document on the ethical issues involved in the use of generative AI in education. It discusses the many ways this technology can serve as a tool for educators, as well as the guardrails that are needed.
We are entering a new era with generative AI. Some of the safeguards are reminiscent of issues we currently face with autonomous vehicles: that is, don't let the car drive itself unsupervised. By the same token, ChatGPT is a powerful technology that still requires a human in the loop.
They cite a variety of uses for generative AI in education, including help in drafting lesson plans and quizzes, use as a tutoring tool, and personalising existing material to different languages, interests and reading levels.
Key Points from the Document: The Good and Bad
- Generative AI is a useful tool. It should be considered to be a part of “literacy” in the broader sense, much like using a calculator or a computer.
- Personalisation. Generative AI can be used to create personalised teaching materials, draft lesson plans, and provide feedback on writing, but this personalisation carries risks such as privacy violations and bias.
- Understanding limitations: disclosure, testing, and plagiarism. Educators should understand the limitations of the technology and should disclose their use of an AI system. Generative AI should not be used on its own to test students. An AI text classifier may be helpful in detecting AI-generated content or plagiarism, but such tools must be used in concert with other, converging assessments.
- Accuracy and breadth of knowledge. The model is never guaranteed to produce results that are truthful, or even rational. It is only aware of the world up to the point in time to which its training data extends, and it is unlikely to perform well on subjects outside that data, since it has no knowledge of the world beyond it. For example, it cannot access the Internet.
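To make the personalisation point above concrete, here is a minimal sketch of how a prompt might be composed to ask a generative model to adapt existing material to a different language, reading level, and set of interests. The `build_prompt` helper and its parameter names are illustrative assumptions, not part of any real API; the resulting string would be sent to whatever model the educator is using.

```python
# Illustrative sketch: composing a personalisation prompt for a
# generative model. build_prompt and its parameters are hypothetical.

def build_prompt(material: str, language: str,
                 reading_level: str, interest: str) -> str:
    """Compose a prompt asking a model to adapt teaching material."""
    return (
        f"Rewrite the following teaching material in {language}, "
        f"at a {reading_level} reading level, using examples drawn "
        f"from {interest} where possible. Preserve the key facts.\n\n"
        f"Material:\n{material}"
    )

prompt = build_prompt(
    material="Photosynthesis converts sunlight into chemical energy.",
    language="Spanish",
    reading_level="grade 5",
    interest="football",
)
print(prompt)
```

Because such output is personalised per student, the same caveats from above apply: the adapted text still needs human review for accuracy and bias before it reaches the classroom.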