Since the release of ChatGPT in late 2022, questions and concerns about the uses of generative artificial intelligence in education have exploded. As many of you may know, these large language models (LLMs) can be easily accessed as stand-alone research, writing, and image-generating tools from companies such as OpenAI (ChatGPT), Anthropic (Claude), Google (Bard), and others. The interface is simple and uncannily familiar: a chatbot conversation whose natural language processing makes it seem as though a human is generating the composition. For those who have not yet explored but are AI-curious, Poe collates several of these AI assistants, both free and premium, in one space.
At the same time, these new AI technologies are also quickly becoming integrated into digital tools and resources that are already familiar and fundamental to academic work: word processing, editing and translation, and internet search, for example. Microsoft and Google are leading players in AI development, and they are integrating these newer AI models into their existing platforms. Imagine Microsoft’s Office Assistant “Clippy,” brought back to life in the voice of HAL 9000: Hello, Dave, it looks like you are writing a thesis on Dickens and Derrida, would you like my help?
For example, Elicit pairs AI-assisted research with the Semantic Scholar database. Students were using Grammarly for writing and editing assistance and Google Translate for translation long before they heard of ChatGPT. And we have all been using more rudimentary forms of AI since our first Google search or spell-check correction. Ignoring AI, banning it entirely (as teachers), or leaving it unacknowledged (as users) is not really an option.
It is not a viable option, at least from the perspective of teaching and learning. I would instead frame the discussion of these newest technologies in composition and research around these questions: How might the uses and limitations of generative artificial intelligence, its affordances and constraints, best be understood in our educational contexts? What are ethically effective and ethically problematic uses in the context of learning?
Foundations for this ethical-educational perspective on AI were laid last spring in Cromwell CTL’s Learning about Machine Learning series, in the Practical Guide to Artificial Intelligence organized by Professor Kyle Wilson and students in Machine Learning, and in the fall Presidential Symposium discussion, The Human and the Machine.
The Washington College Honor Board has also responded to the new challenge of AI in education by updating its guidelines and policies on academic integrity with regard to the use of generative AI. More details and discussion about the changes are coming soon. Here is a draft of the new language that addresses AI use:
Unauthorized Use of AI: using AI software to generate ideas, text, or images and submitting them as one’s own work, without proper attribution and/or absent a clear statement of permission from an instructor.
How should we respond in our work? As a useful guide for generating classroom policies and assignment prompts that faculty can use in concert with the new Honor Code language, I recommend Justin Hodgson’s “Generative AI: An Ethics of Practice.” Dr. Hodgson (Indiana University) provides students and educators with an ethical framework for addressing a range of AI practices and policy areas: situations where use would and would not warrant disclosure, and where ethical consideration should be given, including but not limited to academic integrity.
Dr. Hodgson’s focus on recognizing and then acknowledging the particular kind of AI use, treating the source of the AI as a source, provides a foundation for ethical use that is also educational. It reminds us, educators and learners, that our “Policy” regarding the work students generate should be educational, and our “Practice” guiding the education should be ethical. We should all be thinking about the ideas and their origins in the work we generate. The ethical question (Am I appropriately or fairly using this source of information?) initiates the metacognition needed for effective learning (What is this source of information? Where has it informed my work? How might I expand upon it?). You might also find this Generative AI Acceptable Use Scale designed by Vera Cubero helpful in identifying for students where AI use is acceptable and where it needs to be disclosed.
I have thus integrated ethical guidelines and policies regarding AI-assisted work within my existing educational perspective on academic integrity. When students turn in compositions, I expect them to acknowledge sources of assistance and ideas that are not already included in a citation within their work. As we all know, our own ideas and inspiration emerge through other sources and resources. My model is the “Acknowledgments” page of a book or article that many academics read and write very deliberately. My examples: thanking a colleague for ideas or guidance on revision; family, friends, and pets for companionship; publishers and copyeditors for editing help. To which I add for students: acknowledging assistance from the Writing Center, the Library, or a peer, or the use of an AI assistant such as Grammarly or Quillbot or other kinds of resources.
You can read a description of my Acknowledgment guidelines here (you are welcome to use and adapt them) and see others from our colleagues. I’d love to add other policies and guidelines from Washington College colleagues to this collection. Please send me a sample or let me know if you’d be willing to share, and also if I can help you develop your guidelines. Here is a large collection of campus and individual course policies from universities around the country.
Recent peer-reviewed research on the Impact of AI on Student Agency raises the question of whether AI assistance in the writing process, and specifically in supporting the metacognitive work of revision and peer review, hinders learning. The initial findings demonstrate the effectiveness of AI-assisted prompts in improving the writing feedback process, but they also indicate that students tend to rely on AI assistance rather than learn from it. And when AI assistance was removed without human guidance and prompting in its place, the quality of the writing and feedback process declined.
AI detection tools emerged soon after ChatGPT. But the overwhelming consensus is that they are not reliable and, at least from the perspective of writing pedagogy, should not be relied upon. OpenAI, the developer of ChatGPT, created an AI detection tool but has since pulled it. And a number of universities have dropped or declined to adopt Turnitin’s AI detection.
Heuristics and Algorithms. We all use algorithms, rule-based procedures, in our teaching. Writing certainly has them. But if we are too algorithmic in our teaching, we don’t leave room for learning. That’s why heuristics, open-ended schemes and templates for decision-making and inquiry, can be more effective for learners. We shouldn’t view AI in education as a binary choice between using technologies and not using them; those tools are already in use, and writing was itself a technology long before Microsoft Word. Instead, I would argue that we should use our educational tools, and the curriculum we create, more heuristically. Our goal should be helping students ask questions, with and without AI-assisted resources, teaching for inquiry and not assigning for answers. This is a point I make in this op-ed, “Why Aren’t We Asking Questions of AI?”
One final practical, ethical, educational note for those who might be interested in incorporating an AI assistant into an assignment. Because of legitimate data privacy concerns with many of the AI tools out there, I make the AI assignment optional and include it alongside other non-AI options (for example, using Microsoft Track Changes or a visit to the Writing Center). If students do want to experiment with AI but don’t want to sign up for an account on their own, I invite them to use mine for the assignment.
Resources:
- The Catalyst: additional resources on AI and teaching technologies
- Generative AI Acceptable Use Scale
- Generative AI: An Ethics of Practice
- MLA-CCCC Joint Task Force on Writing and AI
- University Policies on Generative AI (archive)
- Why Aren’t We Asking Questions of AI?
–Sean Meehan, Director of Writing and Co-Director, Cromwell CTL
