Post by Sean Meehan, Co-Director, Cromwell CTL and Director of Writing
Amidst the predictions that emerged with the explosion of generative AI tools into education this year, many have assumed that the tools are here to stay. Some have even welcomed the demise of traditional features of education (such as the essay or research paper) believed to be on the horizon. Given how rapidly these tools and technologies are evolving, it would seem difficult to be certain about anything staying put. But these tools are here; AI will be back on campus in the fall, returning for its sophomore year in college. More faculty are thus raising questions: “What is the impact of AI on teaching and learning? How should we respond in course policies and practices?” More than answers, I offer some continued thinking on the matter, at the heart of which is this assertion: whatever else we do, we (as educators, scholars, inquirers) need to keep asking questions. It is what we do well, and it is good practice for what Kevin Roose calls “machine age humanities.”
In a whirlwind Spring semester 2023, many faculty and students alike at Washington College were introduced to the AI chatbot ChatGPT (from OpenAI). We began to raise good and productive questions, as we did in our Learning about Machine Learning series of discussions and workshops, about the implications of AI-assisted teaching and learning. It seems reasonable to assume that, as the fast-paced evolution of AI continues, more of our students (particularly first year students) will be familiar with tools such as ChatGPT, or at least they will be AI-curious.
Resources. In 2023-2024, Cromwell CTL will continue to host campus-wide discussions and support inquiry regarding the opportunities and concerns with AI in education. We concluded the spring with a discussion focused on academic integrity and ways that the Honor Board and others might develop new guidelines for addressing and orienting students to these new tools and their appropriate use. We look forward to that conversation continuing. To serve as a point of reference, we have updated and will continue to curate this page on resources for Learning about Machine Learning.
For those wanting to do some further exploring and thinking about AI in your courses before September arrives, this is a good place to advance your inquiry: Georgetown University’s “ChatGPT and Artificial Intelligence Tools.” And for some specific questioning about “Writing and Learning to Write in the Age of AI,” I recommend Jane Rosenzweig, Director of the Harvard College Writing Center, who raises concerns and offers useful questioning about what happens to critical thinking skills, not merely writing skills, when the thinking process is automated by AI.
Should you address the use of AI tools in your syllabus? I recommend that you do. Certainly, if you prohibit AI-assistance, or consider its use in one form or another as potential plagiarism, it is important to make clear to students where you stand and what the prohibited uses might be. On the Georgetown page you’ll find some sample syllabus statements:
- If you have questions about what is permitted, please reach out to me.
- It is important to remember that ChatGPT and other AI tools are not a replacement for your own critical thinking and original ideas. The ultimate goal of this course and any tool used to submit work is to enhance your own learning and understanding, not to undermine it.
- As a college student, it is your responsibility to maintain the highest standards of academic integrity. This includes a) ensuring that all work submitted for grades is your own original work, and b) properly citing any sources that you use.
- Having AI write your paper constitutes plagiarism. If the source of the work is unclear, I will require you to meet with me to explain the ideas and your process.
Acknowledging AI as a resource. I expect students to acknowledge the use of any resource they have drawn on that is not directly cited in their bibliography, including AI tools. I require students to submit a brief “Acknowledgments” with final drafts of their writing projects; there students are also encouraged to thank and acknowledge other kinds of support, including from friends, family, Writing Center tutors, and sometimes their professor. I do this to shift the focus from plagiarism detection and violation to recognition that we generate and shape our ideas with support and ideas from others. I’m an avid reader of the “Acknowledgments” section of books.
My longer syllabus statement on Academic Integrity (with AI tools incorporated):
Washington College has the following policy regarding academic integrity and plagiarism: Plagiarism is defined by the Honor Code as “willfully presenting the language, ideas, or thoughts of another person as one’s original work.” Turning in someone else’s work as your own is plagiarism. Relying on other texts, resources, and intelligence (both human and artificial) to generate our work is basic to what we do as scholars and writers. Acknowledging those resources is also basic to what we do. For that reason, quoting or paraphrasing or otherwise using the words or ideas of other people and resources (such as Wikipedia or AI tools such as ChatGPT) without properly acknowledging your source is also a problem. If you ever have any question at all about whether you are using a source correctly, ask me about it to learn more. (We will be talking and learning more about the rhetorical uses and potential abuses of artificial intelligence tools like ChatGPT). Submitting a paper for this class that contains all or part of a paper that you submitted in another class, without the permission of both professors involved, is also a violation of the Honor Code. A student found guilty of plagiarism may fail the assignment or the course, and may be referred to the Honor Board for further adjudication. Whenever you submit writing projects for this course, you will include as part of a preface an acknowledgment of the resources you have relied upon and a statement that your work has been completed in accordance with the Honor Code.
As I wrote in Inside Higher Ed, ChatGPT expects all writers using the tool to acknowledge its use: the tool is “co-author,” but the user is ultimately responsible as primary author of the work. If users (students, scholars, others) are uncomfortable acknowledging the use of a tool like this, that strikes me as a good reason not to use it. For students and others who do want to use the tool for assistance with inquiry and the writing process, I think we can guide them to explore appropriate uses and learn more about its affordances and constraints.
Metacognition. Some colleagues have shared a further step of acknowledgment: requiring students to submit a transcript of their AI chat with any work that has been informed by it. I used this approach in an English 101 assignment that is part of the course final revision project. As a way to initiate the revision of an earlier essay from the course, students have the option to consult AI tools such as ChatGPT or Grammarly as they develop a revision plan; they turn in a reflection on the uses and limitations of the tool, and in class we discuss the revision strategies students are generating, noting where the tools are more and less useful. I call this a “metacognition draft.”
This draft also gives students the option to use other tools and techniques we explore in the course, including re-outlining the essay with their own comments (a Reverse Outline) and a Writing Center consultation with a tutor.
More students selected the human intelligence options over the AI. Either way, the value of the assignment lies in the reflection and metacognition students generate. The AI tools, with guidelines, provided ample opportunity for further reflection and discussion; so did the Writing Center visit.
For some writers, the tools might make the writing process more effective or efficient. For others, they might help by extending and amplifying the thinking, improving the process not by making it easier but by complicating it. In either case, it matters that the writer, the student, initiates the process of inquiry (questions), thinking (rethinking), and writing (revision), and in the end acknowledges, as author, what they have done and how their work has been informed. This seems crucial if generative AI is going to remain generative for learning.