Tag: technology

  • No, It Isn’t

    I just finished reading Kate Manne’s “Yes, It Is Our Job As Professors To Stop Our Students Using ChatGPT.” Manne takes as her starting point Tressie McMillan Cottom’s claim in a recent New York Times interview that “Kids aren’t supposed to be able to resist a highly sophisticated, research-informed platform designed to make you use it. It is incumbent upon us, the adults, the society, to figure out what is the right amount of risk to expose kids to.” From there, Manne concludes: “trusting students not to use AI is putting responsibility where it does not belong—on the shoulders of young people who are straining under numerous stressors and pressures. The responsibility lies with me and other instructors to design classes where AI use is not a feasible option, or at least not a very tempting one.”

    I generally agree with the substance of Manne’s post, if not the framing, which makes it seem like we should be cops in the classroom (“our job is to design things that stop students from doing x”). That strikes me as the wrong approach. 

    First, on the matter of student responsibility: in Manne’s post, there is a narrowing from McMillan Cottom’s calling out “adults, society” to “professors.” Sure, professors are a part of society, but so are administrators, who need to be ruthlessly held to account for what they are doing to our classrooms. In my experience, professors love to critique and complain. They are very, very good at it. I would love to see the cantankerous energy my colleagues bring to department meetings and email lists directed up the chain in highly public ways. That work is vital for protecting the integrity of the classroom, or at the very least making the lives of admin uncomfortable until they get the message. 

    College students are also a part of society. At least, I’m more likely to think of college students as “adults” than “kids,” perhaps not adults in the same sense that instructors are, but a kind of adult nonetheless. College students bear responsibility for what they do in our classrooms, and that means teachers have the opportunity to engage them as partners in dealing with the incursion of AI into that space. So, I generally support professors implementing guardrails of various sorts, but we need to find ways to explain to students why we are doing what we are doing, and if possible, to involve students in the process of building those guardrails. 

    There will always be the student looking to cut corners, who doesn’t care about the material, etc. We won’t reach that student, at least not now, but who knows where they’ll be in five to ten years. Education is weird like that. But we have a good chance to reach the majority who are just as anxious and upset as we are about what AI is doing to their education. In fact, according to a Quinnipiac Poll from earlier this year, that’s a remarkable majority of Gen Z. That shared concern is a promising basis for building solidarity in the classroom. The fastest way to lose students is to implement poorly designed or poorly explained policies prohibiting students from doing certain things or using certain tools. “It’s my job to do this because you aren’t capable of doing it” seems unpersuasive to me.

    Second, on the matter of design: Manne writes: “I never asked my undergraduate students to write essays for me because the product inherently mattered. It was always the process: lose that, and the exercise loses its point. We have to devise new ones.” Yes, AI is forcing instructors to rethink assignments. But we should devote that energy to devising better, more meaningful assignments that emphasize process (multiple revisions, process letters, conferences, and so on) rather than to devising AI-proof assignments.

    A quick and dirty example: A lot of teachers are talking about reinstating the in-class essay as a means of AI-proofing writing assignments. I hate the in-class essay for lots of reasons, but here is one way you could make it about process rather than AI: Assign an in-class essay as a credit/no-credit draft that serves as the basis for a revision. Then assign a process letter that asks the student to explain not just how but also why they did what they did. The student then gets a separate grade for the letter and another for the revised essay. You could even weight the process letter more heavily than either essay. Yes, they may use AI in the revision, but the letter would get them to think about the process, and AI wouldn’t be able to provide a justification for the revision. If you’re worried about them using AI to write the process letter, then devote a class session to having them write that letter by hand.

    I’m not saying that’s a perfect assignment, but I think it would be easier to sell students on the value of process with that assignment than with a traditional in-class essay assigned as a means of preventing AI usage. Treating students as if they were either cheaters or infants with no willpower sets up an adversarial context in which solidarity becomes difficult to build. Manne and I are probably in agreement on that, but framing our job as designing “AI-proof” assignments seems like the wrong way to go about it.

    Now, a qualification: In the NYTimes piece, McMillan Cottom and her interlocutors point out that there isn’t going to be a one-size-fits-all approach to dealing with AI in the classroom. Responses will need to be sensitive to ages and grade levels. Other variables, such as institutional settings and class sizes, will matter too. I’m relatively privileged, working at an institution with motivated, high-achieving students and classes that range from 20 to 35 students. Still, with three classes and multiple preps each quarter (and a service workload compounded by stunningly byzantine bureaucratic procedures), there are limits on what I can do. In the NYTimes piece, Jessica Grose’s example of the assignment that has students form community discussion groups for an Ursula K. Le Guin novel would require so much extra work for me that it’s practically impossible. At some of the institutions where my friends work, it would be even harder to implement, and not because of the students. So there is a bigger question here about labor and working conditions in general, and the matter of AI in the classroom shouldn’t be divorced from it. Hence my earlier point about teachers finding ways to push back against admin. To that end, the AAUP recently issued an important report on “Artificial Intelligence and Academic Professions” that provides some places to start.

    I’ll have more to say on this subject in the future, I’m sure, but until then I’m gonna try to enjoy these last two weeks of summer before my term starts up again. Best of luck to everyone as we head back into the classroom!