ChatGPT Is Already Upending Campus Practices. Colleges Are Rushing to Respond.

It’s hard to believe that ChatGPT appeared on the scene just three months ago, promising to transform how we write. The chatbot, easy to use and trained on vast amounts of digital text, is now pervasive. Higher education, rarely quick about anything, is still trying to comprehend the scope of its likely impact on teaching — and how it should respond.

ChatGPT, which can produce essays, poems, prompts, contracts, lecture notes, and computer code, among other things, has stunned people with its fluidity, although not always its accuracy or creativity. To do this work it runs on a “large language model,” a word predictor that has been trained on enormous amounts of data. Similar generative artificial-intelligence systems allow users to create music and make art.

Many academics see these tools as a danger to authentic learning, fearing that students will take shortcuts to avoid the difficulty of coming up with original ideas, organizing their thoughts, or demonstrating their knowledge. Ask ChatGPT to write a few paragraphs, for example, on how Jean Piaget’s theories on childhood development apply to our age of anxiety and it can do that.

Other professors are enthusiastic, or at least intrigued, by the possibility of incorporating generative AI into academic life. Those same tools can help students — and professors — brainstorm, kick-start an essay, explain a confusing idea, and smooth out awkward first drafts. Equally important, these faculty members argue, is their responsibility to prepare students for a world in which these technologies will be incorporated into everyday life, helping to produce everything from a professional email to a legal contract.

But skeptics and fans alike still have to wrestle with the same set of complicated questions. Should instructors be redesigning their assignments and tests to reduce the likelihood that students will present the work of AI as their own? What guidance should students receive about this technology, given that one professor might ban AI tools and another encourage their use? Do academic-integrity policies need to be rewritten? Is it OK to use AI detectors? Should new coursework on AI be added and, if so, what form should it take?

For many, this is a head-spinning moment.

“I really think that artificial-intelligence tools present the greatest creative disruption to learning that we’ve seen in my lifetime,” says Sarah Eaton, an associate professor of education at the University of Calgary who studies academic integrity.

Colleges are responding by creating campuswide committees. Teaching centers are rolling out workshops. And some professors have leapt out front, producing newsletters, creating explainer videos, and crowdsourcing resources and classroom policies.

The one thing that academics can’t afford to do, teaching and tech experts say, is ignore what’s happening. Sooner or later, the technology will catch up with them, whether they encounter a student at the end of the semester who may have used it inappropriately, or realize that it’s shaping their discipline and their students’ futures in unstoppable ways. A recent poll of more than 1,000 members of Educause, a nonprofit focused on technology in higher education, found that 37 percent of those surveyed said AI is already affecting undergraduate teaching, and 30 percent said it is having an impact on faculty development.

“A lot of times when any technology comes out, even when there are really great and valid uses, there’s this very strong pushback: Oh, we have to change everything that we do,” says Youngmoo Kim, an engineering professor who sits on a new committee at Drexel University charged with creating universitywide guidance on AI. “Well, guess what? You’re in higher education. Of course you have to change everything you do. That’s the story of higher education.”

Serge Onyper, an associate professor of psychology at St. Lawrence University, began to incorporate ChatGPT into his teaching this semester. After experimenting to see how well it could produce an undergraduate research paper, he has become a proponent of using large language models — with guardrails. One thing ChatGPT does particularly well, he believes, is help students learn the “basic building blocks” of effective academic writing.

“What is good writing in the sciences?” he asks. “It’s writing where the argument is clear, where it’s evidence based and where it includes some analysis. And ChatGPT can do that,” he says. “It’s sort of frustrating how good it is at those basic tenets of argumentative writing.”

In his first-year course on the neuroscience of stress, the focus is on writing an essay that includes a thesis and evidence. His goal is also to help students reframe stress as a friend, Onyper says. So he asks students to think of the benefits of stress on their own, then as a group, and finally to use ChatGPT to see what it comes up with.

Onyper says working in that order helps students see that their own ideas are valuable, but that they can also use ChatGPT to brainstorm further. It should never be a substitute for their own thinking, he tells them: “This is where the lived experience can be important.” He is also fine with students whose first language is not English running their writing through the program to produce cleaner copy. He is more interested in their ideas, he says, than the fluidity of their prose.

Ryan Baker similarly invites his students to use ChatGPT. Baker, a professor in the University of Pennsylvania’s Graduate School of Education whose courses focus on data and learning analytics or educational technology, states that students can use a variety of tools “in totally unrestricted fashion.” That also includes Dall-E, which produces images from text, and GitHub Copilot, which produces code. Baker says mastering those technologies to produce the outcomes he’s looking for is a form of learning. He cautions students that such tools are often unreliable and that their use must be cited, but even so, he writes in his course policy that the use of such models is encouraged, “as it may make it possible for you to submit assignments with higher quality, in less time.”

Michael Dennin, vice provost for teaching and learning at the University of California at Irvine, expects to see a lot of experimentation on his campus as instructors sort out what tools are appropriate to use at each stage of a student’s career. It reminds him of what his mother, a high-school math teacher, went through when graphing calculators were introduced. The initial reaction was to ban them; the right answer, he says, was to embrace and use them to enhance learning. “It was a multiyear process with a lot of trying and testing and evaluating and assessing.”

Similarly, he anticipates a variety of approaches on his campus. Professors who never before considered flipped classrooms — where students spend class time working on problems or projects rather than listening to lectures — might give the approach a try, to ensure that students are not outsourcing the work to AI. Wherever they land on the use of such tools, Dennin says, it’s important for professors to explain their reasoning: when they think ChatGPT might be diminishing students’ learning, for example, or where and how they feel that it’s OK to use it.

Anna Mills, an English instructor at the College of Marin, says academics should also consider the ways that generative AI can pose risks to some students.

On the one hand, these programs can serve as free and easy-to-use study guides and research tools, or help non-native speakers fix writing mistakes. On the other hand, struggling students may fall back on what the tools produce rather than using their own voice and skill. Mills says that she’s fallen into that trap herself: auto-generating text that at first glance seems pretty good. “Then later, when I go back and look at it, I realize that it’s not sound,” she says. “But on first glance because it’s so fluent and authoritative, even I had thought, okay, yeah, that’s decent.”

Mills, who provided feedback to OpenAI — which developed ChatGPT — on its guidance for educators, notes that the organization cautions that users need quite a bit of expertise to verify its recommendations. “So the student who is using it because they lack the expertise,” she says, “is exactly the student who is not ready to assess what it’s doing critically.”

Professors’ concerns about cheating with AI also run the gamut. Some argue that it’s not worth the time spent ferreting out a few cheaters and would rather focus their energy on students who are there to learn. Others say they can’t afford to look the other way.

As chair of the anatomy and physiology department at Ivy Tech Community College, in Bloomington, Ind., Daniel James O’Neill oversees what he says may be the largest introductory-anatomy course in the country offered at the community-college level. Ivy Tech has 19 campuses and a large online unit. This two-semester course and related courses, he notes, are gateways into nursing and allied-health professions.

“There’s tremendous pressure on these students to try to get through this. Their livelihoods are dependent on it,” he says. “I would compare this to using steroids in baseball. If you don’t ban steroids in baseball, then the reality is every player has to use them. Even worse than that, if you ban them but don’t enforce it, what you actually do is create a situation where you weed out all of the honest players.”

There has long been a “manageable but significant” amount of cheating in the course, he notes, with an average of about one out of every 15 assignments caught in a standard plagiarism check. He expects that ChatGPT will only ramp up the pressure to cheat.

A tool that effectively detected cheating with ChatGPT would be a “game changer,” he says. Until one is developed, he needs to think seriously about reducing the likelihood that students can use AI tools to complete their work. That may mean significantly changing or eliminating an end-of-term paper that he considers a valuable assignment.

While he hopes not to go that route, he also says he can’t afford to simply ignore the few who cheat. The argument that instructors should just focus on the students who are there to truly learn underestimates the stress that the honest students will feel when they start ranking behind those who cheat, he says. “It’s real and it’s a moral and ethical issue.”

It’s hard to know how widely students are using ChatGPT, beyond playing around with it. Stanford University’s student newspaper, The Stanford Daily, ran an anonymous poll in January that has gotten some national attention.

Of more than 4,000 Stanford students who responded (which the newspaper noted could be an inflated figure), 17 percent said they had used ChatGPT in their fall-quarter courses. Nearly 60 percent of that group used it for brainstorming and outlining; 30 percent used it to help answer multiple-choice questions; 7 percent submitted edited material written by ChatGPT; and 6 percent submitted unedited material written by the chatbot.

As professors navigate these choppy waters, Eaton, the academic-integrity expert, cautions against trying to ban the use of ChatGPT entirely.

That, she says, “is not only futile but probably ultimately irresponsible.” Many industries are beginning to adapt to the use of these tools, which are also being blended into other products, like apps and search engines. Better to teach students what they are — with all of their flaws, possibilities, and ethical challenges — than to ignore them.

Meanwhile, detection software is a work in progress. GPTZero and Turnitin claim to be able to detect AI writing with a high degree of accuracy. OpenAI has developed its own detector, though the company reports that it correctly identifies AI-written text only 26 percent of the time. Teaching experts question whether any detector is yet reliable enough to support a charge of an academic-integrity violation.

And there’s another twist: If a professor runs students’ work through a detector without informing them in advance, that could be an academic-integrity violation in itself. “If we haven’t disclosed to students that we’re going to be using detection tools, then we’re also culpable of deception,” says Eaton. The student could then appeal the decision on grounds of deceptive assessment, “and they would probably win.”

Marc Watkins, a lecturer in the department of writing and rhetoric at the University of Mississippi who has been part of an AI working group on campus since last summer, cautions faculty and administrators to think carefully about whatever detection tools they use.

Established plagiarism-detection companies have been vetted, have contracts with colleges, and have clear terms of service that describe what they do with student data. “We don’t have any of that with these AI detectors because they’re just popping up left and right from third-party companies,” Watkins says. “And I think people are just kind of panicking and uploading stuff without thinking about the fact that, Oh, wait, maybe this is something I shouldn’t be doing.”

The campuswide groups established to discuss ChatGPT and other generative AI are examining all these questions.

Drexel’s committee, which has pulled in academics from a range of disciplines, has been asked to develop a plan for educating students, staff, and faculty about AI; create advice and best practices for instruction and assessment; and consider opportunities these technologies present for the university.

Steve Weber, vice provost for undergraduate curriculum and education, is chair of the group. Among the many issues it’s considering, he says, is whether Drexel should require that all students graduate with a level of digital and technological literacy. And if so, how much of that should be discipline- or major-specific?

Weber taught a course last year on fairness in artificial intelligence and found that students were surprised and troubled by the ways in which bias can be built into such tools because they’re trained on existing data that may itself be biased. “They would like there to be greater ethical guidance in their education to deal with the modern questions of technology, which are wide-ranging and not easily addressed” through traditional studies of ethics, he says. “Bringing it into the 21st century is very important.”

Ultimately, he says, the group hopes to provide a set of broad principles, rather than prescriptions. “It’s also going to be an evolving landscape, of course.”

Teaching centers are also gearing up to provide workshops and other resources for faculty members. At Wake Forest University, Betsy Barre, executive director of the Center for the Advancement of Teaching, is organizing weekly forums on AI to tackle the wide range of issues it raises, from how the technology works to academic integrity to assessment redesign to the ethical, legal, and sociological implications. Most faculty members are excited about the possibility of using these tools, says Barre, but that may change if they start to see students misuse them.

“I don’t think it’s realistic to assume that this semester there’s going to be a lot of radical redesign, especially since it’s so close to Covid,” she says. But the risk is that faculty won’t even mention ChatGPT to their students. And in those cases, students might think it’s OK to use even when it may not be. “I don’t expect a lot of intentional deception, but there might be some miscommunication.”

Barre is excited about the possibilities that AI presents for helping professors in their own work. Crafting clear learning objectives for a course, for example, is a challenge for many instructors. She has found that ChatGPT is good enough at that task that it can help faculty members jump-start their thinking. The chatbot can also provide answers to common teaching challenges and speed up some of the more tedious parts of teaching, like generating wrong answers for multiple-choice tests.

“If it gives us the opportunity to free up time to do things that matter, like building relationships with students and connecting with them,” says Barre, “it could be a good thing.”

Whether professors have the energy to redesign their courses for the fall is another matter. Many are worn out from the constant adjustments they had to make during the pandemic. And teaching is intrinsically complicated work. How do you really know, after all, whether students are absorbing what you think they are, or what the best way to measure learning is?

“One of the things that worries me is that the urgent will overshadow the important,” says Jennifer Herman, executive director of the Center for Faculty Excellence at Simmons University, who convened faculty-development directors at 25 Boston-area colleges to discuss these issues. “Some of the really hard work involves looking at our curricula and programs and courses and asking if what we set out to teach is actually what we want to teach, and determining if our methods are in alignment with that.”

Mills, who is taking time off from teaching to work on these topics full time, says she hopes the conversation doesn’t get polarized around how to treat AI in teaching. There’s plenty to agree on, such as motivating students to do their own work, adapting teaching to this new reality, and fostering AI literacy.

“There’s enough work to do in those areas,” she says. “Even if we just focus on that, we could do something really meaningful. And we need to do it quickly.”
