AI Chatbots Pose Ethical Risks. Here’s How One University Is Handling Those.

Visit the University of California at Irvine’s admissions page, and a box pops up in the bottom right corner. Click on it, and there’s Peter the Anteater, a chatbot affectionately named after the university’s mascot, clad in a varsity jacket and grinning.

He’s poised to field questions on topics as diverse as scholarships, majors, athletics, and campus life — a 24/7 digital assistant as the university manages record-high numbers of applicants.

“We’re so excited that you’re looking to become part of the Anteater Nation at UCI, Taylor,” Peter writes. “Feel free to ask me any questions. 🙌”

Colleges nationwide are increasingly adopting artificial-intelligence tools such as chatbots to expand and streamline communication. In an Educause Quick Poll from June 2021, 36 percent of IT professionals who responded said chatbots and digital assistants were already in place on their campuses, while 17 percent reported they were in the works.

After all, they offer compelling benefits. These tools can field common queries so staff can focus on more personal or complex questions. Data from institutions including New Jersey’s Ocean County College and Missouri Western State University suggest that chatbots can help lift retention rates and save weeks of staff time.

The poll simultaneously revealed, however, that 68 percent of respondents saw ethical concerns as a barrier to adoption.

Indeed, experts caution that the choice to deploy chatbots, and AI more broadly, raises many questions: Is the tool going to be accessible to non-native English speakers, students with language or reading impairments, or students with older devices? Are policies in place to dictate what happens with the information amassed, and who has access to it? Is the college protecting against bias and discrimination by ensuring that both the datasets informing the tool, and the people involved in its development and implementation, are diverse? What does evaluation look like?

Peter the Anteater (Image: U. of California at Irvine)

The University of Texas at Austin, for example, terminated a machine-learning system in 2020 after critics cited bias and discrimination concerns. The system had helped the computer-science department evaluate Ph.D. applications, using algorithms trained on historical admissions data to score each applicant’s likelihood of acceptance.

Sometimes, people developing these tools “are simply looking at the challenges as technical challenges,” such as how to safeguard against hacking, said Elana Zeide, an assistant professor at the University of Nebraska College of Law who researches the ethical implications of AI. “People should be more aware that there are more fundamental challenges.”

The University of California system, including UC-Irvine, has been at the forefront of thinking about ethical AI, including and beyond chatbots. The system adopted recommendations from a nearly 80-page report in the fall of 2021 — among the first of its kind in higher education — that includes best practices for incorporating AI into different aspects of the “student experience,” such as admissions and financial aid, advising, mental health, and remote proctoring.

The report came out of a task force that was formed after researchers across the UC system realized they had similar questions about AI, said Tom Andriola, UCI’s vice chancellor for information, technology and data. They joined together to create a framework for UC’s campuses that reflected a range of perspectives and expertise.

Peter the Anteater predates the report’s publication, having gone live in the fall of 2019, but he offers a glimpse of how one UC campus has kept ethics in the foreground of its work.

Not a ‘Set It and Forget It’ Tool

The extent to which a chatbot draws on artificial intelligence varies. Peter is “a hybrid,” according to Bryan Jue, Irvine’s senior director for outreach and communications for undergraduate admissions. The bot is programmed to look for keywords and phrases and to supply preset responses, but he can learn, too.

Peter will ask the team, “Was this the right answer to do or not?” if he is less than 77 percent confident in the answer he provided, Jue said. If he wasn’t correct, a staffer can “retrain” him by identifying the right answer for the next time a similar question arises. “It’s like you teach a kid, ‘Don’t touch a hot stove,’” said Jue. “It does require some guidance” on the back end.
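To make that “hybrid” pattern concrete, here is a minimal sketch of how a keyword-matching bot with a confidence threshold and a human-review loop might work. The function names, the toy scoring rule, and the placement of the 0.77 cutoff are assumptions for illustration, not the actual implementation behind Peter.

```python
# A minimal sketch of the hybrid pattern described above: keyword matching
# with preset answers, a confidence threshold that routes uncertain replies
# to human review, and a "retrain" step that stores the corrected answer.
# All names and the toy scoring rule are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.77  # Peter flags answers below 77% confidence

preset_answers = {
    "scholarships": "You can explore scholarship options with the financial-aid office.",
    "campus housing": "First-year students can apply for on-campus housing.",
}

review_queue = []  # low-confidence exchanges awaiting a staff supervisor


def score_match(question: str, keyword: str) -> float:
    """Toy confidence score: fraction of the keyword's words found in the question."""
    words = keyword.lower().split()
    hits = sum(1 for w in words if w in question.lower())
    return hits / len(words)


def answer(question: str) -> str:
    best_keyword, best_score = None, 0.0
    for keyword in preset_answers:
        score = score_match(question, keyword)
        if score > best_score:
            best_keyword, best_score = keyword, score
    if best_keyword is None or best_score < CONFIDENCE_THRESHOLD:
        # "Was this the right answer or not?" -- park it for a human reviewer.
        review_queue.append(question)
        return "Let me check with the admissions team and get back to you."
    return preset_answers[best_keyword]


def retrain(question: str, correct_answer: str) -> None:
    """A staffer identifies the right answer so a similar question matches next time."""
    preset_answers[question.lower()] = correct_answer
```

In a real system the score would come from a trained intent classifier rather than word overlap, but the routing logic (answer confidently, or escalate and retrain) is the same idea Jue describes.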

To expose Peter to a diverse array of queries, the team leaned on students, such as campus tour guides and those working in the admissions office, to feed him questions they remembered having as prospective students, or questions that regularly surface during tours. They’ve introduced Peter to colloquial terms that might stump him, like “boba,” a campus drink staple, and to questions that Jue and Patricia Morales, UCI’s associate vice chancellor for enrollment management, said they wouldn’t have thought of: Do you have vegetarian options, or kosher meals? What if I have allergies?

This approach dovetails with the UC report’s recommendation that training data be representative of the broad demographics of UC students and applicants. “Our student body is a very diverse one, and all of these different perspectives and experiences inform how somebody might think,” and the questions they might have, Morales said. “Otherwise, you almost have an echo chamber.” Enrollment at UC-Irvine is 37.5 percent Asian, 25.2 percent Latino/Latina, and 16.3 percent nonresident alien.

Collaboration with a tool’s target population is a crucial part of its success, said Richard Culatta, chief executive of the International Society for Technology in Education. “The massive problem that we have in higher ed right now,” he said, is that user experience is often not a priority. And the adjustments needed “are rarely the adjustments you think you need to make.”

The report also emphasizes that with AI tools, “a human must remain in the loop.” Peter is not a “set it and forget it” tool with little oversight, Jue said; two staffers in the admissions office act as his supervisors. “We treat it like another staff member,” he said. The idea is, just like any other staffer, “I’m going to train you, I’m going to correct you, I’m going to monitor you pretty much daily.”

Transparency around data collection is another key tenet. Peter does require a first name, last name, email address, and a broad descriptor of who the user is — such as a parent, current student, or prospective student — to start a conversation, which isn’t the case for all bots. That information tips Peter off to the kinds of questions to expect and helps the office follow up with individuals as needed (with their consent). Conversations through the bot are never linked to students’ applications — a worry brought to UCI’s attention in the past — and data is not shared outside the admissions office.
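One way to picture that setup is a conversation record that stores only the fields Peter asks for, plus an explicit consent flag, and deliberately carries no application ID. This is a minimal sketch under those assumptions; the class and field names are illustrative, not UCI’s or its vendor’s actual data model.

```python
# A minimal sketch of the intake described above. Field names are
# illustrative assumptions, not a real schema. The key point: nothing
# here references an application ID, so chat transcripts cannot be
# joined to admissions files, and follow-up requires explicit consent.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ChatSession:
    first_name: str
    last_name: str
    email: str
    role: str  # e.g., "parent", "current student", "prospective student"
    consented_to_followup: bool = False  # must be set before any outreach
    transcript: List[str] = field(default_factory=list)

# A prospective student starts a conversation.
session = ChatSession("Taylor", "Doe", "taylor@example.com", "prospective student")
session.transcript.append("Do you have vegetarian options?")
```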

Prior to adopting the tool, which comes from third-party vendor Gecko, the university closely reviewed that company’s data-security policies, Jue said.

“Don’t give us your [Social Security Number], don’t give us the number of credits,” he said. “We don’t want any of that stuff” in the chat. If someone has a question “that’s more private” or customized, that’s a conversation that would be referred to a human staffer.
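The guardrail Jue describes can be sketched as a simple screening step: scan each incoming message for sensitive patterns, such as a Social Security number, and hand the conversation to a person instead of processing it. The regex and the canned reply below are illustrative assumptions, not UCI’s production code.

```python
# A hedged sketch of the privacy guardrail described above: detect
# sensitive strings and refer the conversation to a human staffer.
# The pattern and reply text are illustrative, not actual UCI code.

import re

# Matches the common NNN-NN-NNNN Social Security number format.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def handle_message(message: str) -> str:
    if SSN_PATTERN.search(message):
        # Don't store or echo the sensitive content; escalate instead.
        return ("For your privacy, please don't share personal identifiers "
                "here. I'll connect you with an admissions staff member.")
    return "Thanks! Let me look into that for you."
```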

Since launching, Peter has had more than 63,000 conversations, Morales confirmed via email. In 2021, the bot resolved about 87 percent of questions the admissions office received.

Morales and Jue consider that a huge win. Still, they acknowledge that, as with AI more broadly, improvements remain to be made. They want Peter to converse in more languages, for one — especially Spanish, given the state’s substantial Latino/Latina population. (Right now, Peter speaks English, German, and Chinese; the chatbot vendor confirmed it supports 75 languages, including Spanish. Jue said his team wants to test the translation capabilities in-house first, though, and is locating students and staff members who are native Spanish speakers to assist.) Jue said he’d also like to see Peter provide “tangible” services, like signing someone up to receive promotional materials if they’re interested.

Morales added that it’s worthwhile brainstorming ways to better serve as many communities as possible, including those with disabilities. That’s a point that Andriola, the vice chancellor, thinks about too.

In the near future, there could be ways to ask questions “using different channels,” like speaking aloud — perhaps while looking at an avatar — versus just typing, Andriola said.

For now, Peter, the humble anteater, continues to do his best to serve the campus.
