A Mental Health Tech Company Ran an AI Experiment on Real Users

 When people log in to Koko, an online emotional support chat service based in San Francisco, they expect to swap messages with an anonymous volunteer. They can ask for relationship advice, discuss their depression or find support for nearly anything else — a kind of free, digital shoulder to lean on.


But for a few thousand people, the mental health support they received wasn’t entirely human. Instead, it was augmented by robots. In October, Koko ran an experiment in which GPT-3, a newly popular artificial intelligence language model, wrote responses either in whole or in part. Humans could edit the responses and were still pushing the buttons to send them, but they weren’t always the authors.
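Koko has not published its implementation, but the workflow described above (an AI drafts a reply, a human edits it and sends it) follows a simple human-in-the-loop pattern. Below is a minimal sketch in Python, assuming the GPT-3-era openai package (pre-1.0 interface); the model choice, prompt wording, and helper names are illustrative assumptions, not Koko's actual code.

```python
# Minimal sketch of a human-in-the-loop flow like the one described above:
# GPT-3 drafts a reply, a human reviews or edits it, and only then is it
# sent. All names, prompts, and model choices are assumptions, not Koko's.
import openai  # GPT-3-era client, pre-1.0 interface

openai.api_key = "sk-..."  # in practice, read from an environment variable

def draft_reply(post_text: str) -> str:
    """Ask GPT-3 for a draft supportive reply (hypothetical prompt wording)."""
    completion = openai.Completion.create(
        model="text-davinci-002",
        prompt=(
            "Write a brief, empathetic reply to this peer-support post:\n\n"
            f"{post_text}\n\nReply:"
        ),
        max_tokens=120,
        temperature=0.7,
    )
    return completion.choices[0].text.strip()

def review_and_send(post_text: str) -> None:
    """A human edits the AI draft; nothing goes out without this step."""
    draft = draft_reply(post_text)
    print(f"AI draft:\n{draft}\n")
    edited = input("Edit the draft, or press Enter to send as-is: ").strip()
    final = edited or draft
    print(f"Sending: {final}")  # stand-in for the platform's actual send action
```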


About 4,000 people got responses from Koko at least partly written by AI, Koko co-founder Robert Morris said. The experiment on the small and little-known platform has blown up into an intense controversy since he disclosed it a week ago, in what may be a preview of more ethical disputes to come as AI technology works its way into more consumer products and health services. 


Morris thought it was a worthwhile idea to try because GPT-3 is often both fast and eloquent, he said in an interview with NBC News. “People who saw the co-written GPT-3 responses rated them significantly higher than the ones that were written purely by a human. That was a fascinating observation,” he said. Morris said that he did not have official data to share on the test. Once people learned the messages were co-created by a machine, though, the benefits of the improved writing vanished. “Simulated empathy feels weird, empty,” Morris wrote on Twitter.


When he shared the results of the experiment on Twitter on Jan. 6, he was inundated with criticism. Academics, journalists and fellow technologists accused him of acting unethically and tricking people into becoming test subjects without their knowledge or consent when they were in the vulnerable spot of needing mental health support. His Twitter thread got more than 8 million views. Senders of the AI-crafted messages knew, of course, whether they had written or edited them. But recipients saw only a notification that said: “Someone replied to your post! (written in collaboration with Koko Bot)” without further details of the role of the bot. 


In a demonstration that Morris posted online, GPT-3 responded to someone who spoke of having a hard time becoming a better person. The chatbot said, “I hear you. You’re trying to become a better person and it’s not easy. It’s hard to make changes in our lives, especially when we’re trying to do it alone. But you’re not alone.” 


No option was provided to opt out of the experiment aside from not reading the response at all, Morris said. “If you got a message, you could choose to skip it and not read it,” he said. Leslie Wolf, a Georgia State University law professor who writes about and teaches research ethics, said she was worried about how little Koko told people who were getting answers that were augmented by AI. 


“This is an organization that is trying to provide much-needed support in a mental health crisis where we don’t have sufficient resources to meet the needs, and yet when we manipulate people who are vulnerable, it’s not going to go over so well,” she said. People in mental pain could be made to feel worse, especially if the AI produces biased or careless text that goes unreviewed, she said. Now, Koko is on the defensive about its decision, and the whole tech industry is once again facing questions over the casual way it sometimes turns unsuspecting people into lab rats, especially as more tech companies wade into health-related services.


Congress mandated the oversight of some tests involving human subjects in 1974 after revelations of harmful experiments, including the Tuskegee Syphilis Study, in which government researchers withheld treatment from hundreds of Black men with syphilis, many of whom died as the disease progressed. As a result, universities and others who receive federal support must follow strict rules when they conduct experiments with human subjects, a process enforced by what are known as institutional review boards, or IRBs.


But, in general, there are no such legal obligations for private corporations or nonprofit groups that don’t receive federal support and aren’t looking for approval from the Food and Drug Administration. Morris said Koko has not received federal funding. 


“People are often shocked to learn that there aren’t actual laws specifically governing research with humans in the U.S.,” Alex John London, director of the Center for Ethics and Policy at Carnegie Mellon University and the author of a book on research ethics, said in an email. He said that even if an entity isn’t required to undergo IRB review, it ought to in order to reduce risks. He said he’d like to know which steps Koko took to ensure that participants in the research “were not the most vulnerable users in acute psychological crisis.” 


Morris said that “users at higher risk are always directed to crisis lines and other resources” and that “Koko closely monitored the responses when the feature was live.” After the publication of this article, Morris said in an email Saturday that Koko was now looking at ways to set up a third-party IRB process to review product changes. He said he wanted to go beyond the current industry standard and show other nonprofits and services what is possible.


There are infamous examples of tech companies exploiting the oversight vacuum. In 2014, Facebook revealed that it had run a psychological experiment on 689,000 people showing it could spread negative or positive emotions like a contagion by altering the content of people’s news feeds. Facebook, now known as Meta, apologized and overhauled its internal review process, but it also said people should have known about the possibility of such experiments by reading Facebook’s terms of service — a position that baffled people outside the company, given how few users actually read or understand the agreements they accept.
