A group of researchers covertly ran a months-long "unauthorized" experiment in one of Reddit's most popular communities, using AI-generated comments to test the persuasiveness of large language models. The experiment, which was revealed over the weekend by moderators of r/changemyview, is described by Reddit mods as "psychological manipulation" of unsuspecting users.
"The CMV Mod Team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users," the subreddit's moderators wrote in a lengthy post notifying Redditors about the research. "This experiment deployed AI-generated comments to study how AI could be used to change views."
The researchers used LLMs to create comments in response to posts on r/changemyview, a subreddit where Reddit users post (often controversial or provocative) opinions and invite debate from other users. The community has 3.8 million members and often ends up on the front page of Reddit. According to the subreddit's moderators, the AI took on numerous different identities in comments over the course of the experiment, including a sexual assault survivor, a trauma counselor "specializing in abuse," and a "Black man opposed to Black Lives Matter." Many of the original comments have since been deleted, but some can still be viewed in an archive created by 404 Media.
In a draft of their paper, the unnamed researchers describe how they not only used AI to generate responses, but also attempted to personalize its replies based on information gleaned from the original poster's prior Reddit history. "In addition to the post's content, LLMs were provided with personal attributes of the OP (gender, age, ethnicity, location, and political orientation), as inferred from their posting history using another LLM," they write.
The r/changemyview moderators note that the researchers violated several subreddit rules, including a policy requiring disclosure when AI is used to generate comments and a rule prohibiting bots. They say they have filed an official complaint with the University of Zurich and have asked the researchers to withhold publication of their paper.
Reddit also appears to be considering some kind of legal action. Chief Legal Officer Ben Lee responded to the controversy on Monday, writing that the researchers' actions were "deeply wrong on both a moral and legal level" and a violation of Reddit's site-wide rules.
We have banned all accounts associated with the University of Zurich research effort. Additionally, while we were able to detect many of these fake accounts, we will continue to strengthen our inauthentic content detection capabilities, and we have been in touch with the moderation team to ensure we've removed any AI-generated content associated with this research.
We are in the process of reaching out to the University of Zurich and this particular research team with formal legal demands. We want to do everything we can to support the community and ensure that the researchers are held accountable for their misdeeds here.
In an email, the University of Zurich researchers directed Engadget to the university's media relations department, which didn't immediately respond to questions. In posts on Reddit and in a draft of their paper, the researchers said their research had been approved by a university ethics committee and that their work could help online communities like Reddit protect users from more "malicious" uses of AI.
"We acknowledge the moderators' position that this study was an unwelcome intrusion in your community, and we understand that some of you may feel uncomfortable that this experiment was conducted without prior consent," the researchers wrote in a comment responding to the r/changemyview mods. "We believe the potential benefits of this research substantially outweigh its risks. Our controlled, low-risk study provided valuable insight into the real-world persuasive capabilities of LLMs, capabilities that are already easily accessible to anyone and that malicious actors could already exploit at scale for far more dangerous reasons (e.g., manipulating elections or inciting hateful speech)."
The mods for r/changemyview dispute that the research was necessary or novel, noting that OpenAI researchers have conducted experiments using data from r/changemyview "without experimenting on non-consenting human subjects."
"People do not come here to discuss their views with AI or to be experimented upon," the moderators wrote. "People who visit our sub deserve a space free from this type of intrusion."
Update, April 28, 2025, 3:45PM PT: This post was updated to add details from a statement by Reddit's Chief Legal Officer.
This article originally appeared on Engadget at https://www.engadget.com/ai/researchers-secretly-experimented-on-reddit-users-with-ai-generated-comments-194328026.html?src=rss