> They also included 2,000 prompts based on posts from the Reddit community r/AmITheAsshole, where the consensus of Redditors was that the poster was indeed in the wrong.
Sorry, anonymous people on reddit aren't a good comparison. This needs to be studied against people in real life who have a social contract of some sort, because that's what the LLM is imitating, and that's who most people would go to otherwise.
Obviously subservient people default to being yes-men because of the power structure. No one wants to question the boss too strongly.
Or how about the example of a close friend in a relationship or making a career choice that's terrible for them? It can be very hard to tell a friend something like this, even when asked directly if it is a bad choice. Potentially sacrificing the friendship might not seem worth trying to change their mind.
IME, LLMs will shoot holes in your ideas, and they'll do so efficiently. All you need to do is ask directly. I have little doubt they outperform most people in some sort of friendship, relationship, or employment structure who are asked the same question. It would be nice to see that studied, not against reddit commenters who already self-selected into answering "AITA".
> Sorry, anonymous people on reddit aren't a good comparison.
Yeah especially on r/AmITheAsshole. Those comments never advocate for communication, forgiveness and mending things with family.
I believe this. There is a graph somewhere of the relationship subs tending towards breaking up over time.
Yes, it is a toxic sub, where the notion that there can be greater happiness on the other side of forgiveness than cutting ties is all but absent.
To be fair, it’s easier to concisely explain cutting someone off than to justify forgiveness. The latter will land with some people and not others, while the former will only be rejected by people who have already arrived at their own theory of forgiveness. As a result, the simpler pitch gets upvoted, even if the majority would have been swayed by a collection of arguments the other way.
It’s a good theory. My theory is, for whatever reason, jaded, narcissistic, miserable people congregate in r/AITA and try to drag other people into their misery because that’s easier than accepting responsibility and doing something to change.
It's often the case that a lot of "NTA" answers are downright antisocial.
A "no one owes you anything, you don't owe anyone anything" mentality, without a crumb of social awareness.
“AI is nicer than the average redditor” would be a more accurate title
IMHO it's not about being nice. AITA threads show an interesting phenomenon of social consensus; I think the authors wanted to show that the LLMs they checked don't exhibit it.
Is it the _average_ redditor? The most upvoted would be even worse.
Pretty sure the average Redditor is AI now.
How the hell is a study on stanford.edu assuming posts on Reddit are genuine? That should be enough to get you kicked out of Stanford.
Though interestingly, the observed difference in assessment suggests (though does not prove) that the sampled AITA posters are not one of these models. I guess it’s possible they have a very different prompt, though…
I would say people on /r/amitheasshole are more biased towards the poster, i.e. nicer.
There are plenty I've read where I thought the poster sounded like the asshole, yet the top replies were NTA.
r/AmItheAsshole is biased towards breaking off relationships rather than fixing them. They also hate social obligations.
e.g. If the OP is asking "I ghosted my friend in AA who insulted me during a relapse", Reddit would say NTA in a heartbeat, while the real world would tell OP to be more forgiving.
Conversely, if the post was "the other kids at school refuse to play with my child", Reddit would say YTA, because the child must've done something to incite being cut off.
Absolutely. I wonder how many parents have been no-contacted, SOs broken up with, and friendships ended because of the Reddit hivemind's attitude. Pretty sure it's doing a huge amount of societal damage.
I wouldn't blame reddit, it's what you get when you ask several thousand teenagers to give collective relationship advice.
“I got divorced based on advice from complete strangers on the internet, AITA?”
Yeah every single time I click on one of those posts the top comments are NTA. A couple times I tried randomly opening a few dozen posts and checking the top comments to see if I could find a single YTA and struck out.
Granted, many of the posts are heavily biased in the poster's favor. Most I've read fall into one of two buckets: either they want to gripe about some obviously bad behavior, or it's a contrived and likely fake story.
It’s gendered, by the way
Many of the posts are A/B tests of a prior post, where only the genders of the OP and the antagonist were flipped, to see how the consensus flips too
What's your research background in this area?
>Obviously subservient people default to being yes-men because of the power structure. No one wants to question the boss too strongly.
This drives me nuts as a leader. There are times where yes, please just listen, and if this is one of those times, I'll likely tell you, but goddamnit, speak up. If for no other reason I might not have thought of what you've got to say. Then again, I also understand most boss types aren't like me, thus everyone ends up conditioned to not bloody collaborate by the time they get to me. It's a bad sitch all the way around.
Indeed. I directly ask my reports to discover and surface conflicts, especially disagreements with me, and when they do I try to strongly reinforce the behavior by commending and rewarding them. Could anyone recommend additional resources on this topic?
Simon Sinek has a lot of good content around this. Step one is building trust. People won’t speak up if they don’t feel safe doing so.
Not only that, but subreddits like r/AmITheAsshole are full of AI slop. Both in the comments and in the posts. It's a huge karma mining operation for bots.
That can be solved by filtering out any posts made after November 2022.
This is sort of funny. Given how common it is to spot bots on Reddit now, it seems likely they'll completely overwhelm the site and drive away most of the actual humans.
At which point the bots, with all of their karma, will be basically worthless.
Kind of extra funny/sad that Reddit’s primary source of income in the past few years appears to be selling training data to AI labs, to train the models that power the bots.
> At which point the bots, with all of their karma will be basically worthless.
Not really; they'll still be kind of valuable for influence campaigns. A lot of people don't realize when there's a bot on the other side. Hell, a lot of the time, I don't either.
The upvotes ultimately train the bots, reinforcing the content posted. Even the most passive form of interaction has been co-opted for AI.
Plus, there's the disproportionate ratio of posters to commenters to lurkers. The tendency to comment rather than keep one's thoughts to oneself is a selection bias in and of itself.
> This needs to be studied against people in real life who have a social contract of some sort... IME, LLMs will shoot holes in your ideas and it will efficiently do so.
The Krafton / Subnautica 2 lawsuit paints a very different picture, because "ignored legal advice" and "followed the LLM" was a choice. Do you think someone whose conversations treat "conviction" and "feelings" as the arbiters of choice is going to buy into the LLM's pushback, or push it until it gives them a contrived outcome?
The LLM lacks will; it's more or less a debate-team member and can be pushed into arguing any stance you want it to take.