This is partly what is happening. Reading the OutOfTheLoop subreddit, the trigger is Victoria, but the underlying problem is different. One of the biggest problems I see is that the unpaid volunteers aren't given the tools necessary to moderate their subreddits by reddit corporate. The tools moderators actually use come from third parties and volunteers. [0] As an outsider, I can understand why they are angry: reddit tries to create safe spaces through sweeping (and inconsistent) bans of subreddits, but doesn't give moderators the tools to keep their own subreddits clean.
I'm not sure the reddit community (either moderators specifically, or redditors in general) is intelligent enough to know what tools it needs.
I once tried to suggest a new tool, and was attacked on subredditdrama for three weeks afterward for requesting such a tool.
My crime? Suggesting that moderators be able to ban everyone who (wrongly) upvotes an offtopic post. This would allow them to steer things back on topic, by punishing those who refuse to stay on topic.
> Suggesting that moderators be able to ban everyone who (wrongly) upvotes an offtopic post
Reddit already has a problem with moderator abuse and nontransparency, and you want to give them the ability to ban people based on their votes? Literally the only control that a user has over content?
I can see why you were "attacked". That is an awful idea.
I don't know, in my experience, Reddit is much more rife with low-quality posting and bandwagoning than moderator abuse. It happens on rare occasions, but most accusations of moderator abuse I've seen have turned out to be nothing more than some first-world anarchists trying to stick it to the man and a bunch of other people bandwagoning on. Meanwhile, most subreddits with more than 10k posters are just awful because it's so hard for moderators to actually moderate the community (see, for example, all the true* subreddits that have sprung up just to create less popular versions of popular subs).
I read a few subreddits that might be considered outsiders. I get the impression that Reddit is slowly being subverted by the sort of people who might have a strong voice on Tumblr and Twitter, slowly censoring anything that's undesirable to their world view.
Being "subverted" by a different community would be the best possible outcome for reddit. It's a proud soapbox for white supremacists and misogynists to virulently spread hatred under the guise of "free speech". If this "community", seemingly comprised entirely of self-entitled white men in their 20s, were scattered to the winds tomorrow, the world would be a better place.
I was banned from r/history a few months ago. The moderator was attempting to humiliate me and added flair to my comments. I turned flair off. Then the moderator abused the stylesheets to hardcode flair that I couldn't turn off...
That's against reddit's own rules, and the admins claim they will ban moderators from reddit sitewide for that.
I went to the admins, they made him turn it off... and 1 minute later I was banned.
The admins never banned him.
Mod abuse occurs all the time. There's just nothing to be done about it, and there's no point in whining, so you never hear much about it.
Indeed. I think I read once that HN weights votes or comment positions by how often someone is drawn in by political "honey pot" stories. I don't know if it's true, but it at least suggests less drastic alternatives to banning for voting.
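Roughly something like this, if it exists (a made-up sketch, not HN's actual mechanism; the honeypot counts and the 1/(1+n) decay are pure assumptions):

    # Made-up sketch of vote weighting by "honey pot" participation.
    # Not HN's actual algorithm; the names and the decay curve are assumptions.

    def vote_weight(honeypot_count: int) -> float:
        """Weight a user's votes lower the more often they've piled
        onto flagged 'honey pot' stories."""
        return 1.0 / (1 + honeypot_count)

    def weighted_score(votes, honeypot_counts):
        """votes: list of (user, +1 or -1); honeypot_counts: user -> count."""
        return sum(d * vote_weight(honeypot_counts.get(u, 0)) for u, d in votes)

    # A habitual bandwagoner's upvote counts for a third of a clean account's:
    print(weighted_score([("alice", 1), ("bob", 1)], {"bob": 2}))  # 1.333...

The point is that nobody gets banned: bad voters just quietly stop mattering.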
That sounds like the same basic idea -- moderators putting a mark on posts that basically says, "An upvote for this is a negative contribution to the community." Is the problem with the general idea, or with one specific detail which could be modified in implementation (banning versus ignoring votes)?
It's not fully transparent -- not to the moderators and meta-moderators, at any rate -- but it's an effective check and balance for "policing the police."
The entire idea is massively problematic, in that it allows for covert manipulation of a community.
Why are a small handful of unaccountable people determining what is a "negative contribution", on the sly no less, when the whole reason for the existence of karma systems is to do that organically and by consensus?
More importantly, what does such a system accomplish via deception that flattening such posts in a visible and accountable way does not?
Because karma systems have systematic weaknesses that make them incapable of doing that in many situations.
Easy example: Let's say there are 10000 people who like cat pictures and 150 people who like long, in-depth articles. I'm one of the people who likes long, in-depth articles, so I make a forum for people to post and discuss long, in-depth articles. 10% of the cat-picture-lovers happen upon my forum and are delighted to see a new place that they can post cat pictures. My 149 friends and I downvote these cat pictures with all our hearts, but we simply can't outvote the 1000 cat-picture-lovers. Even worse, a cat-picture-lover can look at and upvote a hundred cat pictures in the time it takes me to read one long-form article, so the content that the forum is intended for is buried before even the 150 long-content-lovers get to see it. So the complete destruction of my long-article forum happens organically and by consensus, just like karma systems are supposed to work -- but it's not a desirable outcome!
Basically, karma systems are good at finding something that some group of people will like, but if that's all you want, subreddits are a bad idea, because the entire point of them is to focus on a particular thing and exclude other things that very likely more people would like to see.
Like I said, all the true* subreddits show that there are people who want to have more focused subreddits. The idea that focus should be impossible if a sufficient number of people who don't have that focus stumble upon the subreddit just doesn't seem very reasonable to me.
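To put rough numbers on the example above (the populations come from my example; the voting-speed assumption, that a cat picture takes seconds to judge while a long article takes an hour to read, is invented for illustration):

    # Toy model of the cat-pictures example above. Populations come from
    # the comment; the voting-speed assumption is invented.
    cat_fans, article_fans = 1000, 150

    # In the first hour, every cat fan has time to vote on a cat picture,
    # while each article fan manages roughly one article vote.
    cat_pic_score = cat_fans * (+1) + article_fans * (-1)  # 850, even if every
                                                           # article fan downvotes
    best_article_score = article_fans * (+1)               # 150 at best

    print(cat_pic_score, best_article_score)  # cats win "organically", 850 to 150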
> Why are a small handful of unaccountable people determining what is a "negative contribution", on the sly no less
On some of the smaller (but not tiny) subreddits, a submission can be both against the rules and plainly a bad idea as it pollutes the subreddit's goals.
But because voting is anonymous, no matter how much everyone complains about them being upvoted, no matter who weighs in and admonishes people to stop upvoting the garbage, it still occurs.
Over time, people submitting good content stop doing so, and people who submit bad content start doing it more. Vicious circle.
Mods can't just delete those posts either, because the subreddit gets pissy that they're playing favorites.
It would be better if the mods were able to say "if you think this is upvoteworthy for this subreddit, you are no longer welcome here".
Instead of each subreddit being some big generic free-for-all, they could stay on topic. It could get rid of a lot of fluff. And you could do it without those who were banned being able to claim it was personal animosity.
> Mods can't just delete those posts either, because the subreddit gets pissy that they're playing favorites.
You absolutely can, and it's easy to do so. I moderate a small sub, and you can remove individual posts with the click of a button if you don't think they're a quality contribution.
But beyond that, AutoModerator allows you to automatically remove posts according to criteria that you set up in advance. You can ban certain sites, titles with certain keywords, certain types of content, posts from certain users, and so on. It makes moderation much, much easier.
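For illustration, the matching logic is roughly this (AutoModerator itself is configured in YAML on the subreddit wiki; this Python sketch just mirrors the idea, and every rule value below is a made-up example):

    # Sketch of the kind of pre-set filtering AutoModerator does.
    # All rule values are hypothetical examples.
    BANNED_DOMAINS = {"spamblog.example.com"}
    BANNED_TITLE_KEYWORDS = {"giveaway", "upvote if"}
    BANNED_USERS = {"known_spammer"}

    def should_remove(post: dict) -> bool:
        """Return True if a submission trips any pre-set filter."""
        title = post["title"].lower()
        return (post["domain"] in BANNED_DOMAINS
                or any(kw in title for kw in BANNED_TITLE_KEYWORDS)
                or post["author"] in BANNED_USERS)

    print(should_remove({"title": "Upvote if you love cats!",
                         "domain": "imgur.com",
                         "author": "someone"}))  # True

Rules fire the moment a post comes in, so nothing sits on the page waiting for a human.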
There is always the problem that users will discover the filters and revolt if the mods haven't been transparent about them. /r/technology users threw a fit when they discovered that the mods had been filtering stories with NSA in the title. Lots migrated over to /r/futurology as a result.
> It would be better if the mods were able to say "if you think this is upvoteworthy for this subreddit, you are no longer welcome here".
Banning hundreds or thousands of users in one fell swoop for having upvoted a single low-effort post would be far more disruptive and unjust than simply removing the offending post.
Lock the thread (with AutoModerator) and explain why. No need to disappear it.
> Over time, people submitting good content stop doing so, and people who submit bad content start doing it more. Vicious circle.
Is this really the case? It's a common complaint, but it's usually presented without any backing. Why is the content necessarily "bad" if the community has chosen to feature it?
> Mods can't just delete those posts either, because the subreddit gets pissy that they're playing favorites.
Why do you suppose that is?
> It would be better if the mods were able to say "if you think this is upvoteworthy for this subreddit, you are no longer welcome here".
Leaving aside the fact that you literally just advocated for thought crime, the problem is that an upvote can mean multiple things.
(Don't say "reddiquette" either; I'm talking about use in reality.)
An upvote can mean "like", or "on topic", or "contributes to the discussion", or about 20 other positive things. A downvote can have just as many other meanings in the other direction. The other thing is that with multireddits and /r/all, "on-topic-ness" isn't even going to be evaluated by the users unless you're looking at the front page of the community itself.
I really don't see how more covert moderation is a valid answer to abuse of non-covert moderation.
Removing the single submission doesn't fix the problem.
The people who upvoted things that shouldn't be upvoted are the problem. They may never notice that it was removed anyway.
So now the subreddit (which may only have 15,000 users) has to staff dozens of moderators just to keep track of the shit.
And those posts still polluted the subreddit. They can't be instantaneously removed.
> Why do you suppose that is?
Because in groups of thousands and tens of thousands of people, there's always someone who thinks not only was the meme post funny, but that everyone should lighten up and allow it anyway.
They argue publicly in a pseudo-anonymous forum. And you can't make them stop either, or even more backlash occurs. Nor can you remain silent, or out-argue them.
So a flame war erupts, and everything goes off-topic even more.
> An upvote can mean "like", or "on topic", or "contributes to the discussion", or about 20 other positive things. A downvote can have just as many other meanings in the other direction. The other thing is that with multireddits and /r/all, "on-topic-ness" isn't even going to be evaluated by the users
Unless those users are punished for failing to evaluate.
Guess we're going to have to agree to disagree on this - but let me know if you ever start a community website, because I want absolutely no part of a place where I can be punished for not liking things the staff wants me to like.
If you're going to run a community like that, why even have the people there in the first place? What you've just described is a blog.
I think what they meant was more along the lines of "for upvoting the things that are supposed to not be what the website is about".
If you are referring to "Unless those users are punished for failing to evaluate":
I think that was meant to mean "unless those users are punished for not distinguishing whether the thing was in an appropriate location, and therefore upvoting something which does not belong where it is."
Not sure, but seems like a likely interpretation to me.
> Reddit already has a problem with moderator abuse and nontransparency,
1. That's the discussion that should have occurred... debate on the merits of the idea.
2. Mods already ban people, which is the problem. They get rid of whomever they like, whether or not there is cause for it. This would be mods banning people whose names they don't even know, for objectively wrong behavior.
So yes, I think it would be a good idea. At the very least, the experiment could have been run in a few test subreddits.
Since a moderator can kill an entire post, nullifying "the only control that a user has over content" anyway, is it really going out on that big of a limb to allow moderators to ban people who, say, upvoted child porn to the top of r/politics?
But we both know that your fanciful hypothetical is not how that tool will be used in over 99.999% of cases, because that kind of thing almost never happens, while moderator abuse often does.
Just because there are cases where a tool can be used well doesn't mean that introducing it into a system rampant with corruption and abuse won't make things worse.
Moderators can't disappear posts outright, they can only remove the links from their respective communities. If someone has the link, it still can be interacted with in the usual way.
The hypothetical you chose is disgusting, unnecessary, and impossible in the real world.
Appeals to child porn, I think, are the modern equivalent of Godwin's law: the moment it gets brought up, all productive discussion stops and is replaced by emotion.
Moderators can ban people for posting things. OP was talking about banning people for upvoting things. Your point about the link being removed vs the content doesn't fit. Let's say it just partially bans them and still lets them post stuff that won't make it to the actual page, but if anyone has the link they can see it. Is that really a meaningful distinction?
Yes, because the latter doesn't stifle discussions already in progress. It's the difference between saying "you can't talk about this here" and "you can't talk about this at all".
That doesn't seem meaningful: if a mod happens to catch it fast, no one sees it; if she catches it slow, people can begin and continue a discussion. If it were based on any kind of fundamental right of discussion, it would be hard to imagine moderator attention and reflex speed factoring into the underlying principles.
Plus, moderators already use bots to instantly pre-ban things and then go through and manually whitelist them.
The distinction is between banning for posting vs banning for upvoting. One reasonable argument I can see is that Reddit goes to some length to hide what you upvote and downvote in every other context, and people might be able to use the ban tool to dox people who thought votes, as opposed to submissions, were wholly anonymous.
I don't see why the mods ever have to know the names of those who upvoted (or downvoted) to be able to ban them. A message would be sent out to those who were banned, letting them know why, and if they choose to reveal themselves as having made the bad vote... no one's doxxing them except themselves.
Well, nothing says that it should be a democracy. But there is discord over how moderators manage their communities and what support reddit could add. E.g., there are subreddits that have removed downvoting through CSS, though it is still possible to downvote with apps or by changing the CSS. It should be possible to add a subreddit setting that makes downvoting actually impossible.
But like I said, it isn't a democracy; reddit can decide not to implement that and explain to the mods why not.
To be fair you used to post some really antagonistic and trollish comments. Maybe you still do. You're not exactly well known for your agreeable and altruistic behavior.
I don't really see a way around these problems: Paying people would be too expensive for sites like Reddit, and "volunteers" suffer from adverse selection.
> As described, Reddit is an interesting example where people voluntarily fill the same community leader role that Aol’s volunteers did, although they do so with fewer restrictions and more agency.
Presumably the current Reddit debacle is the stimulus for posting this now. The article mentions the Buzzfeed and Youtube models too. Quite interesting.
As far as I understand, reddit introduced a new search function yesterday, which pissed off mods of several subreddits. [1] Additionally, they fired(?) an employee who acted as an interface between the r/IAmA mods and the people who do AMAs [2], who are usually not redditors. So the mods of r/IAmA set their sub to private, which means that only invited users can see it. From there, quite a few other mods declared their frustration with reddit's communication, their solidarity with either the IAmA mods or Victoria, and various other grievances, and either set their subs to private or at least declared their solidarity. By now half of reddit is private and the other half rages against the admins. (Reddit's front page, i.e. the ~20 highest upvoted posts on the site, is uniformly against reddit.) For a bit more detail see [3].
Yes, but it is a reaction to the search disaster by a guy who moderates almost 200 subs, so I included it as a primary source to show the frustration of the mods.
I see a significant difference between AOL and Reddit. AOL selected the monitors and had the power to remove their privileges. That's what made them essentially employees. Reddit moderators are mostly self-regulated. Even in the large subs new moderators are chosen by other moderators and rarely is a moderator of a sub involuntarily removed, except if they have abandoned the sub or violated the terms of service. I can see how Reddit is not obligated to pay them because they volunteer by their own will and not at Reddit's discretion.
Given the market cap of Facebook is $245 billion and it has 1.44 billion active users, then each user is $170 (170 == 245/1.44). If each user generates $1 for Facebook each quarter, that's $4/yr which is an annual return of 2.3% (2.3 == 4/170).
I'm not sure where I stand on calling users "sharecroppers". Seems a little unreasonable. But the economics of it are interesting and concerning.
With 1.44 billion monthly actives, and going with the per user averaging, it comes out to ~$2.45 per user per quarter (as of the most recent quarter; it's safe to assume that will continue to increase for now).
They're generating closer to $10 per year for Facebook. Your annual sales take jumps to 5.9%. A year from now that will probably be closer to 7%. PE ratios usually compress with time due to slowing growth. I would be comfortable predicting that Facebook will eventually get this calculation up to 15%+. Their PE ratio right now is 80, which produces a massively warped calculation on the return %; as that PE compresses heavily over the next ten years, their return per user will skyrocket.
If their PE ratio were a more sane 40 right now, this number would already be at ~12%. Double their annual income to $6 billion and their sales to $30 billion over a few years, drop their PE to 35 ($210b market cap), boost their user base to 2 billion, and it jumps to 14%. A very plausible outcome four years out. This is all calculated off of sales of course, not income.
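Putting the arithmetic from both comments in one place (inputs are the figures quoted above; the four-years-out scenario is speculative, not data):

    # The per-user arithmetic above, spelled out.
    market_cap = 245e9   # Facebook market cap, USD
    users = 1.44e9       # monthly active users

    cap_per_user = market_cap / users               # ~$170 of market cap per user
    rev_per_user_year = 2.45 * 4                    # ~$9.80/yr at ~$2.45/quarter
    sales_yield = rev_per_user_year / cap_per_user  # ~5.8% (5.9% rounding to $10/yr)

    # Hypothetical scenario: $30B sales on a $210B market cap (PE ~35):
    future_yield = 30e9 / 210e9                     # ~14.3%

    print(f"${cap_per_user:.0f}/user, {sales_yield:.1%} now, {future_yield:.1%} in the scenario")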
What's the problem with letting the community work 'for free'? They are receiving benefits, even if non-monetary. Peer recognition, control, power...
If foobar.com's forums ask me to be an unpaid moderator and I gladly comply, I have no right to decide 6 months down the road that I shouldn't have been doing that work for free and deserve payment for it.
Work laws are weird like that. There are certain things you can't consent to even if you really want to.
For example, you can't agree to work for less than minimum wage, you can't agree to work for free to get rid of a debt, you can't agree to have sex with someone in exchange for money.
These things apply even if you voluntarily sign a contract saying you consent.
Great article. It was 1997, I believe, when my dad and I sent each other a message over AOL Instant Messenger for the first time. It felt like magic to communicate that way. The phone was a few feet away, but this seemed futuristic and awe-inspiring.
Let's say, theoretically, that a similar ruling were found for some of the Reddit moderators. Wouldn't it be in Reddit's best interest to immediately terminate their employment and find a way to bring in more cost-effective moderators?
I understand that it is common for facebook users to think that facebook is the internet.
Many do not understand that email has anything to do with the internet.
I was in a psychiatric hospital when I pointed out to the staff that I needed to renew my domain names. They told me I could not do that, so I asserted my right to manage my own financial affairs, as is the law. The staff of that particular hospital know about that law, so they will arrange for you to pay rent and the like.
But they regarded my talk of domain names, registrars, and renewals as delusional. Then I started screaming and crying, "But I am a webmaster! You are going to throw me out on the streets!"
"We'll see."
This went on for a few days, then: "WHEN ARE YOU GOING TO MAKE IT HAPPEN!"
The next day one of the staff told me that a fellow patient suggested he put my name into Google.
He apologized, then said "Sometimes I forget why I work here."
Then he set me up on a staff computer so I could renew.
Lots of mental hospitals have wifi or desktops.
Others have no way to contact the outside world at all.
Thanks for that. I did already know firsthand that KFC had changed its name for a time. I found a source (Snopes) that confirmed it. Internally I was skeptical of the reasoning that was provided (since it conflicted with what I knew from more reliable sources), but the actual reasoning was tangential to my point.
[0] https://www.reddit.com/r/OutOfTheLoop/comments/3bxduw/why_wa...