
Sentiments are the new Spam - Part 2: user groups

So, you have successfully created an online community. People seem genuinely engaged, and you have interesting discussions going on. And then one day I show up, decide that "it would be a shame if something were to happen to your little community", and start harassing your users because... well, because. Call it 4chan, Gamergate, MRA or trolls, there's always a group ready to run a community into the ground.

Like I said last time, one of the defining characteristics of the internet is that you can't block me, you can only block my user. So let's look, from the simplest measures to the more complex ones, at how you could keep me from annoying and/or harassing other people in your community.

Privileged users

The first step I suggest is a hierarchy of user levels. It doesn't have to be too complex - I'd start with something like this:

Anonymous users are those that have not yet logged in. Usually they are allowed read-only access to the site, but in some cases not even that. As a counter-example, Slashdot is known for allowing anonymous users to post and comment on the site, although with a catch that I'll discuss later.

New users should have limited posting capabilities - maybe they can only vote but not comment, or their comments are given partial visibility by default. Getting out of this category should be relatively easy for a "good" user (although time-consuming - no less than an hour, perhaps even days), but it should definitely annoy those who are only "giving the website a try".

Your regular users are the ones that actually use your site as intended. They can post and comment at will. And finally, the power users are allowed some extra permissions - usually this means they can edit or remove other people's posts. This level should be pretty hard to achieve.
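
To make this concrete, here's a minimal sketch of such a hierarchy in Python. Everything in it (the Role enum, the PERMISSIONS table, the specific actions) is hypothetical - adapt it to your own data model:

```python
from enum import IntEnum

class Role(IntEnum):
    """User levels, ordered from least to most privileged."""
    ANONYMOUS = 0
    NEW = 1
    REGULAR = 2
    POWER = 3

# Minimum level required for each action (hypothetical assignments).
PERMISSIONS = {
    "read": Role.ANONYMOUS,
    "vote": Role.NEW,
    "post": Role.REGULAR,
    "comment": Role.REGULAR,
    "edit_others": Role.POWER,
}

def can_perform(role: Role, action: str) -> bool:
    """A user may perform an action if their level is high enough."""
    return role >= PERMISSIONS[action]

assert can_perform(Role.NEW, "vote")         # new users can vote...
assert not can_perform(Role.NEW, "comment")  # ...but not comment yet
```

The nice property of an ordered scale is that promoting (or demoting) a user is a single field update.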

The iron fist of justice

Now that you have user levels, new users are your main concern: it is not unusual for trolls to create thousands of accounts (automatically, of course) and use them to assault a particular user. Remember: any regular user should be able to stop the noise in a simple and straightforward way - otherwise you risk becoming an online harassment platform, and you'll have to publicly apologize like Twitter's CEO often does.

Our first moderation tool will be karma points. Each time a user contributes to our website, other users can rate this contribution positively or negatively. Contributions with "high karma" will be given a prominent position, while contributions with "low karma" will be buried. This is how Slashdot can allow anonymous contributions without the site being buried in dumb comments: every comment posted anonymously starts with very low karma by default, but if enough users vote it up, it will eventually be seen by everyone else. Similarly, Hacker News will not let users vote negatively until they reach a certain karma threshold.
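
In code, those two mechanics boil down to a default score per author level and a karma gate on downvotes. A rough sketch, where every number is an assumption on my part:

```python
# Default scores in the spirit of Slashdot, downvote gate in the
# spirit of Hacker News. All thresholds here are made up.
DEFAULT_SCORE = {
    "anonymous": -1,  # starts buried, like Slashdot's anonymous posts
    "new": 0,
    "regular": 1,
    "power": 1,
}

DOWNVOTE_THRESHOLD = 500  # hypothetical karma requirement

def initial_score(author_role: str) -> int:
    """Score a fresh contribution gets before anyone votes on it."""
    return DEFAULT_SCORE[author_role]

def can_downvote(user_karma: int) -> bool:
    """Only established users get to vote things down."""
    return user_karma >= DOWNVOTE_THRESHOLD

assert initial_score("anonymous") < initial_score("regular")
assert not can_downvote(10)
```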

Sidenote: you don't want to rank your posts/comments simply by raw vote count. Instead, take a look at reddit's comment sorting system.
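
The core idea behind reddit's comment sorting is to rank by the lower bound of the Wilson score confidence interval instead of the raw vote difference, so a handful of unanimous votes can outrank a large but controversial pile of them. A sketch:

```python
from math import sqrt

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the 'true'
    fraction of upvotes. z = 1.96 gives a 95% confidence level;
    reddit uses a slightly different z, but the idea is the same."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    phat = upvotes / n
    return ((phat + z * z / (2 * n)
             - z * sqrt((phat * (1 - phat) + z * z / (4 * n)) / n))
            / (1 + z * z / n))

# A unanimous 6-0 comment outranks a controversial 60-40 one:
print(wilson_lower_bound(6, 0))    # ~0.61
print(wilson_lower_bound(60, 40))  # ~0.50
```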

Another tool you'll find useful is the good old ban. A temporary ban means that a given user cannot post for a given period of time, while a permaban (permanent ban) means that the user is kicked out forever. This is a standard tool in every forum, but we can still do better: given that nothing stops a banned user from creating a new account and continuing their toxic behavior (and remember, now they're pissed about being banned), you can use a hellban. When a user is hellbanned, no one but them can see their activity. The user can still log in, comment and post, but this activity is invisible to everyone else. From their point of view, it looks as if no one cares about them anymore, and it's not unusual for them to just leave.
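
Implementation-wise, a hellban is just a visibility filter applied wherever content is listed. A minimal sketch, assuming a hypothetical Post record and a set of hellbanned user ids:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_id: int
    text: str

HELLBANNED: set[int] = {42}  # hypothetical; you'd store this in your DB

def visible_posts(posts: list[Post], viewer_id: int) -> list[Post]:
    """Everyone sees normal posts; a hellbanned user also sees their
    own, so from their side nothing looks wrong."""
    return [p for p in posts
            if p.author_id not in HELLBANNED or p.author_id == viewer_id]

posts = [Post(1, "hello"), Post(42, "invisible to most")]
print([p.text for p in visible_posts(posts, viewer_id=7)])   # ['hello']
print([p.text for p in visible_posts(posts, viewer_id=42)])  # both
```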

Finally, you might also want to consider a "report" button, through which users can report unruly behavior. This should be more or less automated, but you cannot blindly trust these reports: you risk trolls banding together and reporting users at will. To prevent this, an automated recourse method should be enough - a moderator is notified, and the user is not fully banned until a final decision is reached. And if you want to go the extra mile, you could add a "protected" flag that keeps certain users from being reported.
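
Putting those pieces together, report handling could look roughly like this. The threshold, the PROTECTED set and the helper functions are all hypothetical stand-ins for your own logic:

```python
REPORT_THRESHOLD = 5          # arbitrary cutoff before escalation
PROTECTED: set[int] = set()   # user ids that cannot be reported
report_counts: dict[int, int] = {}

def mute_provisionally(user_id: int) -> None:
    print(f"user {user_id} muted pending review")  # stub

def notify_moderators(user_id: int) -> None:
    print(f"moderators notified about user {user_id}")  # stub

def handle_report(reported_id: int) -> None:
    if reported_id in PROTECTED:
        return  # the "extra mile" flag: this user is immune to reports
    report_counts[reported_id] = report_counts.get(reported_id, 0) + 1
    if report_counts[reported_id] >= REPORT_THRESHOLD:
        # Provisional only: a moderator makes the final call, so a
        # troll brigade can't get someone banned outright.
        mute_provisionally(reported_id)
        notify_moderators(reported_id)
```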

That's about all you can do at this level. There are no new ideas here, which is good - now you know that these concepts have been tried and tested before. In the next two posts I'll discuss ideas that might not make as much sense, so stay tuned.