Vivek from QuestionPro wrote about censorship and community management at a high level. He raised some great points, and I’ll share my perspective.
Historically, ISPs managed user-generated content for Terms of Service (TOS) violations and would essentially disconnect unruly customers. Fast forward to today: ISPs don’t manage user behavior the way they did years ago. Instead, content providers have outsourced the practice of moderating and managing inappropriate activity to the users. (Web 2.0 speakers call this “crowdsourcing.”)
This has achieved a lot of efficiency for content providers; conversely, it has also opened the door to some pretty intense mistakes when you leave it to the users to manage your community. The HD-DVD Digg community backlash is probably the most vivid example of users and community founders having different ideas about what’s acceptable. As Digg descended into chaos, founder Kevin Rose responded to the site’s most passionate users, admitting Digg made a mistake in deleting stories without communicating about it.
Vivek mentions an “Angry Mob” effect as a result. Oh, don’t I know it!
During my time at AOL, I’d say that a good 80% of reported content was in fact legitimate and complied with the TOS. The remaining 20% of violations were mostly fiery online battles between one and three users at a time. It wasn’t that users maliciously reported otherwise legitimate content; people just have different tolerances for what is “acceptable.” Over time, though, some folks have gotten hip to the “mass-TOSing” activity.
Without leaking any intel, I had some ideas on how to address this, but I don’t think the right people heard them. One technique involves pulling reported content into a staging area for staff review, then pushing it back if it’s acceptable. Another was to score reporters on the accuracy of their reports. This would let the system penalize users who engage in mass-TOSing and surface intel on abusers of the reporting system. A rough sketch of the scoring idea follows below.
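To make that concrete, here’s a minimal hypothetical sketch in Python. None of this reflects an actual AOL system; the names, the neutral 0.5 prior for new reporters, and the feedback loop are all my own illustration:

```python
# Hypothetical sketch of reporter accuracy scoring. New reporters start
# with a neutral score, and every moderator decision nudges it up or down.

from dataclasses import dataclass

@dataclass
class Reporter:
    user_id: str
    upheld: int = 0    # reports a moderator agreed with
    rejected: int = 0  # reports a moderator dismissed

    @property
    def accuracy(self) -> float:
        total = self.upheld + self.rejected
        # Neutral 0.5 prior until the reporter builds a track record.
        return self.upheld / total if total else 0.5

def record_moderator_decision(reporter: Reporter, report_upheld: bool) -> None:
    """Feed each staff decision back into the reporter's score."""
    if report_upheld:
        reporter.upheld += 1
    else:
        reporter.rejected += 1

# A chronic mass-TOSer ends up with accuracy near zero, so their future
# reports can be weighted accordingly (or throttled outright).
mob_member = Reporter("sockpuppet42")
for _ in range(20):
    record_moderator_decision(mob_member, report_upheld=False)
print(mob_member.accuracy)  # 0.0
```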
It’s still possible today to form an angry mob, essentially kick legitimate users off the service, pick them off one by one, and rejoice about it in their community areas. It takes even less effort for abusers to just create a drawer full of sock puppets and engage in this activity.
It’s my belief that people crave attention in this attention-starved online environment. More endorphins are released when someone can successfully inflict “internet pain” on someone else than from relaxing community discourse. These people are often known as trolls, so I need not offer any more words on them.
Allow me to share my perspective on these topics relating to managing user-generated content:
- Terms of Service / Guidelines: The Terms of Service is a legal document that people rarely review, and when they do, they interpret it the way a defense attorney would, looking for loopholes. “Community Guidelines” go over a little easier, since people don’t like being told in legalese what they can and can’t do. Guidelines also offer additional shades of gray for users and content providers in handling violations.
- Reporting / Flagging Violations: Once people began reporting content purely to retaliate against someone, the 1.0 model of content moderation was placed at severe risk. Users should have the ability to promote content just as much as they can demote it. They should also have to provide reasoning for why something is a violation, making it take some effort to report offensive content. While that may contradict user-experience advice, it will result in more legitimate violation reports making their way to you. Content that gets flagged by a mob shouldn’t be deleted or removed… instead, it should be promoted to the top of a moderation queue so it gets handled quickly. If a moderator finds the content acceptable, it becomes “greenlit” against being reported again, with some exceptions. (A sketch of this flow follows after this list.)
- Appropriate “Inappropriate-ness:” I’ve sometimes dreamt of what it would be like to segment community content effectively by interest. That is, provide adult content (read: gigs and gigs of free porn) alongside tons of content for children, teenagers, and a variety of other social niches. I’m not the perv here; I’m just going by the popular search queries from AOL users. Obviously, adult content can be monetized at higher margins. How does this prevent censorship? Well, if people are divided by their interests, very few will cross lines and cause trouble unless they want trouble.
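As promised above, here’s a hypothetical sketch of that flag-to-queue flow: flags weighted by reporter accuracy push content up a moderation queue rather than deleting it, and a moderator’s approval “greenlights” the item. The names, the weighting, and the heap bookkeeping are all my own assumptions:

```python
# Hypothetical sketch: mob-flagged content rises to the top of a review
# queue instead of being auto-deleted; a human makes the final call.

import heapq
import itertools
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContentItem:
    content_id: str
    flag_weight: float = 0.0  # cumulative reporter accuracy, not raw counts
    greenlit: bool = False    # cleared by staff; further flags are ignored

class ModerationQueue:
    def __init__(self) -> None:
        self._heap: list = []
        self._tiebreak = itertools.count()

    def flag(self, item: ContentItem, reporter_accuracy: float, reason: str) -> None:
        if item.greenlit or not reason.strip():
            return  # greenlit content is protected; flags require a reason
        item.flag_weight += reporter_accuracy
        # Negate the weight so the heaviest-flagged item pops first.
        heapq.heappush(self._heap, (-item.flag_weight, next(self._tiebreak), item))

    def next_for_review(self) -> Optional[ContentItem]:
        while self._heap:
            neg_weight, _, item = heapq.heappop(self._heap)
            if item.greenlit or -neg_weight != item.flag_weight:
                continue  # stale entry from an earlier flag; skip it
            return item
        return None

def moderator_review(item: ContentItem, acceptable: bool) -> None:
    # Content stays live throughout the review process.
    if acceptable:
        item.greenlit = True  # protected from being mass-reported again
```

Notice the mob can only reprioritize the queue; removal still requires a human decision, which is the whole point.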
As the Web matures into Web 3.0 (let’s wait until we get to Web 2.0 first…), user-generated content becomes the asset and must be protected accordingly. Eventually, content providers will only need to facilitate the framework and the user experience while the users build the castle. Protecting user-generated content is just as important as removing it. The last thing we want is a bunch of angry mobs deciding what content gets to stay.