It’s probably a good time to share my thoughts on the tireless debate surrounding online community moderation of user-contributed content: what works, what doesn’t, and what the future holds for moderation of the Web.
Moderation has a variety of interpretations, ranging from full review of every user contribution, such as postings and uploads, down to no review at all. This range is best explained in the illustration below:
With full moderation, an online area can be declared “safe” but possibly not “enjoyable.” Conversely, an area with no moderation can be declared “enjoyable” but possibly not safe. This tension has surfaced in many online community spaces ever since the growth of social media in the ’90s.
During AOL’s growth, many new concerns arose regarding the moderation of online community areas, specifically pertaining to children and adults’ access to them, defamation, and defining where the line of responsibility fell in certain situations. As any online provider would be expected to, AOL erred on the side of safety and committed to implementing various levels of access (parental controls) and, subsequently, moderation throughout the service. AOL wasn’t alone in this challenge; Yahoo faced similar problems within its Chat and Message Boards services, too. Yet while AOL cultivated an image of safety and security, a handful of outspoken users cast a lasting shadow over it as overly restrictive.
Then came MySpace. Its users had complete and total control of their online profiles, with no expectation of moderation being performed. As the service matured, attention again focused on adults’ access to minors. After much pressure, MySpace agreed to deploy various types of moderation of user-uploaded content, earning the approval of 49 state Attorneys General to keep its service safe from child predators. While MySpace earned the green light from every AG except Texas, it still can’t escape the shadow of insecurity and vulnerability cast by the media.
These two examples sit at opposite ends of the spectrum and show what happens when you rest on either side. While everyone expects an online provider to sit right in the middle, that middle ground has proven difficult to execute. Still, a provider can deploy a combination of different types of moderation to blend the experiences together, allowing users to communicate freely and safely. By providing users tools to self-manage online areas, a provider puts them in the driver’s seat: they can manage content to their liking while alleviating the burden on the provider’s shoulders.
Best Practices: What Works?
- Communicating and setting proper expectations of community areas.
- Providing open two-way communication paths to a provider’s moderation team, or at least a liaison.
- Standing firm on your community guidelines, but permitting a path for review in the event of an error.
- Using your best judgment: look at it from the author’s perspective, as well as the other spectators’.
- Empowering users to control their own experience with tools such as blocking and filtering.
- Continually assessing the needs of various community areas, adapting to users’ needs.
- Setting internal expectations on acceptable “friendly fire” (legitimate casualties of moderation).
- Deploying adaptive enforcement technology and policies that can handle almost any type of abuse.
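The blocking/filtering point above can be made concrete with a small sketch. This is a hypothetical illustration, not any provider’s actual implementation: each user keeps a personal blocklist, and the provider filters that user’s view of the content rather than deleting anything globally, which is what keeps the moderation burden off the provider.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str

@dataclass
class User:
    name: str
    # Authors this user has chosen not to see (hypothetical structure).
    blocked: set = field(default_factory=set)

    def block(self, author: str) -> None:
        self.blocked.add(author)

def visible_posts(user: User, posts: list) -> list:
    """Return only the posts whose authors this user has not blocked.

    The content itself is untouched; only this user's view changes.
    """
    return [p for p in posts if p.author not in user.blocked]

posts = [Post("alice", "hello"), Post("troll", "buy pills"), Post("bob", "hi")]
viewer = User("carol")
viewer.block("troll")
print([p.author for p in visible_posts(viewer, posts)])  # ['alice', 'bob']
```

The key design choice is that blocking is per-viewer, so one user’s filter never becomes another user’s censorship.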
Best Practices: What Doesn’t Work?
- Positioning yourself on any one extreme of the moderation spectrum.
- Allowing users more than 50% editorial control over content areas/portals.
- Shutting online areas down in response to excessive negative user activity.
- Throwing more moderation staff (aka, “body count”) into an area with excessive negative user activity.
- Relying on static enforcement technology and/or policies.
While enforcement is a different stage (and an entirely separate topic), it should be reasonable, easy to understand and difficult to exploit, keeping in mind the same concepts noted above.
Over the past few years, moderation has taken on a different form: aggregated user feedback. Giving users the tools to kick out the trolls, psychos and haters pays off because passionate users will help defend your community, enabling you to aggregate data on community abuse and react to it more quickly. Digg, Propeller and Reddit have defined this new type of moderation in a valuable way by letting users vote for or against a given piece of content, so offensive content drops from public view in seconds, not days, while engaging content attracts new users.
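The aggregated-feedback mechanic can be sketched in a few lines. This is an illustrative model only, not Digg’s, Propeller’s or Reddit’s actual ranking algorithm; the hide threshold is an assumed tunable parameter.

```python
# Hypothetical vote-based moderation: content whose net community score
# falls below a hide threshold drops out of the default view, and what
# remains is ranked best-first.

HIDE_THRESHOLD = -3  # assumed tunable parameter, not any real site's value

def net_score(upvotes: int, downvotes: int) -> int:
    return upvotes - downvotes

def visible(items: list) -> list:
    """Drop items the community has voted below the threshold, rank the rest."""
    shown = [i for i in items if net_score(i["up"], i["down"]) > HIDE_THRESHOLD]
    return sorted(shown, key=lambda i: net_score(i["up"], i["down"]), reverse=True)

items = [
    {"title": "helpful tip", "up": 12, "down": 1},
    {"title": "troll bait", "up": 0, "down": 9},
]
print([i["title"] for i in visible(items)])  # ['helpful tip']
```

Because the hiding decision is driven entirely by aggregated votes, abusive content disappears as fast as users react to it, with no moderator in the loop for the common case.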
I believe that, out of the evolution of online community dynamics, there will come a time when moderation becomes a non-issue: Internet users will eventually mature out of their abusive ways, and sophisticated anti-abuse technology will become a prerequisite in any community product that expects to scale beyond 1MM+ users. Either that, or Congress will mandate restrictions for community providers by means of age or identity verification.
Note: Some of this information stems from my experience at AOL; however, I’m no longer with the company and am not in a position to address AOL-specific concerns, though I can share my overall community-management insight.