On Twitter, Be Your Own Censor.
When I saw this Polygon post last month, about how Twitter “gives so little attention to the now-routine harassment experienced by so many members of the service” because “it drives engagement,” I thought, “surely there’s a market solution”:
I’ve done some Twitter scripting, and the three proposed tools would, I think, be easy for a third party to implement. — Mark W. Bennett (@MarkWBennett) July 31, 2014
The three proposed tools were allowing a user to block all users whose accounts are less than 30 days old, allowing a user to block all users whose follow counts are less than some threshold, and allowing a user to block any user who has been blocked by more than N people she is following. The proposed tools came from this post, titled, “The least Twitter can do.”
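The three rules are simple enough to sketch. Here's a minimal illustration in Python — the account fields (`created_at`, `followers_count`, `screen_name`) and the `blocked_by_my_follows` tally are assumptions about what a third-party tool would pull from Twitter's API, not real API objects:

```python
from datetime import datetime, timedelta

def should_block(account, blocked_by_my_follows,
                 min_age_days=30, min_followers=10, max_blockers=5):
    """Return True if any of the three proposed rules would block `account`.

    `account` is an assumed dict; `blocked_by_my_follows` maps a handle to
    how many of the people I follow have blocked it."""
    # Rule 1: block accounts less than `min_age_days` old.
    if datetime.utcnow() - account["created_at"] < timedelta(days=min_age_days):
        return True
    # Rule 2: block accounts whose follower count is below a threshold.
    if account["followers_count"] < min_followers:
        return True
    # Rule 3: block accounts blocked by more than N people I follow.
    if blocked_by_my_follows.get(account["screen_name"], 0) > max_blockers:
        return True
    return False
```

Each rule is a cheap heuristic for "probable throwaway harassment account," which is why a third party with API access could implement all three without Twitter's help.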
Today I saw a Slate post from the beginning of August (a week after the “harassment drives engagement” post) discussing three free-market solutions to the problem of Twitter harassment: Block Together (an app that “allows users to ‘share their list of blocked users with friends’ and, if they like, ‘auto-block new users who at-reply them’”), Flaminga (which “helps Twitter users conspire to create secret mute lists they can share with one another to silence users they don’t want to hear”), and the Block Bot (which identifies Twitter’s “anti-feminist obsessives”…).
Problem solved, right?
Not so fast.
These apps won’t actually inspire Twitter to shut down the serial abusers who use their Twitter accounts to harass and threaten women. They won’t help attract serious legal attention to their crimes. And they won’t compel Twitter to instruct its brilliant developers to imagine new sitewide solutions for the problem, or else lend its considerable resources toward educating government officials and law enforcement officers about the abuses its users are suffering on its network.
Twitter provides a communication channel. It is a channel and a metaphor that didn’t exist nine years ago, and it is free. Twitter’s users give it nothing but their attention. Twitter owes its users nothing.
If Twitter chose to “instruct its brilliant developers to imagine new sitewide solutions for the problem, or else lend its considerable resources toward educating government officials and law enforcement officers about the abuses its users are suffering on its network,” there would be nothing wrong with that.
On the other hand, Twitter could explicitly market itself as a place to abuse and be abused by others; it could even rig its API so that free-market solutions would be impossible. There would be nothing wrong with that either (and, sadly, plenty of people would sign up).
Twitter is a business. It exists for profit. If shutting down serial abusers were cost-effective it would do it. Twitter users could theoretically make it cost-effective for Twitter to shut down serial abusers by boycotting Twitter, but that’s not going to happen. There’s no viable alternative, and besides, where would you go to organize a boycott of Twitter?
If actively encouraging abuse and making it impossible for third parties to reduce the abuse that users see were cost-effective, Twitter would do that instead. It will never be cost-effective because, contrary to the suggestion in the July Polygon post I quoted first, Twitter will get more of its users’ attention if those who don’t want to be abused are able to customize their experience to reduce the abuse they see.
What Twitter has done is somewhere in between. It provides the channel, and it allows developers to build things like Block Together, Flaminga, and The Block Bot, but it hasn’t poured money into protecting users from abuse. Even when it isn’t cost-effective to shut down serial abusers, it’s cost-effective to allow developers to create tools to allow a user to eliminate abusers from her timeline.
(Here’s how Twitter works: if Althea and Bartimus have Twitter accounts (@A, @B), each can choose to follow or block the other. If @A is following @B, @A will see everything @B says on Twitter. If @A blocks @B, @A will see nothing that @B says on Twitter. If @A neither follows nor blocks @B, then @A will see only what @B says mentioning @A. If Carla creates an account (@C) and tweets “@A [something abusive],” @A will see it unless she is using some third-party solution (for example, blocking new accounts or accounts with few followers).)

Twitter’s solution, which decentralizes control over users’ timelines to those users, is not good enough for the authors of the Slate and Polygon posts. It’s not good enough that @A has third-party options for keeping @C from contacting her; Twitter must keep @C off the channel, or at least spend more of its money trying to do so.

The notion that the provider of the channel, Twitter, should police the content on the channel has appeal, I guess, to those who are convinced that those running Twitter will always share their own ideological orthodoxy. But leaving the channel wide open and letting users, via third-party software, choose what they will and won’t be exposed to is the better option for anyone who can even imagine some day being politically incorrect. (Though that particular horse has left the barn.)
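The follow/block/mention rules in the parenthetical above can be captured in a few lines. This is a toy model of the described behavior, not Twitter’s actual implementation; handles, follows, and blocks are plain Python sets:

```python
def timeline_shows(viewer, author, tweet, follows, blocks):
    """Return True if `viewer` sees `author`'s `tweet`.

    `follows` and `blocks` map a handle to the set of handles that
    user follows or blocks."""
    if author in blocks.get(viewer, set()):
        return False                 # blocked: @A sees nothing from @B
    if author in follows.get(viewer, set()):
        return True                  # followed: @A sees everything @B says
    return "@" + viewer in tweet     # otherwise: only tweets mentioning @A
```

Under this model, a brand-new account @C reaches @A simply by including “@A” in a tweet — which is exactly the opening the third-party tools (blocking new or low-follower accounts) try to close.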