In the wake of recent terrorist attacks, I've heard several media outlets talk about making social media sites like Facebook and YouTube responsible for removing groups and posts that promote terrorist activities. They almost seem to point the finger at these platforms, partially blaming them for the attacks. They ask whether it's possible for social media sites to remove groups and posts that promote violence. The answer is yes. As the owner of a business intelligence company who has worked in intelligence projects, big data, and data analytics for over 21 years, I can tell you that this has been possible for years.
Companies, including consumer goods companies, monitor social media to create personalized marketing experiences for their customers. This can be done for any industry, including government. So when I hear people ask whether social media sites can shut down groups that recruit for violent causes, I say yes, they can. The real question is not "CAN social media sites police themselves and shut down these groups?" It's "SHOULD they?" In my opinion, the answer is no. Why not? I think there are three key reasons why social media should not block these groups.
1. First, blocking and monitoring are two different things. Blocking removes the group; monitoring lets us scan chatter for key words and phrases. This has become a chess game, and it is our job to outmaneuver the bad guys. There is more value in monitoring and knowing what discussions are taking place, because it gives us insight into activities we may be able to prevent, than in shutting these groups down. They can provide extremely valuable information. Gathering social media chatter, applying sentiment (positive, negative, or neutral), and searching for key words is a common practice that falls under the category of "big data." Harnessed and analyzed correctly, this data can help us predict and prevent violent incidents.
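The keyword-and-sentiment monitoring described above can be sketched in a few lines. This is a minimal illustration, not a production system: the watch-list terms, sentiment lexicons, and sample posts are all hypothetical placeholders, and real pipelines use trained sentiment models rather than word lists.

```python
# Illustrative sketch: flag posts containing watch-list terms and tag
# each flagged post with a crude lexicon-based sentiment label.
# All word lists below are invented placeholders for the example.

NEGATIVE_WORDS = {"attack", "hate", "destroy"}
POSITIVE_WORDS = {"love", "peace", "support"}
WATCH_TERMS = {"recruit", "attack"}  # hypothetical watch list

def sentiment(text):
    """Return 'positive', 'negative', or 'neutral' by counting lexicon hits."""
    words = set(text.lower().split())
    pos = len(words & POSITIVE_WORDS)
    neg = len(words & NEGATIVE_WORDS)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def flag_posts(posts):
    """Return (post, sentiment) pairs for posts containing watch-list terms."""
    hits = []
    for post in posts:
        if set(post.lower().split()) & WATCH_TERMS:
            hits.append((post, sentiment(post)))
    return hits

posts = [
    "We love peace and support our community",
    "Join us as we recruit members to attack the enemy",
]
print(flag_posts(posts))
```

Only the second post matches the watch list, and it is tagged negative. The point of the sketch is the architecture: keep the content visible, scan it, and surface what matters, rather than delete it outright.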
2. Second, there are many social media sites today, including sites designed specifically for recruiting and spewing hateful rhetoric. If we shut these groups out of leading social media sites like Facebook, we push them to private sites where we have limited ability to monitor what they are saying. These private sites already exist, so our access to their activities is already limited.
3. Third, who would be responsible for doing the policing and the blocking? What rules would they put around it? Who's to say that if a conservative or liberal were responsible for monitoring this, they wouldn't use this censorship capability to boost their own agenda and limit the speech of those with opposing views?
Obviously, I don't agree with these groups and don't like how they are leveraging social media for recruitment. But I believe that giving social media sites the power to decide what they deem offensive or dangerous is dangerous in and of itself.