The Challenge

How can we strengthen the Internet for free expression and innovation?

Winning entry

OnlineCensorship.org - Protecting Free Expression on the World's Most Powerful Commercial Social Media Platforms

http://OnlineCensorship.org is a new online free speech project arising from the critical need for a more systematic approach to resolving free expression cases on the world's most powerful social media platforms. The project’s goal is to defend and advocate for the speech rights of civil society groups, grassroots activists, citizen media makers, and independent journalists on social media. http://OnlineCensorship.org is the first project to attempt to systematically solicit “incident reports” on these cases, collect data on their prevalence, and test possible steps for addressing them.
Who are the users or target customers of your project, and what have you learned from them so far? Please give specific examples.

Our target users are individuals around the world who have experienced content removals or account takedowns of protected speech on social media websites.  

We know this is a broad definition, which is why we began with an initial round of outreach to superusers: human rights defenders, activists, and organizations already aware of this problem.

That said, we were surprised by some of the submissions we received during our alpha round! We found that certain users within our networks were experiencing this phenomenon disproportionately. Users in Hong Kong and Palestine, for example, were frequent reporters in round one. We also found that human rights organizations aren't immune to the whims of corporate Terms of Service (TOS) enforcement - we were able to help a major organization publicize a takedown of its content, resulting in its restoration!

What assumptions are you making in what you propose, and how will you test them?

The major assumption that we are making is that social media companies are over-enforcing their Terms of Service (TOS); that is, they are taking down content that does not actually violate company policy.  We suspect that this often happens unintentionally, due to the speed with which these companies must respond to user reports of TOS violations.

We will test these assumptions by comparing data that we receive via incident reports against the TOS of the various companies concerned. We hope to be able to provide a set of concrete examples demonstrating where companies are "overcomplying" with their own policies.
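The comparison described above can be sketched in code. This is a minimal illustration only, assuming a hypothetical report schema (the field names `company`, `policy_cited`, and `violates_tos` are ours, not the actual Onlinecensorship.org data model): for each company, we would compute the share of reviewed takedowns that did not in fact violate the cited policy.

```python
from collections import defaultdict

# Hypothetical incident-report records; the field names are illustrative,
# not the actual Onlinecensorship.org schema.
reports = [
    {"company": "Facebook", "policy_cited": "nudity", "violates_tos": False},
    {"company": "Facebook", "policy_cited": "nudity", "violates_tos": True},
    {"company": "Twitter", "policy_cited": "spam", "violates_tos": False},
]

def over_enforcement_rates(reports):
    """Share of reviewed takedowns found NOT to violate the cited TOS clause."""
    totals = defaultdict(int)
    wrongful = defaultdict(int)
    for r in reports:
        totals[r["company"]] += 1
        if not r["violates_tos"]:
            wrongful[r["company"]] += 1
    return {c: wrongful[c] / totals[c] for c in totals}

print(over_enforcement_rates(reports))  # {'Facebook': 0.5, 'Twitter': 1.0}
```

A rate well above zero for a given company would be the kind of concrete evidence of "overcompliance" we hope to present.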

How will you get your project in front of the necessary people or organizations?

Luckily, we're uniquely well positioned to do this.  Our team is situated across advocacy and academia, with advisors from the EFF, the Berkman Center, Global Voices, and Penn's Annenberg Center for Communication.  These partners have already established channels of communication with our target companies and are able to talk to them about policy.
 
We plan to expand our advisory group and reach out to human rights bodies and other relevant organizations to promote the project, so as to create the best possible data set. Both EFF and Global Voices (with which we work closely) have connections with dozens of news media and free speech advocacy organizations all over the world. We will reach out to these groups to encourage their community members and audiences to file incident reports through the Onlinecensorship.org platform.
 
Finally, technology, information design, and data visualization are key strengths of our team through our connection with Visualizing Impact. This greatly expands our possibilities for raising awareness of our data and insights among key target audiences. We hope that this data set will benefit not only the advocacy community, but also academics conducting research on these topics.

What are the obstacles to implementing your idea, and how will you address them?

We have thought seriously about the risks and challenges we may face in implementing this project. The most notable of these are:
 
Accuracy and quality of information
We are seeking data on legitimate free speech cases. We may receive reports with false or misleading information, or reports on content that has been removed because the user was engaging in abusive behavior against other users. During our startup phase, we are evaluating reports to better understand the problems associated with quality control, vetting, and verification. We have already installed technical anti-spam and anti-robot measures, and we are now focused on refining our systems and processes to minimize the number of reports we receive that clearly fall outside the parameters of legitimate free speech cases.
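The vetting step described above can be illustrated with a small triage sketch. The field names and thresholds here are assumptions for illustration, not the checks running on the live platform: the idea is simply to flag incomplete, spammy, or duplicate submissions before a human reviewer spends time on them.

```python
import re

def triage_report(report, seen_urls):
    """Return quality-control flags for a submitted incident report.

    Field names and thresholds are illustrative assumptions, not the
    live Onlinecensorship.org system.
    """
    flags = []
    # A legitimate case needs these basics before a human reviews it.
    for field in ("platform", "content_url", "description"):
        if not report.get(field, "").strip():
            flags.append("missing:" + field)
    # Crude link-spam heuristic: genuine incident descriptions rarely
    # contain more than a handful of URLs.
    if len(re.findall(r"https?://", report.get("description", ""))) > 3:
        flags.append("spam:too_many_links")
    # Duplicate submissions about the same content get merged, not re-reviewed.
    if report.get("content_url") in seen_urls:
        flags.append("duplicate:content_url")
    return flags

print(triage_report(
    {"platform": "Facebook", "content_url": "", "description": "Page removed"},
    set(),
))  # ['missing:content_url']
```

Reports that pass such automated checks would still go to human evaluation; the filter only reduces the volume of clearly out-of-scope submissions.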
 
Privacy and security
The identity of users filing reports, and any information that could be used to identify them, will not be shared publicly or with companies without users’ explicit opt-in. We have a first draft of a privacy policy that addresses these issues and makes clear to users that they control whether their case is shared with companies or with the public.
 
Engagement of target companies
What if the companies we target don't listen? While this is a possibility, our experience with Google, Twitter, Yahoo, and Facebook thus far indicates a willingness to work with us once the project is up and running.

Whatever obstacles we might face, we think this is a challenge worth taking on, and we have stacked the deck with a fantastic group of advisors to help us think through potential issues.

How much do you think your project will cost, and what are the major expenses?

To get this project truly up and running, we are seeking $400,000. The budget for the project falls into the following major areas:

Management and Editorial Team
  • Improving incident report questionnaires;
  • Finalizing the privacy policy;
  • Evaluating incoming incident reports;
  • Engaging target companies on individual cases and systemic issues;
  • Expanding target platforms - users can currently report incidents experienced on Google+, Facebook, YouTube, Twitter, and Flickr, but we hope to expand to other platforms such as Blogspot, WordPress.com, Tumblr, Pinterest, Dailymotion, Vimeo, and Instagram. 

Design and Technical Team
  • Implementing design improvements;
  • Designing and launching our mobile platform, which is key because users in the developing world increasingly, and in some cases exclusively, access social networks through mobile technology;
  • Beginning to integrate data visualization tools that will map locations and dates of reported incidents. 
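The mapping work above rests on a simple aggregation step, which can be sketched as follows. The records and field names are assumptions for illustration: incidents are bucketed by country and month, producing the kind of table a map or timeline visualization would be driven by.

```python
from collections import Counter
from datetime import date

# Illustrative records; the fields are assumptions for this sketch.
incidents = [
    {"country": "HK", "reported": date(2012, 6, 2)},
    {"country": "PS", "reported": date(2012, 6, 15)},
    {"country": "HK", "reported": date(2012, 11, 8)},
]

# Bucket counts by (country, year-month): each bucket becomes one point
# on a map or one bar on a timeline.
buckets = Counter(
    (i["country"], i["reported"].strftime("%Y-%m")) for i in incidents
)
print(buckets[("HK", "2012-06")])  # 1
```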

Translation
  • Making Onlinecensorship.org available in Arabic, Chinese, Farsi, French, Portuguese, Russian, Spanish, and Vietnamese, with more languages to follow. 

Engagement
  • Developing and implementing a community engagement strategy;
  • Spreading the word about Onlinecensorship.org at major venues over the next year.   

How do you see this project scaling and sustaining itself in the long term, if needed?

As noted above, we plan to expand our project to include additional social media platforms and languages, as well as to incorporate innovative data visualization elements. Fortunately, much of the initial budget is to get the project off the ground - web design and translation can be expensive, but most of that work needs to happen only once. After that, we hope to spread the costs across several organizations through a partnership model, similar to Chilling Effects. We imagine that the project will eventually need one full-time person (or a combination of part-timers) to maintain the site and wrangle the data, costing less than $100,000/year.
 

Need for OnlineCensorship.org
We treat social media platforms as if they are the public sphere, but they are actually commercial entities with their own proprietary terms of service. Facebook and YouTube have over 1 billion users and Twitter has millions, making these platforms the "sovereigns of cyberspace" with the power to police and regulate our online lives, including our speech.

In many countries, particularly where the press is not free, independent media and activists are heavily dependent on the Internet to reach their audiences or convey ideas and messages. They are also increasingly dependent on large social networking platforms run by corporations.  A significant number of activists and journalists have encountered problems maintaining their accounts and content on Facebook and other social media. Community policing of content is done quickly - anyone can report anything, and the employee monitoring reports has merely a split second to make a determination. This results in regular takedowns of legitimate content.

This is an impediment to the work of journalists, bloggers, and activists, and an obstacle to their ability to exercise their freedom of expression. Unfortunately, cases in which content or accounts have been censored are rarely resolved unless the user is lucky enough to have the right connections and support. This problem needs to be addressed strategically and systematically. 

The actual scope of censorship affecting journalists’ and activists’ ability to do their work is not known – to the public or, we believe, to the companies themselves. Solicitation of "incident reports" and follow-up surveys would enable us to identify, track, analyze, and publicize the facts of these incidents. Only with the proper research and data will it be possible to convince companies that their policies are inflicting unacceptable levels of “collateral damage” on the free expression rights of journalists and activists, and that addressing this requires greater prioritization of staff and resources.

Examples
On November 7, 2012 administrators of a Facebook group called “The Uprising of Women in the Arab World” discovered that their accounts had been temporarily disabled due to unspecified violations. Facebook deleted a photo on the page of a woman with cropped hair and a tank top holding a passport photo of herself fully veiled, attached to a sign protesting her lack of freedom. Other postings calling on followers to support the woman on Twitter were also deleted. Efforts by the page’s administrators to contact Facebook met with no response. After the press began running stories about what had happened, Facebook responded to a reporter’s queries with a statement acknowledging that the photo had been taken down in error and that “an item was removed because it was reported to us and found to have violated our community standards.” [1]

Egyptian activist Sally Zohney explained to a reporter that this page had become as important to a growing women’s movement in the Arab world as another Facebook page, “We are all Khaled Said,” had been for Egyptian human rights and democracy activists. The latter page was not only a driving force for the growing anti-torture movement in Egypt in late 2010, but also played a central role in organizing the January 25, 2011 Tahrir Square protest that eventually led to the fall of the Mubarak regime. Interestingly, “We are all Khaled Said” was taken down by Facebook administrators in the fall of 2010 due to terms of service violations (the administrators, fearing arrest and worse, were not using their real names), and the page was only reinstated after international human rights groups intervened with Facebook executives.

Activists in the Middle East and North Africa are not the only Facebook users complaining of account deactivations and content removals of journalistic and political speech that is clearly protected by the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights. In Hong Kong last year, several days before planned protests to mark the anniversary of the 1989 Tiananmen Square Massacre, a large number of user accounts were suddenly suspended without explanation. Many of these users reported in other social media and to journalists that the suspensions occurred soon after they posted messages urging their friends to attend the June 4 protest. Suspended accounts included known democracy activists and journalists critical of Beijing [2]. Facebook later responded to journalists’ queries with vague explanations about a technical problem that was later fixed, but users themselves never received an explanation [3]. 

[1] Kristin Deasy, “Activists: Why is Facebook censoring this photo?” Global Post, November 8, 2012, at: http://www.globalpost.com/dispatch/news/regions/middle-east/121108/activists-why-facebook-censoring-photo
[2] Oiwan Lam, “Hong Kong: A Large Number of Facebook User Accounts Suspended at the Eve of Annual June 4 Vigil,” Global Voices Advocacy, June 2, 2012, at: http://www.globalpost.com/dispatch/news/regions/middle-east/121108/activists-why-facebook-censoring-photo
[3] Rebecca MacKinnon, “Ruling Facebookistan,” Foreign Policy, June 14, 2012, at: http://www.foreignpolicy.com/articles/2012/06/13/governing_facebookistan 

Our Objectives
  1. Learn more about the scope of this problem and better understand its impact on activists, citizen media makers, and journalists around the world;
  2. Educate affected groups and communities about the nature of the problem and the reasons behind it;
  3. Educate companies about the real-world human impact of failing to address the problem adequately;
  4. Establish formal channels of communication with companies to resolve specific cases and to identify solutions/improvements;
  5. Encourage companies targeted by our project to make concrete improvements to their systems and to the way they communicate with users about the reasons and processes behind content/account censorship;
  6. Develop a set of recommended best practices that all social media companies should follow if they want to be considered compatible with and supportive of the free speech rights of activists, citizen media makers and journalists. 

Our Action Plan
  1. Build and promote the Onlinecensorship.org platform (desktop and mobile) in order to solicit “incident reports” from journalists, online activists, and citizen media creators all over the world;
  2. Document cases in which social media accounts have been deactivated, or in which protected journalistic and political content and speech has been deleted; we will base the definition of protected speech on commonly agreed standards of international human rights law;
  3. When information is available, document whether the content has been deleted or blocked to enforce commercial rules or at the behest of governments;
  4. Conduct systematic surveys of affected communities and follow-up research to identify the nature and scope of problems first identified through the crowd-sourced reports;
  5. Educate activists, bloggers, and journalists about the reasons behind these problems, as well as possible steps to address or rectify them;
  6. Engage with companies about how their practices can be improved or changed, and organize grassroots campaigns to convince companies that their global reputations will be affected unless the rights of activists and journalists are better protected and defended on their platforms. 
In ONE sentence, tell us about your project to strengthen the Internet for free expression and innovation.

We believe that OnlineCensorship.org can play an important role in holding Internet companies publicly accountable for the way in which they exercise power over people’s digital lives.

Who will benefit from what you propose? What have you observed that makes you think that?

The beneficiaries of OnlineCensorship.org will primarily be journalists, bloggers, and activists whose speech rights have been challenged by social media platforms. Our data will also benefit academics and other researchers. From ad hoc incident reports, we know that affected individuals currently do not know whom to hold accountable for incidents of censorship on social media. In the future, OnlineCensorship.org will be the first resource for those who believe their rights have been infringed.

What progress have you made so far?

We are currently conducting pre-funding work to demonstrate the validity of our concept. We have drafted a privacy policy and incident report questionnaire (see http://beta.onlinecensorship.org/questions.php). The technical team has developed and tested a beta platform for crowd-sourced incident reports, including installation of a spam filter, captcha, and other technical mechanisms to prevent abuse of reporting.

What would be a successful outcome for your idea or project?

We will consider the first phase of this project highly successful if our online and mobile platform is widely promoted in mainstream global media and by partner advocacy organizations. We aim to receive several dozen valid incident reports per week, where a 'valid report' documents a case harming the free speech rights of journalists, citizen media creators, activists, or civil society groups. The second phase of this project will be successful if companies covered by the reporting system have agreed to set up regular channels for resolving individual cases and to discuss specifics of broader measures that would minimize harm to the legitimate activities of journalists, citizen media creators, activists, and civil society groups.

Who is on your team, and what are their relevant experiences or skills?

Our team comprises some of the top experts on the Internet and free speech rights, as well as a group of socially engaged techies and designers. Jillian York (senior advisor) is the Director of International Freedom of Expression at the Electronic Frontier Foundation and a member of the Board of Directors of Global Voices. York is at the forefront of research, writing, and advocacy related to the impact of private intermediaries on citizens’ free speech rights. Danny O'Brien (advisor), also of the Electronic Frontier Foundation, is heavily involved with efforts to resolve problems encountered by journalists and bloggers with account deactivation and content removal. Ryan Budish (advisor) provides valuable experience and advice on the advantages and challenges of crowd-sourcing information about censorship based on his experience with Herdict.org. Dr. Ben Wagner (advisor) is an academic whose research focuses on free expression online. Finally, Ramzi Jaber (technology and creative lead) houses the project at Visualizing Impact, a startup striving for social impact at the intersection of data science, design, and technology.

Location

Beirut, Lebanon and San Francisco, CA, US.

Comments

Peter M

April 22, 2014, 11:25AM
This project could benefit information and communications technology companies in their efforts to uphold the UN Guiding Principles on Business and Human Rights. Despite many commitments to provide users with channels to communicate complaints, this sector has more work to do on the right to remedy, the 'Third Pillar' of the Ruggie Framework. Have you considered how this project could support corporate efforts to deliver access to remedy, especially in areas where the courts are unwilling or unable? Thanks!

Jillian York

April 23, 2014, 7:06PM
Thanks Peter,

We agree, and we're certainly thinking about this! If this project receives funding, one of the things we would like to do is a legal analysis of how we might be able to support corporate efforts to deliver access to remedy.

Emi Kolawole

March 31, 2014, 01:57AM
Hi Jillian - This is a fascinating project. I am curious at what point the companies end and the nations in which they operate (and their respective governments) begin. In other words, is it the social media platforms that are really the source of the injustice, or higher powers? If it is the latter, how do you and your team plan to address them?

Jillian York

March 31, 2014, 6:49PM
Hi Emi,

Thank you! So, in my full time job at the EFF, I focus quite a bit on what governments do. And there are other organizations, such as the Global Network Initiative, tasked with guiding the relationships between US companies like Google and Facebook and foreign governments, and helping them make the right decisions about user data.

That said, there's an entire other realm of decisions that are made by companies alone - for example, the fact that Facebook requires you to use your real name or that Vine doesn't allow sexual content. The mechanisms used by the companies to track that content (essentially, user flagging) are sometimes broken, or faulty. That's what we're trying to address - the companies' role, rather than the role of the higher powers, which we feel is being adequately addressed by other groups.

Best,
Jillian

Emi Kolawole

March 31, 2014, 11:35PM
Thanks Jillian -- this was very helpful! Best of luck with the challenge!