A/HRC/26/49
United Nations
General Assembly
Distr.: General
6 May 2014
Original: English
Human Rights Council
Twenty-sixth session
Agenda item 9
Racism, racial discrimination, xenophobia and related
forms of intolerance, follow-up to and implementation
of the Durban Declaration and Programme of Action
Report of the Special Rapporteur on contemporary forms of
racism, racial discrimination, xenophobia and related
intolerance, Mutuma Ruteere*
Summary
The unprecedented, rapid development of new communication and information
technologies, such as the Internet and social media, has enabled wider dissemination of
racist and xenophobic content that incites racial hatred and violence. In response, States,
international and regional organizations, civil society and the private sector have undertaken
a variety of legal and policy initiatives.
In the present report, the Special Rapporteur examines the context, key trends and
the manifestations of racism on the Internet and social media, and provides an overview of
the legal and policy frameworks and the measures taken at international, regional and
national levels, as well as some of the regulatory norms adopted by Internet and social
network providers. He presents examples of measures taken to respond to the use of the
Internet and social media to propagate racism, hatred, xenophobia and related intolerance,
while highlighting the overall positive contribution of the Internet and social media as an
effective tool for combating racism, discrimination, xenophobia and related intolerance.
* Late submission.
GE.14-14128
Contents
                                                                                                    Paragraphs
I.    Introduction .............................................................................................................    1–5
II.   Activities of the Special Rapporteur .......................................................................    6–12
      A.    Country visits ..................................................................................................    6–8
      B.    Other activities ................................................................................................    9–12
III.  Use of the Internet and social media for propagating racism, racial discrimination,
      xenophobia and related intolerance ........................................................................    13–63
      A.    Context ............................................................................................................    13–15
      B.    Manifestations of racism, xenophobia, hate speech and other related forms of
            intolerance on the Internet and social media networks ...................................    16–21
      C.    Legal, policy and regulatory frameworks and measures taken at the international,
            regional and national levels and by Internet and social media providers .......    22–54
      D.    Civil society initiatives undertaken to counter racism, xenophobia, discrimination
            and other related hate speech on the Internet and social media ......................    55–63
IV.   Conclusions and recommendations .........................................................................    64–74
I. Introduction
1. The present report is submitted pursuant to Human Rights Council resolution 16/33.
It builds upon the report of the Special Rapporteur submitted to the General Assembly
(A/67/326), in which he examined issues relating to the use of new information
technologies, including the Internet, for disseminating racist ideas, hate messages and
inciting racial hatred and violence. It also builds upon reports of previous mandate holders,
taking into account recent developments and information gathered in particular through an
expert meeting and research conducted by the Special Rapporteur.
2. The present report should be considered together with the previous report of the
Special Rapporteur, in which he raised concerns at the use of the Internet and social media
by extremist groups and individuals to disseminate racist ideas and to propagate racism,
racial discrimination, xenophobia and related intolerance. States, national human rights
institutions, civil society organizations and academics have also expressed concern
regarding the increased use of the Internet and social media to proliferate, promote and
disseminate racist ideas by extremist political parties, movements and groups. Concerns
have also been raised about the rising number of incidents of racist violence and crimes
against, in particular, ethnic and religious minorities and migrants, and the lack of adequate
data on such violence and crimes. Extremist groups and movements, particularly far-right
movements, use the Internet and social media networks not only as a means to disseminate
hate speech and incite racial violence and abuse against specific groups of individuals, but
also as a recruitment platform for potential new members.
3. There are a number of legal, regulatory, technical and other practical challenges to the fight against racism and incitement to hatred and violence on the Internet. The enforcement of laws and regulations is difficult owing to the lack of clarity of legal terms. The effectiveness of national legislation also becomes
limited since States adopt differing laws, policies or approaches with regard to hate or racist
content on the Internet. In addition, due to their transborder nature, cases related to racism,
incitement to racial hatred and violence through the Internet most often fall under different
jurisdictions, depending on where the inappropriate or illegal content was hosted and
created and where hate crimes prompted by such racist or xenophobic content took place.
4. The unprecedented, rapid development of new communication and information
technologies, including the Internet, has enabled wider dissemination of racist and
xenophobic content that incites racial hatred and violence. In response, States, international
and regional organizations have undertaken a variety of legal and policy initiatives. Civil
society organizations have also contributed to addressing this phenomenon through various
measures and initiatives. Some technological measures have also been initiated by the
private sector, including social media platforms and networks. A more comprehensive
approach to combat the challenge is however necessary. Such an approach should involve
dialogue and consultation among the relevant actors and stakeholders.
5. In the present report, the Special Rapporteur, after providing an overview of his activities, examines the context, key trends and manifestations of racism on the Internet and
social media (sections A and B). He then provides an overview of the legal and policy
frameworks and measures taken at the international, regional and national levels, and also
some of the regulatory frameworks adopted by Internet and social network providers
(section C). He then presents different examples of measures for responding to the use of
the Internet and social media to propagate racism, racial hatred, xenophobia, racial
discrimination and related intolerance (section D), while highlighting the positive
contribution of the Internet and social media as an effective tool for combating racism,
racial discrimination, xenophobia and related intolerance. The conclusions and
recommendations are outlined in section IV.
II. Activities of the Special Rapporteur
A. Country visits
6. The Special Rapporteur would like to thank the Governments of Greece and of the
Republic of Korea, which have accepted his request for country visits. He welcomes the
agreement on dates with regard to the visit to the Republic of Korea for September and
October 2014, and hopes to undertake the visit to Greece before the end of 2014.
7. The Special Rapporteur renewed his request to visit India, South Africa and
Thailand, for which he was awaiting an invitation at the time of writing. The Special
Rapporteur also sent a request for a follow-up visit to Japan and a request for a visit to Fiji.
8. The Special Rapporteur visited Mauritania from 2 to 8 September 2013.1 He
expresses his sincere gratitude to the Government for its cooperation and openness in the
preparation and conduct of his visit.
B. Other activities
9. On 14 and 15 May 2013, the Special Rapporteur participated in a conference on the
theme “Right-wing extremism and hate crime: minorities under pressure in Europe and
beyond”, in Oslo, organized by the Ministry of Foreign Affairs of Norway. The conference
brought together international experts to discuss challenges and possible solutions relating
to the rise of right-wing extremism and hate crimes directed against minorities in Europe
and beyond.
10. On 24 and 25 June 2013, the Special Rapporteur attended the International Meeting
for Equality and Non-Discrimination in Mexico City, organized by the Mexican National
Council to Prevent Discrimination (CONAPRED). The meeting brought together
international experts to debate and review international efforts to prevent discrimination
and share good practices.
11. On 6 and 7 March 2014, the Special Rapporteur participated in the third and final
meeting of the Teaching Respect for All Advisory Group at the United Nations
Educational, Scientific and Cultural Organization (UNESCO), Paris, which reviewed the
outcomes of several pilot projects conducted at the national level and validated the
Teaching Respect for All Toolbox, an initiative undertaken by UNESCO to develop
educational material promoting non-discrimination and inclusion.
12. On 20 March 2014, the Special Rapporteur held an exchange of views with the
European Commission against Racism and Intolerance of the Council of Europe in
Strasbourg, where themes of common interest and possible areas of future cooperation were
discussed.
1 See A/HRC/23/49/Add.1.
III. Use of the Internet and social media for propagating racism,
racial discrimination, xenophobia and related intolerance
A. Context
13. In his report to the General Assembly (A/67/326), the Special Rapporteur undertook
a preliminary examination of the issue of racism and the Internet, illustrating some of the
key trends and challenges posed by the use of the Internet to propagate racism, racial
hatred, xenophobia, racial discrimination and related intolerance.
14. In the present report, the Special Rapporteur builds upon the previous discussions,
and illustrates some of the more recent manifestations of racism and hate on the Internet
and social media networks. He discusses the applicable legislation and standards at the
international, regional and national levels, but also through the norms of some key
providers of social media platforms available on the Internet. Drawing from studies and
from an expert meeting that he convened in New York on 11 and 12 November 2013, with
the participation of academics, representatives of Internet and social media providers and
civil society, the Special Rapporteur seeks to provide an updated overview of the remaining
challenges and some good practices in the fight against racism and discrimination on and
through the Internet.
15. At the above-mentioned expert meeting, discussions were held on the key issues,
dilemmas and challenges posed in combating the propagation of racist and xenophobic
content and incitement to violence on the Internet and social media, the governance of
racist and xenophobic content on the Internet, and the balance between the protection of
freedom of opinion and expression and the control of racist and xenophobic content and
incitement to violence on the Internet. Views were exchanged on the key legal and policy
measures taken at the national, regional and international levels, and good practices in
combating racist and xenophobic content and in the promotion of the Internet as a tool
against racism, racial discrimination and xenophobia were highlighted. The Special
Rapporteur thanks the Association for Progressive Communications and Article 19 for their
role in the successful outcome of the expert meeting.
B. Manifestations of racism, xenophobia, hate speech and other related forms of intolerance on the Internet and social media networks
16. The Internet, unlike traditional forms of communication and information
technologies, has a tremendous capacity to transmit and disseminate information instantly
to different parts of the world. It enables users to access and provide content with relative
anonymity. Materials and content available on the Internet can be shared across national
borders and be hosted in different countries with different legal regimes. Since the
World Wide Web was created 25 years ago, it has brought many benefits to all aspects of
modern life as it continues to spread globally; more recently, social media platforms have
emerged and become increasingly popular, and made it significantly easier for users to
share information, photographs and links with friends and family. Since the creation of
social media platforms, the Internet has evolved even more rapidly, to the extent that,
today, every minute, 100 hours of videos are posted on the YouTube platform, and more
than 1.5 billion photographs are uploaded every week on the Facebook network.
17. One of the greatest drawbacks of the Internet, however, is the fact that it also provides a platform to racist and hateful movements. The authors Abraham H. Foxman and Christopher Wolf recently described how racism, xenophobia and hate manifest on the Internet:
Today there are powerful new tools for spreading lies, fomenting hatred, and
encouraging violence. […] The openness and wide availability of the internet that
we celebrate has sadly allowed it to become a powerful and virulent platform not
just for anti-Semitism but for many forms of hatred that are directly linked to
growing online incivility, to the marginalization and targeting of minorities, to the
spread of falsehoods that threaten to mislead a generation of young people, to
political polarization, and to real-world violence.
The authors accurately perceive the dangers and challenges posed by the Internet and social
media:
Instead of being under the central control of a political party or group, the power of
the Internet lies in its viral nature. Everyone can be a publisher, even the most
vicious anti-Semite, racist, bigot, homophobe, sexist, or purveyor of hatred. The ease
and rapidity with which websites, social media pages, video and audio downloads,
and instant messages can be created and disseminated on-line make Internet
propaganda almost impossible to track, control, and combat. Links, viral e-mails,
and “re-tweets” enable lies to self-propagate with appalling speed.2
18. The Special Rapporteur observes that there is an increasing use of electronic
communication technologies, such as electronic mail and the Internet, to disseminate racist
and xenophobic information and to incite racial hatred and violence. Right-wing extremist
groups and racist organizations use the Internet in their transborder communications and
exchange of racist materials. As technology evolves, extremist hate sites have similarly
grown in both number and technological sophistication.
19. The Special Rapporteur notes with serious concern the use of the Internet and social
media by extremist groups and individuals to disseminate racist ideas and to propagate
racism, racial discrimination, xenophobia and related intolerance. This concern has also
been echoed by many States, non-governmental organizations and Internet providers and
social media platforms. One major consequence of the dissemination of such information is
that the more people see hateful information, the more they tend to accept it and believe
that the ideas are normal and mainstream.
20. Another major concern is that extremist groups and movements, particularly far-right movements, use the Internet and social media platforms not only as a means to
disseminate hate speech and to incite racial violence and abuse against specific groups of
individuals, but also as a recruitment platform for potential new members. The potential of
the Internet is used to expand their networks of individuals, movements and groups, as it
allows the rapid and more far-reaching dissemination of information about their aims and
facilitates the sending of invitations to events and meetings. The web is also used by
extremist movements and groups to distribute newsletters, video clips and other materials.
It is worrying that calls for violence against individuals and groups advocating anti-racism
activities are placed on websites and social media to intimidate, exert pressure or stop social
or political actions or activities directed against extremist groups.3
21. The situation of marginalized persons and groups who are discriminated against on
the Internet and social media largely reflects the challenges they face in the real world.
While Internet technology has helped to connect and empower minority groups and
2 Abraham H. Foxman and Christopher Wolf, Viral Hate: Containing its Spread on the Internet (New York, Palgrave Macmillan, 2013).
3 See A/HRC/26/50.
individuals, it has also increased their vulnerability: personal information collected online can be accessed by extremists, allowing such groups to extend their reach. The
Internet and social media can also empower extremist groups by giving them the illusion
that their hateful beliefs are shared by a large community. This justification of their beliefs
sometimes affords them the confidence to follow through with hate crimes in real life.
Thus, hate and racism move from the virtual to the real world, with a disproportionate
impact on marginalized groups.
C. Legal, policy and regulatory frameworks and measures taken at the international, regional and national levels and by Internet and social media providers
22. In response to the developments described above, international and regional
organizations and States have undertaken a variety of legal and policy initiatives. Civil
society and the private sector, and Internet and social media providers in particular, have
also contributed to addressing this phenomenon through various measures and initiatives.
1. International frameworks and initiatives
23. The International Convention on the Elimination of All Forms of Racial
Discrimination provides in its article 4 (a) that States parties shall declare an offence
punishable by law all dissemination of ideas based on racial superiority or hatred, as well as
incitement to racial discrimination; and in article 4 (b) it provides that States parties shall
declare illegal and prohibit organized and all other propaganda activities which promote
and incite racial discrimination.
24. A number of United Nations human rights mechanisms have addressed the issue of
the use of the Internet and social media to propagate racism, racial hatred, xenophobia,
racial discrimination and related intolerance. The Committee on the Elimination of Racial
Discrimination, in its general recommendation XXIX on descent-based discrimination,
recommended that States take strict measures against any incitement to discrimination or
violence against descent-based communities, including through the Internet. Furthermore,
in its general recommendation XXX on discrimination against non-citizens, the Committee
recommended that action be taken to counter any tendency to target, stigmatize, stereotype
or profile, on the basis of race, colour, descent, and national or ethnic origin, members of
“non-citizen” population groups, in particular by politicians, officials, educators and the
media, on the Internet and other electronic communications networks. The Committee has
also expressed concern at the dissemination of racist propaganda on the Internet in a
number of recent concluding observations issued after consideration of regular reports
submitted by States parties, recalling that such dissemination falls within the scope of
prohibitions of article 4 of the International Convention on the Elimination of All Forms of
Racial Discrimination.
25. More recently, at its eighty-first session, held in August 2012, the Committee on the Elimination of Racial Discrimination organized a thematic discussion on racist hate speech, which
also touched upon the issue of the Internet and led to the adoption by the Committee of its
general recommendation XXXV on combating racist hate speech. In the recommendation, the
Committee recalled that racist hate speech, whether originating from individuals or groups,
can take many forms, notably in being disseminated through electronic media, the Internet
and social networking sites, and reminded States to pay full attention to all these
manifestations of racism, xenophobia and related intolerance and to take effective measures
to combat them. The Committee called upon States to encourage public and private media
to adopt codes of professional ethics, and recalled the essential role played by the Internet
and social media in the promotion and dissemination of ideas and opinions. The Committee
recommended that Internet and social media service providers ensure self-regulation and
compliance with codes of ethics, as had been underlined previously in the Durban
Declaration and Programme of Action.
26. The Durban Declaration and Programme of Action, adopted at the World
Conference against Racism, Racial Discrimination, Xenophobia and Related Intolerance in
2001, highlighted several important areas of action, in particular, the positive contribution
made by the new information and communications technologies, including the Internet, in
combating racism through rapid and wide-reaching communication. The progression of
racism, discrimination, xenophobia and intolerance through the new information and
communications technologies, including the Internet, was however a cause for concern for
relevant stakeholders. As such, States were encouraged to implement legal sanctions, in
accordance with relevant international human rights law, with regard to incitement to racial
hatred on the Internet and social media. Furthermore, the Durban Declaration and
Programme of Action called upon States to encourage Internet service providers to
establish and disseminate specific voluntary codes of conduct and self-regulatory measures
against the dissemination of racist messages, to set up mediating bodies at the national and
international levels, also involving civil society, and to adopt appropriate legislation for the
prosecution of those responsible for incitement to racial hatred or violence through the
Internet and social media.
27. The Special Rapporteur on the promotion and protection of the right to freedom of
opinion and expression addressed the issue of the right of all individuals to seek, receive
and impart information and ideas of all kinds through the Internet in 2011, when he
underscored the unique and transformative nature of the Internet not only in enabling individuals to exercise their right to freedom of opinion and expression, but also a wide range of other human rights, and in promoting the progress of society as a whole.4 The
Special Rapporteur also addressed the circumstances under which the dissemination of
certain types of information could be restricted, the issues of arbitrary blocking or filtering
of content, the criminalization of legitimate expression and the imposition of intermediary
liability.
28. The Rabat Plan of Action on the prohibition of advocacy of national, racial or
religious hatred that constitutes incitement to discrimination, hostility or violence, adopted
in 2012, reiterated that all human rights are universal, indivisible and interdependent and
interrelated, and recalled the interdependence between freedom of expression and other
human rights. The realization of freedom of expression enabled public debate, giving voice
to different perspectives and viewpoints and playing a crucial role in ensuring democracy
and international peace and security. The Rabat Plan of Action also stressed the importance
of the role that the media and other means of public communication, such as the Internet
and social media, play in enabling free expression and the realization of equality, and
reiterated that freedom of expression and freedom of religion or belief are mutually
dependent and reinforcing.
29. The United Nations Educational, Scientific and Cultural Organization (UNESCO)
developed an integrated strategy to combat racism, discrimination, xenophobia and
intolerance based on a series of studies and consultations on different aspects and forms of
racism, xenophobia and discrimination, including the issue of combating racist propaganda
in the media, in particular in cyberspace. The strategy includes a set of measures to be taken
by the Organization in response to the potential use of new information and
communications technologies, in particular the Internet, to spread racist, intolerant or
discriminatory ideas. In 2013, UNESCO also developed the Teaching Respect for All
4 See A/HRC/17/27.
initiative, which encourages the use of the Internet in the classroom and curriculum to foster inclusion and tolerance, and makes recommendations for the development of new practices, particularly for youth with different cultural backgrounds and perspectives.
30. The United Nations Office on Drugs and Crime has also addressed the issue of
cybercrime, including specific computer-related acts involving racism and xenophobia, by
various means, such as providing technical assistance and training to States to improve
national legislation and building the capacity of national authorities to prevent, detect,
investigate and prosecute such crimes in all their forms. The International
Telecommunication Union (ITU) has also been involved in the issue of cybersecurity and
efforts to combat cybercrime since the World Summit on the Information Society and the
ITU Plenipotentiary Conference, held in Guadalajara, Mexico, in 2010. ITU has also
initiated the Global Cybersecurity Agenda, a framework for international cooperation
aimed at enhancing global public confidence and security in the information society. ITU
provides Member States with support through specific initiatives and activities related to
legal, technical and procedural measures, organizational structures, and capacity-building and international cooperation on cybersecurity.
2. Regional frameworks and initiatives
31. At the regional level, the Council of Europe Convention on Cybercrime and the
Additional Protocol thereto constitute a legally binding framework with the widest reach.
The Convention, which entered into force on 1 July 2004, is the first international treaty on
crimes committed via the Internet and other computer networks dealing particularly with
infringements of copyright, computer-related fraud, child pornography and violations of
network security. The Additional Protocol, which concerns the criminalization of acts of a
racist and xenophobic nature committed through computer systems and entered into force
on 1 March 2006, makes the publication of racist and xenophobic propaganda via computer
networks a criminal offence. The Additional Protocol also specifies that States parties are to
adopt legislative and other measures that make a number of actions criminal offences,
including distributing or otherwise making available racist and xenophobic material; racist
and xenophobic-motivated threats; racist and xenophobic-motivated insults; the denial,
gross minimization, approval or justification of genocide or crimes against humanity; and
aiding and abetting any of these actions.
32. The European Commission against Racism and Intolerance adopted General Policy
Recommendation No. 6 in 2000, which focuses on combating the dissemination of racist,
xenophobic, and anti-Semitic material on the Internet. The Commission also outlined a set
of recommendations for Council of Europe Member States, one of which was the passing of
a protocol to address racism and xenophobia on the Internet, which led to the above-mentioned Additional Protocol. The Commission has also published reports on reconciling
freedom of expression with combating racism and on addressing racist content on the
Internet.
33. The Council of Europe sponsored two initiatives against racism and hate on the
Internet. The No Hate Speech Movement, a campaign focused on youth, was launched in
2012 to map hate speech online, to raise awareness about the risks that hate speech poses to
democracy, to reduce acceptance of hate speech online and to advocate development and
consensus on European policy instruments combating hate speech. This initiative also led to
the creation of Hate Speech Watch, an online platform designed to monitor online hate
speech content and to facilitate coordinated action against it. The Council of Europe has
also initiated Young People Combating Hate Speech Online, an initiative with the aim of
equipping young people and youth organizations with competences and tools to recognize
and take positive action against discrimination and hate speech.
34. The Organization for Security and Cooperation in Europe (OSCE) has been
enhancing its work in combating racism, racial discrimination, xenophobia and related
intolerance on the Internet, starting in 2003, when the OSCE Ministerial Council made a
commitment to combat hate crimes fuelled by racist, xenophobic and anti-Semitic
propaganda on the Internet. In 2009, OSCE assigned the Office for Democratic Institutions
and Human Rights with the task of exploring the potential link between the use of the
Internet and bias-motivated violence and the harm it causes, as well as possible practical steps to be taken.5 More recently, in 2010, OSCE adopted the Astana Declaration in which
it called upon Member States, in cooperation with civil society, to increase their efforts to
counter the incitement to imminent violence and hate crimes, including through the
Internet, while respecting the freedom of expression, and to use the Internet to promote
democracy and tolerance education.
35. In 1998, the European Union adopted an action plan that encouraged self-regulatory
initiatives to deal with illegal and harmful Internet content, including the creation of a
European network of hotlines for Internet users to report illegal content such as child
pornography; the development of self-regulatory and content-monitoring schemes by access
and content providers; and the development of internationally compatible and interoperable
rating and filtering schemes to protect users. In May 2005, the European Union extended
the action plan, entitled the “Safer Internet plus” programme, for the period 2005-2008, to
promote safer use of the Internet and new online technologies, and particularly to fight
against illegal content, such as child pornography and racist material, and content that is
potentially harmful to children or unwanted by the end-user. More recently, the Safer
Internet Programme (2009-2013) was implemented to promote safer use of the Internet and
other communication technologies, to educate users, particularly children, parents and
carers, and to fight against illegal content and harmful conduct online, such as “grooming
and bullying”.
36. Other regional initiatives have been taken to reach a common understanding and
coordination on principles and standards for the fight against cybercrime and to improve
cybersecurity. In 2002, the Commonwealth of Nations developed a model law on
cybercrime that provided a legal framework to harmonize legislation within the
Commonwealth and to enable international cooperation. The same year, the Organization
for Economic Cooperation and Development completed its Guidelines for the Security of
Information Systems and Networks. It is now conducting a review of the Guidelines.
37. Similar regional initiatives aimed at reaching a common understanding and
coordination on principles and standards for fighting cybercrime and improving
cybersecurity have also been taken in other regional organizations. In 2001, the Association
of Southeast Asian Nations (ASEAN) adopted a work programme to implement the
ASEAN Plan of Action to Combat Transnational Crime. At the thirteenth ASEAN Senior
Officials Meeting on Transnational Crime, in 2013, a work programme for 2013-2015 was
adopted, which focused also on cybercrime and cybersecurity. In 2002, the Asia-Pacific
Economic Cooperation (APEC) issued a cybersecurity strategy that outlined five areas for
cooperation, including legal developments, information-sharing and cooperation, security
and technical guidelines, public awareness, and training and education. More recently,
APEC reiterated the commitment made to strengthen the domestic legal and regulatory
framework on cybercrime and encouraged member States in particular to create strategies
to create a trusted, secure and sustainable online environment, to address the misuse of the
Internet, to develop watch, warning and incident response and recovery capabilities to help
prevent cyberattacks, and to support cooperative efforts among economies to promote a
5
Available from http://www.osce.org/cio/40695.
secure online environment. In 2007, the League of Arab States and the Gulf Cooperation
Council made a recommendation to seek a joint approach on cybercrime-related issues by
taking into consideration international standards. In 2009, the Economic Community of
West African States (ECOWAS) adopted a directive on fighting cybercrime in the
ECOWAS region that provided a legal framework for its member States. In 2012, the East
African Community adopted a framework for cyberlaw, which provided an overview of the
legal issues relating to intellectual property, competition, taxation and information security.
38.
In 2004, the Organization of American States (OAS) implemented the Inter-American Integral Strategy to Combat Threats to Cyber Security, the main objectives of
which were to establish national “alert, watch and warning” groups, known as Computer
Security Incident Response Teams, in each country, to create a hemispheric watch and
warning network providing guidance and support to cybersecurity technicians from around
the Americas, and to promote a culture and awareness of cybersecurity for the region. Other
initiatives by OAS include the Inter-American Cooperation Portal on Cyber-Crime and the
meetings of Ministers of Justice or Other Ministers or Attorneys General of the Americas,
which are aimed at strengthening hemispheric cooperation in the investigation and
prosecution of cybercriminality. More recently, in 2013, OAS and the Latin American and
Caribbean Internet Addresses Registry signed a cybersecurity agreement to strengthen the
development of the ties among Governments, the private sector and civil society in the
Americas, thus ensuring a multidimensional approach to cybersecurity.
39.
Although some of the above-mentioned initiatives cover specific computer-related crimes
involving incitement to racial hatred and violence, the majority of them do not specifically
address content-related offences, such as the dissemination of racist ideas or incitement to
racial hatred and violence on the Internet and social media, focusing rather on other forms
of cybercrime and cybersecurity concerns.
3.
Legal and regulatory frameworks adopted at the national level and by Internet and
social media providers
40.
The issues of racism and incitement to racial hatred and violence on the Internet and
social media have been addressed not only through national frameworks but also through
different initiatives taken by Internet service providers and social media platforms. The
Special Rapporteur has identified three main models emerging from the different
frameworks and initiatives: the censorship model; the regulatory framework; and the
freedom of speech approach.
41.
Internet and social media censorship consists of suppressing information and ideas
in order to “protect” society from “reprehensible” ideas. States are able to censor Internet
content through various means, such as by setting up agencies to monitor the Internet and
deciding what to censor according to established norms. This can also be achieved, by both
States and Internet and social media providers, by programming Internet routers to block or
censor the flow of specific data from Internet servers to service providers, and therefore to
Internet and social media users. Moreover, when “objectionable” information is posted
online, States can promptly demand that the Internet provider or social media platform
remove it. In order to be effective, this type of censorship requires States to have extremely
efficient systems in place to track and survey posted content and then to quickly remove it.
Internet providers or social media platforms themselves may also achieve this goal,
although with a lesser degree of efficiency.
42.
The Special Rapporteur believes that while censorship does not necessarily need to
serve illegitimate ends and can be used to suppress racially hateful speech, there are
significant drawbacks. Censorship policies can have a chilling effect, as users who know
that their online activities are screened by a government agency are less inclined to exercise
their freedom of expression. Users can be further deterred if they are unsure whether the
information they post will be removed or whether they will be held liable. In the view of the
Special Rapporteur, the main concern is that censorship impedes the fundamental right to
freedom of expression, as expressed in article 19 of the International Covenant on Civil and
Political Rights and subsequent concluding observations.
43.
Another model is the regulation framework, implemented either by the State or the
Internet or social media provider. States can create a legal framework that determines what
content on the web can be considered illegal. Under this model, a court or regulatory body
determines whether, pursuant to the applicable law, the content should be blocked, before
the said content is filtered or removed. Alternatively, a non-governmental organization can
be given the responsibility of monitoring the content posted on the Internet. While courts
are usually seen as independent of political influence, a watchdog organization might,
however, be viewed as applying government policies.
44.
The Special Rapporteur was informed that, under the regulatory model, users may
also be able to submit information to the overseeing body on content they feel violates the
law and should be removed. This allows Internet users to implement self-censorship, and it
also helps to ensure that illegal content will be found and removed, given that more users
are looking out for it. A major benefit of the regulatory model is that it is more transparent
than censorship. Internet users are usually aware of what information is allowed, and can
have a voice in deciding what content will be blocked by voting for the politicians who
make these laws. However, one drawback of allowing users to submit removal requests is
that the decision-making process is often not transparent. Thus, after users submit their
requests, they have no way of knowing why their request was granted or denied. This issue
can be mitigated by ensuring that Internet users are informed of the decision-making
process on content removal.
45.
The above-mentioned regulatory model has been set up in France, where content and
activity on the Internet is regulated by Law No. 2004-575 (LCEN). The law relieves Internet
service providers and social media platforms of responsibility for illegal content if they
have “no knowledge of illegal activity or material” or if they have “acted promptly to
remove or block access to it as soon as it was discovered”. The providers are also not held
responsible if they did not know how the illegal activity or material arose. Once a judicial
order is issued and proper notice is given, the hosting website is liable for any further
reposting of the illegal material. A website may also be held liable for redirecting Internet
users to other websites hosting illegal content. The benefit of the regulatory system used in
France is that it is more transparent, with a court assessing whether a website should be
blocked before any filtering is implemented. Moreover, it allows the Internet or social
media provider the opportunity to remove the objectionable information prior to the
issuance of a court order.
46.
The Special Rapporteur was informed that, in the United Kingdom of Great Britain
and Northern Ireland, the Internet is regulated by the Office of Communications (Ofcom),
the mandate of which includes protecting audiences against harmful material, unfairness
and infringements of privacy. Internet service providers and social media platforms are
considered mere conduits of information and are not responsible for any illegal information
transmitted. Although the State does not require providers and platforms to monitor
information being transmitted, some filter certain types of content, such as child
pornography material, on their own initiative. Under one such filtering system, called CleanFeed,
the Internet Watch Foundation, a non-profit organization, works with the Government to
compile a list of websites it deems illegal, then transmits the information to Internet service
providers. The body, which is funded by the Internet industry, works with the police, the
Home Office and the Crown Prosecution Service to receive public complaints and to
determine whether the content hosted on a website is illegal. In the CleanFeed system,
however, the list of filtered websites is not made public, which can lead to abuse when
determining which content should be filtered. Moreover, the body’s decisions are not
subject to judicial review, because it was not created by legislation, but has its own internal
appeal procedure.
47.
An alternative way of addressing racism and hate speech on the Internet and social
media is to apply no censorship or regulation at all, based on the ideal of freedom of
speech and expression. Users are free to express their ideas and to post comments or content
that may be offensive, racist, xenophobic or hateful, and other users respond to such racist
or hateful speech or content with counter-speech. Although such an approach limits the
interference of the State or of Internet or social media providers, it can allow anyone to
spread hateful or racist language, ideas or content that may offend, ridicule or marginalize
a specific individual, group or population. Such an unlimited right to freedom of speech
and expression can also be destructive to society, because it may increase tensions between
certain groups and cause unrest.
48.
The United States of America provides an example of freedom of speech that seeks
to limit prohibitions, based on the First Amendment to the Constitution. In the case of
Chaplinsky v. New Hampshire (1942), the Supreme Court of the United States ruled that:
There are certain well-defined and narrowly limited classes of speech, the
prevention and punishment of which have never been thought to raise any
constitutional problem. These include the lewd and obscene, the profane, the
libelous, and the insulting or “fighting” words – those which by their very utterance
inflict injury or tend to incite an immediate breach of the peace… It has been well
observed that such utterances are no essential part of any exposition of ideas, and are
of such slight social value as a step to truth that any benefit that may be derived from
them is clearly outweighed by the social interest in order and morality.
49.
Since the case, the Supreme Court has also ruled that persons are free to choose the
words that best express their feelings, and that strong or offensive language may better
convey the person’s feelings. It has since then also struck down States’ laws that were
considered too broad in prohibiting speech, on the grounds that, if the statute suffers from
overreach, the effect is to chill free speech. A restriction may, however, be placed on
freedom of speech when insulting or hate speech is likely to incite or promote violence,
and such speech can thus be prohibited; the threat of violence must nevertheless be
imminent. The Supreme Court has not directly specified what degree of offensiveness in
the language used may constitute such an imminent threat.
50.
Nevertheless, the Special Rapporteur recalls article 19, paragraph 3, of the
International Covenant on Civil and Political Rights, and article 4 of the International
Convention on the Elimination of All Forms of Racial Discrimination, which stipulate that
the right to freedom of opinion and expression may be restricted legitimately under
international human rights law essentially to safeguard the rights of others, and that
States are to declare an offence punishable by law all dissemination of ideas based on racial
superiority or hatred and incitement to racial discrimination, and prohibit activities that
promote and incite racial discrimination. Content that may be restricted includes child
pornography (to protect the rights of children), hate speech (to protect the rights of affected
communities), defamation (to protect the rights and reputation of others against
unwarranted attacks), direct and public incitement to commit genocide (to protect the rights
of others), and advocacy of national, racial or religious hatred that constitutes incitement to
discrimination, hostility or violence (to protect the rights of others, such as the right to
life).6
6
See A/HRC/17/27.
4.
Models of regulation initiated by Internet and social media providers
51.
Internet and social media providers have also set up internal regulatory frameworks;
for example, Google has implemented an internal regulation policy that allows users to
report content that they believe should be flagged for removal by Google services
(www.google.com/transparencyreport/removals/copyright/faq/). Through the system, users
submit separate requests on an online form for each service from which they believe
content should be removed. These are considered legal notices, each of which is entered in
a database, called “Chilling Effects”, on requests for the removal of information from the
Internet. The purpose of Chilling Effects is to protect websites from cease-and-desist orders
issued by people claiming that posted content is copyrighted, and to determine whether the
content is illegal or violates Google policies. While Google has set up a process for
removing offensive information, such as racist contents or hate speech, the final say in
determining which content to remove or hide remains the sole prerogative of Google.
52.
Similarly, Twitter adheres to a policy on reporting violations of the website’s terms
of service (http://support.twitter.com/articles/20169222-country-withheld-content), which
comprises an internal process for deciding which tweets should be removed for violating its
terms. The policy highlights only which content is not allowed, such as impersonation,
sharing a third person’s private information, and violence and threats, which raises the issue
of lack of transparency in this internal regulatory process.
53.
Another social media platform, Facebook, has also set up a removal policy for
offensive content, although there has been criticism of its apparent lack of transparency.
The Special Rapporteur was informed that users have been confronted with the arbitrary
removal of content deemed abusive or offensive without being given an explanation. He also learned that
users who reported racist or offensive content had not seen it removed, despite complaints
submitted to social media moderators. Although Facebook has an extensive list in its terms
of service (www.facebook.com/legal/terms) of prohibited content, users still do not know
exactly what content will be censored, removed or left unattended, given that the platform
retains the power to remove posts and content without giving any reason. This lack of transparency
has raised concern for users, some of whom believe that the process and reasons for
removing posted content are unfair and prejudicial, while other requests to remove racist or
offensive material are ignored.
54.
The Special Rapporteur notes that Internet providers and social media platforms
seem to be increasingly adapting their norms and policies to the State where users post
content. Providers and platforms accordingly record, store and provide information on users
and the information accessed, sometimes as a prerequisite for permission to operate in the
country concerned. The Special Rapporteur also reiterates his concern at the issue of what
content is to be considered “racist”, “illegal” or “inciting hatred”, and his view that Internet
providers and social media networks should not make decisions regarding user-generated
content or take such actions as removing or filtering content on their own.7 He also agrees with the
Special Rapporteur on the promotion and protection of the right to freedom of opinion and
expression that censorship measures should never be delegated to private entities, and that
intermediaries such as social media platforms should not be held liable for refusing to take
action that infringes individuals’ human rights. Any requests to prevent access to content or
to disclose private information for strictly limited purposes, such as the administration of
criminal justice, should be submitted through a court or a competent body independent of
political, commercial or other unwarranted influences. 8
7
See A/67/326, para. 48.
8
A/HRC/17/27, para. 75.
A/HRC/26/49
D.
Civil society initiatives to counter racism, xenophobia, discrimination
and other related hate speech on the Internet and social media
55.
Civil society actors are essential to the efforts to combat racism and hate on the
Internet and social media. The Special Rapporteur therefore highlights below some of the
projects and contributions brought to his attention, which complement the initiatives
taken at the international, regional and national levels. These actors often work together
with international, regional or national authorities in allowing victims of racism, racial
discrimination and xenophobia to defend their rights and to express their views on the
Internet and social media.
56.
The International Network against Cyber Hate (www.inach.net), based in
Amsterdam, unites organizations from around the world to promote responsibility and
citizenship on the Internet, and to counter hate speech. Its main activity is to act as an
exchange platform for the numerous member organizations based in different countries,
stimulating the sharing of knowledge in order to find new solutions for the challenges
posed by hate speech in social media. Similarly, the Magenta Foundation, a non-profit
organization, also based in Amsterdam, acts in response to human rights violations
perpetrated in Europe and elsewhere. With the main objective of combating racism, anti-Semitism and discrimination based on ethnicity, the organization began to focus special
attention on discrimination on the Internet in 1996. With the assistance of the Government
of the Netherlands, the Foundation created the first complaints bureau for discrimination on
the Internet (www.meldpunt.nl/), for handling complaints on discrimination on the Internet
from servers based in the Netherlands. When the bureau deems that a complaint is
acceptable, the website or social media platform is notified and requested to remove the
content; in the event of non-compliance, the bureau may take legal action. In 2012, 78 per
cent of notifications from public complaints were successful in leading to the removal of
racist content and hate speech.9 Besides handling complaints, the bureau promotes
educative measures, such as public lectures in universities and workshops for the Dutch
Police Department and the Public Prosecutor’s Office. It also engages in research projects
concerning trends in discrimination on the Internet.
57.
The Southern Poverty Law Center (www.splcenter.org/), based in Montgomery,
Alabama (United States of America), is dedicated to the fight against hate, racism and other
forms of intolerance, and to seeking justice for the most vulnerable members of society.
The early identification and the removal of racist content online are vital in preventing hate
speech from spreading. The organization monitors the media and social websites in the
United States for hate groups and hate speech based on racism or gender discrimination. Its
findings are published online in a dedicated blog and in a quarterly report. The organization
also maintains a “hate map”, which reflects the number and location of hate groups in the
United States.
58.
The Anti-Defamation League (www.adl.org) is an organization active in the fight
against anti-Semitism, racism and xenophobia. It has initiated education seminars for youth
to stop cyberbullying, hate and intolerance online. One of the League’s initiatives, the
Cyber-Hate Action Guide, is directed at helping victims of cyberbullying. The Guide
compiles the hate speech policies of major websites based in the United States, and informs
Internet users on how to report hate speech activities on each site.
59.
The Association for Progressive Communications (www.apc.org) advocates for the
creation of inclusive and non-discriminatory policies online. Several of its projects are
aimed at combating gender-based discrimination and promoting women’s rights, while others are
9
See www.magenta.nl/en/projects/8-mdi.
aimed at empowering women in Internet policy decisions. The Association’s website hosts
an online forum for the discussion of issues relating to gender-based discrimination. It also
publishes a collaborative report, the Global Information Society Watch, which monitors the
implementation of international commitments made by States, with a special focus on
human rights.
60.
Bytes for All (content.bytesforall.pk) is a human rights organization and research
think tank based in Pakistan that advocates for discussions on information and
communication technologies and human rights. Discussions of this type have resulted in an
initiative with the objectives of securing freedom of expression online, strengthening the
digital security of human rights defenders and ending gender-based violence online. The
Open Net Initiative launched by the organization aims to investigate the impact of
increasing censorship on the Internet.
61.
The Take Back the Tech initiative, which involves several human rights
organizations, including the Association for Progressive Communications and Bytes for
All, is aimed at ending gender-based violence by increasing the influence that women have
on policymaking on the Internet and in social media. It creates an online environment that
features effective and reliable systems for reporting violence against women and
empowering women in discussions that involve the creation of policy for social networking
platforms, web hosting and telephone operators.
62.
The Umati initiative developed by the iHub innovation centre (www.ihub.co.ke)
based in Nairobi aims at proposing a workable definition of hate speech and a
contextualized methodology for its analysis; collecting and monitoring the occurrence of
hate speech in Kenya; and promoting civic education and interaction both online and
offline. Similarly, the It Gets Better project (www.itgetsbetter.org), started in 2010 in Los
Angeles (United States of America), aims to provide emotional, psychological and legal
support for LGBT youth victims of hate and discrimination. The project acts on social
media platforms such as Facebook, Twitter and YouTube, posting positive and inspiring
messages to those suffering from gender-based discrimination. Many important public
figures have participated in video messages posted by the project. In Europe, iCud
(Internet: Creatively Unveiling Discrimination) (digitaldiscrimination.eu), the combined
effort of five grassroots organizations in different European Union countries and the
University of Rovira i Virgili in Tarragona (Spain), is a project designed to raise awareness
and to explore innovative ways to combat discrimination online. The initiative, which is
supported by the Fundamental Rights and Citizenship Funding Programme of the European
Union, is aimed at creating a framework for understanding hate speech and discrimination
on the Internet, in addition to being an innovative model for combating discriminatory
material.
63.
The Special Rapporteur would also like to mention some mobile applications
developed to combat racism, homophobia and hate speech. The “Kick It Out” app
(www.kickitout.org), developed jointly by Make Positive and Sherry Design Studios, is
designed to address racism in football matches. Endorsed by the Premier League and
Football League in the United Kingdom, the app allows match-goers and supporters to
report racism, anti-Semitism and homophobia in real time during the matches. The
information collected is then forwarded to clubs and government bodies. Another
application, “Everyday Racism”, was developed by the Australian organization All
Together Now and is supported by the University of Melbourne and other academic
institutions in Australia (alltogethernow.org.au). Designed to raise awareness of racism
against Aboriginal peoples, the application challenges players to live a week in the life of
an Aboriginal person in order to gain an understanding of the prejudice that Aboriginals
face. In France, the “app’Licra” application (www.licra.org/fr/l’app-licra-1ère-application-antiraciste)
allows individuals to take a picture of hate speech or racist graffiti, attach its
location and forward the information to the authorities, which can then remove it.
IV. Conclusions and recommendations
64.
Addressing racial, ethnic and xenophobic hatred on the Internet and
social media networks poses new and complex challenges, given the multifaceted
nature of the phenomenon and the rapid evolution of technology. The
Special Rapporteur nonetheless stresses that the Durban Declaration and
Programme of Action and other international human rights instruments, such
as the International Convention on the Elimination of All Forms of Racial
Discrimination and the International Covenant on Civil and Political Rights,
and certain regional instruments provide a comprehensive framework for
possible actions in combating the phenomenon of racial, ethnic and xenophobic
hatred. The Rabat Plan of Action also provides a useful framework to States in
combating racism. To that end, he welcomes the continued interest in and
attention paid to racial and ethnic hatred, in particular on the Internet, by the
Committee on the Elimination of Racial Discrimination. He is also grateful for
the contribution of the Special Rapporteur on the promotion and protection of
the right to freedom of opinion and expression to this discussion. The Special
Rapporteur is keen to continue to promote dialogue on the issue, and welcomes
the encouragement expressed by States, other United Nations mechanisms, and
Internet and social media providers, as well as civil society organizations and
academics, to continue research on the issue of racism and the Internet.
65.
The Special Rapporteur also notes that important legal and policy efforts
have been made at the regional and national levels to address the spread of
racial, ethnic and xenophobic hatred and incitement over the Internet and
through social media networks. Legislative measures are central to any strategy
to combat racism, ethnic hatred and xenophobia; for this reason, the Special
Rapporteur encourages States that have not enacted legislation to combat and
prevent racial, ethnic and xenophobic hatred on the Internet and social media to
consider doing so. Legislative measures must, however, take into account States’
obligations to protect other fundamental rights, such as freedom of expression
and opinion, as spelled out in both the International Convention on the
Elimination of All Forms of Racial Discrimination and the International
Covenant on Civil and Political Rights, and should not be used as a pretext for
censorship.
66.
Combating the use of the Internet and social media to propagate racial,
ethnic and xenophobic content and incitement to violence requires a multi-stakeholder approach. In this regard, the role of the private sector, in particular
Internet service providers and social media platforms, and other relevant
industry players is crucial. The Special Rapporteur has highlighted some of the
measures already taken by the private sector to address the challenges of racism
and incitement to racial hatred and violence on the Internet and social media.
The Special Rapporteur would like to point out the positive contribution made
by such initiatives as the promotion of end-user empowerment and education;
the involvement of Internet service providers and social media platforms in
policy discourses and in consultations on the issues of combating racism and
incitement to racial hatred and violence on the Internet; the development of
intelligent software in the areas of monitoring and filtering; and improvements
in co-regulation and self-regulation mechanisms.
67.
The Special Rapporteur is aware of the intrinsic limitations of technical
measures, but also of the risk that such measures may lead to unintended
consequences that restrict human rights. Given the multitude of stakeholders
that may be involved in combating racism and incitement to racial hatred and
violence on the Internet and social media networks, including Governments,
the private sector and civil society organizations at the national, regional and
international levels, the Special Rapporteur emphasizes the importance of
establishing clear responsibilities and roles for the different actors, and of
strengthening and institutionalizing dialogue and collaboration among them.
68.
In accordance with the provisions of the Durban Declaration and
Programme of Action and the Rabat Plan of Action, the Special Rapporteur
encourages States, civil society and individuals to use the opportunities provided
by the Internet and social media to counter the dissemination of ideas based on
racial superiority or hatred and to promote equality, non-discrimination and
respect for diversity. One possible way of countering racism on the Internet and
social media networks is through content diversification, in particular by
promoting local content and initiatives. Directing more local content to the
global network can contribute to greater understanding, tolerance and respect
for diversity and offer great potential for reducing information asymmetry and
misperceptions that feed racist and xenophobic sentiment.
69.
In the global digital network, the voices of victims of racial discrimination
most often remain absent because of their lack of access to the Internet and
social media, therefore often leaving racist ideas unchallenged. In this regard,
the Special Rapporteur reiterates that States and the private sector should
adopt effective and concrete policies and strategies to ensure that access is
widely available and affordable for all, on the basis of the principle of non-discrimination of any kind, including on the grounds of race, colour, descent,
and ethnic or national origin. National human rights institutions should also be
encouraged to lead the development of these initiatives.
70.
The Special Rapporteur reminds States of the importance of recognizing
the fundamental role that education plays in combating racism, racial
discrimination, xenophobia and related intolerance, in particular in promoting
the principles of tolerance and respect for ethnic, religious and cultural
diversity and preventing the proliferation of extremist racist and xenophobic
movements and propaganda. He therefore encourages them to use the unique
educational potential of the Internet and social media networks to combat
manifestations of racism, racial discrimination, xenophobia and related
intolerance in cyberspace.
71. The Special Rapporteur notes the significance of educational and research
activities, such as studies on the possible consequences of the dissemination of
racist ideas, hate messages and incitement to racial hatred and violence on the
Internet and social media; research and analysis on the effectiveness of existing
legal, policy and technical measures; the development of educational
programmes and training materials for young people; the promotion of media
literacy programmes, including technical and textual Internet literacy; and the
development and implementation of educational concepts that counter the
spread of racist ideas, hate messages and incitement to racial hatred and
violence on the Internet and social media.
72. The Special Rapporteur encourages all Internet service providers and
social media platforms to issue clear policies on combating racial and ethnic
incitement and hatred, and to include them in their terms of service. Similarly,
Internet service providers, social media platforms and online moderators should
also undertake training and education initiatives to address racism online.
Greater understanding of the social experience of users is essential to develop
technology that supports rather than marginalizes vulnerable users. Civil
society and marginalized communities need support to build strong movements
to counter racism and intolerance online. Community ownership of
infrastructure, training in network and content management, and alternative
software use, including free and open software, can help to bridge existing gaps
in knowledge and access.
73. The Special Rapporteur considers it important to continue examining the
correlation between manifestations of racism on the Internet and social media
and the incidence of hate crimes and cyberbullying. Given the lack of
adequate data on this issue, he recommends that States and national human
rights institutions increase their efforts to identify, investigate and register such
crimes.
74. Lastly, given the rapid development of the Internet and the ever-changing
nature of the trends observed, such as in social media networks, the Special
Rapporteur would like to remain informed and to follow up regularly on the
different dimensions of the problems of racism and the Internet and social media
in the future.