Are Social Media Users Publishers? Alternative Regulation of Social Media in Selected African Countries

Article, Volume 2 Issue 1 2021

Vincent Adakole Obia
PhD Researcher, Birmingham School of Media, Birmingham City University, B4 7RJ, England.
Email: Vincent.Obia@mail.bcu.ac.uk
ORCID iD: https://orcid.org/0000-0003-1650-9103


ISSN 2752-3861

 

Abstract

This article addresses the thinking behind social media regulation in Africa and explores how it diverges from the approach in Western countries. It gives attention to the debate over who should be labelled a publisher on social media, an issue with significant regulatory implications. To date, research on social media regulation, largely based on the West, has focused on platform self-regulation or the role that states have in holding platforms responsible for objectionable content. I suggest that such a focus can be problematic since it presents the Western approach as the universal case of regulation, overlooking examples in other regions like Africa. Hence, this article considers the African approach to social media regulation by reviewing the policies that have been drawn up, how social media publishers are determined, and the politics that underlie the policies. To do this, I analyse social media regulation in Africa as an alternative to the Western approach by examining policy and legislative documents in Nigeria, Uganda, Kenya, Tanzania and Egypt. The article uses the multiple streams framework of policy analysis to underscore the problem, policy and politics inherent in the regulatory trend we find in Africa. I interrogate the data using Lawrence Lessig’s (2006) work on the modalities of regulation and Philip Napoli’s (2019) concept of publishers on social media. The article shows that the trend in Africa is to classify social media users as publishers who bear the burden of liability for the content they post. I argue that this approach is preferred because of the politics at play, where the aim is not necessarily to combat online harms but to silence public criticism and dissent.

 

Introduction

On 8 June 2015, the police in Uganda arrested Robert Shaka, an information technology expert, for what was described as the publication of “offensive communication” via social media. Shaka was accused of acting under the name of Tom Voltaire Okwalinga (TVO), a Facebook account that had been critical of the government. In the charge sheet, he was said to have “disturbed the right of privacy of H.E. Yoweri Kaguta Museveni (the President) by posting statements as regards his health condition on social media to wit Facebook”.[1] For this, he was charged on the basis of Section 25 of the Computer Misuse Act, 2011, which criminalises electronic communication deemed to be offensive to the peace, quiet or privacy right of any person. I point to the Shaka case because it represents a growing trend in the regulation of social media “harms” in African countries, as I show in this article. We see examples of this in the August 2019 arrest of Nigerian journalist Ibrahim Dan-Halilu over a Facebook post,[2] and the May 2020 case of comedian Idris Sultan in Tanzania, who was prosecuted for making fun of President John Magufuli’s outfit on social media.[3] This trend has seen the introduction of direct formal regulation in which social media users are labelled as publishers. In other words, they are accountable for the posts they share online. It is then possible to apply regulatory weight to specific cases, like that of Shaka, that are deemed critical of or offensive to the establishment. I suggest that such an approach is tied to the politics of social media regulation in Africa, and I explore this by considering Wæver’s (1995) concept of the securitisation of speech acts, which I explain below.

Overall, the policies that I review indicate the extension of traditional media censorship to the online sphere, with implications for freedom of expression on a continent-wide basis. In spite of this, research has remained largely focused on the realities of social media regulation in the West. This line of research usually considers regulation by platforms, using tools such as algorithms and content moderation (Sartor and Loreggia, 2020), or the need for Western nations to regulate social media platforms (Napoli, 2019). However, I argue that such an approach overlooks the contemporary regulatory agenda in developing regions such as Africa, with the likely implication of presenting the Western approach as the universal example. This article therefore answers the call to de-Westernise the field of data studies and social science in general by considering “the diversity of meanings, worldviews, and practices emerging in the [global] Souths” (Milan and Treré, 2019, p. 323). Hence my focus on the social media regulatory policies of five African countries: Egypt, Kenya, Nigeria, Tanzania and Uganda. My aim is to underscore that Western examples are not universal by drawing a parallel between Western and African approaches. I therefore ask:

  1. What is the policy approach to social media regulation in the selected African countries?
  2. How do the policies frame and assign the publisher label on social media in determining who bears the burden of liability?

In the sections that follow, I review the literature by focusing on Lessig’s (2006) notion of the modalities of regulation and Napoli’s (2019) concept of publishers on social media. Using the multiple streams approach to policy analysis, I show that African countries are choosing an alternative approach to social media regulation, one that is wholly different from the Western pattern and one that can be explained by the politics of regulation in Africa.

 

Modalities of regulation

In the 1990s, there was a commonly held belief that the internet was unregulable. This view was championed by cyber-libertarians such as Barlow (1996), who saw the internet as a new world outside of this world, beyond the realm of regulation, and to be governed only by norms agreed to by members of this new world. Barlow’s (1996) argument has largely lost currency as we now know that regulation can be applied to internet usage, including social media content. Lessig (2006) establishes this point, arguing that the internet has always been regulable because, just like any other infrastructure, it is designed, built and can be modified by code. Regulation was near impossible at the start only because the architecture of the internet at the time did not allow for it. With the development of systems of code such as online identity verification, regulation is now possible and online activities are therefore subject to greater control. Hence, if a government wants to regulate the internet, it only has to “induce the development of an architecture that makes behaviour more regulable” (p. 62). This is the use of law to regulate computer and platform code so as to indirectly regulate behaviour. Lessig says the code that makes this possible is the real law; hence, “code is law”. In addition to architecture, Lessig (2006) highlights three other regulatory tools: law, norms and the market. Together, these four make up what he calls the “modalities of regulation”. The modalities are distinct but interdependent – they can support or oppose one another, but they inevitably affect one another.

The interrelationship between the modalities of regulation can be seen in the self-regulatory systems adopted by the major tech giants. When it comes to rules and laws, the platforms have developed internal mechanisms known as terms of service that can be interpreted as binding legal agreements between platforms and users. Hence, Kaye (2019, p. 16) notes that platforms have become “institutions of governance, complete with generalized rules and bureaucratic features of enforcement”. For instance, Facebook has its Community Standards,[4] Twitter has its Rules,[5] and YouTube has its Terms.[6] These rules serve as guidelines against the circulation of harmful content, typically disinformation and discriminatory speech. They are enforced through content moderation practices carried out by humans or algorithms (Sartor and Loreggia, 2020). The use of algorithms to proactively filter out content that violates terms of service therefore points to the way in which law and code have been combined in the online environment. Code, based on machine learning, is also being used to curate the information that people are exposed to, providing them with pre-selected content and shielding them from other material (UN Special Rapporteur, 2018). This is, in effect, regulation by technology. Zuboff (2019), in her work on surveillance capitalism, shows how regulation of this sort functions under a market regulatory modality, where the decisions platforms make about algorithmic recommender systems, filtering and moderation are largely driven by profit motives. Consequently, platforms are able to promote attention-grabbing content, however outrageous, as long as it can be sold to advertisers (Wood, 2021).
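To make the idea of regulation by code concrete, the sketch below shows, in schematic form, how a terms-of-service rule might be enforced automatically at the moment of posting rather than through any legal process. It is a minimal illustration only: the function names, banned terms and rejection message are hypothetical, and real platforms rely on machine-learning classifiers and human review rather than keyword lists (Sartor and Loreggia, 2020).

```python
# A minimal, hypothetical sketch of "code is law": a platform rule
# enforced automatically at the point of publication. All terms and
# messages below are illustrative placeholders, not any real
# platform's actual system.

BANNED_TERMS = {"example_slur", "example_threat"}  # placeholder rule set


def violates_terms(post: str) -> bool:
    """Return True if the post contains any banned term."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not BANNED_TERMS.isdisjoint(words)


def submit_post(post: str) -> str:
    # The regulatory decision is embedded in the architecture itself:
    # a violating post is simply never published, with no court or
    # regulator involved at any point.
    if violates_terms(post):
        return "REJECTED: post violates the terms of service"
    return "PUBLISHED"


print(submit_post("hello world"))             # PUBLISHED
print(submit_post("an example_threat here"))  # REJECTED: ...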

Norms as a regulatory modality also find expression in this mix, particularly with reactive content moderation. Generally speaking, norms are collectively shared beliefs about what is typical and appropriate behaviour within a community (Heise and Manji, 2016). In social media content moderation, norms are realised in the notice-and-take-down systems made available by platforms, where users report questionable material before a decision is taken about removing it. As Lessig (2006) notes, norms usually apply in online discussion fora where “a set of understandings constrain behavior” (p. 124). Today, this is more commonly realised as call-out or cancel culture – “a form of public shaming that aims to hold individuals responsible for perceived politically incorrect behaviour on social media and a boycott of such behaviour” (Hooks, 2020, p. 4). In some ways, cancel culture on social media can be seen as an extreme version of norms as a regulatory modality, in contrast to the mild and collective policing of online behaviour described by Barlow (1996) in his “Social Contract”. Still, whether mild or extreme, normative regulation of this sort is problematic. The mild version proposed by Barlow (1996) simply cannot handle the volume and realities of online communication in an age of social media powered by surveillance capitalism and confirmation bias. Meanwhile, the more extreme version, cancel culture, has been criticised as a form of virtual war used to censor opposing views rather than sanitise the online space (Trigo, 2020).

Faced with the realities of these shortcomings, states are starting to introduce formal regulation through laws that govern social media content. Calls for greater state regulation reflect what Baccarella et al. (2019) call the “dark side” of social media and the “curse of progress”. These calls have grown louder after several “mishaps”, prominent amongst which were the Cambridge Analytica scandal, the spread of far-right extremism, and the Christchurch terror attack. This takes us back to the concept of law as a primary regulatory modality as states seek means with which to curtail social media and its excesses. In the United States, there are moves to break up social media companies like Facebook to make competition fairer.[7] In Australia, a law has been passed requiring platforms like Facebook and Google to pay news outlets for hosting their content.[8] In the UK and Germany, laws to regulate online harms on social media have been introduced or are being considered. These all reflect the use of law to shape regulatory outcomes, whether in the market or in the content moderation design of social media platforms. However, the use of law in this manner is largely a Western feature, and it underscores the debate on whether platforms should be labelled as publishers. It also affords me the chance to interrogate the differences between the West and Africa in how a publisher is identified on social media.

 

Who is a publisher on social media?

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. (Section 230(c)(1), Communications Decency Act, 1996 – emphasis mine).

The above “Good Samaritan” provision in US law absolves Silicon Valley companies of liability for objectionable social media content posted by third-party actors such as social media users, otherwise known as information content providers. Even if the platforms choose to moderate objectionable social media content, Section 230 still shields them from liability. This is because the law does not deem platforms to be publishers of online content, allowing them the flexibility to characterise themselves either as technology intermediaries or as media companies depending on the situation. For instance, in August 2016, Facebook CEO Mark Zuckerberg told a group of university students that Facebook was a technology company, not a media company.[9] This was followed by a post after the 2016 US election stating that Facebook’s “goal is to show people the content they will find most meaningful” (implying a technology platform), adding that “I believe we must be extremely cautious about becoming arbiters of truth ourselves”.[10] However, one month later, Zuckerberg admitted that Facebook was a media company, though not in the sense in which we know television.[11] In the aftermath of the Cambridge Analytica revelations in 2018, Zuckerberg admitted that the platform is both a technology company made up of “engineers who write code” and a media company that has “a responsibility for the content that people share on Facebook”.[12]

At the heart of this prevarication is the question of whether platforms are media publishers or technological intermediaries, and this has fundamental regulatory implications. If platforms are media publishers, for instance, then they are bound to be regulated (House of Lords, 2018; Napoli, 2019). Instead, platforms have largely argued that they are computer or technology companies that should not be subject to media regulation[13] (Flew et al., 2019). Nonetheless, the aftermath of the Christchurch attacks in New Zealand saw Prime Minister Jacinda Ardern describe platforms like Facebook as a “publisher”, not just a “postman”.[14] Napoli (2019) agrees, stating that platforms are media companies that should be subject to government oversight. In describing the publishing role of a media outlet, he observes that the job of a media company as publisher is to produce, distribute and exhibit content, and that these processes have been merged on social media such that it now plays a significant role in our news ecosystem. Although social media companies claim they only distribute and do not produce content, Napoli (2019) argues that content production has never served as a distinct rationale for media regulation, pointing out that solely distributive outlets such as cable and satellite platforms are regulated even in Western societies.

Napoli’s (2019) argument also hinges on the moderation and recommendation practices of social media platforms. These, he says, should qualify them as news organisations or publishers, “given the extent to which they are engaged in editorial and gatekeeping decisions related to the flow of information” (p. 13). We already see examples of editorial decision-making in Twitter’s labelling of certain posts as misleading or as containing violent content. Facebook has recently moved to institutionalise the process, creating an Oversight Board which reviews decisions taken by the platform to remove certain posts. However, unilateral arrangements by platforms have been criticised. For instance, there are indications that the Facebook Oversight Board does not address content that Facebook chooses to leave up. This means the board is essentially aligned with Facebook’s advertising interests while simultaneously making the platform appear to be maintaining decorum in the space.[15] Also, the permanent removal of Donald Trump’s account from mainstream social media was described by Angela Merkel as problematic,[16] raising new questions about whether governments or platforms should have power over limitations to freedom of expression.

The move by Europe to regulate platforms points to an increasing identification with Napoli’s (2019) position. In the EU, the Digital Services Act requires platforms to be transparent and to take down illegal and harmful content. It also places a duty on “very large platforms”, like the major social media companies, to identify “systemic risks” and take action to mitigate them.[17] In the UK, an Online Safety Bill is being considered. The Bill places a statutory duty of care on platforms, mandating them to moderate content that causes physical or psychological harm. These moves point to a recognition of the liability that platforms have for the content they host, suggesting that these platforms are publishers, contrary to the position established by Section 230. African countries also see the need for social media regulation, given the “mishaps” I mentioned earlier. However, their approach represents an alternative form of regulation insofar as it diverges from the approach being taken in the West, particularly in Europe. In the sections that follow, I examine the policies of selected African countries to highlight the regulatory system they are adopting and who they designate as publishers on social media.

 

Method

I use the policy analysis method to examine the legal documents and policies on social media in selected African countries. Specifically, I adopt the multiple streams framework of policy analysis developed by John Kingdon (1984) as the basis for analysing the policies. The multiple streams approach holds that the problem, policy and politics streams run concurrently until stakeholders (whom Craig et al., 2010 call policy entrepreneurs) bring them together at a policy window, through which an alternative policy solution can emerge (Craig et al., 2010; Browne et al., 2018). My approach is therefore aimed at exploring the problem that state actors are concerned with, the policy they have constructed, and the politics that underpin particular policy approaches. Using policy analysis in this fashion enables me to answer my research questions on the policy framework on social media being advanced by African countries and on who is framed as a regulatory target or publisher.

The countries that I consider in this study are Nigeria, Kenya, Tanzania, Uganda and Egypt. They were selected because they provide the most obvious examples of attempts to regulate internet and social media communication on the continent. The documents that I consider are listed in Table 1. They include the current Internet Falsehood Bill in Nigeria, which may not be passed because of the significant opposition it faces. Indeed, almost all the laws have been criticised by civil society groups, and several face legal challenges in court over whether they violate civil liberties. For instance, the Computer Misuse and Cybercrimes Act in Kenya was ruled unconstitutional in October 2020.[18] This points to the politics that I consider under the multiple streams framework. Although some of these instruments are not yet law or have been deemed unlawful, I include them in my analysis as they point to the contemporary regulatory approaches being considered in these countries. Some of these countries also have additional legal instruments, such as penal codes and terrorism laws, that address social media harms like disinformation. I have not considered these additional laws since they tend to be repetitive.

Table 1: Regulatory documents analysed in this study

Country – Objects of study (Policy/Act/Bill)
Egypt – Law on the Organisation of Press, Media and the Supreme Council of Media, 2018
Kenya – Computer Misuse and Cybercrimes Act, 2018
Nigeria – Cybercrimes Act, 2015; Protection from Internet Falsehood and Manipulation Bill, 2019; Digital Rights and Freedom Bill, 2019
Tanzania – Electronic and Postal Communications (Online Content) Regulations, 2020; Cybercrimes Act, 2015
Uganda – Computer Misuse Act, 2011; Uganda Communications Act, 2011; Social Media Tax (introduced 2018)

 

Regulatory approach in the selected African countries

In this section, I apply the multiple streams framework of policy analysis to my objects of study; that is, the policies on social media regulation in the selected African countries. The first stream, the problem stream, has largely been covered in the literature review above. To recap, social media use has become problematic given the challenges of online harms, in what has been described as the “curse of progress” (Baccarella et al., 2019). What makes the problem even more daunting is the potential for limits to be placed on freedom of expression regardless of the policy approach that is taken. Added to this is the reality that social media platforms are essentially private corporations that operate globally. Consequently, the regulation that they currently perform using their terms of service or oversight boards would be deemed in some quarters to be a privatisation of regulation (DeNardis and Hackl, 2015). Put differently, it is regulation being done – sometimes via opaque algorithms – by private entities that are not statutorily accountable to society. Their global reach also means they have enormous power as internet information gatekeepers (see Laidlaw, 2010) that cannot be easily controlled, especially by countries in Africa.

The framing of the problem in the policy documents that I review is a little more targeted. Here, the problem is largely presented as online falsehood or disinformation. For instance, the Law on the Organisation of Press in Egypt targets the publication of falsehood on blogs, websites and social media. In a report of 333 cases of digital expression violation from 2016 to 2019, cases based on the publication of false news were shown to form the overwhelming majority (Open Technology Fund, 2019). The focus on falsehood also holds in Kenya with the Computer Misuse and Cybercrimes Act, which targets “a person who intentionally publishes false, misleading or fictitious data or misinforms with intent”.[19] The law also covers false information that “constitutes hate speech” and hate speech that is inciting, discriminating or harmful to the reputation of others.[20] In Tanzania, the Electronic and Postal Communications Regulations have a provision on “prohibited content”[21] which lists ten categories of online harms. These include the publication of “statements or rumours for the purpose of ridicule” and the dissemination of information that is false, untrue and misleading. In Nigeria, the proposed Internet Falsehood Bill shows the priority of place given to online falsehood, as clearly stated in its explanatory memorandum. However, there is also the Digital Rights and Freedom Bill, which is more concerned with promoting freedom of expression on social media and makes no mention of online falsehood. This shows a struggle in Nigeria between two sides of a regulatory divide, one based on censorship and another on freedom. Uganda has the Computer Misuse Act, in which three sections are relevant in terms of problematising online harms: the sections on cyber-harassment, offensive communication, and cyber-stalking.[22]

Uganda’s interest in cybercrimes indicates that cybercrime laws have been integrated with social media regulatory policies. We also find this in Nigeria and Tanzania, each of which has its own Cybercrimes Act. In Kenya, the integration is clearly visible in the Computer Misuse and Cybercrimes Act. What is curious here is that cybercrime laws are generally justified as instruments needed to protect “critical national information infrastructure” – satellites would be an example. What we find with social media regulation, however, is that these laws have been used to target false information published via a computer, as in Tanzania, and falsehood and cyber-stalking, as in Nigeria and Uganda.

When it comes to the policy stream, the documents show that the approach in all five countries is to criminalise online harms such as falsehood or mis/disinformation. The Nigerian Internet Falsehood Bill provides for the correction of a false message and a take-down if this is not done; the offender can also be jailed and/or fined for non-compliance. There are also provisions for the targeted blocking of blogs, websites or social media pages. We find similar cases of criminalisation in Egypt, Uganda, Tanzania and Kenya, where offenders have been tried and sanctioned. Another policy approach is the use of registration. The Regulations in Tanzania, for instance, require bloggers, online publishers and internet broadcasters to be licensed by the Tanzania Communications Regulatory Authority and to pay annual application fees.[23] The Uganda Communications Commission in March 2018 also mandated all “online data communication and broadcast service providers” to register.[24] Uganda further pursued a policy of taxing social media and mobile money users. This was a 200 Shilling (approximately $0.05) daily tax on each user, a policy which forced many to access social media through Virtual Private Networks (Whitehead, 2018). Additionally, the country is notorious for politically motivated blanket social media bans, the latest of which happened during the January 2021 elections. Egypt has also banned social media around politically sensitive periods, such as during the Arab Spring uprising. The Law on the Organisation of the Press has been described as an instrument that legitimises the practice of blocking websites without the need for judicial oversight (Article 19, 2018). It also subjects social media users who have more than 5,000 followers or subscribers to regulation by the Supreme Council.

All of this points to the politics at play in the social media regulatory policies being drafted in African countries. This is obvious in Egypt, where the government is wary of another Arab Spring-style uprising fuelled by social media. Hence, regulatory enforcement usually targets the opposition or those who are critical of government, since their comments can be conveniently labelled as false (TIMEP, 2019). The politics can also be seen in Kenya, where the Bloggers Association of Kenya (BAKE) challenged the constitutionality of the Computer Misuse and Cybercrimes Act in court before the High Court declared the law void because it did not go through the upper chamber of the legislature.[25] Similar court battles have been waged in Nigeria over the Cybercrimes Act, which has been used to target journalists, bloggers and social media users.[26] One common thread running through the cases in the countries under review is therefore the use of regulation not to combat online harms, but to silence public opposition and criticism on digital platforms and social media. This is what I refer to as the politics of social media regulation in Africa.

My analysis shows that this is achieved through what Wæver (1995) calls the securitisation of speech acts. In the context of this research, securitisation is the practice of designating online harms such as falsehood and hate speech as security concerns that require extraordinary state intervention. The strategy is to securitise an issue simply by declaring it to be so, even if the issue does not require the weight of securitisation. Examples of securitisation of this sort abound in my objects of study, revealing the politics at play in social media regulation on the continent. For example, the integration of cybercrime law with social media regulation that we find in Nigeria, Kenya and Tanzania points to securitisation, since cybercrime laws are justified primarily on “national security” grounds. In Egypt, the Law is also predicated on “national security”, and enforcement is carried out by the State Security Prosecution, which usually handles only cases bordering on national security and terrorism (Open Technology Fund, 2019). In Tanzania, securitisation also covers national culture and morality, leading to the securitisation of abuse and insult, as in the case where five people were charged with “insulting” the President in a WhatsApp group chat.[27] The regulation being articulated in Africa is therefore possible because of the way internet and social media users are framed as publishers – a primary divergence from the regulation being considered in the West.

 

Internet and social media users are publishers

As I pointed out previously, the design of digital and social media regulation in places like Europe is based on placing the burden of liability, or the label of publisher, on social media platforms. Although Section 230 still largely shields platforms from this burden, especially in the US, countries in Europe are starting to place a “duty of care” on platforms. This is not the case in the regulatory policies reviewed in this article. My analysis indicates that social media regulation in Africa is predicated on users being viewed as publishers of information who are liable for the content they post. This means regulation in Africa bypasses platforms and seeks to regulate user activities directly using the modality of law. The Western approach differs in that co-regulation is preferred: state actors seek to regulate social media usage in partnership with platforms – an indirect form of regulation where states regulate through platforms. We see this enshrined in the Digital Services Act and the Online Safety Bill mentioned earlier. In some cases, this is the use of the modality of law to regulate the modality of code, in how algorithmic regulation is done by social media platforms, or the modality of the market, in how advertising is realised.

My review shows that African countries have largely chosen not to regulate platforms in this way. One suggestion is that they do not (yet) have the power to regulate platforms. Beyond this, however, I argue that African countries largely choose to regulate users, not platforms, in ways that allow for the kind of politics I referred to in the previous section. The designation of social media users as publishers is most explicit in the Tanzanian Electronic and Postal Communications Regulations, which state that “Every subscriber and user of online content shall be responsible and accountable for the information he posts in an online forum, social media, blog and any other related media.”[28] This covers not just offences already criminalised in the Tanzanian legal system, but also misinformation, rumours, insults and messages that call for protests. A similar situation exists in the other countries.

In criminalising the publication of false information, Kenya’s Computer Misuse and Cybercrimes Act places the burden of liability on users, not social media platforms. It is also clear that in Uganda the social media tax was targeted at users, with the tax supposedly needed to curb the spread of gossip and to improve the quality of information in circulation (Boxell and Steinert-Threlkeld, 2019). The Nigerian Internet Falsehood Bill likewise targets internet users and content providers before it mentions internet intermediaries. The Egyptian Law, as mentioned earlier, is aimed at social media accounts with at least 5,000 followers. Critics see the law as targeting social media users because the traditional media establishment in Egypt is already pro-government (Malsin and Fekki, 2018). In addition, spreading false information is already a crime under the Egyptian penal code, but social media is not covered by that code (Malsin and Fekki, 2018). This points to a move to bring people’s decisions over social media content into the government’s web of control by enacting legislation that mirrors already existing laws for the traditional media.

It also explains why the laws I reviewed affect the entire media architecture in the respective countries. As I have shown, they tend to cover all forms of digital content provision, including blogs, online publishing and internet broadcasting. This is significant in the present age, where it is the norm for media outlets, irrespective of size, to have an online presence. Where intermediaries are referenced, as in Nigeria or Uganda, these are unlikely to be social media platforms, even though such platforms are mentioned in the interpretation section of the Nigerian Internet Falsehood Bill. Instead, I suggest that the aim is to regulate local internet service providers, as we have seen in Uganda through the Uganda Communications Act. This law makes it possible for internet service providers to be classified as “communications services” and ordered at will by the Uganda Communications Commission to block access to websites or to social media platforms at large – the ultimate ban on all publishers.

 

Conclusion

In this article, I have examined social media regulatory policies in five African countries, highlighting how they diverge from the dominant approach to regulation being considered in the West. Based on my analysis, I make the case that the regulation of online harms in the selected countries follows a pattern of direct formal regulation targeted at users. Hence, I suggest that the African example can be seen as the construction of internet and social media users (content providers in general), as opposed to platforms, as publishers who bear the legal burden of liability for the content they post. The African case then stands in contrast to the Western approach, where the debate largely centres on holding platforms accountable for harmful content, whether or not that content is illegal (Napoli, 2019). This leads to an obvious question: why do African countries tend to construct social media users, and not platforms, as publishers? One answer is that countries in the Global South do not yet have the capacity to regulate social media companies, especially those classified as Big Tech. However, my argument in this article is that African countries tend to prefer the social-media-users-as-publishers approach because of the politics of regulation. In this regard, I have used Kingdon’s (1984) multiple streams framework to highlight the fact that the policies drawn up to address the problem of social media (mis)use in Africa can be understood by studying the politics at play. This politics means that regulation in Africa is not aimed at combating online harms (even if this might be a by-product), but at protecting political leaders from public dissent. As I have shown, this is possible because online harms have been securitised as issues requiring heightened state intervention. I suggest that this presents significant challenges for freedom of expression on social media and does not address the problem of online harms. What we then have is the use of online harms, largely framed as falsehood, as an excuse to impose censorship on social media usage on the continent.

 

[1] Available at: https://advox.globalvoices.org/2015/06/12/ugandan-authorities-jail-facebook-user-for-offensive-comments-about-president-musveni/
[2] Available at: https://punchng.com/dss-re-arrests-journalist-for-supporting-sowore-on-facebook-2/
[3] Available at: https://www.amnesty.org/en/latest/news/2020/07/tanzania-charges-against-comedian-for-laughing-must-be-thrown-out/
[4] Available at: https://www.facebook.com/communitystandards/
[5] Available at: https://help.twitter.com/en/rules-and-policies/twitter-rules
[6] Available at: https://www.youtube.com/static?gl=GB&template=terms
[7] Available at: https://www.nytimes.com/2020/12/09/technology/facebook-antitrust-monopoly.html
[8] Available at: https://edition.cnn.com/2021/02/24/media/australia-media-legislation-facebook-intl-hnk/index.html
[9] Available at: https://pctechmag.com/2016/08/mark-zuckerberg-says-facebook-wont-become-a-media-company-but-rather-stay-as-a-tech-company/
[10] Available at: https://www.facebook.com/zuck/posts/10103253901916271?pnref=story
[11] Available at: https://www.theguardian.com/technology/2016/dec/22/mark-zuckerberg-appears-to-finally-admit-facebook-is-a-media-company
[12] Available at: https://www.cnbc.com/2018/04/11/mark-zuckerberg-facebook-is-a-technology-company-not-media-company.html
[13] Available at: https://www.bbc.co.uk/news/entertainment-arts-38333249
[14] Available at: https://www.ft.com/content/13722f28-4edf-11e9-b401-8d9ef1626294
[15] Available at: https://www.theguardian.com/commentisfree/2021/mar/17/facebook-content-supreme-court-network
[16] Available at: https://www.cnbc.com/2021/01/11/germanys-merkel-hits-out-at-twitter-over-problematic-trump-ban.html
[17] Available at: https://www.traverssmith.com/knowledge/knowledge-container/eu-turns-the-screw-on-big-tech-the-digital-services-act-package/
[18] Available at: https://www.the-star.co.ke/news/2020-10-29-high-court-nullifies-23-bills-passed-by-national-assembly/?utm_medium=Social&utm_source=Twitter#Echobox=1603964924
[19] Section 22 (1), Computer Misuse and Cybercrimes Act, 2018.
[20] Section 22 (2), ibid.
[21] Third Schedule, Electronic and Postal Communications (Online Content) Regulation, 2020.
[22] Section 24-26, Computer Misuse Act, 2011.
[23] Section 4, Electronic and Postal Communications (Online Content) Regulations, 2020.
[24] Available at: https://www.ucc.co.ug/wp-content/uploads/2018/03/UCC_ONLINE-DATA-COMMUNICATIONS-SERVICES.pdf
[25] Available at: http://kenyalaw.org/caselaw/cases/view/202549/
[26] Available at: https://cpj.org/2020/06/nigerian-journalist-held-under-cybercrime-act-for-covid-19-coverage/
[27] Available at: https://web.archive.org/web/20171117160013/http://www.thecitizen.co.tz/News/Five-charged-with-insulting-Magufuli/1840340-3381718-qbmx20z/index.html
[28] Section 14, Electronic and Postal Communications (Online Content) Regulations, 2020.

 

About the author

Vincent is a PhD researcher in the School of Media, Birmingham City University. His research, funded by the Commonwealth Scholarship Commission in the UK, explores social media forums in Nigeria such as the Nigerian Twittersphere and the debate on policy attempts to regulate the wider social media environment in Africa, situating it in the global setting of new media governance. He currently serves as the Lead Communication Officer for the Postgraduate Network of the Media, Communication and Cultural Studies Association in the UK (MeCCSA-PGN).

 

References

Article 19 (2018) Egypt: 2018 Law on the Organisation of Press, Media and the Supreme Council of Media. Available at: https://www.article19.org/resources/egypt-2018-law-on-the-organisation-of-press-media-and-the-supreme-council-of-media/ (Accessed: 15 February 2021).

Baccarella, C.V. et al. (2019) ‘Averting the rise of the dark side of social media: the role of sensitization and regulation’, European Management Journal, 38(1), pp. 3-6.

Barlow, J.P. (1996) A Declaration of the Independence of Cyberspace. Available at: https://www.eff.org/cyberspace-independence (Accessed: 15 February 2020).

Boxell, L. and Steinert-Threlkeld, Z. (2019) Taxing Dissent: The impact of a social media tax on Uganda. Available at: https://arxiv.org/pdf/1909.04107.pdf (Accessed: 20 May 2020).

Browne, J., Coffey, B., Cook, K., Meiklejohn, S. and Palermo, C. (2018) ‘A guide to policy analysis as a research method’, Health Promotion International, 34(5), pp. 1032-1044.

Craig, R., Felix, H., Walker, J. and Phillips, M. (2010) ‘Public health professional as public entrepreneurs: Arkansas’s childhood obesity policy experience’, American Journal of Public Health, 100(11), pp. 2047-2052.

DeNardis, L. and Hackl, A. M. (2015) ‘Internet governance by social media platforms’, Telecommunications Policy, 39(9), pp. 761-770.

Flew, T., Martin, F. and Suzor, N. (2019) ‘Internet regulation as media policy: rethinking the question of digital communication platform governance’, Journal of Digital Media and Policy, 10(1), pp. 33-50.

Heise, L. and Manji, K. (2016) Social Norms. GSDRC Professional Development Reading Pack no. 31. Birmingham, UK: University of Birmingham. Available at: https://assets.publishing.service.gov.uk/media/597f335640f0b61e48000023/Social-Norms_RP.pdf (Accessed: 14 April 2021).

Hooks, A.M. (2020) Cancel Culture: posthuman hauntologies in digital rhetoric and the latent values of virtual community networks. A Master’s Dissertation, University of Tennessee at Chattanooga. Available at: https://scholar.utc.edu/cgi/viewcontent.cgi?article=1835&context=theses (Accessed: 14 April 2021).

House of Lords (2018) Social Media and Online Platforms as Publishers. Available at: https://lordslibrary.parliament.uk/research-briefings/lln-2018-0003/ (Accessed: 20 May 2020).

Kaye, D. (2019) Speech police: The global struggle to govern the Internet. New York: Columbia Global Reports.

Kingdon, J. (1984) Agendas, Alternatives and Public Policies. New York: Harper Collins.

Laidlaw, E. B. (2010) ‘Framework for identifying internet information gatekeepers’, International Review of Law, Computers & Technology, 24(3), pp. 263-276.

Lessig, L. (2006) Code Version 2.0. New York: Basic Books.

Malsin, J. and Fekki, A.E. (2018) ‘Egypt passes law to regulate media as President Sisi consolidates power’, Wall Street Journal, 16 July. Available at: https://www.wsj.com/articles/egypt-passes-law-to-regulate-media-as-president-sisi-consolidates-power-1531769232 (Accessed: 15 May 2020).

Milan, S. and Treré, E. (2019) ‘Big data from the South(s): beyond data universalism’, Television and New Media, 20(4), pp. 319-335.

Napoli, P. (2019) Social Media and the Public Interest: Media regulation in the disinformation age. New York: Columbia University Press.

Open Technology Fund. (2019) Digital Authoritarianism in Egypt: digital expression arrests 2011-2019. Available at: https://public.opentech.fund/documents/EgyptReportV06.pdf (Accessed: 12 March 2021).

Sartor, G. and Loreggia, A. (2020) The impact of algorithms for online content filtering or moderation. JURI Committee, European Parliament. Available at: https://www.europarl.europa.eu/RegData/etudes/STUD/2020/657101/IPOL_STU(2020)657101_EN.pdf (Accessed: 14 April 2021).

TIMEP (2019) The Law Regulating the Press, Media and the Supreme Council for Media Regulation. Available at: https://timep.org/reports-briefings/timep-brief-the-law-regulating-the-press-media-and-the-supreme-council-for-media-regulation/ (Accessed: 12 March 2021).

Trigo, L.A. (2020) ‘Cancel culture: the phenomenon, online communities and open letters’, PocMec Research Blog, 29 September. Available at: https://f-origin.hypotheses.org/wp-content/blogs.dir/7811/files/2020/09/092020_2LTrigo.pdf (Accessed: 14 April 2021).

UN Special Rapporteur (2018) Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression. Available at: https://undocs.org/pdf?symbol=en/A/73/348 (Accessed: 10 March 2020).

Whitehead (2018) Uganda Social Media and Mobile Money Taxes Survey Report. Information Communication Technology Association of Uganda.

Wæver, O. (1995) ‘Securitization and desecuritization’, in Lipschutz, R.D. (ed.) On Security. New York: Columbia University Press, pp. 46-86.

Wood, P. (2021) ‘Online harms: why we need a systems-based approach toward internet regulation’, Media@LSE Blog, 19 February. Available at: https://blogs.lse.ac.uk/medialse/2021/02/19/online-harms-why-we-need-a-systems-based-approach-towards-internet-regulation/ (Accessed: 23 February 2021).

Zuboff, S. (2019) The Age of Surveillance Capitalism. New York: Public Affairs.
