
Nestorovska, Diana --- "Press Councils: Adapting an Existing Self-regulatory Model for the Social Media Age" [2022] ANZCompuLawJl 10; (2022) 94 Computers & Law, Article 10


PRESS COUNCILS: ADAPTING AN EXISTING SELF-REGULATORY MODEL FOR THE SOCIAL MEDIA AGE

DIANA NESTOROVSKA[*]

ABSTRACT

The debate on whether social media can and should be regulated has become polarised in the United States. Some view traditional forms of regulatory intervention as a threat to free speech and a bridge too far towards censorship, while others are sceptical of the efficacy of self-regulation. While Australia and New Zealand do not face the same legislative hurdles in regulating social media platforms, both jurisdictions are grappling with how to regulate social media content. Consideration could be given to adapting the self-regulatory model of a press council to a social media context. It is submitted that adapting this model may be a tentative first step towards greater accountability of digital platforms to the public.

CONTENTS

I Introduction
II Community Standards and Self-Regulation
III Safe Harbour
IV Press Council Model
V Conclusion

I INTRODUCTION

Digital platforms have garnered a reputation for being incubators of disinformation and misinformation. In the United States, section 230(c) of the Communications Decency Act 1996 shields them from liability for their good faith attempts to regulate content. In essence, the provision precludes social media and tech giants from being treated as either “publishers” or “speakers” of third-party content and grants them immunity from civil liability for hosting it.

The conceptual distinction between disinformation and misinformation is one of intent.[1] Colloquially referred to as “fake news”, disinformation is content that has the “look and feel” of traditional news but is designed to deceive its audience.[2] Misinformation, on the other hand, does not necessarily involve a deliberate attempt to mislead an audience, although attempts to harness misinformation in an orchestrated campaign for political purposes can be characterised as a species of disinformation.[3]

Digital platforms have historically been reluctant to remove content that is characterised as “fake news” or misinformation, on the basis that such regulation would impede free speech. Facebook (now Meta) has articulated its position in its “commitment to voice” as follows: “In some cases, we allow content – which would otherwise go against our standards – if it's newsworthy and in the public interest.”[4]

While the commitment to free speech is a worthy value, particularly in the context of the First Amendment in the United States, the consequences of taking no or delayed action on disinformation can be serious. Indeed, the storming of the U.S. Capitol on 6 January 2021 exposed the dark underbelly of a laissez-faire approach to content regulation. The attempted insurrection was largely fomented through the “echo chamber” effects of social media: Mr Trump’s claims that the 2020 Presidential election had been rigged stirred his supporters into a mob and eventually culminated in the storming of the Capitol.[5] Even as police were securing the Capitol, Mr Trump continued to post inflammatory statements on social media.[6] Facebook and Twitter responded by removing his posts and indefinitely blocking Mr Trump from using the platforms on the basis that the content promoted violence in violation of their Terms of Service.[7] While the social media giants did act to take down Mr Trump’s posts, some saw this as a case of too little, too late.[8]

II COMMUNITY STANDARDS AND SELF-REGULATION

Under increased public pressure over the proliferation of misinformation and “fake news,” Facebook and Twitter have promulgated their Community Standards. Both platforms require users, through their Terms of Service, to adhere to these Community Standards, violation of which can lead to the removal of non-complying content and/or suspension of the user’s account. Facebook has a “Community Standard on misinformation” which identifies content that incites interference in electoral processes, and content that undermines public health responses, as objectionable.[9] Twitter goes further and has specific policies relating to health and electoral misinformation, namely, the “Covid-19 misleading information policy” and the “Civic integrity misleading information policy”.[10]

The question remains whether the mere promulgation of Community Standards is enough, especially given Facebook has explicitly stated in its “commitment to voice” that it will allow content which would otherwise go against its standards to remain on the platform if Facebook considers it in the public interest. First, deciding whether content is in the public interest is a traditional editorial function of a publisher. One queries whether section 230 is still appropriate in these circumstances, given that the provision granted immunity to digital platforms on the basis that they were mere facilitators of third-party content. In any event, some commentators have urged social media giants to move quickly and decisively towards self-regulation, largely as a way of keeping unwanted government regulation at bay.[11] Others are more sceptical, dismissing Community Standards as a public relations exercise and questioning whether social media giants are capable of meaningful self-regulation.[12] The premise is that so long as the platforms profit from the exploitation of user content, true or otherwise, they are maximising shareholder return. On this view, the platforms will not choose to limit their ability to make profit unless negative consequences are imposed on them by an external body,[13] or the erosion of public trust in them is so great as to translate into a tangible decline in shareholder value. On this reasoning, social media giants are incapable of regulating themselves, and calls for greater government regulation of social media should be no more concerning than regulatory intervention in cases of product liability. In such cases, companies can and have been held to account for faulty products, and this has not irrevocably undermined free enterprise in the United States.[14]

Aside from updating its Community Standards, Facebook has taken additional steps towards self-regulation by creating a quasi-regulatory body called the Oversight Board (“Board”). The Board is funded by a Trust to which Facebook is the sole contributor, and is overseen by independent Trustees who are appointed by Facebook.[15] According to the Board’s Charter, a request for review of a Facebook decision on content can be submitted to the Board by either the original poster of the content or a person who previously submitted the content to Facebook for review.[16] However, a request can only be submitted where the person disagrees with Facebook’s decision and has exhausted other internal avenues of appeal.[17] Facebook can also submit requests for review to gain the Board’s opinion on whether any action it has taken is justified, or to request direction on a new and emerging area of policy.[18] The right to be heard is not guaranteed: the Board has the discretion to decide whether it will review a request, guided by essentially utilitarian principles – only cases that have the greatest potential to guide future decisions and policies will be heard, and any case which could result in criminal or regulatory sanctions will be declined.[19]

Facebook’s Board was put to the test on the issue of the Capitol insurrection. On 21 January 2021, Facebook announced that it had referred the case to its Board, asking it to consider whether Facebook had correctly decided on 7 January 2021 to indefinitely prohibit Mr Trump from posting content on Facebook and Instagram.[20] The Board ruled that “it was not appropriate for Facebook to impose the indeterminate and standardless penalty of indefinite suspension.”[21] In other words, Facebook went a step too far and needed to take action consistent with the consequences applied to other users of the platform.

It is commendable that Facebook has sought to create a mechanism for users to air their grievances over content decisions. Ultimately, however, the Board does not have the hallmarks of accountability that would give it the legitimacy of a regulatory body. The term “accountability” has been described as “the process of being called to ‘account’ to some authority for one’s actions.”[22] Accountability has several features: it involves giving an account to an external third party; one party demands account while the other responds and accepts sanctions; and it implies that the party demanding account has the authority to assert rights over those who are accountable.[23]

Thus, while the Board is technically a separate entity with Trustees who do not answer to Facebook, Facebook finances the Trust and appoints the Trustees. Even in the absence of any actual conflict of interest, this creates a perceived conflict, which may undermine public trust in the Board. Also problematic is that the Board will only review a handful of decisions, and even then it effectively performs an internal audit function: that is, it determines whether the decision was consistent with Facebook’s content policies and values.[24] In other words, Facebook is held to rules that it sets itself, by a body that it effectively funds, rather than to rules set by a third-party authority. Prior Board decisions have “precedential value” and will be published, but this pronouncement is watered down by the qualification that past decisions “should be viewed as highly persuasive when the facts, applicable policies or other factors are substantially similar.”[25] Facebook also commits to the independent oversight of the Board in relation to its decisions on content, and states that it will provide reasonable assistance to the Board and implement its recommendations.[26] However, this is only to the extent that the requests are “technically and operationally feasible” and not an undue drain on resources.[27] Beyond the reputational consequences of failing to adhere to Board rulings, there is no sanction that the Board can impose.

The Board’s purpose is consistent with Facebook’s publicly stated policy position on the regulation of content. Indeed, it is difficult to resist the conclusion that the Board was set up for a self-serving purpose. Facebook has articulated a view that procedural regulation – in other words, requiring companies to maintain certain systems and procedures – is the preferred way forward, at least for jurisdictions outside of the United States.[28] Procedural regulation would include requirements that Facebook has already implemented, such as requiring companies to publish their content standards, provide avenues for people to report content that appears to violate those standards, respond to such reports with a decision, and notify users when removing their content from the site. Such regulation could require a certain level of performance in these areas in order to avoid regulatory consequences; however, Facebook does not elaborate on what those consequences could be.

It is beyond the scope of this paper to critically assess whether self-regulation is appropriate or whether firmer regulatory intervention is required. Self-regulation is not uncommon: in the United States, this accountability mechanism has been adopted by, among others, the video game industry, which has seen its fair share of community concern.[29] In the context of section 230 and a general wariness of subjecting free speech to potentially overreaching and unconstitutional government control, self-regulation of social media may be a more incremental, palatable and achievable policy reform in the short to medium term.

III SAFE HARBOUR

By way of comparison, there is no equivalent of “section 230” in Australia: current safe harbour provisions for intermediaries are narrow in scope and do not necessarily extend to social media platforms.[30] In New Zealand, safe harbour is available to intermediaries that “facilitate” defamatory content, provided they follow the take-down procedure set out in the Harmful Digital Communications Act 2015. In general, however, legislative interventions targeting social media content have been reactive, addressing specific harms (e.g. cyberbullying) rather than disinformation or misinformation. For example, in 2019, the Australian Federal Parliament enacted laws requiring platforms, under pain of criminal sanction, to take down “abhorrent violent material” capable of being accessed in Australia.[31] This was in response to the live streaming on Facebook of the Christchurch terrorist attack on a mosque. Australia has also moved to give victims of cyberbullying and harassment the right to apply for take-down orders against content hosted by online platforms.[32]

There is some civil jurisprudence in Australia holding online platforms liable for defamatory content posted by third-party users.[33] Each case, however, turns on its facts, as shown by the recent High Court decision in Voller. That case arose out of lower court defamation proceedings brought by Dylan Voller, whose mistreatment at a juvenile detention facility sparked a formal government inquiry.[34] The High Court held by majority that the appellant media organisations were publishers of the third-party comments they invited on their own Facebook posts; the ruling did not extend to the position of search engines such as Google.[35] Following Voller, former senior politician John Barilaro won defamation proceedings against Google in the Federal Court. Interestingly, Mr Barilaro sued Google as the publisher of YouTube videos uploaded by a Mr Shanks, with the Court finding that the tirades against the then politician amounted to online harassment, and that Google’s failure to remove them, in clear breach of its own policies, amounted to publication.[36]

Of the three jurisdictions discussed in this essay, New Zealand has perhaps articulated the most cohesive framework for social media content regulation that would also broadly cover misinformation and fake news. New Zealand’s internet watchdog, Netsafe, and industry group NZTech have launched a voluntary Aotearoa New Zealand Code of Practice for Online Safety and Harms (“Code of Practice”) which covers the following thematic areas: child sexual exploitation and abuse; bullying or harassment; hate speech; incitement of violence; violent or graphic content; misinformation and disinformation.[37] Signatories to the Code include Meta, TikTok, Google, Amazon and Twitter, all of which have agreed to comply with the Code by, inter alia: publishing annual reports on their systems, policies and procedures for removing harmful content; and adhering to a public complaints process for breaches of the Code for which they may receive sanction.[38] At the time of writing, however, details of the complaints process have yet to be released.

IV PRESS COUNCIL MODEL

It is submitted that policy makers in the United States, Australia and New Zealand do not necessarily have to “reinvent the wheel” when it comes to a model for social media content regulation: the model of a press council is worthy of further consideration. In its traditional form, a press council is a body established by the major actors of the media industry, namely media owners, editors, journalists, and the public.[39] It is responsible for investigating potential breaches of the ethical codes of conduct adopted by its members.[40] Media organisations are not compelled to join a press council, and its efficacy depends on the funding provided by member organisations and the cooperation of all parties. This matters in a democracy because the media performs a key role in informing the public and holding government to account. Thus, in principle, a press council should not be funded by government, nor should it have government appointees on the council. In the context of social media, it is submitted that a “social media council” could be established whose constituent membership includes social media and technology companies, contributors of content, and diverse representatives of the public. This would be an improvement on Facebook’s Oversight Board: while the body would be industry-funded, it would also be responsible for creating a set of industry-wide Community Standards to which members must adhere, and for a more arm’s-length process of adjudicating content complaints. Applied to the case of the Capitol insurrection, any adjudication would be based on industry-accepted standards rather than standards set by Facebook itself. This would be a modest step towards greater transparency.

At the time of writing, the United States does not have a press council to which the public can submit complaints about the traditional media.[41] Arguments for and against the idea of a press council have been posited for decades and are not revisited here.[42] The position is different in other countries, including Australia and New Zealand, which still have active press councils, namely the Australian Press Council and the New Zealand Media Council. Indeed, in the context of New Zealand’s Code of Practice, it is envisaged that a “multi-stakeholder governance group” will administer the Code.[43] Arguably, public policy makers could leverage the existing model of New Zealand’s Media Council as a starting point.

Taking the example of the Australian Press Council (“Press Council”), its stated purpose is to promote freedom of speech and responsible journalism, set standards, and respond to complaints about material in Australian newspapers, magazines, and online-only publications.[44] Most complaints are resolved without resorting to adjudication and result in a correction or apology from the publication.[45] For complaints that cannot be resolved at the initial stages of the process, the Press Council may refer the matter to an adjudication panel to hear from the complainant and the publisher. Adjudication panels are drawn from a pool of Press Council members who represent the public and independent journalists (but not member publications), and from a group of panel members with community and media backgrounds who are not Council members.[46] While adjudication decisions are published online, it is important to note that adjudications can only be made in relation to members; media organisations that are not members are therefore beyond the reach of the Press Council. In addition, the Press Council does not have the power of enforcement: generally, it can issue a reprimand or censure, and call for (but not require) the publication to apologise, correct or revise content.[47] Under the New Zealand model, members of the public are required to submit their complaint directly to the member publication for resolution at first instance.[48] If the complainant is not satisfied with the publication’s response, they may then complain to the Media Council. Each complaint is assessed against the Statement of Principles and referred to the Chair, a Committee of Council, or the full Media Council, depending on the outcome of the assessment. The members, who are drawn from industry and the public, determine whether the complaint should proceed and whether there has been a breach of ethics.

V CONCLUSION

No doubt digital platforms may find the concept of an industry body akin to a press council worrying, on the basis that the model stems from traditional print and online media businesses. It may invite an unwanted inference, contrary to section 230, that digital platforms are comparable to publishers and hence liable for the third-party content they publish. Nonetheless, it is submitted that the model of a social media council is a starting point, noting that it is not posited as a panacea for all actual and perceived social media ills. The purpose of an industry-wide body would not be to undermine section 230 but rather to acknowledge the significant role that social media plays in facilitating free speech and to progress a constructive debate around social media regulation.


[*] Practising Australian Lawyer and Public Member, Australian Press Council. This article was originally submitted as a paper during the author’s international exchange program at UCLA Anderson’s School of Management, Summer Term 2022. The author thanks Professor Steven E Zipperstein for his comments on the original paper. The views expressed in this article are the author’s own personal views and do not in any way represent the views of any organisation affiliated with the author.

[1] Andrew M Guess and Benjamin A Lyons, ‘Misinformation, Disinformation, and Online Propaganda’ in Nathaniel Persily and Joshua A Tucker (eds), Social Media and Democracy: The State of the Field, Prospects for Reform (Cambridge University Press, 2020) 10-33.

[2] Ibid.

[3] Ibid.

[4] See ‘Facebook Community Standards’, Meta (Web Page) <https://transparency.fb.com/en-gb/policies/community-standards/?source=https%3A%2F%2Fwww.facebook.com%2Fcommunitystandards>.

[5] Pablo Barberá, ‘Social Media, Echo Chambers, and Political Polarization’ in Persily and Tucker (n 2) 34-55; Dmitriy Khavin, Haley Willis, Evan Hill, Natalie Reneau, Drew Jordan, Cora Engelbrecht, Christian Triebert, Stella Cooper, Malachy Browne and David Botti, ‘Day of Rage: How Trump Supporters Took the U.S. Capitol’, New York Times (online, 18 July 2023) <https://www.nytimes.com/spotlight/us-capitol-riots-investigations>.

[6] ‘Case decision 2021-001-FB-FBR’, (Oversight Board) <https://www.oversightboard.com/decision/FB-691QAMHJ/>.

[7] Ibid.

[8] Craig Timberg, Elizabeth Dwoskin and Reed Albergotti, ‘Inside Facebook Jan. 6 violence fuelled anger, regret over missed warning signs’, The Washington Post, (online, 22 October 2021) <https://www.washingtonpost.com/technology/2021/10/22/jan-6-capitol-riot-facebook/>.

[9] Terms of Service, cl 3.2.1, Meta (Web Page) <https://www.facebook.com/terms.php> and ‘Community Standard on Misinformation’, Meta (Web Page) <https://transparency.fb.com/en-gb/policies/community-standards/misinformation/>; Terms of Service, cl 4, Twitter (Web Page) <https://twitter.com/en/tos> and ‘Community Standards on Platform Integrity and Authenticity’, Twitter (Web Page) <https://help.twitter.com/en/rules-and-policies#twitter-rules>.

[10] Twitter (Web Page) <https://help.twitter.com/en/rules-and-policies/election-integrity-policy> and <https://transparency.twitter.com/en/reports/covid19.html#2021-jul-dec>. Note that the ‘Covid-19 misleading information policy’ has not been enforced since 23 November 2022.

[11] See, eg, Michael A Cusumano, Annabelle Gawer and David Yoffie, ‘Social Media Companies Should Self-Regulate. Now.’, Harvard Business Review, (online, 15 January 2021) <https://hbr.org/2021/01/social-media-companies-should-self-regulate-now>.

[12] See, eg, Yolanda Redrup and Andrew Tillett, ‘Social Media Platforms Can’t Self-Regulate’, Australian Financial Review, (online, 28 March 2019) <https://www.afr.com/technology/social-media-platforms-can-t-self-regulate-20190327-p517y5>.

[13] Ibid.

[14] Ibid.

[15] Oversight Board Charter, art 5.

[16] Ibid, art 2.

[17] Ibid.

[18] Ibid, art 2 and 5.

[19] Ibid, art 2.

[20] ‘Case decision 2021-001-FB-FBR’, (Oversight Board) <https://www.oversightboard.com/decision/FB-691QAMHJ/>.

[21] Ibid.

[22] The author has previously considered the issue of accountability in a regulatory context: see Diana Nestorovska, ‘Assessing the Effectiveness of ASIC’s Accountability Framework’ (2016) 34(3) Company and Securities Law Journal 193, 199 citing R Mulgan, ‘Accountability: An Ever-Expanding Concept?’ (2000) 38 Public Administration 555, 555–5.

[23] Ibid.

[24] Oversight Board Charter, art 2.

[25] Ibid.

[26] Ibid, art 5.

[27] Ibid.

[28] See ‘Charting a way forward on online regulation’, Meta (Web Page), <https://about.fb.com/news/2020/02/online-content-regulation/>.

[29] See, eg, Entertainment Software Rating Board (Web Page) <https://www.esrb.org/>.

[30] See, eg, Copyright Act 1968 (Cth), pt V div 2AA, which provides safe harbour for intermediaries against copyright infringement; Max Mason, ‘Google and Facebook excluded from safe harbour copyright reforms’, Australian Financial Review (online, 5 December 2017) <https://www.afr.com/companies/media-and-marketing/google-and-facebook-excluded-from-safe-harbour-copyright-reforms-20171205-gzz3fw>.

[31] Monica Biddington, ‘Regulation of Australian online content: cybersafety and harm’ (Parliamentary Library Briefing Book, July 2019) <https://www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliamentary_Library/pubs/BriefingBook46p/Cybersafety>.

[32] See Online Safety Act 2021 (Cth); eSafety Commissioner (Web Page) <https://www.esafety.gov.au/whats-on/online-safety-act>.

[33] For an interesting discussion on why New Zealand is not likely to follow Australia down this route, see Alex Latu, ‘Why NZ is unlikely to follow Australia’s lead on social media defamation laws’, The Spinoff, (online, 17 September 2021) <https://thespinoff.co.nz/media/17-09-2021/why-nz-is-unlikely-to-follow-australias-lead-on-social-media-defamation-law>.

[34] ‘Facebook defamation ruling by High Court exposes all page owners to lawsuits, not just the media’, ABC News (online, 12 September 2021) <https://www.abc.net.au/news/2021-09-12/facebook-defamation-high-court-ruling-exposes-more-than-media/100451198>.

[35] Fairfax Media Publications v Dylan Voller; Nationwide News Pty Ltd v Dylan Voller; Australian News Channel Pty Ltd v Dylan Voller [2021] HCA 27, [173] per Kiefel CJ, Keane and Gleeson JJ.

[36] John Barilaro v Google LLC [2022] FCA 650, [403].

[37] ‘Netsafe, NZTech and global tech companies act to tackle digital harms’, NZTech (Web Page) <https://nztech.org.nz/2022/07/25/netsafe-nztech-and-global-tech-companies-act-to-tackle-digital-harms/>.

[38] ‘Aotearoa New Zealand Code of Practice for Online Safety and Harms Draft’, (Netsafe, 2 December 2021) <https://netsafe.org.nz/aotearoa-new-zealand-code-of-practice-for-online-safety-and-harms-draft/>.

[39] See Lara Fielden, ‘Regulating the Press: A Comparative Study of International Press Councils’ (Reuters Institute for the Study of Journalism, April 2012) <https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2017-11/Regulating%20the%20Press.pdf>.

[40] Ibid.

[41] Accountable Journalism (Web Page) <https://accountablejournalism.org/press-councils/USA>.

[42] John A Ritter and Matthew Leibowitz, ‘Press Councils: The Answer to Our First Amendment Dilemma’ [1974] (5) Duke Law Journal 845; Dr Ralph Lowenstein, ‘Press Councils: Idea and Reality’ (Freedom of Information Foundation, April 1973) <https://repository.uchastings.edu/cgi/viewcontent.cgi?filename=0&article=1078&context=nnc&type=additional>; Ray Finkelstein and Rodney Tiffen, ‘When Does Press Self-Regulation Work?’ [2015] MelbULawRw 6; (2015) 38 Melbourne University Law Review 944.

[43] ‘Netsafe, NZTech and global tech companies act to tackle digital harms’, NZTech (Web Page) <https://nztech.org.nz/2022/07/25/netsafe-nztech-and-global-tech-companies-act-to-tackle-digital-harms/>.

[44] Australian Press Council (Web Page) <https://www.presscouncil.org.au/>.

[45] Ibid <https://www.presscouncil.org.au/complaints>.

[46] Ibid <https://www.presscouncil.org.au/about-us/who-we-are/>.

[47] Ibid <https://www.presscouncil.org.au/complaints/handling-of-complaints/>.

[48] New Zealand Media Council (Web Page), <https://www.mediacouncil.org.nz/faqs/>.

