Regulating Digital Media Platforms:

Challenges in Content and Data Governance in the Networked Information Economy


Contemporary digital media platforms have become key actors in mediating the digital public sphere, essentially gatekeepers and power brokers of public discourse and public opinion (Rahman, 2018). Benkler (2006, p. 3) uses the term ‘networked information economy’ to refer to this phenomenon of a new economic structure centred around information and cultural production, where the decentralised means of production have allowed individuals throughout the world to create, consume and share information simultaneously. As of July 2022, Kemp (2022) reports that there are 4.7 billion active social media users and 5.03 billion internet users, accounting for 59% and 63.1% of the global population respectively, with the latter having averaged double-digit growth each year for the past decade. Only a few digital media platforms, however, facilitate the majority of online social activity. For example, in the same report by Kemp (2022), Google as a search engine directed 91.86% of global web traffic. At the same time, 60.1% of Facebook’s nearly 3 billion monthly active users – which makes it the most-used social media platform – use the site as their primary means of accessing news (Kemp, 2022). The oligopoly of contemporary digital media platforms has thus raised questions about how national governments and supranational institutions across the world should handle the increasing power enjoyed by these global corporations. Drawing parallels with other industries that operate as public infrastructure, regulation emerges as a key theme in dealing with these issues; it comes, however, with many challenges. This essay will discuss the challenges in regulating digital media platforms with reference to content regulation and the digital public sphere, and regulating data collection with regard to user privacy.

Theoretical Framework

Dwyer (2010, p. 2) defines media convergence as the process in which existing media and communication practices accommodate new technologies by adapting and merging with one another, such that media and technology systems once thought of as separate and self-contained now overlap and are increasingly integrated. Media convergence brings with it Habermas’s (1964, p. 49) concept of the public sphere, in which ordinary citizens, as representatives of civil society, come together to form a public body and participate in shaping public opinion. They do so “with the guarantee of freedom of assembly and association and the freedom to express and publish their opinions”, conferring in an unrestricted manner about matters of general interest (Habermas, 1964, p. 49). Mediating between society and the state, the media of the public sphere, such as television, radio and newspapers, are necessary to reach the masses (Habermas, 1964). In the contemporary world, globalisation and technological innovation have extended the public sphere into online communications, which Schäfer (2015, pp. 327-328) defines as the digital public sphere. There is still much debate on what actually constitutes the digital public sphere: some differentiate between online channels such as discussion boards and micro-blogs, some argue that there are different types of ‘publics’ under the digital public sphere umbrella, such as ‘issue publics’ and ‘counter-publics’, and some emphasise the prerequisite of dialogue and mutual appreciation between interlocutors (Schäfer, 2015). This essay focuses on digital media platforms in particular, which ties in with the phenomenon of platformisation. As large, multinational tech corporations become more central to public and private life, Poell, Nieborg and van Dijck (2019) define platforms as “(re-)programmable digital infrastructures that facilitate and shape personalised interactions among end-users and complementors, organised through the systematic collection, algorithmic processing, monetisation, and circulation of data” (p. 3), and platformisation as “the penetration of the infrastructures, economic processes, and governmental frameworks of platforms in different economic sectors and spheres of life” (pp. 5-6). On this understanding, platforms and platformisation are by-products of media convergence.

As the digital public sphere ‘convenes’ in these spaces mediated by digital media platforms, questions of accountability and responsibility for regulation arise on both the ‘user’ end – represented by the digital public sphere – and the platform end, considering that ordinary people now have the power not only to consume content but also to produce and share it, which Bruns (2006, p. 2) defines as ‘produsage’, situated in Banks and Deuze’s (2009) broader context of co-creative labour. On regulation, much scholarship and many governments treat platforms as public utilities and characterise them as common carriers, a concept initially grounded in breaking up the monopolies of, for example, railroad and telecommunication companies to ensure fair and equitable access for citizens, encourage a competitive market and regulate in the best interests of the public (Rahman, 2018, p. 236). Platforms and infrastructures share many similarities, such as, but not limited to, how they enjoy gatekeeping power and are able to structure and/or manipulate flows of information and activity within the network (Rahman, 2018, p. 237). In the digital space, Humphreys and Simpson (2018, pp. 75-76) offer the term ‘net neutrality’ as a parallel to the common-carrier concept: a governance construct and set of practices that emphasise neutral, non-differential access to Internet services. The affordance of power over public debate leans into Castells’ (2009, pp. 42-47) differentiation of power in social networks: ‘networking power’, referring to how actors in global networks have power over those excluded from the network; ‘network power’, where exercising power depends on imposing the inclusion rules of the network; ‘networked power’, how certain actors have more power than other actors in the same network; and ‘network-making power’, the ability to construct networks based on the interests of the ‘programmers’. As platforms become a necessary conduit for the public to gain access to information, this essay approaches the issue of regulating contemporary digital media platforms through this framework.

Content Regulation and the Digital Public Sphere

In regulating contemporary digital media platforms, algorithms play a major role in how content spreads and how the digital public sphere interacts with it, prioritising and favouring content that attracts the most interactions and engagement from users regardless of the repercussions; regulating algorithmic codes and their usage could lead towards an ecosystem of net neutrality, although that too has potential consequences. The algorithms that platforms use are, to an extent, governance mechanisms. Just and Latzer (2017) argue that even though they are general-purpose technologies – enabling technologies contingent on social-use decisions – they can be “used to exert power and as increasingly autonomous actors with power to further political and economic interests on the individual but also on the public/collective level” (p. 245). They are used to automate the assignment of relevance to selected pieces of information to maximise user engagement with content, whereby the commodity in this attention economy is data (Just & Latzer, 2017). Algorithmic selection, however, shapes the construction of individuals’ realities: it influences not only what a user thinks, based on what they are shown, ‘recommended’ or ‘suggested’, but also how they think (Just & Latzer, 2017). On the visibility of content, Bucher (2012) argues, using Facebook as an example, that visibility is a reward for user interaction, where a lack thereof presents the “threat of invisibility” (p. 1175) and of becoming ‘obsolete’ in the social network, which consequently modifies user behaviour in both producing and consuming content. By greatly affecting the individual, algorithmic selection also indirectly shapes the collective consciousness, where a constructed reality could threaten a democratic digital public sphere.
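To make the mechanism concrete, the following is a minimal sketch in Python of how engagement-driven algorithmic selection assigns ‘relevance’ from interaction data alone; all field names, weights and thresholds are invented for illustration and depict no actual platform’s ranking system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    """A candidate piece of content; every field here is illustrative."""
    post_id: str
    likes: int
    comments: int
    shares: int
    interest_match: float  # 0.0-1.0, inferred from the user's past behaviour

def engagement_score(post: Post) -> float:
    """Assign 'relevance' purely from predicted engagement signals.

    The weights are arbitrary placeholders; the point is that relevance
    derives from interaction data and preference fit, never from the
    accuracy or quality of the content itself (cf. Just & Latzer, 2017).
    """
    interactions = post.likes + 2 * post.comments + 3 * post.shares
    return interactions * (0.5 + post.interest_match)

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order the feed so the most engagement-likely content sits on top."""
    return sorted(candidates, key=engagement_score, reverse=True)
```

Under such a rule, a false but provocative post that draws comments and shares outranks an accurate but unengaging one, which is precisely the dynamic the regulation debate targets.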

Polarisation of the digital public sphere is an underlying issue, especially as algorithmic selection amplifies political tensions in the contemporary era; this requires a closer look at current trends in citizen engagement and at how filter bubbles and echo chambers play a role in the polarisation process. Pariser (2011, p. 84) describes filter bubbles as the result of personalised search technology – algorithms in particular – showing only content that reflects the user’s preferences, which consequently isolates the individual from alternative perspectives and can skew the individual’s perception of reality. These create echo chambers, in which personal beliefs persist unchallenged and untested and are reinforced by like-minded people, which Sunstein (2017, p. 115) argues promotes confirmation bias and may encourage the adoption of more extreme beliefs. Scholars such as Spohr (2017) and O’Callaghan et al. (2013) have shown how the proliferation of fake news and misinformation, fuelled by algorithmically curated filter bubbles, has contributed to the rise of extremism. Using the 2016 U.S. presidential election and the 2016 Brexit referendum as case studies, Spohr (2017) argued that producers and curators of fake news are able to monetise their content through Facebook’s and Google’s advertising ecosystems, in which user segregation and ideological polarisation in the two contexts are driven by the match between the user’s own preferences and the content shown. O’Callaghan et al. (2013) strengthen this claim with the case of YouTube, where users who consume far-right videos on the platform are highly likely to be recommended more far-right content and exposure to alternative perspectives is unlikely. Some scholars have suggested solutions; Berman and Katona (2020), for instance, propose three different types of algorithms that platforms can use (sketched after this paragraph): the perfect algorithm, which shows users content when it exceeds a user-specific utility threshold; the quality algorithm, which displays content that passes a quality standard but cannot measure the user’s preference for it; and the distance algorithm, which selects content based on the closest social distance to the user’s contacts but cannot filter for quality. Amrollahi (2021) proposes a similar solution, suggesting the implementation of new architecture as an integrated tool in social networks, designed to alert users to a potential filter bubble and ‘burst’ that bubble when necessary. However, this works under the assumption that such architecture would be adopted by platforms in the first place and that regulators have the capacity to enforce these regulations.
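The three curation rules can be caricatured in code. The sketch below is a loose paraphrase of Berman and Katona’s (2020) typology, not their formal model; the fields and thresholds are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Item:
    quality: float           # objective quality in [0, 1]
    preference_match: float  # fit with this user's tastes in [0, 1]
    social_distance: int     # hops from the user in the social graph

def perfect_algorithm(item: Item, threshold: float = 0.5) -> bool:
    # Observes both quality and preferences: show the item only when its
    # estimated utility to this specific user clears a per-user threshold.
    return item.quality * item.preference_match > threshold

def quality_algorithm(item: Item, standard: float = 0.6) -> bool:
    # Observes quality but not preferences: filter on quality alone.
    return item.quality > standard

def distance_algorithm(item: Item, max_hops: int = 2) -> bool:
    # Observes the social graph but not quality: favour content closest
    # to the user's contacts, however good or bad it is.
    return item.social_distance <= max_hops
```

The trade-off is visible immediately: the quality rule blocks misinformation but ignores preference, limiting access, while the distance rule preserves social choice but lets low-quality content through.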

Although Pariser (2011, p. 237) argues that government oversight may be needed to “ensure that we control our online tools and not the other way around”, requiring platforms to self-regulate and function for the public good, as well as regulating platforms’ editorial ethics, limits the defining characteristic of the digital public sphere: the freedom to share information and express opinions. As Spohr (2017) also found, selective exposure by the individual user is a major contributing factor to polarisation and echo chambers, meaning algorithms are only one side of the problem. In a study of how liberal and conservative civil society organisations [CSOs] use social media in the U.S. state of North Carolina, Schradie (2019) found that conservative CSOs utilised social media much more than their liberal counterparts, in most cases to mobilise support but, most importantly, to share and discuss information that they saw as the ‘truth’, as opposed to what they regarded as the saturation of leftist propaganda in American mainstream media. In this case, fake news and misinformation spread not necessarily because of algorithmic affordances, but through user agency. Schradie (2019) points out that certain larger conservative CSOs operate as central nodes in a social network, working collaboratively with others and sharing information with like-minded people. This evidences how networked power can matter more than network-making power, shifting the challenge of regulating platforms away from the algorithms themselves and towards the question of whether to regulate the extent of democracy within the digital public sphere. To achieve a vision of net neutrality, Humphreys and Simpson (2018) also argue the need for more oversight and control by regulatory bodies; although a regulator may use its powers to deal with anti-competitive practices that infringe on net neutrality, the same principles could also be applied to consolidate network-making power. Consequently, directing platforms to act with net neutrality and practise non-differential access to information guarantees the freedom of the digital public sphere while simultaneously risking the proliferation of fake news and misinformation. Conversely, following the solutions presented by the likes of Berman and Katona (2020) and Amrollahi (2021) and imposing a regulatory set of practices that constructs algorithms around the quality of information, although it prevents misinformation, limits the freedom of access to information. Different people consider different information as different sets of truths, and accessing such information is often their own choice, as evidenced by Schradie (2019) and Spohr (2017); limiting such choices would defeat the essence of the digital public sphere. This section has shown that although there is a need to regulate the content on platforms, the issue of which aspect to actually regulate, and how, is not straightforward and presents an ongoing challenge.

Regulating Data Collection and User Privacy

Platformisation has brought with it datafication, where the data platforms obtain from individual users is used for commercial interests and often sold to third parties; although regulation is necessary, the challenge is how data should be treated in a media ecology of net neutrality. Datafication is the process by which networked platforms render activity into data through automated means, specifically harvesting metadata from a user’s online behaviour and even the device that they are using (van Dijck & Poell, 2013, p. 9). This process has led to a paradigm shift in classifying the role of platforms: scholars such as Bucher (2012) and van Dijck and Poell (2013) suggest that platforms are no longer only intermediaries and facilitators of the digital public sphere that directly monetise user activity; instead, they now operate as ‘big data’ firms, building a business model on their ability to harvest and repurpose data. Van Dijck and Poell (2013, p. 10) succinctly summarise:

“… Platforms claim they can track instantaneous movements of individual user behavior, aggregate these data, analyze them, and subsequently translate the results into valuable information about individuals, groups, or society at large.”
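As a hedged illustration of the tracking the quote describes, the sketch below enumerates the kind of behavioural and device metadata a single interaction might yield under datafication; every field name is hypothetical and no specific platform’s schema is implied.

```python
import time

def capture_interaction_event(user_id: str, post_id: str,
                              action: str, device: dict) -> dict:
    """Assemble the metadata record a platform might log for one interaction.

    All fields are hypothetical examples of behavioural and device
    metadata; the user typically never sees this record, which is the
    transparency gap regulators would need to close.
    """
    return {
        "user_id": user_id,
        "post_id": post_id,
        "action": action,                      # e.g. "view", "like", "share"
        "timestamp": time.time(),
        "device_model": device.get("model"),
        "os_version": device.get("os"),
        "approx_location": device.get("geo"),  # coarse geolocation
        "referrer": device.get("referrer"),    # how the user arrived here
    }
```

A single scrolling session can emit dozens of such records, each individually mundane but collectively the “valuable information about individuals, groups, or society at large” that van Dijck and Poell describe.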

The primary issue with datafication is the lack of transparency about what data in particular is being harvested and how that data is used. Pariser (2011, p. 213) argues that the personalisation of information afforded by algorithms is a transaction in which the user cedes privacy and control to the algorithm. For regulators to enforce transparency, the necessary precondition for users to control their information is not only knowing how their data is used, but also intuitively understanding the datafication process, which is often buried under the platform’s complicated technological jargon and can be changed at a moment’s notice to serve its interests (Pariser, 2011, pp. 231, 239). This section approaches the question of regulation through this understanding of datafication and user privacy.

There is an ongoing trend of making data ‘platform ready’, especially for use by a platform’s application programming interfaces [APIs], which facilitate data exchange between applications and allow programmers to build applications within a platform’s ecosystem – a flow that also works in the opposite direction (Helmond, 2015). In Helmond’s (2015) research on Facebook’s API, there is evidence to suggest that Facebook uses its platform as an infrastructural model “to extend itself into external online spaces and how it employs these extensions to format data for its platform to fit their economic interest through the commodification of user activities and web and app content” (p. 8). Through this ‘dual logic of platformisation’, Helmond (2015) argues that although the API serves as a technological framework upon which programmers can build their applications, Facebook leverages the process to absorb external data and strengthen its own database.
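The dual logic can be sketched schematically; the endpoint, domain and headers below are entirely invented and depict no real API, least of all Facebook’s actual Graph API.

```python
# A hypothetical illustration of Helmond's (2015) 'dual logic': a
# third-party app pulls platform data out through an API, while the same
# call pushes identifying usage data back into the platform's database.
import json
import urllib.request

PLATFORM_API = "https://platform.example/api"  # placeholder domain

def fetch_user_profile(app_id: str, user_token: str) -> dict:
    # Outward flow: the app retrieves platform data to build its features.
    request = urllib.request.Request(
        f"{PLATFORM_API}/me?fields=name,friends",
        headers={
            "Authorization": f"Bearer {user_token}",
            # Inward flow: identifying the calling app lets the platform
            # log which external service touched this user, and when.
            "X-App-Id": app_id,
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```

Every outward request thus doubles as an inward data point, which is why the API is an infrastructural extension of the platform rather than a neutral exchange.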

There are, however, existing regulations that address such data usage by platforms in general, such as the European Union’s Digital Markets Act (2022): Article 5 s. 2 and Article 6 s. 2 of the Act prohibit the cross-usage of user data, in that platforms should neither sell a user’s personal data to third-party services nor combine that personal data with data obtained from sources other than the platform itself (Digital Markets Act 2022). Although this is a step in the right direction, some have been critical of how the generalised obligations presented by the Act may undermine enforcement (Caffarra & Morton, 2021). One issue that Caffarra and Morton (2021) raise with the 2020 draft of the Act is that regulation should be specific to each platform’s business model, as the usage of data depends on how a platform uses that data in the first place. Through an economic lens, this vision of net neutrality in data collection and user privacy might not be possible: in a neoliberal market it would both weaken the financial leverage platforms have with clients on the other side of the multi-sided market, such as advertisers, and deteriorate user protection, as the barrier to entry to the data market is lowered (Schawe, 2020; Tormo, 2020). Mandating freedom of data access may allow entrants to the data market to compete with platforms that have built a business model on monetising data, which may lead to platforms losing advertisers and the revenue they bring (Tormo, 2020). At the same time, data access by and for platforms is often crucial for innovation and competitiveness in the digital economy (Schawe, 2020). Schawe (2020), Tormo (2020) and Caffarra and Morton (2021) agree that existing competition law, such as that of the European Union, is not well suited to regulating data collection, arguing that public utility and common-carrier laws are not directly applicable; a key weakness is their inability to be generalised, requiring adjustment on a case-by-case basis.

Financial considerations aside, governments also have a vested interest in accessing the data harvested by platforms, as the vast troves of data that platforms store are valuable for government surveillance. Research by Rider (2018) shows that current market privacy protections, such as information encryption, in fact fuel government surveillance and law enforcement, since the high barrier to entry of such data silos protects government use of personal data. Ying (2021) and Wang et al. (2022) argue that governments still underutilise the data they retrieve from platforms, which in their cases could assist city planning and promote social reform. As platforms enjoy the protection that governments grant on the condition that they share unrestricted access to data, platforms are incentivised to sustain the current paradigm of data collection. Pariser (2011, pp. 240-241) anticipates the position of Schawe (2020), Tormo (2020) and Caffarra and Morton (2021), suggesting that in regulating data collection we should move away from thinking about data and platforms through a common-carrier framework and instead regard data as an individual’s personal property. A user’s data has long-lasting implications for the individual, directly affecting how a user interacts with content and what is shown, tying back to how algorithms, which feed on the user’s data, construct filter bubbles and echo chambers (Pariser, 2011).

As media convergence also collapses the individual’s identity into one singular online profile, it opens avenues for users to increasingly share information about their personal lives for the public to see, of which some users are aware and others are not (Albrechtslund, 2008; Tsay-Vogel et al., 2018). Such surveillance now comes not only from the government, but also from the user’s peers. As the privacy of the self erodes on digital platforms and the risks to privacy increase, should users choose to communicate on these platforms, the platforms should have an obligation to, at the very least, make explicit to users how their metadata is used (Tsay-Vogel et al., 2018). Although there is precedent for laws protecting personal information (Pariser, 2011, p. 241), such regulation cannot be independent of the business models and revenue streams of the platforms, and proponents of regulation and deregulation struggle to compromise and legislate effective laws that control data collection. Achieving a media ecology of net neutrality is more feasible in this context than in that of content regulation and the digital public sphere, yet challenges persist as different actors pursue different priorities.

Conclusion

There are many challenges in regulating digital media platforms. Regulators have approached the issue through a public utility and common-carrier framework, on the grounds that as platforms become global, much of the public sphere inevitably gravitates towards these spaces. Regulating the relationship between platforms and the digital public sphere to achieve a vision of net neutrality raises further issues, however, such as whether platforms should be held accountable for the algorithms responsible for filter bubbles and echo chambers, or whether the blame shifts to users’ selective exposure. The question of unrestricted, fair access to information also tests the same principles of net neutrality when applied to datafication and data collection, where user privacy, although central to the issue, is ineffectively enforced by both platforms and regulators, both of which seem more concerned with maximising economic activity and achieving neoliberal goals. Regulating digital media platforms is an area of ongoing debate of which this essay has only been able to touch the surface; as different regulators create different laws, more research should be done on platform regulation policies in different contexts.


Bibliography

Albrechtslund, A. (2008). Online Social Networking as Participatory Surveillance. First Monday, 13(3). https://journals.uic.edu/ojs/index.php/fm/article/download/2142/1949

Amrollahi, A. (2021). A Conceptual Tool to Eliminate Filter Bubbles in Social Networks. Australasian Journal of Information Systems, 25. DOI: 10.3127/ajis.v25i0.2867

Banks, J. & Deuze, M. (2009). Co-creative labour. International Journal of Cultural Studies, 12(5), 419-431. DOI: 10.1177/1367877909337862

Benkler, Y. (2006). Introduction: A Moment of Opportunity and Challenge. In The Wealth of Networks: How Social Production Transforms Markets and Freedom (pp. 1-8). Yale University Press.

Berman, R. & Katona, Z. (2020). Curation Algorithms and Filter Bubbles in Social Networks. Marketing Science, 39(2), 296-316. DOI: 10.1287/mksc.2019.1208

Bruns, A. (2006). Towards Produsage: Futures for User-Led Content Production. In F. Sudweeks, H. Hrachovec, & C. Ess (Eds.), Proceedings: Cultural Attitudes towards Communication and Technology 2006 (pp. 275-284). Murdoch University, Perth.

Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164-1180. DOI: 10.1177/1461444812440159

Caffarra, C. & Morton, F. S. (2021). The European Commission Digital Markets Act: A translation. https://cepr.org/voxeu/columns/european-commission-digital-markets-act-translation

Castells, M. (2009). Power in the Network Society. In Communication Power (pp. 10-53). Oxford University Press.

Digital Markets Act 2022 (EU). https://eur-lex.europa.eu/legal-content/EN/TXT/?toc=OJ%3AL%3A2022%3A265%3ATOC&uri=uriserv%3AOJ.L_.2022.265.01.0001.01.ENG

Dwyer, T. (2010). Introduction. In Media Convergence (pp. 1-23). McGraw-Hill Education.

Helmond, A. (2015). The Platformization of the Web: Making Web Data Platform Ready. Social Media + Society, 1(2), 1-11. DOI: 10.1177/2056305115603080

Humphreys, P. & Simpson, S. (2018). Access and opportunity online: the debate on Internet neutrality and converging media. In Regulation, Governance and Convergence in the Media (pp. 75-98). Edward Elgar.

Just, N. & Latzer, M. (2017). Governance by algorithms: reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238-258. DOI: 10.1177/0163443716643157

Kemp, S. (2022). Digital 2022: July Global Statshot Report. https://datareportal.com/reports/digital-2022-july-global-statshot 

O’Callaghan, D., Greene, D., Conway, M., Carthy, J., & Cunningham, P. (2013). The extreme right filter bubble. arXiv preprint arXiv:1308.6149. DOI: 10.48550/arXiv.1308.6149

Pariser, E. (2011). The filter bubble: what the Internet is hiding from you. Penguin Press.

Rahman, K. S. (2018). Regulating Informational Infrastructure: Internet Platforms As The New Public Utilities. Georgetown Law Technology Review, 2(2), 234. https://georgetownlawtechreview.org/regulating-informational-infrastructure-internet-platforms-as-the-new-public-utilities/GLTR-07-2018

Rider, K. (2018). The privacy paradox: how market privacy facilitates government surveillance. Information, Communication & Society, 21(10), 1369-1385. DOI: 10.1080/1369118X.2017.1314531

Schäfer, M. S. (2015). Digital Public Sphere. In G. Mazzoleni (Ed.), The international encyclopedia of political communication (pp. 322–328). Wiley Blackwell.

Schawe, N. (2020). It’s all about data: Time for a data access regime for the sharing economy? Information Polity, 25, 177-195. DOI: 10.3233/IP-190206

Schradie, J. (2019). The Revolution That Wasn’t: How Digital Activism Favours Conservatives. Harvard University Press.

Spohr, D. (2017). Fake news and ideological polarization: Filter bubbles and selective exposure on social media. Business Information Review, 34(3), 150-160. DOI: 10.1177/0266382117722446

Sunstein, C. R. (2017). #Republic: Divided Democracy in the Age of Social Media. Princeton University Press.

Tormo, J. (2020). Estimating reasonable prices for access to digital platforms’ data: what are the challenges? European Competition and Regulatory Law Review, 4(3), 172-184. DOI: 10.21552/core/2020/3/4

Tsay-Vogel, M., Shanahan, J., & Signorielli, N. (2018). Social media cultivating perceptions of privacy: A 5-year analysis of privacy attitudes and self-disclosure behaviors among Facebook users. New Media & Society, 20(1), 141-161. DOI: 10.1177/1461444816660731

van Dijck, J., & Poell, T. (2013). Understanding Social Media Logic. Media and Communication, 1(1), 2-14. DOI: 10.12924/mac2013.01010002

Wang, N., Hu, B., Wang, D., Wang, L., & Li, W. (2022). Design of a big data platform system for power grid enterprise decision-making. 2nd International Conference on Consumer Electronics and Computer Engineering (ICCECE). Guangzhou, China.

Ying, S. (2021). Research on government affairs publicity of provincial government websites in big data environment. 2021 International Conference on Public Management and Intelligent Society (PMIS). Shanghai, China.


Originally submitted as coursework for the Master of Global Media Communication, University of Melbourne