Information Technology and Ethics/Social Media Content and Targeting

Legal and Ethical Issues of Social Media Content and Targeting
Social media serves as a double-edged sword, offering unprecedented ways to connect and share information while also posing significant legal and ethical challenges. These platforms not only shape public discourse through the content they display but also raise questions about privacy, manipulation, and fairness due to their content targeting practices. The algorithms that underlie these processes can amplify certain voices or suppress others, impacting everything from individual mental health to democratic processes. As such, the intersection of social media content and targeting encompasses a broad spectrum of legal and ethical issues, including freedom of speech, censorship, and the influence on elections and political beliefs. The ethical implications of social networking are complex and multifaceted. According to Shannon Vallor, they can be categorized into three broad areas:


 1. Direct impacts of social networking activities themselves.
 2. Indirect impacts stemming from the business models that enable these platforms.
 3. Structural implications that reflect the role of social networking as a transformative sociopolitical and cultural force.

Social Media Content
Social media content encompasses a wide array of outputs, from user-generated posts and shared news articles to sponsored content and algorithmically determined feeds. The selection and presentation of this content can significantly influence public opinion and societal norms, making it a critical area of ethical scrutiny.

Social Media Targeting
Social media targeting is the practice of delivering content to users based on an analysis of demographic, behavioral, and psychological data. This practice allows platforms to serve seemingly relevant content to each user but also poses serious ethical questions regarding privacy, autonomy, and the potential reinforcement of societal divisions and biases.

Freedom of Speech and Social Media Content
Freedom of speech is a cornerstone of democratic societies, enshrined in the First Amendment of the U.S. Constitution, which asserts that "Congress shall make no law...abridging the freedom of speech, or of the press..." However, this right is primarily protected from government infringement and does not apply to private entities, including social media companies, which can define and enforce their own rules regarding acceptable language and content.

Social Media Platforms as Arbiters of Free Speech
Social media platforms serve as both a boon for free expression and a potential venue for censorship. These platforms enable individuals to share their views widely and mobilize for various causes. Yet, they also have the power to suppress speech they deem inappropriate, whether for violating community standards or for being legally contentious in certain jurisdictions.

As private entities, social media companies often make intricate decisions about the content they allow. This includes decisions to permit certain types of speech from specific users—like heads of state—while blocking similar expressions from others, potentially flagging them as hate speech or terrorist content. This selective enforcement has raised concerns about the consistency and fairness of social media policies:

"This power that social media companies wield over speech online, and therefore over public discourse more broadly, is being recognized as a new form of governance. It is uniquely powerful because the norms favored by social media companies can be enforced directly through the architecture of social media platforms. There are no consultations, appeals, or avenues for challenge. There is little scope for users to reject a norm enforced in this manner. While a blatantly illegitimate norm may result in uproar, choices made by social media companies to favor existing local norms that violate international human rights norms are common enough."

For more information on the regulation of content by social media companies, see the discussions by Kay Mathiesen, who characterizes censorship as limiting access to content by deterring either the speaker or the receiver from engaging in speech.

Legal and Ethical Considerations
The legal frameworks that govern freedom of expression on social media vary significantly across countries, which can impact how speech is regulated on these platforms. In more restrictive regimes, social media companies might be compelled to comply with local laws that demand the removal of content that would be considered lawful in other contexts. The ethical challenge of balancing protection from harm against the right to free speech creates a complex landscape for content moderation.

Globally, the influence of social media on freedom of expression is profound and multifaceted. Companies must navigate not only diverse legal landscapes but also broad public expectations and international human rights norms. The power wielded by these platforms can sometimes align with local norms that may infringe on universally recognized rights, thus raising questions about the role of social media as a new form of governance without traditional checks and balances.

Critics argue that the architectures of social media platforms enforce norms directly through their design, leaving little room for debate or appeal. This unilateral approach to governance has sparked debates about the legitimacy of such power, especially when it might suppress voices advocating for social or political change.

Restrictions to Speech and Content on Social Media
Social media platforms, as private enterprises, have the authority to set their own rules about what constitutes acceptable content on their networks. This control is essential not only for maintaining the quality of interactions within these platforms but also for complying with legal standards and protecting users from harm.

Several areas of speech are particularly controversial and subject to restriction on social media, including hate speech, disinformation, propaganda, and speech that can cause harm to others.

Hate Speech
Hate speech on social media often targets specific racial, ethnic, or other demographic groups and can incite violence or discrimination against them. For instance, organizations like the Ku Klux Klan have used social media to spread offensive content about various groups, significantly increasing the reach and impact of their hateful messages. The Southern Poverty Law Center reports a high number of active hate and anti-government groups in the U.S., illustrating the scale of this issue.

Disinformation and Propaganda
The spread of false information or disinformation on social media is a major concern, especially given its potential to influence public opinion and election outcomes. Studies have shown that false stories reach more people and spread faster than the truth, often due to sensational or controversial content that captures user interest.

Social media platforms have also been exploited to disseminate propaganda by various actors, including foreign governments. During the 2016 U.S. presidential election, there were documented cases of such activities intended to sway public opinion or create discord.

Calls are often made, particularly by political leaders, for social media platforms to take down so-called "fake news," but in almost all cases, lying is classified as protected speech under the First Amendment of the U.S. Constitution.

Misinformation
Misinformation refers to false or inaccurate information that is spread, regardless of intent to mislead. Unlike disinformation, which is deliberately deceptive, misinformation can be spread by individuals who believe the information to be true or who have not intended to cause harm. Misinformation can cover a wide range of content, from simple factual errors to more complex misunderstandings or misrepresentations of data.

The dissemination of misinformation often occurs through social media, news outlets, or word of mouth and can accelerate quickly due to the viral nature of online sharing. The effects of misinformation can be widespread, influencing public opinion, affecting decisions, and potentially leading to social or political consequences.

The COVID-19 pandemic has been a fertile ground for the spread of misinformation, affecting public understanding and response to health measures, vaccines, and the virus itself. Misinformation surrounding various aspects of the pandemic, such as the efficacy of masks, the safety of vaccines, and the nature of the virus, has led to varied and sometimes contradictory public responses. One particularly damaging rumor was the unfounded claim that COVID-19 vaccines cause infertility in both men and women. This specific piece of misinformation created vaccine hesitancy, significantly impacting public health efforts to combat the virus. Despite being debunked by reputable sources including the American College of Obstetricians and Gynecologists, the American Society for Reproductive Medicine, and the Society for Maternal-Fetal Medicine, the initial rumors had already sown deep seeds of doubt.

Speech That Can Cause Harm to Others
Certain types of content on social media, such as doxing or swatting, can directly lead to physical harm. This category also includes speech that may incite violent acts or provide information on committing harmful activities. The responsibility of social media platforms to mitigate the spread of such harmful content is a significant ethical concern.

Defamation
Defamation on social media can damage individuals' reputations through the spread of false information. Legal measures often require platforms to take action against defamatory content to protect the affected parties. This is a critical area where the freedom of speech intersects with the right to protection from slanderous or libelous statements.

Algorithms and Content Delivery
Social media has become incredibly prevalent in modern society, delivering incalculable volumes of content to users’ phone and computer screens. A common topic of discussion regarding social media platforms is the ominous and vague “Algorithm” that dictates user interaction and determines which content becomes popular. This “Algorithm” has its roots in the idea of a computer algorithm, broadly defined as “a step-by-step procedure for solving a problem or accomplishing some end.” Essentially, an algorithm is a method used to solve a problem.

Social media platforms use algorithms to build “content-based recommendation systems” that decide what content users can and cannot see based on a profile of each user’s interests. This profile is constructed from numerous data points, which are used to gauge the user’s interest in the content displayed to them and then to serve similar content. All of this is in an effort to keep the user engaged with the platform.
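The core idea of a content-based recommender can be illustrated with a minimal sketch. The function names, topic tags, and weighting scheme below are all hypothetical simplifications for illustration; real platform systems use far richer signals and machine-learned models.

```python
from collections import defaultdict

def build_interest_profile(interactions):
    """Aggregate a user's interactions (topic, engagement weight)
    into a topic -> interest-score profile."""
    profile = defaultdict(float)
    for topic, weight in interactions:
        profile[topic] += weight
    return profile

def rank_posts(profile, posts, top_k=3):
    """Score each candidate post by summing the user's interest in its
    topics, then return post IDs from highest to lowest score."""
    scored = [
        (sum(profile.get(t, 0.0) for t in post["topics"]), post["id"])
        for post in posts
    ]
    scored.sort(reverse=True)
    return [post_id for _, post_id in scored[:top_k]]

# Hypothetical interaction history: likes and watch time feed the profile.
interactions = [("sports", 1.0), ("sports", 0.5), ("cooking", 0.2)]
posts = [
    {"id": "p1", "topics": ["sports"]},
    {"id": "p2", "topics": ["cooking", "travel"]},
    {"id": "p3", "topics": ["travel"]},
]
profile = build_interest_profile(interactions)
print(rank_posts(profile, posts))  # the sports post ranks first
```

The feedback loop described above arises because each click on a recommended post becomes a new interaction, further reinforcing the same topics in the profile.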

Incentive for User Engagement
Social media platforms want to keep users engaged with their content so that they can serve them ads, also using algorithms to determine which users should receive which ads and when. Advertisers then pay the social media platforms for displaying their ads to target audiences, granting social media platforms a constant source of revenue.
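The ad-delivery side can be sketched in the same spirit: match each ad's targeting criteria against a user's profile, then show the highest-bidding eligible ad. All field names, bids, and the matching rule here are hypothetical; real ad auctions also weigh predicted engagement, ad quality, and many other factors.

```python
def eligible(ad, user):
    """An ad is eligible if the user matches every one of its targeting criteria."""
    return all(user.get(key) == value for key, value in ad["targeting"].items())

def select_ad(ads, user):
    """Among eligible ads, pick the one whose advertiser bids the most."""
    candidates = [ad for ad in ads if eligible(ad, user)]
    return max(candidates, key=lambda ad: ad["bid"], default=None)

user = {"age_band": "18-24", "interest": "fitness"}
ads = [
    {"id": "a1", "bid": 0.50, "targeting": {"interest": "fitness"}},
    {"id": "a2", "bid": 0.80, "targeting": {"age_band": "18-24", "interest": "fitness"}},
    {"id": "a3", "bid": 1.20, "targeting": {"interest": "cooking"}},
]
print(select_ad(ads, user)["id"])  # highest-bidding ad the user matches
```

Even this toy version shows why engagement is so valuable: every additional data point in the user profile lets more narrowly targeted (and thus higher-priced) ads become eligible.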

Some consider the use of algorithms to target users with ads unethical, arguing that it inevitably singles out those most vulnerable to the ads being served. This is just one controversy, but much of the discourse surrounding social media platforms is entwined with controversies that have arisen from their use of algorithms.

Clickbait and Journalistic Integrity
With the boom of social media, many news organizations found themselves needing to adapt. These news organizations now use social media platforms to distribute their content to audiences. This switch to social media has led to the news organizations relinquishing control over distribution, becoming reliant on algorithms to distribute their content. News organizations and content creators alike know that not receiving enough user interaction will hurt their futures on these social media platforms. This has led to many news organizations and content creators engaging in a practice known as “clickbait”, defined as “something … designed to make readers want to click on a hyperlink especially when the link leads to content of dubious value or interest.” Many news organizations have also traded away “traditional journalistic conceptions of newsworthiness and journalistic autonomy” in favor of content that increases user engagement and algorithmic viability.

Targeted Content
To keep users engaged, social media platforms serve “targeted content”: content selected for a user by an algorithm because it predicts the user will engage with it. Targeting draws on data points such as a user’s career, wealth, and education, among others. Critics of targeted content argue that this method is predatory, allowing advertisers to reach extremely niche groups of people who may be most vulnerable to the ads served to them. Critics also point out that social media companies must harvest and process large swaths of user data to achieve this granular level of targeting, ostensibly on the basis of informed consent. The concern is that users are not well informed that they are signing away some of their privacy expectations by using these platforms, thereby undermining that basis of informed consent.

Critics also point out that content targeting often leads to addiction to these social media platforms, citing six criteria of addiction (salience, mood modification, tolerance, withdrawal symptoms, conflict, and relapse) that excessive social media use has been found to foster in users. See the dedicated section of this book on social media addiction for more.

Content Suppression
There is another side to social media platforms’ algorithmic targeting and promotion of certain content: content suppression. Just as algorithms promote engagement with certain kinds of content, they also suppress other kinds. Critics point out that the creation of algorithms is not a neutral process; the biases of the creators, or of society as a whole, can influence which content is promoted and which is suppressed. Content creators have claimed, for example, that their content was suppressed on platforms like TikTok for posting Black Lives Matter material.

Subversion of Content Targeting: Influence on Elections and Political Processes
Recent key examples highlight a multitude of ethical and legal concerns associated with content targeting on social media. Notably, instances like the involvement of Cambridge Analytica in the 2016 U.S. Presidential Election and the Brexit Referendum demonstrate how social media can be exploited to manipulate public opinion and influence political outcomes. These cases shed light on the powerful effects of targeted content strategies and the profound implications they hold for democracy.

Cambridge Analytica and the 2016 U.S. Presidential Election
Cambridge Analytica, a British consulting firm and a subsidiary of Strategic Communications Laboratories (SCL), gained notoriety for its significant role in political events during the mid-to-late 2010s, ultimately closing in May 2018. The firm's controversial actions stemmed from acquiring the private information of over 50 million Facebook users without authorization. This breach enabled the construction of detailed user profiles, which were then exploited to influence U.S. politics, notably during the 2016 Presidential Election and the United Kingdom’s Brexit Vote.

The operation began in 2014 when Alexander Nix, the chief executive of Cambridge Analytica, proposed using psychological profiling to affect voters' behaviors. These strategies were employed to sway public opinion in favor of conservative candidates in local elections, with funding from figures such as Steve Bannon. Because the firm initially lacked sufficient data, it hired Aleksandr Kogan, a Russian-American academic, to develop an app that harvested data not only from users but also from their Facebook friends. This massive data collection was facilitated by Facebook's permissive data usage policies at the time.

Targeted advertising, fundraising appeals, and strategic planning of campaign activities, such as deciding where Donald Trump should visit to maximize support, were all based on these profiles. Simultaneously, tactics to demobilize Democratic voters and intensify right-wing sentiments were employed, showcasing the dual use of targeted content to both mobilize and suppress voter turnout.

Brexit Referendum
Across the Atlantic, similar profiling techniques were used to influence the Brexit vote. Connections were discovered between the Leave.EU campaign and Cambridge Analytica through a Canadian firm known as AggregateIQ, which was linked to various political campaign groups advocating for the UK to leave the European Union. In the crucial final days of the campaign, voters identified as persuadable were inundated with over a billion targeted advertisements, a strategy pivotal in securing the narrow margin needed to pass the referendum.

These events have prompted significant changes in how social media platforms manage data and have ignited a broader discussion about the need for stringent oversight of content targeting practices to safeguard democratic processes.

Censorship and Content Suppression
Censorship on social media can be nuanced and multifaceted, generally manifesting in two primary forms: censorship by suppression and censorship by deterrence. Each method has its implications and is employed under different contexts, often stirring debate over the balance between free speech and regulatory needs.

Censorship by Suppression
Censorship by suppression involves prohibiting objectionable material from being published, displayed, or circulated. In the United States, this form of censorship is often equated with "prior restraint," a concept generally considered unconstitutional unless it meets a high standard of justification, typically only upheld in cases of national security or public safety.

Social media platforms sometimes engage in practices that could be considered censorship by suppression when they delete or block access to certain types of content. This might include automated algorithmic suppression of content that mentions specific topics deemed sensitive or controversial. While platforms argue that this is necessary to maintain community standards, critics often view it as a form of censorship that restricts free expression.

Copyright Strikes as a Form of Suppression
The issue of intellectual property rights in the context of social media highlights another form of suppression. Copyright strikes are used by platforms to enforce intellectual property laws automatically, often without thorough investigation. This practice can lead to the suppression of content, even if it falls under fair use provisions.

Censorship by Deterrence
Censorship by deterrence does not outright block or forbid the publication of material. Instead, it relies on the threat of legal consequences, such as arrest, prosecution, or heavy fines, to discourage the creation and distribution of objectionable content. This form of censorship can be particularly chilling, as it targets both the publishers of the content and those who might access it, fostering a climate of fear and self-censorship.

One of the critical issues with both forms of censorship is the difficulty in distinguishing between publishers (those who create and post content online) and platforms (those who host content published by others). In theory, platforms are protected from liability for user-generated content by Section 230 of the Communications Decency Act, a key piece of internet legislation that allows online services to host user-generated content without being liable for its content under most circumstances.

Legal Framework Governing Social Media Content
The legal landscape of social media is heavily influenced by Section 230 of the Communications Decency Act (CDA), enacted in 1996. This legislative framework provides platforms with broad immunity, protecting them from lawsuits resulting from user-generated content. Section 230 is pivotal as it allows platforms to moderate material without facing legal repercussions, thereby promoting innovation and free online communication.

Section 230 of the Communications Decency Act: Challenges and Criticisms
Section 230 shields social networking sites from lawsuits related to user-posted information, enabling them to control content without being held responsible for the information they disseminate. However, this provision has faced criticism for its role in facilitating the spread of harmful content while limiting platforms' accountability, despite its intentions to foster free speech and innovation.

Proliferation of Harmful Content
Critics argue that the protection afforded by Section 230 has led social networking companies to prioritize user interaction and growth over stringent content moderation. This has allowed platforms to avoid doing enough to halt the spread of harmful content, such as hate speech, false information, and cyberbullying. The lack of legal penalties for hosting such content enables bad actors to exploit these platforms, spreading dangerous materials that can harm communities and individuals.

Degradation of Responsibility
The legal immunity granted to social networking sites under Section 230 is said to undermine accountability and discourage victims from seeking legal recourse for harassment or defamation experienced online. If platforms face no potential legal repercussions, they may not be motivated to proactively remove harmful content or provide adequate support to those affected.

Evolving Legal Interpretations and Future Directions
The debate over Section 230 continues to evolve as stakeholders from various sectors call for reforms that balance the benefits of online free speech against the need for greater accountability. Legal scholars and policymakers are increasingly examining how laws can adapt to the complexities of content management on social media platforms, suggesting that a more nuanced approach may be necessary. This involves considering the potential for algorithmic regulation and the proportional responsibility of platforms regarding online speech.

Content Moderation
Social media platforms are tasked with the critical responsibility of moderating content to curb the proliferation of harmful information. This duty involves removing posts that propagate hate speech or incite violence and suspending users who breach platform policies. The scope and efficacy of content moderation can be swayed by various factors, including political influences, cultural norms, and economic incentives.

Content moderation refers to the process of screening user-generated content on digital platforms to determine its appropriateness. This encompasses evaluating text, images, and videos to ensure they adhere to the platform's guidelines. Given the immense volume of content uploaded daily, content moderation is indispensable for maintaining a safe online environment. Content moderators, the individuals at the forefront of this operation, often face significant psychological challenges due to the nature of the content they review, including exposure to violent or disturbing images and texts.

Recent legal cases highlight these challenges, with Facebook settling a lawsuit for $52 million with moderators over the trauma incurred from their job duties. Similar legal challenges are faced by other platforms like TikTok, emphasizing the severe impact of this work on mental health.

Ethical Issues with Content Moderation: Workers' Rights
Moderators are tasked with filtering a range of undesirable content, from spam and copyright infringement to severe violations like hate speech and graphic violence. The distress associated with continuous exposure to such content is profound, affecting moderators' mental health long after their roles end. This is true regardless of whether moderators are employed directly by the platforms or through third-party contractors. However, those employed in-house often benefit from better compensation, work conditions, and access to mental health resources compared to their outsourced counterparts.

The Role of Artificial Intelligence in Content Moderation
Large platforms like Facebook employ artificial intelligence (AI) systems to detect a majority of toxic content. Mark Zuckerberg, CEO of Facebook, reported that AI systems were responsible for removing over 95% of hate speech and nearly all content related to terrorism on the platform. Despite these advances, the sheer volume of harmful content that still requires human moderation is overwhelming. AI, while efficient and capable of processing content in multiple languages, often lacks the subtlety needed to understand context or the nuances of human language, particularly in complex cases like memes, where text, image, and implied meaning must all be considered. Ethically, AI used as a moderation tool may prove promising in reducing the mental toll on human content moderators, but it could also exacerbate existing algorithmic and societal biases.
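The hybrid human-AI workflow described above can be sketched as a simple routing rule: a model's confidence score determines whether a post is removed automatically, escalated to a human, or published. The function name and thresholds below are hypothetical; production systems use many models and far more elaborate policies.

```python
def moderate(post_text, toxic_score, block_threshold=0.9, review_threshold=0.5):
    """Route a post based on a classifier's toxicity score (0.0 to 1.0):
    auto-remove high-confidence violations, escalate ambiguous cases to
    human moderators, and publish everything else."""
    if toxic_score >= block_threshold:
        return "removed"        # high-confidence violation, handled by AI alone
    if toxic_score >= review_threshold:
        return "human_review"   # ambiguous: context or nuance the model may miss
    return "published"

# Example routing decisions for three hypothetical posts.
print(moderate("clear policy violation", 0.97))   # removed
print(moderate("sarcastic meme caption", 0.62))   # human_review
print(moderate("vacation photo caption", 0.05))   # published
```

The middle band is where the ethical tension lies: widening it reduces wrongful automated takedowns but increases the volume of distressing material human moderators must view.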

Social Media Targeting and Mental Health
Social media platforms, with their sophisticated design features such as algorithms and infinite scroll, exploit psychological principles to foster habitual use, sometimes leading to addiction. These designs are not benign; they have significant impacts on mental health, influencing user behavior and societal interactions in profound ways.

Social networking sites are crafted to exploit variable reward systems, a concept rooted in the behaviorist psychology of B.F. Skinner. Interactions like likes, comments, and shares provide unpredictable yet frequent rewards, compelling users to engage repeatedly in hopes of social validation. This pattern of interaction can stimulate the brain’s reward centers akin to gambling and substance use, leading to compulsive behaviors where users feel an uncontrollable urge to log onto these sites, often at the expense of other activities and responsibilities. The ramifications of this behavioral addiction are evident in reduced productivity, strained relationships, and decreased physical activity.

Infinite Scroll: Never Ending Targeted Content
The infinite scroll feature on social networking sites exemplifies persuasive design, intended to maximize user engagement by leveraging natural human curiosity and the fear of missing out (FOMO). This design often leads users into a state of 'flow,' a deep level of engagement that makes time feel like it is passing unnoticed. While flow can be beneficial in activities like learning or art, on social media, it often results in significant time mismanagement and distraction from fulfilling tasks, including disruptions to sleep patterns which can have serious cognitive and health consequences.

Dark Pathways: The Mental Health Consequences of Social Networking Content
The term 'dark pathways' describes the detrimental trajectories users might follow due to excessive social media use. Key mental health issues associated with these pathways include anxiety, depression, and social isolation. The drivers for these outcomes are multifaceted:


 * Social Comparison: Users are often presented with curated versions of others' lives, leading to unfavorable comparisons and distorted self-perceptions. This phenomenon is linked to lower self-esteem and body image issues, particularly among adolescents and young adults.
 * Cyberbullying and Online Harassment: The anonymity of social platforms can foster aggression and bullying, with victims reporting higher levels of stress and anxiety, and in severe cases, suicidal ideation.
 * Information Overload: The vast amounts of information processed during prolonged social media use can overwhelm the brain's processing capacity, impairing decision-making and increasing stress levels.