Twitter as a Social Media Platform
Twitter has long been a major facilitator of public conversation on social media. As a popular mainstream platform, Twitter is used by millions of people around the world and is a go-to source of trending news for many users. Given this reach, it is important for a platform as large as Twitter to properly filter the information that is posted and to regulate or remove content that may prove harmful to other users and the general public. In this article, we will conduct a basic run-through of Twitter’s community guidelines.
As with most social media platforms, third parties may at any time request that content be removed for a variety of reasons. Twitter reserves the right to remove content that violates the User Agreement, which includes copyright or trademark violations or other intellectual property misappropriation, impersonation, unlawful conduct, and harassment. Users are often not fully aware of the policy terms that govern the content they post, and the third party requesting removal may be equally unaware of the provisions in the platform’s terms of service that would back up their claim.
Twitter’s Rules on Harassment
Twitter’s terms of service define harassment as occurring when a user continuously receives unwanted, targeted replies on Twitter, and users can report such behavior to Twitter. Twitter also directs users not to threaten violence against individuals or groups of people.
By and large, speech on Twitter is relatively free as long as it does not entail blatant threats against individuals or groups. Statements that express a wish or hope that someone experiences physical harm, vague or indirect threats, and threats of actions that are unlikely to cause serious or lasting injury are not likely to be removed under the policy, though they may be reviewed. Twitter states that content users disagree with, or unwanted communication they receive, does not necessarily constitute online abuse, and that users should unfollow and end communication with accounts whose content or replies they don’t like.
What to Do if You Are Harassed on Twitter
Twitter recommends using the block feature, which prevents a blocked user from following you and from seeing your profile image on their profile page or in their timeline, and hides their replies and mentions from your Notifications tab, although their tweets may still appear in search. Twitter also writes in its terms of service that by using Twitter, you may be exposed to content that might be offensive, harmful, inaccurate, or otherwise inappropriate, and in some cases, content that has been mislabeled or is deceptive.
How Twitter Handles Threats
Twitter’s policy states that users cannot state an intention to inflict violence on a specific person or group of people. Twitter defines intent to include statements like “I will”, “I’m going to”, or “I plan to”. Prohibited threats include:
- Threatening to kill someone
- Threatening to sexually assault someone
- Threatening to seriously hurt someone and/or commit another violent act that could lead to someone’s death or serious physical injury
- Asking for or offering a financial reward in exchange for inflicting violence on a specific person or group of people
Twitter does allow for context when determining what counts as a threat and what does not. Twitter recognizes that some people use violent language hyperbolically or between friends, and as such allows some forms of violent speech when it is clear that there is no abusive or violent intent, offering the hyperbolic example “I’ll kill you for sending me that plot spoiler!” in its terms of service.
Twitter’s Copyright Policy and Content on Twitter
Users are responsible for any content they post on Twitter; content is the responsibility of the person who originated it, and Twitter does not monitor or control what is posted. Twitter responds to copyright complaints submitted under the Digital Millennium Copyright Act (“DMCA”) and will act on reports of alleged copyright infringement, such as allegations that a copyrighted image is being used as a profile or header photo, or that a copyrighted video or image has been uploaded to the platform without authorization.
How to File a Copyright Complaint
To file a copyright complaint, a third party must provide:
- A physical or electronic signature of the copyright owner
- Identification of the copyrighted work claimed to have been infringed, such as a link to the original work or a clear description of the materials allegedly being infringed upon
- Identification of the infringing material and information that would permit Twitter to locate the material on the website
- Contact information, including address, telephone number, and email address
- A statement that you have a good faith belief that the use of the material in the manner asserted is not authorized by the copyright owner, its agent, or the law
- A statement that the information in the complaint is accurate
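The checklist above can be treated as a set of required fields to verify before submitting a complaint. Below is a minimal sketch of that idea: a hypothetical helper (not part of any Twitter API) that checks a notice for the six elements listed above. The field names and example values are illustrative assumptions, not Twitter's actual form fields.

```python
# Hypothetical pre-submission check for a DMCA notice; the field names
# below are illustrative, mirroring the six elements Twitter requires.
REQUIRED_FIELDS = [
    "signature",             # physical or electronic signature of the copyright owner
    "work_identification",   # link to or description of the original work
    "infringing_material",   # URL or description locating the material on Twitter
    "contact_information",   # address, telephone number, and email address
    "good_faith_statement",  # belief that the use is not authorized
    "accuracy_statement",    # statement that the complaint is accurate
]

def missing_fields(notice: dict) -> list:
    """Return the required notice elements that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not notice.get(f)]

# Example notice with placeholder values.
notice = {
    "signature": "/s/ Jane Doe",
    "work_identification": "https://example.com/original-photo",
    "infringing_material": "https://twitter.com/someuser/status/123",
    "contact_information": "jane@example.com, +1-555-0100, 1 Main St",
    "good_faith_statement": "I have a good faith belief that this use is not authorized.",
    "accuracy_statement": "The information in this complaint is accurate.",
}
print(missing_fields(notice))  # → [] when every element is present
```

A complaint missing any element (for example, the contact information) would show up in the returned list, signaling that the notice is incomplete before it is submitted.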
What Happens When Content is Removed?
Twitter marks Tweets and media to indicate to viewers when content has been removed, and it sends a copy of each copyright complaint and counter-notice to the affected users.
No One Is Safe From Twitter Copyright Claims: Twitter Strikes Trump
Twitter deleted a video from President Donald Trump’s official campaign Twitter account during a Monday Night Football game; the video featured Trump superimposed into an NFL highlight.
The Trump campaign account tweeted out a highlight of San Francisco 49ers wide receiver Brandon Aiyuk scoring a rushing touchdown during a game against the Philadelphia Eagles. In the clip, Aiyuk hurdles over an Eagles defender at the 5-yard line before landing in the end zone. The campaign Twitter account superimposed Trump’s head on Aiyuk’s body and put a picture of the coronavirus on the Eagles’ defender. When played, the video showed Trump hurdling the coronavirus.
Another Twitter copyright incident involved a campaign video that President Trump had retweeted and the band Linkin Park. The Linkin Park song “In the End” was featured in the background of the video, which included images of President Trump and excerpts from his inauguration speech. Twitter confirmed that it removed the video: “Per our copyright policy, we respond to valid copyright complaints sent to us by the copyright owner or their authorized representative,” a Twitter spokesperson said. The band tweeted that it was pursuing a cease and desist and that it had not authorized use of its song in the video, saying “Linkin Park did not and does not endorse Trump.” The incident highlights how no one is above copyright law.
Like many other popular social media platforms such as Facebook and Instagram, Twitter makes it easy for users to post violent, criminal, or even terrorist content. Both other users and Twitter itself may want this type of content removed when it is posted. According to Twitter’s community guidelines, any content that threatens a specific individual or group of individuals will be removed. It is also prohibited to post anything that promotes the “glorification of violence.” Regarding terrorist or violently extremist content, Twitter states that it likewise does not tolerate this type of activity on its platform.
Child sexual exploitation
According to its community guidelines, Twitter also has a “zero tolerance” policy against child sexual exploitation. If users report any content involving child sexual exploitation, it will be removed, as it violates the rules Twitter has established for its users.
Users may also be concerned about cyberbullying content. Twitter says that users may not “engage in the targeted harassment of someone or incite other people to do so,” which it specifies includes wishing or hoping that someone experiences physical harm. Another form of cyberbullying that may concern users is hate-driven posting. Twitter does not allow the promotion of violence or threats against people based on their “race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.”
Self-harm and graphic violence
Third parties may want content that depicts or encourages suicide and self-harm, sensitive or graphic violence, or the promotion of illegal goods to be removed from the platform. Like Facebook and Instagram, Twitter prohibits the promotion of this type of content on its platform.
Privacy is another issue social media users care about, and they may want content that exposes their personal information removed. Twitter does not allow users to post, or threaten to post, the private information of other users without their authorization or consent. It also prohibits posting intimate media of another person that was produced or distributed without that person’s consent.
Regarding the authenticity of content posted on Twitter, Twitter does not tolerate tweets that are considered spam, that interfere with civic processes such as elections, that involve fraudulent activity or fake or manipulated news, or that contain unauthorized use of intellectual property. All of these may be harmful to the general public or to specific individuals, so many users may want this type of content removed.
All of these rules are enforced by Twitter, and users can learn more about how Twitter handles enforcement in its Help Center.