With all the controversy occurring on Twitter these days, you’re likely more aware of the platform’s ability to flag, limit or remove users’ posts than ever. Contrary to what you might believe, the site does not actively monitor user-generated content and encourages users to settle disagreements on their own terms wherever possible. This policy is reflective of Twitter’s “two-part commitment to freedom of expression and privacy.” However, the site also reserves the right to edit or take down content in response to reported violations of its User Agreement. Twitter’s Help Center clearly expresses that the platform “will not tolerate behavior that harasses, threatens, or uses fear to silence the voices of others.”
You may be surprised by the multitude of reasons why someone may want to have a third party’s content taken down. A quick glance at Twitter’s rules and policies will reveal rationales covering everything from hateful conduct to platform manipulation. Read ahead to learn about the recourse available to those looking to protect their reputations against accounts committing these 11 infractions.
Note: This article does not and is not intended to constitute legal advice. It also does not speak on behalf of Twitter. You may wish to seek the help of an attorney or refer to Twitter’s rules and policies directly when settling a conflict.
1. Impersonation
The Twitter Rules’ section on authenticity forbids users from engaging in impersonation that “is intended to or does mislead, confuse, or deceive others.” Those who run parody, newsfeed, commentary or fan accounts are directed to use their bios and account names to indicate non-affiliation with the subject of their account so as to avoid being reported for this offense.
Specific guidelines as to what qualifies as impersonation on the platform can be found in the Impersonation Policy. Simply using the same name or profile picture as another person, brand or organization is not enough to put an account in jeopardy, particularly if there are no other commonalities apparent and if the account has identified itself as “not [being] affiliated with or connected to any similarly-named individuals or brands.” The account must be portraying another “in a misleading or deceptive manner” to be found in violation.
The impersonated party (or an authorized representative of it) can report an account for impersonation by filling out this form. Bystanders, on the other hand, can flag the account in question straight from the profile itself.
2. Intellectual Property Infringement
The aforementioned authenticity section of The Twitter Rules also states that users cannot violate others’ intellectual property rights. This provision sets forth separate policies covering trademark infringement and copyright infringement.
Trademark
The Twitter Help Center rules and policies page on trademark policy defines a trademark as “a word, logo, phrase or device that distinguishes a trademark holder’s good or service in the marketplace,” noting that trademark law may prevent others from using one in an “unauthorized or confusing” manner. More specifically, it goes on to describe use that “may mislead or confuse people about your affiliation” as a violation of Twitter policy. There is also a ban on advertising trademark-infringing content (such as Promoted Trend names or embedded media) in a way that creates confusion about an advertiser’s brand affiliation. However, using a trademark in a “nominative or other fair use manner” as outlined in the parody, newsfeed, commentary and fan account policy, or “in a way that is outside the scope of the trademark registration,” is acceptable. Twitter acknowledges that simply referencing another’s trademark is not a violation in and of itself.
The sale or promotion of counterfeit goods falls into the domain of a separate policy that forbids Twitter users from offering, promoting, selling or facilitating unauthorized access to content. Both “non-genuine” products masquerading as genuine products of a trademark or brand owner and products described as “faux, replicas or imitations” are considered counterfeit (as are certain other types of goods). The illegal or certain regulated goods or services policy also covers counterfeit goods and services, as well as drugs and controlled substances, human trafficking, products made from endangered or protected species, sexual services, stolen goods and weapons.
To report a trademark policy violation, you must be the owner of the trademark in question (or an authorized representative of the trademark holder). Reports can be submitted through this form. Note that the perpetrator may receive your name and other information you provide in the report. Those concerned about violations of the counterfeit policy can file a report using an alternate support form.
Twitter may release squatted usernames in the event of trademark infringement.
Copyright
Twitter’s copyright policy is a bit more extensive. It provides a course of action for copyright holders who have observed activity on the site that infringes upon their copyright. Examples of such activity include the unauthorized use of copyrighted images in account headers or profile pictures, Tweets containing links to infringing materials and the unauthorized use of copyrighted images or videos uploaded via Twitter’s media hosting services. Thanks to emerging technology, Twitter and Periscope broadcasts are now subject to automated copyright claims.
As per the Help Center, Twitter responds to complaints submitted under the Digital Millennium Copyright Act. Complainants are advised to review DMCA section 512 for information on formally reporting infringement and for an explanation of the compliant counter-notice process available to the alleged violator. Twitter recommends that complainants first review the fair use policy, then ask alleged violators to remove the content in question by replying to their Tweets or direct messaging them. Should this course of action prove insufficient, copyright holders can go ahead and file a formal report. If there is any uncertainty as to who holds the rights over the copyrighted work or whether the activity concerned is actually infringing, the policy points out that it may be wise to consult an attorney first. Fraudulent and/or bad-faith submissions can result in serious legal and financial consequences (see 17 U.S.C. § 512(f)).
Once they have made the decision to move forward, either the copyright holder or a person authorized to act on their behalf will need to input the following into this form:
- Their signature (physical or electronic)
  - Copyright holders concerned about sharing their contact information with alleged infringers may consider appointing an agent to submit the DMCA notice on their behalf.
- Identification of the copyrighted material claimed to have been infringed upon (such as a link to the original work or a clear description of the content involved)
- Identification of the offending material and information that will allow Twitter to locate it
  - If the material in question is part of a Tweet, submit a direct link to that Tweet. Otherwise, specify where the infringement has occurred (e.g., in a profile page’s header or avatar).
- Their own contact information (address, phone number, and email address)
- A statement confirming that they believe in good faith that the use of the material in the manner asserted has not been authorized by the copyright owner, its agent or the law
- A statement (made under penalty of perjury) verifying that the information in the complaint is accurate and that they are authorized to act on behalf of the copyright holder
Twitter will send complainants a ticket confirmation after the report is submitted. Complainants should only resubmit a complaint if they never receive this ticket, as duplicate submissions can delay processing. Purported violators should receive a full copy of the report as well as instructions on filing a counter-notice in the event that Twitter decides to take action. A redacted copy of the complaint will be forwarded to Lumen, an independent database tracking cease-and-desist letters concerning online content. If the complaint has been determined to be accurate, valid and complete, Twitter may remove or restrict access to the reported material. Withheld Tweets and media will be clearly identified as such.
3. Hateful Conduct, Violent Threats, Abuse and Harassment
The Twitter Rules forbid users from expressing hate towards another person, group or protected category (e.g., race, religion, gender, orientation, disability). Promoting violence against, threatening or harassing others on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability or serious disease is also not allowed. These rules cover profile information (usernames, display names, profile pictures, header images and bios), live video, Tweets and direct messages. That means restrictions are not limited to textual content, as “logos, symbols or images whose purpose is to promote hostility and malice against others” are expressly banned in the hateful conduct policy.
Content that “wishes, hopes, promotes, or expresses a desire for death, serious and lasting bodily harm, or serious disease against an entire protected category and/or individuals who may be members of that category” is specifically prohibited, as are references to “mass murder, violent events or specific means of violence” where the intent is to harass “protected groups [that] have been the primary targets or victims.” Likewise, content that seeks to “incite fear or spread fearful stereotypes about a protected category” (e.g., posts asserting that members of a protected group are more likely to participate in dangerous or illegal activities) is banned. The same goes for “repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category.”
The hateful conduct policy also addresses violent threats, defining them as “declarative statements of intent to inflict injuries that would result in serious and lasting bodily harm, where an individual could die or be significantly injured.” There is also a separate violent threats policy pertaining to “statements of an intent to kill or inflict serious physical harm on a specific person or group of people.” It states that Twitter views statements such as “I will,” “I’m going to,” “I plan to” as well as “If you do X, I will” as being indicative of intent. Pairing such statements with a threat to kill, sexually assault, seriously harm or collect a bounty for inflicting violence on a specific person or group is grounds for an immediate and permanent suspension. “Praising, celebrating, or condoning statements” made in reference to acts committed by civilians that resulted in death or serious physical injury, terrorist attacks or violent events that targeted protected groups are barred in the glorification of violence policy. Examples of such statements include “I’m glad this happened,” “This person is my hero,” “I wish more people did things like this” or “I hope this inspires others to act.” Actions perpetrated by state actors or situations in which violence was not primarily targeting protected groups may not be considered violations under this policy.
Engaging in targeted abuse and harassment, or inciting others to do so, is likewise against Twitter’s rules. The section of The Twitter Rules on safety notes that “wishing or hoping that someone experiences physical harm” also qualifies as targeted harassment. Qualifying posts may be similar to those that might trigger a hateful conduct report, including those that make violent threats, contain abusive slurs, epithets and/or racist or sexist tropes, reduce someone to less than human or incite fear. Mere insults (unless aggressive and intended to harass or intimidate others) do not fit into Twitter’s definition of abuse and harassment, but unwanted sexual advances (sending unsolicited and/or unwanted adult media, participating in unwanted sexual discussion of someone’s body, soliciting sexual acts, etc.) do. To simplify, Twitter considers “attempt[s] to harass, intimidate, or silence someone else’s voice” to constitute abusive behavior. As is the case with hateful conduct, Twitter may excuse such behavior if the affected content’s public-interest value outweighs the risk of harm (i.e., if there was no threat of violence or promotion of suicide/self-harm involved).
All users are encouraged to report any hateful, violent or abusive material they come across on the platform. There are options to do so in-app or via a form on Twitter’s help site. However, Twitter may reach out to victims directly to better understand the context surrounding an incident reported by a third party. Twitter’s team will examine the apparent intentions of the individual who posted the allegedly hateful material and decide whether or not the content was part of a consensual conversation. Because Twitter maintains a zero-tolerance policy against violent threats, those rightfully reported for making them can avoid permanent suspension only if their content is determined to be a form of hyperbolic speech.
4. Terrorism and Violent Extremism
The policy governing violent organizations is separate from those regarding hateful conduct, violent threats and the glorification of violence. It states that “terrorist organizations, violent extremist groups [and] individuals who affiliate with and promote their illicit activities” are not welcome on the site due to how “the violence [they] engage in and/or promote” affects the physical safety and well-being of those they target. Users may be held in violation for acting in service of violent organizations in the following (as well as other) ways:
- Engaging in or promoting acts on their behalf
- Recruiting for them
- Providing or distributing services or information to further their stated goals
- Using their insignias or other symbols to promote them or indicate affiliation or support of them
To identify threats in this category, Twitter supplements its own criteria for violent extremist groups and violent organizations with national and international terrorism designations. Exceptions may be made for groups that have reformed or that are currently in the process of seeking a peaceful resolution. Those with representatives who have been democratically elected to public office may also be exempt from this policy’s ban, as are state and governmental organizations themselves. In addition, the discussion of terrorism or extremism for educational or documentary purposes is permitted.
Both those with and those without Twitter accounts may report violations of this policy through the same complaint process outlined above under “Hateful Conduct, Violent Threats, Abuse and Harassment.”
Terrorism and violent extremism is one of the categories in which reported users are less likely to be granted a public-interest exception.
5. Platform Manipulation and Spam
This is a broad category designed to encompass commercially motivated spam (content that aims to drive traffic or attention from Twitter to accounts, websites, products, services or initiatives), inauthentic engagement (artificially making accounts or content appear more popular or active) and coordinated activity propagated through technical or social coordination. In general, Twitter defines platform manipulation as using the site to engage in “bulk, aggressive, or deceptive activity that misleads others and/or disrupts their experience.”
Specific behaviors categorized as manipulation and spam include:
- Operating fake accounts to mislead others (e.g., using stolen or copied profile information)
- Selling/purchasing followers or engagements
- Using or promoting third-party services or apps claiming to add followers or create engagement for Tweets
- Participating in “follow trains,” “decks,” “Retweet for Retweet” behavior and other forms of reciprocal inflation
- Transferring or selling Twitter accounts and/or usernames or temporary access to them
- Sending unsolicited replies, mentions or direct messages in bulk
- Repeatedly posting and deleting the same or nearly identical content
- Repeatedly posting Tweets or sending direct messages consisting solely of links to the point where such material makes up the bulk of your activity
- Following and then unfollowing accounts in large numbers to inflate your follower count
- Following a large number of accounts indiscriminately in a short time period
- Duplicating another account’s followers (particularly through automation)
- Aggressively adding users to Lists or Moments
- Using a popular hashtag to subvert a conversation or redirect traffic or attention
- Tweeting with excessive, unrelated hashtags
- Publishing or linking to malicious content (e.g., phishing)
- Posting deceptive URLs such as affiliate links
The Help Center is careful to note that using Twitter pseudonymously, running a parody/commentary/fan account, operating multiple accounts with distinct identities and purposes, and coordinating with others to express support or opposition towards a cause are not necessarily violations of this policy.
Anyone can report accounts or Tweets that appear to fall into this category by using the spam reporting form and selecting “I want to report spam on Twitter.” Financial scams (deceiving others into sending money or personal financial information through fraudulent or deceptive methods) can be reported using the same form. Platform manipulation and scams can also be reported directly from a Tweet.
6. Manipulating or Interfering in Civic Processes
Twitter has used its civic integrity policy to express its belief that it is responsible for protecting the integrity of conversations regarding civic processes (e.g., elections, censuses and ballot initiatives) on its site. Willful attempts to directly manipulate or disrupt these processes through the distribution of false or misleading information are forbidden. One can violate this policy by sharing misleading information about how to participate in a civic process (whether in regard to the date/time, eligibility requirements or legal methods of doing so), by engaging in suppression and intimidation tactics, by spreading misleading information about outcomes to undermine public confidence or by creating accounts or content that misrepresent the poster’s affiliation with a candidate, elected official, political party, electoral authority or government entity. There is also a separate synthetic and manipulated media policy, introduced this year ahead of the upcoming U.S. election, to address potential misinformation. According to Twitter, false or untrue political information is not manipulation or interference in and of itself. Content that simply happens to result in confusion around civic processes may instead be fitted with labels providing additional context.
In the lead-up to the first officially sanctioned event in a civic process, users based in the affected locations will be given the opportunity to report violations of the civic integrity policy directly from the Tweets in question.
7. Child Sexual Exploitation
This is another act against which Twitter holds a zero-tolerance policy. Any content featuring or promoting media, text and/or illustrated or computer-generated images of a human child engaging in sexually explicit or sexually suggestive acts is subject to its consequences. Since this is considered one of the most serious potential violations of Twitter’s rules, the platform does not examine intent when ruling on a report. All a user needs to do to be held in violation is view, share or link to the type of material described above. Sexualized commentaries about or directed at a minor (regardless of whether the target’s age is known by the transgressor) also come under this policy.
Furthermore, users cannot:
- Share fantasies about or promote engagement in child sexual exploitation
- Express a desire to obtain materials that feature child sexual exploitation
- Recruit, advertise or express an interest in a commercial sex act involving a child, or in harboring and/or transporting a child for sexual purposes
- Send sexually explicit media to a child
- Engage or try to engage a child in a sexually explicit conversation
- Attempt to obtain sexually explicit media from a child or try to engage a child in sexual activity through blackmail or other incentives
- Identify alleged victims of childhood sexual exploitation by name or image
- Promote or normalize sexual attraction to minors as a form of identity or sexual orientation
However, discussions related to child sexual exploitation are allowed as long as they don’t normalize, promote or glorify it in any way. This exception pertains to advocacy against illegal or harmful activity involving minors as well, provided that no material featuring child sexual exploitation is shared or linked to. This also goes for conversations in which individuals struggling with attraction to minors are seeking help, as well as for depictions of nude minors in a non-sexualized context or setting (e.g., art by internationally renowned artists, news reporting, scientific or educational content, etc.).
Any individual with or without a Twitter account may report an account that’s distributing or promoting child sexual exploitation using this designated form. The offending account’s username and links to its allegedly violating content should be included in the report. Content deemed to be depicting or promoting child sexual exploitation will be removed and reported to the National Center for Missing and Exploited Children. Twitter does not anticipate a child sexual exploitation case that would call for a public-interest exception.
8. Sensitive Media
The Twitter Rules state that any media that is violent, adult or “excessively gory” must not be on display within highly visible areas on Twitter, such as in list banners, profile pictures or header images, or within live video.
According to the sensitive media policy, the platform groups sensitive media into the following categories:
- Graphic Violence: Media that depicts “death, violence, medical procedures or serious physical injury in graphic detail.” Examples include (but are not limited to) depictions of violent crimes, physical fights, and/or physical child abuse.
- Adult Content: Defined as “any consensually produced and distributed media that is pornographic or intended to cause sexual arousal.” Examples include (but are not limited to) depictions of nudity and sexual intercourse. There is also a separate non-consensual nudity policy regarding explicit sexual photos or videos that were produced or distributed without the subject’s consent.
- Violent Sexual Conduct: Any media depicting either real or simulated violence in association with sexual acts qualifies. Examples include (but are not limited to) depictions of rape/non-consensual sexual acts and sexualized violence where it is “not immediately obvious if those involved have consented to take part.”
- Gratuitous Gore: “Any media that depicts excessively graphic or gruesome content related to death, violence or severe physical harm, or violent content that is shared for sadistic purposes.” Examples include (but are not limited to) depictions of dismembered/mutilated humans, charred/burned human remains, exposed internal organs or bones and the torture or killing of animals (except in certain cases pertaining to religious sacrifice, food preparation and hunting).
- Hateful Imagery: “Any logo, symbol, or image that has the intention to promote hostility against people on the basis of race, religious affiliation, disability, sexual orientation, gender/gender identity or ethnicity/national origin.” Examples include (but are not limited to) depictions of hate group-associated symbols and images altered to include hateful symbols or references to a mass murder that targeted a protected category.
While Twitter generally allows sensitive media to be shared by accounts that have marked their media settings accordingly, some types of sensitive media cannot be shared under any settings because of their potential to normalize violence and cause distress to viewers. On the other hand, Twitter may make exceptions to these rules for documentary, educational, artistic, medical or health-related content.
Anyone can report potential violations via Twitter’s dedicated in-app and desktop reporting flows. The sensitive media policy provides instructions for each method.
Violators will be punished according to the type of media they shared and whether or not it is their first offense. Twitter is more likely to make a public-interest exception in regard to its sensitive media policy than to some of its other policies, as long as non-consensual nudity and violent sexual conduct are not involved.
9. Exposure of Private Information (Doxxing)
Twitter users are not permitted to post other people’s private information (such as phone numbers, email addresses, home addresses, government-issued IDs, social security numbers, financial account information, sign-in credentials, medical records, etc.) without their “express authorization and permission.” It’s also against Twitter’s rules to threaten to expose or incentivize others to expose this type of information (such as by using another’s private information for blackmail or by offering a financial reward to someone to publicly expose such information). The private information policy is primarily intended to protect individuals from suffering physical harm as a result of having their information posted. As a result, consequences will vary according to the risk associated with the leaked information and the intentions of the source who shared it. This policy may cover the distribution of hacked materials as well.
Those who post their own private information, as well as those who share information that Twitter does not consider to be private (e.g., name, age/birthday, place of education or employment, physical appearance descriptions, rumors, screenshotted messages that don’t contain private information themselves, information publicly available elsewhere, etc.), will not be considered violators of this policy. However, home addresses that are publicly available outside of Twitter may still be considered private information for their potential to bring about physical harm.
Anyone can report the abusive sharing of private information using the instructions for in-app or desktop reporting found in the policy or via the private information report form. In order for the platform to take action on posts that lack a clearly abusive intent, Twitter’s team will need to hear directly from the alleged victims or their authorized representatives. Twitter is less likely to make a public-interest exception for violations of the private information policy.
10. Deceased Individuals
Under this policy, Twitter may ask users to remove images or videos taken “at the point of, immediately before or after someone’s death.” The deceased individuals policy also prohibits the sharing of “excessively gruesome” images or videos as well as the posting of media depicting a deceased individual for “sadistic purposes.” These regulations apply to images or videos of a “reasonably identifiable person” who is clearly deceased, images or videos depicting the murder of an identifiable individual and content that mocks or takes pleasure in the suffering of the deceased. This list of examples is not comprehensive. Exceptions may be made for police shootings or other newsworthy events.
Anyone can report “media depicting excessively gruesome content” or “media depicting deceased individuals shared for sadistic purposes” by filling out the private information report form and selecting the “unauthorized photo or video” option. Twitter will only request removal of other types of content under the private information policy when asked to do so by a family member or authorized representative of the deceased individual in question. Authorized representatives of an estate and verified immediate family members can also request the removal of a deceased person’s Twitter account.
11. Suicide and Self-Harm
Tweets, images and videos (including live video) are subject to the suicide and self-harm policy (among others). It prohibits users from promoting or encouraging eating disorders, the self-infliction of physical injuries or the taking of one’s own life. Twitter views statements such as “the most effective,” “the easiest,” “the best,” “the most successful,” “you should,” and “why don’t you” as indicative of promotion and encouragement. Users are not permitted to ask others for “encouragement to engage in” self-harm or suicide (e.g., seeking partners for group suicides or suicide games) either. It is also a violation to share information, strategies, methods or instructions that would assist others’ engagement in these acts. In some cases, reported content will also be evaluated under the sensitive media policy. However, it is acceptable to tell personal stories related to self-harm or suicide (without sharing strategies or methods), to share coping mechanisms and resources for addressing self-harm or suicidal thoughts and to discuss self-harm or suicide prevention by focusing on research, advocacy and education.
Anyone can report potential violations of this policy in-app (see the suicide and self-harm policy for details) or through Twitter’s specialized reporting form. A dedicated team will evaluate each individual case. Those who have expressed an intention to engage in harming themselves can be reported through a separate process that will enable Twitter to contact them directly with information about appropriate support resources. The platform has stated that “the public-interest exception is unlikely to override the potential for offline harm” when it comes to self-harming content.
Should you choose to report a violation in one of the categories above, be sure to fill out any required forms completely so as to avoid processing delays. Twitter will often follow up to collect any information you neglect to provide. Users struggling to access these forms are advised to update their current browser or switch to an alternative one.
Punishments can range from requiring the offending user to take down the reported content to permanently suspending the transgressor. The severity of the violation committed and the violator’s record on the site typically influence the platform’s response. Sometimes, the reported user’s apparent intent will be taken into account. In general, first-time offenders will receive milder or more temporary punishments than repeat rule violators. However, this may not be the case in the event of a severe infraction. Users who believe that they were wrongfully suspended can submit an appeal.