Section 230 of the Communications Decency Act has entered a period of uncertainty. The legislation, which shields social media companies from liability for content published by others on their platforms, was previously targeted by an executive order from the Trump administration that remains in effect today. Now it is the subject of a number of proposed changes from Congress that would chip away at the immunity in piecemeal fashion. Missing from this discussion is a complete understanding of how such changes could affect a major actor in the social media environment: bots. In a recent CNA study of social media bot legislation, we find that new restrictions on Section 230 could provide a major opening for malicious bots on social media.

The debate over Section 230 centers on the impact of these changes on social media platforms, but the effects could be significant for bots, too. Even though bots are automated programs built to interact with humans on social media platforms, certain proposals could require companies to give each individual bot an appeal hearing before kicking it off the platform. This would be the case particularly under some Republican-proposed actions, such as those contained in the Trump administration executive order, which seeks to limit “selective censorship” and could cause social media companies to leave up content they otherwise would have taken down. For example, a bill Republican Sen. Josh Hawley of Missouri introduced in the last Congress would allow citizens to sue platforms that censor political speech, which could lead to companies refusing to remove any content. Legislation of this type could allow malicious bots to continue operating for much longer, providing a boost to those seeking to spread disinformation.

Unintended consequences could also work in the opposite direction. During the presidential campaign, Joe Biden called for the complete repeal of Section 230, though that now seems less likely, as most lawmakers appear uninterested in a full revocation of the law. If Congress were to repeal Section 230 without replacing it, social media companies could be held liable for all content posted on their platforms and might aggressively block all bots to ensure no harmful, bot-generated content slips through. The wholesale blockage of bots could ultimately be a loss for society, however, since many bots are benign or beneficial. For example, the Twitter account @ParityBOT_US sends out a positive tweet each time its algorithm “detects an abusive tweet sent to a woman candidate running in the U.S. election.” The bot @earthquakesSF distributes U.S. Geological Survey data on all earthquakes detected in the San Francisco area in real time.
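To illustrate how simple a benign bot of this kind can be, here is a minimal sketch that polls the U.S. Geological Survey's public GeoJSON earthquake feed and reports new events near San Francisco. The feed URL is USGS's real public endpoint, but the filtering radius, polling interval, and the decision to print rather than post to a platform are assumptions made to keep the example self-contained; this is not the actual code behind @earthquakesSF.

```python
# Minimal sketch of a benign "earthquake bot" (illustrative, not @earthquakesSF's
# actual code). It polls the public USGS GeoJSON feed and prints new events
# near San Francisco instead of posting them to a social media platform.
import json
import math
import time
import urllib.request

FEED_URL = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson"
SF_LAT, SF_LON = 37.7749, -122.4194   # San Francisco
RADIUS_KM = 100                       # assumed cutoff for "San Francisco area"
POLL_SECONDS = 60                     # assumed polling interval

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, via the haversine formula."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def fetch_quakes():
    """Download the hourly USGS feed and return its list of quake features."""
    with urllib.request.urlopen(FEED_URL) as resp:
        return json.load(resp)["features"]

seen = set()
while True:
    for quake in fetch_quakes():
        lon, lat, _depth = quake["geometry"]["coordinates"]
        if quake["id"] in seen or distance_km(SF_LAT, SF_LON, lat, lon) > RADIUS_KM:
            continue
        seen.add(quake["id"])
        props = quake["properties"]
        # A real bot would send this text through a platform's posting API.
        print(f"M{props['mag']} earthquake: {props['place']}")
    time.sleep(POLL_SECONDS)
```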

While few proposals have bipartisan support, both sides of the aisle agree that Section 230 needs reform. Concern about disinformation has only grown since the January 6 Capitol attack, which involved large numbers of QAnon believers and other conspiracy theorists who operate largely on social media. The 2016 election, of course, also spurred huge interest in the topic. We now know that during that campaign, Russia widely spread contentious and often false narratives by using over 50,000 bots on Twitter alone to create profiles appearing to belong to everyday Americans. However, Congress and the Biden administration must think seriously and strategically about any reform or repeal of Section 230, considering potential unintended consequences before determining the best path forward. That undertaking begins with an appreciation of the state of the law regarding bots, as well as private enforcement efforts. It’s difficult to be optimistic when surveying the short history of efforts to regulate social media bots.

The legal landscape of bot regulation

The United States currently has no federal legislation regulating social media bots, and a number of challenges and legal restrictions constrain the ability of Congress to pass meaningful bot legislation. For one, any law inhibiting the use of bots has to meet First Amendment standards, like all government-imposed speech constraints. Bot speech receives constitutional protection because individuals with First Amendment rights communicate through bots, and those on the receiving end of bot messages also have a right to take in that information. While this does not prevent Congress from passing legislation, it does create a minefield for crafting such provisions.

For as long as it survives, Section 230 also restrains Congress from passing legislation that would hold the social media platforms responsible for content, including bot-generated content. Even if Section 230 evolves or goes away, there are still a number of other barriers to crafting appropriately tailored federal bot legislation, including the difficulty of identifying which accounts are run by bots and the fact that some bots are benign or beneficial.

Despite these difficulties, there have been attempts in Congress to pass bills that at least partially regulate the use of bots and botnets. (A botnet is a collection of coordinated social media bots, which is often more effective at amplifying narratives than a single bot. For example, in the wake of the Jamal Khashoggi killing, a botnet was activated to distort the conversation and cast doubt on Saudi Arabia’s involvement.) Notably, Sen. Dianne Feinstein’s Bot Disclosure and Accountability Act, introduced in 2018 and again in 2019, would forbid the use of bots by political candidates, parties, and political action committees. It would also require bots to disclose themselves in posts and tweets. The bill is currently stuck in committee.

The only bot-related law to pass in the U.S. so far has been at the state level. California’s Bolstering Online Transparency, or B.O.T. Act, took effect on July 1, 2019. The law makes it illegal to use a bot to knowingly mislead a person in California about an account’s artificial identity in order to influence their vote or persuade them to participate in a commercial transaction. It remains unclear, however, how the state will enforce the law against users who interact with Californians in a global commons like the internet. Other states, including New Jersey and Washington, have introduced similar bills, but they have yet to pass.

International bot regulation efforts, which currently center on Europe, are voluntary and non-binding. The European Union developed a Code of Practice on Disinformation in 2018, signed by many of the big social media companies, including Twitter and Facebook. The signatories pledged to self-regulate in a number of areas, including closing fake accounts and labeling bot interactions. In its first annual compliance assessment, published in May 2020, the European Commission indicated there is still much room for improvement in these efforts to restrict malicious bots.

Private policing on the platforms

Because legislation and governmental regulation of bots are still nascent, the vast majority of bot control efforts fall to the social media platforms themselves. These companies are hardly the natural locus for such action. Bots pump up user engagement numbers, which are linked to higher share prices, so the platforms are naturally reluctant to kick off bots in large numbers. This has begun to change, however, as blowback in the wake of the 2016 election has created greater financial and reputational incentives for the companies to stop spreaders of disinformation.

Another impediment to private policing is Section 230 itself. Platforms have been reluctant to moderate too heavily out of fear that doing so could amount to admitting a duty to regulate content on their sites. That would run counter to the original rationale for Section 230: allowing sites to provide venues for online interaction without needing to employ hordes of lawyers and moderators to constantly take down offending content. The more these companies monitor content and police bots, the weaker their position to defend Section 230 on those grounds. We have seen this play out recently, after Twitter labeled tweets by President Trump that it deemed inaccurate. That move led the administration to issue its executive order on Section 230 reform, because, according to the order, such moderation means the sites “cease functioning as passive bulletin boards, and ought to be viewed and treated as content creators.”

Despite the reasons deterring social media platforms from controlling bots, they have begun to restrict the use of automation on their sites. Where Russian actors once operated virtually in plain sight, mechanisms now exist to quickly take down botnets and to stop the creation of fake accounts by bots. Twitter has introduced an automated system to help it spot bot activity, and the company has since removed networks operating out of a variety of locations, including Russia, China, Venezuela and Saudi Arabia. Despite great improvements inside the social media companies, difficulties remain, and it can be challenging even for them to distinguish between bot and human activity. As the companies implement new systems, actors are simultaneously evolving their tactics to try to avoid detection, including the use of AI-generated profile photos rather than more easily traced photos of real people.
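To make concrete why the bot-versus-human distinction is hard, the sketch below scores an account on a few coarse behavioral signals of the kind bot researchers commonly cite. It is not Twitter's actual detection system or any platform's real policy; the feature names, weights, and threshold are all assumptions for illustration, and the closing comment notes why such heuristics alone tend to fail.

```python
# Hypothetical bot-likelihood heuristic, NOT any platform's real detection
# system: it scores an account on a few coarse, assumed behavioral signals.
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float        # average posting rate over the account's lifetime
    account_age_days: int        # days since the account was created
    followers: int
    following: int
    has_default_profile_image: bool

def bot_score(acct: Account) -> float:
    """Return a rough 0..1 bot-likelihood score using assumed weights."""
    score = 0.0
    if acct.tweets_per_day > 100:                 # sustained high-volume posting
        score += 0.35
    if acct.account_age_days < 30:                # very new account
        score += 0.25
    if acct.following / max(acct.followers, 1) > 20:
        score += 0.2                              # follows far more than it is followed
    if acct.has_default_profile_image:
        score += 0.2
    return min(score, 1.0)

# A prolific human poster and a crude amplification bot can land on either
# side of any fixed threshold, which is why heuristics alone are not enough
# and why detection keeps evolving alongside the bots themselves.
suspect = Account(tweets_per_day=400, account_age_days=10,
                  followers=12, following=900, has_default_profile_image=True)
print(bot_score(suspect))   # 1.0 with these assumed weights (capped)
```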

Many social media sites do not have policies specifically detailing how they deal with bots, but instead fold the provisions potentially applicable to bots into their general policies on banned behavior. In the CNA bots study, we found that social media companies typically control bot behavior through four broad categories of policies: general automation, spam, artificial amplification, and fake accounts and misrepresentation.

This graphic shows how these policies often overlap in addressing bot behaviors. Some behavior types, like spam and artificial amplification, are completely banned (shown in green) under the policies, while general automation and fake account policies permit some types of bot-related behaviors (shown in blue) and ban others. For instance, fake account and misrepresentation policies sometimes allow for the creation of satire and entertainment accounts, while wholly banning more deceptive fake accounts. Twitter, for example, allows the creation of joke accounts set up to impersonate celebrities, like one tweeting as “Queen Elizabeth” with over 1.5 million followers, as long as the account biography makes clear it is unaffiliated with the account subject. At the same time, Twitter’s Impersonation Policy prohibits any attempt to pass as another individual in “a confusing or deceptive manner.”

The way forward

So far, there has been very little real movement toward meaningful bot regulation, leaving bots largely unregulated, at least from a governmental standpoint. Congress is unlikely to find a legislative solution to the malicious use of bots in the near future, though lawmakers will continue to drag tech CEOs up to Capitol Hill to testify on the measures their companies are implementing to combat these problems. At the same time, proposals to regulate the social media companies themselves have gained traction, with reform or repeal of Section 230 looming in the near future. Those changes could have major implications for the bots that operate on those platforms, though whether it becomes easier or harder for bots to operate will depend on how the conversation shakes out.

Bots should be an important component of the national discussion on social media and disinformation. These automated accounts participate in our social discourse on a wide range of topics. As narratives spread, bots can even affect the conversation outside of the social media realm. According to Nick Monaco, research director at the Institute for the Future’s Digital Intelligence Lab, there is “never something that’s trending that’s not in some way promoted by bots.” However, bots are not inherently malicious, just as they are not inherently effective at spreading narratives. Rather, it is the intent of the programmer and the employment of the bots that determine where they fall on the scale of good to bad. As the conversation on the future of social media continues, decision-makers would do well to consider bots in all of their complexity.


Kasey Stricklin is a research analyst with CNA's Adversary Analytics team, where she is a member of the Russia Studies Program. Her research specialization is the psychological side of information warfare, including disinformation and propaganda.