Federal Communications Commission Chairman Ajit Pai prepares to testify before the Senate Financial Services and General Government Subcommittee on Capitol Hill.

When Ajit Pai moved to repeal net neutrality in 2017, he made his agenda clear: he wanted to take a "weed whacker" to the Obama-era open internet rules.
But the rules he wanted to scrap had been crafted in 2015 through a process that drew a record number of public comments, more than 4 million, more than any other rulemaking in US history. The vast majority favored net neutrality rules that would bar ISPs from blocking or slowing access to websites, or charging sites for faster access to users. Yet despite public support for the still-new rules, Pai's push to repeal net neutrality in 2017 ultimately succeeded. Unsurprisingly, the repeal proceeding broke the record for public participation yet again, but this time the process was shrouded in doubt. A new report from BuzzFeed News makes it look even shadier. It underscores how vulnerable the federal government's commenting process is, and what is at risk if it goes unfixed.
When a federal regulatory agency wants to change its rules or craft a new policy, it generally has to go through a "notice and comment" process in which the public is invited to weigh in on the proposed changes. Thousands of rules are published this way every year, typically drawing anywhere from tens to thousands of comments. Only rarely does a notice-and-comment proceeding attract millions of responses, let alone the roughly 22 million comments that the effort to repeal the net neutrality rules did.
As the comments piled up over the course of 2017, it quickly became clear that something was wrong. Just over a week after the comment period opened, John Oliver devoted a 20-minute segment of his HBO show to the issue, urging viewers to make their voices heard to help stop what he called cable company "fuckery." Comments flooded the FCC to such an extent that the agency's electronic filing system buckled, as an investigation by the FCC's inspector general later determined. Yet when the system first went down, Pai's FCC incorrectly told Congress the cause was a mysterious cyberattack. In late May, The Verge found that comments supporting the repeal were being filed under the names of dead people. Further investigations showed that net neutrality comments were also coming from stolen identities, including those of lawmakers such as Oregon Senator Jeff Merkley and Arizona Representative Ruben Gallego, in whose names fake comments were posted. Bots posted comments. Hundreds of thousands of comments came from Russian email addresses. And yet, despite all this fraud, more than 99 percent of the organic comments, meaning those the evidence suggests came from real people rather than pre-written form letters, favored keeping net neutrality.
Now, a new BuzzFeed News investigation reveals that more than a million of the suspicious comments filed with the FCC were the work of a shady outside firm hired to run political advocacy campaigns, using the personal information of people whose data had been exposed in breaches.
Given this mountain of evidence, it is clear that the online comment system at the FCC, and most likely at other government agencies, is easy to manipulate and may be so broken that it does more harm than good. That might seem like an obscure issue, but it is a big problem. When new federal policy is being made, the notice-and-comment process may be the only direct way for a member of the public to have a say in federal decisions. Regulators are legally required to consider Americans' input. And while policymakers cannot read every comment when millions are filed, comments can and do shape revisions to policy proposals. Consider what happened in 2014, when the FCC first proposed new net neutrality rules. Back then, under the Obama-era FCC, the original proposal would have allowed ISPs to charge websites for faster access to users while prohibiting outright blocking. That would have created a two-tiered internet. But the public spoke up in the comment process. After considerable pressure, the FCC rewrote the rules to prohibit any paid prioritization, and that version of the rules was ultimately adopted in early 2015. In 2004, the nonprofit I worked for, the Prometheus Radio Project, even sued the FCC for failing to consider public input from the comment process when it drafted new media ownership rules, and won. The agency was ultimately ordered to go back and hold six public hearings around the country to better understand how its rules affected different communities.
Given all this, it is no surprise that the FCC's comment process became such a mess. There is currently no CAPTCHA asking commenters to prove they are human. It is trivially easy to write a web application that submits comments automatically. Pai even refused to remove fraudulent comments from the net neutrality docket when victims of identity theft asked him to. And despite reports more than a year ago that the agency was planning a major overhaul of its comment system, it is not clear that anything has actually been done. On Thursday, FCC Commissioner Jessica Rosenworcel called the FCC's "continued silence" about its broken comment system "shameful."
Nor is this a problem at the FCC alone. Fraudulent comments have been submitted to the Department of Labor, the Consumer Financial Protection Bureau, the Federal Energy Regulatory Commission, and others. A Wall Street Journal investigation found thousands of fraudulent comments across agency websites. The problem is endemic, and it is not going away on its own.
The answer to this mess is not to end the comment process. We need ways to weigh in on the policies that affect our lives after Election Day, especially when decisions are made by unelected officials at regulatory agencies. The answer is to fix the broken system, and fast. That means understanding how fake comments get submitted and working with technologists, consumer advocates, and other stakeholders to uncover the ways the system can be abused and to build a better one. A new system might require commenters to use two-factor authentication. Or agencies might need to build detection systems that weed out duplicates. Whenever the public is invited to participate online, there will be bad actors who try to subvert it. But democracy is messy. And protecting it takes constant work.
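To make the duplicate-detection idea concrete, here is a minimal sketch of how an agency could flag copies of a form letter in a comment docket by hashing normalized text. This is purely illustrative; the function names and sample docket are invented, and no agency is known to use this exact approach.

```python
# Hypothetical duplicate detection for a comment docket: normalize each
# comment's text, hash it, and flag any submission whose normalized text
# matches an earlier one.
import hashlib
import re

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so trivially
    altered copies of a form letter hash to the same value."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)
    return " ".join(text.split())

def flag_duplicates(comments):
    """Return the indices of comments that duplicate an earlier comment."""
    seen = set()
    dupes = []
    for i, comment in enumerate(comments):
        digest = hashlib.sha256(normalize(comment).encode("utf-8")).hexdigest()
        if digest in seen:
            dupes.append(i)
        else:
            seen.add(digest)
    return dupes

# Invented sample docket for illustration.
docket = [
    "I support net neutrality.",
    "i support NET neutrality!!",   # trivially altered copy of the first
    "Repeal these burdensome rules.",
]
print(flag_duplicates(docket))  # [1]
```

A real system would need far more, such as fuzzy matching for paraphrased form letters and identity verification, but even simple normalization like this would catch the bulk of copy-paste astroturfing.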