Facebook called to court for ‘promoting crime’

A commission created under India’s social media law held Facebook responsible for hundreds of hate speech postings that, a lawyer for the company said, led to violence. Facebook, in a highly unusual move, conceded wrongdoing but fought hard to overturn the law, which has since taken effect.

The dispute has led many observers to question whether India can ensure that online speech will not incite violence. More than 400 million people in India use Facebook, and the Silicon Valley company has also emerged as an important ally in the country’s ongoing fight against Islamist terrorism.

The question that arose was not whether Facebook committed crimes but whether it should be held responsible when its users acted dangerously.

The Indian law, called the Information Technology Act, is almost double the length of the comparable U.S. law. It requires social media companies to remove, where possible, certain messages that the law says could incite crime or violence.

It is designed to help Indian police track speech that promotes or inspires violence. Facebook and other platforms are obliged to delete posts only with the government’s permission.

The provisions cover posts that could incite the commission of a crime “irrespective of the time and place from which it is posted.” The law defines “threatening a person with an act likely to provoke breach of the peace” as “verbal expression, including through use of popular idioms, of an opinion that is likely to increase the likelihood of such an act to be committed.”

The commission recommended a fine of up to $12,670 for Facebook and for the Internet content management company that reports violations. It urged the government to “think again” about enforcing the law.

“This gives us, and it should give them, an open-ended liability for harmful content,” lawyer Gopal Subramanian, who represented Facebook, said in an interview. “When every language has limits and no power has unlimited power, if we require speech to be safe, we have to balance that risk against the other side. But this law appears to imply that the state has unlimited power to control speech.”

Subramanian said the law’s requirement that Facebook remove such posts or face criminal charges when users violate it would be too hard to enforce in India’s rapidly expanding online marketplace, where content posted with a wink may be judged dangerous. And users do not publish a post just so it can be taken down, he said.

Facebook and Google compete to dominate online advertising in India, and India is one of the top markets worldwide for both companies.

“Our position has always been clear on this: We take responsibility for all the content published on Facebook and are the first to work with Indian law enforcement,” said Nathaniel Gleicher, a Facebook spokesman. “We agree that India’s legal framework is complex, and that a thoughtful approach to India’s information technology laws will result in stronger laws that help people and prevent illegal activity while giving all parties certainty that they will be protected under the law.”

The Information Technology Act was originally part of a national cyber security law designed to protect information crucial to India’s national security. The new law was enacted in 2016, well after the rise of Islamic State in India. Critics contended that it was rushed through parliament without sufficient input from a diverse body of political and civil society leaders.

The law’s preventive regulations also have given rise to a flourishing industry of free-speech lawyers. Some lawyers argue that the law’s language is too vague.

Shashank Joshi, a South Asia fellow at the London School of Economics, said the law “really cements Google and Facebook’s liability for content posted on their platforms, whether they know or not that it is not an issue of actual responsibility on their part.”

Several years ago, Facebook said it would hand over information on users who uploaded hate speech content to Indian police and federal law enforcement agencies in order to settle legal cases. It has also provided the same information to a number of other nations, including the United States, which has complained that the company has been too slow to cooperate.

The commission also said Facebook was cooperating. In its September report on Facebook’s compliance with the law, the commission said that of the 817 messages it had received from Facebook employees, almost 800 were uploaded in an attempt to recoup “user generated content” and the remainder were dropped “pending further investigation.”

The commission recommended “more responsible corporate citizenship” in India, citing conditions proposed by the Center for Digital Democracy and two other groups to facilitate dialogue with Facebook.
