
Monday, February 08, 2021

Regulating online speech with due process, transparency

Restrictions on online speech should be reasonable in every sense. Else, cyberspace will lose its most distinguishing feature — that it is an arena where individual liberties can be exercised without fear or favour.

The ministry of electronics and information technology (MeitY) and Twitter are sparring over an official order to block certain social media handles and tweets, posted by users on February 1. The content in question is about the farmers’ protests. Twitter is reported to have received a blocking order under Section 69A of India’s Information Technology (IT) Act, 2000, after which several user accounts were disabled. However, Twitter proceeded to reinstate many of them within 48 hours, claiming that the contentious tweets were “newsworthy” and so constituted “free speech”. MeitY has threatened penal action against the company.

This latest imbroglio over political speech shared on social media has wider ramifications than meets the eye. Indian commentators often invoke former United States President Donald Trump’s executive order on online censorship and dub social media platforms the “21st century equivalent of the public square”. There is indeed a strong case to be made for this assertion, given the scale and ubiquity of social media in India. Over 700 million people have access to broadband internet and nearly half a billion use these platforms to share diverse information, making India’s online sphere the largest democratic discussion board in the world. Equally, these inclusive characteristics challenge the monopoly of governments or private actors over censorship of online speech.

Section 69A of the IT Act gives the Centre powers to block public access to any information available online. An emergency provision under this section, which was reportedly invoked to issue orders to Twitter, also allows for “strict confidentiality” about complaints and requests received, and action taken by government to block such access. In other words, neither Twitter nor the government needs to provide citizens with a detailed rationale for content takedowns.

As a result, aggrieved users have limited judicial recourse since they are unable to access or understand such orders. This procedural opacity is comparable to the umpteen examples of unilateral takedowns of user-generated content by social media majors. The most prominent one was Twitter’s removal of Trump’s account last month, following the riots on Capitol Hill.

While “reasonable restrictions” under the Constitution limit the freedom of speech and expression in India, erroneous speech in public squares can be steered back towards truth only if participants are made aware of its falsehoods. In his famous essay On Liberty, John Stuart Mill argued that “all silencing of discussion is an assumption of infallibility”. The government or social media platforms, on refusing a hearing to an opinion, assume a position of absolute certainty. This runs contrary to the history of humankind, which is replete with examples of the fallibility of those who have the power to censor.

The digital sphere is celebrated as an exceptional space for individual liberties. In the present case, a compromise on free speech absolutism online should only be made through transparent and proportionate means. A detailed rationale for blocking information on social media must always be accessible to the public. The confidentiality of the blocking process under Section 69A sets a bad precedent. It stems from the presupposition that the average citizen is too immature to distinguish between good and evil. In doing so, it legitimises unilateral blocking of speech by private actors too. India should instead use this latest digital governance crisis to remedy its approach to social media regulation. Platforms must be made accountable through mandates for greater transparency in their content moderation practices, and legal blocking provisions must be modernised to reflect a graded, citizen-centric approach.

In Anuradha Bhasin vs Union of India, the Supreme Court affirmed that the wide reach of the internet should not become the basis to deny the right to free speech. Therefore, restrictions on online speech should be reasonable in every sense. Else, cyberspace will lose its most distinguishing feature — that it is an arena where individual liberties can be exercised without fear or favour.

Vivan Sharan is a Partner at Koan Advisory Group, New Delhi

Source: Hindustan Times, 8/02/21

Thursday, January 24, 2019

India should reconsider its proposed regulation of online content


Flowing from the Information Technology (IT) Act, India’s current intermediary liability regime roughly adheres to the “safe harbour” principle, i.e. intermediaries (online platforms and service providers) are not liable for the content they host or transmit if they act as mere conduits in the network, don’t abet illegal activity, and comply with requests from authorised government bodies and the judiciary. This paradigm allows intermediaries that primarily transmit user-generated content to provide their services without constant paranoia, and can be partly credited for the proliferation of online content. The law and IT minister shared the intent to change the rules this July when discussing concerns of online platforms being used “to spread incorrect facts projected as news and designed to instigate people to commit crime”.
On December 24, the government published and invited comments to the draft intermediary liability rules. The draft rules significantly expand “due diligence” intermediaries must observe to qualify as safe harbours: they mandate enabling “tracing” of the originator of information, taking down content in response to government and court orders within 24 hours, and responding to information requests and assisting investigations within 72 hours. Most problematically, the draft rules go much further than the stated intentions: draft Rule 3(9) mandates intermediaries to deploy automated tools for “proactively identifying and removing [...] unlawful information or content”.
The first glaring problem is that “unlawful information or content” is not defined. A conservative reading of the draft rules will presume that the phrase means the restrictions on free speech permissible under Article 19(2) of the Constitution, including those relating to national integrity, “defamation” and “incitement to an offence”.
Ambiguity aside, is mandating intermediaries to monitor for “unlawful content” a valid requirement under “due diligence”? If an intermediary must monitor for all unlawful content to qualify as a safe harbour, is it substantively different from an intermediary that exercises active control over its content, and is therefore not a safe harbour at all? Clearly, the requirement of monitoring for all “unlawful content” is so onerous that it is contrary to the philosophy of safe harbours envisioned by the law.
By mandating automated detection and removal of unlawful content, the proposed rules shift the burden of appraising the legality of content from the state to private entities. The rule may run afoul of the Supreme Court’s reasoning in Shreya Singhal v Union of India, wherein it read down a similar provision because, among other reasons, it required an intermediary to “apply [...] its own mind to whether information should or should not be blocked”. “Actual knowledge” of illegal content has since been held to accrue to the intermediary only when it receives a court or government order.
Given the inconsistencies with legal precedent, the rules may not stand judicial scrutiny if notified in their current form.
The lack of technical considerations in the proposal is also apparent since implementing the proposal is infeasible for certain intermediaries. End-to-end encrypted messaging services cannot “identify” unlawful content since they cannot decrypt it. Internet service providers also qualify as safe harbours: how will they identify unlawful content when it passes encrypted through their network? Presumably, the government’s intention is not to disallow end-to-end encryption so that intermediaries can monitor content.
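The constraint can be illustrated with a toy sketch (a one-time pad stands in for real end-to-end cryptography here; no actual messaging protocol is implied): the platform relays only ciphertext, so it has nothing intelligible to scan for “unlawful content”.

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # One-time-pad XOR: a toy stand-in for real end-to-end encryption.
    return bytes(p ^ k for p, k in zip(plaintext, key))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # XOR is its own inverse, so decryption reuses the same operation.
    return encrypt(key, ciphertext)

# The key is shared only between the two endpoints, never with the platform.
message = b"meet at the protest site"
key = secrets.token_bytes(len(message))

ciphertext = encrypt(key, message)

# The intermediary relays the ciphertext; without the key it sees only noise.
relayed_by_platform = ciphertext

# Only the recipient, holding the key, can recover the content.
assert decrypt(key, relayed_by_platform) == message
```

The asymmetry is the whole point of the design: a mandate to “proactively identify” content that the intermediary is cryptographically incapable of reading can only be satisfied by abandoning the encryption itself.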
Intermediaries that can implement the rules, like social media platforms, will leave the task to algorithms that perform even narrowly specified tasks poorly. Just recently, Tumblr flagged its own examples of permitted nudity as pornography, and YouTube slapped a video of randomly generated white noise with five copyright-infringement notices. Identifying more contextual expression, such as defamation or incitement to offences, is a much more complex problem. In the absence of accurate judgement, platforms will be happy to avoid liability by taking content down without verifying whether it violates the law. Rule 3(9) also makes no distinction between large and small intermediaries, and has no requirement for an appeal system available to users whose content is taken down. Thus, the proposed rules set up an incentive structure entirely deleterious to the exercise of the right to freedom of expression. Given the wide amplitude and ambiguity of India’s restrictions on free speech, online platforms will end up removing swathes of content to avoid liability if the draft rules are notified.
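The over-blocking failure mode is easy to reproduce with even a few lines of naive automation (the blocklist and examples below are hypothetical, not any platform’s actual system): context-blind matching flags perfectly lawful speech alongside the speech it was meant to catch.

```python
# A deliberately naive keyword filter: the kind of blunt automated tool
# a proactive-removal mandate would encourage. Blocklist is hypothetical.
BLOCKLIST = {"riot", "attack"}

def flags(post: str) -> bool:
    # Context-blind matching: strip punctuation, lowercase, check membership.
    words = {w.strip(".,!?:'\"").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

assert flags("Join the riot tonight")              # the intended catch
assert flags("Review: 'Heart Attack', the film")   # false positive: context ignored
assert not flags("Farmers protest peacefully")     # passes, but only by luck of wording
```

A real classifier is more sophisticated than a keyword set, but the Tumblr and YouTube episodes above show that the underlying problem, judging legality without context, does not disappear with scale; it only becomes harder to audit.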
The use of draconian laws to quell dissent plays a recurring role in the history of the Indian state. The draft rules follow India’s proclivity to join the ignominious company of authoritarian nations when it comes to disrespecting protections for freedom of expression. To add insult to injury, the draft rules are abstruse, ignore legal precedent, and betray a poor technological understanding. The government should reconsider the proposed regulation and the stance which inspired it, both of which are unsuited for a democratic republic.
Gurshabad Grover is a senior policy officer at the Centre for Internet and Society
Source: Hindustan Times, 24/01/2019