The Indian government, in association with industry body NASSCOM, has declared 2020-30 the Technological Decade (or “Techade”), foreseeing that the thrust behind economic growth will come from the digital sphere (AI, Web 3.0, Industry 4.0, etc.). It is further believed that realizing this trillion-dollar digital economy will require a safe, secure and trusted internet, with the focus shifting from platforms to the citizen. With the Digital Personal Data Protection (DPDP) Bill on the verge of being re-introduced in Parliament, the next big move is the introduction of the Digital India Act (DIA), touted as an updated piece of legislation that will replace the Information Technology Act, 2000 and completely overhaul India’s law and policy framework in the digital space.
Since 2015, the concepts of intermediary liability and safe harbour have assumed growing importance in the digital dialogue.
In the initial phase, India took a relatively lenient approach and put intermediaries on a long leash, requiring take-down of unlawful content only after receipt of “actual knowledge” of such content through a court order or a government directive. This standard emerged from Shreya Singhal v. Union of India, where the Supreme Court read down Section 79(3)(b) of the Information Technology Act, 2000, holding that intermediaries could not be made liable to take down content merely on the basis of complaints received from users. The court was swayed by the plight of mammoth intermediaries such as Google and Facebook, which received millions of complaints per day.
Phase 2 (2016-2021) saw greater adoption of the intermediary or aggregator business model, resulting in a perceived surge of so-called unlawful content available online. Platforms did not make details of sellers or posters of information easily available (many did not even collect the information) and demanded a court order even for blatantly infringing content (such as counterfeit luxury goods being sold for nickels and dimes). Steadily, safe harbour came to be seen as an excuse to avoid responsibility, with some sections of society calling for the complete elimination of the concept.
Phase 3 (2022) saw the government amend the Intermediary Guidelines 2021 to mandate that intermediaries ensure users do not post unlawful content. But what exactly does this mean? Courts are currently deciding whether it requires a stricter and prompter notice-and-takedown regime, or whether it obliges platforms to pre-screen content and take down unlawful content themselves. The Delhi High Court is hearing this issue in several cases, including the much-watched Aaradhya Bachchan v. Bollywood Time & Ors. (2023), which will decide YouTube’s liability in the context of defamatory and harassing information concerning children.
Platform availability and revenue sharing aren’t automatic grounds for denial of safe harbour!
The recent decisions of the U.S. Supreme Court in Twitter v. Taamneh and Gonzalez v. Google rejected arguments that blurred the line between “passive nonfeasance” and “active support” of unlawful acts. The use of Twitter and Google by terrorist organizations to transmit information to billions of users was held to be no different from the use of ordinary channels of communication such as mail, the telephone or public spaces. The Court likewise rejected the contention that revenue sharing between YouTube and posters of illegal content amounted to substantial assistance towards a wrongful act.
The DIA, too, should not proceed on the assumption that providing a platform and sharing revenue warrant denial of safe harbour. If platforms start re-designing their architecture to avoid any potentially problematic content, user experience and free speech will be compromised.
Incentivizing due diligence and voluntary clean-ups is the way to go!
The European Union’s Digital Services Act (DSA) and the United Kingdom’s proposed Online Safety Bill (OSB) offer interesting solutions for balancing the rights of users and platforms while aiming for a much safer online environment.
The OSB introduces an interesting concept: it requires social media platforms to remove “legal but harmful” content, the categories of which will be pre-defined by the government. This ensures that platforms do not take down content based on their own interpretation of what is harmful, which would invariably be subjective.
With the Indian executive being highly proactive, it can similarly define the kinds of information that intermediaries should treat with extra caution, ensuring such information is not posted and is taken down when found. For other information, the traditional “actual knowledge” route can continue, so that freedom of expression is not sacrificed at the altar of overzealousness.
In the EU, the DSA introduces a Good Samaritan policy, which incentivizes intermediaries to manage content on their platforms and to voluntarily prevent, detect, and remove illegal content. Such platforms do not lose their safe harbour if some content goes undetected, as the traditional actual-knowledge and notice-and-takedown mechanisms remain available to them.
The Digital India Act should likewise incentivize voluntary takedowns by intermediaries, and perhaps recommend standardized tools and technologies for pre-screening and detection, which can increase transparency and trust between platforms, users and the government.
Increasing transparency and pinning liability on officials of non-compliant platforms are necessary.
The UK’s OSB seeks to empower the executive to demand data from tech companies on the algorithms and infrastructure responsible for selecting and displaying content to users. India’s DIA can contain similar measures to enable the government to study patterns and assess how to frame regulations on algorithmic transparency that shield users from harm. To protect the trade secrets behind platforms’ algorithms, the government can contemplate receiving this information on a confidential basis, so that proprietary data does not fall into the hands of miscreants or business competitors.
To tackle the growing menace of non-compliant platforms that defy court orders or government policies, the DIA can also consider imposing strict civil and criminal liability on the senior management of such platforms. This will foster an environment of compliance and protect users against harm.
To realize a trillion-dollar digital economy, India needs a regulatory framework that is not too burdensome for platforms that collect data, but at the same time ensures a safe, secure and trusted internet for netizens.