The Failure of Social Media Platforms to Mitigate Cybercrime in India: Examining Algorithmic Gaps and Legal Responsibilities

Richa Sharma, Anil Dawra, Anjali Sehrawat

Abstract

There is growing concern that the algorithmic architecture of social media platforms, particularly when deployed with insufficient human oversight, may exacerbate cybercrime rather than mitigate it. In India, hate speech, cyberbullying, phishing scams, non-consensual intimate images, and financial fraud have proliferated on platforms such as Instagram, Facebook, and X (formerly Twitter). While content filtering has attracted significant public and governmental attention, research on how algorithmic amplification exacerbates these harms remains limited. Indian law does not recognize harmful platform design as a distinct legal wrong, despite the stipulations of the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and their 2023 amendments. This study proposes that algorithmic negligence be recognized as a novel legal category within Indian cyber law, drawing on the tort elements of duty of care, breach, harm, and causation. By comparing the EU's Digital Services Act (DSA), U.S. jurisprudence under Section 230 of the Communications Decency Act, 1996 (Section 230), and Japan's content moderation regulations, the study demonstrates that jurisdictions worldwide are progressing toward more proactive governance. Through doctrinal analysis, case studies, and cross-jurisdictional synthesis, the research proposes implementable reforms, including mandatory algorithmic audits, tiered duty-of-care frameworks, and transparency requirements. It concludes by proposing a strategy to amend Indian legislation to classify harmful algorithmic design as a legal violation.
