WhatsApp has announced it blocked over two million accounts in India in May and June for violating its rules.
The Facebook-owned messaging service said 95% of these accounts were banned for violating its limits on how many times messages can be forwarded in India.
The submissions were made by WhatsApp in its first monthly compliance report under India’s controversial new IT rules.
India is WhatsApp’s largest market with about 400 million users.
The company said its “top focus” has been to prevent accounts in India from sending harmful or unwanted messages at scale.
Using advanced machine learning technology, WhatsApp reportedly bans close to eight million accounts across the world every month.
Two million accounts sending a “high and abnormal rate of messages” were banned in India alone between May 15 and June 15, the service said.
The service identifies an Indian account as one with a +91 (country code) phone number.
WhatsApp often ends up being the focus of discussions on the spread of misinformation and fake news in India.
Such fake news and hoaxes can be forwarded to tens of thousands of users within hours, making them practically impossible to counter.
Messages and videos circulating in bulk have in the past incited mob violence in India, even leading to deaths.
In addition to responding to user complaints, WhatsApp said it deployed its own tools to prevent abuse on the platform.
The service said it relied on “behavioral signals” from user accounts, as well as available “unencrypted information” such as profile and group photos and group descriptions, to identify potential offenders.
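The report does not detail how such signals are combined, but a toy sketch of a rate-based check conveys the general idea. Everything below is an assumption for illustration only: the thresholds, field names, and the looks_like_bulk_sender helper are hypothetical and do not describe WhatsApp’s actual systems.

```python
# Illustrative only: a hypothetical rate-based "behavioral signal" check.
# Thresholds and field names are assumptions, not WhatsApp's real pipeline.
from dataclasses import dataclass


@dataclass
class AccountActivity:
    phone_number: str         # a "+91..." number marks an Indian account
    messages_last_hour: int   # total messages sent in the past hour
    distinct_recipients: int  # how many different users/groups were messaged


def looks_like_bulk_sender(activity: AccountActivity,
                           max_messages_per_hour: int = 1000,
                           max_distinct_recipients: int = 250) -> bool:
    """Flag an account sending at a high and abnormal rate to many recipients."""
    return (activity.messages_last_hour > max_messages_per_hour
            or activity.distinct_recipients > max_distinct_recipients)


# Example: an account blasting a forward to thousands of users would be flagged.
suspect = AccountActivity("+911234567890",
                          messages_last_hour=5000,
                          distinct_recipients=3000)
print(looks_like_bulk_sender(suspect))  # True
```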
WhatsApp’s submissions come at a time when tech companies are embroiled in an intensifying battle with the Indian government over the new IT rules.
The guidelines, announced in February and in effect since May, seek to regulate content on social media and streaming platforms, and have raised serious concerns about free speech and user privacy.
Critics say they give the government and law enforcement agencies powers to take down a wide range of content on the internet. But the government claims the rules are meant to prevent abuse and misinformation.