WhatsApp Unveils New Safety Features, Bans Over 6.8m Scam Accounts

Meta-owned messaging platform WhatsApp has unveiled new safety features to strengthen users' security across group and individual chats, revealing that it has banned more than 6.8 million scam-linked accounts targeting people globally.
The platform said the tools are designed to provide users with more context before they engage, particularly when they are added to unfamiliar groups or begin conversations with people who are not in their contacts.
“In the first six months of this year, as part of our ongoing proactive work to protect people from scams, WhatsApp detected and banned over 6.8 million accounts linked to scam centres,” the company said.
Kojo Boakye, Vice President of Public Policy for Africa, the Middle East and Türkiye at Meta, said: “The fight against scams is a relentless one, and we are continually evolving our defenses to stay ahead of bad actors. This is part of our unwavering commitment to protect our users, not just by banning malicious accounts, but by empowering individuals with the tools and knowledge they need to recognize and avoid these sophisticated threats. We believe that a safer messaging environment is built through a combination of robust technology, proactive detection, and user education.”
For group chats, the app will now display a safety overview when someone outside a user’s contacts adds them to a group they do not recognise. The overview shows whether the person who added them is in their contacts, whether other members are known to them, and offers tips for staying safe. Users who want more context can open the chat; otherwise, notifications from the group remain muted until they choose to stay in it.
According to the messaging app, this is aimed at curbing surprise additions to large or malicious groups and preventing the spread of fraudulent links through mass invitations.
In individual chats, scammers often begin conversations elsewhere online before moving targets to private messaging. To counter this, WhatsApp is testing ways to provide more context when a user starts a chat with someone who is not in their address book, giving people time to assess the contact’s legitimacy before responding.
WhatsApp also shared findings from a collaboration with OpenAI, which revealed that fraudsters had been using ChatGPT to craft initial outreach messages that lured victims into WhatsApp chats before directing them to other platforms such as Telegram to finalise the scam. The schemes ranged from fake earnings tasks and pyramid-style rent-a-scooter offers to cryptocurrency investment ruses. In many cases, fraudsters built trust by showing fabricated earnings before pressuring victims to transfer funds into crypto accounts, following a familiar pattern of escalating from low-risk tasks to real financial transactions.
WhatsApp urged users to take time to review messages before responding, question any request that pressures them to act quickly, and verify the identity of anyone claiming to be a friend or family member through another channel.
The company said the new contextual prompts are meant to make red flags easier to spot and to reinforce safe behaviour.
The tools are being rolled out gradually as tests continue, with adjustments expected based on user feedback and evolving scam tactics.