TikTok will soon introduce enhanced age-verification technology across the European Union and the UK as governments push for stricter controls on children’s access to social media platforms.
The new system, tested quietly in parts of the EU over the past year, uses a combination of profile details, uploaded content, and on-platform behaviour to identify accounts that may belong to users under the age of 13. Rather than banning flagged accounts automatically, the system routes them to trained moderators for review. Users will also be able to appeal if their accounts are removed in error.
As part of the appeal process, TikTok may request age verification through methods such as facial age estimation, credit card checks, or government-approved identification. The pilot phase of the technology has already led to the removal of thousands of underage accounts.
TikTok said the system complies with data protection and privacy laws and is designed solely to improve safety for younger users. According to the company, age predictions are only used to support moderation decisions and enhance the technology, not for advertising or other purposes.
Existing safety measures will continue, including restrictions on direct messaging for users under 16, screen-time limits for users under 18, and reduced notifications during night hours.
The rollout comes amid increasing scrutiny from regulators in both the EU and the UK over how social media platforms verify users' ages. Several European countries and the UK are considering stricter age limits, following recent international moves to restrict children's access to social media amid growing concerns about screen time and online harm.
TikTok said it has worked closely with regulators to ensure the new technology meets legal requirements and strengthens protections for young users.
