
In today’s digital world, powerful AI systems are trained on massive amounts of personal information from hospitals, banks, phones, and more. "Federated unlearning" is a promising new tool that lets people request their private data be truly removed from these AI models, even when the training happens across many locations without sharing raw data.
"Federated unlearning" builds on federated learning (where data stays local) and machine unlearning (removing specific data influences after training). It allows individuals to exercise their privacy rights without forcing companies to retrain entire expensive models from scratch. While the piece rightly warns of potential new cybersecurity risks—such as attackers using unlearning requests to hide poisoned data or backdoors—the core principle remains vital: you should control your own information.
So far, Europe has led the way with its General Data Protection Regulation (GDPR), which includes a clear “right to be forgotten” (Article 17). This gives citizens the legal power to demand that companies erase their personal data and its influence from AI systems. Similar ideas appear in research on applying the right to be forgotten directly in federated learning settings.
America’s approach is weaker and more fragmented. We have no strong nationwide “right to be forgotten” for AI. Instead, we rely on a patchwork of state laws like California’s CCPA, which offers a “right to deletion” but falls short of Europe’s standard in many cases. Our Constitution has long protected privacy rights—from the Fourth Amendment’s defense against unreasonable searches to traditions of individual liberty and limited government. Conservatives have always fought to preserve personal freedom and keep big institutions from overreaching.
In the age of AI, we must uphold and advance that tradition. The United States should pass strong federal privacy legislation that meets or exceeds Europe’s protections. Individual liberty includes the right to be left alone—not just from government, but from unchecked tech companies that profit from our data.
Studies show federated unlearning can help achieve real privacy in collaborative AI settings such as medical imaging, while keeping data secure and local. Other research highlights both its promise and the need for careful safeguards against privacy threats and model degradation.
True American leadership means innovating boldly while defending core rights. Policymakers should require companies to implement reliable unlearning tools, verify requests properly, and audit results. This protects citizens without slowing down responsible innovation.
We cannot let Europe set the bar while America lags. Our constitutional values demand better. By advancing strong privacy rights—including effective data unlearning—we can ensure AI serves people, not the other way around.
Congress and the states should act now to make the “right to be forgotten” a real American standard.
Some useful links for further research:
- Right to be forgotten in Federated Learning: https://arxiv.org/abs/2203.07320
- Federated Unlearning and Its Privacy Threats: https://iqua.ece.utoronto.ca/papers/feiwang-ieeenetwork23.pdf
- Federated Client Unlearning in Medical Imaging: https://papers.miccai.org/miccai-2024/paper/1632_paper.pdf
- "Does ‘federated unlearning’ in AI improve data privacy, or create a new cybersecurity risk?", the Conversation, April 13, 2026: https://theconversation.com/does-federated-unlearning-in-ai-improve-data-privacy-or-create-a-new-cybersecurity-risk-279640
- The Right to Be Forgotten Is Dead (discussing US vs EU): https://techpolicy.press/the-right-to-be-forgotten-is-dead-data-lives-forever-in-ai