Coalition Demands the Halt of Discriminatory Algorithm in French Social Security
Why is the Algorithm Discriminatory?
- Treats marginalized individuals with suspicion
- Violates human rights standards
- Discriminates against those with disabilities and low income
Amnesty International and La Quadrature du Net call for immediate action to stop the discriminatory practice.
How Was the Discriminatory Nature Revealed?
- La Quadrature du Net (LQDN) published versions of the algorithm's source code
- System assigns risk scores to identify potential fraud cases
The closer a score is to one, the higher the probability that the case will be investigated.
How Does the Algorithm Impact Vulnerable Households?
- 32 million people in France receive family and housing benefits
- Beneficiaries' personal data is processed periodically and risk scores are assigned
- Parameters such as low income, disability, and unemployment increase the risk score
Investigations triggered by high risk scores can lead to biased targeting.
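The mechanism described above, in which parameters such as low income or unemployment push a score toward one and high scores are flagged for investigation, can be sketched in code. This is a hypothetical illustration only: the feature names, weights, bias, and threshold below are invented for demonstration and are not taken from the actual system's source code.

```python
import math

# Hypothetical weights: the article reports that low income, disability,
# and unemployment raise the risk score, but the real values are not public here.
WEIGHTS = {
    "low_income": 1.2,
    "disability": 0.9,
    "unemployed": 1.1,
}
BIAS = -2.0       # invented baseline
THRESHOLD = 0.7   # invented cutoff: scores near one trigger investigation

def risk_score(household: dict) -> float:
    """Map household attributes to a score between 0 and 1."""
    z = BIAS + sum(w for name, w in WEIGHTS.items() if household.get(name))
    return 1 / (1 + math.exp(-z))  # logistic squashing into (0, 1)

def flag_for_investigation(household: dict) -> bool:
    """Flag cases whose score crosses the (hypothetical) threshold."""
    return risk_score(household) >= THRESHOLD

household = {"low_income": True, "disability": True, "unemployed": True}
print(round(risk_score(household), 2))  # → 0.77
```

A household matching several of the weighted parameters ends up with a score close to one and is flagged, which illustrates the coalition's concern: the more markers of marginalization a household has, the more likely it is to be singled out for fraud investigation.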
Call for Human Rights Compliance
- AI systems in social protection flatten the realities of people’s lives
- Stigmatize marginalized groups and invade privacy
- Authorities urged to address AI-related harms
Strict governance and transparency rules are needed for AI deployment.
EU AI Act and the Future of Algorithmic Systems
- Current uncertainty around AI Act provisions
- Clear interpretation needed for social scoring systems
Amnesty International highlights the need for unambiguous guidelines.
Why Stop the Discriminatory Practice?
- Biased system poses risks to marginalized groups
- Authorities must ensure non-discrimination in AI use
A halt is needed to prevent harm to communities seeking social benefits.
This call comes amid France’s ambitions to excel in AI, even as it faces criticism for discriminatory practices.
Source: www.amnesty.org