Since February 2, 2025, the first binding provisions of the EU AI Act — the world’s first comprehensive regulation of artificial intelligence — have applied across all 27 EU member states, including Poland. The Act entered into force on August 1, 2024, and its prohibitions on the most harmful AI practices are among the earliest rules to take effect; most other obligations phase in through 2026.
It doesn’t ban AI itself. Instead, it bans the most dangerous and manipulative practices that could harm people’s rights, dignity, and freedom.
These prohibitions apply to every company, school, government office, and even individual users. Importantly, the law targets practices and functions, not specific tools. This means even popular AI companions can become illegal in certain uses.
Here are the 7 absolute prohibitions under the EU AI Act:
1. Subliminal Manipulation
AI systems are banned from using techniques that operate below the threshold of human consciousness (subtle changes in images, sounds, or rhythm) to significantly influence a person’s behaviour in a way that could cause serious harm.
2. Exploiting Vulnerability
It is illegal for AI to exploit a person’s age, disability, or difficult social/economic situation to cause physical or psychological harm.
3. Social Scoring
Systems that evaluate or classify people based on their social behaviour or predicted personality traits are prohibited where the resulting score leads to detrimental treatment in contexts unrelated to where the data was collected, or to treatment that is disproportionate to the behaviour itself.
4. Predictive Policing (“Minority Report” style)
AI cannot assess or predict the risk of a person committing a crime based solely on profiling or on their personality traits and characteristics. There must be concrete, objective evidence linked to actual criminal activity.
5. Mass Scraping of Faces from the Internet
Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage is banned — and no form of consent makes this practice legal.
6. Emotion Recognition in Workplaces and Schools
AI is prohibited from inferring or analysing a person’s emotions in workplaces or educational institutions (with very narrow exceptions for medical or safety reasons).
7. Real-Time Biometric Surveillance in Public Spaces
Real-time remote biometric identification (e.g., facial recognition) in publicly accessible spaces for law enforcement is generally forbidden. Exceptions are extremely limited (e.g., searching for missing children or preventing an imminent terrorist attack) and require prior authorisation from a judicial authority or an independent administrative authority.
Penalties are severe — fines can reach up to €35 million or 7% of global annual turnover, whichever is higher (significantly above the GDPR’s ceiling of €20 million or 4%).
What Does This Mean for Ordinary Users and AI Companions?
- You now have stronger legal protection against invisible manipulation and mass surveillance.
- Employers and schools cannot legally monitor your (or your child’s) emotions through cameras or AI analysis.
- AI companion apps must be cautious with emotional manipulation, biometric data, and exploitative features.
- If an AI companion uses subliminal techniques, or exploits a user’s age, disability, or difficult circumstances, it may fall under these prohibitions regardless of how the app markets itself.
The Bigger Picture
The EU AI Act is not anti-AI. It is pro-responsible AI. Europe has drawn a clear red line: innovation is welcome, but not at the cost of human dignity and fundamental rights.
This regulation positions the EU as a global leader in ethical AI development and may influence similar laws in other countries.
Your Turn
Do you feel safer knowing these strict rules are already in force in the EU? Or do you think the AI Act goes too far and might slow down innovation?
Please let us know in the comments — the most interesting opinions will be featured in our next article!
Want weekly updates on AI regulation, ethics, and the future of AI companions? Subscribe to the newsletter (link in footer) — every Monday the hottest stories straight to your inbox ❤️
