EU AI Act: what obligations will tech companies face in 2025?

In 2025, two blocks of rules come into force that directly affect startups and companies undergoing digital transformation that integrate AI into their products and processes in Spain:

- From 02/02/2025: the prohibitions on certain AI practices take effect, together with a cross-cutting duty to promote AI literacy within organisations.
- From 02/08/2025: the rules for general-purpose AI (GPAI) apply, with primary obligations on providers of these models and indirect effects for integrators relying on third parties.

The regulatory focus for 2025 is clear: identify and switch off prohibited uses and require from the GPAI provider the information you need to use the model with legal certainty. Enhanced transparency (e.g., labelling of deepfakes and chatbots) arrives in 2026, so 2025 is the year to prepare for that transition.

---

Key dates to implement the AI Act

02/02/2025. The Regulation’s prohibitions take effect and organisations must promote AI literacy. This affects the entire chain (providers, distributors and integrators/business users). In practice, prepare an inventory of AI use cases, apply a strict filter to detect functions that fall under the prohibitions, and remove or redesign them immediately. In parallel, launch micro-training for product, data, legal and operations teams, focusing on model limits, bias, privacy and security.

02/08/2025. The GPAI duties kick in. The bulk of obligations sits with providers (technical documentation, training-data summary, copyright policy, security and, where relevant, enhanced measures for systemic risk). For integrators, the priority is to request, review and retain that documentation and to pass on relevant warnings and limitations to customers and users. It is advisable to update contracts (lawful data origin, licences and usage restrictions, cybersecurity, incident handling) and to establish internal channels to monitor material changes in third-party models.

02/08/2026. The transparency block begins to apply (e.g., labelling synthetic content and chatbot notices). Although there is a year to go, it is sensible to finalise your labelling strategy in 2025: UI/UX decisions, use of watermarks and metadata, and clear messaging to avoid confusion with human-generated content.
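
If you want to prototype that labelling strategy early, here is a minimal sketch, assuming a simple convention that pairs a visible notice with machine-readable metadata. The `label_synthetic` helper and its field names are illustrative assumptions, not a standard; production systems would more likely build on provenance frameworks such as C2PA.

```python
# Minimal sketch of a labelling convention for synthetic content.
# Field names are illustrative assumptions, not a standard: real deployments
# would look at provenance frameworks such as C2PA for the metadata layer.
import json
from datetime import datetime, timezone

def label_synthetic(text: str, model: str) -> dict:
    """Pair generated text with a visible notice (UI/UX) and machine-readable metadata."""
    return {
        "content": text,
        "visible_notice": "This content was generated by AI.",  # shown in the UI
        "metadata": {
            "ai_generated": True,
            "model": model,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = label_synthetic("Draft product description...", model="third-party-gpai")
print(json.dumps(record, indent=2, ensure_ascii=False))
```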

---

Who is affected in 2025?

- Providers: those who develop the AI system/model and “place it on the market” (or put it into service) under their own name.
- Importers and distributors: those who market third-party AI systems in the EU.
- Business users / “deployers”: those who integrate AI systems (own or third-party) into their activity.

If a product or service is offered or used in the EU, the AI Act applies regardless of where the provider is established.

---

What you cannot do from 02/02/2025 (prohibitions)

The Regulation bans, among others, the following practices:
- Social scoring of individuals that results in unjustified detrimental treatment.
- Biometric categorisation inferring sensitive attributes (e.g., sexual orientation, political or religious beliefs).
- Mass scraping of facial images from the internet or CCTV to build facial-recognition databases.
- Emotion recognition in workplace or education settings, save for narrowly defined exceptions (e.g., health or safety).
- Manipulative or subliminal techniques that significantly distort behaviour and cause harm.

What to do now
1) Build an inventory of AI use cases (own and third-party).
2) Apply an Article 5-style filter to identify prohibited uses (a minimal sketch follows this list).
3) Remove or redesign those functions and document the decision.
4) Launch a basic AI literacy programme for product, data, legal and operations teams.
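
To make steps 1) to 3) concrete, here is a minimal sketch of an inventory plus Article 5-style screen in Python. The tag names and example use cases are illustrative assumptions, not a legal test: a hit means the use case needs legal review and likely removal or redesign; keyword matching alone does not settle the question.

```python
# Minimal sketch of an AI use-case inventory with an Article 5-style screen.
# PROHIBITED_TAGS loosely mirrors the prohibited practices listed above and is
# an illustrative assumption, not an exhaustive legal test.
from dataclasses import dataclass, field

PROHIBITED_TAGS = {
    "social_scoring",
    "biometric_categorisation_sensitive",
    "facial_image_scraping",
    "emotion_recognition_workplace_education",
    "manipulative_subliminal",
}

@dataclass
class AIUseCase:
    name: str
    provider: str               # "in-house" or a third-party vendor
    description: str
    tags: set = field(default_factory=set)

def screen(inventory: list[AIUseCase]) -> list[AIUseCase]:
    """Return the use cases that hit a prohibited tag and need legal review."""
    return [uc for uc in inventory if uc.tags & PROHIBITED_TAGS]

inventory = [
    AIUseCase("CV ranking", "in-house", "Ranks applicants by stated skills", {"hr"}),
    AIUseCase("Interview mood analysis", "vendor-x",
              "Infers candidate emotions from video",
              {"hr", "emotion_recognition_workplace_education"}),
]

for uc in screen(inventory):
    print(f"REVIEW/REMOVE: {uc.name} ({uc.provider}) - tags: {uc.tags}")
```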

Sanctions (indicative): breaches of prohibitions can lead to fines of up to €35 million or 7% of worldwide annual turnover (whichever is higher), with proportionality for SMEs.

---

GPAI from 02/08/2025: what changes if you use third-party models

The heaviest GPAI obligations fall on providers of those models. Among other things, they must:
- Supply technical documentation suitable for integrators.
- Publish a sufficiently detailed summary of training data.
- Maintain a copyright/IP policy aligned with EU rules.
- Implement security and risk-mitigation measures and, where applicable, notify if the model reaches systemic-risk thresholds and deploy enhanced controls.

For integrators (your most likely scenario):
- Request, review and retain the provider’s documentation and the training-data summary; these are the basis for your legal-technical assessment (a tracking sketch follows the tip below).
- Contract for guarantees (lawful data origin, IP rights, use restrictions, cybersecurity, incident management).
- Pass on notices to your customers/users when generated content may be mistaken for human-made content (even if full obligations arrive in 2026, early adoption is advisable).
- Monitor whether the provider declares or reaches systemic risk (it changes the level of technical expectations).

> Pro tip: aligning with the GPAI Code of Practice (a voluntary instrument) helps demonstrate diligence when integrating third-party models.
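
One way to operationalise that due diligence is to keep a per-provider record of what has been received and reviewed. A hedged sketch follows; the checklist fields paraphrase the obligations listed above and are assumptions about what is worth tracking, not a statutory list.

```python
# Hedged sketch of a per-provider GPAI due-diligence record. The fields
# paraphrase the duties described above; they are tracking assumptions,
# not a statutory checklist.
from dataclasses import dataclass

@dataclass
class GPAIDueDiligence:
    provider: str
    technical_docs_received: bool = False
    training_data_summary_reviewed: bool = False
    copyright_policy_on_file: bool = False
    security_measures_assessed: bool = False
    systemic_risk_declared: bool = False   # if True, expect enhanced controls

    def gaps(self) -> list[str]:
        """List the items still outstanding before integration."""
        checks = {
            "technical documentation": self.technical_docs_received,
            "training-data summary": self.training_data_summary_reviewed,
            "copyright policy": self.copyright_policy_on_file,
            "security measures": self.security_measures_assessed,
        }
        return [item for item, done in checks.items() if not done]

dd = GPAIDueDiligence("vendor-x", technical_docs_received=True)
print("Outstanding before integration:", dd.gaps())
```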

---

Spain: governance and public tools

- AESIA (Agencia Española de Supervisión de la Inteligencia Artificial): Spain’s specialised AI supervisory authority. It coordinates with sectoral regulators and with the AEPD where personal data are involved.
- AI regulatory sandbox: an official testbed for projects (especially high-risk ones) that need to trial requirements under administrative supervision.
- Draft Law on the proper use and governance of AI (2025): aligns Spain’s legal framework with the AI Act and introduces additional measures such as labelling of AI-generated content and a national sanctions regime. (In parliamentary process at the publication date.)

---

Soft law and standards worth following in 2025

Although not binding, authoritative materials guide interpretation:
- European Commission guidance for GPAI and the Code of Practice (published in 2025).
- Guidance from data-protection authorities (EU and Spain) on legal bases, data minimisation and auditing where personal data are involved.
- UNE-ISO/IEC 42001:2025 (AI management system): a useful reference to establish proportionate governance without turning this into “compliance for compliance’s sake”.

---

2025 for integrators: a pragmatic (non-sectoral) roadmap

1. AI map and “hard filters”: inventory + Article 5-style screening (prohibitions). If a use case falls within the prohibitions, do not launch it.
2. AI literacy: micro-training on model limits, bias, privacy and prohibited uses.
3. GPAI provider due diligence (from 02/08/2025): technical documentation, training-data summary, copyright policy, cybersecurity, incident handling and systemic-risk status.
4. Contracts and notices: update Terms/DPA, attribution clauses, use restrictions and transparency towards end users.
5. Transparency 2026, prepared in 2025: define labelling (UI/UX), watermarks and metadata for synthetic content and chatbots.
6. Light-touch governance: minimal policies, roles and logs (model changes, incidents and “no-go uses”), leveraging standards such as UNE-ISO/IEC 42001 (a minimal logging sketch follows).
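
For point 6, here is a minimal sketch of such a log, assuming an append-only JSONL file; the event types and file name are illustrative, not a prescribed format. JSONL keeps entries audit-friendly and easy to diff.

```python
# Minimal sketch of a light-touch governance log: one JSON object per line,
# appended for model changes, incidents and "no-go" decisions. Event types
# and the file name are illustrative assumptions.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_governance_log.jsonl")   # hypothetical location
EVENT_TYPES = {"model_change", "incident", "no_go_use"}

def log_event(event_type: str, detail: str, owner: str) -> None:
    """Append one governance event to the JSONL log."""
    if event_type not in EVENT_TYPES:
        raise ValueError(f"unknown event type: {event_type}")
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "type": event_type,
        "detail": detail,
        "owner": owner,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

log_event("model_change", "Provider upgraded GPAI model to v2", owner="cto")
log_event("no_go_use", "Rejected emotion recognition in HR screening", owner="legal")
```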

---

FAQs (quick)

Can I use emotion recognition in HR or internal training?
No. It falls under prohibited practices from 02/02/2025, with very limited exceptions.

If I integrate a third-party model, does anything change for me in 2025?
Yes. From 02/08/2025, you must review the GPAI provider’s documentation and adjust notices/contracts across your value chain.

Is the GPAI Code of Practice mandatory?
No. It’s voluntary, but useful to evidence diligence and anticipate regulatory expectations.

Who supervises me in Spain?
AESIA for AI matters, the AEPD where personal data are involved, and sectoral regulators as applicable. The sandbox is a channel to trial complex projects under supervision.

---

In conclusion: what should you do about the AI Act?

2025 is the year to lay the groundwork: identify and disable prohibited uses, raise AI literacy, and demand from providers the documentation and assurances needed to integrate models with legal certainty. If you act now, the shift to enhanced transparency in 2026 will be far smoother.