Artificial intelligence is no longer just a competitive advantage; it is now also a regulated technology.
With the entry into force of the EU Artificial Intelligence Act (AI Act), using, developing, or integrating AI systems now entails new legal obligations.
If you have a tech startup or use AI models in your product or service, this article is for you.
We’ll explain, from a legal but accessible perspective, what this regulation requires, how your system’s risk level is classified, and what other legal implications you should already be considering, even if you’re not developing your own AI.
---
The AI Act is the world’s first comprehensive legal framework regulating the development, use, and commercialization of artificial intelligence systems.
It has been approved by the European Parliament and the Council, and its application is being phased in: the first obligations take effect in 2025, with full compliance required for most systems by 2026.
🎯 Objective: to ensure that AI used in the EU is safe, transparent, ethical, and consistent with fundamental European rights.
---
The regulation establishes a classification system based on the risk posed by the AI system to people’s rights and safety.
There are four levels: unacceptable, high, limited, and minimal risk.
1. Unacceptable risk (prohibited)
Examples: cognitive manipulation systems, social scoring, or mass biometric surveillance without legal justification.
If your system falls into this category, it cannot be marketed in the EU.
2. High risk
Applies to AI used in sectors such as healthcare, human resources, banking, justice, or critical infrastructure.
It also includes tools that evaluate people, such as recruitment software.
Key obligations:
- Technical documentation and risk management.
- Data governance and traceability.
- Human oversight and cybersecurity.
3. Limited risk
If your system interacts with humans, generates synthetic content, or impersonates a human (chatbots, deepfakes), you must clearly inform the user that they are interacting with AI (see the sketch after this list).
4. Minimal risk
No specific obligations beyond general compliance requirements (such as data protection and intellectual property).
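By way of illustration, here is a minimal sketch of how a chatbot backend could surface that disclosure before the first AI-generated reply. The `ChatSession` class and the wording of the notice are assumptions made for the example; the AI Act requires that users be clearly informed, but it does not prescribe a particular implementation.

```python
# Minimal sketch: show an AI-use disclosure before any generated reply.
# The ChatSession structure and the notice text are illustrative assumptions,
# not wording mandated by the AI Act.

from dataclasses import dataclass, field

AI_DISCLOSURE = (
    "You are chatting with an automated assistant powered by artificial "
    "intelligence, not a human agent."
)

@dataclass
class ChatSession:
    messages: list[dict] = field(default_factory=list)
    disclosed: bool = False

    def ensure_disclosure(self) -> None:
        # Make sure the AI-use notice appears before the first generated reply.
        if not self.disclosed:
            self.messages.append({"role": "system_notice", "text": AI_DISCLOSURE})
            self.disclosed = True

    def add_ai_reply(self, text: str) -> None:
        self.ensure_disclosure()
        self.messages.append({"role": "assistant", "text": text})


session = ChatSession()
session.add_ai_reply("Hello! How can I help you today?")
# session.messages now starts with the disclosure, followed by the assistant reply.
```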
---
Even if you don’t develop your own AI, you must still comply with the Regulation if you:
- Integrate third-party AI systems, e.g., APIs based on generative models like ChatGPT or DALL·E.
- Retrain existing models.
- Offer services partially based on automated decision-making.
Liability may fall on the AI provider, the distributor, the integrator, or the professional user, depending on your company’s role.
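As an illustration of what basic traceability can look like when you integrate a third-party model through an API, the sketch below wraps the external call in a thin layer that records who asked what, which model answered, and when. The `call_external_model` function is a placeholder rather than any vendor’s real SDK, and the log format is an assumption; the point is simply that keeping such records makes it easier to evidence your role and your oversight.

```python
# Sketch of a thin audit layer around a third-party generative-model API.
# call_external_model stands in for whichever vendor SDK you actually use;
# the log format is an assumption, not a format required by the AI Act.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")


def call_external_model(prompt: str) -> str:
    """Placeholder for the real third-party API call (e.g. a generative text model)."""
    return f"[generated answer for: {prompt}]"


def generate_with_audit(prompt: str, model_id: str, user_id: str) -> str:
    response = call_external_model(prompt)
    # Record who requested what, from which model, and when (basic traceability).
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "user_id": user_id,
        "prompt": prompt,
        "response_preview": response[:200],
    }))
    return response


answer = generate_with_audit("Summarise this contract clause", "vendor-model-v1", "user-42")
```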
---
Even if you comply with the AI Act, there are additional legal areas that directly affect any startup developing or integrating AI tools:
AI systems often process personal data, such as profiles, usage histories, or behavioral patterns.
It’s essential to:
- Have a valid legal basis for processing (consent, legitimate interest, etc.).
- Conduct a Data Protection Impact Assessment (DPIA) if the processing is high-risk.
- Provide transparent information (Articles 13 and 14 GDPR).
- Implement appropriate technical measures, especially for automated decision-making.
Many AI models are trained using data, texts, or images protected by copyright.
The outputs they generate can also raise questions about whether they qualify for legal protection and whether they infringe preexisting rights.
Make sure to:
- Use legitimate datasets or validly licensed sources.
- Inform users if generated content is not original or may infringe others’ rights.
Does your AI make decisions that affect users? Does it generate content that others publish?
Then you need:
- Clear use policies and liability disclaimers.
- Legal notices about AI use.
- Human oversight or correction mechanisms (a minimal sketch follows this list).
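As a sketch of what a human oversight or correction mechanism might look like in practice, the example below routes low-confidence automated decisions to a manual review queue instead of applying them directly. The confidence threshold and the queue are illustrative assumptions; the right mechanism depends on your system, its risk level, and the decisions it takes.

```python
# Sketch of a human-in-the-loop gate: automated decisions below a confidence
# threshold are queued for manual review instead of being applied automatically.
# The threshold value and the review queue are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Decision:
    subject_id: str
    outcome: str
    confidence: float


REVIEW_THRESHOLD = 0.85  # assumed cut-off; tune to your own risk assessment
review_queue: list[Decision] = []


def apply_decision(decision: Decision) -> str:
    if decision.confidence < REVIEW_THRESHOLD:
        # Defer to a human reviewer instead of acting automatically.
        review_queue.append(decision)
        return "pending_human_review"
    return f"applied:{decision.outcome}"


status = apply_decision(Decision(subject_id="applicant-7", outcome="reject", confidence=0.62))
# status == "pending_human_review": a person must confirm or correct the outcome.
```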
AI-driven advertising can easily become invasive and run afoul of consumer protection and ePrivacy regulations.
Be cautious with:
- Unconsented or excessive targeting.
- Using inferred profiles without a valid legal basis.
---
If your startup develops or uses AI, here are some key steps to mitigate risks:
1. Identify the risk level of the AI system you use or market.
2. Analyze your personal data flows and their legal basis.
3. Review the datasets and tools you integrate, including their origin and licenses.
4. Update your legal documents (privacy policy, terms and conditions).
5. Implement human oversight measures, especially in critical decisions.
6. Monitor key AI Act dates to plan your compliance roadmap.
---
At LCL, we assist startups and tech developers in implementing AI compliance plans, including:
- Risk classification and AI Act obligations.
- Legal design and compliance-by-design consulting.
- Data protection impact assessments.
- Review of models, terms, and data flows.
- Drafting tailored legal clauses and documentation.
- Preventive legal risk audits for AI systems.
---
Do it right from the design phase.
We’ll help you grow with legal certainty and confidence in your technology.