The AI Act introduces a risk-based approach to artificial intelligence systems. One point, however, is clear: every AI system provider is now affected, even when the system is not classified as “high-risk”.
A standard SaaS agreement, even when supplemented by a DPA, is no longer sufficient. It must be complemented by AI-specific contractual provisions, often in the form of a dedicated AI annex, to reflect the new obligations introduced by the AI Act and to secure the relationship with customers.
Even for systems that fall outside the high-risk category, the provider must now document how the system works, inform users appropriately, and manage risks in a way proportionate to the system’s impact.
An AI annex allows these obligations to be structured in one place, without overloading the main contract.
The first contractual issue raised by the AI Act is the allocation of roles.
The contract must clearly identify which party acts as the provider of the AI system and which acts as the deployer, within the meaning of the AI Act.
This clarification is essential regardless of the system’s risk level.
It prevents the provider from bearing obligations that actually arise from the customer’s use or configuration of the system.
An AI system should never be described in generic terms.
The contract or AI annex should specify the system’s intended purpose, its main functionalities, its known limitations, and any uses the provider expressly excludes.
This precision is essential to delimit the provider’s responsibility, to prevent misuse, and to avoid an unintended drift toward a higher risk classification.
The closer the system is to a sensitive use, the more precise this description must be.
The AI Act imposes proportionate governance requirements.
All AI system providers must be able to demonstrate that risks have been identified and assessed, that the system has been tested before release, and that appropriate documentation is kept up to date.
For high-risk systems, these obligations are formalised and documented.
For other systems, the same logic applies, but with a proportionate level of detail.
Contractually, it is advisable to provide for a reference to the provider’s internal governance framework, a commitment to maintain technical documentation, and a mechanism for sharing evidence of compliance on request.
Data governance is a central issue for all AI systems.
The contract should define which party supplies the data, who holds the rights to it, who is responsible for its quality and lawfulness, and for what purposes it may be used.
A key principle applies across the board: the provider should not be responsible for choices made by the customer, particularly when those choices have legal or ethical implications.
The AI Act introduces graduated transparency obligations.
For all AI systems, providers must be able to supply clear information about what the system does, how it should be used, and what its known limitations are.
This documentation does not need to be embedded directly into the contract. It can be made available through a trust center, on request and subject to confidentiality.
This approach protects both the provider’s trade secrets and the customer’s legitimate need for meaningful information.
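Purely as an illustration, here is a minimal sketch of what such machine-readable transparency documentation could look like if a provider chose to publish it through a trust center. The structure and field names are hypothetical, not prescribed by the AI Act.

```python
from dataclasses import dataclass

# Illustrative only: the AI Act does not prescribe this structure.
# The field names are hypothetical examples of the information a
# provider might make available to customers.
@dataclass
class AISystemFactSheet:
    system_name: str
    intended_purpose: str          # what the system is designed to do
    excluded_uses: list[str]       # uses the provider expressly rules out
    known_limitations: list[str]   # e.g. languages, domains, error modes
    human_oversight: str           # how outputs can be reviewed or overridden
    training_data_summary: str     # high-level description, no trade secrets
    customer_data_used_for_training: bool  # the key commercial-trust point

# Hypothetical example entry
sheet = AISystemFactSheet(
    system_name="Example contract-review assistant",
    intended_purpose="Flag potentially risky clauses for human review",
    excluded_uses=["Fully automated legal decisions"],
    known_limitations=["English-language contracts only"],
    human_oversight="Every flag is reviewed by a qualified person",
    training_data_summary="Publicly available legal texts",
    customer_data_used_for_training=False,
)
```

Keeping this kind of summary outside the contract itself, as the article suggests, lets the provider update it as the system evolves without renegotiating the agreement.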
Human oversight is not limited to high-risk systems.
Whenever an AI system influences a decision, the contract should specify whether human review is required, who is responsible for exercising it, and how the system’s outputs can be overridden or corrected.
The more critical the system, the more demanding this requirement becomes.
But even for simpler systems, a complete absence of oversight can raise issues.
All AI providers should contractually address how errors, incidents, and malfunctions are handled. The contract should clearly define who must be notified, within what timeframe, and which party is responsible for remediation.
Ambiguity on these points creates operational and legal risk.
One issue has become central in AI negotiations: model training.
It is strongly recommended to state clearly that customer data will not be used to train or improve the provider’s models without the customer’s express consent.
This clause is now a key element of commercial trust.
The AI Act promotes a broader view of the AI value chain.
Contracts should identify the upstream providers of models and components on which the system relies, and how obligations flow along that chain.
Even for lower-risk systems, this level of transparency has become a strong client expectation.
The AI Act does not apply only to high-risk AI systems.
It requires all AI system providers to rethink how they structure their contracts.
The right approach is to clarify each party’s role, describe the system precisely, and consolidate the new obligations in a dedicated AI annex.
This is no longer optional.
It is now a core element of doing business with AI in Europe.

