The AI Act introduces a risk-based approach to artificial intelligence systems. But one point is clear: every AI system provider is now affected, including when the system is not classified as “high-risk”.
A standard SaaS agreement, even when supplemented by a DPA, is no longer sufficient. It must be complemented by AI-specific contractual provisions, often in the form of a dedicated AI annex, to reflect the new obligations introduced by the AI Act and to secure the relationship with customers.
Even for limited-risk AI systems, such as decision-support tools, recommendation engines, or AI features embedded in a SaaS product, the vendor must now describe how the system works, define permitted and prohibited uses, allocate responsibilities clearly, and organise transparency and compliance.
An AI annex allows these obligations to be structured in one place, without overloading the main agreement. This is why an AI clause in a SaaS agreement has become essential.
The first contractual issue raised by the AI Act is the allocation of roles.
The agreement must clearly identify the AI system provider, the deployer (the customer using the system), and, where applicable, AI model providers or technical subprocessors.
This clarification is essential regardless of the system’s risk level. It prevents the vendor from bearing obligations that actually arise from the customer’s use or configuration of the system.
An AI system should never be described in generic terms.
The agreement or AI annex should specify the purpose of the system, its intended uses, prohibited or excluded uses, and the role of the system in decision-making.
This precision is essential to limit the vendor’s liability, prevent misuse, and align expectations on both sides.
The closer the system is to a sensitive use, the more precise this description must be.
The AI Act imposes proportionate governance requirements.
All AI system providers must be able to demonstrate that risks associated with the system have been identified, that appropriate control measures are in place, and that system performance is monitored over time.
Contractually, it is advisable to provide for the existence of a risk management framework, access to compliance-related information, and regular updates when the system evolves.
Data governance is a central issue for all AI systems. The agreement should define the types of data used, safeguards to prevent obvious or systemic bias, and the customer’s responsibility for system configuration and input criteria.
A key principle applies across the board: the vendor should not be liable for choices made by the customer, particularly when those choices have legal or ethical implications.
The AI Act introduces graduated transparency obligations. For all AI systems, providers must be able to supply clear and understandable documentation, usage instructions, and information on system limitations.
This documentation does not need to be embedded directly into the agreement. It can be made available through a trust centre, on request and subject to confidentiality. This approach protects both the vendor’s intellectual property and the customer’s need for reassurance.
Human oversight is not limited to high-risk systems. Whenever an AI system influences a decision, the agreement should specify whether human validation is required, who is responsible for it, and at which stage it must occur.
The more critical the system, the more important this requirement becomes. But even for simpler systems, a complete absence of human oversight can raise issues.
All AI providers should contractually address a minimum level of security, incident detection mechanisms, and customer notification procedures.
The agreement should clearly define what constitutes an incident, the applicable notification timelines, and the respective roles of each party.
One issue has become central in AI negotiations: model training. It is strongly recommended to state clearly that customer data is not used to train shared or general-purpose models, that outputs generated for the customer belong to the customer, and that only aggregated and anonymised data may be used by the vendor.
This provision is now a key element of commercial trust.
The AI Act promotes a broader view of the AI value chain. Agreements should identify model providers, technical subprocessors, and notification mechanisms in case of changes.
Even for lower-risk systems, this level of transparency has become a strong customer expectation.
The AI Act does not apply only to high-risk AI systems. It requires all AI system providers to rethink how they structure their agreements. The right approach is to complement the SaaS agreement, introduce a proportionate AI annex, clearly allocate roles and responsibilities, and anticipate uses, risks and data issues. For an overview of the key provisions, see the SaaS contracting guide. If you need to structure your AI contracts, book a call.