The AI Act introduces a risk-based approach to artificial intelligence systems. But one point is clear: every AI system provider is now affected, including when the system is not classified as “high-risk”.

A standard SaaS agreement, even when supplemented by a DPA, is no longer sufficient. It must be complemented by AI-specific contractual provisions, often in the form of a dedicated AI annex, to reflect the new obligations introduced by the AI Act and to secure the relationship with customers.

Why include a dedicated AI annex?

Even for:

  • limited-risk AI systems,
  • decision-support tools,
  • recommendation engines,
  • or AI features embedded in a SaaS product,

the provider must now:

  • describe how the system works,
  • define permitted and prohibited uses,
  • allocate responsibilities clearly,
  • organise transparency and compliance.

An AI annex allows these obligations to be structured in one place, without overloading the main contract.

Clarifying roles: provider, deployer, user

The first contractual issue raised by the AI Act is the allocation of roles.

The contract must clearly identify:

  • the AI system provider,
  • the deployer (the customer using the system),
  • and, where applicable, AI model providers or technical subcontractors.

This clarification is essential regardless of the system’s risk level.
It prevents the provider from bearing obligations that actually arise from the customer’s use or configuration of the system.

Defining the purpose and intended use of the AI system

An AI system should never be described in generic terms.

The contract or AI annex should specify:

  • the purpose of the system,
  • its intended uses,
  • prohibited or excluded uses,
  • the role of the system in decision-making.

This precision is essential to:

  • limit the provider’s liability,
  • prevent misuse,
  • align expectations on both sides.

The closer the system is to a sensitive use, the more precise this description must be.

Governance and risk management: a proportional obligation

The AI Act imposes proportionate governance requirements.

All AI system providers must be able to demonstrate that:

  • risks associated with the system have been identified,
  • appropriate control measures are in place,
  • system performance is monitored over time.

For high-risk systems, these obligations are formalised and documented.
For other systems, the same logic applies, but with a proportionate level of detail.

Contractually, it is advisable to provide for:

  • the existence of a risk management framework,
  • access to compliance-related information,
  • regular updates when the system evolves.

Data quality, bias and configuration responsibilities

Data governance is a central issue for all AI systems.

The contract should define:

  • the types of data used,
  • safeguards to prevent obvious or systemic bias,
  • the customer’s responsibility for system configuration and input criteria.

A key principle applies across the board: the provider should not be responsible for choices made by the customer, particularly when those choices have legal or ethical implications.

Documentation and transparency: finding the right balance

The AI Act introduces graduated transparency obligations.

For all AI systems, providers must be able to supply:

  • clear and understandable documentation,
  • usage instructions,
  • information on system limitations.

This documentation does not need to be embedded directly into the contract. It can be made available through a trust center, on request and subject to confidentiality.

This approach protects both:

  • the provider’s intellectual property,
  • and the customer’s need for reassurance.

Human oversight and control of use

Human oversight is not limited to high-risk systems.

Whenever an AI system influences a decision, the contract should specify:

  • whether human validation is required,
  • who is responsible for it,
  • at which stage it must occur.

The more critical the system, the more essential this requirement becomes.
But even for simpler systems, a complete absence of oversight can raise issues.

Security, robustness and incident management

All AI providers should contractually address:

  • a minimum level of security,
  • incident detection mechanisms,
  • customer notification procedures.

The contract should clearly define:

  • what constitutes an incident,
  • notification timelines,
  • the respective roles of each party.

Ambiguity on these points creates operational and legal risk.

Strictly framing data use and model training

One issue has become central in AI negotiations: model training.

It is strongly recommended to state clearly that:

  • customer data is not used to train shared or general-purpose models,
  • outputs generated for the customer belong to the customer,
  • only aggregated and anonymised data may be used by the provider.

This clause is now a key element of commercial trust.

Subcontracting and the AI value chain

The AI Act promotes a broader view of the AI value chain.

Contracts should identify:

  • model providers,
  • technical subcontractors,
  • notification mechanisms in case of changes.

Even for lower-risk systems, this level of transparency has become a strong customer expectation.

Conclusion

The AI Act does not apply only to high-risk AI systems.
It requires all AI system providers to rethink how they structure their contracts.

The right approach is to:

  • complement the SaaS agreement,
  • introduce a proportionate AI annex,
  • clearly allocate roles and responsibilities,
  • anticipate uses, risks and data issues.

This is no longer optional.
It is now a core element of doing business with AI in Europe.
