Ban on AI in confidentiality agreements – practically unworkable?

3 mins read • Simon • COMMERCIAL LAW • 27 August 2025

When companies and organisations wish to protect their information, a non-disclosure agreement (NDA) is a useful instrument. With the rapid development of artificial intelligence, however, AI prohibitions have begun to appear in NDAs. Wording such as “the receiving party must not use any AI service in connection with the handling of confidential information” is increasingly common. The question is whether such provisions deliver the intended protection – or whether they create a contractual problem in their own right.

AI bans are difficult to comply with in practice

From a contract-law perspective, a blanket AI ban can be misleading. It signals control but is, in practice, hard to comply with. Today, AI is not confined to standalone platforms such as ChatGPT or Gemini; it is embedded across everyday business tools:

  • The email client suggests subject lines and auto-replies.
  • Word processors offer AI-driven drafting suggestions.
  • eSigning tools summarise the contents of agreements.
  • Project tools automatically prioritise tasks based on history.

A prohibition on “all AI” would, technically, capture almost all routine office work – even where no active use of sensitive data occurs. The result is a clause that is in constant jeopardy of breach.

Why banning AI in confidentiality clauses is not enough

Instead of prohibiting a technology category, the NDA or confidentiality clause should address the actual protection objective: that confidential information is not stored, analysed, shared or re-used in ways that risk disclosure. There is a material difference between:

  • An AI service that trains on user data.
  • An AI service that only processes data locally without storage.
  • An AI feature that merely affects the user interface (for example, drafting suggestions).

Contract terms could, for example:

  • Prohibit the use of public AI services that train on the user’s content.
  • Permit AI tools embedded in approved software where data is not stored.
  • Require the parties to inform each other which tools are used.

This strikes a better balance between technology use and information security, and reduces the risk of inadvertent breach.

AI is already built into everyday tools

Regardless of sector, size or technological maturity, AI is already part of the working environment – from the public sector to small consultancies and large industrial groups. AI is used in email clients, word processors, CRM systems, internal chats, analytics tools and cloud services – often without the user even noticing. The functionality is simply baked into the tools used every day.

Excluding all AI usage in a confidentiality clause or NDA is therefore not only impractical – it risks undermining the credibility of the agreement. Where a party cannot, with reasonable effort, understand what is actually prohibited, uncertainty arises as to what is permitted, what constitutes breach, and how the agreement is to be complied with in practice.

The question is no longer whether AI may be used – but how, in which tools, and under what conditions. By regulating these points clearly, you can create confidentiality terms that both protect information and can be followed in an AI-integrated working environment.

At Morling Consulting, our contract lawyers help companies draft and review NDAs that work – even in an AI-integrated working environment.