AI, Strategy

AI responsibility in the company: Why mathematical understanding determines liability risks

5 min read

AI – the hype is huge, the opportunities and potential seemingly endless, but a worrying trend is emerging: companies are deploying AI systems in critical business processes without understanding the fundamental liability risks. The problem? The companies – and their managers personally – are liable for every mistake, while AI providers disclaim any responsibility.

Here’s the framework for responsible AI deployment that every business leader needs to understand before integrating AI into operational processes – and how to protect your business from existential risk.

The mathematical foundation: Why AI systems inevitably produce incorrect outputs

Most managers treat AI like deterministic software. This is a dangerous mistake.

AI systems work fundamentally differently from conventional software. They calculate probabilities for each answer based on patterns in their training data. This means every AI output is a statistical estimate, not a factual certainty. Mathematically, these systems can never guarantee a zero percent probability of a convincing but incorrect answer.

This is not a weakness of the system – it is a property of the loss function for which these systems are optimized.
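This property can be illustrated with a minimal sketch: a softmax over hypothetical model scores (the scores and answer labels here are invented for illustration) always assigns a strictly positive probability to every candidate answer, including the wrong ones.

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate answers to one question.
candidates = {"correct answer": 4.0, "plausible but wrong": 2.5, "nonsense": -1.0}
probs = softmax(list(candidates.values()))

for answer, p in zip(candidates, probs):
    print(f"{answer}: {p:.3f}")

# Because exp() of any finite score is positive, softmax can never
# assign exactly 0% to a wrong answer -- it can only make it unlikely.
assert all(p > 0 for p in probs)
```

However low the score of a wrong answer, its probability never reaches zero – which is the mathematical core of the liability problem described above.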

The four critical qualities you need to internalize:

  1. The system answers confidently even under uncertainty instead of admitting ignorance, because the optimization objectives in training reward this
  2. Plausibility and correctness are statistically independent – convincingly formulated does not mean factually correct
  3. Reproducibility does not exist across model updates – what works today may not work tomorrow
  4. You are liable for every output in business processes – the vendor is not liable

The legal reality: Full transfer of liability to the user

The legal situation is clear and devastating for companies:

OpenAI Terms of Service 2025: “THE SERVICES ARE PROVIDED ‘AS IS’ […] WE MAKE NO WARRANTIES REGARDING ACCURACY”. Anthropic, Google and Meta use identical wording. The provider is not liable for incorrect outputs. The user bears the full risk.

Managing director liability according to § 43 GmbHG

Anyone who uses systems whose error modes they do not understand is acting negligently. Unlike established technologies, where responsibility can be delegated to certified specialists – such as the Qualified Person in pharmaceutical manufacturing – there are no state-recognized certifications or regulated responsible roles for LLMs.

The management bears the responsibility directly and cannot delegate it in a legally binding manner.

GDPR compliance: an unsolvable problem

GDPR Article 22 requires explainability for automated decisions. LLMs cannot causally explain why token X was chosen. Fines: up to 20 million euros or four percent of annual turnover.

Empirical cases: When AI errors become expensive

Air Canada 2024: The chatbot gave a customer an incorrect refund policy. The company had to pay – not the AI provider. A Transformer-based LLM generated highly confident, syntactically correct, semantically plausible false statements that non-experts could not distinguish from correct outputs.

This case shows the reality: The user is liable, not the technology provider.

The insurance dilemma: incalculable risks

The insurance industry has recognized the problem:

  • No DIN standard for LLM use in critical processes
  • No ISO certification for “hallucination rates”
  • No state-recognized expert examination as with established technologies
  • Public liability insurance often does not cover AI-specific risks
  • D&O insurance often covers AI risks only through expensive add-on policies

Framework for the responsible use of AI

1. Risk assessment according to business criticality

Low-risk applications (recommendation: controlled use possible)

  • Content creation with human review
  • Brainstorming and idea generation
  • Internal documentation (not legally binding)

High-risk applications (recommendation: avoid or exercise extreme caution)

  • Accounting and financial reporting
  • Contract management and legal documents
  • Compliance-relevant decisions
  • Customer communication with legal implications

2. Implementation guide for controlled use

Step 1: Establish a basic understanding of mathematics

  • Train your management team on the probabilistic nature of AI outputs
  • Document known limitations and error types
  • Establish clear responsibilities

Step 2: Implement multi-level validation

  • Never use AI output without human review
  • Implement the dual control principle for critical decisions
  • Document all validation steps
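The dual-control principle from step 2 can be sketched as a simple release gate (function and field names are illustrative, not a real API): nothing is released without the required number of independent human approvals, and every decision is written to an audit log.

```python
from datetime import datetime, timezone

def release_ai_output(output, approvals, audit_log, required=2):
    """Dual-control gate: release an AI output only if at least
    `required` *distinct* human reviewers approved it, and log the
    decision either way (step 2: document all validation steps)."""
    distinct_approvers = {name for name, approved in approvals if approved}
    released = len(distinct_approvers) >= required
    audit_log.append({
        "output": output,
        "approvers": sorted(distinct_approvers),
        "released": released,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return released

log = []
draft = "Refund of 120 EUR approved per policy 4.2"  # hypothetical AI draft
print(release_ai_output(draft, [("alice", True), ("bob", True)], log))   # True
print(release_ai_output(draft, [("alice", True), ("alice", True)], log)) # False: not independent
```

The second call fails deliberately: two approvals from the same person do not satisfy the dual-control principle.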

Step 3: Legal protection

  • Check your insurance policy for AI cover
  • Consult legal advice for specific use cases
  • Develop internal compliance guidelines

3. Monitoring and continuous evaluation

  • Regular checks of AI outputs for accuracy
  • Documentation of all errors and their effects
  • Adaptation of the usage guidelines based on experience
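The monitoring loop above can be sketched as a spot-check report (a hypothetical helper, not a real tool): sample outputs regularly, record which were correct, and track the error rate as input for adapting your usage guidelines.

```python
def accuracy_report(samples):
    """Summarize regular spot checks of AI outputs.

    samples: list of (output_id, correct) pairs, where `correct` is the
    verdict of a human reviewer. Returns the error rate and the IDs of
    the outputs that need follow-up documentation.
    """
    total = len(samples)
    flagged = [output_id for output_id, correct in samples if not correct]
    return {
        "checked": total,
        "errors": len(flagged),
        "error_rate": len(flagged) / total if total else 0.0,
        "flagged": flagged,
    }

# One week of spot checks (IDs and verdicts invented for illustration).
report = accuracy_report([("a1", True), ("a2", False), ("a3", True), ("a4", True)])
print(report)  # error_rate 0.25, output "a2" flagged for documentation
```

The flagged IDs feed directly into the second bullet: every error and its effects get documented, not just counted.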

The simplifier perspective: Why deterministic automation is often the better choice

At Simplifier, we have deliberately chosen a different path. Instead of relying on probabilistic AI systems, we focus on deterministic business process management solutions.

Why? Because business processes need reliability, not probabilities.

Our recommendation for critical business processes:

  1. Use rule-based automation for core operational processes
  2. Only implement AI in non-critical areas with human monitoring
  3. Invest in deterministic workflows that are traceable and controllable
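As a sketch of what "deterministic and traceable" means in practice, here is a hypothetical rule-based refund check (the rules, thresholds, and field names are invented for illustration): the same input always produces the same decision, together with the rule that produced it.

```python
# Deterministic, rule-based refund check: the same input always yields
# the same, explainable decision -- unlike a probabilistic model.
REFUND_RULES = [
    # (rule name, predicate, decision) -- evaluated in order.
    ("cancelled within 24h", lambda c: c["hours_since_booking"] <= 24, "full refund"),
    ("flight cancelled by airline", lambda c: c["airline_cancelled"], "full refund"),
    ("non-refundable fare", lambda c: c["fare_class"] == "basic", "no refund"),
]

def decide_refund(case):
    """Return the decision of the first matching rule plus the rule name
    as an explanation -- auditable and reproducible, no probabilities."""
    for name, predicate, decision in REFUND_RULES:
        if predicate(case):
            return decision, name
    return "manual review", "no rule matched"

case = {"hours_since_booking": 3, "airline_cancelled": False, "fare_class": "flex"}
print(decide_refund(case))  # ('full refund', 'cancelled within 24h')
```

Every decision comes with the rule that triggered it, which is exactly the kind of explanation GDPR Article 22 demands and an LLM cannot provide.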

Conclusion: Mathematical understanding as risk management

For core operational processes with liability relevance, a basic understanding of mathematics is not optional – it is risk management. Not to master formulas, but to understand the consequences.

The key question: Can you explain the four critical characteristics of AI systems and assess their impact on your business?

If not, you should not put the system into production. Or be prepared to accept personal liability.

Your next step

Evaluate your current AI implementations against this framework. Identify high-risk applications and develop a plan to manage them safely or replace them with deterministic alternatives.

Do you already use AI in critical business processes? Share your experiences in the comments – or contact us for advice on safe automation alternatives.

Questions? Let's talk!

Would you like to dive deeper into this topic? Then let's talk, without obligation, and I'll share what else is worth knowing.

Christopher Bouveret
Innovation expert at Simplifier
