Center for AI and Ethics

AI Literacy.

Competence rather than tool training. What organisations have owed their staff since 2 February 2025 — and what, beyond that, defines contemporary AI literacy.

Since 2 February 2025, AI literacy in organisations that deploy AI is no longer optional but a legal obligation. Article 4 EU AI Act requires that staff working with AI systems be sufficiently trained — appropriate to role, context and system.

That is the legal frame. But it describes only the floor, not the ceiling. What an organisation really needs is not one certificate per employee, but a shared language for dealing with AI — and people able to judge when an output is sound and when it is not.

This page sets out what Article 4 requires in concrete terms, what distinguishes good AI literacy from mere tool training, and how the Center for AI and Ethics (Europe) thinks about the field.

01

The obligation: Article 4 EU AI Act

The wording of the Regulation is brief but far-reaching. Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf.

Sufficient means: appropriate to

  • the technical knowledge required for operation and use,
  • the experience, education and training of the persons concerned,
  • the context in which the AI systems are used,
  • and the persons or groups of persons on whom the systems are to be used.

Three consequences follow. First, the obligation applies not only to developers, but to any organisation that deploys AI — from hospitals to NGOs, from large corporations to local councils. Second, there is no single, uniform training standard; a case worker in a municipal office needs different competencies from a data scientist. Third, the obligation applies now — not only from 2026.

02

A common misunderstanding

"We booked a prompt-engineering seminar. Article 4 is covered."

It is not. Prompt engineering is operation. AI literacy is judgement.

An employee who knows how to query ChatGPT as skilfully as possible, but cannot tell when an answer is hallucinated, which rules the output might breach, or when personal data should not be entered into the tool in the first place, is not AI-literate in the sense of the law. They are merely tool-capable.

This is not a purely legal argument. It is a practical one: organisations that train their people only on tools generate risks that appear on no prompt slide. Incorrect legal text in an Outlook auto-reply. Diagnostic aids that no one challenges any more. Recruitment shortlists that systematically disadvantage certain groups without anyone noticing.

03

Three levels of AI literacy

We distinguish three levels, each building on the previous one. Anyone carrying responsibility in an organisation should be confident in at least two of them.

Level 1 — Basic understanding

What is a language model, and why does it hallucinate? How does an AI system arrive at its answers? What has it not seen? Who trained it, and on what data? Which decisions are made by the human, and which by the system?

Audience: all staff who use AI — even occasionally.

Level 2 — Application and appraisal

Which AI for which purpose? Where do the limits of data protection, copyright and confidentiality lie? How is an output checked? When must a human step in? How is use documented? Which incidents must be reported?

Audience: specialists and managers, compliance, project leads.

Level 3 — Governance and ethics

How does an organisation decide whether to introduce an AI system at all? Which fundamental rights are affected? Who is consulted before it goes live? How is objection organised? What happens when the system gets it wrong?

Audience: leadership, governance functions, ethics officers.

The most common mistake in organisations: Level 1 is taught to everyone, while Levels 2 and 3 remain unaddressed. The result is a trained workforce that still does not know where the limits lie.

04

Audiences and settings

AI literacy looks different in a primary school than in a law firm, in a hospital than in a town hall. The Regulation reflects this — it calls for appropriateness, not uniformity.

Schools and higher education
Media literacy, source criticism, critical reading of AI outputs. In higher year groups: productive use with clear labelling, reflection on the automation of thinking processes.

Adult education and continuing professional development
Role-specific. An HR officer needs something different from a lawyer, a social worker something different from a controller. At the core: where does AI help, where does it harm, where is it inadmissible?

Companies and public administration
Organisation-wide guidelines, documented training, a role matrix, named points of contact. Here Article 4 meets Articles 26 and 27 (the fundamental rights impact assessment, FRIA).

Leadership and oversight
Not how to write prompts — but how to decide which AI enters the organisation, which does not, and who is accountable.

05

Literacy as a stance

AI literacy is not a product you buy once and tick off. It is a stance: the willingness not only to use technology but to understand it; not only to marvel at it but to scrutinise it; not only to fear it but to put it in its place.

This is not an elitist demand. It is what the Regulation means at its core when it speaks of a sufficient level of AI literacy.

That is why, for us, literacy does not sit alongside ethics or regulation but ahead of them. Anyone who deploys AI without understanding what it does cannot take responsibility for it — no matter how many certificates have been issued. This is not a moral claim but a practical one.

06

How CAIE approaches it

The Center for AI and Ethics (Europe) brings together four areas of expertise under one roof: AI strategy and corporate transformation; ethics and philosophy; media and defence; compliance and Agentic AI. Several founding members have been active for years as trainers and advisers at institutions such as WIFI Vienna, universities of applied sciences, and in adult education.

What CAIE is building beyond this is a programme that teaches technical understanding, legal appraisal and ethical reflection not in sequence but together — because competence only emerges where all three meet. No prompt workshop without a liability and copyright component. No compliance training without the question of who benefits from the system and who is harmed by it.

Our first in-house programme is in its final development phase and will launch in autumn 2026. For workshops, expert talks, curriculum consulting or in-house introductions, CAIE can already be reached — please write to office@caie.at.

07

Frequently Asked Questions

01 What does the EU AI Act require regarding AI literacy in organisations?

Article 4 of the EU AI Act obliges providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff and any other persons dealing with the operation or use of AI on their behalf. The scope depends on context: technical understanding, role, responsibility, target audience and the type of system deployed. The obligation has been in force since 2 February 2025.

02 At what age does AI literacy make sense?

There is no hard threshold. In primary schools it is about basic concepts and media literacy; from secondary level onwards about first productive use and critical appraisal; in vocational training and higher education about subject-specific application and ethical reflection. What matters is not the age, but that content, language and tasks fit the audience.

03 What distinguishes good AI literacy from mere tool training?

Tool training shows how a prompt works. Good AI literacy explains why the model answers the way it does, where its limits lie, which data it has seen and which it has not, and how to appraise the output. The result is not just a more efficient user, but one capable of judgement.

04 Does CAIE offer training?

Our first in-house training programme is in its final development phase and will launch in autumn 2026. Several founding members have been active for years as trainers at institutions such as WIFI Vienna and at universities of applied sciences. For specific enquiries — workshops, expert talks, curriculum consulting or in-house introductions — CAIE can already be reached at office@caie.at.