The EU AI Act for SMBs: What You Actually Have to Do Before August 2026

The EU AI Act's high-risk system deadline is August 2, 2026. Most SMBs have heard of the Act but have no idea what actually applies to them. This guide cuts through both the panic and the complacency.

The four risk categories and which ones actually touch your process automation

The EU AI Act classifies every AI system into one of four risk buckets, and the compliance burden is strictly proportional to the bucket. Most SMBs worry about the wrong bucket, which wastes legal budget on risks they do not have and leaves real exposure untreated. Before you spend a euro on compliance, know which bucket you are in.

The four categories are unacceptable risk, high risk, limited risk, and minimal risk. Unacceptable risk systems are banned outright: social scoring by public authorities, certain predictive policing applications, emotion recognition at work or school, untargeted scraping of facial images for biometric databases. If you are reading this because you run a 150-person logistics firm or a 40-person law practice, you are not building any of these. You can skip this category.

High risk is where SMB operators get caught

High-risk systems carry the heaviest obligations: documentation, risk management, human oversight, transparency, data governance, accuracy and robustness testing, registration in the EU database. The category includes AI used in employment decisions (hiring, firing, performance scoring), access to essential services (credit, insurance, public benefits), education assessment, law enforcement, and critical infrastructure. It also includes any AI system that functions as a safety component of a regulated product: medical devices, toys, machinery, vehicles.

Most process automation sits in the limited or minimal risk category, but the edge cases matter. An AI tool that screens CVs is high risk. An AI tool that drafts replies to customer complaints is limited risk. An AI tool that summarizes internal meeting notes is minimal risk. The same vendor can produce systems in three different buckets.

  • Limited risk covers systems that interact with humans, generate synthetic content, or use emotion recognition in non-regulated contexts. Obligation: transparency (tell the user they are interacting with AI).
  • Minimal risk covers everything else: spam filters, recommendation engines, inventory forecasting, most back-office automation. Obligation: none, beyond voluntary codes of conduct.

What triggers high-risk classification, and what does not, despite the rumours

The high-risk trigger list in Annex III of the Act is specific and limited. It is not a vibe. Vendors and consultants have been selling panic by implying that any serious AI use is high risk. It is not. The category is enumerated: if your use case is not on the list and does not function as a safety component of a regulated product, you are not high risk.

Annex III covers eight enumerated areas: biometric identification, critical infrastructure, education and training, employment and worker management, access to essential services, law enforcement, migration and border control, and administration of justice. The sectors are specific and the use cases inside each sector are further narrowed. AI that recommends a training course to an employee is in scope. AI that forecasts next quarter's hiring need at the department level is not.

The rumour that trips SMBs up

A common misreading: "we use AI in HR, so we are high risk". Not automatically. The HR use cases that are high risk under the Act are the ones that directly affect hiring, firing, promotion, task allocation, or performance monitoring of individuals. A chatbot that answers employee questions about the vacation policy is not in scope. A tool that summarizes candidate interviews for the recruiter is not automatically in scope; it depends on whether the summary functions as a decision input. A tool that scores candidates and ranks them is in scope.

The second rumour: "if we process personal data with AI, we are high risk under the AI Act". No. That is GDPR. The AI Act overlays GDPR; it does not replace it. A system can be GDPR-relevant and AI Act minimal risk, or the reverse. Treat them as two parallel regimes.

The six documents you must produce if you deploy a high-risk system

If you concluded the system is high risk, the obligation set is concrete and documentable. Six artifacts cover most of what a regulator or auditor will ask for. None of them requires a specialist consultant if the system is well understood internally.

1. Risk management system documentation

A written description of the risks the system poses to health, safety, and fundamental rights. The mitigations in place. The residual risk after mitigation. The review cadence. For a 200-person company, this is a five- to ten-page document, not a ninety-page treatise.

2. Data governance and training data documentation

What data the system was trained on (if you built it), or what data it operates on (if you deploy it). Data quality checks. Bias testing results. Handling of special categories of data. If you are a deployer of a third-party system, this is mostly the vendor's job, but you document what the vendor told you.

3. Technical documentation

How the system works at a level sufficient for a regulator to assess compliance. Architecture, intended purpose, accuracy metrics, limitations. Again, if you are a deployer, the provider supplies most of this and you retain a copy.

4. Logs

Automatic logging of the system's operation, retained for at least six months (or longer in some sectors). This is a platform capability, not a document you write. Confirm your vendor provides it or build it in-house if you deploy internally.
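To make the logging duty concrete, here is a minimal sketch of what a deployer-side decision log could look like. The schema, field names, and file format are our assumptions for illustration; the Act requires that logs of the system's operation exist and are retained for at least six months, not any particular structure.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timedelta, timezone

# Illustrative sketch of a deployer-side AI decision log. The schema is an
# assumption, not a format prescribed by the Act; the Act requires that logs
# exist and are retained, not that they look like this.

RETENTION = timedelta(days=183)  # at least six months for deployers

@dataclass
class AIDecisionLogEntry:
    timestamp: str         # ISO 8601 with timezone, e.g. "2026-04-01T09:30:00+00:00"
    system_id: str         # which AI system produced the output
    input_reference: str   # pointer to the input, not the raw personal data
    output_summary: str    # what the system produced or recommended
    human_reviewer: str    # who exercised oversight, if anyone
    overridden: bool       # whether the human changed the outcome

def log_decision(entry: AIDecisionLogEntry, path: str = "ai_decisions.jsonl") -> None:
    """Append one entry as a JSON line; rotation and purging handled elsewhere."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

def is_past_retention(entry_timestamp: str) -> bool:
    """True once an entry is older than the six-month minimum and may be purged."""
    now = datetime.now(timezone.utc)
    return now - datetime.fromisoformat(entry_timestamp) > RETENTION
```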

5. Transparency and instructions for use

If you are a provider, you supply instructions to deployers. If you are a deployer, you document how the system is actually used in your context and ensure affected individuals are informed where required.

6. Human oversight arrangements

Who oversees the system. What they are trained to do. When they intervene. How they can override outputs. For most SMB deployments, this is a named human with a written process, not a team of reviewers.

Transparency and human oversight on limited-risk uses

Most SMB AI deployments land in limited risk, and the obligations there are modest but real. Ignore them and you create exposure that is trivially provable in an audit. The core duty is transparency: the user should know they are interacting with AI, and AI-generated content should be labeled where the user would otherwise be deceived.

In practice, this means a few concrete things. If your customer service operation uses an AI chatbot, the chatbot says it is an AI. If you use AI to generate product descriptions and those descriptions are consumer-facing in markets where it matters, you label them as AI-generated. If you use AI to edit images of real people (deepfake-adjacent uses), you label them. The obligations are on the deployer, which in most cases is you, not the vendor.
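As a concrete illustration, the chatbot disclosure can be as simple as a fixed notice on the first turn. This is a hypothetical sketch: `generate_reply` stands in for whatever model call you already make, and the notice wording is ours, not language prescribed by the Act.

```python
from typing import Callable

# Hypothetical sketch: disclose the AI interaction on the first turn of a chat.
# `generate_reply` stands in for your existing model call; the notice text is
# illustrative, not wording prescribed by the Act.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "You can ask for a human agent at any time."
)

def respond(message: str, is_first_turn: bool,
            generate_reply: Callable[[str], str]) -> str:
    reply = generate_reply(message)
    return f"{AI_DISCLOSURE}\n\n{reply}" if is_first_turn else reply
```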

Human oversight where not strictly required

Even when the Act does not require human oversight, the sensible default for any AI-generated output that reaches a customer, a regulator, or an employee is a human checkpoint. This is not a legal requirement for limited risk; it is a quality requirement. The companies that skip the checkpoint are the ones that discover six months later that their AI contract generator has been inserting the same clause error into every agreement.

  • AI chatbots must disclose they are AI before or during the interaction.
  • AI-generated images, video, and audio that could be mistaken for real must be labeled.
  • Synthetic text that could mislead (a fake review, a fake testimonial) is covered by consumer protection law regardless of AI Act status.
  • Internal uses (drafting, summarizing, analyzing) have no disclosure obligation, but most companies adopt an internal policy anyway.

Penalties, timeline, and what SMBs actually get from the SME exemptions

The Act's penalty structure looks terrifying on paper but is structured to scale with company size. The headline numbers assume enterprise-scale violators. SMB exposure is smaller in absolute terms but not zero, and reputational damage from a public finding scales independently of the fine.

Maximum fines: 35 million euros or 7 percent of global annual turnover (whichever is higher) for prohibited AI violations. 15 million or 3 percent for most other violations. 7.5 million or 1 percent for providing false information to regulators. The Act instructs authorities to take company size and circumstances into account, and for SMEs and startups each cap applies at the lower of the two values rather than the higher. The practical effect for a 50-person company is that penalties scale to something survivable, not bankruptcy-level.

Timeline in plain language

  1. February 2, 2025: prohibited AI practices banned. Already in force.
  2. August 2, 2025: general-purpose AI model obligations in force. Provider-side.
  3. August 2, 2026: obligations for high-risk AI systems under Annex III apply. This is the deadline that matters for most SMB deployers.
  4. August 2, 2027: obligations for high-risk systems that are safety components of regulated products (medical devices, machinery, etc.) apply.

What the SME provisions actually grant you

The Act specifies support mechanisms for SMEs and startups: priority access to regulatory sandboxes, simplified documentation options, free or low-cost conformity assessment assistance in some member states, awareness campaigns and training programs from national authorities. None of these exempt you from substantive compliance. They reduce the cost and complexity of getting to compliance.

A pragmatic compliance starter pack: the next 90 days

With four months to the August 2026 deadline, starter-pack work is enough for most SMB deployers. You do not need a consulting engagement. You need a disciplined inventory and a decision tree, owned by one person who can give it thirty percent of their time.

Weeks 1 to 2: inventory

List every AI system in active use. Include SaaS tools your teams adopted without a formal procurement process (the shadow AI problem). For each system, record: vendor, use case, who makes decisions with it, and whether it produces outputs that affect a specific person. This inventory is the single most important artifact you will produce. Everything else is downstream of it.
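If you want the inventory machine-readable from day one, a minimal sketch of one record could look like this. The field names mirror the list above; the schema itself is our assumption, and a spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass

# Minimal sketch of one inventory record; field names mirror the list above.
# The schema is an assumption, not a format prescribed by the Act.

@dataclass
class AISystemRecord:
    name: str                  # e.g. "CV screening tool"
    vendor: str                # who provides it
    use_case: str              # what it is used for, in plain language
    decision_owner: str        # who makes decisions with its output
    affects_individuals: bool  # does it produce outputs affecting a specific person?
    risk_category: str = "unclassified"  # assigned in weeks 3 to 4
    rationale: str = ""                  # document the why, not just the conclusion

inventory: list[AISystemRecord] = []   # one entry per system in active use
```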

Weeks 3 to 4: classification

For each system in the inventory, assign a risk category. High, limited, or minimal. Use Annex III of the Act as your reference. For the uncertain cases, get a one-hour consultation with a privacy or AI lawyer; in 2026 this costs 300 to 600 euros for a qualified specialist. Document the rationale for each classification, not just the conclusion.
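A first pass at classification can be expressed as a short decision helper. This is a triage sketch, not legal advice: the three questions paraphrase the Annex III triggers discussed above, and any case you are unsure about still goes to the lawyer.

```python
# First-pass triage sketch, not legal advice. The questions paraphrase the
# Annex III triggers discussed in this guide; uncertain cases go to a lawyer.

def triage_risk_category(
    influences_protected_decision: bool,  # hiring, firing, credit, education
                                          # assessment, access to essential services
    is_safety_component: bool,            # safety component of a regulated product
    interacts_or_generates: bool,         # user-facing chat, or synthetic content
                                          # that could be mistaken for human-made
) -> str:
    if influences_protected_decision or is_safety_component:
        return "high"     # full six-document obligation set applies
    if interacts_or_generates:
        return "limited"  # transparency duties apply
    return "minimal"      # nothing beyond voluntary codes of conduct

# The CV screener from earlier: influences a hiring decision, so high risk.
assert triage_risk_category(True, False, False) == "high"
# The internal meeting-notes summarizer: none of the triggers, so minimal risk.
assert triage_risk_category(False, False, False) == "minimal"
```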

Weeks 5 to 8: close the gaps on high-risk systems

For each high-risk system, work through the six-document list earlier in this guide. In most cases, three of the six documents are supplied by the vendor and you retain copies. The other three (your risk management, your oversight arrangements, your deployer-side technical documentation) you write yourself.

Weeks 9 to 12: transparency and oversight for limited risk

For every limited-risk system, add disclosure where it is missing. Update chatbot scripts, product descriptions, image generation workflows. Write a one-page internal AI use policy that covers employee-facing AI and expected human review. This is cheap and it closes the most common audit finding.

What your vendor is responsible for versus what you are responsible for

The Act draws a sharp line between providers (who put AI systems on the market) and deployers (who use them in their operations). Most SMBs are deployers almost all of the time. Understanding the line prevents you from paying for compliance work that is not your legal responsibility and prevents you from assuming the vendor covered something they did not.

The provider owns the technical documentation, the conformity assessment, the CE marking (where applicable), the post-market monitoring of how their system performs in the wild, and the registration in the EU database for high-risk systems. If you are buying an AI process tool from a vendor, these obligations are theirs, and the vendor's ability to answer your questions about compliance is itself a signal of their seriousness.

The deployer owns the use

The deployer owns the specific deployment: the intended use in their context, the human oversight arrangement, the data they feed into the system, the instructions they give their employees, the transparency notices to affected individuals. A vendor cannot take on these obligations for you. They live inside your operation by definition.

  • Ask the vendor for: technical documentation extract, data governance description, accuracy and limitation statements, instructions for use, compliance statement or EU declaration of conformity for high-risk systems.
  • You retain and produce: risk assessment in your context, human oversight arrangement, internal use policy, transparency notices to your users or customers, logs of how the system actually runs in your operation.
  • When responsibility blurs: if you fine-tune a general-purpose model on your data, you may become a provider of a new high-risk system. This is the most common unintended escalation we see.

Frequently asked questions

Does the EU AI Act apply to non-EU companies?

Yes, if the AI system's output is used in the EU. A US-based AI vendor selling a CV-screening tool to a French company is in scope. A Canadian SaaS company offering a tool to German customers is in scope. The test is where the output is used, not where the company is headquartered. This is the same extraterritorial logic as GDPR.

What counts as 'deploying' an AI system under the Act?

Using an AI system under your own authority in the course of a professional activity. Buying a tool and letting your team use it makes you a deployer. Embedding a third-party API into your customer-facing product probably makes you a deployer of the underlying model and a provider of your own product. Personal, non-professional use is excluded.

Is the August 2, 2026 deadline likely to slip?

As of April 2026, the Commission has signalled the timeline is holding. Industry lobbying for a delay has been active, particularly around general-purpose AI model rules, but the Annex III high-risk deadline is politically harder to move. Plan as if the deadline holds. A late delay does not retroactively give you compliance.

How do I know if a specific AI process tool is high-risk for my use case?

Cross-reference two things: the tool's function (what it does) and the context of use (what decision it influences). Classification follows the use, not the tool. If the tool influences hiring, firing, credit, education assessment, or access to a service, it is likely high risk in that deployment. If it helps a human summarize documents or draft emails, it is almost certainly limited or minimal risk.

Do we need a Data Protection Officer for AI Act compliance?

No. The AI Act does not require a DPO. It does define its own role, the person responsible for human oversight of high-risk systems, but that is not a formal officer position and can sit with the ops lead or CTO. A DPO requirement may arise from GDPR independently, depending on your processing.

Can our existing GDPR documentation cover AI Act obligations?

Partially. Data governance documentation overlaps, and a record of processing activities can form the spine of an AI system inventory. But the AI Act requires artifacts GDPR does not: risk management for AI-specific risks, technical documentation of the system, human oversight arrangements, logging of system operation. Treat them as overlapping but distinct regimes.
