Checklist for AI Compliance in UK SMEs

April 15, 2026

AI compliance in the UK isn't governed by a single law but involves adhering to existing regulations such as UK GDPR and the Equality Act 2010. In 2026, businesses must prioritise transparency, accountability, and proper data handling to maintain customer trust and avoid penalties. Here's a quick overview of the key steps:

  • Create an AI Register: List all AI tools in use, including their purpose, data handled, and oversight responsibilities.
  • Classify Risk Levels: Categorise systems into unacceptable, high, limited, or minimal risk, with measures like DPIAs and human oversight for high-risk tools.
  • Draft an AI Use Policy: Define approved tools, data rules, and oversight guidelines in a concise document.
  • Ensure Data Governance: Process only necessary data, introduce human checkpoints, and maintain transparency.
  • Train Staff: Provide role-specific training to reduce risks from "Shadow AI" and ensure compliance.
  • Monitor and Update: Regularly review your AI systems, update risk assessments, and maintain detailed records.

Non-compliance could lead to fines up to £17.5 million or 4% of global turnover. Follow these steps to stay compliant and build trust in your AI systems.

6-Step AI Compliance Checklist for UK SMEs

Step 1: List All AI Systems You're Using

The first step to staying compliant with AI regulations is straightforward but often neglected: create a complete inventory of every AI tool your business relies on, a process often supported by AI consultancy services. This includes obvious systems like chatbots and recommendation engines, as well as less apparent features embedded within other tools. If you don’t know what you’re using, you can’t manage it. With 93% of UK organisations now utilising AI, but only 7% having governance fully in place, this is a critical starting point. Don’t forget to include unofficial tools your employees might be using without approval.

One of the biggest risks for small and medium-sized businesses is Shadow AI - tools that employees use independently of company oversight. Studies reveal that 32% of UK workers use AI tools without their employer’s knowledge, and 44% of businesses in the UK have faced data breaches linked to such unauthorised usage. These breaches can be costly, with the average incident tied to Shadow AI costing around £500,000. To uncover these hidden tools, conduct a direct survey with your teams. Frame it as a chance to identify useful tools, asking them to list any AI systems they use, including personal accounts like free versions of ChatGPT or Claude.

Your next step is to create an AI Register - a centralised document (such as a spreadsheet) that logs key details about each AI tool. This should include the system's name, its purpose, the type of data it handles (whether personal, anonymous, or sensitive), and the name of a human supervisor responsible for monitoring its outputs. This register is crucial for demonstrating accountability, especially if the ICO conducts an audit. It also becomes an essential tool for ongoing risk management and compliance reviews.
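
As a rough sketch, such a register can start life as a simple CSV file. The column names below (`system_name`, `purpose`, `data_type`, `human_supervisor`, `risk_level`) are illustrative, not a prescribed format:

```python
import csv
from pathlib import Path

# Hypothetical columns for an AI Register; adjust to your organisation's needs.
FIELDS = ["system_name", "purpose", "data_type", "human_supervisor", "risk_level"]

def add_entry(register_path, entry):
    """Append one AI tool to the register, creating a header row if the file is new."""
    path = Path(register_path)
    new_file = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(entry)

def load_register(register_path):
    """Read the register back as a list of dictionaries, one per AI tool."""
    with open(register_path, newline="") as f:
        return list(csv.DictReader(f))

add_entry("ai_register.csv", {
    "system_name": "Customer service chatbot",
    "purpose": "Answer routine support queries",
    "data_type": "personal",          # personal, anonymous, or sensitive
    "human_supervisor": "J. Smith",
    "risk_level": "limited",
})
entries = load_register("ai_register.csv")
print(entries[-1]["system_name"])
```

A spreadsheet works just as well; the point is that every tool has a named supervisor and a recorded data type from day one.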

Begin by documenting one key system to establish a process, then expand the register to cover your entire tech stack. Focus first on systems that affect recruitment, customer service, pricing, or any decision-making processes involving people. Even tools that seem low-risk, like those used for drafting content or scheduling, should be included to ensure a full operational audit trail.

Make it a habit to review your AI inventory for an hour every quarter to keep your compliance records up to date and your business safeguarded. As new tools are introduced, add them to your register. This isn’t a one-time task - it’s an ongoing process that provides the foundation for your compliance strategy.

Step 2: Classify AI Systems by Risk Level

The next step is to categorise your AI systems into four risk levels: unacceptable, high, limited, and minimal/low.

Unacceptable Risk

These systems are prohibited outright. Examples include workplace emotion recognition, social scoring by authorities, and AI tools that classify individuals based on sensitive biometric traits like race or religion. While it's unlikely that SMEs will use such systems, it's crucial to be aware of these strict boundaries.

High Risk

High-risk systems have a significant impact on individuals, often involving decisions with legal or similarly weighty consequences. For SMEs, this might include tools for CV screening, credit scoring, insurance pricing, or health triage. If your AI determines who gets hired, approved for a loan, or receives a service, it likely falls into this category. Such systems require strict measures, including:

  • Completing a Data Protection Impact Assessment (DPIA).
  • Ensuring genuine human oversight.
  • Implementing technical logging for accountability.

Accurately identifying and managing high-risk systems is a cornerstone of maintaining compliance.

Limited Risk

Systems in this category have specific transparency requirements. Examples include customer service chatbots, AI-generated marketing visuals, and deepfakes. The primary obligation is to inform users that they are interacting with AI. For instance, a chatbot might display a notice like, "This is an AI chatbot", at the start of the interaction.

Minimal or Low Risk

These systems assist with routine tasks, such as product recommendations, meeting summaries, email drafting, scheduling, or spam filtering. While they must still comply with GDPR principles when handling personal data, the compliance demands are generally less stringent.

Special Considerations

If your AI processes sensitive data - like health information, biometric details, or criminal records - it automatically raises the risk level and warrants additional safeguards.
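
The tiering above can be approximated as a rule-of-thumb lookup, including the sensitive-data escalation. The category lists in this sketch are illustrative examples only, not a legal determination:

```python
# Illustrative rule-of-thumb classifier for the four risk tiers described above.
# These sets are examples, not exhaustive legal categories.
PROHIBITED = {"workplace emotion recognition", "social scoring"}
HIGH_IMPACT = {"cv screening", "credit scoring", "insurance pricing", "health triage"}
TRANSPARENCY_ONLY = {"customer service chatbot", "ai-generated marketing visuals"}
SENSITIVE_DATA = {"health", "biometric", "criminal records"}

def classify(use_case, data_categories=()):
    """Return a provisional risk tier for an AI use case."""
    use_case = use_case.lower()
    if use_case in PROHIBITED:
        return "unacceptable"
    if use_case in HIGH_IMPACT:
        return "high"
    # Sensitive data escalates an otherwise lower-risk tool.
    if any(d.lower() in SENSITIVE_DATA for d in data_categories):
        return "high"
    if use_case in TRANSPARENCY_ONLY:
        return "limited"
    return "minimal"

print(classify("cv screening"))                    # high
print(classify("customer service chatbot"))        # limited
print(classify("meeting summaries", ["health"]))   # high (escalated)
```

Treat the output as a starting point for discussion, not a verdict: borderline cases still need human judgement and, where relevant, legal advice.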

Documentation and Human Oversight

Record each tool's risk level and the required controls in your AI Register. This should include details like DPIAs, human oversight protocols, and review schedules for high-risk systems. The Information Commissioner's Office (ICO) expects this documentation during audits. As The Ai Consultancy explains:

"The ICO doesn't expect perfection. It expects evidence of thoughtful, risk-based decision-making".

Regularly update your risk assessments and ensure they are thorough and honest. Human oversight can reduce a system's risk classification, provided it is meaningful. Under Article 22 of the UK GDPR, decisions made solely by AI without proper human review are deemed high-risk. However, if a qualified human genuinely reviews the AI's output and makes the final decision, the system may be reclassified to a lower risk level. This review process must be genuine - not a mere formality.

Once you've documented risk levels in your AI Register, you're ready to move on to defining an AI acceptable use policy in Step 3.

Step 3: Write an AI Acceptable Use Policy

Now that you've assessed your AI readiness and classified your risks, it's time to formalise your approach with an AI acceptable use policy. This doesn’t need to be a complex legal document. Instead, aim for something simple and practical that your team can easily understand and follow.

Keep it concise. A one-page document is a great starting point. Currently, over half of UK desk workers lack clear guidance on AI usage, and nearly a third are already using AI tools without oversight. A short, clear policy is far more actionable than a lengthy, unread manual. As LogiSam puts it:

"AI governance for a small business is not a 200-page policy manual or a team of consultants... It is a proportionate set of controls that match your size, your risk, and your regulatory environment".

Your policy should cover five key areas:

  • Approved tools list: Specify which AI tools are allowed, such as enterprise versions of ChatGPT or Microsoft Copilot, and ensure they include Data Processing Agreements.
  • Data handling rules: Use a traffic light system to classify data. For example:
    • Green data (e.g., public information or marketing copy) can be used in any approved tool.
    • Amber data (e.g., internal notes or B2B contact details) must only be used with approved enterprise tools.
    • Red data (e.g., health records, financial details, or legally privileged material) should never be entered into any AI system.
  • Mandatory human oversight: Require human review for critical decisions influenced by AI, such as recruitment or pricing. This ensures compliance with UK GDPR Article 22 and the Data (Use and Access) Act 2025.
  • Transparency measures: Make it clear that staff must inform individuals when AI processes their data.
  • AI Lead designation: Assign someone to own the policy and handle regulatory inquiries.
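
The traffic light rules can be turned into a simple pre-flight check before any data reaches an AI tool. The tool names and tiers below are hypothetical placeholders for your own approved list:

```python
# Illustrative pre-flight check for the traffic-light data rules.
# Tool names and tiers are hypothetical placeholders.
APPROVED_ENTERPRISE = {"ChatGPT Enterprise", "Microsoft Copilot"}
APPROVED_ANY = APPROVED_ENTERPRISE | {"ChatGPT Free"}

def may_submit(data_label, tool):
    """Return True only if this data class may be entered into this tool."""
    if data_label == "red":
        return False                       # never enter red data into any AI system
    if data_label == "amber":
        return tool in APPROVED_ENTERPRISE # approved enterprise tools only
    if data_label == "green":
        return tool in APPROVED_ANY        # any approved tool
    raise ValueError(f"Unknown data label: {data_label}")

print(may_submit("green", "ChatGPT Free"))      # True
print(may_submit("amber", "ChatGPT Free"))      # False
print(may_submit("red", "Microsoft Copilot"))   # False
```

Even if you never automate the check, writing the rules this explicitly is a useful test of whether your policy is unambiguous.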

Streamline the process for requesting new AI tools. Unauthorised AI use can lead to data breaches, with 44% of UK businesses reporting incidents linked to such tools - each costing an average of £500,000. To avoid this, create a simple approval system. Employees should be able to submit a request via a form and receive a response within one business day. This quick "when in doubt" process helps prevent risky, uninformed decisions.

Finally, schedule quarterly reviews of your policy. AI technology and regulations change quickly, so set aside one hour every three months to update your approved tools list and ensure your policy stays compliant. Document these updates in your AI Register, alongside your risk assessments and oversight protocols. With this policy in place, you’ll be set to address data governance and transparency in Step 4.

Step 4: Set Up Data Governance and Transparency

Once you've established your acceptable use policy, the next step is to manage the flow of AI data in a way that builds trust and ensures transparency. By combining your earlier inventory and risk assessments with solid governance practices, you can safeguard the integrity of your AI systems.

Limit the data you process. Work only with the data that's absolutely necessary, and strip out identifiers like names or email addresses before feeding information into AI systems. A key point to remember: personal data collected for one purpose - such as delivering a service - cannot simply be repurposed for something else, like training AI for marketing, without first establishing a lawful basis or conducting a compatibility assessment. Start by processing only the data that’s essential.
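
As a minimal illustration of stripping identifiers, the sketch below redacts email addresses and UK-style phone numbers with regular expressions. Real pseudonymisation needs far more than pattern matching, so treat this as a first pass only:

```python
import re

# Minimal redaction pass for obvious identifiers before text reaches an AI tool.
# Regexes like these catch only the easy cases; names, addresses, and indirect
# identifiers need more sophisticated handling.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
UK_PHONE = re.compile(r"\b(?:\+44\s?\d{4}|\(?0\d{4}\)?)\s?\d{3}\s?\d{3}\b")

def redact(text):
    """Replace obvious identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = UK_PHONE.sub("[PHONE]", text)
    return text

print(redact("Contact jane.doe@example.com or 01632 960 123."))
# Contact [EMAIL] or [PHONE].
```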

Introduce human checkpoints to verify outputs. Accuracy is critical, especially for sensitive applications like recruitment, credit assessments, or healthcare decisions. Set up mandatory human reviews for high-stakes outputs and document these interventions. The ICO provides clear guidance: "Human intervention should involve a review of the decision, which must be carried out by someone with the appropriate authority and capability to change that decision". This means your staff must have genuine authority to override AI-generated recommendations, not just approve them without scrutiny.
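
One lightweight way to make that authority visible is to record each checkpoint as a structured entry capturing both the AI's recommendation and the reviewer's final call. The field names here are hypothetical:

```python
from dataclasses import dataclass

# Illustrative human-checkpoint record for a high-stakes AI output.
# The point is that the reviewer's decision is final and the
# intervention is documented.
@dataclass
class ReviewedDecision:
    ai_recommendation: str
    reviewer: str
    final_decision: str

    @property
    def overridden(self) -> bool:
        # True whenever the human changed the AI's recommendation.
        return self.final_decision != self.ai_recommendation

decision = ReviewedDecision(
    ai_recommendation="reject application",
    reviewer="A. Patel",
    final_decision="invite to interview",   # reviewer exercised real authority
)
print(decision.overridden)  # True
```

If your logs show reviewers never overriding the AI, that is exactly the rubber-stamping pattern regulators are wary of.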

Clearly label AI outputs and disclose uncertainties. Use plain language or visual markers to indicate when an output comes from AI, and be transparent about any uncertainty or confidence levels in the system’s predictions. Keep audit trails that show when and how human intervention has modified an AI decision. With 29% of UK businesses now using AI - rising to 40% among medium-sized firms - clear communication about AI processes is no longer optional.

Before deploying AI for large-scale data processing or systematic profiling, conduct a Data Protection Impact Assessment (DPIA). This legal requirement isn't just about compliance; it helps avoid costly mistakes. The ICO can impose fines of up to £17.5 million or 4% of global turnover for breaches of data protection rules. By implementing these governance measures, you’ll be well-prepared for the ongoing monitoring and adjustments covered in the next step.

Step 5: Train Staff and Keep Records

Once you've established strong data governance, the next step is to focus on equipping your team with the right training and maintaining thorough records to ensure compliance stays on track.

Training and documentation are the backbone of effective AI governance. Here's a startling statistic: only 7.5% of UK workers have received extensive AI training from their employers, yet 32% are using AI tools without their employer's knowledge. This so-called "shadow AI" can create serious compliance risks. Regulators may hold you accountable for these lapses, even if you weren't aware the tools were being used.

Tailor your training to specific roles within your organisation. For example:

  • A 30-minute session for general staff can cover the essentials, like your AI Acceptable Use Policy and basic data handling rules.
  • Managers can benefit from enablement sessions to help them oversee reviews and ensure adherence to policies.
  • For your AI Governance Lead and technical staff, offer advanced training that dives into regulatory requirements and vendor risk management.

Technical teams, in particular, need to understand the limitations of AI tools. They should be able to spot issues like hallucinations in legal documents or bias in recruitment processes.

On the documentation front, keep everything organised in a central "AI Compliance" folder. This should include critical items like your AI Register, data flow diagrams, DPIAs, vendor security checks, decision logs, and training records (including dates and attendees). Just like your AI Register, detailed training records are essential. They not only demonstrate your commitment to compliance but also support accountability and continuous improvement.

The numbers speak for themselves: organisations with mature AI governance report 23% fewer AI-related incidents. Yet, 54.5% of UK desk workers still lack a clear AI policy to guide them. By implementing role-specific training and keeping detailed records, you're not just meeting compliance requirements - you’re creating a safer and more effective environment for AI use in your business.

To stay on top of things, schedule monthly 15-minute leadership sessions for reviewing new tools and quarterly 60-minute governance updates to refresh risk registers and training plans.

Step 6: Monitor AI Systems and Review Regularly

Keeping your AI systems compliant isn’t a one-time task - it’s an ongoing process that demands regular attention. The ICO expects organisations to actively monitor their AI systems, maintain detailed audit trails, and integrate AI oversight into existing GDPR practices. These steps naturally build on your established AI Register and prior risk assessments.

Set aside an hour every quarter to review and update your AI Register. Use this time to re-evaluate risk assessments, conduct AI feasibility studies for new updates, confirm that your systems are processing data correctly, and refresh your DPIAs if there are changes in data flows or system functionality. During these reviews, check for any newly implemented AI tools and ensure all documentation reflects the latest updates.

In addition to these manual reviews, implement automated audit trails to log all actions taken by your AI systems. These logs act as critical evidence of accountability, which the ICO will expect to see. Make sure your systems capture all key activities and start automated logging as soon as possible. The stakes are high - non-compliance can result in fines of up to £17.5 million or 4% of your annual global turnover, whichever is greater. With mandatory registration for high-risk AI systems potentially arriving by 2027, laying the groundwork for effective monitoring now is crucial.
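
A minimal sketch of such an audit trail, assuming JSON-lines output via Python's standard logging module (the field names are illustrative, not a required schema):

```python
import json
import logging
from datetime import datetime, timezone

# Minimal append-only audit trail for AI system actions, written as JSON lines.
# Capture whatever fields your oversight process actually needs.
logger = logging.getLogger("ai_audit")
handler = logging.FileHandler("ai_audit.log")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_ai_action(system, action, reviewer=None, overridden=False):
    """Record one AI action, including any human intervention, as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "action": action,
        "human_reviewer": reviewer,     # who checked the output, if anyone
        "overridden": overridden,       # did the reviewer change the AI's decision?
    }
    logger.info(json.dumps(record))
    return record

entry = log_ai_action("CV screener", "shortlisted candidate", reviewer="J. Smith")
print(entry["overridden"])  # False
```

An append-only file like this is crude, but it demonstrates the habit the ICO looks for: every consequential AI action leaves a timestamped, attributable trace.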

Human oversight remains a cornerstone of compliance. Article 22 of the UK GDPR prohibits decisions made solely by automated systems that significantly affect individuals unless specific exemptions apply. To meet this requirement, train human reviewers to override AI outputs when necessary and ensure they have the authority to intervene meaningfully. The Digital Compliance Academy underscores this point:

"To avoid Article 22 issues, your human reviewers must be trained, have the authority to override the AI, and actually exercise that authority in meaningful ways".

This means regular checks are essential to confirm that staff are carefully reviewing AI outputs rather than simply rubber-stamping them.

Lastly, don’t forget to keep your privacy notices up to date. If you introduce a new AI tool or modify an existing one, update your privacy policy to explain how the AI is being used and the logic behind its decisions. Struan.ai highlights the importance of clarity:

"businesses must also be able to explain, in plain language, how an AI tool reached a particular decision".

To streamline your compliance efforts, centralise all relevant documents - such as your AI Register, data flow maps, DPIAs, and vendor agreements - in one "AI Compliance" folder. This ensures you can provide the ICO with quick access to all necessary records.

Compliance Checklist for UK SMEs

Stay on top of your responsibilities with this quick self-assessment tool. This checklist is designed to help you confirm that you've addressed the key obligations for your AI systems, regardless of whether they fall under minimal, limited, or high-risk categories as outlined by UK GDPR and the Data (Use and Access) Act 2025.

If your AI system processes personal data, there are some universal requirements you need to meet. These include maintaining an AI Register, classifying risk levels, drafting an acceptable use policy, establishing data governance processes, training staff, and conducting regular monitoring.

The ICO doesn't expect perfection but looks for evidence of thoughtful, risk-based decisions.

The main difference between risk levels lies in how detailed and formal your compliance efforts need to be. High-risk systems - such as those processing biometric data or making fully automated legal decisions - require Data Protection Impact Assessments (DPIAs), explicit consent, and safeguards like transparency, the right to challenge, and effective human oversight. Limited-risk systems, such as AI-assisted recruitment tools, can rely on Legitimate Interests but must include a Legitimate Interest Assessment (LIA) and clear explanations of decision-making processes. Meanwhile, minimal-risk tools, like inventory management systems, simply need to adhere to standard GDPR principles without requiring complex balancing tests.

Below is a table to help you ensure you've covered all the necessary steps for your AI systems. If you spot any gaps, update the relevant compliance measures immediately. This approach makes it easier to identify and address any missing elements.

Compliance Checklist Table

Risk Level     Inventory   Classification   Policy Creation   Data Governance   Training   Monitoring
Minimal Risk   ✓           ✓                ✓                 ✓                 ✓          ✓
Limited Risk   ✓           ✓                ✓                 ✓                 ✓          ✓
High Risk      ✓           ✓                ✓                 ✓                 ✓          ✓

To stay organised, keep all compliance documents - such as your AI Register, data flow maps, DPIAs, vendor agreements, and training logs - together in a single 'AI Compliance' folder. This will help you respond quickly if the ICO requests evidence and make quarterly reviews much more straightforward.

Next Steps

You now have a clear path to ensuring AI compliance, covering essentials like your AI Register, risk classification, policy creation, data management, staff training, and routine monitoring. The key is to document your decisions in a way that is both concise and easy to follow.

To move forward, take immediate, actionable steps. Kick things off with a 48-hour challenge: finalise your AI Register, map out critical data flows, and set up a centralised "AI Compliance" folder. This will give you a solid foundation and immediate operational clarity. After that, set up a quarterly one-hour review to ensure your compliance framework stays in sync with your AI activities.

If you encounter technical hurdles or find Article 22 risks difficult to interpret, don’t hesitate to bring in professional support. Services like AI Strategy Development can assist small and medium-sized enterprises with tricky tasks like vendor assessments, conducting Data Protection Impact Assessments, and establishing the human oversight regulators expect. Expert guidance can help you refine and speed up your compliance efforts.

FAQs

Do I need a DPIA for every AI tool we use?

Not every AI tool requires a Data Protection Impact Assessment (DPIA). Typically, you’ll need a DPIA when introducing activities that could significantly impact individuals' rights. This includes scenarios like large-scale processing of sensitive data or automated decision-making.

The key is to assess the specific context of data processing for each AI tool. If the tool involves high-risk activities, a DPIA is likely necessary. For tools with lower risks, you might not need one. Always evaluate the potential impact before deciding.

What qualifies as “meaningful” human oversight under UK GDPR Article 22?

Under UK GDPR Article 22, decisions made entirely by automated systems that have legal or major impacts on individuals must involve real human participation. This means a person must actively review and shape the decision-making process, rather than simply approving or endorsing the automated results without proper scrutiny.

How can we stop staff using “shadow AI” without slowing work down?

To reduce the use of "shadow AI" while keeping productivity intact, focus on a few key areas: clear guidelines, approved tools, and fostering a supportive work environment. Make it clear which AI tools are permitted and ensure they comply with relevant standards. Encourage open discussions about the need for AI tools, so employees feel comfortable sharing their requirements. Offering training on responsible AI usage and stressing the importance of compliance can also help curb unauthorised use without compromising efficiency.
