AI Security and Responsible Use: A Practical Guide for Any Small Business

AI and data protection for small businesses

AI can be genuinely useful in a small business, but only if you use it with good judgement. This page is a plain-English guide to using AI safely and responsibly, without turning it into a corporate performance.

You’ll also find a copy-and-paste policy template you can use as a starter, including simple instructions on how to adapt it to your own business.

Note: This is not legal advice. It’s practical guidance based on real implementation. If you operate in a regulated industry, or you handle high-risk personal data, get professional advice and tailor accordingly.

How to use this page

  1. Read the short guidance sections to understand the principles.
  2. Copy the policy template and replace anything in [SQUARE BRACKETS] with your details.
  3. Roll it out with a short approved tool list, plus a simple “never paste” rule.

The one principle that prevents most problems

If you take one thing from this page, take this:

Do not put personal or confidential data into AI tools unless you have a clear reason, a clear process, and a clear owner.

Most small businesses can get huge value from AI without using names, addresses, phone numbers, payment details, medical details, HR issues, or anything else sensitive.

In practice, this usually means:

- keeping a short list of approved tools, with a named owner
- following a simple “never paste” rule for sensitive data
- anonymising by default, and putting any exceptions in writing

What AI security means in a small business

When people hear “AI security” they often imagine an enterprise IT team and a six-figure budget. Most of it is simpler than that.

AI security in a small business is mostly about two things: controlling what information goes into AI tools, and checking what comes out before anyone relies on it.

These are the controls that do the heavy lifting:

- a short approved tool list
- a “never paste” rule for sensitive data
- anonymisation as the default
- human review of anything that affects customers, finances, or safety
- a simple, blame-free way to report mistakes

It’s not glamorous, but it’s what lets you use AI confidently.

Common mistakes to avoid

1) Copying real client details into prompts “just to save time”

This is usually how problems start. People are busy, they paste too much, and suddenly personal data has been shared in the wrong place.
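One cheap guardrail is a quick screen before anything gets pasted. The sketch below is illustrative only: the patterns are assumptions that catch a few common formats, and they are no substitute for the judgement this guide asks for.

```python
import re

# Illustrative "never paste" patterns. These are assumptions for demonstration:
# they catch common formats only, and will miss plenty of sensitive data.
PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK mobile number": re.compile(r"\b07\d{3}\s?\d{6}\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "possible API key": re.compile(r"\b(?:sk|pk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return a list of warnings for data that should not go into an AI tool."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

warnings = screen_prompt("Draft a reply to jane@example.com, card 4532 0151 1283 0366")
# -> ['email address', 'card number']
```

A check like this works best as a prompt-before-you-paste habit, not a safety net: if it flags anything, the right fix is to anonymise the text, not to reword it until the check passes.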

2) Using lots of tools with no oversight

If everybody uses different AI apps, you lose control of where information goes. A short approved list is both safer and easier.

3) Treating AI output as trustworthy by default

AI can sound confident and still be wrong. You need a simple rule: if it matters, a human checks it.

4) Writing a policy but not changing behaviour

The policy matters, but what really matters is training, workflow guardrails, and making the safe approach the easy approach.

5) Being vague about exceptions

Some businesses pretend exceptions never happen. In reality, exceptions are sometimes necessary. The key is to make them explicit, limited, and approved, rather than casual.

Copy-and-paste policy template

This template is designed as a starter. It’s written for a typical small business and uses plain English.

Instructions:

  1. Copy the entire template below into a document.
  2. Replace anything in [SQUARE BRACKETS] with your own details.
  3. Delete any sections that do not apply to your business.
  4. Set an “Approved AI Tools” list, even if it’s only one tool to begin with.
  5. Share it with staff, then do a short briefing so everyone understands the “never paste” rules.

Start copying from here:

AI SECURITY AND RESPONSIBLE USE POLICY (SMALL BUSINESS TEMPLATE)
Version: [1.0]
Effective date: [DD Month YYYY]
Policy owner: [Name, role]
Approved by: [Name, role]
Next review date: [DD Month YYYY]

1. PURPOSE
This policy explains how [COMPANY NAME] uses Artificial Intelligence (AI) tools safely, responsibly, and in line with confidentiality and data protection obligations. It is designed to protect our customers, our staff, our suppliers, and our business.

2. SCOPE
This policy applies to:
- all employees, contractors, and temporary staff of [COMPANY NAME]
- all AI tools used for business purposes, including chat assistants, content tools, transcription tools, image tools, and automation services
- any device used for company work, including personal devices where permitted

3. PRINCIPLES
We use AI to support people, not replace responsibility.
We minimise data: if we do not need the detail, we do not use it.
We anonymise wherever possible.
We keep human oversight for anything that impacts customers, finances, safety, or legal obligations.
We only use approved AI tools for business work.
We report mistakes quickly, without blame, so we can fix the system and prevent recurrence.

4. DEFINITIONS
AI tool: any software that generates, transforms, or analyses content using machine learning models.
Personal data: information that can identify a living person (directly or indirectly).
Confidential data: business-sensitive information including pricing, contracts, supplier terms, internal documents, strategies, or customer details.
Restricted data: special category personal data (for example health), financial account details, passwords, security codes, identity documents, or anything that would cause harm if disclosed.

5. APPROVED AI TOOLS
Only approved tools may be used for company work.
Approved tools list (maintained by the policy owner):
- [TOOL 1]
- [TOOL 2]
- [TOOL 3]

Any new AI tool must be reviewed and approved before use. Approval must consider:
- how data is handled, stored, and retained
- security controls and access management
- the supplier’s terms of service, including any training or reuse of inputs
- deletion, export, and audit capabilities

6. DATA YOU MUST NOT INPUT (UNLESS EXPLICITLY APPROVED IN WRITING)
You must never paste, upload, or share the following in any AI tool unless the policy owner has approved it in writing for a specific task:

- passwords, access codes, API keys, authentication tokens
- bank details, card details, payment links, or direct debit information
- copies of passports, driving licences, identity documents
- medical or health information
- HR information, disciplinary matters, or sensitive staff information
- full customer records including name + address + contact details in the same prompt
- supplier contracts or confidential supplier pricing terms
- any information labelled confidential by a client or supplier

Default rule:
If you would not put it on a public noticeboard, do not put it into an AI tool.

7. ANONYMISATION AND MINIMISATION (DEFAULT BEHAVIOUR)
Where AI support is useful, we remove identifying details:
- names become “the customer” or “the supplier”
- addresses become “a property in [region]”
- dates become approximate if exact dates are not required
- order numbers and account references are removed

We only include the minimum detail needed to complete the task.

8. HUMAN REVIEW AND APPROVAL
AI outputs must be reviewed by a person before they are:
- sent to customers or suppliers
- used in quotes, contracts, or legal documents
- used in pricing, financial decisions, or credit control
- used in safety-critical guidance
- published publicly under the company name

The reviewer is responsible for ensuring the output is accurate, appropriate, and compliant.

9. ACCURACY AND QUALITY CONTROL
AI tools can produce plausible-sounding errors.
Staff must:
- check facts, figures, and claims before use
- avoid stating assumptions as facts
- use primary sources for technical, legal, or medical claims
- escalate internally when unsure

10. RECORD KEEPING AND AUDIT
[COMPANY NAME] maintains appropriate records of:
- approved AI tools
- who has access to them
- training completed
- approved exceptions
- any incidents or suspected incidents

Where AI is used to support customer work, we keep enough context to explain decisions if questioned, without storing unnecessary personal data.

11. INCIDENT MANAGEMENT
If you suspect data has been shared incorrectly, or AI tools have been used in breach of this policy, report it immediately to:
[NAME / ROLE]
[EMAIL]
[PHONE]

We will:
- contain the issue and stop further exposure
- assess what data may be affected
- record what happened and actions taken
- notify relevant parties where legally required
- improve controls to prevent recurrence

12. TRAINING AND AWARENESS
All staff who use AI tools for business work must complete basic training covering:
- data handling and confidentiality
- anonymisation and minimisation
- how to review AI outputs properly
- how to report issues quickly

Training will be refreshed at least annually, or sooner if tools or risks change.

13. EXCEPTIONS
Exceptions are allowed only if:
- there is a clear business reason
- risks are understood and mitigated
- approval is provided in writing by the policy owner
- scope is limited to what is necessary

14. POLICY REVIEW
This policy will be reviewed:
- at least every [12] months
- after any significant incident
- when introducing new AI tools or major new AI use cases

Signed:
[NAME, ROLE]
[DATE]

Stop copying here.
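The substitutions in section 7 of the template can be partly scripted. The sketch below is a rough illustration, not a tested tool: the patterns are assumptions that catch formatted details (emails, order references, simple street addresses), not free-text names, so it supports careful manual anonymisation rather than replacing it.

```python
import re

# Illustrative substitutions matching section 7 of the template.
# Pattern-based redaction catches formats, not free-text names,
# so a human still checks the result before it goes into any AI tool.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[email removed]"),
    (re.compile(r"\border\s?#?\d{4,}\b", re.IGNORECASE), "the order"),
    (re.compile(r"\b\d{1,4}\s+\w+\s+(?:Street|Road|Avenue|Lane)\b",
                re.IGNORECASE), "a property in [region]"),
]

def minimise(text: str) -> str:
    """Apply the default substitutions before text goes near an AI tool."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

print(minimise("Order #48211 for 12 High Street, contact kim@example.com"))
# -> the order for a property in [region], contact [email removed]
```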

A rollout plan that actually works

If you want this to be more than a document, keep it simple:

  1. Pick a short approved tool list, even if it starts with one tool.
  2. Brief staff on the “never paste” rules.
  3. Name a policy owner to handle approvals, exceptions, and incidents.
  4. Review the policy after any incident, and at least annually.

If you want to compare notes

If you’re building AI into your business and you want to compare approaches, I’m happy to share what we’ve learned and point out the common pitfalls.

We always prioritise internal AI work in our own business, so we’re selective about external commitments. That said, I’m always happy to offer advice and compare notes.

You can reach me at david@scottishshutters.co.uk.