AI Virtual Assistant Policy

Version | Author | Next Review Date | Notes
V1 (July 2025) | Emma Kitcher, Data Protection Officer | February 2026 | New draft
V2 (October 2025) | Emma Kitcher, Data Protection Officer | October 2026 | Added Section 11 (Purposeful Use). Added reference to the new criminal offence created under the Data (Use and Access) Act 2025 (Part 7, Section 141) for creating or requesting fake or altered intimate images.
V3 (March 2026) | Caroline Oliver | October 2026 | Extended the policy to cover Holiday Clubs

Contents

    Purpose and Scope
    What are AI Virtual Assistants?
    Appropriate Use
    Permitted Use
    Restrictions / Prohibited Use
    Inaccurate Information
    Legal and Ethical Concerns
    Copyright and Intellectual Property
    Quality and Monitoring
    Incidents and Oversight
    Purposeful Use in Early Years Settings

Purpose and Scope

The purpose of this policy is to establish guidelines for the appropriate use of AI language models, such as ChatGPT and Google Gemini (formerly Bard), within Hopscotch Nurseries and Holiday Clubs. These guidelines apply to all employees, contractors, and third parties who have access to and utilise these systems.

What are AI Virtual Assistants?

These systems, such as ChatGPT, are advanced computer programs capable of understanding and generating human-like text based on vast amounts of data they've been trained on.
They assist users in various tasks, from answering questions and providing recommendations to generating creative content. However, their complexity and potential impact raise important ethical, legal, and social questions that this policy aims to address for our organisation.

Appropriate Use

Defining the acceptable and prohibited use of AI tools within our organisation is essential for various reasons. Firstly, it ensures alignment with ethical and moral standards, preventing misuse that could infringe upon privacy, human rights, or other ethical principles. Secondly, clear policies mitigate risks associated with improper AI usage, reducing legal liabilities, reputation damage, and financial losses. Thirdly, they protect stakeholders' interests, including employees, customers, and partners, by establishing boundaries that safeguard rights, privacy, and well-being.
Remember that any information you put into those tools can become publicly available. If you wouldn’t put something on Facebook (for example), don’t put it into these tools.

Permitted Use

Generative AI language models may be used within the company for the following purposes:
  • Internal Communication: For example, developing content for briefings, policies or emails for internal communication.
  • Training and Development: For example, creating slides for a training session and developing comprehension tests.
  • Non-Critical / Non-Clinical Research: For example, asking questions to support your work such as “What are some good steps toward a secure working environment?” or “How do I reply to an email from an angry colleague?”
Staff members are responsible for ensuring that anything created using these systems is accurate and of good quality, so care must be taken.

Restrictions / Prohibited Use

AI language models may NOT be used within the company for the following purposes:
  • Formal Client-Facing Communications: Avoid using these systems for direct communication with clients or external stakeholders without explicit approval.
  • Protected / copyright information: For example, information that is not in the public domain and that might be a person or company’s intellectual property should not be entered into the tool.
  • Sensitive Information: You must not use the tool to generate or process sensitive information, such as confidential company data, financial information, or personal data (including professional contacts).
  • Fake or Altered Images: In accordance with the Data (Use and Access) Act 2025, it is a criminal offence to create, request, or facilitate the generation of fake or altered intimate images of a person without their consent.

Inaccurate Information

At the moment, these systems are notorious for providing inaccurate information, as well as simply fabricating information in response to a request. Sometimes these responses will be fully supported with references (such as court cases), but the information will be fabricated.
Two New York lawyers have been fined after submitting a legal brief with fake case citations generated by ChatGPT.
Mr Schwartz, of law firm Levidow, Levidow & Oberman, admitted using the chatbot to research the brief in a client's personal injury case against airline Avianca. He had used it to find legal precedents supporting the case, but lawyers representing the Colombian carrier told the court they could not find some examples cited - understandable, given they were almost entirely fictitious. Several of them were completely fake, while others misidentified judges or involved airlines that did not exist.
When using these tools, crafting suitable ‘prompts’ is crucial for obtaining accurate and relevant responses from AI language models.
Be Specific and Clear:
  • Clearly specify your request or question.
  • Provide as much detail as needed for the model to understand the context.
Good Example:
"Generate a one-page summary of the key findings from the market research report on renewable energy trends in Europe published by XYZ Research Firm in 2023. Include statistical data on solar and wind energy adoption rates, government policies influencing renewable energy investments, and emerging technologies in the sector."
Bad Example:
"Can you give me some information about renewable energy in Europe?"
Use Examples:
  • Include examples relevant to your query.
  • Demonstrate the format or type of response you are expecting.
Good Example:
"Please provide three sample sentences demonstrating the use of the word 'ambiguous' in different contexts. Include one sentence illustrating its usage in a technical context, one in a casual conversation, and one in a formal written document."
Bad Example:
"Can you give me information about the word 'ambiguous'?"
Iterative Refinement:
  • If the initial response is not satisfactory, iterate and refine your prompt.
  • Experiment with different phrasing to find the most effective way to convey your request.
User:
"Can you tell me about the effects of climate change?"
AI:
"Climate change refers to significant changes in global temperature and weather patterns..."
User:
"The initial response was a bit general. Can you provide specific examples of how climate change impacts biodiversity in rainforests?"
AI:
"Certainly. Climate change affects rainforests by altering rainfall patterns, leading to habitat loss for species like amphibians and insects..."
User:
"I appreciate the response. Could you include data on deforestation rates and species extinction due to climate change in rainforests?"
AI:
"In addition to habitat loss, deforestation rates have accelerated due to climate change, contributing to the decline of species like the Amazonian tree frog..."
Provide Context:
  • Add context to your prompt, especially if the question is complex or requires background information.
  • Help the model understand the specific scenario or setting.
User:
"I'm writing a report on the impact of artificial intelligence on job automation in the manufacturing sector. Can you provide insights into how AI-driven automation has affected employment rates and job roles within manufacturing plants over the past decade?"
Specify the Format:
  • If you want a list, a paragraph, or any specific format, explicitly mention it in your prompt.
User:
"I need a bulleted list summarising the key features of the latest smartphone models released by Apple, Samsung, and Google in 2023. Please include details such as camera specifications, screen size, battery life, and price range for each model."
Ask for Step-by-Step Responses:
  • Break down complex queries into smaller, more manageable parts.
  • Ask for step-by-step responses to ensure clarity.
Control Output Length:
  • Use system instructions to control the length of the response.
  • Specify whether you want a brief summary or a more detailed answer.
Correct and Guide:
  • If the initial response is off, provide corrective feedback in subsequent prompts.
  • Guide the model toward the desired answer by incorporating hints in your prompts.
User:
"I'm looking for recipes for vegetarian lasagna."
AI:
"Sure! Here's a recipe for classic meat lasagna with layers of ground beef, marinara sauce, and melted cheese."
User:
"Actually, I'm specifically interested in vegetarian lasagna recipes without any meat ingredients."
AI:
"I see, my apologies for the confusion. Here's a delicious vegetarian lasagna recipe featuring layers of fresh vegetables, ricotta cheese, and marinara sauce."
Avoid Ambiguity:
  • Minimise ambiguity in your prompts by being explicit in your language.
  • Clarify any terms or concepts that could be interpreted in multiple ways.
Consider Ethical and Inclusive Language:
  • While AI models can learn from the language used in their training data, they are also susceptible to inheriting biases present in that data and could potentially provide users with responses that reinforce harmful stereotypes.
  • Be mindful of the language used in prompts to ensure they align with ethical and inclusive standards.
  • Avoid biased or discriminatory language.
Initial Prompt:
"Why is it a bad idea to hire older employees?"
Revised Prompt:
"Can you discuss the challenges and opportunities associated with hiring employees from diverse age groups?"

Legal and Ethical Concerns

Do not use the AI language models for any activities that violate applicable laws, regulations, or ethical standards.
Do not mention the organisation's, customers', colleagues', or children's names in any of your queries. These should be redacted before any query is submitted.

Copyright and Intellectual Property

Protected / copyrighted input content: Information that is not in the public domain and that might be a person's or company's intellectual property should not be entered into the AI language model. These models may retain this information and make it publicly available.
Protected / copyrighted output content: AI-generated content may be subject to copyright, and employees should be aware of the intellectual property rights associated with the output.
Attribution: When sharing or using AI-generated content externally, ensure proper attribution (giving credit) and compliance with copyright laws.

Quality and Monitoring

Fact-Check
Verify the accuracy of information generated by the tool by cross-referencing with reliable sources.
Ensure that statistics, figures, and claims are supported by credible evidence.
Spelling and Language Consistency
Pay attention to spelling, grammar, and punctuation.
Look for consistency in language usage, including adherence to US or UK English spelling conventions based on the target audience. For example:

UK | US
CV | Resume
Holiday | Vacation
Centre | Center
Review the language to ensure it is appropriate for the intended audience, considering factors such as tone, formality, and cultural sensitivity.
Avoid using language that may be offensive, discriminatory, or inappropriate for the context or audience.
Removing AI Response Elements
Be sure to remove boilerplate phrases such as "Sure! Here is a list..." or "Certainly, here are some bullet points..." to improve the natural flow and readability of the content.
Edit responses to sound more natural and conversational, removing robotic or formulaic elements characteristic of AI-generated content.
Human Oversight and Final Approval
For important activities, incorporate human oversight into the content creation process to ensure quality, relevance, and appropriateness.
Assign responsibility for final approval to knowledgeable individuals who can assess the suitability and accuracy of the content before dissemination.

Incidents and Oversight

Incident Reporting: Establish a process for reporting any incidents, misuse, or concerns related to the use of AI language models.
Accountability: Hold employees accountable for adhering to this policy and take appropriate actions in case of violations.

Purposeful Use in Early Years Settings

Hopscotch encourages the use of AI tools and virtual assistants when it enables staff to work more efficiently and focus more deeply on what matters most: being present with the children in our care.
AI is not a replacement for human connection but a supportive tool to:
  • Reduce time spent on routine admin, planning, or formatting tasks.
  • Streamline communication and documentation (e.g. observation records).
  • Enhance creativity in curriculum design, event planning, and training.
  • Help staff feel more empowered and less burdened by repetitive tasks.
Use of AI should always reflect our core values: professionalism, warmth, and a genuine commitment to quality care. Staff are encouraged to experiment and share helpful uses of AI, while remaining within the boundaries outlined in this policy.
Use of Microsoft Copilot and Trusted Platforms
Hopscotch staff are encouraged to make use of Microsoft Copilot, which is integrated into our existing Microsoft 365 environment. This AI tool benefits from enterprise-grade security and compliance standards, and operates within our organisation’s Microsoft tenancy, meaning it is ringfenced, secure, and compliant with UK data protection laws.
Unlike public AI tools, content generated and shared through Copilot remains within our private system and is not used to train public AI models. This makes it a preferred platform for tasks such as drafting internal emails, summarising meeting notes, or generating content ideas, provided the guidance in this policy is still followed.
As with all AI tools, staff are responsible for reviewing the accuracy and appropriateness of any content generated.