Emma Kitcher, Data Protection Officer
Added Section 11 (Purposeful Use)
Added reference to the new criminal offence created under the Data Use and Access Act 2025 (Part 7, Section 141) for creating or requesting fake or altered intimate images
Included Holiday Clubs within the scope of the policy
Purpose and Scope
What are AI Virtual Assistants?
Appropriate Use
Permitted Use
Restrictions / Prohibited Use
Inaccurate Information
Legal and Ethical Concerns
Copyright and Intellectual Property
Quality and Monitoring
Incidents and Oversight
Purposeful Use in Early Years Settings
The purpose of this policy is to establish guidelines for the appropriate use of AI language models, such as ChatGPT and Bard, within Hopscotch Nurseries and Holiday Clubs. These guidelines apply to all employees, contractors, and third parties who have access to and utilise these systems.
These systems, such as ChatGPT, are advanced computer programs capable of understanding and generating human-like text based on vast amounts of data they've been trained on.
They assist users in various tasks, from answering questions and providing recommendations to generating creative content. However, their complexity and potential impact raise important ethical, legal, and social questions that this policy aims to address for our organisation.
Defining the acceptable and prohibited use of AI tools within our organisation is essential for various reasons. Firstly, it ensures alignment with ethical and moral standards, preventing misuse that could infringe upon privacy, human rights, or other ethical principles. Secondly, clear policies mitigate risks associated with improper AI usage, reducing legal liabilities, reputation damage, and financial losses. Thirdly, they protect stakeholders' interests, including employees, customers, and partners, by establishing boundaries that safeguard rights, privacy, and well-being.
Remember that any information you put into those tools can become publicly available. If you wouldn’t put something on Facebook (for example), don’t put it into these tools.
Generative AI language models (LLMs) may be used within the company for the following purposes:
- Internal communications: for example, developing content for briefings, policies or emails for internal communication.
- Training materials: for example, creating slides for a training session and developing comprehension tests.
- General assistance: for example, asking questions to support your work such as “What are some good steps toward a secure working environment?” or “How do I reply to an email from an angry colleague?”
Staff members are responsible for ensuring that anything created using these systems is accurate and of good quality, so care must be taken.
AI language models may NOT be used within the company for the following purposes:
- Client communication: avoid using these systems for direct communication with clients or external stakeholders without explicit approval.
- Confidential or proprietary information: information that is not in the public domain and that might be a person’s or company’s intellectual property should not be entered into the tool.
- Sensitive data: you must not use the tool to generate or process sensitive information, such as confidential company data, financial information, or personal data (including professional contacts).
- In accordance with the Data (Use and Access) Act 2025, it is a criminal offence to create, request, or facilitate the generation of fake or altered intimate images of a person without their consent.
At present, these systems are notorious for providing inaccurate information and for simply fabricating information in response to a request. Sometimes these responses will be fully supported with references (such as court cases), but the information will be fabricated.
For example, in a widely reported 2023 case, Mr Schwartz, of law firm Levidow, Levidow & Oberman, admitted using the chatbot to research the brief in a client's personal injury case against the airline Avianca. He had used it to find legal precedents supporting the case, but lawyers representing the Colombian carrier told the court they could not find some of the examples cited, understandably so: they were almost entirely fictitious, with several completely fake and others misidentifying judges or involving airlines that did not exist.
When using these tools, crafting suitable ‘prompts’ is crucial for obtaining accurate and relevant responses from AI language models.
- Clearly specify your request or question.
- Provide as much detail as needed for the model to understand the context.
Effective prompt: "Generate a one-page summary of the key findings from the market research report on renewable energy trends in Europe published by XYZ Research Firm in 2023. Include statistical data on solar and wind energy adoption rates, government policies influencing renewable energy investments, and emerging technologies in the sector."
Less effective prompt: "Can you give me some information about renewable energy in Europe?"
- Include examples relevant to your query.
- Demonstrate the format or type of response you are expecting.
Effective prompt: "Please provide three sample sentences demonstrating the use of the word 'ambiguous' in different contexts. Include one sentence illustrating its usage in a technical context, one in a casual conversation, and one in a formal written document."
Less effective prompt: "Can you give me information about the word 'ambiguous'?"
- If the initial response is not satisfactory, iterate and refine your prompt.
- Experiment with different phrasing to find the most effective way to convey your request.
Prompt: "Can you tell me about the effects of climate change?"
Response: "Climate change refers to significant changes in global temperature and weather patterns..."
Follow-up prompt: "The initial response was a bit general. Can you provide specific examples of how climate change impacts biodiversity in rainforests?"
Response: "Certainly. Climate change affects rainforests by altering rainfall patterns, leading to habitat loss for species like amphibians and insects..."
Follow-up prompt: "I appreciate the response. Could you include data on deforestation rates and species extinction due to climate change in rainforests?"
Response: "In addition to habitat loss, deforestation rates have accelerated due to climate change, contributing to the decline of species like the Amazonian tree frog..."
- Add context to your prompt, especially if the question is complex or requires background information.
- Help the model understand the specific scenario or setting.
Example prompt: "I'm writing a report on the impact of artificial intelligence on job automation in the manufacturing sector. Can you provide insights into how AI-driven automation has affected employment rates and job roles within manufacturing plants over the past decade?"
- If you want a list, a paragraph, or any specific format, explicitly mention it in your prompt.
Example prompt: "I need a bulleted list summarising the key features of the latest smartphone models released by Apple, Samsung, and Google in 2023. Please include details such as camera specifications, screen size, battery life, and price range for each model."
- Break down complex queries into smaller, more manageable parts.
- Ask for step-by-step responses to ensure clarity.
- Use system instructions to control the length of the response.
- Specify whether you want a brief summary or a more detailed answer.
- If the initial response is off, provide corrective feedback in subsequent prompts.
- Guide the model toward the desired answer by incorporating hints in your prompts.
Prompt: "I'm looking for recipes for vegetarian lasagna."
Response: "Sure! Here's a recipe for classic meat lasagna with layers of ground beef, marinara sauce, and melted cheese."
Corrective prompt: "Actually, I'm specifically interested in vegetarian lasagna recipes without any meat ingredients."
Response: "I see, my apologies for the confusion. Here's a delicious vegetarian lasagna recipe featuring layers of fresh vegetables, ricotta cheese, and marinara sauce."
- Minimise ambiguity in your prompts by being explicit in your language.
- Clarify any terms or concepts that could be interpreted in multiple ways.
- While AI models can learn from the language used in their training data, they are also susceptible to inheriting biases present in that data and could potentially provide users with responses that reinforce harmful stereotypes.
- Be mindful of the language used in prompts to ensure they align with ethical and inclusive standards.
- Avoid biased or discriminatory language.
Biased prompt: "Why is it a bad idea to hire older employees?"
Better prompt: "Can you discuss the challenges and opportunities associated with hiring employees from diverse age groups?"
Do not use the AI language models for any activities that violate applicable laws, regulations, or ethical standards.
Do not mention the organisation’s name, or the names of customers, colleagues or children, in any of your queries. These should be redacted prior to submitting a query; for example, write “a member of staff” or “a parent” rather than using a name.
Confidential / proprietary information: information that is not in the public domain and that might be a person’s or company’s intellectual property should not be entered into the AI language model. These models will often make this information publicly available.
Copyright / intellectual property: AI-generated content may be subject to copyright, and employees should be aware of the intellectual property rights associated with the output.
Attribution: when sharing or using AI-generated content externally, ensure proper attribution (giving credit) and compliance with copyright laws.
Verify the accuracy of information generated by the tool by cross-referencing with reliable sources.
Ensure that statistics, figures, and claims are supported by credible evidence.
Pay attention to spelling, grammar, and punctuation.
Look for consistency in language usage, including adherence to US or UK English spelling conventions based on the target audience (for example, ‘colour’ vs ‘color’).
Review the language to ensure it is appropriate for the intended audience, considering factors such as tone, formality, and cultural sensitivity.
Avoid using language that may be offensive, discriminatory, or inappropriate for the context or audience.
Be sure to remove boilerplate phrases such as "Sure! Here is a list..." or "Certainly, here are some bullet points..." to improve the natural flow and readability of the content.
Edit responses to sound more natural and conversational, removing robotic or formulaic elements characteristic of AI-generated content.
For important activities, incorporate human oversight into the content creation process to ensure quality, relevance, and appropriateness.
Assign responsibility for final approval to knowledgeable individuals who can assess the suitability and accuracy of the content before dissemination.
Incident Reporting: Establish a process for reporting any incidents, misuse, or concerns related to the use of AI language models.
Accountability: Hold employees accountable for adhering to this policy and take appropriate actions in case of violations.
Hopscotch encourages the use of AI tools and virtual assistants where they enable staff to work more efficiently and focus more deeply on what matters most: being present with the children in our care.
AI is not a replacement for human connection but a supportive tool to:
- Reduce time spent on routine admin, planning, or formatting tasks.
- Streamline communication and documentation (e.g. observation records).
- Enhance creativity in curriculum design, event planning, and training.
- Help staff feel more empowered and less burdened by repetitive tasks.
Use of AI should always reflect our core values: professionalism, warmth, and a genuine commitment to quality care. Staff are encouraged to experiment and share helpful uses of AI, while remaining within the boundaries outlined in this policy.
Hopscotch staff are encouraged to make use of Microsoft 365 Copilot, which is integrated into our existing Microsoft 365 environment. This AI tool benefits from enterprise-grade security and compliance standards, and operates within our organisation’s Microsoft tenancy, meaning it is compliant with UK data protection laws.
Unlike public AI tools, content generated and shared through Copilot remains within our private system and is not made publicly available. This makes it a suitable choice for tasks such as drafting internal emails, summarising meeting notes, or generating content ideas, provided the guidance in this policy is still followed.
As with all AI tools, staff are responsible for reviewing the accuracy and appropriateness of any content generated.