As generative AI continues to embed into professional practice, including in the SEND sector, it's natural for professionals to have questions and concerns. When something has the potential to change how we work, uncertainty follows. But much of what's being said about AI in EHCP development is based on misconceptions rather than reality.
Let's explore what generative AI actually does, and crucially, what it doesn't do, when supporting SEND professionals in creating Education, Health and Care Plans.
Generative AI creates first drafts, not final documents. Every plan remains fully editable at every stage. SEND professionals, together with the child or young person and their family, retain complete authorship and can refine or rewrite any section. The AI provides a starting point that organises information from assessments and reports, but nothing is set in stone.
Think of it as an experienced colleague preparing an initial structure. You wouldn't accept their draft without review, and the same applies here. Human expertise guides every decision from first draft to final approval.
SEND professionals make decisions about provision. When AI suggests provision, it's drawing directly from the professional assessments and reports already submitted. These are suggestions, not prescriptions.
Senior SEND professionals maintain complete control over what provision is included, what's amended, and what's appropriate for each child's unique needs and the local authority's context. The AI organises recommendations from reports; humans decide what's final.
Provision suggestions come from the reports used to generate the draft plan, reflecting what professionals have already recommended. There's no automatic budget allocation or binding commitment for the local authority.
Local authority teams review every suggestion through their established governance processes. If a recommendation doesn't align with available resources or local provision, it can be amended, replaced, or removed entirely. Control remains where it should: with the professionals who understand local capacity and individual need.
When built with security as a priority, AI platforms for SEND incorporate robust protections that meet or exceed industry standards and comply with UK data protection regulations.
The question isn't whether AI is secure; it's whether the specific platform has been built with appropriate safeguards. Look for systems designed specifically for sensitive data, with clear security credentials.
Access to data is controlled entirely by the local authority. Third parties should only access information with explicit consent and within defined parameters. The platform doesn't have inherent rights to share or use data beyond delivering the contracted service.
Data governance remains with the local authority. They determine who can access the system, what they can see, and how data is used. AI is a tool deployed under the authority's control, not an independent entity with access rights.
Systems like VITA are built, tested, and secured in the UK by engineers who understand the sensitivity, complexity, and compliance demands of the SEND system.
UK-based development teams work directly with SEND professionals to ensure the platform reflects real-world practice, meets statutory requirements, and respects the nuances of the English SEND framework. Local development matters when building tools for a system this specific.
Personalisation comes from the depth and quality of the information provided, not from who types it first. AI organises the insights from educational psychologists, speech and language therapists, teachers, and families into a coherent structure.
The child's individual circumstances, strengths, needs, and aspirations as documented by professionals who know them form the foundation of the draft. What AI adds is organisation and consistency. What it doesn't add is generic, impersonal content.
AI creates more time for meaningful co-production. By handling the time-consuming administrative task of organising multiple reports into a first draft, professionals gain hours to spend where it matters most: engaging with families, children and young people.
Rather than spending days compiling and structuring information, SEND professionals can focus on conversations, collaboration, and ensuring the family's voice shapes the final plan. Co-production becomes more thorough, not less, when administrative burden is reduced.
The child's voice is only as present as the information provided to the AI. When professionals submit quality person-centred assessments that capture the young person's views, preferences, and aspirations, that voice flows through to the draft.
AI doesn't diminish or filter the child's voice; it reflects what's in the source material. The responsibility for capturing and amplifying that voice remains with the professionals conducting assessments and engaging with the family. AI simply ensures it's woven consistently throughout the plan.
When AI tools are built with SEND expertise embedded in their design, they actively strengthen the golden thread. Purpose-built systems understand how outcomes should flow from needs and how provision should be aligned to achieve those outcomes.
Generic AI models wouldn't understand this, but AI platforms designed specifically for EHCP development, with SEND professionals guiding their architecture, can identify and highlight gaps and ensure coherence. The golden thread is reinforced, not weakened, by intelligent organisation.
Properly designed generative AI for SEND doesn't invent or fabricate information. It carefully preserves and organises the understanding of needs already provided by professionals in their assessments.
The AI's role is to synthesise existing information, not to create new facts. Every detail in a draft should be traceable back to source reports, and features such as citations can make that traceability visible. This is why human review remains essential: professionals verify that the AI has accurately represented the assessments provided.
All professional assessments still occur using local expertise. Educational psychologists, therapists, medical professionals, and teachers conduct their evaluations as they always have. Their expert insights are then entered into the platform.
What changes is how that information is compiled into a coherent draft. The human expertise in understanding the child's needs, the professional judgement in reviewing and refining the plan, and the collaborative decision-making with families all remain entirely human-led.
Every AI-generated draft is a starting point for human refinement. Every provision suggestion is subject to professional review. Every plan goes through the same quality assurance, co-production, and approval processes that ensure children receive appropriate support.
The goal isn't to remove humans from the loop but to position them where they add the most value: understanding complex needs, building relationships with families, applying local knowledge, and ensuring every plan truly serves the child at its centre.
When implemented thoughtfully, with appropriate safeguards and clear governance, generative AI becomes a tool that empowers SEND professionals to do their work more effectively, not a replacement for their expertise.