Scaffolding Neurodivergence: Lessons for Bridging Gaps in LLM Capabilities
I was recently at a talk on neurodivergence by Hannah Hamid, Regional Leader of Inclusion (Europe and USA) at Cognita Schools, and her insights and ethos for supporting learners were poignant. As a business leader and tech founder who is neurodivergent, it struck me how much has changed for the better in modern education, and it got me thinking about parallel techniques in the world of AI.
In education and support, scaffolding has long been key to helping neurodivergent people. Examples include breaking down big tasks for someone with ADHD, or setting up clear routines for people who benefit from predictable structure.
Scaffolding provides temporary structures that help people succeed and build their own independence. But what if we took these ideas and applied them to artificial intelligence? Large Language Models (LLMs) - like us humans - are not flawless. They have their own weak spots and lagging skills, and these vary from one model to the next.
In this blog I am going to explore how scaffolding ideas from neurodivergence could help bolster LLM performance. For me, these techniques have certainly changed the way I think about using AI.
Understanding Scaffolding in Neurodivergence
The word ‘scaffolding’ comes from building sites, and it was picked up in education thanks to Lev Vygotsky, the renowned Russian psychologist. It means giving learners temporary support structures so they can accomplish tasks that are currently beyond their independent capabilities.
This concept is closely tied to Vygotsky's Zone of Proximal Development (ZPD), which is the space between what a learner can do alone and what they can achieve with the right support from a more knowledgeable other, such as a teacher or peer. The ZPD is where real learning happens, through collaboration and guided assistance.
To illustrate Vygotsky's ZPD:
| Aspect | Description | Example |
|---|---|---|
| Actual Developmental Level (ADL) | What the learner can do independently | A child solves 5-piece puzzles alone |
| Zone of Proximal Development (ZPD) | Gap between ADL and what they can do with help | Same child solves 10-piece puzzles with adult guidance |
| Potential Developmental Level | Full ZPD + ADL combined | Child eventually solves 10-piece puzzles alone |
Scaffolding operates within this framework, providing temporary support that gradually fades as the learner gains independence. For neurodivergent people, whose brains process information differently - for example with dyslexia, dyspraxia, or difficulties with planning and focus - this could look like:
- Task decomposition: Dividing a large task into smaller, manageable parts to reduce overwhelm.
- Visual aids and prompts: Things like checklists, timers, or colour codes to keep attention on track and stay organised.
- Gradual fading: Beginning with substantial support, say direct pointers, then gradually reducing it as proficiency increases, so they end up doing it on their own.
- Personalised adaptations: Adjusting the support to fit what they are good at, like capitalising on the hyperfocus in ADHD for really diving deep into something they love.
Vygotsky's concept isn’t about trying to "fix" neurodivergence. It is about empowering it, seeing that different brains offer fresh perspectives when supported appropriately. So how does it apply to AI?
The 'Neurodivergence' of LLMs - human-like AI?
Recent research has extended Vygotsky's ZPD to LLMs, showing how it can explain and improve their learning behaviours. For instance, a paper by Peng Cui adapts ZPD to in-context learning in LLMs - measuring the gap in performance with and without prompts.
Additionally, LLMs are positioned as modern partners in the ZPD, offering personalised scaffolding and feedback. These insights reinforce the value of applying human learning theories to AI.
People have neurodiversity, and in a way LLMs have a kind of "model diversity" too, each with its own strengths and weaknesses. No model is excellent at everything. Certain capabilities lag because of biases in the training data, limits in how they are built, or simply not enough computational power. For example:
- Hallucinations and factual inaccuracies: A model might generate information that sounds right but is not, similar to when memory slips and you get details wrong.
- Contextual limitations: Older or smaller models have a hard time with long bits of text, similar to how focus issues make it tough to stick with something for ages.
- Reasoning gaps: Tricky step-by-step thinking, say in maths or tough moral questions, can falter without a nudge, similar to planning hiccups.
- Creative or empathetic shortfalls: They are great at spotting patterns, but the finer points of feelings or coming up with original concepts might need a prompt to get going.
A recent paper by UT Austin & Cognizant AI Lab, Solving a Million-step LLM task with zero errors (thanks to Juan Sanchez from Inteleos for the signposting), highlights the brittleness of LLMs: even small per-step error rates, around 0.45% for models like GPT-4.1-mini, lead to catastrophic failure in long-horizon tasks, such as solving Towers of Hanoi with more than a few disks. Success rates decay towards zero as step counts climb into the hundreds and thousands, because errors accumulate - mirroring how lagging capabilities can compound over complex processes. These are not things to wipe out. They are areas where scaffolding can assist and help the model shine.
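To see why long-horizon tasks collapse, it helps to run the arithmetic. Here is a back-of-the-envelope sketch that assumes each step fails independently with the same probability - a simplification for illustration, not the paper's actual analysis:

```python
# Probability of a flawless run when small per-step errors compound.
# Assumes independent, equally likely errors at every step.

def task_success_rate(per_step_error: float, steps: int) -> float:
    """Chance of completing `steps` consecutive steps with zero errors."""
    return (1 - per_step_error) ** steps

for steps in (100, 1_000, 10_000, 1_000_000):
    p = task_success_rate(0.0045, steps)  # ~0.45%, as quoted above
    print(f"{steps:>9,} steps -> {p:.2%} chance of zero errors")
```

At that rate, a 100-step task still succeeds roughly 64% of the time, a 1,000-step task barely 1% of the time, and a million-step task essentially never - which is exactly why scaffolding matters for long tasks.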
How Human Scaffolding Can Benefit LLMs
What I appreciate about this comparison is how practical it is. We can borrow ideas from supporting neurodivergence and create 'AI scaffolding' to strengthen those weaker LLM areas. Here are a few ways it could play out, often operating within the model's ZPD:
1. Task Decomposition for Prompt Engineering
Teachers often break down lessons into steps for neurodivergent children, and users can do the same with LLM questions. Rather than posing a broad query like "Analyse climate change impacts," break it down: "First, list the main causes. Second, talk about how it hits ecosystems. Third, suggest mitigation strategies." It is like dividing tasks to ease the mental strain: it cuts mistakes, sharpens the answers, and helps the model avoid distraction or hallucination - the LLM equivalent of "While researching climate change on the internet and watching this video on YouTube, I became distracted by this interesting video on vampire bat migratory patterns".
In practice, techniques like Chain of Thought (CoT) prompting, where you tell the model to "think step by step," work like an online checklist, steering the AI along paths it might otherwise miss. The aforementioned paper demonstrates this through "Massively Decomposed Agentic Processes" (MDAPs), where breaking tasks into minimal subtasks achieves zero errors over a million steps in Towers of Hanoi, far surpassing monolithic LLM performance.
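As a rough illustration, here is a minimal sketch of that decomposition in code. `call_llm` is a hypothetical stand-in for whichever chat-completion API you use, and the sub-questions mirror the climate change example above:

```python
# A minimal sketch of task decomposition with a Chain of Thought nudge.
# `call_llm` is a hypothetical placeholder, not a real library function.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your LLM provider.")

SUBTASKS = [
    "First, list the main causes of climate change.",
    "Second, explain how it affects ecosystems.",
    "Third, suggest mitigation strategies.",
]

def decomposed_analysis() -> str:
    context = ""
    answers = []
    for subtask in SUBTASKS:
        # Carry earlier answers forward so each step builds on the last,
        # and tell the model to reason step by step.
        prompt = f"{context}\n\nThink step by step. {subtask}".strip()
        answer = call_llm(prompt)
        answers.append(answer)
        context = f"{context}\n{answer}"
    return "\n\n".join(answers)
```

Each sub-answer becomes context for the next step, so the model never has to hold the whole analysis in its head at once.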
2. Visual and Structural Aids
Visual tools like mind maps or timers really enhance clarity for neurodivergent people. For LLMs, incorporating mixed inputs like pictures or charts, or asking for structured replies with tables or bullets, does the same job. If a model is weak in sorting data, your prompt could say: "Lay out your thoughts in a table with columns for pros, cons, and proof."
Additional tools like external APIs or code runners act as “helper tools,” alleviating weak points, similar to how voice-to-text helps with dyslexia, so the model can play to its strengths.
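As a sketch of what that structural scaffolding might look like in practice, the snippet below requests the pros/cons/proof table suggested above and then runs a cheap structural check. `call_llm` is again a hypothetical placeholder:

```python
# Structural scaffolding: request a fixed layout, then sanity-check it.
# A sketch only; `call_llm` is a hypothetical placeholder.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your LLM provider.")

def structured_prompt(question: str) -> str:
    return (
        f"{question}\n\n"
        "Lay out your thoughts in a Markdown table with exactly three "
        "columns: Pros | Cons | Proof. One row per argument, and no "
        "prose outside the table."
    )

def looks_like_table(reply: str) -> bool:
    # A crude "helper tool": check the shape before trusting the content.
    rows = [ln for ln in reply.splitlines() if ln.strip().startswith("|")]
    return len(rows) >= 3  # header, separator, at least one data row
```

The check is deliberately trivial: structure makes a model's output verifiable by tools that need no intelligence at all.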
3. Gradual Fading and Iterative Feedback
Scaffolding does not stick around forever: you pull it back as skills build or the brain matures. With LLMs, that might mean beginning with highly detailed prompts and refining them as you go. You could incorporate feedback: "You skipped this bit last time; add it in now." Gradually, the model improves through fine-tuning, or through context provided via Learnings or extra information pulled in from knowledge stores (a technique we use with ReadyIntelligence), without overburdening the main setup.
This iterative process builds flexibility, much like how practice with support helps neurodivergent people build resilience. The paper's MAKER system uses voting and red-flagging to correct errors at the subtask level, ensuring reliability even with inherent per-step inaccuracies.
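For a flavour of how that might look, here is a heavily simplified sketch of subtask-level voting with crude red-flagging. This is my own assumption about the shape of such a system, not the paper's actual MAKER implementation, and `call_llm` remains a hypothetical placeholder:

```python
# Subtask-level voting with crude red-flagging, in the spirit of MAKER.
# A simplified sketch, not the paper's actual algorithm.

from collections import Counter

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your LLM provider.")

def red_flagged(answer: str) -> bool:
    # Cheap sanity checks stand in for real red-flagging rules.
    return not answer.strip() or len(answer) > 2000

def vote_on_subtask(prompt: str, samples: int = 5) -> str:
    # Sample the same subtask several times, discard malformed answers,
    # and keep whichever valid answer the majority agrees on.
    answers = [call_llm(prompt) for _ in range(samples)]
    valid = [a.strip() for a in answers if not red_flagged(a)]
    if not valid:
        raise RuntimeError("Every sample was red-flagged; retry or escalate.")
    winner, _ = Counter(valid).most_common(1)[0]
    return winner
```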
4. Personalised Adaptations for Model Diversity
Neurodivergence is not one-size-fits-all, and neither are LLMs. A simple model might require more support for depth, while a more powerful one could use limits to keep it concise. Tailoring these scaffolds with user settings or mixing models gets the best out of them. Think of an “AI neurodiversity toolkit” where developers pick supports based on what the model needs, a lot like personalised learning plans in schools.
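Such a toolkit could be as simple as a table of scaffolding profiles keyed by model. The model names and settings below are purely illustrative assumptions:

```python
# A hypothetical "AI neurodiversity toolkit": scaffolding profiles per
# model, loosely analogous to personalised learning plans in schools.

from dataclasses import dataclass

@dataclass
class Scaffold:
    decompose_tasks: bool     # break prompts into subtasks
    step_by_step_nudge: bool  # prepend a Chain of Thought instruction
    structured_output: bool   # request tables or fixed schemas
    votes_per_subtask: int    # 1 = no voting

PROFILES = {
    "small-local-model": Scaffold(True, True, True, 5),
    "mid-tier-model": Scaffold(True, True, False, 3),
    "frontier-model": Scaffold(False, False, False, 1),
}

def scaffold_for(model_name: str) -> Scaffold:
    # Unknown models get maximum support by default; fade it later.
    return PROFILES.get(model_name, Scaffold(True, True, True, 5))
```

The fading principle carries over directly: as a model proves itself on a task, its profile can be dialled back, just as classroom scaffolds are withdrawn as the learner gains independence.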
Benefits and Broader Implications
Adopting this framework could democratise AI access, letting even advanced models work well despite their limitations. It promotes ethical AI development by focusing on augmenting existing capabilities instead of chasing perfection, and it mitigates misinformation with structured guidance. The UT Austin & Cognizant AI Lab paper shows how such scaffolding overcomes brittleness, enabling log-linear scaling in cost and accuracy for long tasks, reducing risks like error cascades.
Plus, it humanises AI: looking at models like they are neurodivergent fosters empathy, showing that “differences” are chances to innovate.
In the future, as LLMs evolve - perhaps with architectures inspired by human brains - scaffolding could be built in, self-adjusting based on how the model is performing.
Conclusion
What scaffolding teaches us in neurodivergence is that help is not a weakness; it is a pathway to greater achievements. Applying those ideas to LLMs lets us transform limitations into strengths, creating more robust, versatile AI systems. If we are to develop a true Cognitive Partner, drawing these parallels offers promise for a more inclusive future, both human and artificial.