The human side of AI in childhood cancer: children as the stress test for “good” technology
March 9, 2026
Artificial intelligence is transforming cancer care, but paediatric oncology shows why technology must be guided by transparency, ethics and the needs of children and families.
Children’s cancer is different. Treatments developed for adult bodies cannot simply be scaled down to child‑size, and there are no obvious behavioural preventions for children – no equivalent of quitting smoking or avoiding alcohol or the midday sun.
In Australia, all childhood cancers are classed as rare diseases, together accounting for well under one per cent of all new cancer diagnoses each year, even though their impact on families is immense. Leukaemias alone account for about one‑third of childhood cancers in Australia, brain and other central nervous system tumours for another quarter, and everything else – from neuroblastoma to bone and kidney tumours – is scattered across dozens of ultra‑rare diagnoses.
Because paediatric cancers are so rare, even large international collaborations struggle to assemble sizeable, balanced datasets across tumour types. That breaks one of AI’s implicit promises: that more data will always smooth out the rough edges.
In practice, it means that for a particular six‑year‑old with a medulloblastoma or a teenager with an ependymoma, the algorithm may have seen only a handful of similar cases before. Here, the numbers are small, the futures are long, and every decision reverberates through a family’s life. That makes childhood cancer an uncomfortable fit for AI models built on “big data” and narrow performance metrics. It also makes it the ideal place to ask a deeper question: what would it mean for AI to make care more human, rather than less?
A second difference is time. Childhood cancer survivors live for decades with the consequences of the choices made on their behalf. Radiotherapy fields, chemotherapy doses and surgical decisions shape not only survival but cognitive function, fertility, employment and independence well into adulthood.
Most technical progress to date has been in childhood brain tumours. There, AI systems can already outline tumours on MRI scans, distinguish between some tumour types, and even hint at underlying molecular changes that guide treatment. These are clever tools. But their real significance is not only in what they can see in the pixels. It is in how they force us to renegotiate relationships between clinicians, families, systems – and children who cannot easily speak for themselves.
A model that works beautifully on 100 scans may still struggle with your child’s particular tumour. When parents are told that “the AI model says…” they deserve to know not just the headline results but how many children like theirs the model has actually seen, and how often it has been wrong. In other words, data scarcity in paediatrics makes transparency and humility non‑negotiable features of any ethical AI.
Families will rightly want to know who owns decisions informed by these tools. How do we weigh a small gain in predicted disease control against a higher risk of learning difficulties or a secondary malignancy 20 years later? No algorithm can answer that; it can only provide another layer of information for clinicians, parents and – where possible – the child to interpret together.
Technical papers on AI in paediatric brain tumours speak the language of test scores and success rates. These matter. They tell us whether an algorithm is any good at its narrow task. But families live in a different vocabulary: trust, responsibility, fairness.
From a human perspective, this gap between lab performance and lived practice is not just an implementation delay. It is a risk to trust. Building trust in this setting means more than explaining how a computer vision system works. It means honest conversations about uncertainty; clear lines of responsibility when the AI and the clinician disagree; and involvement of parents and young people in deciding how their data are used to train future systems.
Health‑technology rhetoric likes the word “augmentation”: AI will support, not replace, clinicians. Paediatric cancer shows what that should look like when taken seriously.
AI can take over hours of manual tumour measuring and outlining, giving radiation oncologists and radiologists back precious time. Models designed around the messy reality of clinical practice – missing MRI sequences, inconsistent scan quality – show that thoughtful engineering can bridge lab and bedside. Biopsy‑scanning algorithms can pre‑screen slides, allowing pathologists to focus on the most ambiguous regions.
The human question is what we do with the time and mental bandwidth this frees. Augmentation worth having would let clinicians spend less energy chasing images, formatting reports and drawing tumour outlines – and more energy sitting with parents at diagnosis, checking in on siblings, coordinating school reintegration, or discussing fertility preservation.
Used thoughtfully, AI could also augment families. Tools that translate complex imaging findings into more accessible visualisations, or that summarise a child’s progress in language parents can understand, could help turn families from passive recipients into active partners in decision‑making. That is a very different vision from one where AI pushes generic recommendations through a portal without context or dialogue.
Most commentary on AI in health treats paediatrics as an afterthought. Yet there is a strong argument for the opposite: that children with cancer should be at the centre of how we design and govern AI.
They expose the weaknesses of “big data solves everything” thinking. They remind us that some patients are structurally under‑represented in datasets, and that justice requires deliberate over‑representation of their needs when building systems that will shape care. They force us to grapple with intergenerational ethics: using today’s children to train tomorrow’s models, while also protecting those same children from exploitation and harm.
Most of all, they insist that care is relational. An algorithm may be state‑of‑the‑art in reading MRI scans, but if using it means less continuity with a trusted nurse, more fragmented appointments, or fewer opportunities for a teenager to voice fears about relapse, its net effect on “care” may be negative.
If we can get AI right in paediatric oncology – small numbers, high stakes, lifelong consequences – we will have gone a long way towards getting it right elsewhere. That would mean:
- embedding multi‑centre, child‑specific evaluation as the norm, not the exception, before widespread deployment;
- designing models and interfaces with families at the table, not merely as data sources;
- building governance that treats AI decisions as joint human–machine judgements, with clear accountability when things go wrong; and
- measuring success not only in test scores, but in time returned to relationships, reduced distress, and better long‑term outcomes that matter to survivors.
AI in paediatric cancer will continue to evolve: better imaging analysis, richer insights from patterns in scans, and smarter handling of missing data. The technology curve is steep. The question is whether the human curve – our capacity to use these tools in ways that honour children’s vulnerability and potential – can keep pace.
If we allow childhood cancer to be our ethical compass, it may yet guide AI in health towards a future that is not just more intelligent, but more humane.