AI models could help improve suicide prevention among children

Researchers found that AI models could help health providers identify kids at risk of self-harm.
By Giulia Carbonaro

Researchers found that the traditional ways of monitoring and tracking children who receive emergency care may miss a substantial share of those at risk of self-harm, and that AI can help health providers make better assessments.


After the shocking case of a Belgian man who reportedly decided to end his life after an AI chatbot encouraged him to do so, a new study found that machine learning models may be used effectively for the exact opposite purpose: preventing suicide among young people.

A peer-reviewed study by UCLA Health researchers, published in the journal JMIR Mental Health last week, found that machine learning can detect self-injurious thoughts or behaviour in children far better than the data systems currently used by health care providers.

According to a 2021 report from UNICEF, suicide is a leading cause of death among young people in Europe. An estimated nine million children aged between 10 and 19 live with mental disorders, with anxiety and depression accounting for more than half of all cases.

In the US, an estimated 20 million young people have a diagnosable mental health disorder, according to the US Department of Health and Human Services.

UCLA Health researchers reviewed clinical notes for 600 emergency department visits made by children aged between 10 and 17 to see how well current systems to evaluate their mental health could identify signs of self-harm and assess their suicide risk.

What they found is that these clinical notes missed 29% of children who came to the emergency department with self-injurious thoughts or behaviours, while the statements made by health specialists to flag at-risk patients - called the “chief complaint” in the US - overlooked 54% of patients.

In the latter case, health specialists failed to spot the signs of self-injurious thoughts or behaviours because children often do not report suicidal thoughts and behaviours during their first visit to the emergency department.

Even using the two systems together still missed 22% of children at risk, according to the study. Boys were more likely to be missed than girls, the study found, while Black and Latino youth were also more likely to be left out than white children.

But machine-learning models were found to make a significant difference.

Researchers created three machine-learning models, which looked at data including previous medical care, medications, where a patient lived, and lab test results to estimate suicide-related thoughts and self-injurious thoughts or behaviours.

All three models were better at identifying children at risk than the traditional methods.
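To illustrate the general approach only - the study's actual models, features and data are not reproduced here - below is a minimal sketch of a classifier trained on structured, visit-level features of the kind the researchers describe (previous care, medications, residence, lab results), using scikit-learn and entirely hypothetical column names and values.

```python
# Minimal sketch, NOT the study's models: a classifier over structured
# emergency department visit features. All columns and values are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# One row per emergency department visit (toy data for illustration).
visits = pd.DataFrame({
    "prior_mental_health_visits": [0, 3, 1, 5, 0, 2],
    "num_current_medications":    [1, 4, 2, 6, 0, 3],
    "abnormal_lab_result":        [0, 1, 0, 1, 0, 1],
    "neighbourhood_deprivation":  [0.2, 0.8, 0.4, 0.9, 0.1, 0.7],
    "self_injury_label":          [0, 1, 0, 1, 0, 1],  # label from chart review
})

X = visits.drop(columns="self_injury_label")
y = visits["self_injury_label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, stratify=y, random_state=0
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("recall:", recall_score(y_test, pred, zero_division=0))
print("precision:", precision_score(y_test, pred, zero_division=0))
```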

“Our ability to anticipate which children may have suicidal thoughts or behaviours in the future is not great – a key reason is our field jumped to prediction rather than pausing to figure out if we are actually systematically detecting everyone who is coming in for suicide-related care,” Juliet Edgcomb, the study’s lead author, said in a UCLA press release.

“We sought to understand if we can first get better at detection.”

While the three machine-learning models were found to increase the chance of false positives - kids who are identified as at risk when, in fact, they are not - Edgcomb said that’s better “than to miss many children entirely.”
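That trade-off can be illustrated with the hypothetical model from the sketch above: lowering the decision threshold of a probabilistic classifier flags more children as potentially at risk, accepting more false positives in exchange for fewer missed cases.

```python
# Sketch of the threshold trade-off, reusing the hypothetical model, X_test
# and y_test from the previous example.
from sklearn.metrics import precision_score, recall_score

probabilities = model.predict_proba(X_test)[:, 1]  # estimated P(at risk) per visit

for threshold in (0.5, 0.3, 0.1):
    flagged = (probabilities >= threshold).astype(int)
    print(
        f"threshold={threshold:.1f}  "
        f"recall={recall_score(y_test, flagged, zero_division=0):.2f}  "
        f"precision={precision_score(y_test, flagged, zero_division=0):.2f}"
    )
```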

If you are contemplating suicide and need to talk, please reach out to Befrienders Worldwide, an international organisation with helplines in 32 countries. Visit befrienders.org to find the telephone number for your location.
