
AI could help screen children for autism and ADHD


An interdisciplinary team led by USC computer science researchers is creating a faster, more reliable and more accessible system to help clinicians screen children for autism and ADHD.

For children with autism spectrum disorder (ASD), receiving an early diagnosis can make a huge difference in improving behavior, skills and language development. But despite being one of the most common developmental disabilities, impacting 1 in 54 children in the U.S., it’s not that easy to diagnose.

There is no lab test and no single identified genetic cause—instead, clinicians look at the child’s behavior and conduct structured interviews with the child’s caregivers based on questionnaires. But these questionnaires are extensive, complicated and not foolproof.

“In trying to discern and stratify a complex condition such as autism spectrum disorder, knowing what questions to ask and in what order becomes challenging,” said USC University Professor Shrikanth Narayanan, Niki and Max Nikias Chair in Engineering and professor of electrical and computer engineering, computer science, linguistics, psychology, pediatrics and otolaryngology.

“As such, this system is difficult to administer and can produce false positives, or confound ASD with other comorbid conditions, such as attention deficit hyperactivity disorder (ADHD).”

As a result, many children fail to get the treatments they need at a critical time.

An interdisciplinary team led by USC computer science researchers, in collaboration with clinical experts and researchers in autism, hopes to improve this by creating a faster, more reliable and more accessible system to screen children for ASD. The AI-based method takes the form of a computer adaptive test, powered by machine learning, that helps clinical practitioners decide what questions to ask next in real time based on the caregivers’ previous responses.

“We wanted to maximize the diagnostic power of the interview by bootstrapping the clinician with an algorithm that can be more curious if it needs to be, but will also try to not ask more questions than it needs to,” said study lead author Victor Ardulov, a computer science Ph.D. student advised by Narayanan. “By training the algorithm in this way, you’re optimizing it to be as effective as possible with the information collected so far.”

In addition to Narayanan and Ardulov, co-authors of the study, published in the Nature Research journal Scientific Reports, are Victor Martinez and Krishna Somandepalli, both recent USC Ph.D. graduates; autism researchers Shuting Zheng, Emma Salzman and Somer Bishop from the University of California San Francisco; and Catherine Lord from the University of California Los Angeles.

In the study, the research team of computer scientists and clinical psychologists specifically looked at differentiating between ASD and ADHD in school-aged children. ASD and ADHD are both neurodevelopmental disorders and are often mistaken for one another—the behaviors exhibited by a child due to ADHD, such as impulsiveness or social awkwardness, might look like autism, and vice versa.

As such, children can be flagged as being at risk for conditions they may not have, potentially delaying the correct evaluation, diagnosis and intervention. In fact, autism may be overdiagnosed in as many as 9% of children, according to a study by the Centers for Disease Control and Prevention and the University of Washington.

To help reach a diagnosis, the practitioner evaluates the child’s communication abilities and social behaviors by gathering a medical history and asking caregivers open-ended questions. Questions cover, for instance, repetitive behaviors or specific rituals, which could be hallmarks of autism.

At the end of the process, an algorithm helps the practitioner compute a score, which is used as part of the diagnosis. But the questions asked do not change according to the interviewee’s responses, which can lead to overlapping information and redundancy.
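For contrast, a heavily simplified sketch of such a fixed-form interview is shown below. The item names, the 0–3 ratings and the cutoff are placeholders invented for illustration, not the actual clinical instrument or its scoring rules.

```python
# Simplified sketch of a fixed-form interview: every item is asked in a preset
# order, and a single total score is computed only at the end. Items, ratings
# and the cutoff are hypothetical placeholders, not the real instrument.

ITEMS = ["repetitive_behaviors", "specific_rituals", "eye_contact", "conversation"]
CUTOFF = 5  # hypothetical threshold for flagging a child for further evaluation

def administer_fixed_interview(ask):
    """`ask(item)` returns the caregiver's 0-3 rating for that item."""
    responses = {item: ask(item) for item in ITEMS}  # order never adapts to answers
    total = sum(responses.values())                  # information is only combined at the end
    return total >= CUTOFF                           # True -> flag for further evaluation
```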

“This idea that we have all this data, and we crunch all the numbers at the end—it’s not really a good diagnostic process,” said Ardulov. “Diagnostics are more like playing a game of 20 questions—what is the next thing I can ask that helps me make the diagnosis more effectively?”

Instead, the researchers’ new method acts as a smart flowchart, adapting based on the respondent’s previous answers and recommending which item to ask next as more data about the child becomes available.

For instance, if the child is able to hold a conversation, it can be assumed that they have verbal communication skills. “So, our model might suggest asking about speech first, and then deciding whether to ask about conversational skills based on the response—this effectively balances minimizing queries while maximizing information gathered,” said Ardulov.
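As a rough sketch of how that kind of branching could look in practice: the items, response scale and stopping rule below are invented for illustration, and in the study the policy is learned from data rather than hand-written.

```python
# Hypothetical adaptive interview: the next item depends on earlier answers,
# so redundant questions can be skipped. Items, ratings and branching rules
# are illustrative only; the researchers' policy is learned, not hard-coded.
from typing import Optional

def next_item(responses: dict) -> Optional[str]:
    if "speech" not in responses:
        return "speech"                        # establish verbal ability first
    if responses["speech"] >= 2:               # little or no spoken language
        return "gestures" if "gestures" not in responses else None
    if "conversation" not in responses:        # child is verbal, so probe conversation
        return "conversation"
    return None                                # enough information gathered

def run_interview(ask) -> dict:
    """`ask(item)` returns the caregiver's 0-3 rating for that item."""
    responses = {}
    while (item := next_item(responses)) is not None:
        responses[item] = ask(item)
    return responses
```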

They used Q-learning—a reinforcement learning training method based on rewarding desired behaviors and punishing undesired ones—to suggest which items to follow up on to differentiate between disorders and make an accurate diagnosis.
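The study’s exact formulation is not spelled out here, but a toy tabular Q-learning loop for this kind of adaptive interview might look like the sketch below. The item pool, the simulated caregiver (`simulate_case`), the diagnostic predictor (`predict`), and the reward of a correct diagnosis minus a small per-question cost are all illustrative assumptions, not the authors’ implementation.

```python
import random
from collections import defaultdict

ITEMS = ["speech", "conversation", "rituals"]        # illustrative item pool
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1               # learning rate, discount, exploration rate

Q = defaultdict(float)                               # Q[(state, action)] -> estimated value

def actions_for(responses):
    """Items not yet asked, plus the option to stop and predict a diagnosis."""
    return [item for item in ITEMS if item not in responses] + ["stop"]

def choose(state, actions):
    if random.random() < EPSILON:                    # occasionally explore
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)]) # otherwise exploit current estimates

def train(simulate_case, predict, episodes=10_000):
    """simulate_case() -> (answer_fn, true_label); predict(responses) -> predicted label."""
    for _ in range(episodes):
        answer, label = simulate_case()
        responses, state = {}, ()
        while True:
            action = choose(state, actions_for(responses))
            if action == "stop":
                # terminal reward: was the diagnosis right, given the answers gathered?
                reward = 1.0 if predict(responses) == label else -1.0
                Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
                break
            responses[action] = answer(action)       # caregiver's (simulated) rating
            next_state = tuple(sorted(responses.items()))
            step_cost = -0.05                        # small penalty for every extra question
            best_next = max(Q[(next_state, a)] for a in actions_for(responses))
            Q[(state, action)] += ALPHA * (step_cost + GAMMA * best_next - Q[(state, action)])
            state = next_state
```

Trained this way, the table implicitly encodes, for each partial set of answers, whether another question is worth its cost or whether the interview already carries enough signal to stop and predict.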


“Instead of just crunching the responses at the end, we said: here’s the next best question to ask during the process,” said Ardulov. “As a result, our models are better at making predictions when presented with less information.”

The test is not meant to replace a qualified clinician’s diagnosis, said the researchers, but to help them make the diagnosis more quickly and accurately.

“This research has the potential to enable clinicians to more effectively go through the diagnostic process—whether that is in a timelier manner, or by alleviating some of the cognitive strain, which has been shown to reduce the effect of burnout,” said Ardulov.

“It could also help doctors triage patients more efficiently and reach more people by acting as an at-home, app-based screening method.”

Although there is still work to be done before this technology is ready for clinical use, Narayanan said it is a promising proof-of-concept for adaptive interfaces in diagnosing social communication disorders, and possibly more.

“Such an approach is truly significant because of its applicability not only within ASD,” said Narayanan. “It could also help in diagnosing numerous mental and behavioral health conditions across the life span, and globally, including anxiety disorder, depression, addiction, and dementia, which all rely on similar procedures for understanding and treating them.”
