Face analysis tech could spot PTSD in kids

  • Researchers at the University of South Florida are developing a new technology that uses face analysis to identify childhood PTSD.
  • The system, led by Alison Salloum and Shaun Canavan, analyzes facial expressions to detect subtle patterns linked to emotional expression in children with PTSD.
  • The technology strips away identifying details and only analyzes de-identified data, prioritizing patient privacy while enabling context-aware PTSD classification.
  • The study showed that the system can identify distinct patterns in the facial movements of children with PTSD, even when they are not verbally expressing their emotions.
  • The researchers hope to expand the study to further examine potential bias and validate the system’s accuracy, which could redefine how PTSD in children is diagnosed and tracked using everyday tools like video and AI.


New technology analyzes facial expressions to identify childhood PTSD.

Diagnosing post-traumatic stress disorder in children can be notoriously difficult. Many, especially those with limited communication skills or emotional awareness, struggle to explain what they’re feeling.

Researchers at the University of South Florida are working to address those gaps and improve patient outcomes by merging their expertise in childhood trauma and artificial intelligence.

Led by Alison Salloum, professor in the USF School of Social Work, and Shaun Canavan, associate professor in the Bellini College for Artificial Intelligence, Cybersecurity, and Computing, the researchers are building a system that could provide clinicians with an objective, cost-effective tool to help identify PTSD in children and adolescents, while tracking their recovery over time.

Traditionally, diagnosing PTSD in children relies on subjective clinical interviews and self-reported questionnaires, which can be limited by cognitive development, language skills, avoidance behaviors, or emotional suppression.

“This really started when I noticed how intense some children’s facial expressions became during trauma interviews,” Salloum says.

“Even when they weren’t saying much, you could see what they were going through on their faces. That’s when I talked to Shaun about whether AI could help detect that in a structured way.”

Canavan, who specializes in facial analysis and emotion recognition, repurposed existing tools in his lab to build a new system that prioritizes patient privacy. The technology strips away identifying details and only analyzes de-identified data, including head pose, eye gaze, and facial landmarks, such as the eyes and mouth.

“That’s what makes our approach unique,” Canavan says. “We don’t use raw video. We completely get rid of the subject identification and only keep data about facial movement, and we factor in whether the child was talking to a parent or a clinician.”
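The de-identification step Canavan describes, discarding raw video and subject identity while keeping only geometric features and conversational context, can be sketched roughly as follows. This is a hypothetical illustration: the field names and data layout are assumptions for the sake of the example, not the team's actual code.

```python
# Hypothetical sketch of the de-identification step described above:
# raw pixels and subject identity are dropped, and only non-identifying
# geometric features plus conversational context are retained.

def deidentify_frame(frame_record):
    """Keep only de-identified features from one analyzed video frame."""
    allowed = {"head_pose", "eye_gaze", "landmarks", "context"}
    return {k: v for k, v in frame_record.items() if k in allowed}

raw = {
    "raw_pixels": "<identifiable image data>",    # discarded
    "subject_id": "child_007",                    # discarded (hypothetical ID)
    "head_pose": (0.1, -0.3, 0.05),               # yaw, pitch, roll
    "eye_gaze": (0.02, -0.10),                    # gaze direction offsets
    "landmarks": [(120.5, 88.2), (134.1, 88.9)],  # e.g. eye/mouth points
    "context": "clinician_interview",             # who the child spoke with
}

safe = deidentify_frame(raw)
assert "raw_pixels" not in safe and "subject_id" not in safe
```

The key design point mirrored here is that identifying data never leaves the extraction step: downstream models see only movement geometry and the interview context.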

The study, which appears on ScienceDirect, is the first of its kind to incorporate context-aware PTSD classification while fully preserving participant privacy. The team built a dataset from 18 sessions with children as they shared emotional experiences. With more than 100 minutes of video per child and each video containing roughly 185,000 frames, Canavan’s AI models extracted a range of subtle facial muscle movements linked to emotional expression.
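The frame counts above are consistent with ordinary video rates: roughly 100 minutes per child at a typical consumer rate of about 30 frames per second works out to around 180,000 frames, in line with the ~185,000 quoted. A quick sanity check (the 30 fps rate is an assumption, not stated in the article):

```python
# Sanity-check the reported dataset size, assuming a typical ~30 fps video rate.
minutes_per_child = 100   # "more than 100 minutes of video per child"
fps_assumed = 30          # common consumer frame rate (assumption)

frames = minutes_per_child * 60 * fps_assumed
print(frames)  # 180000 — close to the ~185,000 frames reported per video
```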

The findings revealed that distinct patterns are detectable in the facial movements of children with PTSD. The researchers also found that facial expressions during clinician-led interviews were more revealing than parent-child conversations. This aligns with existing psychological research showing children may be more emotionally expressive with therapists and may avoid sharing distress with parents due to shame or limited cognitive ability.

“That’s where the AI could offer a valuable supplement,” Salloum says. “Not replacing clinicians, but enhancing their tools. The system could eventually be used to give practitioners real-time feedback during therapy sessions and help monitor progress without repeated, potentially distressing interviews.”

The team hopes to expand the study to further examine any potential bias from gender, culture, and age, especially in preschoolers, whose verbal communication is limited and whose diagnosis relies almost entirely on parent observation.

Though the study is still in its early stages, Salloum and Canavan feel the potential applications are far-reaching. Many of the current participants had complex clinical pictures, including co-occurring conditions like depression, ADHD, or anxiety, mirroring real-world cases and offering promise for the system’s accuracy.

“Data like this is incredibly rare for AI systems, and we’re proud to have conducted such an ethically sound study. That’s crucial when you’re working with vulnerable subjects,” Canavan says. “Now we have promising potential from this software to give informed, objective insights to the clinician.”

If validated in larger trials, the new approach could redefine how PTSD in children is diagnosed and tracked, using everyday tools like video and AI to bring mental health care into the future.

Source: University of South Florida

The post Face analysis tech could spot PTSD in kids appeared first on Futurity.


Q. What is the main challenge in diagnosing post-traumatic stress disorder (PTSD) in children?
A. Diagnosing PTSD in children can be notoriously difficult due to limited communication skills, emotional awareness, and cognitive development.

Q. How does the new technology analyze facial expressions to identify childhood PTSD?
A. The technology analyzes de-identified data from head pose, eye gaze, and facial landmarks, such as the eyes and mouth, while stripping away any details that could identify the child.

Q. What is unique about this approach compared to traditional methods of diagnosing PTSD in children?
A. This approach prioritizes patient privacy by stripping away identifying details and only analyzing de-identified data, making it a more ethically sound method.

Q. How did the researchers build their AI model?
A. The researchers repurposed existing tools in Shaun Canavan’s lab to build a new system that extracts subtle facial muscle movements linked to emotional expression from video recordings of children sharing emotional experiences.

Q. What patterns were detectable in the facial movements of children with PTSD?
A. The researchers found distinct patterns in the facial movements of children with PTSD, which aligns with existing psychological research on emotional expression and avoidance behaviors.

Q. How does this technology complement traditional methods of diagnosing PTSD in children?
A. This technology offers a valuable supplement to clinicians’ tools by providing real-time feedback during therapy sessions and helping monitor progress without repeated, potentially distressing interviews.

Q. What are the potential applications of this technology if validated in larger trials?
A. If validated, this technology could redefine how PTSD in children is diagnosed and tracked, using everyday tools like video and AI to bring mental health care into the future.

Q. How does the study address concerns about bias in AI systems?
A. The team hopes to expand the study to further examine any potential bias from gender, culture, and age, especially in preschoolers, whose verbal communication is limited and whose diagnosis relies almost entirely on parent observation.

Q. What are the implications of this technology for clinicians and practitioners?
A. This technology could provide clinicians with an objective, cost-effective tool to help identify PTSD in children and adolescents, while tracking their recovery over time, enhancing their tools and improving patient outcomes.

Q. How does this study contribute to the field of mental health care?
A. The study contributes to the development of more accurate and effective methods for diagnosing and treating PTSD in children, using cutting-edge technology like AI and facial analysis to bring mental health care into the future.