There are many reasons why people do not seek help for mental health disorders—stigma, high costs, and lack of access to services are some common barriers. There is also a tendency to minimise signs of mental disorders or conflate them with stress. With some prompting, however, people may seek help, and that is where digital screening tools can make a difference.
Dartmouth researchers have built an Artificial Intelligence (AI) model for detecting mental disorders from conversations in web posts, part of an emerging wave of screening tools that use computers to analyse social media posts and gain insight into people’s mental states.
What sets the new model apart is a focus on the emotions rather than the specific content of the social media texts being analysed. The researchers show that this approach performs better over time, irrespective of the topics discussed in the posts.
Social media offers an easy way to tap into people’s behaviours. The data is voluntary and public, published for others to read. The researchers’ platform of choice offers a massive network of user forums and has nearly half a billion active users who discuss a wide range of topics. The posts and comments are publicly available, and the researchers could collect data dating back to 2011.
In their study, the researchers focused on what they call emotional disorders—major depressive, anxiety, and bipolar disorders—which are characterised by distinct emotional patterns. They looked at data from users who had self-reported as having one of these disorders and from users without any known mental disorders.
They trained their model to label the emotions expressed in users’ posts and map the emotional transitions between different posts, so a post could be labelled “joy,” “anger,” “sadness,” “fear,” “no emotion,” or a combination of these. The map is a matrix showing how likely a user was to move from any one state to another, such as from anger to a neutral state of no emotion.
Different emotional disorders have their own signature patterns of emotional transitions. By creating an emotional “fingerprint” for a user and comparing it to established signatures of emotional disorders, the model can detect the disorders. To validate their results, the researchers tested the model on posts that were not used during training and showed that it accurately predicts which users may or may not have one of these disorders.
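The transition-matrix idea can be sketched as follows. The emotion labels come from the article, but the function and the example data are illustrative assumptions, not the researchers’ actual implementation:

```python
from collections import defaultdict

# Emotion labels mentioned in the study (a post may also combine several;
# this sketch assumes one label per post for simplicity).
EMOTIONS = ["joy", "anger", "sadness", "fear", "no emotion"]

def transition_matrix(post_emotions):
    """Build a row-normalised matrix of transition probabilities
    between the emotion labels of consecutive posts."""
    counts = {e: defaultdict(int) for e in EMOTIONS}
    for prev, curr in zip(post_emotions, post_emotions[1:]):
        counts[prev][curr] += 1
    matrix = {}
    for prev in EMOTIONS:
        total = sum(counts[prev].values())
        matrix[prev] = {
            curr: (counts[prev][curr] / total if total else 0.0)
            for curr in EMOTIONS
        }
    return matrix

# Hypothetical user history, labelled in chronological order.
posts = ["joy", "sadness", "sadness", "anger", "no emotion", "sadness"]
m = transition_matrix(posts)
print(m["sadness"]["anger"])  # probability of moving from sadness to anger
```

Each row of the matrix is one user’s “fingerprint” of how they move between emotional states; comparing such fingerprints against the signature patterns of known disorders is what the model does at scale.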
This approach sidesteps an important problem called “information leakage” that typical screening tools run into. Other models rely on scrutinising the content of the text, and while they show high performance, their results can be misleading: a content-based model may latch onto topic-specific keywords in its training data, so its apparent accuracy does not hold up as discussion topics shift over time.
As reported by OpenGov Asia, the Virginia Department of Behavioural Health and Developmental Services (DBHDS) will be creating an anonymised digital twin of patient data to more securely explore artificial intelligence (AI) applications that advance patient care.
DBHDS will first deploy the GEMINAI synthetic data engine, which creates a duplicate dataset of patient information. That digital twin will have the same statistical properties, nuances and characteristics as a population of interest, but it will contain no personal information that might reveal patients’ identities.
The department said the data it had been using in its test and development environment did not meet security baselines for the protection of patient data. For those less secure applications, DBHDS needed synthetic, or properly de-identified and HIPAA-compliant, data. In addition to the synthetic data itself, the department said it wanted capabilities for machine learning prediction, data characterisation, decision reasoning, transparency and auditability.