For the past couple of decades, a loneliness epidemic has been building, marked by rising rates of suicide and opioid use, lost productivity, increased health care costs and rising mortality. The COVID-19 pandemic, with its associated social distancing and lockdowns, has only made things worse, say experts.
Accurately assessing the breadth and depth of societal loneliness is daunting, limited by available tools, such as self-reports. In a new proof-of-concept paper, published online September 24, 2020 in the American Journal of Geriatric Psychiatry, a team led by researchers at University of California San Diego School of Medicine used natural language processing (NLP), an artificial intelligence technology for analyzing speech and text patterns, to discern degrees of loneliness in older adults.
"Most studies use either a direct question of 'how often do you feel lonely,' which can lead to biased responses due to stigma associated with loneliness, or the UCLA Loneliness Scale, which does not explicitly use the word 'lonely,'" said senior author Ellen Lee, MD, assistant professor of psychiatry at UC San Diego School of Medicine. "For this project, we used natural language processing or NLP, an unbiased quantitative assessment of expressed emotion and sentiment, in concert with the usual loneliness measurement tools."
In recent years, numerous studies have documented rising rates of loneliness in various populations of people, particularly those most vulnerable, such as older adults. For example, a UC San Diego study published earlier this year found that 85 percent of residents living in an independent senior housing community reported moderate to severe levels of loneliness.
The new study also focused on independent senior living residents: 80 participants aged 66 to 94, with a mean age of 83 years. But rather than simply asking and documenting answers to questions from the UCLA Loneliness Scale, participants were also interviewed by trained study staff in more unstructured conversations that were analyzed using natural language understanding software developed by IBM, plus other machine-learning tools.
“NLP and machine learning allow us to systematically examine long interviews from many individuals and explore how subtle speech features like emotions may indicate loneliness. Similar emotion analyses by humans would be open to bias, lack consistency, and require extensive training to standardize,” said first author Varsha Badal, Ph.D., a postdoctoral research fellow.
Among the findings:
- Lonely individuals gave longer responses in the qualitative interviews and expressed more sadness in response to direct questions about loneliness.
- Women were more likely than men to acknowledge feeling lonely during interviews.
- Men used more fearful and joyful words in their responses compared to women.
Authors said the study highlights the discrepancies between research assessments for loneliness and an individual's subjective experience of it, which NLP-based tools could help to reconcile. The early findings suggest there may be a recognizable "lonely speech" that could be used to detect loneliness in older adults, improving how clinicians and families assess and treat the condition, especially during times of physical distancing and social isolation.
The study, said the authors, demonstrates the feasibility of using natural language pattern analyses of transcribed speech to better parse and understand complex emotions like loneliness. They said the machine-learning models predicted qualitative loneliness with 94 percent accuracy.
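The paper does not publish its pipeline, but the general idea of scoring transcribed speech for loneliness-related features can be illustrated with a minimal sketch. Everything below is hypothetical: the word list, the features, and the threshold rule standing in for a trained classifier are invented for illustration and are not the authors' actual model.

```python
# Illustrative sketch only: toy linguistic features from a transcript.
# Word list, feature set, and threshold are invented, not from the study.

SAD_WORDS = {"lonely", "alone", "sad", "miss", "empty"}

def extract_features(transcript: str) -> dict:
    """Compute toy speech features: response length and sadness-word rate."""
    words = [w.lower().strip(".,!?") for w in transcript.split()]
    n = len(words)
    sad = sum(1 for w in words if w in SAD_WORDS)
    return {"length": n, "sadness_rate": sad / n if n else 0.0}

def flag_lonely(features: dict, rate_threshold: float = 0.05) -> bool:
    """Hypothetical rule standing in for a trained machine-learning classifier."""
    return features["sadness_rate"] >= rate_threshold

features = extract_features("I feel lonely most evenings and I miss my friends.")
print(features, flag_lonely(features))
```

A real system would replace the hand-built word list with learned sentiment and emotion models and feed many such features into a supervised classifier trained against the UCLA Loneliness Scale scores.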
"Our IBM-UC San Diego Center is now exploring NLP signatures of loneliness and wisdom, which are inversely linked in older adults. Speech data can be combined with our other assessments of cognition, mobility, sleep, physical activity and mental health to improve our understanding of aging and to help promote successful aging," said study co-author Dilip Jeste, MD, senior associate dean for healthy aging and senior care and co-director of the IBM-UC San Diego Center for Artificial Intelligence for Healthy Living.
Varsha D. Badal et al, Prediction of Loneliness in Older Adults using Natural Language Processing: Exploring Sex Differences in Speech, The American Journal of Geriatric Psychiatry (2020). DOI: 10.1016/j.jagp.2020.09.009
Source: University of California – San Diego, "Researchers use artificial intelligence tools to predict loneliness" (September 24, 2020).
SciBite’s artificial intelligence (AI) software platform is designed to help pharmaceutical researchers and other life-science professionals parse through their data to unlock useful insights. According to the company, the platform pairs machine learning with ontology-based semantic capabilities.
James Malone (JM), SciBite’s chief technology officer, spoke with Outsourcing-Pharma about the progress of AI use and understanding in the pharma industry, and how the company’s AI technology seeks to build upon previous technological capabilities.
OSP: Please talk a bit about the evolution of AI’s use in life sciences—how long it’s been present, how its understanding and application has changed in the industry, and what might lie ahead?
JM: There is a broad spectrum of approaches in AI, some of which have been used for a long time in life sciences. For instance, knowledge engineering using ontologies to describe metadata, expert systems for helping triage symptoms online, and machine learning for image analysis.
Most recently the innovation in deep learning, combined with availability of big data and powerful compute, has provided huge improvements in the performance of some of these approaches. This is particularly true of areas such as language comprehension where they now represent the state of the art. It is likely these approaches will be increasingly combined into software in the near future and that scientists will benefit from the innovation without having to become deep learning experts.
In some domains, the future is already here, with voice recognition software commonplace in many applications. In the area of semantics, this may include approaches in harmonizing electronic medical records, analyzing self-reported patient data, and enabling natural language questions to be asked of large data stores.
OSP: What are some of the challenges the industry has faced that SciBiteAI is designed to help overcome?
JM: The primary goal of SciBiteAI is to combine our expertise in semantics and life sciences to enable as broad an audience as possible to benefit from the machine learning approaches we are offering. One of the biggest barriers to building machine learning models is obtaining high-quality training data.
Our existing technology means we are able to identify and create relevant training sets in an efficient and accurate manner. Our understanding of biomedical entities – drugs, diseases, genes, assays, etc. – is encoded in our ontologies and in turn is built into the machine learning models we derive from them.
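The ontology-driven labeling described here can be pictured with a small sketch. The dictionary, term categories, and example sentence below are invented for illustration; SciBite's actual ontologies and tagging pipeline are far richer and are not shown here.

```python
# Hypothetical sketch of dictionary-based entity tagging, the kind of
# ontology-driven labeling that can bootstrap training data for
# machine learning models. Terms and categories are invented examples.

ONTOLOGY = {
    "aspirin": "DRUG",
    "ibuprofen": "DRUG",
    "headache": "DISEASE",
    "migraine": "DISEASE",
}

def tag_entities(sentence: str) -> list:
    """Label each known term in the sentence with its ontology class."""
    tagged = []
    for raw in sentence.split():
        term = raw.lower().strip(".,;:")
        if term in ONTOLOGY:
            tagged.append((term, ONTOLOGY[term]))
    return tagged

print(tag_entities("Aspirin is often taken for migraine."))
```

Sentences tagged this way can serve as automatically generated training examples for a statistical entity-recognition model, sidestepping much of the manual annotation effort.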
We are building our life sciences understanding into SciBiteAI – this is what we call semantics-based deep learning. A huge amount of human understanding is already encoded in a computer-readable form via ontologies. However, many AI companies don’t utilize this resource and are in essence using AI with “one hand tied behind its back”; we’re asking AI to make predictions and it’s much better to arm it with what we know already rather than ask it to work “in the dark”.
This combined strategy has been shown many times to outperform a single approach and is perhaps most famously demonstrated as the strategy that Watson used to win Jeopardy. Given SciBite’s vast ontology resources, we can create workflows that comprehend scientific data and extract entities and patterns pertinent to those working in the field with many possible applications, for instance drug-adverse event detection, finding biomarkers or identifying novel biologics in text.
OSP: Why is incorporating AI technology that does not require becoming an AI wizard beneficial to life-science users?
JM: The ethos of SciBite is to enable the widest audience possible to benefit from advances in semantic technology. Tools such as TERMite for advanced named entity recognition and CENtree for democratizing enterprise ontology management have brought technology often seen as the domain of experts to a large audience of scientists, researchers, and application developers. SciBiteAI follows this same pattern, enabling simple calls to the tool that exploit a lot of powerful deep learning under the hood.
The alternative, collecting data, building training sets, writing code to train and tune models, and then wrapping the results up for consistent use by others, can represent a significant time investment and a barrier for many. Data is an incredibly important asset for everyone working in the life science field, from big pharma to clinics to academic groups. Maximizing the value of this data to anyone using it is our mission.
Another aspect of this is systems integration and in particular “productionizing” AI. Many AI models are developed to address specific questions raised in a particular experiment or study. As such there is less attention paid to how such models are deployed or re-used within other applications.
Our ultimate goal is to have services based on AI that are integrated into day-to-day scientific applications – indeed the user may not even know there is an AI-based algorithm operating. All they get is a system that does what they expect. For instance, smart data-entry systems that understand what users are entering into form fields and modify the form's behavior based on a continuing assessment of what the user is trying to do.
OSP: Can you share any examples of the SciBiteAI technology being put to use in a real-world situation?
JM: We are about to publish a study on the use of the technology in accurately identifying novel interactions between biological molecules, distinguishing inconsequential mentions (e.g., two molecules appearing in the same list) from significant events (e.g., 'X' activates 'Y'). This is a key part of generating computable knowledge and a hard problem for the field, but these advances will lead to better accuracy of extracted facts and consequently better insights and productivity.
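The co-mention versus interaction distinction can be made concrete with a toy sketch. The verb list, the pattern matching, and the example sentences below are invented for illustration; SciBite's published approach uses deep learning rather than this simple rule.

```python
# Toy sketch of the distinction between a mere co-mention of two
# molecules and a stated interaction ("X activates Y"). The verb list
# and entity-verb-entity pattern are invented illustrations, not
# SciBite's actual deep-learning method.

INTERACTION_VERBS = {"activates", "inhibits", "binds", "phosphorylates"}

def find_interactions(sentence: str, entities: set) -> list:
    """Return (subject, verb, object) triples where two known entities
    are directly joined by an interaction verb."""
    tokens = [t.strip(".,;") for t in sentence.split()]
    triples = []
    for i in range(len(tokens) - 2):
        subj, verb, obj = tokens[i], tokens[i + 1], tokens[i + 2]
        if subj in entities and verb.lower() in INTERACTION_VERBS and obj in entities:
            triples.append((subj, verb.lower(), obj))
    return triples

genes = {"TP53", "BAX", "MDM2"}
print(find_interactions("TP53 activates BAX in this assay.", genes))
print(find_interactions("TP53, BAX and MDM2 were measured.", genes))
```

The second sentence mentions all three molecules but asserts no relationship between them, so a system that only counted co-occurrence would over-report interactions; recognizing the linguistic structure of the claim is what makes the extracted facts computable.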
OSP: What would you like to add about the technology that I didn’t touch upon above?
JM: As with any technology area that makes rapid advances, there is much excitement, a degree of hype and a lot of potential. Our approach to utilizing these innovations, in deep learning in particular, is to cherry-pick the most suitable for a given task, ensure they offer real improvements and deploy them appropriately.
We don’t see machine learning as a panacea for all data challenges. SciBiteAI offers an exciting addition to the SciBite suite and it is the combination of our tools for making data FAIR (Findable, Accessible, Interoperable, Reusable) across an organization and applying them to critical questions where we see SciBiteAI fulfilling a valuable need.