
How Artificial Intelligence Will First Find Its Way Into Mental Health

By Bruce Bassi

Artificial intelligence (AI) startup Woebot Health recently made the news for its chatbot's disastrously flawed responses to text messages mimicking a mental health crisis. The company, which raised $90 million in a Series B round, responded that the bot is not intended for use during crises. In other words, leadership woefully expects patients, who may not be thinking entirely rationally, to recognize a crisis, abandon their usual channel of communication, and reach out to an alternative system.

While physicians are held responsible for harm their treatment inflicts on patients, startup companies seeking to enter this space are not held to the same standard. To make matters worse for vulnerable patients, these systems are also not held to the same privacy standards. Entering the AI space and interacting directly with patients is especially complicated because many patients routinely experience attenuated crises that fall below the threshold of calling 911; a bot not equipped to handle an outright crisis is most likely not equipped to handle the throes patients experience on a daily basis.

Despite the risk of artificially and unintentionally bungling patient crises, mental health startups lured into this space raised a total of $1.3 billion in the first half of 2022. Unfortunately, communicating directly with patients poses many difficulties, and AI is not yet ready for the task. Words can be slang or carry alternative meanings. The meaning of a sentence may change depending on the patient's history, cultural values, gestures, prosody, and tone of voice. Further, a therapeutic session must account for a patient's subconscious motives, which AI cannot easily elucidate.
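
To see how literal word matching misleads, consider a deliberately naive keyword-based crisis detector; the keywords, scoring, and messages below are invented for illustration and do not reflect any real product's logic. Slang trips the alarm while flat, withdrawn language sails through:

```python
# A deliberately naive sketch of why literal keyword matching fails on
# mental health text. Keywords, scoring, and messages are invented for
# illustration; no real product's logic is implied.
CRISIS_KEYWORDS = {"kill", "die", "hopeless", "end it"}

def naive_crisis_score(message: str) -> int:
    """Count crisis keywords, ignoring context, negation, slang, and tone."""
    text = message.lower()
    return sum(1 for kw in CRISIS_KEYWORDS if kw in text)

messages = [
    "That set killed, I'm dying laughing",  # slang: scores 2, no crisis at all
    "I guess things are fine. Whatever.",   # flat, withdrawn: scores 0
]

for msg in messages:
    print(naive_crisis_score(msg), msg)
```

Real systems use far more sophisticated language models, but the underlying gap is the same: surface features of text are a weak proxy for what a patient actually means.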

As much as artificial intelligence may be able to detect the literal meaning of words, it will not be able to understand what is left unsaid to the extent that a human therapist can. Given the many difficulties involved in replacing human therapists, artificial intelligence is more likely to have an impact behind the scenes, in other ways.

Although relying on an artificial bot to interact with patients poses many challenges, there are still areas where artificial intelligence can augment decision-making. Health insurance companies already see AI's value in reducing costs by identifying patients who are high utilizers of health care services. Prescribing providers routinely receive notifications from health insurance companies regarding irregular prescription refills, encouraging the discontinuation of prescriptions that are not being used optimally. Indeed, large insurance companies possess sizable datasets that are already being analyzed to predict the onset of Alzheimer's, diabetes, heart failure, and chronic obstructive pulmonary disease (COPD).

In fact, AI has already earned FDA approval for specific uses, and it currently shines when applied to a very specific clinical issue. AI systems are initially being sought to enhance clinical judgment rather than replace it. Ideally, AI will improve clinician productivity by handling mundane tasks and alerting the clinician to equivocal findings that require further investigation by a human. According to insurance company Optum, the top three AI applications are monitoring data with wearables, accelerating clinical trials, and improving the accuracy of health care coding. The current goal is not to increase the amount of data but to present data in a way that is meaningful and actionable for the clinician.
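
As a sketch of what insurer-side risk flagging might look like, the fragment below trains a simple model to predict high utilization from claims counts; the features, synthetic data, and threshold are all assumptions for illustration, not any company's actual pipeline:

```python
# A minimal sketch of flagging likely high utilizers from claims data.
# All features, data, and thresholds are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic claims features per member: ER visits, prescription fills,
# and chronic conditions on record over the past year.
X = np.column_stack([
    rng.poisson(1.0, n),
    rng.poisson(6.0, n),
    rng.poisson(0.8, n),
])
# Synthetic label: heavier past utilization makes future high utilization likelier.
y = X @ np.array([0.8, 0.15, 0.9]) + rng.normal(0.0, 1.0, n) > 2.5

model = LogisticRegression().fit(X, y)

new_member = np.array([[3, 12, 2]])         # hypothetical member record
risk = model.predict_proba(new_member)[0, 1]
if risk > 0.7:                              # outreach threshold a care team might tune
    print(f"Flag for care-management outreach (predicted risk {risk:.2f})")
```

Note that the model makes no clinical decision; it only surfaces a member for a human care team to review, which matches the augment-not-replace goal described above.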

Artificial intelligence will begin to impact providers through informative tips and alerts, augmenting decision-making and reducing human error. The practice of medicine is full of rote tasks that are ripe to be offloaded to a computer. For example, one common application of AI is the evaluation of retina images, which frees ophthalmologists to focus on areas of medicine they find more rewarding. As AI makes its way into health care, clinicians should worry less about whether they will be replaced and more about how their practice will continue to evolve over time, hopefully for the better.
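
To make the retina example concrete, here is a minimal sketch of how such a triage model might be framed: a standard CNN that routes possible findings to a human rather than issuing a diagnosis. The backbone, labels, and threshold are illustrative assumptions, not any approved product's design:

```python
# A minimal sketch of an image-triage model for retina photographs.
# Backbone, labels, and threshold are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # assumption: "refer to ophthalmologist" vs. "no referral"

model = models.resnet18(weights=None)  # untrained backbone for the sketch
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Stand-in for a preprocessed fundus photograph: batch of one 3x224x224 image.
image = torch.randn(1, 3, 224, 224)

model.eval()
with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)

# The model never diagnoses; it only routes equivocal cases to a human.
if probs[0, 1] > 0.5:
    print("Flagged for ophthalmologist review")
else:
    print("No referral suggested by the model")
```

In deployment such a model would be trained and validated on labeled fundus images; the point here is only the division of labor, with the model filtering and the clinician deciding.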

One difficulty in applying AI to the provider space is that medical records are not uniformly structured, and documentation styles vary widely from provider to provider. Medical records may also contain inherent bias, depending on the patient population most typical of a given practice. Bias fed into an AI system will yield a biased result. Thus, what an AI system does is not the only factor that matters; how it is applied and what is done with its results shape its impact just as much. Tips and alerts that appear at moments when the clinician is distracted or accustomed to viewing another screen may be overlooked. The user experience of AI will affect alert fatigue, a well-known phenomenon that has recently led to some landmark cases. AI, then, is only as impactful as the medium in which it is delivered and the state of the user at the time it is presented.
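
As a toy demonstration of "bias in, bias out" (all data and numbers below are fabricated), consider a risk model trained on records that over-represent one clinic population; the under-sampled group's risk ends up systematically underestimated:

```python
# A toy illustration of "bias in, bias out": training data drawn mostly
# from one clinic population. All numbers are fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, base_risk):
    """Synthetic patients: one lab value plus a true outcome near base_risk."""
    x = rng.normal(0.0, 1.0, (n, 1))
    y = rng.random(n) < base_risk + 0.2 * (x[:, 0] > 0)
    return x, y

# Training data over-represents group A: 900 records vs. 100 for group B.
xa, ya = make_group(900, 0.10)
xb, yb = make_group(100, 0.30)  # group B has higher true risk but is barely sampled
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# On balanced fresh samples, predictions hug the pooled, group A-dominated rate.
for name, (x, y) in [("A", make_group(500, 0.10)), ("B", make_group(500, 0.30))]:
    mean_pred = model.predict_proba(x)[:, 1].mean()
    print(f"group {name}: true rate {y.mean():.2f}, mean predicted risk {mean_pred:.2f}")
```

Because group B barely appears in training, the model's predictions track group A's base rate and understate group B's risk, exactly the failure mode a biased record set produces.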

If we have learned anything from the newsworthy AI blunders, it is that while we may not hold AI to the same privacy standards as humans, we do hold it to a higher standard than typical human performance. We expect AI not only to perform better than humans but to harm no patients at all; even a single patient harmed by an AI system would be entirely unacceptable. So, for now, AI will continue to work its alchemic magic in the background, quietly taking responsibility, or not, for how it affects health care.