Need2Know - Digital Mental Health

Digital mental health includes mobile and web-based apps; digitally delivered interactions (e.g., via video calls, chats, artificial intelligence chatbots or virtual reality); and devices for assessing, predicting and monitoring health (e.g., wearables and smartwatches), used for: (i) information provision, (ii) screening and monitoring, (iii) intervention, and (iv) social support.

Here is what you “need to know” from the current evidence base.

The evidence at a glance

Which digital mental health solutions work?

The model that has shown the most promising evidence of effectiveness is that of digital navigators. This “blended model” of digital mental health care incorporates members of healthcare teams who are dedicated to supporting patient use of digital resources. Evidence shows that this support is fundamental to effectiveness.

Which digital mental health solutions are not effective?

The evidence indicates that digital tools and mobile phone-based interventions delivered with no supervision or support do not work better than existing clinical alternatives, offering no, or only modest, clinical improvement. Many apps and AI-driven solutions make unsupported claims and offer questionable content.

What is required for digital mental health solutions to be successful?

  • Integrate digital tools in treatment protocols (e.g. with digital navigators),

  • Develop organisational readiness (technological infrastructure, workflow, management support, etc.) and sustainable financial models,

  • Train the (mental) health workforce and offer supervision and support,

  • Ensure co-design with patients and service users.

What are the concerns to address in digital mental health solutions?

  • Ethics: Digital solutions carry concerns about privacy, confidentiality, safeguarding and information governance. Considerably more evidence is needed to show that these solutions can support marginalised populations with limited access to the internet and software.

  • Measurement and Data: Research is still developing standardised, validated assessments for these solutions. Developing systems for the protection of sensitive data is a priority, especially considering weaker data protection systems in certain countries.

  • Role of AI: A lack of evaluated, evidence-based research remains a key limitation.

  • Involvement: Co-production is a top priority for people with lived experience of mental health issues, in order to ensure appropriate design, implementation and testing.

What do people with lived experience want in digital solutions?

Overall, people with lived experience tend to view digital mental health positively for its potential to support mental health. However, it is important that these products offer additional support rather than replacing professional help, and that the information and guidance they provide is accurate and developed with professional involvement.

What does investment in new digital mental health solutions need to include?

  • True co-design that is sensitive to digital cultures and contexts,

  • Systems-level approaches that will help design for sustainability early on,

  • Large-scale evaluations,

  • Partnerships (with industry, academics and patients) to design for impact and scale.

Digital Mental Health: “Need to Know” Frequently Asked Questions

What is digital mental health?

Digital mental health is broadly seen as covering four types of services:

(i) information provision,

(ii) assessment for screening and monitoring,

(iii) intervention, and

(iv) social support.

More specifically, this can include mobile and web-based apps; digitally delivered interactions via, for example, video calls and chats, artificial intelligence chatbots or virtual reality; and devices for assessing, predicting and monitoring health (for example, wearables and smartwatches) (1–3).

What do we know about which digital interventions work?

As the number and availability of digital mental health tools increases, patients and clinicians see benefit only when these tools are engaging and well integrated into care (4). The model that has shown the most promising evidence of effectiveness is that of digital navigators. This “blended model” of digital mental health care (with navigators also called coaches, digital guides or clinical technology specialists) incorporates members of healthcare teams who are dedicated to supporting patient use of digital resources (4). Evidence shows that this support is fundamental to effectiveness (5). As the human relationship is seen as indispensable for the therapeutic alliance, this model enables a more open attitude among health professionals and greater acceptability among service users (6).

To note: As defined by this model, a digital navigator is a member of a healthcare team dedicated to assisting both patients and clinicians with the integration of technology, whose duties can include facilitating access to technology, teaching digital literacy skills, supporting telehealth visits, recommending health apps, troubleshooting technical issues, and interpreting clinically relevant data sourced from health apps (7).

What do we know about which digital interventions are not effective?

Digital tools have not become the panacea once hoped for, as unmet mental health needs continue to expand despite a proliferation of digital health technologies (4). The evidence indicates that digital tools and mobile phone-based interventions delivered with no supervision or support do not work better than clinical alternatives (such as already free and accessible resources like mood tracking, distraction games, or walking) and offer no, or only modest, clinically significant symptomatic improvement (5,8–10). The number of apps making unsupported claims, combined with the number of apps offering questionable content, warrants a cautious approach by both patients and clinicians in selecting safe and effective options (11). AI-driven solutions are a major focus of investment at the moment, but there is still a lack of evidence-based research, as these approaches are limited by a small pool of studies on their reliability, validity, and reproducibility (12).

What are the pre-conditions necessary for digital mental health solutions to be successful?

The relative lack of implementation research in digital mental health complicates implementation, especially when considering what works. For digital mental health interventions to meet their potential, they need to be embedded into established routine health care workflow processes in strengthened health systems. Hence, there are certain prerequisites, which are consistent with aligning health systems for universal coverage (6):

  • integrate digital tools in treatment protocols where applicable (e.g. blended care with digital navigators),

  • develop organisational readiness (technological infrastructure, workflow, management support, etc.) and sustainable financial models,

  • train the (mental) health workforce and offer supervision and support,

  • ensure co-design with patients and service users.

Digital therapeutics is an industry-created term that has little grounding in either health care regulation or research; the term is confusing, as it is very hard to evaluate the entire evidence base for digital mental health (13). Vast amounts of new patient data generated by technology, combined with constant care through synchronous and asynchronous telehealth, require new clinical workflows, practices and training for true integration (1).

What are the concerns that need to be addressed when investing in digital solutions?

  • Ethics

    • Ethical issues and potential harms are present in digital mental health. As well as common challenges (such as the choice of an appropriate placebo, and placebo/nocebo effects), digital intervention evidence and studies also carry concerns about privacy, confidentiality, safeguarding and information governance because of the nature of the data collected. These need to be carefully addressed to ensure that trust is maintained between patients and clinicians (14). At the moment, engagement and adherence remain challenges in participation in digital mental health interventions (15), and with their potential to serve a significantly wider portion of the population, it is necessary that the acceptability and feasibility research steps are not skipped, especially if it is to be shown that such interventions can serve marginalised populations across the “digital divide”, particularly those at disproportionate risk of mental illness (16,17). To address the ethical issues raised by the use of digital mental health interventions, it is important for all the stakeholders involved (funders, researchers, developers, users, and providers) to know their responsibilities and establish frameworks for their development and ethical use (18). It remains the case that many governments and companies purchase software for digital mental health that never gets used (19).

  • Measurement and Data

    • Objective engagement data (e.g. frequency of use) and standardised, validated assessments can contribute to an in-depth understanding of the impact that digital mental health interventions are having at both the organisational and individual levels (17). Such initiatives already exist: for example, the NIHR MindTech Healthcare Technology Co-operative, based in the Institute of Mental Health at the University of Nottingham, has set up a collaboration to develop a common set of criteria for evaluating digital mental health tools such as apps and mobile websites (20), whilst the Canadian Network for Mood and Anxiety Treatments has added app evaluation into its guidelines (21). Although studies have shown that participants are generally happy for data owners to share their health and social data if the purpose is transparent and the information would inform and improve health policy and practice, some reservations around digital data remain (22). Thus, development of systems for the protection of sensitive data should be a research priority, especially considering weaker data protection systems in certain countries (3).

  • Role of AI

    • Digital platforms and AI-driven apps are currently the most discussed solutions for intervention in high-prevalence conditions and recognition of symptoms, but a lack of evaluated, evidence-based research remains a disadvantage, as these approaches are limited by a small pool of studies on their reliability, validity, and reproducibility (12). More generally, scientists have sought to separate machines from humans via AI and to replace human decision-making with AI-based decision-making. On such a contentious topic, with continuous advancements in the field, the arguments for and against will have to remain core research questions for the foreseeable future, and such systems must be ethically assessed and tested before implementation (3,12). At the moment, evidence shows that, as noted above, the “blended model” of digital mental health solutions is what works.

  • Involvement

    • The need for co-production remains a top priority for people with lived experience of mental health issues, to ensure appropriate design, implementation and testing of digital tools (23). Collaboration is needed to support commissioners and decision-makers, and it must involve the people affected by such decisions (24).

What do people with lived experience say they want in terms of digital solutions?

This is an emerging research space, but overall people with lived experience tend to view digital mental health positively for its potential to support mental health. Some key themes, though, are important (25):

  • The products offer additional support, and are not used to replace professional help.

  • Vulnerable people are properly protected, particularly on data and finances.

  • If someone is in immediate need of help, there should be a clear pathway to get this.

  • The information and guidance they offer needs to be correct, with professional involvement.

When it comes to new digital innovations, it is critical that investment includes (26):

  • True co-design that is sensitive to digital cultures and contexts,

  • Systems-level approaches that will help design for sustainability early on,

  • Large-scale evaluations,

  • Partnerships (including with industry) to design for impact and scale.

What is the state of regulation in digital mental health?

In May 2024, the European Council approved the Artificial Intelligence Act, aiming to harmonise rules on artificial intelligence (27). This legislation is the first of its kind in the world and follows a ‘risk-based’ approach, which means the higher the risk of causing harm to society (which includes psychological harm), the stricter the rules. The Act expects lawful practices to be followed in the context of medical treatment such as psychological treatment of a mental illness, in accordance with the applicable law and medical standards.

Whilst the Act clarifies that digital and AI systems intended for therapeutic use should follow separate medical or safety guidelines, it makes explicit mention of serious concerns about the scientific basis of AI systems aiming to identify or infer emotions, particularly as expressions of emotion vary considerably across cultures and situations, and even within a single individual. Among the key shortcomings of such systems are limited reliability, lack of specificity and limited generalisability (28).

In the United Kingdom, the Medicines and Healthcare products Regulatory Agency (MHRA) is now leading a three-year project focusing on effective regulation and evaluation of digital mental health technology (29). In the United States, a reorganisation at the Food and Drug Administration has led to the development of a Digital Health Center of Excellence that will also address regulatory approaches (30).
