
The Hidden Hazards of Smart Device Medical Advice

Boris Babic, INSEAD Assistant Professor of Decision Sciences; Sara Gerke, Research Fellow at Harvard Law School; Theodoros Evgeniou, INSEAD Professor of Decision Sciences and Technology Management; and I. Glenn Cohen, Deputy Dean at Harvard Law School

Diagnostic mobile medical apps call for increased regulatory intervention, even if they do not dispense advice or treatment.

For many of us, our electronic device can be a communications lifeline, entertainment system and professional networking hub. If trends continue, it may become our health advisor as well.

Direct-to-consumer (DTC) medical apps are a growing segment of the USD10 billion market for healthcare solutions incorporating machine learning (ML) and artificial intelligence (AI). Most are designed to flag symptoms that may require attention from a healthcare professional. For instance, the Apple Watch’s heartbeat sensor periodically checks for irregular rhythms associated with atrial fibrillation (AFib), a disorder that can cause strokes and hospitalisation.

Despite their increasing accessibility to consumers, these apps have yet to generate much interest from regulators. At first glance, this may seem sensible. The apps do not claim to dispense advice or treatment, but rather to provide notifications of possible early warning signs.

It is short-sighted, however, to let DTC medical apps slip under the regulatory radar. As we describe in a recent article for Nature, they could turn out to generate costs for which insurers or taxpayers might ultimately be responsible.

From a standard medical regulatory perspective, DTC medical apps are particularly advantageous because they cheaply reduce the rate of false negatives – i.e. the number of people who unknowingly carry illnesses requiring treatment. But from the standpoint of safeguarding healthcare infrastructure, false positives – the number of people who unnecessarily seek treatment – are also a problem to be reckoned with. The manifold benefits of identifying disease in the early stages, when it can be easily treated, should be measured against the costs incurred by skittish patients booking needless clinical appointments on the advice of their smartphone or other device.

Decision theory suggests that the risk of false positives is far from negligible here. For example, a famous 1998 study found that patients believed positive diagnostic test results to be much more indicative of disease than they actually are, often because they ignored the associated base rate in the population. The flexibility and ease of use of DTC medical apps further heighten the probability of false positives.
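To make the base-rate effect concrete, here is a minimal sketch in Python. The prevalence, sensitivity and specificity figures are purely illustrative assumptions, not numbers from the studies cited; the point is only to show how a rare condition drags down the meaning of a positive result.

```python
# Bayes' rule for a rare condition. All numbers below are illustrative
# assumptions chosen for the example, not figures from any cited study.

prevalence = 0.005   # base rate: 0.5% of the population has the condition
sensitivity = 0.99   # P(positive | disease): true positive rate
specificity = 0.95   # P(negative | no disease): false positive rate is 5%

# Overall probability of a positive result: true positives plus false positives
p_positive = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)

# Posterior probability of disease given a positive result (Bayes' rule)
p_disease_given_positive = prevalence * sensitivity / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.1%}")  # roughly 9%
```

In this scenario, roughly nine out of ten positive flags are false alarms – precisely the intuition that patients in the 1998 study lacked.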

Consider an app that purports to scan one’s skin lesions for signs of cancer based on photos taken with a smartphone camera. Without a limit on the number of times a single lesion can be checked, there is a greater likelihood that one of the images will be flagged as requiring medical attention. When that occurs, people are likely to anchor on the one positive result. A 2010 study on genetic risk information revealed that people grossly overestimate their risk of contracting a severe illness such as oesophageal cancer once they learn they are susceptible to it.
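Unlimited rechecking compounds the problem quickly. As a rough sketch with assumed numbers: if each scan of a healthy lesion independently carries a 5% chance of a false flag, ten scans raise the chance of at least one alarming result to about 40%.

```python
# Probability that repeated scans of a healthy lesion yield at least one
# false flag. The 5% per-scan rate and the independence of scans are both
# illustrative simplifications, not properties of any real app.

false_positive_rate = 0.05  # assumed chance one scan wrongly flags a healthy lesion

for n_scans in (1, 5, 10, 20):
    # P(at least one flag) = 1 - P(no flag on any of the n scans)
    p_any_flag = 1 - (1 - false_positive_rate) ** n_scans
    print(f"{n_scans:2d} scans -> {p_any_flag:.0%} chance of at least one false flag")
```

Repeated photos of the same lesion are of course not fully independent, so this overstates the effect somewhat, but the direction holds: the more freely users can retest, the more false positives accumulate.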

Moreover, DTC medical apps are often marketed to a generally young and healthy demographic, yet target relatively rare diseases such as AFib. When so few users actually have the condition, even an accurate test will produce mostly false alarms – an ideal combination for generating false positives.

What regulators can and should do

To prevent the potentially significant costs of false positive judgments arising from large-scale use of DTC medical apps, regulators should intervene early.

We identify three specific ways they could take action. First, they should encourage developers to perform behavioural research on how consumers respond to DTC medical apps in the real world. While medical device developers already work hard on improving the sensitivity and specificity of their diagnostic systems, without clinical trials or field research we cannot sufficiently understand how such technology will fare in the hands of imperfectly rational users.

Second, regulators could mitigate the cost of false positive verdicts by requiring that positive predictions be verified through a virtual appointment with a healthcare professional. Developers could be further required to bear a portion of the consultation costs. Such a requirement could be tied to experimental government initiatives such as Singapore’s recent telemedicine regulatory sandbox.

Third, regulators could give doctors the right to “prescribe” mobile medical apps to patients who may be at higher risk, thus keeping these apps out of the hands of the general public. In the case of AFib, the app could be activated only for patients of a certain age, or with a family history of the disorder. Something like this already exists in Germany, where healthcare costs incurred by certain medical apps are not covered unless, among other things, a doctor or insurer has prescribed their use. Our recommendation would be to rely on doctors’ judgement rather than insurers’, because medical professionals are best equipped to adjust the availability of the app in accordance with a patient’s existing risks, thereby significantly reducing the rate of false positive judgments.

In sum, we aim to highlight that absent regulatory intervention, free or cheap diagnostic medical information can generate significant social costs, which have been underappreciated by policy makers. As always, there is no such thing as a free lunch.

Boris Babic is an INSEAD Assistant Professor of Decision Sciences.

Sara Gerke is Research Fellow, Medicine, Artificial Intelligence, and Law at The Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School and The Project on Precision Medicine, Artificial Intelligence, and the Law (PMAIL).

Theodoros Evgeniou is a Professor of Decision Sciences and Technology Management at INSEAD. He has been working on machine learning and AI for almost 25 years.

I. Glenn Cohen is the Faculty Director, Petrie-Flom Center for Health Law Policy, Biotechnology & Bioethics at Harvard Law School where he is also the James A. Attwood and Leslie Williams Professor of Law and Deputy Dean.
