The overall diagnostic accuracy of smartphone applications designed to help nonclinicians identify malignant skin lesions was highly variable, with three of the four applications wrongly classifying 30% of melanomas, according to results of a study published in JAMA Dermatology.
“In dermatology, several applications are available that offer educational information about melanoma and skin self-examination and that aid the user in tracking the evolution of individual skin lesions,” the researchers, led by Joel A. Wolf, BA, of the University of Pittsburgh Medical Center, wrote in their discussion of the results. “However, the applications we evaluated in our study go beyond aiding patients in cataloging and tracking lesions and additionally give an assessment of risk or probability that a lesion is benign or malignant.”
The researchers wrote that the findings of this study were of “particular concern” because people may be substituting the findings of these applications for medical attention.
Wolf and colleagues used existing images of 188 lesions, 60 of which were melanomas. All images were submitted to and reviewed by four different smartphone applications that provide feedback to the user about the likelihood of malignancy.
Results indicated that the sensitivity of the applications for detecting malignancy varied greatly, from 6.8% (95% confidence interval [CI], 2.2%–17.3%) to 98.1% (95% CI, 88.8%–99.9%). The specificity of the applications had a narrower range of variability but still differed considerably from application to application, from a low of 30.4% (95% CI, 22.1%–40.3%) to a high of 93.7% (95% CI, 87%–97.2%).
The application with the highest specificity had the lowest sensitivity, and vice versa. The highest sensitivity for melanoma diagnosis was observed for an application that sent the image directly to a board-certified dermatologist for analysis, whereas the lowest sensitivities were observed for applications that used automated algorithms to analyze images.
The positive predictive values of the applications ranged from 33.3% to 42.1%, and the negative predictive values from 65.4% to 97%.
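These four measures all derive from the same 2×2 confusion matrix of app verdicts versus actual diagnoses. The sketch below shows how they are computed; the lesion counts in the example are purely illustrative and are not taken from the study.

```python
def diagnostic_metrics(tp: int, fn: int, tn: int, fp: int):
    """Standard diagnostic-accuracy measures from a 2x2 confusion matrix.

    tp: melanomas correctly flagged as high risk
    fn: melanomas wrongly classified as benign
    tn: benign lesions correctly classified as benign
    fp: benign lesions wrongly flagged as high risk
    """
    sensitivity = tp / (tp + fn)  # share of melanomas the app catches
    specificity = tn / (tn + fp)  # share of benign lesions the app clears
    ppv = tp / (tp + fp)          # probability a "malignant" verdict is right
    npv = tn / (tn + fn)          # probability a "benign" verdict is right
    return sensitivity, specificity, ppv, npv


# Hypothetical app: flags 42 of 60 melanomas and clears 90 of 128 benign lesions.
sens, spec, ppv, npv = diagnostic_metrics(tp=42, fn=18, tn=90, fp=38)
print(f"sensitivity={sens:.1%}  specificity={spec:.1%}  "
      f"ppv={ppv:.1%}  npv={npv:.1%}")
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how common melanoma is in the sample: the same app would show different predictive values on a lesion set with a different melanoma prevalence than the 60-of-188 used here.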
The application with the best-performing results correctly classified only 18 of the 60 melanomas.
“Technologies that improve the rate of melanoma self-detection have potential to improve mortality due to melanoma and would be welcome additions to our efforts to decrease mortality through early detection,” the researchers concluded. “However, extreme care must be taken to avoid harming patients in the process.”