Publication:
DXplain Mobile: An Assessment of a Smartphone-Based Expert Diagnostic System

Date

2016-07-27

Citation

Hamilton, Baker H. 2016. DXplain Mobile: An Assessment of a Smartphone-Based Expert Diagnostic System. Master's thesis, Harvard Medical School.

Abstract

Objectives: Clinical decision support tools may help reduce diagnostic error, and improving the accessibility and usability of these tools may encourage more frequent use. We have developed a novel, self-contained, iPhone-based implementation of DXplain, a popular diagnostic decision support system. In this study we evaluated the diagnostic agreement of this new application with the standard web-based version of DXplain.

Methods: A native DXplain application for iOS was developed, with modifications made to DXplain’s original database to maintain acceptable performance within the more limited form factor of a smartphone. Each of the 41 Clinical Pathological Cases (CPCs) from the New England Journal of Medicine (NEJM) in 2015 was entered into both the smartphone application and the standard version of DXplain, and the ranking of each case’s final diagnosis was compared.

Results: DXplain’s database contained 52 of the 65 final diagnoses found within the CPCs (80%), and the final diagnosis appeared within the calculated differential of both versions of the software in 38 of these instances (73%). In 21 of these cases (55%), the iOS application and the web version of DXplain agreed exactly on the position of the final diagnosis, and the weighted kappa score for agreement across the 38 cases was 0.83 (95% CI 0.76 - 0.90).

Conclusions: DXplain for iOS appears to have strong agreement with the traditional web version of DXplain. Diagnostic discrepancies on actual cases should be explored to improve the underlying algorithms and knowledge base. Additional usability testing should also be performed, with a possible pilot study of user interaction and satisfaction prior to an official release of the application.
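
The agreement statistic in the Results can be illustrated with a weighted Cohen's kappa computed over paired rank positions. The sketch below is not taken from the thesis: it uses hypothetical rank data, scikit-learn's linear-weighted kappa, and a bootstrap percentile confidence interval, since the exact weighting scheme and interval method used in the study are not stated in this abstract.

    # Sketch of the agreement analysis described above, on hypothetical data.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(seed=0)

    # Hypothetical paired data: rank of each case's final diagnosis in the
    # differential from the web version and from the iOS version (38 cases).
    web_ranks = rng.integers(1, 11, size=38)
    ios_ranks = np.clip(web_ranks + rng.integers(-2, 3, size=38), 1, 10)

    # Linear-weighted Cohen's kappa for agreement on rank position.
    kappa = cohen_kappa_score(web_ranks, ios_ranks, weights="linear")

    # Bootstrap 95% confidence interval over the paired cases.
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, len(web_ranks), size=len(web_ranks))
        boot.append(cohen_kappa_score(web_ranks[idx], ios_ranks[idx],
                                      weights="linear"))
    low, high = np.percentile(boot, [2.5, 97.5])

    print(f"weighted kappa = {kappa:.2f} (95% CI {low:.2f} - {high:.2f})")

With only 38 paired cases, a resampling interval of this kind is a simple way to obtain a 95% CI comparable in form to the one reported.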

Keywords

Decision Support Systems, Clinical, Medical Informatics Applications

Terms of Use

This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the Terms of Service.
