Caitlyn Vlasschaert, PhD student (Translational Medicine) and Queen’s Internal Medicine Resident
In 1997, “Man Loses to Computer” was splashed across newspaper headlines worldwide after Garry Kasparov, the reigning world chess champion, was defeated by IBM's Deep Blue. The breadth of innovation achievable through artificial intelligence (AI) has since both captivated and frightened us across its many implementations. During his Medical Grand Rounds lecture on February 18th, Dr. David Holland walked us through the history of AI and shared what can (and cannot) be done with AI in 2021. This framework allowed us to explore the question of the hour: will AI replace physicians?
The roots of artificial intelligence (AI) can be traced to Alan Turing, who derived the framework for machine-based computation with his invention of the Turing Machine in 1936. Technological advances have since broadened the realm of feasibility in digital computing. The number of components that can be packed onto an integrated circuit has doubled roughly every two years, a trend known as Moore's law [1], and many of us now hold powerful compact computers in the palms of our hands. AI-based pattern recognition software installed on our cellphones (and other advanced computers) enables digitization of the living world. These perceptron-based algorithms convert faces, drawings, and speech into digital data, analogous to human sensory perception. How does the computer then discriminate what exactly this input is? Artificial neural networks, a type of machine learning named for its functional similarity to the interconnected neurons of the brain, can be trained to classify data. As the classification task increases in complexity, multi-layered neural networks are required; this is referred to as deep learning. Deep learning pattern recognition algorithms are being adapted to assist with visual diagnostics in medicine, from interpreting ECGs and radiographic images to identifying skin lesions [2–4].
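To make the idea concrete, here is a minimal, illustrative sketch of a single perceptron, the building block mentioned above. This toy is not any algorithm from the lecture: it learns to classify the logical AND of two binary inputs, whereas real networks stack many such units into layers (deep learning) and train them with backpropagation.

```python
import numpy as np

# Toy training data: the four input pairs and their labelled outcomes (AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)  # weights, loosely analogous to synaptic strengths
b = 0.0                 # bias (firing threshold)
lr = 0.1                # learning rate

def predict(x):
    # Step activation: "fire" (output 1) if the weighted sum exceeds 0.
    return 1.0 if x @ w + b > 0 else 0.0

# Perceptron learning rule: nudge the weights toward each misclassified example.
for epoch in range(100):
    for xi, yi in zip(X, y):
        err = yi - predict(xi)
        w += lr * err * xi
        b += lr * err

print([predict(xi) for xi in X])  # -> [0.0, 0.0, 0.0, 1.0]
```

Because AND is linearly separable, a single perceptron suffices; tasks like recognizing a face require the multi-layered networks described above.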
Deep learning is also being used to predict medical outcomes using electronic health record (EHR) data. DeepMind, a Google-owned AI company that rose to prominence in 2016 for its successful AlphaGo program [5], recently developed an application called Streams that can forecast acute kidney injury (AKI) from laboratory data and warn care providers of imminent renal threats [6]. These deep learning algorithms are trained using large amounts of data with labelled outcomes. In some cases, data labelling is relatively simple: AKI, for example, is identified by numerical changes in serum creatinine. But in other areas of medicine, data labelling can be a challenge, as Dr. David Maslove shared during the Q&A period. Dr. Maslove directs a machine learning-focused critical care research program at Queen’s (http://www.conduitlab.org/). He noted, for instance, that developing a neural network that accurately identifies complex heart rhythms from ICU tracings requires training the algorithm on many labelled reference tracings, which demands significant human resources.
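To illustrate why creatinine-based labelling is "relatively simple", here is a hypothetical sketch of an outcome-labelling function. The thresholds follow the KDIGO creatinine criteria, which is an assumption on my part (the article says only that AKI is identified by numerical changes in serum creatinine), and the required time windows (48 hours / 7 days) are omitted for simplicity.

```python
def label_aki(baseline_cr: float, current_cr: float) -> bool:
    """Return True if a creatinine change meets AKI criteria.

    Values are in umol/L. KDIGO criteria (assumed here, not taken
    from the lecture): an absolute rise of >= 26.5 umol/L (0.3 mg/dL),
    or a rise to >= 1.5x the baseline value.
    """
    absolute_rise = current_cr - baseline_cr >= 26.5
    relative_rise = current_cr >= 1.5 * baseline_cr
    return absolute_rise or relative_rise

# Example labels, as might be generated for a training set:
print(label_aki(100, 135))  # True: absolute rise of 35 umol/L
print(label_aki(100, 118))  # False: rise of 18, only 1.18x baseline
print(label_aki(60, 95))    # True: more than 1.5x baseline
```

A rule this mechanical can label millions of EHR records automatically; by contrast, labelling complex ICU rhythm tracings requires expert human annotation, which is the bottleneck Dr. Maslove described.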
After his presentation, Dr. Holland sat down with the graduate students in the Translational Medicine (TMED) Program. We discussed several ethical implications of AI in healthcare, including data privacy and algorithmic bias. As mentioned, these algorithms are trained on user-defined data; if those data represent a biased sample, that bias can be perpetuated in practice [7]. Conversely, if carefully implemented, AI algorithms can help us address racial disparities and unconscious bias in healthcare, and can assist in the delivery of care to underserviced regions globally [8].
As of 2021, AI capabilities are restricted to task-oriented functions. As such, Dr. Holland is confident that the complex roles of many healthcare workers, including physicians, will not be replaced by AI anytime soon. With the advent of AI-assisted visual diagnostics and flagging of EMR trends, healthcare providers may, however, benefit from formal training in AI-assisted medicine in order to responsibly harness its benefits in their practice.
References
- Moore GE. Cramming more components onto integrated circuits. Electronics. 1965 Apr 19; 38(8). Available at: https://newsroom.intel.com/wp-content/uploads/sites/11/2018/05/moores-l….
- Ribeiro AH, et al. Automatic diagnosis of the 12-lead ECG using a deep neural network. Nat Commun. 2020 Apr 9;11(1):1760.
- Montagnon E, et al. Deep learning workflow in radiology: a primer. Insights Imaging. 2020 Feb 10;11(1):22. PMID: 32040647.
- Esteva A, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017 Feb 2;542(7639):115-118.
- DeepMind. AlphaGo: the story so far. [Website] Available at: https://deepmind.com/research/case-studies/alphago-the-story-so-far.
- Tomašev N, et al. A clinically applicable approach to continuous prediction of future acute kidney injury. Nature. 2019 Aug;572(7767):116-119. PMID: 31367026.
- To SB. Humans are the cause of bias in AI, but we’re also the solution. Forbes. 2020 Mar 3. [Website] Available at: https://www.forbes.com/sites/forbestechcouncil/2020/03/03/humans-are-the-cause-of-bias-in-ai-but-were-also-the-solution/
- Pearl R. How AI can remedy racial disparities in healthcare. Forbes. 2021 Feb 16. [Website] Available at: https://www.forbes.com/sites/robertpearl/2021/02/16/how-ai-can-remedy-r…