K Ganapathy, President, Apollo Telemedicine Networking Foundation; Director, Apollo Tele Health Services; Past President, Telemedicine Society of India; Former Secretary and Past President, Neurological Society of India; Emeritus Professor, Dr MGR Medical University, Tamil Nadu

"The good physician treats the disease; the great physician treats the patient who has the disease." "Medicine is a science of uncertainty and an art of probability." "One of the first duties of the physician is to educate the masses not to take medicine." "Listen, listen, listen: the patient is telling you the diagnosis." I often wonder how Sir William Osler, author of the above statements, would respond to the introduction of artificial intelligence (AI) in healthcare, 110 years later. For centuries, the essence of practicing medicine has been a physician obtaining as much data as possible about the patient's health or disease and making decisions. White hair (like mine) presupposed experience, judgment, and problem-solving skills acquired with rudimentary tools and limited resources. Today, with the disruptive technology fashionably termed AI poised to become a reality even in healthcare, we need to sit back and critically evaluate what this would actually mean. In my lifetime I am sure I will see driverless cars in some places in India, and also AI in healthcare.

AI is defined as the use of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. Tomorrow's 5P (predictive, personalized, precision, participatory, and preventive) medicine, when fully functional, will have AI as a major component. This presupposes the availability of genomics, biotechnology, wearable sensors, and high-speed real-time supercomputing. Tomorrow's healthcare will essentially be big data analysis. As 80 percent of the 41 zettabytes (41 trillion GB) of digital information currently available is unstructured, AI will be required to detect patterns and trends that our gray matter at present is unable to decipher.

Illustrations of AI in Healthcare

At a conference on biomedical imaging, computational systems were created for detecting metastatic breast cancer in whole-slide images of sentinel lymph node biopsies. The winning algorithm had a 92.5 percent success rate. When a pathologist independently reviewed the same images, the success rate was 96.6 percent. Combining the deep learning system's predictions with the human pathologist's diagnoses increased the pathologist's success rate to 99.5 percent. IBM has launched a program called Medical Sieve to assist in clinical decision making in radiology and cardiology. It can scan hundreds of radiology images in seconds to detect an out-of-place phenomenon. The FDA also approved the first cloud-based deep learning algorithm for cardiac imaging, developed by Arterys, in 2017. Within 3 years we will have many machine learning algorithms in active clinical pilot testing and in approved use. AI reporting will eventually be available in certain fields. The next-generation cognitive assistant will be an emotion-sensitive robot with analytical and reasoning capabilities and a range of clinical knowledge. Automated lab-test and ECG diagnosis are now routine. In the future, radiologists might only have to look at the most complicated cases where human supervision is necessary.
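The human-plus-machine result above rests on a simple idea: fuse the model's confidence score with the pathologist's independent call. A minimal sketch of one such fusion rule follows; the function name, the averaging rule, and the threshold are assumptions for illustration and are not the method used in the competition.

```python
# Illustrative sketch: fusing a deep learning model's probability with an
# independent pathologist's binary call. The averaging rule and the 0.4
# threshold are hypothetical choices, not the competition's actual method.

def combined_call(model_prob: float, human_positive: bool,
                  threshold: float = 0.4) -> bool:
    """Flag a slide as metastatic when the average of the model's
    probability and the human's binary call reaches the threshold."""
    human_prob = 1.0 if human_positive else 0.0
    return (model_prob + human_prob) / 2.0 >= threshold

# Either reader can rescue the other's miss; agreement on negative stands.
print(combined_call(0.9, False))  # True: confident model outweighs the miss
print(combined_call(0.1, True))   # True: human call outweighs model doubt
print(combined_call(0.1, False))  # False: both agree the slide is negative
```

In practice the weights and threshold would be tuned on validation data, since the cost of a missed metastasis far exceeds that of a false alarm.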

Chipping perhaps indicates the cultural transformation now commencing toward accepting AI in our daily life. Individuals are having RFID (radio-frequency identification) microchips injected into their hands so they can open office doors, log in to computers, share business cards, and even buy snacks with just a wave. Epicenter, a Swedish company, and Three Square Market, a technology company in Wisconsin, have started chipping their employees. Eye control now makes Windows 10 more accessible by empowering people with disabilities to operate an onscreen mouse, keyboard, and text-to-speech experience using only their eyes and a compatible eye tracker such as the Tobii 4C.

Who is liable if an AI system makes a false decision or prediction? Who will build in safety features? How will the economy respond if AI makes certain jobs redundant? Forecasting and prediction in AI are based on precedent. Machine learning algorithms can underperform in novel cases, such as drug side effects or treatment resistance, where there is no prior example to build on. Hence, AI may not replace tacit knowledge that cannot easily be codified. As is common with technological advances, AI could replace jobs that previously required humans. In healthcare, AI could be applied to repetitive types of jobs or actions.

Machine learning, the basis of AI, is a field of computer science that gives computers the ability to learn without being explicitly programmed. Evolving from the study of pattern recognition and computational learning theory, these algorithms can learn from and make predictions on data. These analytical models allow researchers, data scientists, engineers, and analysts to produce reliable, repeatable decisions and results, and uncover hidden insights through learning from historical relationships and trends in the data. Can this be extrapolated to individual patient care?
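The idea of "learning without being explicitly programmed" can be shown in a few lines: instead of hand-coding rules, the program derives its answer from labeled examples. Below is a minimal 1-nearest-neighbour classifier; the patient features, values, and risk labels are invented purely for illustration, and a real clinical model would need validated data and far more than four examples.

```python
import math

# Minimal sketch of learning from data: 1-nearest-neighbour classification.
# No diagnostic rule is written by hand; the prediction comes entirely from
# the labeled training examples (all values below are hypothetical).

def predict(train, query):
    """Return the label of the training example closest to the query point."""
    features, label = min(train, key=lambda ex: math.dist(ex[0], query))
    return label

# (systolic BP, fasting glucose) -> risk label, invented for illustration
train = [((120, 90), "low"), ((118, 95), "low"),
         ((160, 180), "high"), ((155, 170), "high")]

print(predict(train, (150, 165)))  # "high"
print(predict(train, (121, 92)))   # "low"
```

The same historical-relationship-and-trend logic scales up, with far more data and more sophisticated models, to the analytical systems described above; whether it extrapolates safely to an individual patient is exactly the open question the text raises.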

Where are we heading? A robot made in China scored 456 in a national-level qualification test for doctors, against a pass mark of 350. It answered the same test, in the same time, in a designated room without Internet access! The robot had mastered self-learning and problem-solving abilities to a degree. Now certified, the robot will soon make home visits – of course in driverless cars. Saudi Arabia has gone one step further: it became the first country to grant citizenship to a robot. Sophia, a very attractive, intelligent humanoid, was introduced at a large investment conference in the Saudi capital, Riyadh. She (obviously I cannot use the term "it" – even our definition of living and nonliving will have to be reviewed) was able to think and respond appropriately at a global press interview. Mirai, a 7-year-old humanoid chatbot whose name means "future" in Japanese and who functions on the Line messaging app, has become a resident of Shibuya, a Tokyo ward with a population of around 224,000 people. The goal is to make the district's local government more familiar to residents and allow officials to hear their opinions.

Altruism, benevolence, compassion, commiseration, concern, consideration, empathy, humanity, kindness, knowledge, sympathy, trust, understanding, wisdom – this is what a doctor of the twentieth century was identified with. Will the Sophias and Mirais of the next decade shed tears when a patient dies? Who knows? Maybe they will, but the old, individualized, sanctified family doctor–patient relationship is already being replaced with terms like healthcare industry, healthcare provider, consumer, client, predictive analytics, machine learning, and AI. Doctors of today, beware! Your grandchildren will consider you a relic of the Neolithic era. So buck up! Become familiar with deep learning and Bayesian networks if you want to understand the language tomorrow's medical students will be speaking.
