Through a long career in embedded reasoning, solving challenges in healthcare, industrial automation, aviation, and aerospace safety, I’ve applied systems engineering, modeling, simulation, diagnosis, prognostics, and optimization to everything from IoT health and status monitoring to NASA’s efforts in space exploration. My career focus has been at the intersection of health management and artificial intelligence, blending engineering, science, and biomedicine. As a former associate editor of journals in artificial intelligence and prognostic health management, among others, I’ve been exposed to a very broad range of applications of AI, logic, and embedded reasoning across industries. I’m a serial entrepreneur and hold 15 patents. I’d welcome your questions on any of the topics above, or the chance to simply lend my experience and best practices for your professional development.
I will be back at 1 pm ET to answer your questions AMA!
My technical expertise is in:
Artificial intelligence (AI) and Data Science
Internet of Things (IoT)
Prognostics and health management (PHM)
I am still very new to the role of AI + Data Science in medicine. With centralised EMR being adopted by many large health districts around the world, the opportunities for innovation are exciting and even overwhelming at times.
Can you shed light on the ethical/legal implications of using large data analysis for clinical decision making? For example:
I create an algorithm which, over time, will allow nurses/doctors to anticipate insulin requirements in critically ill patients where the BSL must be maintained in a very specific and narrow range. The algorithm will adjust for factors which generally affect insulin duration of action, e.g., dialysis, acute kidney injury, steroid administration, sepsis, etc.
The algorithm will utilise real-time data from multiple patients in multiple ICUs to continuously compare its own predictions against true data.
Eventually, let us assume a robust version of this algorithm is developed, which is able to anticipate insulin requirements within a clinically acceptable margin of error. In this situation:
a) Who owns the algorithm as a product? Should the health care district which provided the patient data be entitled to profits as well?
b) If the use of the algorithm leads to a poor outcome (death or a hypoglycemic event), who is held responsible? The architect of the algorithm? The end user? The individual responsible for approving the version of the algorithm which led to the poor outcome?
c) What if the data is collected from two different health care systems, and therefore under different sets of laws? E.g., ICUs in the USA (private health care) vs. ICUs in New Zealand (public health care).
Brilliant questions! Here are my thoughts.
a) Presumably, the algorithm is developed as a result of a research project that is funded by some entity. Government research agencies may have stipulations on IP rights, but it is possible for a company to commercialize such a product and claim ownership. Being able to access data across multiple ICUs is a completely different matter, especially given patient data protection regulations. If one were to obtain permission to access data from multiple ICUs in real time (not very likely), the administrators of those ICUs are likely to seek some sort of remuneration.
A more likely scenario is that you would access data from multiple ICU patients while developing and fine-tuning the application and perhaps repeat these experiments at multiple ICUs in order to make the method more robust. You would compensate these ICUs for their participation during the R&D effort (e.g., fund them through your government grant). Once you commercialize the product, it would (should) operate solely based on real-time and lab data for the patient that the algorithm is dosing insulin for.
b) The hospital that authorizes the use of the system would be held responsible first, and they would quickly shift the liability toward the company that sells the automated insulin delivery system. If the system is not administering insulin but just merely recommending the dose, then one needs to make sure the nurse has administered the right dose and not 10x the recommended dose by mistake (happens every so often). If it's an automated delivery system (or if it could be ascertained that the nurse administered the recommended dose), then it's a liability for the owner of the algorithm (typically a company that has legal protections and not an individual). Within the company, it's a shared responsibility and liability, since there are multiple technical and legal safeguards to prevent such an error from happening (e.g., testing, QA, government approvals, etc.).
c) Real-time collection and use of data from ICUs in multiple countries is not likely to happen due to legal and patient confidentiality regulations. However, as I discussed in (a), it would be possible to train the algorithm on multiple data sets from multiple ICUs, possibly across the world.
Hope that helps!
Thanks for coming to talk with us! One of the shortcomings of current hot techniques in AI like deep learning is that they may come up with an answer with a high statistical likelihood of being correct, but are unable to explain why that answer was recommended. This seems particularly problematic for applications in the medical domain. Does this change what you would use AI approaches for?
Let me share an interesting historical perspective. AI in Medicine efforts started in the 1970s with efforts to automate diagnosis. These efforts produced some interesting results but without much acceptance in the medical community. A similar observation was made back then – that doctors could not trust these systems since these systems could not explain how they reached their decisions. In the 1980s, the community shifted its attention to AI systems that explained their findings. Regrettably, the ability to explain results did not improve acceptance and use of these systems.
Given that historical perspective, I got a déjà vu feeling when deep learning became an overnight success and the lack of explanations became a concern for many. You may know that DARPA initiated a program last year to investigate that specific concern – the program is called Explainable AI (https://www.darpa.mil/program/explainable-artificial-intelligence).
Back in the 1980s, my perspective was that people did NOT need explanations from the AI system – they just needed to be convinced that the results are accurate, consistent, complete, and repeatable. My current perspective remains the same. I don’t need explanations from an AI system. It’s far more important for me for the decisions to be robust and the failures graceful and gradual. For instance, a human may have never seen a mountain lion before, but she could accurately classify it as a four-legged land animal that resembles a large cat. I would expect the same from AI – instead of classifying it as a domestic cat because that’s all it has ever been trained on.
Today, some AI image recognition systems exhibit remarkable accuracy in detecting and diagnosing melanoma. They are more accurate than humans already, and definitely more consistent than a panel of human experts. That would be sufficient for me – I would not need an explanation from such an AI system as long as I am sure that it is looking at a skin lesion and not a stain on the carpet.
To summarize, I believe it is not necessary for an AI system to explain itself as long as the domain is sufficiently constrained so that the system is never required to classify and identify something that it has not seen before.
Do you feel that current legislation like HIPAA and GINA is sufficiently strong to protect patients from exploitative AI data mining efforts?
Interesting question -- one worthy of a much longer debate. My own perspective is that such legislation is necessary but not sufficient (or precise enough, for that matter). HIPAA and similar patient data protection standards provide an incentive for health care providers and payers to implement necessary security measures in order to prevent accidental disclosure. On the other hand, penalties are relatively low (for HIPAA, penalty for accidental disclosure has an annual maximum of $1.5M; pocket change for an insurance company). If hackers are able to steal closely-protected financial information from banks and credit card companies, it would not be terribly difficult for them to steal patient data from health care providers and payers. How they might exploit that data for financial gain is another matter entirely.
Yet another perspective is that such legislation could become an impediment to free flow of information for life science research purposes. In our own work, we often find it difficult to move clinical data across country borders in order to integrate results across various geographies or genetic pools, even when the data in question is already stripped of all identifiers that could be traced back to individuals.
The best solution, in my opinion, is precise legislation that spells out all required security measures and imposes very stiff penalties for accidental or intentional disclosure to encourage full participation. At the same time, the same legislation should lower barriers for cross-border research and development as long as the data is fully anonymized in order to prevent exploitation.
Hi! What an incredible life. What's been the biggest hurdle of your career thus far? Do you have an opinion on the single payer system?
Thanks! Being able to hang out with astronauts and top surgeons was among the true privileges of my life. The single payer question is outside the scope of this AMA, but it’s a simple and robust economic principle that has been in use for thousands of years. Consolidation of purchasing authority in one entity reduces risks for the supplier, allows the payer to enjoy substantial discounts in exchange for reduced risk, and lowers transaction costs for everyone. Politics around the single payer system are a different matter entirely.
I have to ask the obvious question. Should we change the old adage "It's not rocket science..." to "It's not brain surgery"?
Good one :-) Actually “it’s not brain surgery” is already an adage! See https://dictionary.cambridge.org/us/dictionary/english/it-s-not-brain-surgery.
Sorry! I had another question!!
How do you propose we ensure the accuracy of data that is being analysed?
The process involved in acquiring data is fraught with potential mistakes eg incomplete/unstructured data, inappropriate extraction, inappropriate interpretation/analysis of investigations.
How would one go about ensuring quality control of the data that will be used to feed into these artificial intelligence decision making systems?
Obviously (correct me if I am wrong), incorrect data will lead to incorrect adjustments to underlying assumptions, therefore compromising the integrity of the clinical decision making tool and potentially resulting in adverse outcomes for the patient. What methods can be used to avoid this situation?
You are absolutely correct - if you feed an AI system with incorrect data, it will issue incorrect recommendations. However, an AI system is no different than a human decision maker in this regard. The main difference is that a human decision maker is more capable of evaluating the entire context and questioning the validity of a lab result or other bits of data. For instance, coming back to your earlier question, a physician would question a normal blood glucose reading if the patient has the tell-tale smell of diabetic ketoacidosis! Also, the physician would have some insights as to how fast certain lab readings could change and what kind of impact those changes might have on the general appearance of the patient.
I'm afraid I don't have a perfect answer to your question, but I think it is important for AI systems to assign probabilities of correctness to their recommendations and decline to offer a recommendation if the uncertainty is too high.
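To make that idea concrete, here is a minimal sketch of a recommender that reports its own confidence and abstains when uncertainty is too high. Everything in it (the function name, the dose labels, the 0.90 threshold) is illustrative, not a clinical specification:

```python
# Minimal sketch of selective prediction: the system attaches a confidence
# score to each recommendation and abstains when that confidence falls
# below a threshold, deferring to the human decision maker instead.
# The labels and the 0.90 threshold are illustrative, not clinical values.

def recommend(probabilities, threshold=0.90):
    """Return (label, confidence) for the top recommendation,
    or (None, confidence) to abstain when uncertainty is too high."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return None, confidence  # decline to recommend; defer to the human
    return label, confidence

# A confident prediction is acted on; an uncertain one is declined.
print(recommend({"increase_dose": 0.95, "hold_dose": 0.05}))  # recommends
print(recommend({"increase_dose": 0.55, "hold_dose": 0.45}))  # abstains (None)
```

The threshold itself would have to be tuned so that the system abstains often enough to be safe but rarely enough to remain useful.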
Will AI ever be able to do a more precise and educated job performing a surgery than a human? I imagine surgery is something that requires a bit of on the spot thinking should something go wrong.
Very good question. The state-of-the-art today is robot-assisted surgery. It will be a long time before AI is allowed to operate on humans without human participation or at least oversight. Most of the trivial aspects of surgery (or, for that matter, flying a 777 across the Pacific with three hundred people on board) may be performed by robots today. What makes human expertise indispensable, as you said, is management of emergencies. In surgery, I have seen situations where certain internal structures are not where you expect to find them due to anatomic variations or impact of disease (e.g., a tumor displacing everything around it). Every so often a small blood vessel gets nicked and there’s blood everywhere, obstructing visual cues and requiring fine finger manipulation to locate the bleeder. When working near the intestinal tract, a sense of smell is useful to have!
Having said all that, there might be limited instances where surgery may be performed safely by AI on humans today. One example that comes to mind is Mohs surgery for certain skin cancers. This is a kind of surgery that requires precise excision of the cancer followed by a rapid microscopic analysis of the sample while the patient awaits on the operating table, and enlarging the excision progressively until no traces of tumor are found on the excised tissue sample. Another example is LASIK eye surgery, which, for all intents and purposes, is already done by a robotic device under the close supervision of an eye surgeon. Today, the eye surgeon is still responsible for all measurements, adjustments, and calibration of the device, but it’s safe to assume that more of those functions will be handed to AI in the near future.
Dr. Uckun, what sorts of machine learning do you use in your work? I know that deep learning has become quite popular, but it seems that there is still some way to go to make it effective outside domains like image recognition, since other problems have different data encodings, etc.
Do you feel that machine learning will become prominent in the near future in your company?
My experience with machine learning is mostly with traditional ML methods and more recently with SVMs/kernel methods. ML based on deep learning is a different and very powerful beast. The ability of these systems to learn by simple observation (that is, unsupervised learning) is already making a huge impact and allowing these systems to be more robust.
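For readers curious what "kernel methods" look like in practice, here is a toy, pure-Python sketch of the kernel trick that powers SVMs. The kernel perceptron below is a simplified stand-in for a full SVM (it does no margin maximization), and the XOR-style data and gamma value are purely illustrative:

```python
import math

# Toy illustration of the kernel trick behind SVMs and kernel methods:
# an RBF kernel lets a linear learner separate data that is not linearly
# separable in the original input space.

def rbf_kernel(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def train_kernel_perceptron(X, y, epochs=20):
    alphas = [0.0] * len(X)  # one coefficient per training example
    for _ in range(epochs):
        for i, xi in enumerate(X):
            pred = sum(a * yj * rbf_kernel(xj, xi)
                       for a, yj, xj in zip(alphas, y, X))
            if y[i] * pred <= 0:   # misclassified: bump its coefficient
                alphas[i] += 1.0
    return alphas

def predict(alphas, X, y, x):
    score = sum(a * yj * rbf_kernel(xj, x)
                for a, yj, xj in zip(alphas, y, X))
    return 1 if score > 0 else -1

# XOR-like data: no straight line separates the classes,
# but the RBF kernel handles it.
X = [(0, 0), (1, 1), (0, 1), (1, 0)]
y = [1, 1, -1, -1]
alphas = train_kernel_perceptron(X, y)
print([predict(alphas, X, y, x) for x in X])  # recovers [1, 1, -1, -1]
```

A real SVM replaces the perceptron update with an optimization that maximizes the margin between classes, which is what gives SVMs their robustness.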
Maybe I am misunderstanding your question, but one of the major applications of deep learning today is in image recognition already (primarily for classification of objects). Image recognition still needs to go a long way, for instance, to identify a photograph from a random high school play as a rendition of Cinderella. Humans are remarkably good with such interpretations whereas machines are not that robust (yet).
My company (RowAnalytics) utilizes various AI (or AI-inspired) methods in its products for genomics analysis, semantic search, and personalized health advice. We don't have a need to use deep learning methods yet, but I am certain we will have an occasion to use them in a future product.
One of the potential problems I've seen brought up about using AI assisted diagnostics is that many doctors may be resistant to accepting the AI's recommendations if they conflict with their instinct. Do you have any sense of how easily doctors will start to accept AI assisted diagnosis, or of ways to make them accept it more easily?
In medicine, AI made its first appearance as a diagnostic tool, powered by rule-based systems, case-based systems, and model-based systems (both physics-based and statistical, like Bayesian methods). What you describe is more or less the learnings from the 1970s with the initial efforts in AI in Medicine. Back then, we found out that diagnosis is the primary work product of doctors, something they do rather well and don't need help with. That remains true today. Besides, there’s the whole issue of liability for a wrong diagnosis or treatment. With a doctor in charge, the chain of accountability is well defined. With AI, not so much.
A major shift happened in AI in the 2000s, initially using kernel methods/SVMs and subsequently with a revival of neural networks (deep learning). That major shift was from cognitive tasks like diagnosis or clinical decision-making to classification (a relatively mundane task which machines do incredibly well). For instance, see my comment below re: classification of melanoma samples. On the other hand, doctors usually don’t need help from AI to diagnose common diseases like asthma or diabetes. As long as AI focuses on tasks that augment physicians and make their life easier, there will be nothing to worry about – AI and physicians would happily get along!
What are your thoughts on IOTA?
I don't know much about it, frankly. It looks fascinating at first glance.
Why is it so difficult to get a job in bioinformatics with a Bachelor's degree? Do you think there should be more positions available to Bachelor's degree holders?
I think the situation you're describing is a reflection of how complex the field of bioinformatics is. Bioinformatics requires a fairly broad background in biology (diverse topics from physiology to genetics and protein metabolism), computer science, chemistry, and math. A four-year BS curriculum only provides a shallow coverage of all these fields. Furthermore, what is missing from most undergraduate curricula is hands-on experience with the broad range of tools and methods that are used in practice. This includes familiarity with various data analysis packages as well as DNA/RNA sequencing tools and their idiosyncrasies. By necessity, such knowledge has to be learned on the job. Unfortunately, many employers prefer hiring someone with experience and practical knowledge rather than taking the time to train new graduates.
One possible remedy to the situation would be on-the-job training programs that could be offered to those with Bachelor's degrees in bioinformatics. That would reduce risks for employers and save time and money for those who do not need (or want) to get a post-graduate degree just to get a job.
Getting that first job might be difficult indeed, but getting your second job will be much easier with that "on-the-job" training and experience in your back pocket!
I wish you well.
When you say something is easy, do you say "How hard could it be? It's not rocket surgery!"? If not, what do you say?
I've been known to say that :-)
Do you think AI will ever be advanced enough to perform surgeries like you do? Able to not only follow the book, but to make quick, calculated decisions in order to save someone's life?
I wrote a detailed answer to a similar question earlier in this AMA session. In short, I see a role for AI in well-defined surgical procedures with little risk of bleeding or inadvertent damage to surrounding tissues. For anything else, well ... that's difficult. We are all encouraged by the rapid advances in autonomous cars and the "quick, calculated decisions" they are able to make to save someone's life. Surgery is more complicated than that. Many surgical decisions require a deep understanding of not only the anatomical and physiological mechanisms, but the entire context surrounding the patient, including their age, physical condition, other co-existing diseases, and many other factors including ethics. I don't believe AI is there yet.
My best bet is that surgeons are going to be replaced by robots. When is that going to happen, meaning when are there going to be more robots doing surgeries in the developed world than human surgeons?
The use of robots in surgery is well established already, but surgical robots are mostly used as helpers to human surgeons. The use of teleoperated robots is also expanding and giving hope to patients who cannot be served otherwise (e.g., battlefield or remote locations). As far as robots taking over from human surgeons entirely, that requires robots reaching a level of consistency and reliability that's on par with the average surgeon - and similar skills to handle and manage emergencies and unknowns. We're not there yet, and that might take another fifty years.
When can we expect an App for smartphones that uses AI to determine if the mole is malignant?
It's there already! https://www.skinvision.com
This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.