Hello, Reddit! My name is Aydogan Ozcan, and I am currently a Chancellor’s Professor at UCLA, in Electrical & Computer Engineering, and Bioengineering. I am also an HHMI Professor at the Howard Hughes Medical Institute, an Associate Director of the California NanoSystems Institute, and a Fellow of SPIE, IEEE, OSA, AIMBE, RSC and the Guggenheim Foundation.
My research focuses on the use of computation and algorithms to create new optical microscopy, sensing, and diagnostic techniques, significantly improving existing measurement tools for probing micro- and nano-objects while also simplifying their designs. Some examples include smartphone-based microscopes, cell counters, diagnostic test readers, bacteria sensors, blood analyzers, allergen detectors, and heavy metal sensors, among others.
I have authored, and will be presenting, multiple papers on these technologies at SPIE Photonics West in February 2018. More information about this conference can be found here.
Let’s get the discussion started. I'll be back at 2 pm ET to answer your questions. Ask me anything!
Please make a guess based on your technical expertise: how many years or decades away are fully programmable biological nanobots that, for example, could selectively terminate otherwise-inoperable brain tumor cells, or knit severed nerves back together, or the like?
Great question :) I don’t believe fully programmable biological nanobots will be able to serve our toughest medical needs (for example, to “selectively terminate otherwise-inoperable brain tumor cells”) within the next 10 years. Within the next 2 decades? Maybe, although I must say this is not exactly my area of expertise. Researchers are making progress on various elements of such a vision – but what is articulated here is far beyond the reach of our current state of the art and requires a substantial amount of breakthrough research, funded by governments, NGOs, foundations, etc. It is certainly possible after some more breakthroughs are made by scientists and engineers.
Was it hard to get this job? Did you go with the usual PhD->postdoc track?
Yes, it was the usual PhD → postdoc path. I guess the key for me, at least, was to gain complementary and somewhat orthogonal skills during my postdoc compared to my graduate training, which helped me broaden my perspectives – and I still find this extremely useful.
What is your most encouraging advice for someone about to finish their PhD in this area of sensing/photonics/miniaturization and looking to continue in academia? What is your most discouraging advice?
See you in San Francisco!
Another great question! I believe your timing is very good. Photonics-based sensing is extremely hot these days, and both academia and industry are expanding in this area and related fields. Wearable, implantable, and digestible sensors are some examples that industry in particular is very excited about. So your training will indeed pay off, opening up great opportunities for you.
Whatever you choose to do at the next phase of your career, after graduation, make sure you are indeed very passionate about it. Don’t just follow the trend because it is hot and fashionable. Among all these opportunities that are in front of you, find the one that makes you the most excited and interested.
I am sure you are aware of some of the wearable technology that uses light to read things like heart rate. My question is: how far can this technology be extended?
Specifically, can this technology be developed to measure things in blood from outside the body like triglycerides?
This is a very hot area of research, both in industry and academia. For example, as part of an NSF-funded center, we are working on an implantable sensor that will go under the skin (maybe 1–2 mm underneath the surface) to sense biomarkers (for example, glucose). We are planning to non-invasively read such an implantable sensor, on demand or continuously, from outside the body using a wearable microscope and sensor – similar to a watch in form factor.
Of course, this is a platform technology and could be applied to various other biomarkers and diseases that need continuous or periodic monitoring of patients, even at their homes.
We thank NSF for providing us the funding and the opportunity to work on such exciting projects.
So can you talk about the algorithms you're building? What kinds of problems are you having to overcome?
These days we are more and more excited about the use of machine learning tools, especially deep learning, in biomedical imaging and sensing. The most basic version of this is to use machine learning for labeling images or classifying sensor data into, for example, healthy vs. diseased categories. This is very exciting, and what deep learning enables today is an unprecedented level of accuracy, especially for cases where we have easy access to well-characterized training data. In most of the applications that we chase (for example, the detection of a waterborne parasite in water samples), access to data is not an issue, since within a few days we can generate gold-standard data in our labs to train a neural network and get the task done.
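[Editor's note] To illustrate this basic classification use in the simplest possible terms, here is a minimal sketch (not the group's actual software; all data and parameters are synthetic stand-ins) that trains a logistic-regression classifier to separate two made-up clusters of "sensor readings" labeled healthy vs. diseased:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "sensor readings": two Gaussian clusters standing in for
# healthy vs. diseased samples (illustrative stand-in for real labeled data).
n = 200
healthy = rng.normal(loc=-1.0, scale=1.0, size=(n, 2))
diseased = rng.normal(loc=+1.0, scale=1.0, size=(n, 2))
X = np.vstack([healthy, diseased])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Logistic regression trained by gradient descent -- the most basic version
# of "classify sensor data into healthy vs. diseased".
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of "diseased"
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

A deep network replaces the single linear layer here with many nonlinear ones, but the training loop (predict, compare to gold-standard labels, update) is the same idea.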
Beyond this basic use of machine learning, I believe for biomedical imaging and sensing there are some very exciting opportunities for “machine learning inspired instrumentation” which is a holistic approach to computational imaging and sensing. We are rapidly moving forward in this direction, and a recent example of this can be found here: http://newsroom.ucla.edu/releases/machine-learning-helps-researchers-design-less-costly-optical-sensors
What kind of smartphone sensor tech can we look forward to in the near future?
There are various exciting and emerging uses of smartphones for biomedical and environmental monitoring applications, ranging from the detection of pathogens (including, for example, bacteria and even viruses) to performing medical tests, blood counts, counting particles in water and air, and measuring contamination and heavy metals in drinking water, among many others. Mobile phones provide a unique toolset to "democratize" measurement science and enable inexpensive, portable, and yet powerful/advanced imaging, sensing, and diagnostic technologies to potentially reach resource-limited settings – and even the home, for monitoring of chronic patients, early diagnostics, etc.
I have a second question that just popped into my mind: what are the national security implications – those you are able to discuss within the confines of the law – of your research?
Very interesting question. In general, any smart system, used for a biomedical purpose or otherwise, has the potential danger of being hacked. In this regard we are all in the same boat – online banking, medical health records, smart biomedical devices (wearables, implantables, etc.), and others – they all need to comply with state-of-the-art encryption and data security protocols. Future wars among nations might very well expand into the hacking of biomedical devices (unfortunately), and we need to be prepared for this by adopting the same strategies that we use to defend our national infrastructure from such attacks.
What disruptive healthcare technologies do you see becoming mainstream in the next 30 years that could fundamentally alter the healthcare delivery system and the traditional doctor–patient relationship? Are traditional doctors going to be squeezed out of the equation by apps, smartphone sensors, and virtual access to international experts?
Fantastic question! Medicine includes empathy for the patient as part of the patient–physician interaction. So in this regard, I don’t consider technology a replacement for health-care professionals in the practice of medicine. Technology will make the medical profession more efficient, more accurate, and overall more effective in treatment and preventive care. It is also not a threat to doctors or medical professionals, as transformative technologies will enable them to see many more patients per day, increasing their reach and overall impact.
The practice of future medicine for sure will fundamentally change as a result of disruptive technologies, including AI and machine learning. Although not a threat for medical professionals, this will undoubtedly change the requirements and the nature of their education, training and practice. The question is: are we, at universities, ready for this transformation and are we educating/training our next generation of medical professionals accordingly? I am not sure.
Low cost hardware / software tools that work with an existing smart phone platform seem like such a clear way to improve health outcomes in the developing world. What are the barriers to getting these medical diagnostic tools deployed? What needs to change to make progress faster? Feel free to focus on a specific example like malaria.
Very important question. The massive volume of mobile phone users, which has now reached ~7 billion, drives rapid improvements in the hardware, software, and high-end imaging and sensing technologies embedded in our phones, transforming the mobile phone into a cost-effective and yet extremely powerful platform for running, for example, biomedical tests and scientific measurements that would normally require advanced laboratory instruments. This rapidly evolving and continuing trend will help us transform how medicine, engineering, and the sciences are practiced and taught globally.
On the other hand, this rapid pace of advances in the consumer electronics market also creates major challenges for the standardization of mobile phone based imaging, sensing, and diagnostic tools and technologies. Considering that the regulatory approval processes in the biomedical device industry are rather cumbersome, costly, and time consuming, these rapid changes in the hardware and/or software of our mobile phones can become a major limitation for commercialization efforts toward, e.g., telemedicine and mobile health related applications of cellphone-based tools. This obstacle can potentially be addressed by new business ideas that turn the challenge into an opportunity; for example, a consortium formed to provide the industry with a standardized and regulated supply of certain mobile phones and/or mobile phone components could be a possible solution. Another potential solution to this challenge could be the systematic development and standardization of open software and hardware platforms for mobile phones, such as the Android OS and Project Ara.
In your professional opinion, do you think that this research can be applied to visualize and study more complex biological processes such as protein conformations or microtubule binding patterns?
Yes, indeed. With the development of proper assays involving for example FRET reporters, bio-physicists have been studying such biological processes, and I believe mobile phone based instrumentation can also be used as an inexpensive experimental platform for such experiments. Per spot, a few tens of fluorophores can be routinely detected using mobile phone based instruments. See for example: https://www.nature.com/articles/s41598-017-02395-8
Is there any available (even closed-source) algorithm for reconstructing lensfree in-line holographic microscope images?
I am not aware of any publicly available software. Ours has been licensed by a company.
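[Editor's note] While the licensed software mentioned above is not public, the lensfree holography literature commonly describes reconstruction via angular-spectrum back-propagation of the recorded hologram. Below is a generic textbook sketch of that step, not the licensed code; the illumination wavelength, pixel size, and sample distance are made-up example values, and a uniform placeholder stands in for a real hologram:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field by distance z (angular spectrum method)."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)  # spatial frequencies along x
    fy = np.fft.fftfreq(n, d=dx)  # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    # Longitudinal wavenumber for propagating components; evanescent
    # components (arg <= 0) are zeroed out.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # transfer function of free space
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hypothetical parameters: 532 nm illumination, 1.12 um pixel pitch,
# object plane 1 mm before the sensor (hence z = -1 mm to back-propagate).
hologram = np.ones((256, 256))             # placeholder intensity hologram
field = np.sqrt(hologram).astype(complex)  # assume a unit-amplitude plane-wave reference
recon = angular_spectrum_propagate(field, wavelength=532e-9, dx=1.12e-6, z=-1e-3)
amplitude = np.abs(recon)                  # reconstructed object amplitude
```

Real pipelines add steps this sketch omits, such as twin-image suppression via iterative phase retrieval and pixel super-resolution from multiple sub-pixel-shifted holograms.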
What are the primary obstacles to miniaturization of sensor technology?
The exact answer depends on the application and what is aimed for. The requirements (budget, user needs, form factor, lifetime, power needs, etc.) dictate the difficulty of the problem at hand. In general, I would argue that "miniaturization" alone is not a bottleneck in today's sensor technology – except maybe the ultimate miniaturization of high-energy-density batteries.
This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.