AAAS AMA: Hi, we're the authors of the research articles in the inaugural issues of Science Robotics. Ask us anything!

Abstract

This December, the first issue of Science Robotics was released. We wrote the research articles in that issue.

I'm Huichan Zhao, and my research focused on how to imbue prosthetics with some attributes of the sense of touch. (http://robotics.sciencemag.org/content/1/1/eaai7529). Our final demonstration saw a robotic hand "feeling" three tomatoes to determine which one was ripe.

I'm Duncan W. Haldane, and my team created a jumping robot that used as its model a leaping primate called a galago. (http://robotics.sciencemag.org/content/1/1/eaag2048). One powerful application for our robot would be in buildings that have collapsed and need to have a light, nimble robot search for survivors without disturbing the debris.

I'm Surjo Soekadar, and I led a team that created a noninvasive, hybrid brain/neural hand exoskeleton (B/NHE) that restores quadriplegics' ability to perform activities of daily living, such as eating and drinking independently (http://robotics.sciencemag.org/content/1/1/eaag3296). The results broadly suggest that brain/neural-assistive technology can restore autonomy and independence in quadriplegic individuals' everyday lives.

And I'm Holly Russell. My team investigated how humans and autonomous vehicles adapt when the control of the vehicle switches from car to human and back again. (http://robotics.sciencemag.org/content/1/1/eaah5682) Our findings have implications for the design of vehicles that transition from automated to manual driving and for understanding of human motor control in real-world tasks.

We will be back at 1 pm ET to answer your questions. Ask us anything!

What are the biggest limitations robotics faces right now, both technologically and socially?

TeamMLRS

dh: If we're thinking about limitations of the field of robotics as a whole, that's a hard question. One limitation that comes to mind is how hard it is to generalize robotic tasks. For example, if a robot learns how to peel an apple, can it generalize and transfer that learning to a new task like peeling a banana?

Another problem is the difficulty that comes with creating robotic hardware. To do research in robotics you have to solve a lot of hard engineering problems before even getting to the science, and again the effort that goes into those engineering problems is rarely reusable or generalizable. And so we end up with robots that are good at only very specific things.

Anyway, thanks everyone for the questions! Signing off now.


How did you start your respective careers?

bobogargle

Huichan Zhao: My story is very simple. After I finished my Bachelor's degree in Mechanical Engineering, I came to the US and started to pursue my PhD. Initially I was interested in the general area of robotics; then I was introduced to soft robotics by my advisor. Unlike other areas, this field was relatively new, and people were building robots from liquids. I got a lot of hands-on experience fabricating soft robots and was amazed by their soft and gentle motion. My advisor and I decided we should make a more human-like prosthetic hand using this technology, with both powerful grasping and sensation. And now we are here.


How did you start your respective careers?

bobogargle

HR: I did my bachelor’s degree in mechanical engineering at UCSB, then went to Stanford University for master’s and doctoral work. I found my way into automotive research because I liked control systems and there are many components of the automobile that require control. This was a little before automated driving started becoming really popular, so it was fun to find myself in this exciting field. It wasn’t something that I planned from the start, but a path where I have taken advantage of opportunities at each decision point, and it’s cool to see where that has led me.


Before I ask my question, I am a high school science and robotics teacher, and also spent several years coaching a First Robotics team. Please consider seeking out high-needs high schools in your area and volunteering your time to start and/or mentor robotics programs. FRC is the biggest name, and the most innovative, but FTC and VEX both have lower entry costs.

As for my question, particularly for Duncan and Surjo. Do you ever worry about your research being adapted for military purposes? Is that a concern of yours, or perhaps a possible goal/funding source that you are pursuing?

monkeydave

Duncan Haldane (dh): I want to give a shout out to all the volunteers that make those robotics competitions happen. I'm seeing a whole slew of students achieve success in college thanks to the experience they get in these programs. It really does make a difference.

To answer your question: I don't build weapons, and my research is not targeted at weaponized applications for robotics. I do basic research for robotic systems, trying to find generalizable underlying principles and approaches for robotic locomotion. If it has applications for the military, it is only because it has applications for a whole range of areas. It is worth noting that my research was funded both by the National Science Foundation and the Army Research Lab. Both the NSF and The DoD are major funders of basic research (that is, research without a specific application) in robotics in the US.


Before I ask my question, I am a high school science and robotics teacher, and also spent several years coaching a First Robotics team. Please consider seeking out high-needs high schools in your area and volunteering your time to start and/or mentor robotics programs. FRC is the biggest name, and the most innovative, but FTC and VEX both have lower entry costs.

As for my question, particularly for Duncan and Surjo. Do you ever worry about your research being adapted for military purposes? Is that a concern of yours, or perhaps a possible goal/funding source that you are pursuing?

monkeydave

Surjo R. Soekadar (SRS): Thank you for your question! It cannot be excluded that any knowledge will be used by the military, and every scientist working on technologies that can be used for military purposes (some more, some less) should of course be aware of the possible consequences. In Germany, there was a very strong movement after 1945, mainly driven by Carl Friedrich von Weizsäcker, that aimed at raising awareness of each scientist's responsibility. This spirit is still very present, and the question of whether it would be okay to accept funding from the military was discussed a lot in our institution. My personal view on this issue is the following: in my work, I am solely committed to people's well-being (also in my role as a physician). The knowledge we gain (including source code, data, etc.) will be made public (also a great aspect of the Science Robotics forum, BTW) and accessible to everybody on this globe. The exoskeletons we develop can be used by patients to improve their quality of life, but they can also help in different working environments, e.g. to lift heavy objects, including military use. If the technology saves lives in the military, even better! The underlying knowledge should be available to everybody, though. I would accept funding from the military as long as it did not prevent results from being made publicly available.


What do you find are currently the most significant technological limitations for robotics in each of the domains of software, electronics, and mechanical hardware? Thanks.

fluffynukeit

dh: I think you put your finger on the main challenge by posing the question in this way. Creating a robotic platform requires domain expertise in mechanical, electrical, and software engineering. And that's before you even start answering the research questions you built the platform to study. My major gripe right now is that current design tools don't have great support for these multi-disciplinary engineering problems, and that all of the engineering effort you sink into creating a robot is rarely re-usable for your next project.

If we're talking specific technological limitations, man-made actuators are pretty lousy compared to vertebrate muscle. An actuator that is both power-dense and force-dense and efficient would help tremendously.


Before all else, thanks for doing this. Although I have a firm grasp of what we're capable of with robotics, I'm constantly amazed by everything I see or read, and I'm so excited for our future.

That being said; what negative impact do you think robotics will have on mankind, aside from job replacement?

In regards to job replacement; can you think of any solutions for reducing the impact that it could have on the economy, other than universal income?

photobeatsfilm

dh: I think that job replacement is a red herring when it comes to thinking about robotics. I frame the impact of any technology with one question: "Is it value-creating or value-destroying?" Robots create value. They do work that would otherwise have to be done by people, which is dull or sometimes has to be done in dangerous or harmful environments. I think that robotics makes people especially nervous because it has the ability to create outsize value, and we as a society haven't put anything into place to make sure that that value benefits everyone.


Do you find that most people are excited by or afraid of advanced robotics?

vilnius2013

HR: Speaking specifically about automated vehicles, I tend to get a mix of reactions. Many people hate commuting and are thrilled with the idea of a car doing it for them so they can relax or work. Others are uncomfortable with the idea of not having control of the car, and there is an element of fear. Some people also really enjoy driving and don’t want to lose the feeling of freedom and sport. On the whole I would say that most people I've talked to are positive on the idea of automated driving.


Holly,

Thank you for your time. What are the regulatory implications of your work and do you believe that a better understood co-existence of driver and car will ease policy development for autonomous vehicles?

adenovato

Holly Russell (HR): Great question! The regulatory environment is a big issue for developers of automated vehicles. If you haven’t already seen the U.S. Department of Transportation’s Automated Vehicles Policy, which was released in September, I would recommend taking a look. This policy uses the SAE “Levels of Automation” to characterize automated vehicles. Our research is most applicable to Level 3 automation, where the human driver is required to be ready to take back control from the car. For these vehicles, it is critical to ensure that protocols for handover of control from car to driver be thoroughly tested for safety.

While we don’t have specific recommendations for how regulations should be made, our study suggests that motor adaptation is one aspect of handover that needs to be taken into account in the design of these systems. This is in addition to other factors like making sure a human driver is aware of the driving situation, figuring out how to signal that it is time for them to take control, and allowing a reasonable amount of time for the handover to take place. So I would hope that regulators and safety experts would study the limitations of human motor control as part of their evaluation of the safety of any Level 3 systems that are intended for public release.


Historically, basic researchers studying physiology, biophysics, and mechanics of biological systems have helped to lay the groundwork for robotics design. In fact, many of the topics you mention (jumping robots inspired by galagos, artificial limbs, etc.) are clearly inspired by basic biology. Presumably, there is a similar connection between basic neuroscience and AI.

What about bacterial cell biology (particularly the study of how bacteria deal with their mechanical environment) and nanotech applications?

This seems particularly important, because humans have a decent intuition about how to build a mechanical system at roughly human scale, but little intuition about, say, fluid mechanics at the micron scale.

Thanks for the AMA! Follow-up questions, if you have time:

  • We can also learn about organisms, or at least validate proofs-of-concept, by building working machines. Do you think we will one day understand microbial-scale mechanics better by building nanobots?

  • How do we incorporate these ideas into our communication (to the public, to each other, and to funding bodies) without sounding crazy? I study bacterial mechanics, and I love drawing the connection between current biophysical studies and future engineering applications, but no one ever seems to take it seriously.

subito_lucres

dh: Good questions! Bio-inspired robotic design is a virtuous cycle: you look at biological systems, extract scientific principles, and use those ideas to build better robots. After that, you can use the robots to answer more scientific questions, and if everything goes well the process loops back on itself and drives progress in both fields. Micro-scale fluid/bacterial mechanics is a great place to apply these ideas. Nothing crazy about that.

p.s. Robotic conferences will sometimes host micro-robot competitions.


Not sure if this makes sense, but do you guys think we can ever have a fully controlled robotic 'body' with a direct interface to the brain? Basically, transferring the whole 'brain' and 'consciousness' into a fully robotic body with complete senses (seeing, hearing, feeling, etc.).

If it is possible how far do you think we are from achieving it?

Thanks!

qazwertsad232

(SRS) Thanks for this exciting question! Well, yes: I would not exclude that controlling a full robotic body with your brain is possible. The main issue here is the input/output (I/O) constraint of brain-machine interfaces (BMIs). It remains unclear whether increasing the number of recording sites at high spatial resolution would one day allow us to overcome this constraint and unveil the individual neural codes of each brain. The development of advanced sensors, e.g. based on nanotechnology, might, however, substantially broaden BMI bandwidths in the near future, even to a degree at which full control of a robotic body becomes feasible. The problem of transferring consciousness, however, is of a completely different nature...


How will a quadriplegic with a mental illness be able to effectively control the B/NHE due to wildly oscillating brain signals?

As an undergrad working on a lower limb EEG controlled orthosis, I'm curious as to how extreme amounts of noise from the EEG can be filtered out from people with mental illnesses. Currently the only solution I can think of is to use massive amounts of machine learning. There should be a simpler solution.

saivelamala

SRS: Thank you for this excellent question! Our system requires that some form of brain activity can be voluntarily controlled. In our study, we used motor imagery-induced modulations of electric brain waves generated in the cortical hand-knob area. If the person has a mental illness, it would be necessary to evaluate to what extent his/her ability to imagine movements is compromised. This can vary a lot between individuals. Our experience is that most people, including people with depression or schizophrenia, can modulate motor-related brain waves. There are, however, approximately 10-15% of the general population who have trouble purposefully modulating brain waves. Here, machine learning algorithms have substantially improved this "brain-computer interface (BCI) illiteracy". There is now also increasing knowledge about user training that can help to overcome BCI illiteracy. It is very likely that BCI control is possible as long as there is conscious intentionality. Newer developments, e.g. passive BCIs or neuroadaptive technologies, may even remove the necessity of conscious intentionality. Machine learning will probably improve the accuracy of classification and calibration of the system, but our experience is that it also has a limit.
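For readers curious what "motor imagery-induced modulations" look like computationally: these BCIs typically detect event-related desynchronization, a drop in mu-band (8-12 Hz) power over sensorimotor cortex when movement is imagined. Here is a toy sketch on synthetic signals (my own illustration, not code from the study; real pipelines add spatial filtering, artifact rejection, and the machine-learning calibration discussed above):

```python
import numpy as np

def mu_band_power(signal, fs, band=(8.0, 12.0)):
    """Average power in the mu band (8-12 Hz), computed via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].sum() / len(signal)

fs = 250  # Hz, a common EEG sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)

# Rest: prominent 10 Hz mu rhythm. Imagery: mu rhythm suppressed (ERD),
# background noise unchanged.
rest = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
imagery = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Midpoint threshold between the two calibration trials.
threshold = (mu_band_power(rest, fs) + mu_band_power(imagery, fs)) / 2

def classify(signal):
    """Label a trial: high mu power means rest, suppressed mu means imagery."""
    return "rest" if mu_band_power(signal, fs) > threshold else "imagery"
```

In practice, the hand-set threshold would be replaced by a classifier calibrated per user on many trials, which is exactly where the machine learning SRS describes comes in.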


This AMA is being permanently archived by The Winnower, a publishing platform that offers traditional scholarly publishing tools to traditional and non-traditional scholarly outputs—because scholarly communication doesn’t just happen in journals.

To cite this AMA please use: https://doi.org/10.15200/winn.148224.45440

You can learn more and start contributing at authorea.com

redditWinnower

(SRS) That's fantastic! :)


Do you worry about the societal changes in the human workforce that your research could bring?

janosrock

HR: I work on automated vehicles because I believe they have the potential to save many lives that would have otherwise been lost in traffic accidents. This is very important and I really think it is worth doing. However, I am concerned about potential job loss for people like taxi/Uber/Lyft/delivery drivers. There are no easy answers here but it is something to consider as a society.


What are the social and philosophical implications of your work? What kind of society do you envision? Why do you do what you do?

propermandem

HZ: Our work aims to help hand amputees get affordable prosthetic hands that are useful and cool. But there is still a long way to go; we have to do more.


What are the social and philosophical implications of your work? What kind of society do you envision? Why do you do what you do?

propermandem

(SRS) It is very hard to grasp a concept like society, and even harder to have a vision of or for a society. But we can focus on values, and my hope is that we share those values that will successively make our lives better. A key for realizing these values may be the awareness of the connectedness, inter-dependency and value of all people. The reason why I am doing what I do is simple: it just fills me with joy!! At least most of the time ;)


Can you speak to how haptics and soft robotics are changing the medical and exploration industries?

akornblatt

HZ: In the soft robotics area, some researchers already have concept designs of soft manipulators to assist in surgery. I think soft robotics will benefit medicine by providing surgical tools and prosthetics. Their major advantages will be low cost, safety, and gentle motion.


Getting robots to walk seems like a big deal in robotics, e.g. this Atlas video. Whenever I see video of robots walking, though, it seems like they don't try to walk the same way that humans walk. Humans have more of an inverted pendulum thing going on (torso bobbing up and down), while robots seem to always try to keep their torso at the same height. Is there a reason we don't try to build robots that walk like us? Is it significantly harder for some reason?

N8CCRG

HZ: Please have a look at Professor Andy Ruina's work. He builds passive bipedal walking robots inspired by human walking, and his robots are very energy-efficient.


Science Robotics is not open access. Universities pay hundreds of millions of USD every year for journal subscriptions. It is a waste of taxpayers' money, and many universities cannot afford such high subscription fees, which hampers the work of many researchers. Why do you publish there?

Franck_Dernoncourt

(SRS) I completely understand your point! The system is only gradually changing (it cannot be done overnight), but there is light at the end of the tunnel :)


Can you please tell us about your findings on how the transition from AI control to human control works? I've always wondered how a person would react to suddenly being in control of a vehicle that is normally driven by AI. Or is the trend changing toward more human-driven vehicles? I know Google was shooting for a 100 percent AI-driven vehicle for a long time. Thank you.

mattcm5

HR: Your phrasing of “suddenly being in control of a vehicle” is key. This is not something that humans can handle very well. If they are expected to take control of the vehicle, there needs to be a safe transition period, which could take on the order of a minute. In our study, drivers were told exactly what would happen and when, and they were paying close attention, yet they still had reduced performance in the maneuver when the handling conditions changed.

Regarding trends in the industry, there are a variety of approaches being taken. Some manufacturers plan to gradually introduce more autonomous features into their cars, kind of like extensions of current driver assistance features. Others are targeting higher levels of autonomy (SAE Levels 4 and 5) where the human would not be required to take over control. I believe Google/Waymo is still targeting this latter approach. My personal take is that focusing on Levels 4 and 5 is the safer way to go.


Background - Currently pursuing masters (3D-printing soft robotics), Physics & Chemistry undergraduate

Given the recent advances in machine learning, soft robotics, and technology in general, it is guaranteed that numerous jobs held by unskilled workers will soon be automated. Is it ethical to pursue work that will eventually cause people to lose their jobs and decrease the quality of their lives if it means furthering human achievement and ambition?

chuck3663

(SRS) Thank you very much for your important question! I personally think it is absolutely ethical to pursue any work that will make life easier, improve quality of life, or ease the burden of an illness. It cannot be excluded, of course, that this technology will then cause people to lose their jobs. But if we followed this argument, we would not have invented the car, the telephone, the engine, or anything else that makes our life much better today, because hundreds of coachmen or letter carriers lost their jobs. However, I personally think it is extremely important that technological change be accompanied by efficient policy frameworks that dampen the social effects of innovation. This has been underestimated over the last decades of globalization, and countries that do not have appropriate social mechanisms to dampen these effects have a fundamental problem now. I can only see a well-balanced eco-social market economy providing such a framework.


Would you consider Neuroscience a neighbouring field? Would a person with a Ph.D in computational neuroscience be able to find a position in a robotics lab?

FabulousMrFox

HZ: Yes, of course! I recommend looking for groups working on exoskeletons or prosthetics.


Would you consider Neuroscience a neighbouring field? Would a person with a Ph.D in computational neuroscience be able to find a position in a robotics lab?

FabulousMrFox

HR: Yes, one of the authors on my paper is a neuroscientist. I think neuroscience and robotics/engineering go really well together and there is a lot of interest in working at the intersection of these fields.


Would you consider Neuroscience a neighbouring field? Would a person with a Ph.D in computational neuroscience be able to find a position in a robotics lab?

FabulousMrFox

(SRS) Absolutely! These two fields have a great overlap (and always have; think, e.g., of biological cybernetics), and the now fast-growing areas of neuroprosthetics, bio-robotics, and brain-machine interfaces will certainly have a demand for well-trained people who are competent in both fields!


What is your main hope/aspirations in the future of robot technology?

Whitemanwithafro

HR: I’m hopeful that we can save a lot of lives with automated vehicles. I really think this technology has a lot of potential for good and I hope that we see this realized.

Thanks to everyone for your insightful questions in this AMA!


Outside a few large, select institutions that offer robotics graduate degrees or even bachelor's degrees, there are very few places that offer specialized robotics training and research. Do you think the study of robotics will be expansive and important enough in the future (with enough jobs) that it should be offered at most institutions, rather than remaining a small segment of electrical or mechanical engineering? Will it be the next bioengineering?

uriman

HZ: As far as I know, even though many institutions don't offer a robotics degree, their robotics research and teaching are thriving. Robotics research and courses can be found in ME, EE, CS, and even bioengineering. Robotics needs interdisciplinary collaboration, so spreading into different majors might not be a bad thing.


Which newly emerging technologies do you think are likely to have the biggest impact on robotics?

Syentist

HZ: From my working area (soft robotics), I would pick 3D printing. It makes design iteration fast and efficient.


In terms of the human interface device, what technologies show the most promise, or at what level are the technologies you are using, in terms of their technological readiness level?

At the top there are probably the touchscreen and keyboard, but these might not be feasible for quadriplegics, or for use in tough environments such as collapsed buildings.

Rzzth

(SRS) I see great potential in voice recognition, wearable sensors (including biosensors), and context awareness. Brain-computer interfaces (BCIs) have shown great promise, but the input/output (I/O) constraint poses a great challenge. Currently, we can only decode a few bits per second using BCI technology, while the brain processes tens of millions of bits each second. Once the I/O bottleneck has been widened, I also see great potential for broader BCI use. Currently, however, this technology is mainly relevant for restoring lost functions.
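The "few bits per second" figure can be made concrete with the Wolpaw information transfer rate, the standard way BCI bandwidth is reported. A minimal sketch (my own illustration; the function name and example numbers are assumptions, not values from the article):

```python
import math

def itr_bits_per_selection(n_targets, accuracy):
    """Wolpaw information transfer rate: bits conveyed by one selection
    among n_targets when the decoder is correct with probability `accuracy`."""
    if n_targets < 2 or not (0 < accuracy <= 1):
        raise ValueError("need n_targets >= 2 and 0 < accuracy <= 1")
    bits = math.log2(n_targets)
    if accuracy < 1:  # avoid log2(0) at perfect accuracy
        bits += accuracy * math.log2(accuracy)
        bits += (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1))
    return bits

# A two-class motor-imagery BCI at 90% accuracy, ~10 selections per minute:
per_selection = itr_bits_per_selection(2, 0.90)  # ~0.53 bits
per_second = per_selection * 10 / 60             # well under 1 bit/s
```

Even under generous assumptions, noninvasive BCIs land in the single-bits-per-second range, which is the I/O bottleneck described above.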


First, I used to dream of living in a world with robots and robotic prosthetics as a little kid, so thank you all for your work. Do you ever worry about how your work will be weaponized in the future?

2DamnBig

(SRS) Thanks for your great question! The technology that we develop (probably any technology) could certainly be used for military purposes as well. We therefore make everything, and all technologies we develop, publicly available.


Question for Duncan: what are the advantages of your future vision for your robot over a different approach such as a similarly sized multirotor for exploring a collapsed building? I don't understand how jumping could be a better approach than flying for this kind of application.

nitpickyCorrections

dh: This is a good question about the practical applicability of a platform that could be built using the ideas put forward in our paper. Off the top of my head, the three major benefits of a high-mobility terrestrial platform vs a multi-rotor:

  • Longer battery life
  • Safe to use near walls and floors. At present, most multi-rotors are going to have a bad time if you fly them near anything.
  • Dust. Every disaster site is different, but the common element is dust, which would cause problems if you start blowing it everywhere.

I think drones do have their applications for search and rescue (like aerial mapping) but if you want to operate on/in the rubble, a terrestrial platform could be better suited to some tasks.


Hi guys, my 13-year-old son loves robotics, technology, and space, not in that particular order. What kind of path (briefly) did you follow to end up working in this fascinating field?
Also, what did you have for breakfast?

Edit: I'd personally like to know if there's an immediate future for nanobots in health and science and if so is it something you're excited about?

lala989

(SRS) That's great that your son is so in love with robotics and technology! :) I hope he will stay with it and join us in a few years on one of these projects. To answer your question: I went to medical school (BTW, no tuition in Germany!) and wrote a thesis on neuroplasticity. I became a psychiatrist and psychotherapist on the clinical side, but always continued to work on neuroplasticity and how we can modulate it (e.g. using brain-machine interfaces or brain stimulation). Once we had the technology (i.e. controlling a prosthetic device in the lab), I thought it made sense to bring it where it belongs: to the everyday life of paralyzed people! I wish you and your son all the best!!


Question mainly to Holly. As someone working in the field of autonomous vehicles I'm guessing you've probably thought about the "choose lesser evil" problem which autonomous cars will have to face. I was wondering what is your take on it. Should the car always protect the passengers, or implement some kind of value-of-life criterion and choose whether to save the lives of, for example, the pedestrians over the passengers?

Stasky

HR: Great question! I have definitely thought about this issue and have not come to a clear conclusion. I think many people in the field are wrestling with how this should be considered in automated vehicle design. Ethical considerations are part of the U.S. federal government’s new automated vehicle guidelines (pages 14, 26). This is also something that Patrick Lin has been writing a lot about over the past few years. I think this will evolve in the public and industry conversation as new automated vehicles and regulations are developed.


Holly,

What an exciting and challenging field you are in.

My question pertains to stop/start methods at stoplights. The last two questions at the bottom are what I am curious about.

When a red stoplight turns green, and the human operator does not move forward, there is latency keeping all cars from starting and stopping at that magical same time.

The military taught me if a group of 1,000 people all started on their left foot, the group moves simultaneously. If you have one person not paying attention at the front, it ruins this for everyone. (No hate guys, we are simply human - I do it too.)

I understand these are likely short term solutions to prepare us for absolute automation: Would a 'ding' work as a command to prepare the operator to look up when the light turns green? When will we accelerate the car before the human operator is ready to go?

socksgin

HR: Vehicle-to-vehicle (V2V) communication will be useful for many situations including improving efficiency of traffic flow and helping to reduce crashes. There is some good information at NHTSA if you’re interested. At a stoplight, a car could use V2V to broadcast a signal when it starts to accelerate, so cars behind it would know it’s safe to start moving, too.

If you’re looking for ways to get humans to pay attention better and not hold up traffic, there are currently systems that make a noise when the car in front has moved, but this doesn’t handle being first in line at a light. One thing to consider when trying to give the driver more information is how many different notifications are they expected to differentiate between? It seems simple to ding a bell, but if it sounds the same for several different notifications you then have the driver looking around the dashboard trying to figure out what the car is saying instead of paying attention to the road. Drivers can quickly be overwhelmed with too much information.


Thank you so much for doing this AMA!

A few questions:

1) How did each of you get into your field? Education background? Work background? Personal projects?

2) What opportunities are there for developing robotics going forward?

3) What advice do you have for aspiring roboticists or robotic engineers‽

Bonus Round:

What do you think is the horizon for autonomous robots and the development of artificial intelligence within them?

Sumethingbetter

(SRS) 1) I did my doctoral thesis on neuroplasticity under the guidance of Herta Flor (Mannheim/Heidelberg, Germany), and through her I met Niels Birbaumer, who developed the first thought-translation device (TTD), allowing locked-in patients who were completely paralyzed to communicate using brain waves alone. I thought this was pretty exciting! Besides research, I continued my medical training as a psychiatrist and psychotherapist. Later I became a fellow at the NIH, working with Leo Cohen. My research group's main focus is the development of innovative and reliable non-invasive brain-machine interfaces and their clinical use.

2) Merging robotics with biological systems is – in my opinion – a great opportunity.

3) As a clinician, I would say: Try to be open for other fields and look for interdisciplinary work. There is great need of people who are knowledgeable across fields! :)


What is the current state or knowledge behind BMI tech?

I mean, what kind of external networks can we connect successfully to the brain? Can we look at what people's eyes see yet?

I know we can hijack some parts of the brain's signals and translate them into movement of prosthetics, but how far along are the upstream methods? Are we able to feed back from a prosthetic and get a sense of touch, for instance? Or are we pretty far away from that sort of tech?

Sharkytrs

(SRS) Thanks for this question! Right now there is a substantial input/output (I/O) constraint that limits bidirectional connections between the brain and external networks. BMIs can decode 1-4 bits/s, while the information the brain processes each second has been estimated in the tens of millions of bits. As long as this constraint has not been overcome, the ability to decode from the brain is limited. Using implantable BMIs, single-cell activity of a few hundred neurons can be recorded. If placed in the visual cortex, this should be sufficient to infer what the person is looking at. But who would want to risk an implant in the primary visual area? There is progress in making this technology better, and many very promising new developments may help to overcome the I/O constraint. Also, providing feedback through direct electric stimulation of the brain is possible. My lab has established the first non-invasive approaches combining BMI and brain stimulation, so it is conceivable that we can avoid implantations for bidirectional communication soon. Such stimulation has to be “over threshold”, i.e. discharging neurons, for a sensation and qualia to emerge, however. It’s not entirely clear how long it would take to establish this in a well-tolerated way. Invasively, this is already feasible now (e.g., to restore a sense of touch), but for doing this without implantation of microelectrodes, I would say it may take another 10 years.
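To put the 1-4 bits/s figure in perspective, the information transfer rate of a BMI is often estimated with the Wolpaw formula, which depends only on the number of decodable classes and the decoding accuracy. A minimal sketch (the 90% accuracy and 2 selections/s figures are illustrative assumptions, not values from the study):

```python
from math import log2

def wolpaw_itr(n_classes: int, accuracy: float) -> float:
    """Information transfer rate in bits per selection (Wolpaw et al.)."""
    p, n = accuracy, n_classes
    if p >= 1.0:
        return log2(n)
    return log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))

# A 2-class BMI (e.g. open vs. close the hand) at 90% accuracy:
bits = wolpaw_itr(2, 0.90)
print(round(bits, 3))      # ~0.531 bits per selection
print(round(bits * 2, 2))  # ~1.06 bits/s at 2 selections per second
```

This shows why 2-state EEG control sits near the bottom of the 1-4 bits/s range: with few classes and imperfect accuracy, each selection simply cannot carry much information.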


To All - Thanks for the AMA and I have one question.

What inspires you to move forward on the days that nothing is going right or making sense?

Side note to Duncan and Holly - The links to Linkedin you might want to change as they require a log in by us.

Sun-Anvil

HZ: Days when nothing works happen all the time. Sometimes I feel frustrated, but later on I always find a reason why it doesn't work. The process of analyzing what is going wrong and then solving it is the fun part of research.



SRS: Thanks for your question! There are indeed many days in which nothing seems to go right :) On these days, I try to take a step back and focus on something different. Often the answer or solution just appears by itself...Once you have experienced this a few times, you get substantially more relaxed over time ;)


How are prosthetic limbs able to recreate the sensation of touch and feeling?

thephant0mlimb

(SRS) There are two options: 1) sensors placed on the prosthetic limb send signals to a stimulator (e.g. vibrational) that conveys the information to the user (non-invasive version), or 2) an electrode is implanted in the sensory cortex and stimulation is delivered directly there (implantable version). There is also the possibility of stimulating a peripheral nerve. Hope this answered your question! :)
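Option 1 above amounts to mapping a sensor reading onto a stimulator intensity. A minimal sketch of such a mapping, with made-up pressure thresholds (a real transfer function would be tuned per device and per user):

```python
def pressure_to_vibration(pressure_kpa: float,
                          p_min: float = 5.0, p_max: float = 100.0) -> float:
    """Map a fingertip pressure reading to a vibration motor duty cycle
    in [0, 1]. Thresholds are illustrative, not from any published device.
    Below p_min nothing is felt; above p_max the motor saturates."""
    if pressure_kpa <= p_min:
        return 0.0
    frac = (pressure_kpa - p_min) / (p_max - p_min)
    return min(frac, 1.0)

print(pressure_to_vibration(5.0))    # 0.0  (below threshold)
print(pressure_to_vibration(52.5))   # 0.5  (mid-range)
print(pressure_to_vibration(150.0))  # 1.0  (saturated)
```

The same structure applies to the invasive version: the stimulator target changes (cortical or peripheral-nerve electrode instead of a vibration motor), but a sensor-to-stimulation transfer function is still needed.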


First off, congrats to all of you. Your work all looks fantastic. Question for all of you, what are your goals with future research and after completion of your degrees? Also, what sort of work do you each participate in to encourage children to pursue the sciences?

Huichan, what forms of feedback has your research explored for a wearer of the prosthetic? Has thought gone into integrating the sensor system in various prosthetic systems?

Duncan, your robot has gotten some good attention of /r/robotics. How scalable is this technology and do you foresee the capabilities of integrating this into an exo-skeleton for human use?

Surjo, with the current system, you can actuate motors to complete simple arm based tasks. Has your research indicated any scalability to the system? With the signal you're receiving, how many different states are you able to detect?

Holly, will your future work conduct the same experiment but with obstacle avoidance to aid the transition from autonomous to human control? What other sort of work do you see that could make an impact on the advancement of autonomous vehicles?

YT__

Huichan Zhao (HZ): Hi, thanks for your question. I will be a post-doc and keep working on soft robotics after graduation. Every year I volunteer at a Cornell workshop named "Command Your Robot" that teaches young girls to program a robot; I really enjoy it. Our research hasn't yet involved patients, and that is definitely our future direction. Yes, the waveguide system should work for other prosthetics, but since the hand is among the most sensitive human organs, replicating its sensory capability is the most challenging case.



HR: Thank you! I completed my degree about a year ago and am currently working as an autonomous driving researcher with Renault. I’m working on a piece of the technology for path generating and path following that we plan to put in automated vehicles in the future. Regarding other research related to my study, I think it would be really interesting to develop more complex scenarios to further the study of adaptation in the car. As complexity of the task increases, though, it can quickly become challenging to isolate what is causing the behavior that we see. This is why we did a simple scenario to focus on motor adaptation and not other aspects of handover like situation awareness.



dh: Scalable in terms of size? The actuation principles we figured out for jumping aren't size-specific, so you really could apply them at any scale. So yeah, you could build it into a human-scale exoskeleton, but there might be other, more humanitarian applications for a device like that, e.g. helping someone walk again before trying to make them win the high-jump contest.



(SRS) Thanks for this question! Right now, it can detect 2 states: opening and closing the hand. Decoding single fingers is still not feasible, particularly outside the laboratory. We hope, however, that making the exoskeleton more intelligent (semi-autonomous) will further improve applicability. It would be possible to decode more from brain waves, but then the calibration and training would need more time, and the reliability may drop.


Hi, this question is mainly aimed at Surjo but can be answered by anyone.

I'm currently a penultimate year mechatronics engineering student at uni doing an internship at a very large company in the field of mechanical engineering. That being said, my main interest is in biomechatronics but I've always been told it's a very narrow field and not worth it.

The reason I wanted to get into it was because, since I was in high school, I had an idea for a project I wanted to work on. I want to delve into prosthetics and mainly research robotics that can interface with the brain and be controlled directly by neural impulses. I've done all but the last year of biomedicine (before switching to engineering) and understand the main limitations and how much work would have to go into it. The project is essentially the article you linked.

My question to you is, regardless of geographical limitations (I'm prepared to move to another country, I can move to most places with the passport I currently possess), what are some good resources to research to get a head start into this field. Will I need to get an internship as early as now to even be considered? Will it be worth it in terms of job prospects? Is the project I want to do very specialised and will I be able to work on something like that for an employer in the field as an intern or a graduate?

variantt

(SRS) It's great to hear that you are interested in this kind of technology and work. I would recommend contacting, e.g., the Wyss Center in Geneva (John Donoghue's laboratory), or one of the big BMI labs in the US that are doing fantastic work. An internship would definitely be advisable to get more insight into the field (and its challenges). Also, it will help you get an idea of what skills will be needed for the specific area you are interested in.


Hey, I'm curious about the personal beliefs and opinions from Surjo or Huichan, about the future of robotics given your professional experiences, knowledge, and insight to how advanced we are currently. It's okay, if you may not answer some of the questions, also Holly's and Duncan's opinions are welcome as well.

I'm interested in rough estimates about the development of artificial intelligence or the advancement of human prosthetics. I assume certain body parts are easier to augment or replace than others that are more complex.

On brain-computer interfacing, what can our current technology enable people to actually do?

Can a robotic hand rival the effectiveness of a flesh hand yet?

Or how much can our technology replace our sense organs? I occasionally hear about devices that enable blind people to see or deaf people to hear, but what is the extent of our current technology?

It would be great if we could replace organs, or if I could be a cyborg, that would be cool.

Also what fields or professions seem to be rapidly expanding regarding robotics or AI, or medical augments?

I think modern automation factors into the job market today, so it would probably have an effect on those respective fields.

Hypothetically, what are your beliefs about how far our technologies or specific technology will advance in a number of years? 10? 50? 100? 300? 1000?

ThePu55yDestr0yr

HZ: I am sorry I can only answer a small segment of your question, based on the prosthetic hand we designed. Our current design can achieve grasping, pressing, and shaking, and can detect surface roughness and object softness, but it is still far from a flesh hand. For example, a flesh hand can achieve sensory resolution of several nanometers, and ours is several orders of magnitude coarser. But our prosthetic hand has a big advantage: its cost. Our goal is not to compete with a real hand, but to regain some important functions of the hand at a low cost.


Dr. Soekadar,

Thank you for your time. How much of the EEG control do you think comes from the hand area itself? Did you see evidence that there is contributory signal outside the omega? There is with microarray placement, and with the 6 cm² area you have a much broader 'view.' Also, congratulations on making a noninvasive, practical BCI/BMI requiring almost no training. Very exciting. And the smile on your participant's face as he eats the potato chip is beatific.

3-2-1_liftoff

(SRS) Thanks for your question! The control was very specific for intended (or imagined) finger movements. You can see in video S5 that the participant could move his shoulders or head, or could speak without activating the closing motion. So I am quite confident that the EEG control came primarily from the hand area itself. Using the small Laplacian filter reduced the contributions from other alpha sources.
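For readers unfamiliar with the term, a small Laplacian filter subtracts the average of the nearest surrounding electrodes from the center electrode, attenuating broad sources common to all channels while preserving activity local to the center site. A minimal sketch with synthetic data (not from the study):

```python
import numpy as np

def small_laplacian(center: np.ndarray, neighbors: np.ndarray) -> np.ndarray:
    """Small Laplacian spatial filter: subtract the mean of the nearest
    surrounding electrodes from the center electrode.
    Shapes: center (n_samples,), neighbors (n_neighbors, n_samples)."""
    return center - neighbors.mean(axis=0)

# A signal present on all channels (a broad alpha source) is removed;
# activity unique to the center electrode survives.
common = np.ones(4) * 10.0
local = np.array([0.0, 1.0, 0.0, -1.0])
center = common + local
neighbors = np.tile(common, (4, 1))  # four surrounding electrodes
print(small_laplacian(center, neighbors))  # [ 0.  1.  0. -1.]
```

This is why the filter helps isolate hand-area activity: contributions from distant alpha sources appear nearly identically on the center and neighbor electrodes and largely cancel out.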


Huichan: Do you know an undergrad student named Ziwei?

He is an engineering student at Cornell who used to be the captain of the robotics team I coached.

monkeydave

HZ: Sorry, I don't know him/her, but glad to hear Cornell students are participating in robotics competitions!


Are there things that irk you about the way the media has covered your research specifically, or how we cover robotics in general? How could we do a better job?

BotJunkie

dh: First off, I think that the folks over at IEEE Spectrum do a great job. A general note for other journalists is that it's a bit annoying as a researcher to have your results constantly framed in the context of a robot apocalypse. It detracts from our ability to have a more nuanced discussion of how robots can add value to society in the near term, and stops us from exploring the next places we want our research to take us.


License

This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.