Science AMA Series: We are Stanford neuroengineers who created a neural prosthesis for monkeys to type Shakespeare with their "minds". AUA!

Abstract

We are Paul Nuyujukian MD PhD (Postdoc, soon to be Bioengineering faculty at Stanford) and Jonathan Kao PhD (Postdoc, soon to be Electrical Engineering faculty at UCLA), neuroengineers in the Stanford Neural Prosthetics Systems Laboratory, which is directed by Professor Krishna Shenoy PhD.

We just published a paper in the Proceedings of the IEEE in which we demonstrated a high-performance neural prosthesis where monkeys transmitted text one character at a time at a rate of up to 12 words per minute.

Video of monkey typing

Before we get ahead of ourselves, let us assure you that monkeys don't understand English. In the video above, the monkeys only saw the green and yellow dots, not the black letters (which were added in post-production as a visual aid). The game engine prompted the green targets in a specific sequence; when the monkeys acquired them correctly, their selections spelled out words and sentences that we can all understand. The video above is a selection from Hamlet, but the primary data of the paper were articles from the New York Times that the monkeys were asked to "transcribe." All they are really doing, though, is navigating the white cursor to the green target on every trial and earning a liquid reward for each success.

The monkeys' ability to control the cursor with their brains was accomplished via a brain-machine interface (BMI). BMIs are systems that record from the brain and translate these measurements into useful control signals, which could be used to control a robotic limb, a wheelchair, or, as in this case, a computer cursor. Here, the BMI has functionality similar to that of a one-button computer mouse: it can move in two dimensions and click.

The hardware interfaces used in the BMI, neural electrodes (we used the Utah multielectrode array), are not new; they have been around for decades. What is new are the algorithms that translate (or, as we refer to it, "decode") the brain signals into movement of the cursor. The machine learning decoding algorithms used in this study are ones we developed recently (cursor-movement and click decoders) that significantly improve the performance of communication BMIs, enabling our monkeys to achieve rates 2-3 times faster than those achievable with prior algorithms.

There is tons more we could write about (algorithm details, clinical trials that these findings have resulted in, what other medical conditions BMIs may help with, etc), but we'll stop here and open it up to you all for questions. We look forward to answering as many of your questions as possible!

2PM PT - We are live!

6PM PT - We are done, thank you for the great questions!

More videos:

Dwell typing

Click typing

Media coverage:

Stanford press release

IEEE Spectrum

NPR KQED - Future of You

The Verge

Wired UK

Thanks for taking the time to talk with us! I had a few questions I was hoping you could answer:

  • Is the equipment for the BMI invasive? I assume not if you have humans using them.

  • How focused do you need to be on the task of controlling the dot? If my mind wanders or I start thinking about getting something to eat will it start sending unintended signals that might get picked up and translated into movement?

  • When I'm controlling my arm, I can think about moving it without actually moving it. Can you do that with the BMI?

Edit:

Additional questions

  • How much training and familiarization does it take to use the BMI?

  • Do you foresee more complex algorithms? Do you think that some day we could directly type letters or whole words just by thinking about them? Images?

PapaNachos

Jonathan: Great questions -- answers below.

  • Is the equipment for the BMI invasive? I assume not if you have humans using them.

The equipment for the BMI in this study is implanted into the brain. We used the Utah electrode array, which has a 10x10 grid of electrodes in a 4 mm x 4 mm footprint, approximately the size of your pinky fingernail or a baby aspirin. With respect to humans using them, there are several active studies, part of US FDA pilot clinical trials, in which this same technology has been implanted into humans and used to control similar neural prostheses. Our sister lab, the Neural Prosthetics Translational Lab at Stanford University, which is part of the BrainGate pilot clinical trial, has published work on this (e.g., Clinical translation of a high performance neural prosthesis).

  • How focused do you need to be on the task of controlling the dot? If my mind wanders or I start thinking about getting something to eat will it start sending unintended signals that might get picked up and translated into movement?

If your mind begins to wander, it could lead to unintended movements of the cursor. But there are a few things we do to make sure this doesn't have deleterious effects on the typing task. One thing we have done is to decode an "engagement" signal -- we call it an "idle" signal in our paper discussing the algorithm used in this study, linked here. By decoding this signal, we were able to reliably detect whether the monkey was fully engaged in performing the task, and when he wasn't, we didn't allow any selections to be made (i.e., it would not be possible to click one of those targets). Clever use of this signal could allow the brain-machine interface to behave in ways that don't lead to inadvertent selections or actions when you're not focused on performing a motor task.
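To make the gating idea concrete, here is a toy sketch (not the paper's actual HMM; the nearest-mean rule and the firing-rate profiles are invented for illustration) of how a decoded engagement state can suppress selections:

```python
import numpy as np

def decode_state(rates, engaged_mean, idle_mean):
    """Nearest-mean classification of the engagement state; a crude
    stand-in for the per-state likelihoods an HMM would compute."""
    if np.linalg.norm(rates - engaged_mean) < np.linalg.norm(rates - idle_mean):
        return "engaged"
    return "idle"

def allow_click(rates, engaged_mean, idle_mean):
    # The gate: no selection is registered while the decoded state is "idle".
    return decode_state(rates, engaged_mean, idle_mean) == "engaged"
```

With, say, an invented "engaged" profile of 40 spikes/s and an "idle" profile of 8 spikes/s across 96 channels, rates near 40 would permit clicks and rates near 8 would not.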

  • When I'm controlling my arm, I can think about moving it without actually moving it. Can you do that with the BMI?

Regarding real arm movements vs. imagined movements: great question! Our lab had the exact same question. The short answer is: yes. What follows is a more detailed answer. When we, for example, plan to make a reach, the neurons in our motor cortex fire very strongly, yet don't produce movements. A paper from our group by Matt Kaufman and colleagues, Cortical activity in the null space: permitting preparation without movement, proposed a mechanism for how this might be so. The basic intuition is as follows: the mapping from neurons to our muscles (or, in our specific case, a brain-machine interface) is many-to-few. Let's take the example of a brain-machine interface. In our study, approximately 100 neurons control the 2-dimensional position and velocity of the cursor. This means, in linear algebra terms, that there is a 98-dimensional "null space" where certain patterns of neural activity don't affect the 2-dimensional position and velocity of the cursor. It turns out that imagined or planned movements produce patterns of neural activity that live in a muscle "output-null space," so they don't cause movement. So, fundamentally, real movements and imagined movements have different patterns of neural firing. Because they do, we can design machine learning algorithms that specifically separate out neural activity related to performing a physical movement while ignoring imagined movements. But the fundamental reason this is possible is exactly the study by Kaufman and colleagues -- i.e., that these two types of movements result in different patterns of neural activity. I think this is a pretty cool application of linear algebra! :)
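The many-to-few geometry is easy to check numerically. A minimal sketch, using a random matrix as a stand-in for the real neurons-to-cursor mapping:

```python
import numpy as np

# Stand-in linear decoder: 100 neurons -> 2-D cursor output (random, for
# illustration only; the real mapping is fitted from data).
rng = np.random.default_rng(0)
D = rng.standard_normal((2, 100))

# SVD gives an orthonormal basis; the rows of Vt beyond the rank of D span
# its null space, which has 100 - 2 = 98 dimensions.
_, _, Vt = np.linalg.svd(D)
null_basis = Vt[2:]                       # shape (98, 100)

# Any firing pattern built from null-space directions moves the cursor nowhere.
idle_activity = null_basis.T @ rng.standard_normal(98)
velocity = D @ idle_activity
print(null_basis.shape, np.allclose(velocity, 0))   # (98, 100) True
```

This is the sense in which preparatory or imagined activity can be "output-null": the brain can push activity along those 98 directions without the 2-D output changing at all.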

  • How much training and familiarization does it take to use the BMI?

The short answer: very little time -- Monkey J was able to use the brain-machine interface on the very first day we tried it with him. The amount of time it takes to train the actual decoder algorithm can be less than ten minutes. In general, though, the time it takes to learn depends on the approach taken to designing the brain-machine interface. In some cases, you want the brain-machine interface to work as if you were controlling your natural arm, which is sometimes called a "biomimetic" brain-machine interface. In other cases, you might want to design a brain-machine interface that the user has to learn. Both of these approaches are well summarized in this perspective: Combining Decoder Design and Neural Adaptation in Brain-Machine Interfaces. In our scenario, we design our brain-machine interfaces to be biomimetic, so that you control them as you would your natural arm. Because our algorithms achieve high performance in 2D cursor control, the monkeys don't require any additional training to learn how to use the brain-machine interface. They control it, as far as we can observe (e.g., by analyzing their neural data), as they would their native arm. As for brain-machine interfaces where learning occurs, this is a more nuanced question. It turns out some brain-machine interfaces are easier to learn than others; if you're interested in reading more about this, I recommend this paper: Neural constraints on learning.

  • Do you foresee more complex algorithms? Do you think that some day we could directly type letters or whole words just by thinking about them? Images?

Definitely! These are areas of active research. We're constantly working on novel algorithms. Regarding typing letters or words directly just by thinking about them, this could be possible in at least a couple of ways. Our group has done discrete decoding where you just think about selecting a target in space (see this study by Santhanam and colleagues) and if each target corresponded to a letter, this can lead to very fast typing. However, if you want to think of the letter "E" and have it appear, we would likely have to record from different regions of the brain. For example, some are working on decoding speech signals from the brain.


First of all, thank you for doing this AMA. Some weeks ago we had an AMA with Hugh Herr, these are exciting times for bionics enthusiasts!

I am currently studying Neural Engineering and have been interested in BMIs and bionics since I was a kid, so I'd love to ask you a few questions.

  1. What are the best ways to get into the field? I have recently got a B. Sc. in Biomedical Engineering and have started a M. Sc. in Neural Engineering. Should I pursue a PhD, or should I try to apply for job positions in companies that focus on bionics? Any advice on this would be greatly appreciated.

  2. What kind of neuroprosthesis are in the making? After the already outstanding developments of many devices which improved the lives of many, what do you think will be the next big achievement?

  3. BMIs are most certainly the main bottleneck in bionics. We have the appropriate mechanical and electronic engineering to mimic human limbs, but we don't know how to create a two-way connection between the bionic device and the nervous system. This is because signal processing for biological neural systems is a challenge, to say the least, and because it's difficult to selectively pick up signals coming from individual axons. So, my question here is: which do you think will prove to be the most successful way to build a BMI, peripheral or central nervous system interfaces? Also, do you think we'll be able to use EEG for high-end BMIs? Or should we look for a completely new way to build the proper interface?

  4. Do you think we'll be able to obtain, in the near future, BMIs that will overcome the bottleneck I have mentioned before? And maybe even extend our senses and capabilities?

  5. Could you share some details about the new algorithm you used?

I asked quite a lot of stuff I guess, I'd be very thankful if you'd answer even to just a few of the questions I posed. Thank you again!

EDIT: added a question.

dd_hexagon

Paul: excellent questions. Answer below.

  • What are the best ways to get into the field? I have recently got a B. Sc. in Biomedical Engineering and have started a M. Sc. in Neural Engineering. Should I pursue a PhD, or should I try to apply for job positions in companies that focus on bionics? Any advice on this would be greatly appreciated.

This is very much still a young field under heavy research. There are definitely some companies pursuing BMIs, but even they are looking for people with extensive experience. My advice would be to get into a PhD program at a university where this type of work is happening. You can do an engineering degree like I (Bioengineering) or Jonathan (Electrical Engineering) did, or you can go for neuroscience. BMI research requires a lot of insight into mathematics, statistics, signal processing, statistical estimation, and machine learning, so those are the key areas to focus on, in addition to neuroscience and computer programming.

  • What kind of neuroprosthesis are in the making? After the already outstanding developments of many devices which improved the lives of many, what do you think will be the next big achievement?

There are many BMIs under active development. The communication neural prostheses that we discuss in this study are very exciting, as are motor neural prostheses, which restore the ability of people to interact with the physical world. Good examples of motor neural prostheses in which human participants control robotic arms are these two studies: Collinger, et al., Lancet 2013 and Hochberg, et al., Nature 2012.

  • BMIs are most certainly the main bottleneck in bionics. We have the appropriate mechanical and electronic engineering to mimic human limbs, but we don't know how to create a two-way connection between the bionic device and the nervous system. This is because signal processing for biological neural systems is a challenge, to say the least, and because it's difficult to selectively pick up signals coming from individual axons. So, my question here is: which do you think will prove to be the most successful way to build a BMI, peripheral or central nervous system interfaces? Also, do you think we'll be able to use EEG for high-end BMIs? Or should we look for a completely new way to build the proper interface?

  • Do you think we'll be able to obtain, in the near future, BMIs that will overcome the bottleneck I have mentioned before? And maybe even extend our senses and capabilities?

This is a very insightful question. The interface itself is the key challenge for our field and the single most difficult question to answer. In this study, we used an electrical interface to the brain (central nervous system). As you pointed out, there are other options, such as interfacing with the peripheral nervous system, or using only scalp electrical measurements such as EEG. It's too early to say which will be the successful interface. What I can tell you is that we need continued research into all of these interfaces, and into new ones, so that we find the right one. The right interface is one that provides a high-quality, multi-channel, high-resolution view of the state of the brain and is robust and safe for decades of use. No single interface yet meets all of these criteria, and thus we need continued investment and work in this area so that we find the interface that maximizes benefit and minimizes risk to the people who so greatly need such technology.

  • Could you share some details about the new algorithm you used?

Absolutely. There are two algorithms used in this study. Combined, they provide the equivalent functionality of a one-button mouse (think old Apple computers). The first is the ReFIT Kalman filter, which is used to control the movements of the cursor. We published this in a prior paper: Gilja*, Nuyujukian*, et al., Nature Neuroscience 2012. Briefly, the ReFIT algorithm improves cursor control by better estimating the intended direction of movement in a two-stage fitting process. ReFIT more than doubled the performance of existing algorithms. The second algorithm used in this study is a hidden Markov model used for click transmission. We published details of this algorithm recently in another paper: Kao*, Nuyujukian*, et al., IEEE TBME 2016.
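For readers curious about the mechanics, here is a minimal sketch of the generic Kalman-filter predict/update cycle that decoders in this family are built on (the matrices are placeholders; ReFIT's contribution lies in how the model is fitted, which this sketch does not reproduce):

```python
import numpy as np

def kalman_step(x, P, y, A, W, C, Q):
    """One decode cycle.
    x, P: current state estimate (e.g., cursor position/velocity) and its
          covariance.
    y:    the vector of binned spike counts observed this time step.
    A, W: linear state dynamics and process-noise covariance.
    C, Q: observation model (neural activity as a function of state) and
          its noise covariance."""
    # Predict the state forward through the dynamics model.
    x_pred = A @ x
    P_pred = A @ P @ A.T + W
    # Correct the prediction using the neural observation.
    S = C @ P_pred @ C.T + Q                  # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

Run once per bin of spike counts, the filter continuously trades off the dynamics model against the noisy neural observations.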


What are the monkeys actually doing if the letters were added in post? I don't really understand what's happening to be honest.

dopestpesto

Jonathan: The monkeys saw the yellow grid of targets and not the black letters. We would prompt a target by making a yellow target turn green. The monkeys were trained to acquire this prompted green target: they would control the brain-machine interface so as to move the cursor onto it. This process would repeat, so that the monkeys were continuously controlling the brain-machine interface to move the cursor from one prompted green target to the next.

Now, we controlled which green target was prompted. Targets can be prompted randomly (when we want to measure information-theoretic performance metrics like bitrate; see our study on this). But for the purposes of measuring words per minute, we can prompt the green targets in a specific order so as to spell out comprehensible text. We know which target corresponds to which letter, so we can prompt them in a specific order. The monkeys weren't aware of any of this; all they knew was that they were acquiring the green target among the yellow targets.
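In code, the prompting logic amounts to a fixed character-to-target mapping that only the experimenters know about. A toy sketch (the key set and layout below are invented; the actual grid differs):

```python
import string

# Hypothetical mapping: assign each grid target a character the monkey
# never sees (26 letters plus space, indexed 0-26 for illustration).
KEYS = list(string.ascii_lowercase + " ")
TARGET_FOR = {ch: i for i, ch in enumerate(KEYS)}

def prompt_sequence(text):
    """Target indices to prompt, in order, so a correct run spells `text`."""
    return [TARGET_FOR[ch] for ch in text.lower() if ch in TARGET_FOR]

print(prompt_sequence("to be"))   # [19, 14, 26, 1, 4]
```

The monkey just chases green targets; the experimenters choose the prompt order, so the same behavior can produce random-target bitrate measurements or transcribed text.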


This is some pretty neat stuff :) Thanks for setting aside the time to do this AMA!

I'm an undergraduate neuroscience major and I'm looking to develop skills that would help me succeed in BMI work.

1) Besides learning concepts in programming, neurophysiology and signal processing, could you recommend any specific interfaces or demonstrations that can help develop an understanding of BCIs? Specifically in this context, in learning how to navigate interfaces used in control.

2) Have you thought about incorporating feedback mechanisms through the array as well? Either as an additional means of sensory feedback, or for a more direct reward?

catharsis724

Jonathan: Thanks for the kind words. We're very happy to be doing this AMA.

Besides learning concepts in programming, neurophysiology and signal processing, could you recommend any specific interfaces or demonstrations that can help develop an understanding of BCIs? Specifically in this context, in learning how to navigate interfaces used in control.

Regarding demonstrations to help develop an understanding of brain-machine interfaces -- nothing on intracortical brain-machine interfaces (the kind we work on) comes to mind immediately aside from demonstrations reported in research articles. Several of these studies have movies that show robotic arms being controlled, and they describe in detail the types of algorithms used to decode neural activity into prosthesis kinematics. See, for example: Wodlinger et al., J Neural Eng 2015, Collinger, et al., Lancet 2013, Gilja*, Pandarinath*, et al., Nature Medicine 2015, and Hochberg, et al., Nature 2012. Aside from these studies, there are communities that openly publish data that might be relevant if you are interested in decoding. For example, there was a Kaggle competition where EEG and ECoG data were published with the goal of detecting seizures. Playing with data like this could give you a sense of how decoders work. There are also platforms like OpenEEG.

Have you thought about incorporating feedback mechanisms through the array as well? Either as an additional means of sensory feedback, or for a more direct reward?

This is an excellent question. Several studies have demonstrated that feedback plays a vital role in our ability to make very precise movements, so incorporating sensory feedback into brain-machine interfaces will be an important step toward further increasing the performance of these systems. Take, for example, the task of grasping a styrofoam cup. With our hands, we can feel that it is soft, and thus avoid applying excessive force that would accidentally crush the cup. We would not know this without sensory feedback. Likewise, for a brain-machine interface, sensory feedback plays a large role in the motor commands we seek to execute. Several groups are working on delivering sensory feedback accurately and precisely through brain-machine interfaces. For a nice overview, see the review by Bensmaia and Miller. For specific example studies, see: A learning-based approach to artificial sensory feedback leads to optimal integration, and Active tactile exploration using a brain–machine–brain interface.


How does neuroplasticity affect the implanted arrays of electrodes? Does the brain 'grow' around the interface or do the deeper parts of the cortex change the ways that the neurons near the interface work?

freedcreativity

Jonathan: Great question -- we've thought about similar things. Remarkably, the Utah electrode arrays we used are quite stable, capable of recording similar neurons through time. Even more remarkably, the properties of these neurons can remain relatively similar, so that decoders trained on neural responses recorded a year ago can still yield controllable BMIs today (see our study here). Thus, in our brain-machine interfaces, which we design to be controlled in a "natural" way (i.e., as if the monkey were still using his arm), we don't see dramatic effects of plasticity. However, for brain-machine interfaces where the goal is to learn new mappings, there can be so-called "neural adaptation," where the firing-rate properties of the neurons change. It's hard to know whether this is plasticity or, e.g., a change in strategy, or something else entirely (for a more detailed treatment, see Combining Decoder Design and Neural Adaptation in Brain-Machine Interfaces). This is an area of active research.


Do you think your algorithm and software work could improve upon work done by other labs in similar areas such as hippocampal place cell monitoring for location prediction or analysis of dream content?

BatterMyHeart

Paul: Excellent question. The fundamental technology of recording from the brain using multiple electrode channels and using machine learning algorithms to estimate the state of the brain could definitely be applied to other areas of the brain beyond just motor cortex. Whether our algorithms would work in hippocampus is not something we can easily answer, because, to our knowledge, no one has used our specific algorithms in the hippocampus. However, that's just another example of how limited our understanding of the brain is right now and how much more work there is to do.


So I was wondering: what in the process of making a BMI is the largest software challenge? Thank you.

Speedfreak501

Jonathan: The largest software challenge is algorithmic. In our systems, we are recording activity from approximately a hundred neurons every millisecond. These data are high-dimensional (there are a hundred neurons) and very noisy. About every 10 to 50 milliseconds, we have to look at the activity of these one hundred noisy neurons, and figure out how to move a computer cursor, or in other applications, a robotic arm. This entails translating the patterns of neural firing into control signals. There are many techniques we can use from machine learning and statistical signal processing to do this, including Kalman filters or recurrent neural networks (for a recent review, see here). However, there are aspects of control theory and basic neuroscience that we can incorporate into our algorithms to further increase performance (e.g., to see how control theory can improve performance, see Gilja*, Nuyujukian*, et al., Nature Neuroscience 2012, and to see how insights from basic motor neuroscience can improve performance, see Kao et al., Nature Communications 2015). We need to be constantly improving these algorithms. In particular, the algorithmic advances of Gilja*, Nuyujukian*, et al., Nature Neuroscience 2012 and Kao*, Nuyujukian*, et al. IEEE TBME 2016, were critical in achieving this 12 wpm performance.
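To give a feel for the data rates involved, here is a small sketch of the first stage of such a pipeline: turning per-millisecond spike indicators from ~100 channels into the binned counts a decoder consumes (the bin width and simulated rates are illustrative):

```python
import numpy as np

def bin_spikes(spikes_ms, bin_ms=20):
    """spikes_ms: (T, n_channels) array of 0/1 spike indicators per ms.
    Returns (T // bin_ms, n_channels) binned spike counts."""
    T, n = spikes_ms.shape
    usable = (T // bin_ms) * bin_ms          # drop any trailing partial bin
    return spikes_ms[:usable].reshape(-1, bin_ms, n).sum(axis=1)

# One second of simulated data: 100 channels firing at roughly 20 Hz.
rng = np.random.default_rng(1)
spikes = (rng.random((1000, 100)) < 0.02).astype(int)
counts = bin_spikes(spikes)
print(counts.shape)   # (50, 100)
```

Each row of `counts` is the high-dimensional, noisy observation vector that a decoder (e.g., a Kalman filter) then maps to a cursor command.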


In terms of education, what's the best way to move into this field of research? I'm a neuro grad student, but my training is primarily biology. Is self-studied coding enough to allow me to contribute to BCI work?

If I want to involve myself in the translation of this work to humans, do I need an MD? And how long do I have -- is it going to be 10 years before humans start using this technology?

zp122

Paul: Excellent questions and thanks for your interest.

In terms of education, what's the best way to move into this field of research? I'm a neuro grad student, but my training is primarily biology. Is self-studied coding enough to allow me to contribute to BCI work?

BMI research requires a lot of insight into mathematics, statistics, signal processing, statistical estimation, and machine learning, so those are the key areas to focus on, in addition to neuroscience and computer programming. You can certainly do a lot of this via self-study, but it can be challenging without some kind of engineering curriculum to help guide your education.

If I want to involve myself in the translation of this work to humans, do I need an MD?

As far as participating in translation, it depends on what role you want to play. If you want to be the one implanting these devices into people, then yes, you need an MD and a neurosurgical residency. However, if you want to be a researcher who collaborates with surgeons, then there is no requirement to have an MD. Right now, I would say that most researchers in pilot clinical trials of brain-machine interfaces are PhDs (or grad students), not MDs.

And how long do I have -- is it going to be 10 years before humans start using this technology?

These devices are not yet approved for therapeutic use by the FDA, but people are already using this technology in pilot clinical trials. There's still lots to do though, so don't hesitate to get further involved.


Thanks for doing this AMA!

I'm interested in what you think possible medical implications of this technology could be. In particular, I think about people with disorders like locked-in syndrome. Obviously, the technology is still far off from being perfect, but do you think that an interface for people with conditions similar to locked-in syndrome could be developed?

Austion66

Paul: Great question. People with locked-in syndrome are definitely an important target population for these types of brain-machine interfaces. Right now, people with locked-in syndrome have very few options for communication, and those that exist can be very slow. An efficient, fast communication interface would significantly improve the quality of life of people who otherwise have very limited ability to communicate. In fact, one of the prior human participants in the BrainGate pilot clinical trial was nearly locked in due to a brainstem stroke. As part of her participation, she was able to type out words and sentences in one study and control a robotic arm in another study.


Hey there, also from Stanford here! I'm wondering what limits the speed at which one can "type"? Is it difficult to actually think the right thought? Also, has anyone in the lab tried this on themselves?

TheWhiteDuke

Paul: Great questions, fellow colleague! The primary limitation at this point is the interface itself. The cursor can only go so fast, and choosing one letter at a time can be pretty limiting. Ways to markedly improve this communication rate would be to incorporate autocompletion algorithms that predict which words are intended (like on your phone), or to change the interface to select entire words at a time instead of letters at a time (as eye-tracking interfaces do). A radically different approach would be to record from a different area of the brain and try to decode phonemes and words all at once instead of controlling a cursor to spell words out one letter at a time. There are some efforts in that direction, but it's still early days.


Thanks for doing this AMA! For a full blown BCI, would you need a single/a few chips in the right region(s) of the brain or a multitude of electrodes all over?

ReasonablyBadass

Jonathan: To get the levels of communication performance in our study, you only need one electrode array (we used the Utah electrode array, with 96 electrodes) implanted into the right motor cortical regions of the brain. (Note, our Monkey J had two electrode arrays, one in "premotor cortex" associated with movement planning, and the other in "primary motor cortex"; Monkey L had one array straddling these areas.) In the pilot clinical trial study by Hochberg, et al. Nature 2012, they also used one electrode array in the motor cortex. But more recent studies, like Collinger, et al., Lancet 2013 and Gilja*, Pandarinath* et al., Nature Medicine 2015 have used two electrode arrays. Having more information always helps, and so with more electrode arrays in the right areas, we would expect communication performance to increase.


First of all, thank you for doing this AMA and spreading the word about Neuroengineering to us.

What do you believe is the future for those in vegetative states? Will advances in medicine allow for those in rehabilitation to 'break the barrier' and go from being in a coma to being somewhat conscious of their surroundings? What advances in this area are the 'hot subjects' of the field that most people might not know about?

DSDresser

Paul: Technology for the scenario you are describing is an active area of research: brain stimulation in the minimally conscious state. It is a fascinating topic in which people who are minimally conscious can regain consciousness when areas of their thalamus are stimulated. It's a complex case that involves both cutting-edge technology and complex ethical questions. We're not experts in this area, though, so our understanding of this technology is limited.


Are the signals to move the cursor mapped to known brain signals for muscle movement (so that the monkey would move the cursor alongside or instead of a certain muscle group) or were the monkeys essentially given a new limb, so to speak? If so, how long did it roughly take for the monkeys to realize what they could do now and how to do it?

Suthek

Jonathan: We implanted the electrode arrays into regions of motor cortex that are related to arm movements so that the neurons we record are informative of reaching behavior. In native hand control, the position of the cursor is tied to exactly where the monkey's hand is. As the monkey moved his arm, and thus the cursor, we listened in on his neural activity. The goal of our decoder algorithm is to approximate this mapping as closely as possible. Therefore, we would want the monkey to control the brain-machine interface cursor as he would his native arm. The monkeys were able to control the brain-machine interface instantly when activated, even on the very first attempt, in part because the decoder algorithms were high-performance.


My 18 year old son has Down Syndrome. His speech is very guttural and difficult to understand despite over 15 years of speech therapy. Could your research possibly help someone like my son? Could he use the neural transmitters to transmit his speech thoughts onto a computer screen or can the neural transmitters be used to activate certain muscles that would enhance or allow him to articulate more clearly?

mamajamala

Paul: Thank you for your comment. I recognize how important this is for you, and I'm sorry to hear about your son's long struggle with speech. To address your question: this technology is still in its infancy and has only recently entered clinical trials with human participants. Right now, all of these participants have some form of paralysis, as that is the inclusion criterion for participation in the current crop of clinical trials. The type of neural prosthesis you are describing would likely record from a brain region other than motor cortex to pick up phonemes or words all at once. Such a prosthesis isn't approved by the FDA yet, but it is being explored in some early studies that are trying to decode speech. This research is still in its very early days.


Hi, I have a few methodological questions (I do chronic ephys in rats).

You don't mention any spike sorting. Did you just threshold and then refrain from sorting, or did you sort as well? Also, how many isolated units did you get for each subject?

I'm also especially interested in whether you included MUA or LFP signals as inputs to your decoder.

Regarding your HMM: Did you do any kind of analysis using the same / a similar model to try to detect any other (non-click related) identifiable state transitions that might ultimately turn out to map onto some other interesting process?

Optrode

Jonathan: Great questions.

You don't mention any spike sorting. Did you just threshold and then refrain from sorting, or did you sort as well? Also, how many isolated units did you get for each subject? I'm also especially interested in whether you included MUA or LFP signals as inputs to your decoder.

We only decoded threshold crossings and did not spike sort. You could imagine how this might not be a good idea -- for example, you could have two neurons on the same electrode, one that fires a lot for reaches to the right and another that fires for reaches to the left; because we count their spikes together, we would see many spikes on that electrode whether the monkey reaches right or left. However, for intracortical brain-machine interfaces, we don't empirically observe a significant performance hit when decoding threshold crossings rather than sorted spikes (e.g., see Chestek et al., J Neural Engineering 2011 and Fraser et al., J Neural Engineering 2009), and we gain an added benefit of robustness. Indeed, some MUA and "hash" (threshold crossings difficult to separate from noise) that would be discarded during spike sorting are informative of reaching behavior and improve decoding. To this end, we (and other groups working on brain-machine interfaces) standardly decode threshold crossings.

We didn't include LFP signals in this specific study, but we have worked on this in the past. In particular, the study by Stavisky and colleagues demonstrated how LFP could be beneficially combined with threshold crossings.
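For readers unfamiliar with the distinction: a threshold crossing is simply any dip of the filtered voltage below a per-electrode threshold, counted without attributing the spike to a particular neuron. A simplified sketch on synthetic data (the -4.5 × RMS threshold is a common convention in this literature, used here as an assumption):

```python
import numpy as np

def threshold_crossings(voltage, rms_multiple=-4.5):
    """Count threshold crossings on one electrode (simplified illustration).

    voltage: 1D array of band-pass-filtered samples from a single channel.
    A crossing is counted at the first sample of each excursion below the
    threshold, with no attempt to sort which neuron produced the spike.
    """
    threshold = rms_multiple * np.sqrt(np.mean(voltage ** 2))
    below = voltage < threshold
    onsets = below & ~np.roll(below, 1)  # True only where an excursion starts
    onsets[0] = below[0]                 # np.roll wraps around; fix first sample
    return int(onsets.sum())

# Synthetic trace: unit-variance noise plus three large negative deflections
# standing in for spikes.
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 1.0, 30000)
trace[[5000, 12000, 25000]] -= 20.0
n = threshold_crossings(trace)
```

In a real rig this counting happens in hardware or firmware on all 96+ channels simultaneously, and the per-bin counts are what the decoder consumes.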

Regarding your HMM: Did you do any kind of analysis using the same / a similar model to try to detect any other (non-click related) identifiable state transitions that might ultimately turn out to map onto some other interesting process?

We did! In our paper on the algorithm used for this study which goes into more detail, we were able to use the HMM to decode signals like whether the monkey was engaged in the task (if not, we would disable the ability to select targets) and if the monkey was reaching slowly or reaching fast. We found that decoding these other states could further increase performance.
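As a toy illustration of the idea (all parameters below are invented for the sketch, not those of the study): an HMM click decoder keeps a running belief over discrete states and updates it each bin with the likelihood of the observed spike count, only issuing a click once the belief is confident:

```python
import math
import numpy as np

# Toy two-state HMM: state 0 = "move", state 1 = "click".
A = np.array([[0.99, 0.01],   # transition probabilities between states
              [0.05, 0.95]])
rates = np.array([3.0, 8.0])  # mean spike count per bin in each state
                              # (Poisson emissions on one hypothetical channel)

def forward_step(belief, spike_count):
    """One HMM filtering update: predict with A, reweight by the likelihood."""
    predicted = belief @ A
    lik = np.array([r ** spike_count * math.exp(-r) / math.factorial(spike_count)
                    for r in rates])
    posterior = predicted * lik
    return posterior / posterior.sum()

belief = np.array([0.99, 0.01])   # start almost surely in "move"
for count in [2, 3, 9, 10, 8]:    # spike counts in successive bins
    belief = forward_step(belief, count)

click = belief[1] > 0.95          # select the target once confident
```

The same machinery extends naturally to extra states like "disengaged" or "fast reach" by enlarging the transition matrix and emission model, which is the spirit of the extensions mentioned above.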


I'm not sure if this AMA is still going on, but nonetheless:

I am an undergraduate student studying neuroscience, and I'm extremely interested in research involving BMI. Having once considered only MD school, I now want to do a combined MD/PhD to study this. In fact, I've spent a lot of time researching how to join labs that focus on this, such as the Nicolelis Lab at Duke.

What advice would you have for someone who wants to get in this field? I'm considering at least one post-bac program between my senior year and applying for medical school. I'd love any advice you might have!

holamiamor

Paul: Great question, and thanks for your interest. BMI research requires a lot of insight into mathematics, statistics, signal processing, statistical estimation, and machine learning, so those are the key areas to focus on, in addition to neuroscience and computer programming. Beyond that, get involved in a lab that works on BMI research; that's the primary way to get experience. Also, be sure to read as many papers from the field as you can. That's how you learn about the challenges in the field and the state of the art in the science and technology. Attend seminars and go to conferences if possible. It takes a lot of effort, but if that's what you want, you should go for it!


Hey There, thank you for taking the time to answer so many questions in detail.

I was wondering, with your algorithms to filter out noise, is this something that can be helped by more processing power, or is it more of a goal to optimize these very complex algorithms by finding faster, yet equivalent, equations? Also, how much of that problem is tackled from a mathematical standpoint as opposed to software? I'd imagine you would be facing some very real issues with which approximations give you accurate feedback.

Also, you mentioned that there are currently multiple competing ways of interfacing with our neural network, yet no current method seems to fulfill all the criteria. Is it possible some hybrid interface might go some way toward closing the gaps in individual interfaces, or would that come with too many additional detriments to be worth pursuing?

Valanthos

I was wondering, with your algorithms to filter out noise, is this something that can be helped by more processing power, or is it more of a goal to optimize these very complex algorithms by finding faster, yet equivalent, equations? Also, how much of that problem is tackled from a mathematical standpoint as opposed to software? I'd imagine you would be facing some very real issues with which approximations give you accurate feedback.

Jonathan: In our current setup, we aren't limited by processing power, although this may become a limitation in the future if decoding algorithms become more complex. Our major limitations right now are algorithmic. The neural activity we record can be very noisy, and we need to make decodes very quickly. The advances that allowed us to achieve typing rates of up to 12 wpm in monkeys were thus primarily mathematical. In particular, the algorithms we used in this study are discussed in further detail in Gilja*, Nuyujukian*, et al., Nature Neuroscience 2012 and Kao*, Nuyujukian*, et al. IEEE TBME 2016.

Also, you mentioned that there are currently multiple competing ways of interfacing with our neural network, yet no current method seems to fulfill all the criteria. Is it possible some hybrid interface might go some way toward closing the gaps in individual interfaces, or would that come with too many additional detriments to be worth pursuing?

Certainly. We ultimately want to provide the highest level of control to brain-machine interface users while minimizing risk to the user. This is an area of active research, and it may be that some hybrid interface can bring us closer to this goal.


Thanks for doing this AMA! Fascinating work.

1) What location in the brain are you working with? (I presume primary motor cortex.)

2) Would you say your algorithms are roughly analogous to the roles of the cerebellum and spinal cord in adjusting the cortex's motor output?

lets_trade_pikmin

Jonathan: You're welcome!

What location in the brain are you working with? (I presume primary motor cortex.)

We record from the primary motor cortex and caudal aspects of the dorsal premotor cortex (which is implicated in planning movements). Monkey J had two electrode arrays, one in primary motor cortex and one in dorsal premotor cortex, while Monkey L had one electrode array straddling these two areas.

Would you say your algorithms are roughly analogous to the roles of the cerebellum and spinal cord in adjusting the cortex's motor output?

This is a great question. Insofar as the algorithms translate spiking neural activity from motor cortical regions into hand kinematics, we can say that we are attempting to approximate the neural circuitry between cortex and the muscles. However, this is a limited approximation in that it is a linear approximation from a limited view of the brain (we are only measuring 100 out of 100 million neurons in motor cortex) to a limited view of the body (we are only measuring endpoint position and velocity, as opposed to every muscle in the arm). An interesting avenue of future research, though, is to use brain-machine interfaces for studying questions related to motor control and learning. The reason brain-machine interfaces may be helpful is exactly because we know the mapping from neurons to kinematic output -- it's our decoder algorithm. This allows the experimenter to ask precise questions that might otherwise be difficult without having also recorded from cerebellum and the spinal cord. An example of great work along these lines is a recent study by Sadtler and colleagues where they used a brain-machine interface to study motor learning.


What if you leave out the green dot after the monkeys have been trained with it and leave only the yellow dots? Will the monkeys produce text similar to a Markov chain or a character-level recurrent neural network, or will it be as random as if an untrained monkey had done it?

KarlKastor

Paul: Great question. Without the prompting of the green target, the monkey will disengage from the game. At that point, the cursor will either drift to one corner of the screen and stay there, or randomly traverse the workspace, transmitting characters pseudo-randomly.


I am a game developer and this sort of technology is very interesting to me, especially considering what it could do for video games. Is there any BMI that I could get my hands on or are they only available to people doing research?

EvilCodeMonkey

Paul: Great question. The types of BMIs that we used involve neurosurgical implantation, so they are not appropriate for use with games by the general public. However, a related technology called electroencephalography (EEG) records brain signals from the surface of the head, and other researchers in our community use it as the basis for communication neural prostheses. Consumer-grade EEG systems can be relatively inexpensive and are available for purchase by the general public. Examples of such manufacturers are Emotiv and NeuroSky.


License

This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.