PLOS Science Wednesday: Hi Reddit, we're Eric and Konrad, and our article in PLOS CompBio reveals flaws in the typical analysis methods used in neuroscience, and suggests improvements for the research community moving forward -- Ask Us Anything!

Abstract

Hi Reddit!

We are Eric Jonas (a postdoctoral researcher at UC Berkeley in Electrical Engineering and Computer Science) and Konrad Kording (Professor at Northwestern and RIC). Our research focuses on trying to understand how the neurons in the brain compute and give rise to behavior. A lot of what we do is come up with mathematical techniques to understand neural data and work on new methods to acquire this data.

We recently published a paper titled “Could a neuroscientist understand a microprocessor?” that tried to address whether the analysis methods we frequently use in neuroscience are likely to provide the level of “understanding” we seek. One of the biggest challenges facing neuroscience is that we don’t actually know ahead of time how neural systems work, so validating analysis techniques can be a catch-22. We attempted to use these techniques on a microprocessor -- a system we understand really well -- to see if we could make sense of how it works. This ended up being quite difficult, and we suggest ways that we might move forward as a community to make sure our analysis methods really do what we hope they will.

Our study has also been written up in the popular press. Read the articles in Ars Technica and The Economist to learn more.

We will be answering your questions at 1pm ET -- Ask Us Anything! Follow Eric on Twitter at @stochastician and Konrad at @kordinglab.

What's your opinion regarding recent research suggesting that as much as 70% of fMRI studies could be false positives, a major concern for behavioral neuroscience?

Source: http://pnas.org/content/113/28/7900.full

kage52124

Konrad: Outstanding question. Neuroscience, just like psychology and other disciplines, is having a statistical crisis where papers often do not replicate (see the Open Science project). But isn’t it amazing that we have such a strong push now towards fixing these problems? On the other hand, it is hard to do science if you believe that about half of the research may not replicate. What do you believe then? We used to believe that conceptual replication could save us. Kahneman describes it best at: https://replicationindex.wordpress.com/2017/02/02/reconstruction-of-a-train-wreck-how-priming-research-went-of-the-rails/comment-page-1/#comment-1454 We ourselves have unsuccessfully tried to replicate important work (e.g. here).
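
The statistical worry behind those false-positive numbers is, at its core, a multiple-comparisons problem. The sketch below is only an illustration, not the Eklund et al. cluster-level analysis: it simulates many null "studies", each with hundreds of voxel-wise tests and no true effect anywhere, and counts how often an uncorrected threshold produces at least one "significant" voxel compared to a Bonferroni-corrected threshold. The study/voxel/subject counts are arbitrary choices for the demo.

```python
# Minimal simulation of family-wise false positives under the null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_voxels, n_subjects, alpha = 1000, 200, 20, 0.05

uncorrected_hits = 0
bonferroni_hits = 0
for _ in range(n_studies):
    # Null data: subject-level "activations" with zero true effect anywhere.
    data = rng.normal(size=(n_subjects, n_voxels))
    res = stats.ttest_1samp(data, 0.0, axis=0)
    uncorrected_hits += (res.pvalue < alpha).any()            # any voxel "significant"?
    bonferroni_hits += (res.pvalue < alpha / n_voxels).any()  # corrected threshold

print(f"Studies with >=1 false positive (uncorrected): {uncorrected_hits / n_studies:.2f}")
print(f"Studies with >=1 false positive (Bonferroni):  {bonferroni_hits / n_studies:.2f}")
```

With 200 independent null tests per study, essentially every study reports at least one uncorrected false positive, while the corrected rate stays near 5%. Cluster-level corrections in fMRI aim at the same family-wise control; the Eklund result concerns those corrections being mis-calibrated in practice.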


Dear OPs,

I'm the "algorithm dude" for a Group of neuroscientists. I've been recently forced to work about "connectomics". Lots of inverse covariance analysis between regions of the brain. And I can't get what would the meaning of this be to save my life.

Also, regarding things like VBM... I don't understand what it's trying to prove to begin with.

So... what do you think about this? Why do neuroscientists love advanced algorithms so much when they have no understandable link to biology to begin with?

lucaxx85

Konrad: When people say connectomics, it feels causal. A connection, after all, is a wire from one place to another (and arguably EM connectomics gets at that: http://www.nature.com/articles/ncomms8923?WT.ec_id=NCOMMS-20150805&spMailingID=49252352&spUserID=NDkwMjM5MzE2MwS2&spJobID=740779497&spReportId=NzQwNzc5NDk3S0). Anyone speaking plain English thinks of connectomics in causal terms.

However, as we know, it is very hard to get at causality from observational data, i.e. it is crazy hard to understand causality in the brain if you just record. In fact, the problem is so bad that economists write amazing papers about it, and they have worked on better methods (see “taking the ‘con’ out of econometrics,” discussed in http://www.nber.org/papers/w15794).

So statisticians applying connectivity analyses to typical fMRI, LFP, MEG, or spike data are not even pretending to measure causal connections. Connections are defined as certain kinds of statistical regularities, and those regularities do not need to mean anything about the actual causality in the system. From my own work we have some evidence that the problem is complicated: we wrote a review of functional connectivity methods (I want to note that back then I still loved the idea, maybe too much), we found that Granger causality should not work for slow signals, we found that functional connectivity on spikes from simulated neurons fails, and obviously we fail at understanding the microprocessor.

So I think we have a serious social problem there. Everyone working on the statistics (functional connectivity) knows that they are not talking about causal connections and is happy about the popularity of the field. Everyone outside the field, in particular the experimentalists, is very happy interpreting the results as causal. In fact, I believe that every single experimentalist I have ever talked functional connectomics with wants to interpret the results as causal. I am planning to write a review paper soon addressing this mismatch.
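
To make the "statistical regularity" point concrete, here is a minimal, hypothetical simulation (not any of the analyses referenced above): two "regions" that never influence each other, but share an unrecorded common input, end up looking strongly "connected" in a correlation-based functional connectivity analysis. The driver strengths are arbitrary choices for the demo.

```python
# A "functional connection" that is only a statistical regularity.
import numpy as np

rng = np.random.default_rng(1)
T = 5000
z = rng.normal(size=T)                  # hidden common driver (never recorded)
x = 0.8 * z + 0.6 * rng.normal(size=T)  # region 1: driven by z only
y = 0.8 * z + 0.6 * rng.normal(size=T)  # region 2: driven by z only

r = np.corrcoef(x, y)[0, 1]
print(f"correlation(x, y) = {r:.2f}")   # strongly 'connected' (~0.64), yet no causal link
```

Any method that draws an edge from the observed statistical dependence between x and y will report a connection here, even though the true causal graph has none; only interventions, or strong additional assumptions, can rule the common-input explanation out.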


As a neuroscientist AND an electrical engineer myself, your article “Could a neuroscientist understand a microprocessor?” was really eye-opening. When looking through the lens of engineering, it is amazing that we're using such rudimentary tools to study the brain.

At the same time, we don't want to feel paralyzed when faced with such a complex task: sure, sticking an electrode in the middle of the mess and trying to relate the measured spiking activity to behavior sounds silly, but a tiny step is better than no step -- even if the goal is a full marathon.

I'm curious to hear about what the authors think are reasonable ways to widen our steps, though. How can we make quicker progress in reverse engineering the brain? What are reasonable approaches to understand deeper questions in cognitive (neuro)sciences? What is the role of theory, and how to go about building theories that are actually useful/testable?

TL;DR: PLEASE HELP!!!!

brainlogist

Eric: We come from similar backgrounds, and this has basically been my reaction as well. One thing that the paper really made clear to me is that, while the “acquire all the data and hope the algorithms can figure it out” approach seems doomed to failure, we really do need to step up our acquisition technology. These days I spend a lot of my time working on computational imaging for neural data, and Konrad similarly has various high-throughput acquisition projects. So we still need big, “authoritative” data -- it’s surprising how much disagreement there still is about fundamental neuroanatomy!

I think we also need to try and tighten the experimental loop so we can more quickly ask questions and reject erroneous hypotheses. Techniques like fiber photometry are not “big data” but they do let you very rapidly ask very specific questions of neural systems, especially of deeper structures which might be easier to understand.

And I think that anyone doing systems neuroscience and above should study some computer science, especially theory of computation. I had one neuroscientist ask in a talk how I could possibly make the comparison to a microprocessor, considering the brain did fundamentally different things like emotion and affect. He did not have any patience for my arguments that those also were a form of computation.

Konrad: I think it could help to focus more on ideas than on their endless replication. Measuring tuning curves is great. Even calculating Granger causality is great. But scaling them up (thousands of labs doing basically the same thing in slightly different settings) before we understand what they mean is a problem. Same thing for neurotechnology: developing methods that acquire large datasets about anatomy or physiology is great; applying one of them to the whole brain is less interesting.


I work in cognitive decision neuroscience, so a lot of our methods are under fire, so to speak, especially with the publicity of the cluster failure paper from Eklund et al. last year. Recently I've become quite interested in really assessing how our methodologies and analyses can be improved by taking criticisms about their inferential power to heart.

I'm inclined to think that the tools we have for collecting data are the ones we have, and that's that for now. In that case, we just have to try and use the best tools available for analyzing the data we collect in a way that hopefully can produce real theoretical advances, which is where my fascination for papers like this one (and dynamical systems modeling approaches) comes in.

My question is: for cognitivists using tools like fMRI along with behavioral manipulations, what would you say is the best way researchers like me can do better science, now, while more advanced data acquisition tools (which may or may not be better at probing for real understanding of functional, computational principles) are still in development?

calling_you_dude

Konrad: I think there are many ways in which people like you can make science better. Share your datasets. Work on clean experimental manipulations that really separate one task from the others. Do internal replications, ideally pre-registered. But I think the most important thing is to try to think about the brain differently from other scientists.


For those who don't know, Konrad Kording is interested not only in neuroscience, but also in what he calls the "science of science".

Konrad: Over the years that you've spent reading and researching the science of science, what insights do you consider most useful for a young scientist (i.e. a young postdoc) in establishing a successful career? Please list your top tips for us, mere mortals dreaming of one day building a career as successful as yours!

brainlogist

Konrad: Productivity is a far better predictor of future high impact than high impact is of future productivity. So I believe young scientists should write many papers and strive to make them better, instead of trying to write the best paper immediately. You need to fail to become good at it, because it is all about the skills.

What else? Become good at writing (I tried to write some hints here: Ten Simple Rules for Structuring Papers). Acquire data skills, because research without data problems is going the way of the dodo. Find a supervisor who acts more like a coach than a big boss. Try to develop a unique way of thinking. Leave academia if it is not great for you; industry is a surprisingly nice place.


From your perspective/respective studies, what could be some of the biggest misconceptions about the brain in neuroscience today?

dthemand

Konrad: Many of them. Let me try and list a few obvious ones: (1) that we are very advanced at understanding how the brain works; (2) that attaching an allusion to neuroscientific ideas strengthens a behavioral finding; (3) that functional connectivity / Granger causality / DCM measure something of causal meaning, e.g. actual interactions; (4) that having a list of all the brain areas and what leads to activity in them is a big step towards understanding the brain (it enables other things, but is not directly an understanding); (5) that the brain is simple; (6) that we have good theories of it.


Thanks for the AMA! I'm writing a sci-fi novel (I know, I apologize) and your area of expertise is very closely related to one of the main themes in the book.

From my basic understanding, when you learn something new you are forming a chain between a number of neurons. When you access that memory each neuron fires in turn until the signal reaches the desired location. The more you study/practice the more it enhances that connection. If you fail to use that connection, it will eventually die off.

Is there anything which increases the speed at which those connections are made? Steroids or other drugs?

On a semi-related note, if there happened to be such a drug would there be any concerns about too many of these connections being made? Could it be tied to autism at all?

JMace

Konrad: The description is a bit cartoonish. But when it comes to the novel, I would argue that tDCS has often been shown to accelerate learning. Also, companies like Kernel and Paradromics are working towards interfacing with brains, which could enable fast learning. Full disclosure: I consult for both of them. However, if there were a way of just speeding up learning with no downside, the brain would for sure have evolved it. Incidentally, we show that learning slowly may be optimal in many cases here. So if there are ways of letting humans learn faster, there is probably a price to be paid (e.g. very slow forgetting of everything you ever learned in your life).


Hi, and thanks for doing this AMA - it was a really interesting paper! I recently read it together with a paper in Neuron by Krakauer et al. about how an understanding of behaviour is the starting point, not the result, of studying the mechanics of the system. In this case, the microprocessor can produce multiple different 'behaviours', and the same behaviour can be reproduced by different microprocessors.

If this is the case, how can we best direct mechanistic research on neural computations if we don't yet understand the relevant behaviour?

avastassembly

Eric: I think behavior is absolutely crucial, and love the Krakauer paper. In our paper we make reference to David Marr's levels of analysis for understanding the brain:

  • Computational: At this level we describe and specify the problems we are faced with in a generic manner, but do not say how these problems are to be solved. Do we aim to learn a function? Do we wish to estimate uncertainty?
  • Algorithmic: This level forms a bridge between the computational and implementational levels, describing how the identified computational problems can be solved. It is here that Bayesian and machine learning methods find their place.
  • Implementational: The physical substrate or mechanism, and its organisation, in which computation is performed. This could be biological in the case of neurons and synapses, or in silicon using transistors, etc.

(Text from the linked blog post.)

I think understanding the computational level -- that is, formal descriptions of behavior and the underlying computation that the behavior is performing -- is too often made a second-class citizen in neuroscience. For primary sensory physiology (early audition and vision) this might be ok, but for more complex behavioral processes it's quite problematic. But I think the computational cognitive science approaches of Josh Tenenbaum (MIT) and Tom Griffiths (Berkeley) might point us in the right direction.


This paper is so unlike other neuroscience papers that I've read, and it seems very subversive to many long-held beliefs in neuroscience (lesion studies, connectomics, functional connectivity); I'm pretty convinced from your paper that these ideas need to be reworked. What do we do now? In other words, you correctly pointed out lots of problems, but what do you think the solutions are now?

peterclavin69

Konrad: Develop meaningful theories that can explain real-world performance. Build experiments that isolate one behavior. Build data analysis methods that do justice to complexity. All these issues have great people working on them.


What theory best describes how our brain learns? I have read The Emotion Machine and I Am a Strange Loop; both seem to consider medium-unspecific "consciousness" a type of layered feedback loop with the environment. Which books/scholars would you recommend to learn more about how our brains learn?

user1618033989

Konrad: None of our theories are any good at predicting how the brain learns. Seriously. The best we have for describing actual learning are probably theories that do not even allude to brains: Bayesian theories that simply assume that people are good at solving their problems are pretty good. But when it comes to learning in the brain, I recently got very excited about using ideas from deep learning as components for a model of neural learning. We recently wrote a little paper about that.


Thanks for doing the AMA! I'm a grad student working in precisely this area of computational neuroscience/algorithmic analysis of brain imaging data.

I really enjoyed the paper; it tied up a lot of the problems that we as neuroscientists know are there into a neat and accessible package, and it was a pleasure to read.

I have two sort of linked questions.

1) Given the various shortcomings of the imaging methods that we have at our disposal, what are your suggestions for accessing the type of information that we are interested in (i.e. how information is encoded in the brain from stimulus to response)?

2) The analysis of a microprocessor as a 'brain' was an interesting choice of vehicle for your message, but especially in terms of lesion studies, is it truly fully analogous to the model organisms that are traditionally studied? Microprocessors have limited ranges of behavior that are incredibly constrained as a function of their purpose. On the other hand, if one were to study what effect lesions have on very specific behaviors in, say, non-human primates, then a much wider range of behaviors is available for study. Are we not accessing the function of some specific neural pathway or area when we do these types of comparative lesion studies and see that lesioning this area causes defects in reaching behavior, but not in grasping behavior, etc.?

Bulgarin

Eric: I started off in a tetrode electrophysiology lab where, at the time, we were getting >100 simultaneous cells in the hippocampus -- this seemed like an incredible number! Now, with two-photon, light-sheet, and other technologies, it feels like we can do much better. But can we? Deep structures in the brain, which might be more amenable to computational analysis, are still hard to record from. Many of these imaging techniques can only pull out superficial cortical layers, although that's improving as well. And of course, most of our optical techniques are looking at calcium signals, not actual membrane voltage. I think we need to go deeper, both in terms of deeper structures and in terms of deeper cortical layers. Some of our recent microscopy work tries to move in this direction, but it's difficult!

Conceptually, I think a big problem is thinking of these things as an encoding problem -- from stimulus to response, without algorithmic models of the underlying behavioral task. You say that a microprocessor has a limited range of behavior, but microprocessors are Turing complete, so they could do anything! It's true that the tiny microprocessor we examined had limited complexity and a limited set of behaviors -- in that sense, it may have been much more like C. elegans than a mouse or a human. I think the biggest problem in lesioning more complex organisms is the incredible plasticity and functional recovery we see in neural systems -- for which there is no known analog in the microprocessor.


Thanks for the paper -- one of the most novel and inspiring I have read in a while. I am not a neuroscientist, but a mathematician and entrepreneur. I currently work on building a "Virtual Research Environment" to enable collaborative work among research mathematicians. What would you say is the tool that is most missing to facilitate your everyday work? To bring discoveries in neuroscience to the "real world" the fastest? Thanks for your answers!

pdehaye

Konrad: I think our most readily solvable problem right now is to help biologists see the usefulness of preprint servers. I am now trying to put all my research immediately onto a server (bioRxiv or arXiv). Personally, I am not sure if we need more software. We might need a system to help us discover what we should read. What do you think the virtual research environment could improve?


What do you think about Topological Data Analysis as a tool for neuroscience? Do you have any opinion on where this is going?

pdehaye

Konrad: Multiple people use this term to mean slightly different things. I think using persistent homology to understand data can be an interesting step. However, I think that approach throws out the baby with the bathwater. A sphere has the same topology as a cube, and yet, if I have neural data, I would want to know which of the two it is. As such, I think that topological data analysis is an interesting method, but that other methods, e.g. structure discovery, are more promising.
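
The sphere-versus-cube point can be illustrated with a small, hypothetical sketch (not from the answer above): the two surfaces are topologically identical, so topology-only summaries such as Betti numbers cannot separate them, yet a basic geometric statistic on sampled points tells them apart instantly. The sample size and side length are arbitrary choices for the demo.

```python
# Sample points on a unit sphere and on a cube surface, then compare a
# simple geometric summary (distance to centroid) that topology ignores.
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Points on the unit sphere: normalize Gaussian samples.
sphere = rng.normal(size=(n, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)

# Points on the surface of a cube with side 2 centered at the origin:
# pick a face, pin that coordinate to +/-1, draw the other two uniformly.
cube = rng.uniform(-1, 1, size=(n, 3))
face = rng.integers(0, 3, size=n)        # which axis is pinned
sign = rng.choice([-1.0, 1.0], size=n)   # which of the two opposite faces
cube[np.arange(n), face] = sign

for name, pts in [("sphere", sphere), ("cube surface", cube)]:
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)  # distance to centroid
    print(f"{name:12s}  mean dist {d.mean():.2f}, std {d.std():.3f}")
# The sphere's distances are all ~1 (std near 0); the cube's vary widely,
# even though the two surfaces have the same topology.
```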


A very interesting paper indeed. Since I come from an engineering background, it is hard for me to believe in experimental/behavioral studies where the authors try to draw some correlation or rule by choosing the tuning parameters that suit their story. Not to mention that with current statistical tools, one can find correlations in all kinds of data. I find it more plausible to understand the underlying physical/chemical laws that play a role in observable mechanisms, such as connectomics. My question is: what, in your opinion, will help advance this field in the coming decade? Will it be next-gen EM methods (like FIB-SEM-ATUM), or will it draw on advancements in computational power and AI? Thanks.

seesawtron

Eric: I think that the rate of improvement in computational power is slowing (RIP Moore's law), and I think a lot of contemporary "advances in AI" are actually really overhyped -- we're getting much better at visual processing for some tasks, but in other areas things are much slower-going (I work in an ML group at Berkeley).

I think connectomics is incredibly important because we really need authoritative neuroanatomy. I think there may be many early systems (sensory, motor) where we have many decades of theories and we just need the anatomy to help us pick which ones are right. But so many of the challenges with FIB-SEM actually boil down to data postprocessing -- how are we going to turn these petabytes of voxels into actual connectivity graphs without millions of man-years of human annotation? There I hope that advances in both compute and ML will make things better.

I'm actually most enthusiastic about techniques that let you test specific behavioral hypotheses with cell-type-specific targeting, like fiber photometry, in short periods of time (experiments that take hours, not weeks).


Konrad: EM techniques are awesome. However, so far, we know almost nothing about their relevance. How much of neural activity could we predict if we had all the connectomics? Unknown. But that is not any worse than the tuning-curve mapping approaches; what we learn there we also do not know. So we will have to see how we can use neuroscience to improve AI. My AI friends believe that we will solve strong AI long before we will solve the brain.


What sort of feedback/criticism have you received from the neuroscience analysis community (people like Russell Poldrack)?

ZachAttackonTitan

Eric: The feedback has ranged from “I thought about doing this experiment years ago!” (high praise) to “this paper has simply shown you are a bad experimentalist” (Sad!). Many of the members of the community I have talked to agree with our criticism of methods A, B, and D, but think we are unfair to method C -- where C happens to be the method they use. That always makes me laugh!

Neuroscientists study neural systems at different levels. Some study individual molecules inside cells, some study whole cells, some study small networks of cells, some study brain regions, and some study the entire brain. Often, we pick a level to study because we think the details below are less important, and everything above is too hand-wavy or uninterpretable with current methods and knowledge. So there is a lot of internal criticism in the field to start with.


License

This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.