Science AMA Series: I'm Sam Arbesman, a complexity scientist, Scientist in Residence at a venture capital firm, and the author of Overcomplicated, a book which examines technologies that are too complex to understand. AMA!

Abstract

Hi reddit!

I'm Sam Arbesman, Scientist in Residence at Lux Capital, a venture capital firm investing in emerging science and technology startups, where I help explore what the future of science and tech holds, make sure our firm is at the forefront of these trends, and help the startups we invest in stay ahead of the curve.

I'm the author of Overcomplicated: Technology at the Limits of Comprehension, which is about how our technologies have become so complex that we don't really understand them anymore (even if you are one of the experts who made them), and what that means for us as a society. I'm currently thinking a lot about this topic and how we can meet our technologies halfway, even if we can never fully understand them. I'm also the author of The Half-Life of Facts, which examines how knowledge changes over time.

My training is in complexity science, computational biology, and applied mathematics (I have a PhD in computational biology), and I use the ideas of complex systems to examine how science and technology change over time and what this means for society. This involves both academic research as well as popular writing, the latter of which has appeared in such places as The New York Times, the Wall Street Journal, and Wired, where I was previously a contributing writer.

I'll be back at 11 am EST (8 am PST, 4 pm UTC) to answer your questions. Ask me anything!

Update: Hi everyone! I had a blast interacting with everyone here, answering questions, and just being part of this fantastic conversation. Thanks so much! Time for me to sign out, but I'll try to check back and might be able to answer a few more.

Hi Sam -

A common problem with evolutionary algorithms and with neural net training is that once your model has found a solution, it is sometimes difficult to understand HOW the solution works. In the case of consumer technology, it seems like older generations lose understanding until the tech has become "magical" (e.g. teaching grandparents how to use their TV or phone).

In your estimation, how much of this "magic" is destined to become part of mainstream scientific research? How can we meaningfully forge ahead when we've lost the ability to interpret our models of the world because the models themselves have become too complex?

iandennismiller

Your intuition is correct: making scientific discoveries we can never fully understand (at least at some level) is going to happen more and more. It has already begun to occur, with machine learning techniques popping out answers that leave us with little insight. Some have even spoken of this as the "end of insight." (I've actually written an essay on this topic.)

But I think part of the concern is that we often assume the world must be amenable to only very simple mathematical models. So far, we have had a good run with simple equations that explain a lot. But it's not entirely clear how long this will keep working. Maybe we have plucked all the low-hanging scientific-theory fruit and are now left with only the more complex, less intuitive models. If so, this "magic" is going to happen more and more. (For more on this, see another essay.)

In essence, then, forging ahead means being comfortable with some sort of machine-human partnership. Rather than despairing when we cannot understand a model at every level, we may have to accept a reduced understanding. But we can still build the machines that provide the models, so progress is certainly still possible. Just as we have always used tools and instruments for scientific advancement, these are simply another kind, even if they feel quite different.


Sam- I'm a computational biology/ biophysics PhD student interested in non-traditional career paths in data science. Can I ask what helped you the most in developing your career in industry (IE great github page, interesting publications, etc?) Thanks!

divergentdata

I spent my time in research working on projects in lots of different domains, from understanding cities to thinking about the pace of scientific discovery. Getting a broad array of experiences can help hone your skills for working with data in lots of different areas.

Another thing I did was a lot of writing for the public (blogging, essays, and even books). This gave me experience interacting with a wide variety of groups, showed some of what I was working on and thinking about to a wider audience, and helped me get involved in the world beyond academia.


Can you give an example of a technology that is too complex for us to really understand?

extremememory

Certainly many of our machine learning technologies are too complex to understand, with too many interacting variables and parameters to hold in our heads properly. We understand the output and the shape of the algorithm that underlies the technique, but we might never understand, in all of its details, how it arrived at that output. We also see this in cars running on millions of lines of code (much of it spaghetti code): when they fail, we can point to their massive complexity, but we are unsure of the details of what went wrong.

And frankly, most technologies—from desktop software to medical devices to kitchen appliances to the code in our cars—are too large and too interconnected to ever fully comprehend: what our brains are good at handling is very different from the massive, nonlinear, and interconnected systems that we build. Unfortunately, we only discover this failure to understand when a bug arises, which exposes the gap between how we thought something worked and how it actually operates.


Hi Sam,

I'm a high school teacher and I've noticed that my students are often highly stressed by the concept of 'the future'. Our school is in a rural area, our internet is unreliable, and the kids have never seen a 3D printer, much less meaningfully engaged with recent technologies.

Do you have any advice for them? Or me as their teacher?

sugarfreecummybear

This is a great question, and the other responses are fantastic. Ultimately, it's about instilling in your students a comfort with and love of change, so they can adapt to whatever the future holds. Certainly bringing in cutting-edge tech will help. Make sure they are comfortable with computational thinking and basic programming. But also get them reading about scientific discoveries in the news, or have them read articles that discuss recent science and technology trends.

And don't forget science fiction! Have your students read lots of stories about the future, both the technologies they predict and what society might look like, so they can begin to envision concrete scenarios as opposed to just a murky and scary "future." And share with them lots of older science fiction, too, to give them a sense of what people thought the future was going to be like, as well as the many times everybody got it wildly wrong. In the end, it's hard to know the details of what the future will be, or even its broad shape, so your students shouldn't feel like they need to know either. As long as they feel they can adapt to whatever comes next, they will be prepared for the future.


What are your thoughts on Mixed Reality?

I'm a /r/futurology moderator and I'm really surprised at the pace at which augmented reality technology is emerging. I mean, look at Pokemon GO.

Magic Leap is promising seamless mixed reality for the mainstream by 2020. It's hard to grasp just how it'll affect the world, since it'll enhance our daily lives at a fundamentally sociological level.

I was thinking of how modern culture sprang up in the early 20th century. I feel like we're about to see a futurist culture spring up in the early 21st century and it'll be a hyper-connected one.

Chispy

Mixed reality is fascinating. Very practically, it will reduce the number of screens that are all around us, and it will also allow us to overlay information on top of everything we experience. This has implications for pretty much everything, from our decision-making to being able to better fix our appliances.


The second half of this essay considers the idea that, to make problems manageable, humans break them down into chunks that are discrete, transferable, and repeatable. This kind of problem solving (what we could call "logical problem solving") allows our brains to solve larger problems than would otherwise be possible, either by ourselves or in groups.

But that would also imply that there are ways to solve problems without those limitations, and likely that means there are better ways to solve problems.

If we're already at the point where our technology is at the limits (or maybe over the limit) of comprehension, does it make sense to abandon "logical" design and explore other types of design? Genetic algorithms for example?

PM_ME_UR_Definitions

Yes! Using other techniques, such as genetic algorithms or neural networks, opens up the set of possible styles of solutions, even if they are hard to understand, or are the kinds of solutions that a human brain might never come up with on its own. That being said, being able to understand a solution does offer many benefits, especially for debugging.
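
To make the idea concrete, here is a minimal genetic-algorithm sketch in Python (the fitness function, genome encoding, and parameters are all made up for illustration). Notice that the evolved "solution" is just a bitstring that happens to score well; nothing in the process explains why it works, which is exactly the trade-off described above.

```python
import random

# Minimal genetic-algorithm sketch (illustrative only): evolve bitstrings
# toward a made-up fitness function. The "solution" that emerges is a raw
# bitstring; nothing in the process explains *why* it scores well.

GENOME_LEN = 32
POP_SIZE = 50
GENERATIONS = 100


def fitness(genome):
    # Hypothetical black-box objective: reward alternating bits.
    return sum(1 for i in range(1, GENOME_LEN) if genome[i] != genome[i - 1])


def mutate(genome, rate=0.02):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]


def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]


population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Keep the fitter half, then refill the population with mutated offspring.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", fitness(best))
print("best genome: ", "".join(map(str, best)))
```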


Potentially stupid question, but if even the experts who made them don't really understand them anymore, how are things improved on?

apophis-pegasus

Understanding is not really a binary situation. You can still understand a system partially and improve upon it. This might result in unexpected consequences, but that's part of the iterative approach to understanding something. You find a bug or something you don't fully comprehend, learn from it to better understand the technology, and repeat.


Do you think that there is an upper limit in complexity for intentional design? Discarding the hypothesis of "Intelligent Design," the universe seems to point to some interesting tools for the self-assembly of very complex systems.

Inducedvortex

I think any field where the space of potential solutions is too large to ever explore manually would have a limit for intentional design. To make progress in these areas means either working in concert with a machine (e.g. software that suggests potential designs), or having technology that is given design constraints and then tries to generate a potential solution algorithmically, sifting through the huge number of possibilities.
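
As a rough illustration of that second approach, here is a small generate-and-test sketch in Python (the "design," constraints, and scoring function are all invented for the example): the machine sifts through a space far too large to explore by hand, keeping only candidates that satisfy the stated constraints and scoring the survivors.

```python
import random

# Hypothetical generate-and-test sketch: sample a large design space,
# discard candidates that violate the constraints, and keep the best of
# what remains. All numbers and names below are made up for illustration.

random.seed(1)


def random_design():
    # A made-up "design": a width, a height, and a material thickness.
    return {"width": random.uniform(1, 10),
            "height": random.uniform(1, 10),
            "thickness": random.uniform(0.1, 1.0)}


def satisfies_constraints(d):
    # Illustrative constraints: bounded footprint, minimum stiffness proxy.
    return d["width"] * d["height"] <= 40 and d["thickness"] * d["height"] >= 1.0


def score(d):
    # Illustrative objective: maximize enclosed area per unit of material.
    return (d["width"] * d["height"]) / d["thickness"]


candidates = (random_design() for _ in range(100_000))
feasible = (d for d in candidates if satisfies_constraints(d))
best = max(feasible, key=score)
print(best)
```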


Sam, I appreciate your time spent doing this AMA. I'm wondering what the best approach is for helping the elderly understand how to use their technology.

I have an 88-year-old grandparent who desperately wants to learn how to use both his computer and Android tablet. I often sit down with him and give him one-on-ones; however, I feel that he misses out on the most basic of concepts, and I often have to patiently repeat myself several times over several months. I've come to see that tech is a language that I'm fluent in. What is the best approach for me to help him become fluent?

mattv8

I love this question! I'm not sure I have the best answer, but here's one way to think about it. I've often found that tech fluency is related to a comfort with figuring things out and messing around with a technology. When people (no matter the age) are uncomfortable with a technology, they tend to rely on reading the manual, or an explicit recipe for accomplishing something.

I would recommend first focusing on making it clear that messing around with something is just fine. It's okay to get lost in a technology. Or do something wrong. That's how you learn a language.

But I would also try to make some of the technological language more explicit: what a minimize button does, the kinds of terms you can use to search effectively, etc. Don't try to teach how to use the entire Internet or a whole tablet at once. Focus on one or two apps or websites, and get him comfortable with those first, learning the conventions and the details, so he will then be ready to mess around with them, and even make mistakes.


How can a technology be too complicated to understand? Do you mean something beyond the layman's ability to understand? I'm confused how a technology could be developed if nobody knew how it worked.

The only thing I can think of is accidental discoveries, like something that works, but people don't know why or how. Is that what you mean?

Drak3

Great discussion here so far. I'm essentially focusing on technologies that are powerful and that work (at least most of the time), but whose details we don't fully understand. And yes, this is sometimes true even if you are one of the people involved in their construction.

How can this happen? Most technologies are not built by a single person at one time. Rather, we end up with technologies that are too complicated for at least two reasons: technologies involve massive specialization, with different people working on different components, and technologies grow and accrete bits and pieces over time.

Taking specialization first: often, this is fine, as modularity and abstraction allow large and sophisticated systems to be constructed, with each person only needing to know about their one level or area. But in many cases, the components are not truly distinct, abstraction breaks down, and an engineer might need to know about something completely unrelated to what they are working on. We get spaghetti code, or just messy interactions, which mean that while we thought we only needed to know about one subsystem, we actually need to know a lot more. Specialized knowledge ends up not being enough to understand the system, and yet at the same time it's not really possible to understand all the different domains either. So a complete understanding ends up eluding us.

And then there's the evolution of a system over time. While we would like to think that any technology starts from scratch and is completely logical, that's not really true. Often, technologies are built on what came before. In case after case, we have messy legacy code, such as the IRS systems developed in the 1960s that are still in use. The people who truly understand these parts might be long retired, or even dead. And so that's another way that our understanding breaks down.

All of this taken together, combined with the fact that a system might be interconnected in highly nonlinear ways, means that in many cases, once a technology becomes large enough and interconnected enough, it is too messy and complex to ever fully understand.

And when a technology is evolved, or otherwise constructed with some machine learning technique (as someone else posted), we end up with systems that have little reason at all to accord with how we think about technology. And so we can often end up with technologies that we don't understand.


Good morning! What are your thoughts on meta-technologies or enabling technologies that could help us learn and use technologies more rapidly? Are there any BIG ones you use or recommend, and are there any standards, existing or in the works, so we can still leverage complexity without too bad of a plateau?

On a personal note what are you currently reading and how do you organize your day/life? Thanks so much for doing this AMA.

dCLCp

The most important technique for dealing with technology or leveraging complexity is abstraction: essentially "abstracting" away details that you don't need to be concerned with when working on a technology. It doesn't always work, but in general it allows you to handle enormously sophisticated systems, and it is one of the most important ideas in engineering. Another enabling technology is simulation, which can allow people to see and play with the complexity and nonlinearity of a system, and the bounds of its behavior, even without understanding all of its details.
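
As a toy illustration of abstraction, here is a short Python sketch (all names are hypothetical): the application code talks to a simple storage interface and never needs to know how the storage actually works underneath, so the implementation can be swapped out without touching the rest of the system.

```python
from abc import ABC, abstractmethod

# Toy illustration of abstraction (hypothetical names): callers work against
# a simple interface and never need to know how storage actually happens.


class KeyValueStore(ABC):
    """The abstraction: callers see only put/get, not the machinery behind them."""

    @abstractmethod
    def put(self, key, value):
        ...

    @abstractmethod
    def get(self, key):
        ...


class InMemoryStore(KeyValueStore):
    # One concrete implementation; a file- or network-backed store could be
    # swapped in later without changing any of the code that uses it.
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


def record_reading(store, sensor, value):
    # Application logic depends only on the abstraction, not the implementation.
    store.put(sensor, value)


store = InMemoryStore()
record_reading(store, "thermostat", "21.5 C")
print(store.get("thermostat"))  # -> "21.5 C"
```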

Related to reading, my reading queue currently consists of the following books: Kevin Kelly's THE INEVITABLE, Brian Christian and Tom Griffiths' ALGORITHMS TO LIVE BY, PLUS ONE by Christopher Noxon, THE REGIONAL OFFICE IS UNDER ATTACK! by Manuel Gonzales, and Daniel Dennett's INTUITION PUMPS AND OTHER TOOLS FOR THINKING.


What do you think of the influence of computers in financial markets, such as flash trading or constructing complex/exotic derivatives? It seems the more complex trading becomes, the bigger the risks of events like the flash crash. Do you foresee this type of trading expanding? And what effect will this have on individuals trying to invest personal money?

WhisperShift

This is a big issue, where large numbers of machines are interconnected and lots of algorithms are interacting rapidly and in complex ways. As this happens more and more, we will have failures to understand these systems. And I think it is likely that these failures of comprehension will manifest as events like the Flash Crash.


Thanks for the AMA opportunity. Since you are now working in venture capital, is that an industry currently at risk of "overcomplication?" I imagine the software necessary to trade data and money over extremely minute periods of time must be increasingly complex, especially if designed to adapt and learn.

I'm also curious if you've ever worked with the Santa Fe Institute, and what you think of their mission to encourage laypeople to study complexity, or at least to learn to think complexly across disciplines. How approachable a field is it for high school and college students?

Acidnapper

While finance certainly has a great deal of complexity, especially with trading frequency and amounts, venture capital is not about exploiting these small changes and investing based on them. Venture capital is about investing in and assisting startups, to make sure that a company has the greatest chance of success. That being said, since I am interested in startups in the science and technology space, overcomplication is something I think about, at least when it comes to understanding the science and tech.

And related to the Santa Fe Institute: I spent a summer there as an undergraduate, as part of its undergraduate REU program. It was a phenomenal experience and I received exposure to so many ideas and people in the world of complexity. I highly recommend it.


Hi Sam, and thanks for doing this AMA.

What clinical/pre-clinical biotechnologies are you most excited about right now as an investor?

For me, I am very bullish on bispecific antibodies for the treatment of cancer and novel uses of antibody-drug conjugates for delivering drugs (not just chemotherapy agents) specifically to target cells. I am also excited for 5-HT2A inverse agonists and the impact they may have in a number of neurological conditions.

Two popular answers that I remain bearish on are CRISPR and CAR-T cells. One that I am on the fence about is anticalins. I'd love to hear your thoughts - thanks!

SirT6

One area that we at Lux are really excited about is better understanding the gut-brain axis. We have an investment in this space, Kallyope, which is working on this.

I'm also really excited about technologies that allow for further discovery, specifically the growth in technologies that allow biological experimentation to be increasingly automated and run in the cloud (we are invested in this space as well). This allows experiments to be done more rapidly, more cheaply, and more reliably, all of which accelerate science.


Is "The Singularity" a thing?

zapitron

If by "thing" you mean a concept people argue about, then of course! If you mean, is it going to happen, then my (possibly emotionally-based) position is maybe, but if so, it will be slowly, and not for many decades. I'm much more concerned about technological incomprehensibility, even without the Singularity, which is all around us and happening currently.


When should we introduce statistics to children? And how?

Lowestprimate

When a child is young, they are obviously not going to be doing the math of statistics. But I think the easiest way to introduce statistical thinking to children early on is by teaching them that reasoning from a single example doesn't always work. We know the old saying that the plural of anecdote is not data, but lots of us still think this way.

If you show a kid that one example of something is not necessarily enough to learn from, then they are well on their way to thinking statistically.
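
As a tiny illustration of that point, here is a Python sketch (with a made-up coin bias) showing how an estimate from a single example can be wildly off, while an estimate from many examples lands close to the truth.

```python
import random

# Tiny illustration (made-up numbers): judging a biased coin from one flip
# versus many flips. A single example often misleads; many get closer to
# the truth.

random.seed(0)
TRUE_HEADS_PROB = 0.6  # hypothetical "true" bias of the coin


def flips(n):
    return [random.random() < TRUE_HEADS_PROB for _ in range(n)]


one = flips(1)
many = flips(1000)

print("estimate from 1 flip:    ", sum(one) / len(one))    # 0.0 or 1.0, way off
print("estimate from 1000 flips:", sum(many) / len(many))  # close to 0.6
```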


Do you think there are applications where complexity can be a good thing?

Chevaboogaloo

Adding complexity means a system can be more sophisticated and powerful, which can be great. If a technology, like a self-driving car, can handle all the edge cases (different types of weather, pedestrian weirdness), then it is a better system. It's safer. The downside is that this can often make the system less understandable, and cause it to operate in sometimes unexpected ways. As long as we are building in the complexity intentionally, and are aware of the possible unexpected consequences, that's okay, because complexity isn't going away, and in many cases it can be a good thing.



License

This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.