Science AMA Series: I’m Jessy Grizzle, the inaugural Director of Robotics at the University of Michigan. My group produces some of the world’s most advanced algorithms that allow bipedal robots to walk over uneven terrain. AMA!




I have a few questions:

  • how much of your algorithms is hand-coded, and how much comes from training neural networks?
  • what does the overall "main loop" look like? A big state machine? Parallel networked CPUs? A neural network?
  • what sensors are used?
  • at what clock speed does the hardware run?

I code a lot in my free time (for example, letting an AI teach itself to play Super Mario), but I pretty quickly hit the limits of this, like:

  • how do you steer such learning (if used)?
  • how can I make test cases that allow for new solutions (creative adaptation)?

thanks for your time and this AMA



We explained some of the low-level control loops in this answer:

You're starting to open the discussion for higher level aspects of our locomotion algorithms. For example, suppose you have really good gaits and controllers for walking at speeds ranging from -1 m/s to +1 m/s, for walking up and down slopes, and other specific activities you would like a robot to do.

So now you can ask the question: how do you glue all of these individual motion primitives into a more global set of capabilities for the robot? We have done that in a couple of different ways.

1) We built finite state machines. (It's a fancy way of organizing a bunch of if/then/else statements.) The problem with that is you need really clear ideas about when to switch from one type of locomotion to another. And if you mess up the transition conditions, you get VERY unexpected and often undesirable behavior.

2) We have built a "general" purpose feedback solution that can incorporate a subset of the behaviors I described above: a small range of walking speeds and a small range of slopes. This gives more predictable behavior but, as you can tell from the description, more limited behavior too.

3) Just recently, and you can find this on our website, we are using a form of machine learning, specifically supervised learning, to smoothly stitch all of these control behaviors together into a more global solution to the locomotion problem. We have very limited experience with this so far, but it got us farther on the Wave Field than anything else we have ever tried!
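The finite-state-machine approach in (1) is easy to sketch. Below is a toy Python version (the lab's actual code is written in MATLAB, and the gait names and thresholds here are invented for illustration):

```python
from enum import Enum, auto

class Gait(Enum):
    STAND = auto()
    WALK = auto()
    UPHILL = auto()

def next_gait(gait, speed_cmd, slope):
    """Organized if/then/else: decide when to switch gait controllers.

    Getting these transition conditions wrong is exactly what produces
    the unexpected and undesirable behavior described above.
    """
    if gait is Gait.STAND and abs(speed_cmd) > 0.1:
        return Gait.WALK
    if gait is Gait.WALK and slope > 0.05:
        return Gait.UPHILL
    if gait is Gait.UPHILL and slope <= 0.05:
        return Gait.WALK
    if gait is Gait.WALK and abs(speed_cmd) <= 0.1:
        return Gait.STAND
    return gait
```

Each state selects a different low-level controller; the hard part in practice is choosing thresholds so every transition lands in a region where both controllers work.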

Let's get into a few more of the technical details of our robot. MARLO has high-precision encoders on each of its joints to measure their angles and velocities. It has what's called an IMU, inertial measurement unit, that acts like your vestibular system and provides orientation of the robot's body with respect to the vertical (she now knows which way is up).

The computer on the robot executes one control cycle every millisecond. We develop the control code in a high level language called MATLAB, which allows us to program equations more or less like you would write them on a sheet of paper.

MATLAB also has tools to translate these equations into machine code that is executed on the robot. A human has never had to be involved in this translation process - it's all automatic.
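To make the 1 ms control cycle concrete, here is a hypothetical Python sketch of one sense-compute-actuate pass. The real controller is generated automatically from MATLAB; the gains and function names below are invented:

```python
# Illustrative PD gains; the real values depend on the joint and gait.
KP, KD = 80.0, 2.0

def pd_torque(q, q_des, dq, dq_des=0.0):
    """Joint torque from a PD law on encoder angle and velocity."""
    return KP * (q_des - q) + KD * (dq_des - dq)

def control_cycle(sensors, q_des):
    """One 1 ms cycle: for each joint, read (angle, velocity) from the
    encoders and compute the torque command to send to the motor."""
    return [pd_torque(q, qd, dq) for (q, dq), qd in zip(sensors, q_des)]
```

The IMU reading would enter a cycle like this as well, by shifting the desired joint angles so the torso stays upright.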

This AMA is being permanently archived by The Winnower, a publishing platform that offers traditional scholarly publishing tools to traditional and non-traditional scholarly outputs—because scholarly communication doesn’t just happen in journals.




Hi Jessy,

How does your lab develop the algorithms you use? For example, are they hard coded after each iteration, or do you incorporate machine learning into your robots?
In initial stages did you design the algorithms using simulations, or did you always work straight from physical feedback?



We are kind of iconoclasts in this field: we do not use a simplified model of the robot, such as an inverted pendulum. Instead, we start with a complete differential equation model of the robot. Such a model has all the information about the geometry of the robot, the masses of the links, how powerful the motors are, what type of feet the robot has, etc. To get an idea of its complexity: if we were to type it out in 10 pt font, it would probably fill 1,000 pages.

Nevertheless, using modern computing tools, we can determine solutions to these equations that are optimal in the sense that they use the least amount of energy to accomplish a given task, subject to constraints such as how much friction the robot's feet can generate on the terrain, how fast we want the robot to walk, and how rough the ground is in terms of slopes, discontinuities, and such.
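A toy stand-in for that kind of constrained optimization can be written in a few lines with SciPy. The cost, the constraints, and every number below are invented and vastly simpler than the 1,000-page model; the point is only the structure (minimize effort subject to an impulse requirement and a friction limit):

```python
from scipy.optimize import minimize

# Invented illustration: pick a tangential ground reaction force Ft (N)
# and stance time T (s) that deliver a desired forward impulse with the
# least "effort", while staying inside the friction cone Ft <= MU * FN.
MU = 0.6          # friction coefficient (assumed known here)
FN = 300.0        # normal force: robot weight on one leg
IMPULSE = 60.0    # desired forward impulse (N*s)

def effort(x):
    Ft, T = x
    return Ft**2 * T          # crude energy proxy: force^2 * time

constraints = [
    {"type": "eq",   "fun": lambda x: x[0] * x[1] - IMPULSE},  # Ft*T = impulse
    {"type": "ineq", "fun": lambda x: MU * FN - x[0]},          # friction cone
]

res = minimize(effort, x0=[100.0, 1.0], constraints=constraints,
               bounds=[(0.0, None), (0.1, 5.0)])
Ft_opt, T_opt = res.x
```

The real gait optimizations have hundreds of decision variables (joint trajectories, timings, forces), but the same pattern holds: a cost, equality constraints from the dynamics, and inequality constraints from friction and hardware limits.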

From these solutions, we have a means of automatically generating a feedback controller that will realize these solutions on the actual robot. We have a technique that we call virtual constraints, but those details are probably not really what you are after. The key thing is we use the math and optimization tools to find these wonderful gaits that our robot executes. We are not hacking on the robot itself.
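The flavor of virtual constraints can be hinted at with a toy sketch: instead of tracking a trajectory in time, a joint tracks a desired value parameterized by a phase variable measuring progress through the step, and a feedback law drives the tracking error to zero. The desired-angle function and gains below are invented for illustration:

```python
import math

def h_d(phase):
    """Invented desired knee angle (rad) vs. gait phase in [0, 1]:
    bent mid-step, straight at the start and end of the step."""
    return 0.3 * math.sin(math.pi * phase)

def virtual_constraint_torque(q_knee, dq_knee, phase, dphase,
                              kp=100.0, kd=10.0):
    """Drive the output y = q - h_d(phase) to zero with a PD law."""
    y = q_knee - h_d(phase)
    # chain rule: d/dt h_d(phase) = h_d'(phase) * dphase
    dy = dq_knee - 0.3 * math.pi * math.cos(math.pi * phase) * dphase
    return -kp * y - kd * dy
```

Because the constraint is indexed by phase rather than by a clock, the gait automatically slows down or speeds up with the robot instead of fighting it.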

Hello Jessy. I'm Alex Barberá. I'm working at the Barcelona Supercomputing Centre in the life science department. Robotics (obviously at a smaller scale) is my hobby. I would like to know what some of the challenges were in managing the robot's center of gravity, and how they were addressed. Amazing job, by the way!!


Thanks, Alex! Center of gravity is one of the biggest challenges for two-legged robots. To solve the problem we have a sensor called an IMU, or inertial measurement unit, that constantly measures the robot's orientation, and with it how the center of gravity moves. The controller uses this information to decide how to move the legs to keep the robot balanced.

How might such an algorithm account for terrain that is more difficult to navigate than its geometry suggests, such as wet floors or loose soil?


When we design the gaits through optimization, we include constraints to ensure that friction requirements are met. Of course, in many real-world scenarios, the coefficient of friction will not be known. Perhaps it will be possible to use machine learning techniques to classify the terrain type and use this information to select an appropriate gait.

Another big challenge is uneven terrain. One way to tackle this problem is by optimizing several gaits (level ground / uphill / downhill). Information from these optimizations can be used to construct a "unified" walking controller that can handle terrain variations. This is how MARLO was able to walk on the Wave Field.
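In its simplest form, such a unified controller could interpolate gait parameters between the slopes optimized offline. This Python sketch is only an illustration of the idea; the slopes and parameter values are invented:

```python
import numpy as np

# Gait parameters found offline by optimization at a few slopes
# (downhill, level, uphill). All values are invented for illustration.
slopes = np.array([-0.2, 0.0, 0.2])          # ground slope (rad)
torso_pitch = np.array([-0.05, 0.0, 0.08])   # optimized torso lean per slope

def blended_pitch(slope):
    """Desired torso pitch for the slope currently under the feet,
    linearly interpolated between the optimized operating points."""
    return float(np.interp(slope, slopes, torso_pitch))
```

A richer version interpolates every gait parameter, not just one, which is where the supervised-learning stitching mentioned earlier comes in.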

Hi, Professor Grizzle & co. I'm a computer engineering undergrad at UM. My question is: Have you programmed Asimov's Laws into your robots?
Just kidding (...sort of). I can't even begin to imagine the kind of control systems that go into something like MARLO and MABEL. About how long does it take to get from conception to prototype?


No, we haven't. Those are more for AI-based machines. What we're doing in my lab is robot locomotion, and Asimov's laws don't pertain. Asimov's laws would need to be programmed into a robot's "sense of self." The "sense of self," or "ego" would supervise the locomotion to tell the robot where to go, etc. And there, yes, you might eventually want to program into the sense of self "do not harm humans" or "obey humans except when that goes counter to the first law," etc. But that's not what we are about. We're trying to give robots that basic walking capability: being able to stay upright over uncertain terrain and move in an energetically efficient manner so they don't use up their batteries too quickly.

Why bioinspired? What real world advantages do robotic designs that mimic nature provide?



We are not bio-inspired in our work. That's because robots are built of hard links in place of bones and flesh, motors in place of muscles, wires in place of nerves, and perhaps a dual-core computer in place of a brain. We really work directly with the mechanics and electronics of the machine and let that guide our development of locomotion algorithms.

Maybe you’re asking why we work on robots that have two legs versus three or four legs. We want our robots to serve as helpers in search and rescue, for example, and eventually to assist in homes, helping older people stay there longer. If these robots are going to go charging into an accident in a factory or a burning home, or work alongside humans in a home, they need to have a body shape that is compatible with our living environments. Since we build things for humans to live and work in, we are studying robots that have our basic body shape: two legs, upright, a torso, able to step over objects and through narrow passages. Also, by developing locomotion algorithms for two-legged machines, there are spin-offs into things that people care about, like lower-limb prosthetics and exoskeletons that can allow paralyzed people to walk... maybe even eventually without crutches. We’ve seen colleagues at UT Dallas and Georgia Tech apply our algorithms to prosthetics, and we’re very excited about that. We know of at least one exoskeleton startup that is building its locomotion solutions around our mathematics.

Animals have the ability to adjust their upper bodies to maintain balance. Do your robots do this or do they work mostly via leg adjustments?



We’ve been doing that ever since our first robot Rabbit.

There, we were using the lean angle of Rabbit’s torso to regulate speed. We did not adjust step length at all. Now, we certainly understand that it’s more efficient to combine body posture such as lean angle with leg adjustment, and our new methods take full advantage of that.
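A toy version of that Rabbit-style speed-regulation law, with invented gains and limits, looks like this:

```python
def desired_lean(v_actual, v_desired, k_lean=0.1, max_lean=0.2):
    """Map walking-speed error to a desired torso lean angle (rad):
    lean forward (positive) to speed up, backward to slow down.
    Gain and saturation limit are invented for illustration."""
    lean = k_lean * (v_desired - v_actual)
    return max(-max_lean, min(max_lean, lean))
```

The gait controller then tracks this desired lean angle, so the body posture itself becomes the speed regulator.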



Hi, we do consider using the upper body to maintain balance, but it is done in an implicit way: we use optimization and machine learning to generate body postures (both upper body and legs) that maintain balance. Unlike animals, MARLO is designed with a lightweight upper body, so moving the upper body does not change the center of mass much; we therefore mostly adjust the legs.

Hello and congratulations on the robot!! My question may be a little stupid, but I have to ask... Robotics is such a wide and diverse field that I don't really know where and how to get started... I found myself liking concepts like SLAM, underactuated robotics, and multi-robot systems. Can you shed a little light on how I can start my own advanced project, and on how to research topics efficiently and go about it? Thank you.


Hi aksris, thanks for being interested in the robotics field. I can give a basic overview of the different concepts. Our lab focuses on control algorithms to stabilize underactuated robots, which requires a mathematics background in areas such as linear algebra and feedback control. SLAM involves computer vision and state estimation. Multi-robot systems can involve optimization and game theory. They are all attractive topics. As for your own robot project, there are many robot hobby kits that are easy to start with, such as Arduino and LEGO Mindstorms. Building a toy segway could be a fun project: you will learn CAD, machining, programming, PD control, sensing, etc.
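That toy-segway suggestion is a nice first look at PD control. Here is a minimal Python simulation of a PD controller balancing a linearized inverted pendulum; all parameters are invented for illustration:

```python
DT = 0.001        # simulation step (s), like a 1 kHz control loop
G_OVER_L = 19.6   # g/l for an invented 0.5 m pendulum

def simulate(theta0, kp=40.0, kd=8.0, steps=3000):
    """Return the tilt angle (rad) after `steps` of PD-controlled
    dynamics, integrated with the simple Euler method."""
    theta, dtheta = theta0, 0.0
    for _ in range(steps):
        u = -kp * theta - kd * dtheta     # PD feedback "torque"
        ddtheta = G_OVER_L * theta + u    # linearized pendulum + input
        dtheta += ddtheta * DT
        theta += dtheta * DT
    return theta
```

Starting tilted at 0.1 rad, the controller brings the pendulum essentially upright within a few seconds; turning kp below G_OVER_L makes it fall over, which is a good first experiment.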

Hi Jessy. How does your lab develop the algorithms you use? Did you design them using simulations?




We do most of our algorithm development using MATLAB. We take a mathematical model of our robot (developed using Lagrangian mechanics) and design gaits using trajectory optimization. These optimizations give us a desired path for the robot's joints to follow. We then design a feedback controller to achieve these desired paths, and to make sure the robot is stable.

We simulate our model in MATLAB before putting the controllers on the robot. If the simulations are successful, we test them out on the robot. We do not "hand-tune" anything experimentally on the robot. We use system identification to make sure that the simulation model matches the real robot. We also make sure that the controller is robust enough to handle any "real-world" uncertainties.
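The system-identification step can be sketched with ordinary least squares. Below is a hypothetical Python example recovering the mass and damping of a simple one-joint model from (synthetic) logged data; the model and all numbers are invented for illustration:

```python
import numpy as np

# Synthetic "logged data": in reality these come from robot experiments.
rng = np.random.default_rng(0)
v = rng.normal(size=200)            # measured joint velocities
a = rng.normal(size=200)            # measured joint accelerations
m_true, b_true = 2.0, 0.5           # parameters we pretend not to know
u = m_true * a + b_true * v         # torques that were commanded

# Model u = m*a + b*v in regressor form u = [a v] @ [m b]^T,
# solved for the unknown parameters by least squares.
A = np.column_stack([a, v])
(m_est, b_est), *_ = np.linalg.lstsq(A, u, rcond=None)
```

With the fitted parameters plugged back into the simulation model, the simulated robot behaves much more like the real one, which is what makes the "no hand-tuning" workflow possible.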

Do you sometimes reflect on what life (or death) will be like once these robots are weaponised?



What if Henry Ford had thought about whether people would weaponize the model T and then decided, you know, we're not going to make automobiles available to the masses?

Hi Jessy!

So I've loved robotics my entire life and am dead set on U of M (senior in HS, applying for early decision). What major should I choose, and what should I pursue to be able to work with these bots and on these projects?

Huge fan of your work, and I hope to someday work on these bots myself!


Hi! There are a lot of ways into robotics at U-M! You can come in through traditional fields like electrical and computer engineering, computer science and engineering, and mechanical engineering. But Michigan Robotics is also full of aerospace and naval engineers who work on aerial and underwater robotics. So far we have two undergraduate robotics courses at Michigan: one is hands-on robotics by Prof. Revzen, and another is autonomous robots, EECS 467 (I think). Those are great ways to get started! We do hope to add more courses.

What languages, development environments, and operating systems do you use?


We do most of our algorithm development and simulation using MATLAB on Windows. We are running Simulink Real-Time on the robot MARLO.

We also use C/C++ intermittently when we need a particular program to run very fast, or when programming firmware.

We have all of our sensors (IMU and encoders) communicating to the main embedded computer though the EtherCAT protocol.

We recently bought a stereo camera that we are interfacing with using Linux and the Robot Operating System (ROS).

Do you think the wolverines will ever beat Urban Meyer and the Buckeyes?


We’re meeting with coach Harbaugh now on some new locomotion algorithms. Watch out.

Hi there:

Why does EECS 216 suck so much? :(

Source: Former CSE/CE double major who had to dump the CE because 216 killed me. And I think you were one of the guys teaching it that semester (Winter '14). It was no fault of yours that I dropped it, it's just an effing hard class.

In all seriousness, 216 really felt like one of those courses that if you don't take at exactly the right time, then you get completely overwhelmed -- I was so shaky on my remembrance of Calc 2 and DiffEq that I pretty much instantly started drowning.

The tutors I tried going to did make effort to help, but it just wasn't enough. I wish there was some kind of introductory goodness I could have done to (re)build up to 216.


You know, that is a tough course, and we certainly have to assume you have mastered the prereqs when you start your engineering curriculum. There just isn't time for us to ramp up slowly. I hope you can review your Calc a bit, your DiffEq and give it another go! At the end of the course, when we were talking about feedback control, I was using MABEL as a prime example of what you can do with some of the methods from EECS 216. I'm sorry it didn't work out for you in the beginning of the course because I think you would really have enjoyed the second part!



This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.