Scientists have built a dog-sized robot that is teaching itself how to walk like an animal

The robot dog that learns to walk. Copyright Felix Ruppert, Dynamic Locomotion Group at MPI-IS
By Luke Hurst

The researchers built the robot so they could find out more about the process animals go through when learning to walk.


Scientists have built a four-legged, dog-sized robot that learns how to walk.

Just like young animals stumble and stutter as they get their bearings on their legs for the first time, the robot learns from its experiences, improving its technique as it goes.

Built to fill gaps in our knowledge of how animals learn to walk, the robot can master the task within an hour.

The team behind the quick-learning dog-bot says it was built with animal-like features, and a computer to help gauge mistakes and learn to rectify them.

"As engineers and roboticists, we sought the answer by building a robot that features reflexes just like an animal and learns from mistakes," said Felix Ruppert, a former doctoral student in the Dynamic Locomotion research group at the Max Planck Institute for Intelligent Systems (MPI-IS) in Stuttgart.

"If an animal stumbles, is that a mistake? Not if it happens once. But if it stumbles frequently, it gives us a measure of how well the robot walks".

Robot learns faster than animals

Animals are born with muscle coordination networks in their spinal cords - but they still have to learn precise coordination.

This takes time, the researchers say, with an initial reliance on the hard-wired spinal cord reflexes.

These help animals avoid falling and hurting themselves on their initial attempts, while they learn more precise movements over time as the nervous system adapts to their leg muscles and tendons.

The scientists wanted to gain insights into this learning process using the robot dog.

The robot uses an algorithm to guide its learning. A foot sensor sends information to be matched against target data from a virtual spinal cord, modelled as a program running on the robot’s computer. It learns to walk by continuously comparing sensed and expected sensor information, and adapting its motor control patterns.
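The comparison step described above can be sketched in a few lines. This is a minimal illustration of the idea, not the researchers' code: foot-sensor readings are matched against the touch-downs the virtual spinal cord expects, and the mismatch becomes the learning signal. The contact traces and the error measure are illustrative assumptions.

```python
def contact_mismatch(expected, sensed):
    """Fraction of time steps where sensed foot contact differs from expectation."""
    misses = sum(1 for e, s in zip(expected, sensed) if e != s)
    return misses / len(expected)

# Hypothetical contact traces: 1 = foot on the ground, 0 = foot in the air.
expected = [1, 0, 1, 0, 1, 0]
sensed = [1, 0, 0, 0, 1, 0]   # one missed touch-down

error = contact_mismatch(expected, sensed)   # about 0.17: one miss in six steps
```

A low, stable error means the robot is walking as its virtual spinal cord predicts; frequent mismatches - frequent stumbles - are the cue to keep adapting, echoing Ruppert's point that a single stumble is not a mistake, but frequent stumbling is a measure of how well the robot walks.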

The algorithm adapts a central pattern generator (CPG), which in humans and animals is a network of neurons in the spinal cord that produces muscle contractions without input from the brain. CPGs help with rhythmic tasks like walking, blinking or digestion.
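A CPG's role can be pictured as a simple rhythmic oscillator. The toy model below is our assumption for illustration, not the paper's model: each leg gets a periodic drive signal generated purely from an internal phase, with no external "brain" input.

```python
import math

def cpg_output(t, frequency=1.0, phase_offset=0.0):
    """Rhythmic drive signal for one leg at time t (seconds)."""
    return math.sin(2 * math.pi * frequency * t + phase_offset)

# Two legs half a cycle apart, as in a diagonal trotting gait.
left = cpg_output(0.25)                         # peak of the left leg's cycle
right = cpg_output(0.25, phase_offset=math.pi)  # opposite leg, opposite phase
```

Because the rhythm comes entirely from the internal phase, such a generator keeps producing coordinated, alternating leg signals on its own - which is why, on flat ground, a CPG alone can be enough to drive walking.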

When young animals walk over flat surfaces, CPGs can be enough to control the movement signals from the spinal cord.

But if bumps or uneven surfaces change the terrain, young animals need to learn when to use reflexes to avoid falling, and when to revert to the CPG.

Until this system is perfected, the animal will stumble - but animals learn it quickly.

The robot dog - named Morti - optimises its movement patterns faster than an animal, learning how to walk steadily in an hour.

Its CPG is simulated on a computer that controls the movement of its legs. Sensor data are continuously compared with the touch-down predicted by the robot’s CPG, so the robot can detect when what actually happens diverges from what was expected.

If the robot stumbles, the learning algorithm changes how far the legs swing back and forth, how fast the legs swing, and how long a leg is on the ground.
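The adjustment described above can be sketched as a stumble-triggered update to the three gait parameters the article names. The parameter names, values and update rule here are illustrative assumptions, not the published algorithm:

```python
def update_gait(gait, stumbled, step=0.05):
    """After a stumble, shorten and slow the swing and lengthen ground contact."""
    if not stumbled:
        return dict(gait)
    return {
        "swing_length": gait["swing_length"] - step,  # how far the legs swing
        "swing_speed": gait["swing_speed"] - step,    # how fast the legs swing
        "stance_time": gait["stance_time"] + step,    # how long a leg stays down
    }

gait = {"swing_length": 0.4, "swing_speed": 1.2, "stance_time": 0.3}
gait = update_gait(gait, stumbled=True)   # a more cautious gait after a stumble
```

Repeating small corrections of this kind whenever a stumble is detected - while leaving the reflexes active - is the shape of the learning loop the researchers describe, with Morti converging on a steady gait within about an hour.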


"Our robot is practically ‘born’ knowing nothing about its leg anatomy or how they work," Ruppert explained.

"The CPG resembles a built-in automatic walking intelligence that nature provides and that we have transferred to the robot. The computer produces signals that control the legs’ motors, and the robot initially walks and stumbles. Data flows back from the sensors to the virtual spinal cord where sensor and CPG data are compared.

“If the sensor data does not match the expected data, the learning algorithm changes the walking behaviour until the robot walks well, and without stumbling. Changing the CPG output while keeping reflexes active and monitoring the robot stumbling is a core part of the learning process".

The results were published in the journal Nature Machine Intelligence.

"We can't easily research the spinal cord of a living animal. But we can model one in the robot," said Alexander Badri-Spröwitz, who co-authored the publication with Ruppert and heads the Dynamic Locomotion Research Group.


"We know that these CPGs exist in many animals. We know that reflexes are embedded; but how can we combine both so that animals learn movements with reflexes and CPGs? This is fundamental research at the intersection between robotics and biology. The robotic model gives us answers to questions that biology alone can't answer".

