MIT removes the need for vision in stair-climbing robot
- Author: Ella Cai
- Posted on: 2018-07-09
MIT has taught its 40kg Cheetah 3 robot to run across rough terrain and climb litter-strewn staircases by touch alone.
“There are many unexpected behaviours the robot should be able to handle without relying too much on vision,” said MIT robot designer Sangbae Kim. “If you rely too much on vision, your robot has to be very accurate in position and eventually will be slow. So we want the robot to rely more on tactile information. That way, it can handle unexpected obstacles while moving fast.”
Two algorithms underpin this skill, as well as the robot’s ability to recover its balance when unexpectedly shoved: ‘contact detection’ and ‘model-predictive control’.
Contact detection helps the robot determine the best time for a given leg to switch from swinging in the air to stepping on the ground. If it steps on an insubstantial surface, for instance, should it commit its weight, or pull back and swing the leg in the hope of finding solid footing? “When it comes to switching from the air to the ground, the switching has to be very well-done,” said Kim. “This algorithm is really about, ‘when is a safe time to commit my footstep?’”
To do this, the algorithm constantly calculates three probabilities for each leg: the probability of the leg making contact with the ground, the probability of the force generated once the leg hits the ground, and the probability of the leg being in mid-swing. Input data comes from gyroscopes, accelerometers and joint positions.
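MIT has not published the estimator itself, but as a rough illustration of the idea, the three per-leg probabilities might be fused from proprioceptive signals with simple logistic gates, something like this toy sketch (all signal choices and thresholds are invented):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def estimate_leg_state(gait_phase, knee_torque, foot_speed):
    """Toy per-leg estimator (illustrative only; not MIT's controller).
    Fuses proprioceptive signals into the three probabilities the
    article describes, using hand-tuned logistic gates.

    gait_phase  -- 0..1 position in the leg's planned swing/stance cycle
    knee_torque -- measured joint torque in N*m (spikes on touchdown)
    foot_speed  -- foot speed from joint kinematics, m/s
    """
    # Prior: a leg near the end of its scheduled swing should touch down.
    p_schedule = sigmoid(20.0 * (gait_phase - 0.75))

    # Evidence: touchdown shows up as a torque spike against the ground.
    p_torque = sigmoid(0.5 * (knee_torque - 8.0))   # 8 N*m: invented threshold

    p_contact = p_schedule * p_torque

    # "Probability of the force": will the surface actually bear weight?
    # Proxied here by how firmly torque keeps building after touchdown.
    p_force = sigmoid(0.5 * (knee_torque - 15.0))   # 15 N*m: invented threshold

    # Mid-swing: little contact evidence, foot still moving fast.
    p_swing = (1.0 - p_contact) * sigmoid(4.0 * (foot_speed - 0.5))

    return p_contact, p_force, p_swing
```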
For example, said MIT, if the robot unexpectedly steps on a wooden block, its body will suddenly tilt, changing angle and height. The three per-leg probabilities are used to estimate whether each leg should commit to pushing down, or to lift up and swing away in order to keep balance. “Without that algorithm, the robot was very unstable and fell easily,” said Kim.
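Conceptually, the commit-or-retract decision then reduces to threshold tests on those probabilities. A toy version, continuing the hypothetical estimator above (thresholds again invented), might look like:

```python
def choose_action(p_contact, p_force, p_swing):
    """Toy commit-or-retract rule on top of the estimator above.
    Thresholds are invented; the real decision logic is not public here."""
    if p_contact > 0.8 and p_force > 0.6:
        return "commit"         # safe to load the leg and push down
    if p_contact > 0.8:
        return "retract"        # contact made, but surface feels insubstantial
    return "keep_swinging"      # no reliable contact yet

# A leg that has touched something that is not bearing weight:
print(choose_action(p_contact=0.9, p_force=0.3, p_swing=0.05))  # -> retract
```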
The model-predictive control algorithm predicts how much force a given leg should apply once the contact-detection algorithm has committed to a step.
Every 50ms, it calculates where the robot’s body and legs would be a half-second into the future if a given force were applied by any leg as it makes contact with the ground. The effect is to quickly produce counter-forces that regain balance and keep the robot moving forward, without tipping too far in the opposite direction.
“Say someone kicks the robot sideways,” said Kim. “When the foot is already on the ground, the algorithm decides, ‘How should I specify the forces on the foot? Because I have an undesirable velocity on the left, so I want to apply a force in the opposite direction to kill that velocity. If I apply 100N in this opposite direction, what will happen a half second later?’”
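As a much-simplified sketch of that predictive step, with a 1-D point mass standing in for the robot’s body (the real controller optimises full 3-D body and leg states), one could roll each candidate force forward over the half-second horizon and keep the one that best cancels the unwanted velocity:

```python
def rollout(v0, force, mass=40.0, steps=10, dt=0.05):
    """Predict lateral velocity after steps*dt seconds (here 0.5 s) if
    `force` newtons push sideways at every 50 ms step. A 1-D point mass
    stands in for the 40 kg body; the real model is far richer."""
    v = v0
    for _ in range(steps):
        v += (force / mass) * dt   # a = F/m, integrated over one step
    return v

def best_force(v0, candidates):
    """Pick the candidate force whose predicted half-second-ahead
    velocity is closest to zero, i.e. the shove is cancelled."""
    return min(candidates, key=lambda f: abs(rollout(v0, f)))

# A sideways kick leaves the body with 0.5 m/s of leftward velocity;
# try a few rightward counter-forces (N) over the 0.5 s horizon.
print(best_force(v0=-0.5, candidates=[0.0, 25.0, 50.0, 100.0]))  # -> 50.0
```

The shape of the computation (simulate forward, score the outcome, pick the forces) mirrors the article’s description, even though the real optimisation spans all four legs and the full body pose.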
The team is working on further improvements to blind locomotion. For now, the robot’s cameras will be used for mapping the environment and spotting large obstacles such as doors.
The robot’s vision-free capabilities will be presented at the International Conference on Intelligent Robots and Systems in Madrid in October.