



Why engineers are adding ‘common sense’ to household robots

MIT engineers say “exciting” new algorithm could change the way household robots perform their tasks

MIT engineers are aiming to give robots a little bit of common sense to help them excel in handling missteps. (Pexels)

From cleaning homes to cooking meals, household robots are increasingly making life easier for those who can afford them. Yet for all their apparent capability, they often cannot recover from even small missteps. Recognising this, engineers are aiming to give robots a little common sense to help them handle such situations.

Engineers from the Massachusetts Institute of Technology (MIT) have developed a method that connects robot motion data with the “common sense knowledge” of large language models (LLMs). This enables a robot to logically break a given household task into subtasks, and to physically adjust to disruptions within a subtask so it can move on without having to start the whole task from scratch.


Previously, engineers had to program fixes for every possible failure along the way. “Imitation learning is a mainstream approach enabling household robots. But if a robot is blindly mimicking a human’s motion trajectories, tiny errors can accumulate and eventually derail the rest of the execution,” said Yanwei Wang, one of the engineers, in a press statement. The new method lets a robot self-correct execution errors and improves overall task success, Wang added.

To demonstrate the new approach, the researchers used a simple task: the robot had to scoop marbles from one bowl and pour them into another. Typically, engineers would move a robot through the motions of scooping and pouring, repeating the demonstration a few times so the robot could mimic them. But collecting human demonstrations is exhausting and time-consuming.

To address this, the team developed an algorithm that automatically links an LLM’s natural-language label for a specific subtask with the robot’s position in physical space, or with an image that encodes the robot’s state, a process called grounding, the statement explained.

The new algorithm is designed to automatically identify which semantic subtask a robot is in, such as “reach” versus “scoop”.

The researchers trained the robot physically and then used a pre-trained LLM to list the steps involved. They used the new algorithm to link the LLM’s subtasks with the robot’s motion trajectory data. The algorithm automatically learned to map the robot’s physical coordinates in the trajectories and the corresponding image view to a given subtask, the statement explained.
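The idea of grounding can be pictured as a simple classifier over the demonstration data. The following sketch is illustrative only, not MIT's code: the subtask names, coordinates, and nearest-centroid approach are assumptions chosen to make the mapping from robot state to subtask label concrete.

```python
import math

# Hypothetical example: an LLM has decomposed "scoop marbles into the
# other bowl" into four subtasks, and each segment of the demonstration
# trajectory has been labelled with one of them. Coordinates are invented.
demo_segments = {
    "reach":     [(0.10, 0.00, 0.30), (0.15, 0.00, 0.25)],
    "scoop":     [(0.20, 0.00, 0.10), (0.22, 0.02, 0.08)],
    "transport": [(0.30, 0.10, 0.25), (0.35, 0.15, 0.28)],
    "pour":      [(0.40, 0.20, 0.20), (0.42, 0.20, 0.18)],
}

def centroid(points):
    """Mean position of a list of 3-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

# "Grounding": each LLM subtask label gets an anchor in the robot's
# physical state space, learned from the demonstration trajectories.
grounding = {label: centroid(pts) for label, pts in demo_segments.items()}

def classify_subtask(state):
    """Return the subtask whose anchor is closest to the current state."""
    return min(grounding, key=lambda label: math.dist(grounding[label], state))
```

With this mapping in hand, a robot that is nudged off course can query `classify_subtask` with its current position to work out which stage of the task it is actually in.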

The researchers then let the robot perform the task, disrupting it by pushing or nudging it off its path. Instead of stopping and starting over, as it would have previously, the robot was able to self-correct and complete each subtask before moving on to the next.
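That recovery behaviour can be sketched as a small control loop. This is not the MIT implementation; the subtask order, the `run_subtask` policy interface, and the `classify_subtask` helper are all hypothetical, standing in for the learned components described above.

```python
# Assumed subtask decomposition for the marble-scooping example.
SUBTASK_ORDER = ["reach", "scoop", "transport", "pour"]

def execute_with_recovery(get_state, run_subtask, classify_subtask):
    """Run subtasks in order; after a disturbance, resume from wherever
    the robot actually is instead of restarting the whole task."""
    i = 0
    while i < len(SUBTASK_ORDER):
        ok = run_subtask(SUBTASK_ORDER[i])  # False if the robot was disturbed
        if ok:
            i += 1  # subtask completed, move to the next one
        else:
            # Ask the grounding classifier which subtask the robot's
            # current state corresponds to, and restart from there.
            i = SUBTASK_ORDER.index(classify_subtask(get_state()))
```

The key design point mirrors the article: recovery happens at the subtask level, so a nudge during "transport" might send the robot back to "scoop", but never all the way back to the beginning.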

The researchers say the new algorithm is “exciting” and could change the way household robots perform their tasks.

