Figure has introduced Helix, a new Vision-Language-Action (VLA) model designed to enhance the capabilities of humanoid robots in complex environments like homes.
The announcement follows the company's decision to end its collaboration with OpenAI and highlights its growing focus on creating robots that can respond to natural language prompts and adapt to dynamic household settings.
Helix integrates visual data and language commands, enabling robots to understand tasks and execute them in real time. It demonstrates advanced object generalisation, allowing robots to handle thousands of unfamiliar household items simply through verbal instructions.
Designed to control multiple robots simultaneously, Helix can coordinate complex tasks, such as transferring items between robots and organising objects within a home.
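Figure has not published Helix's architecture or any API, so the pattern described above can only be illustrated generically. The sketch below shows the basic vision-language-action loop the article alludes to: an observation pairing a camera frame with a natural-language instruction goes in, and low-level robot actions come out, one inference per control step. Every name here (`Observation`, `Action`, `ToyVLAPolicy`, `control_loop`) is a made-up placeholder, not part of Helix; a real VLA model would replace the stub policy with a neural network that fuses the image and text.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch: none of these names come from Figure's Helix.
# They only illustrate the generic vision-language-action (VLA) pattern.

@dataclass
class Observation:
    camera_frame: bytes   # raw image from the robot's camera
    instruction: str      # natural-language command, e.g. "put the mug away"

@dataclass
class Action:
    joint_targets: List[float]  # desired joint positions for one control step

class ToyVLAPolicy:
    """Stand-in policy: a real VLA model would encode the image and
    instruction jointly and decode actions; this stub returns a fixed pose."""
    def act(self, obs: Observation) -> Action:
        return Action(joint_targets=[0.0] * 7)  # 7-DoF arm assumed for illustration

def control_loop(policy: ToyVLAPolicy, obs: Observation, steps: int = 3) -> List[Action]:
    """Run the observe -> infer -> act cycle, one inference per step."""
    actions = []
    for _ in range(steps):
        actions.append(policy.act(obs))  # real-time inference each control tick
    return actions

actions = control_loop(ToyVLAPolicy(), Observation(b"", "tidy the counter"))
print(len(actions))  # 3
```

The key design point this toy loop captures is that language is an input to the policy at every step, which is what lets a single model handle "thousands of unfamiliar household items" from verbal instructions alone rather than from per-object programming.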
Home robotics poses distinctive challenges: layouts are unpredictable and environments vary from one household to the next. Figure aims to overcome these hurdles through Helix's adaptive learning capabilities rather than per-task programming.
By moving away from time-consuming manual programming, the company is working towards making humanoid robots more accessible and practical for domestic use. Although the project remains in its early stages, the Helix model represents a significant step towards bridging the gap between industrial robotics and home applications.