Santo Domingo. - LG Electronics (LG) today announced LG CLOiD™, an AI-enabled home robot that will be publicly demonstrated for the first time at CES 2026. Designed to perform and coordinate household chores through the home's connected appliances, CLOiD aims to reduce the time and physical effort that daily tasks demand. The system is LG's latest development in AI-based home robotics and smart home platforms, building on the company's Self-Driving AI Home Hub (LG Q9) and the ThinQ ecosystem.
Demonstration of home automation in a real residential environment
At CES 2026, the company will showcase LG CLOiD operating in various home environments. In one scenario, the robot takes milk from the refrigerator and places a croissant in the oven to prepare breakfast. Once the occupants leave the house, LG CLOiD starts the wash cycles, then folds and stacks the garments after drying. These tasks demonstrate LG CLOiD's ability to understand the user's lifestyle and exert precise control over appliances.
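Scenarios like these amount to sequencing appliance commands from a learned routine. A minimal illustrative sketch of that idea follows; the class and method names are hypothetical (LG has not published a CLOiD or ThinQ programming interface for these demos):

```python
# Illustrative only: a hypothetical routine runner inspired by the CES
# demo scenarios. This is NOT an actual LG ThinQ or CLOiD API.
from dataclasses import dataclass, field

@dataclass
class Task:
    appliance: str   # e.g. "refrigerator", "oven", "washer"
    action: str      # e.g. "take_milk", "start_cycle"

@dataclass
class RoutineRunner:
    log: list = field(default_factory=list)

    def run(self, routine):
        for task in routine:
            # A real robot would dispatch each step to motion planning
            # and appliance control; here we simply record the sequence.
            self.log.append(f"{task.appliance}:{task.action}")
        return self.log

# The breakfast scenario from the demo, expressed as an ordered routine:
breakfast = [
    Task("refrigerator", "take_milk"),
    Task("oven", "bake_croissant"),
]

print(RoutineRunner().run(breakfast))
```

The point of the sketch is only that "understanding the user's lifestyle" reduces, at execution time, to an ordered plan of appliance-level actions.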
Each arm has seven degrees of freedom, equivalent to the mobility of a human arm. The shoulder, elbow, and wrist allow forward, backward, rotational, and lateral movements, while each hand includes five independently actuated fingers for fine manipulation. This configuration lets LG CLOiD handle a wide variety of household objects and operate in kitchens, laundry rooms, and living areas. The wheeled base uses autonomous driving technology derived from LG's experience with robot vacuum cleaners and the LG Q9. This format was selected for its stability, safety, and cost-effectiveness, with a low center of gravity that reduces the risk of tipping over if a child or pet bumps into the robot.

LG CLOiD's head as a mobile AI hub for the home

The head functions as a mobile AI hub for the home. It is equipped with a chipset that acts as the brain of LG CLOiD, a screen, a speaker, cameras, various sensors, and voice-based generative AI. Together, these elements allow the robot to communicate with people through spoken language and "facial expressions", learn users' living environments and lifestyle patterns, and control connected appliances based on that learning.

Vision-based Physical AI: VLM and VLA

At the core of LG CLOiD is the company's Physical AI technology, which combines:

- Vision Language Model (VLM), which converts images and video into a structured, language-based understanding.
- Vision Language Action (VLA), which translates visual and verbal inputs into physical actions.

These models have been trained on tens of thousands of hours of domestic task data, allowing LG CLOiD to recognize appliances, interpret user intent, and execute context-appropriate actions, such as opening doors or transferring objects.

Integration with ThinQ and ThinQ ON
LG CLOiD's capabilities are significantly expanded by its integration with LG's smart home ecosystem, which includes the ThinQ™ home AI platform and the ThinQ ON hub. This seamless connectivity allows LG CLOiD to orchestrate a wider range of services across LG's various appliances.

LG Actuator AXIUM: Robotic components for Physical AI
Alongside the home robot, LG introduces LG Actuator AXIUM™, a new brand of robotic actuators for services and robots. An actuator functions as a robot's joint, integrating a motor that generates rotational force, a drive that controls electrical signals, and a reducer that regulates speed and torque. As one of the most critical components
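The division of labor described above (motor for rotational force, drive for control signals, reducer for speed and torque) can be illustrated with the basic gearing relationship a reducer implements: torque is multiplied by the gear ratio while speed is divided by it. The figures below are invented for illustration, not LG Actuator AXIUM specifications:

```python
# Illustrative gear-reduction math only; the numbers are made up and
# are not LG Actuator AXIUM specifications.
def through_reducer(motor_torque_nm: float,
                    motor_speed_rpm: float,
                    gear_ratio: float,
                    efficiency: float = 1.0):
    """A reducer multiplies torque by the gear ratio (scaled by
    mechanical efficiency) and divides speed by the same ratio."""
    output_torque = motor_torque_nm * gear_ratio * efficiency
    output_speed = motor_speed_rpm / gear_ratio
    return output_torque, output_speed

# A small motor (0.5 N·m at 3000 rpm) behind a 100:1 reducer at 90%
# efficiency yields slow, high-torque joint motion:
torque, speed = through_reducer(0.5, 3000, 100, efficiency=0.9)
print(torque, speed)  # 45.0 N·m at 30.0 rpm
```

This trade of speed for torque is why reducers sit at every joint of a manipulator: arm joints need high torque at low speed, the opposite of what small electric motors naturally produce.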







