UBTech's Walker S Lite can safely and stably approach vehicles on moving automotive production lines. By collecting and processing the video streams and depth information from the RGB and RGB-D cameras mounted on the robot's head, chest, and waist, it can inspect components for quality compliance. Core technologies include whole-body collaborative leg-and-arm motion planning, semantic navigation and precise 3D object localization based on visual and depth information, and deep-learning-based module quality inspection. The inspection coverage spans 360° around the vehicle body, including low areas below 0.5 meters, with a detection accuracy above 99%.

Under complex lighting, the robot can illuminate the inspected parts with a handheld LED light and achieve millimeter-level detection through the cameras installed across its body. During quality inspection, the humanoid robot integrates seamlessly with the factory's automation control systems and provides real-time visual feedback on inspection results, enabling it to perform complex tasks accurately and efficiently.
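The inspection flow described above, from model confidence to a pass/fail decision with feedback to the control system, can be sketched roughly as follows. This is a hypothetical illustration: the component names, the `inspect`/`report` functions, and the threshold are assumptions, not Walker S Lite's actual API.

```python
# Illustrative sketch of a deep-learning quality-inspection decision loop.
# All names and values here are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class InspectionResult:
    component: str
    score: float       # model confidence that the part is compliant, in [0, 1]
    compliant: bool

PASS_THRESHOLD = 0.99  # illustrative cutoff, loosely echoing the >99% accuracy claim

def inspect(component: str, score: float) -> InspectionResult:
    """Turn a model confidence score into a pass/fail decision."""
    return InspectionResult(component, score, score >= PASS_THRESHOLD)

def report(results):
    """Summarize results as real-time feedback for the factory control system."""
    failed = [r.component for r in results if not r.compliant]
    return {"total": len(results), "failed": failed}

results = [inspect("door_latch", 0.995), inspect("wheel_hub", 0.42)]
print(report(results))
```

In a real deployment the score would come from a trained vision model per camera view; here it is passed in directly to keep the decision logic visible.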

By combining lightweight design with high-performance servo drives, UBTech has recently developed lightweight humanoid dual arms with a self-weight-to-payload ratio of less than 1, able to carry boxes weighing up to 15 kilograms: "small arms with strong force". The working height covers a range of 0.4 to 1.9 meters, and customized end effectors are supported to adapt to boxes of different sizes. Equipped with six-axis force sensors and whole-body motion control, the Walker S series maintains stable walking during transport, at 4 km/h unloaded and 2 km/h under load. In addition, the Walker S series integrates seamlessly with intelligent manufacturing systems, connecting and collaborating with intelligent devices such as AGVs and unmanned transfer vehicles to accelerate information flow and enhance operational reliability.
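The figures quoted above (15 kg payload, 0.4 to 1.9 m working height, 4 km/h unloaded and 2 km/h loaded) are enough to sanity-check a transport task. The sketch below is a minimal illustration assuming those published specs; the function names are invented for this example.

```python
# Feasibility and timing check for a box-transport task, using the
# published Walker S figures. Function names are illustrative assumptions.
MAX_PAYLOAD_KG = 15.0
MIN_HEIGHT_M, MAX_HEIGHT_M = 0.4, 1.9
SPEED_LOADED_KMH, SPEED_UNLOADED_KMH = 2.0, 4.0

def task_feasible(box_kg: float, pick_h_m: float, place_h_m: float) -> bool:
    """A task is feasible if the box is within payload and both heights are in range."""
    heights_ok = all(MIN_HEIGHT_M <= h <= MAX_HEIGHT_M for h in (pick_h_m, place_h_m))
    return box_kg <= MAX_PAYLOAD_KG and heights_ok

def transport_minutes(distance_m: float, loaded: bool) -> float:
    """Walking time for one leg of the trip at the quoted speeds."""
    speed_m_per_min = (SPEED_LOADED_KMH if loaded else SPEED_UNLOADED_KMH) * 1000 / 60
    return distance_m / speed_m_per_min

print(task_feasible(12.0, 0.5, 1.2))           # 12 kg box, both heights in range
print(round(transport_minutes(100, True), 1))  # 100 m carry at the loaded speed
```

For example, a 100 m loaded carry at 2 km/h takes 3 minutes, so halving the speed under load doubles cycle time for the walking segment.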

When combined with large models, this "intelligent assistant" delivers even more striking performance. Simply give the command "I want to play with a Rubik's Cube", and the robot can identify which of many items is a Rubik's Cube and grab it. It can also track the item in real time, quickly readjusting its grasping position and angle.
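The "find the Rubik's Cube among many items" step amounts to matching the user's request against candidate detections. A rough sketch, with made-up detections and a deliberately simple substring-matching rule standing in for a vision-language model's similarity scores:

```python
# Hypothetical target selection: pick the highest-confidence detection
# whose label matches the spoken request. Real systems would score
# request/label similarity with a vision-language model instead.
def pick_target(request, detections):
    """Return the best-matching detection, or None if nothing matches."""
    candidates = [d for d in detections if d["label"] in request.lower()]
    return max(candidates, key=lambda d: d["confidence"], default=None)

detections = [
    {"label": "rubik's cube", "confidence": 0.92, "center": (0.31, 0.55)},
    {"label": "mug",          "confidence": 0.97, "center": (0.70, 0.40)},
]
print(pick_target("I want to play with a Rubik's Cube", detections))
```

Note that the mug, despite its higher detector confidence, is never considered: matching the request filters candidates before confidence breaks ties.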

UBTech has carried out comprehensive technological iteration in its large-model research and development. It has not only iterated the model architecture of its multimodal large models to achieve stronger multimodal perception and cognition, but also updated its large-model agent framework: a framework encompassing speech recognition, word embedding, large-model inference, speech generation, and tool invocation, enabling flexible coordination with semantic VSLAM navigation, grasping, fine manipulation, and other functions.
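The agent framework described above is essentially a pipeline that turns speech into tool calls. The sketch below shows only that wiring; every function is a stub standing in for a real component (the ASR output, the planned actions, and the tool names are all invented for illustration).

```python
# Minimal sketch of a speech-to-action agent pipeline: speech recognition,
# large-model inference (planning), then tool invocation. All components
# are stubs; only the overall wiring reflects the framework described.
TOOLS = {
    "navigate": lambda goal: f"navigating to {goal} via semantic VSLAM",
    "grasp":    lambda obj:  f"grasping {obj}",
}

def speech_to_text(audio: bytes) -> str:
    # Stub ASR: a real system would transcribe the audio.
    return "bring me the red box"

def plan_with_llm(text: str):
    # Stub for large-model inference mapping a request to tool calls.
    return [("navigate", "shelf A"), ("grasp", "red box")]

def run_agent(audio: bytes):
    """Full loop: transcribe, plan, then invoke each planned tool in order."""
    text = speech_to_text(audio)
    actions = plan_with_llm(text)
    return [TOOLS[name](arg) for name, arg in actions]

print(run_agent(b""))
```

Speech generation would close the loop by reporting each tool's result back to the user; it is omitted here to keep the planning-to-invocation path in focus.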