Introduction: A New Front in Robot Security
Humanoid robots promise convenience and enhanced capabilities, but recent research uncovers a troubling reality: the Unitree G1, a popular humanoid model, can be compromised through its Bluetooth setup process. Alias Robotics’ analysis shows a chain of weaknesses—from weak BLE provisioning to unprotected data transmissions—that collectively create an avenue for espionage, unauthorized access, and disruptive cyber activity. This article summarizes the core findings, their implications, and what the industry must do to address them.
BLE Provisioning: A Weak Link in Startup Procedures
The investigators found that the G1 relies on Bluetooth Low Energy (BLE) to receive the Wi‑Fi network name and password during setup. The service does not sanitize what users send over this channel, so an attacker within BLE range can inject commands and potentially gain root access via the provisioning daemon. A striking detail is that all G1 units, along with other models from the same manufacturer, share the same hardcoded AES key. A single compromised key therefore unlocks every device in a fleet, making the attack scalable and especially dangerous for organizations deploying multiple units.
Exploitation requires only proximity to the BLE interface and knowledge of these universal credentials. In practice, an attacker could establish control over the robot’s software provisioning, giving themselves persistent access and the ability to alter credentials or add remote accounts. The result is a single, vulnerable ingress point that undermines the robot’s overall security posture.
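The injection risk is easiest to see in code. The sketch below is a minimal illustration, not the G1's actual daemon: it assumes a provisioning routine that interpolates the attacker-controlled SSID into a shell command (the `nmcli` invocation is hypothetical) and contrasts it with an argv-based call that leaves no room for shell metacharacters.

```python
def build_wifi_cmd_unsafe(ssid: str, password: str) -> str:
    # Vulnerable pattern: credentials interpolated directly into a shell line.
    # A malicious SSID such as  "; touch /tmp/pwned; echo "  escapes the
    # intended command, and whatever follows runs as the daemon's user.
    return f'nmcli dev wifi connect "{ssid}" password "{password}"'


def build_wifi_cmd_safe(ssid: str, password: str) -> list:
    # Safer pattern: pass credentials as discrete argv entries (e.g. to
    # subprocess.run with shell=False); the shell never parses them.
    return ["nmcli", "dev", "wifi", "connect", ssid, "password", password]


malicious_ssid = '"; touch /tmp/pwned; echo "'
print(build_wifi_cmd_unsafe(malicious_ssid, "x"))  # metacharacters survive intact
print(build_wifi_cmd_safe(malicious_ssid, "x"))    # SSID stays a single argument
```

The unsafe variant shows why "does not adequately filter what users send" translates directly into code execution on the robot.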
Weak Encryption: A Two-Layer Scheme, Twice Broken
Beyond provisioning, the study scrutinizes the robot's configuration encryption. The outer layer uses Blowfish in ECB mode, which encrypts identical plaintext blocks into identical ciphertext blocks, a configuration long known to be insecure. Compounding the issue, every Unitree G1 uses the same 128-bit key. Once that key is recovered from the software on one device, configurations from any other unit become trivial to decrypt, because all of the ciphertext derives from the same shared secret.
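Python's standard library has no Blowfish, so the sketch below substitutes a toy keyed block transform purely to demonstrate the ECB property the researchers exploited: each block is encrypted independently, so repeated plaintext blocks leak through as repeated ciphertext blocks, and a single fleet-wide key means one key recovery opens every unit's files.

```python
import hashlib

BLOCK = 8  # Blowfish's block size in bytes


def toy_ecb_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for a block cipher in ECB mode (NOT Blowfish, and not
    # invertible): each 8-byte block is transformed independently and
    # deterministically under the key, which is exactly the property that
    # makes ECB leak plaintext structure.
    out = b""
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK].ljust(BLOCK, b"\x00")
        out += hashlib.sha256(key + block).digest()[:BLOCK]
    return out


fleet_key = b"same-key-on-every-robot"        # one hardcoded secret, fleet-wide
config = b"AAAAAAAA" * 3 + b"BBBBBBBB"        # config with repeated blocks
ct = toy_ecb_encrypt(fleet_key, config)
blocks = [ct[i:i + BLOCK] for i in range(0, len(ct), BLOCK)]
print(blocks[0] == blocks[1] == blocks[2])    # True: repetition leaks through ECB
```

Because the transform depends only on the key and the block contents, the three identical plaintext blocks produce three identical ciphertext blocks, and the same `fleet_key` decrypts (here, matches) output from any robot in the fleet.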
The inner layer relies on a Linear Congruential Generator (LCG) for randomization. The researchers reconstructed the generator's parameters, and because the seed space is only 32 bits, the seed can be brute-forced with modest resources. Taken together, these weaknesses render configuration files readable and expose service settings, process names, and network details across the fleet.
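To see why a 32-bit seed is inadequate, consider the sketch below. The LCG parameters are illustrative (glibc-style constants, not necessarily the robot's), and the demo searches only a small window of the seed space; sweeping the full 2^32 candidates the same way is a matter of hours on commodity hardware.

```python
def lcg(seed: int):
    # Illustrative 32-bit LCG; multiplier/increment are glibc-style constants,
    # assumed here for demonstration rather than taken from the robot.
    state = seed & 0xFFFFFFFF
    while True:
        state = (1103515245 * state + 12345) & 0xFFFFFFFF
        yield state


def recover_seed(observed, search_space):
    # Replay every candidate seed and compare against the observed outputs.
    for seed in search_space:
        gen = lcg(seed)
        if all(next(gen) == word for word in observed):
            return seed
    return None


secret_seed = 123_456
gen = lcg(secret_seed)
observed = [next(gen) for _ in range(4)]  # a few leaked generator outputs

# The demo window keeps the run instant; the point is that even the full
# 2**32 space yields to the same loop in bulk-CPU time.
print(recover_seed(observed, range(0, 200_000)))  # -> 123456
```

Once the seed is recovered, the entire inner keystream can be regenerated, which is why reconstructing the generator reduces the scheme to the strength of its 32-bit seed.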
Data Exfiltration: Live Traffic to Chinese Servers
Network telemetry paints a worrying picture. The G1 periodically transmits data to servers in China, including battery status, joint torque, motion state, and sensor data from cameras and microphones. Every five minutes, the robot dispatches JSON packets to two addresses on port 17883, reconnecting automatically if the connection drops. A separate process maintains a live WebSocket session with a third server over a TLS channel that skips certificate verification. This combination creates a pathway for continuous, covert data exchange that could include audio and video streams.
Notably, users are not informed about these transmissions, and there are no visible indicators or consent options. In Europe, these practices conflict with GDPR requirements for lawful processing and user consent. In the United States, California-style privacy laws would demand an opt-out mechanism for such tracking.
Internal Communication: Multiple Doors for Attackers
The robot’s internal architecture features several communication frameworks, including DDS/RTPS for sensor-actuator messaging and MQTT/WebRTC for cloud connections. Alarmingly, DDS traffic is unencrypted, allowing anyone on the same local network to observe it. The WebRTC client likewise skips TLS certificate validation, enabling server impersonation by an attacker with network access. Combined with the Bluetooth weakness and weak encryption, the system presents multiple attack paths across firmware, update channels, and cloud services.
From Surveillance Tool to Offensive Vector
Researchers demonstrated two compelling scenarios. First, a robot can function as a covert surveillance device, transmitting microphone audio, camera video, and spatial data as soon as it powers up. Second, they showcased how a Cybersecurity AI framework could enable autonomous reconnaissance, vulnerability scanning, and exploitation planning. While the tests did not execute destructive payloads, they reveal a plausible route for robots to be converted into espionage or cyber-attack platforms.
Industry Takeaways and Next Steps
The study emphasizes that static defenses and periodic audits are insufficient for such integrated devices. The authors advocate for adaptive, automated security systems powered by Cybersecurity AI capable of real-time threat detection and containment. In addition to upgrading encryption, isolating sensitive communications, and enforcing strict provisioning controls, manufacturers should implement transparent data practices with explicit user consent and robust opt-out options. Until then, the security of humanoid robots remains a critical concern for users and operators alike.