AI Robots Have Proven Incredibly Susceptible to Hacker Attacks

A new IEEE study reveals that hacking AI-powered robots is just as easy as tricking chatbots. Researchers have demonstrated that simple text commands can lead robots to perform dangerous actions.

According to HotHardware, while hacking devices like iPhones or gaming consoles requires specialized tools and technical expertise, compromising large language models (LLMs) like ChatGPT is significantly simpler. All it takes is a cleverly crafted script to mislead the AI into believing a request is valid or that certain restrictions can be temporarily bypassed. For example, presenting a prohibited topic as part of a seemingly innocent “bedtime story” can prompt the model to produce unintended responses, including instructions for creating hazardous substances or devices—something the system should block immediately.

Hacking LLMs is so straightforward that even casual users, not just cybersecurity experts, can exploit them.

This vulnerability has raised serious concerns within the Institute of Electrical and Electronics Engineers (IEEE), a prominent U.S.-based engineering association. Their recent study shows that AI-controlled robots are susceptible to similar attacks. Researchers have proven that such cyberattacks could, for instance, cause autonomous vehicles to intentionally run over pedestrians.

The vulnerabilities extend beyond experimental prototypes to well-known devices, such as the Figure robots recently showcased at BMW’s factory or the Spot robot dogs from Boston Dynamics. These systems, which leverage technologies similar to ChatGPT, can be misled through specific prompts, leading them to take actions entirely contrary to their intended purpose.

In their experiment, the researchers targeted three systems: the Unitree Go2 robot, the Clearpath Robotics Jackal autonomous vehicle, and NVIDIA Dolphins LLM, a simulator for self-driving cars. Using a tool that automates the creation of malicious text prompts, they achieved unsettling results—all three systems were successfully hacked within days, with a 100% success rate.

The IEEE study also cites researchers from the University of Pennsylvania, who observed that in some cases, the AI not only carried out harmful commands but even offered additional suggestions. For instance, robots designed to search for weapons recommended using furniture as improvised tools to harm people. Experts emphasize that despite the impressive capabilities of modern AI systems, they remain predictive mechanisms without an understanding of the context or the consequences of their actions. Therefore, oversight and accountability for their use must firmly remain in human hands.

Scroll to Top