Researchers from UC Santa Cruz and Johns Hopkins demonstrated that printed, customized road signs can indirect‑prompt autonomous systems, effectively “hijacking” behavior captured through cameras. In controlled simulations, their technique succeeded against self-driving setups 81.8% of the time. Drones showed similar weaknesses when the malicious text entered the field of view. The work highlights how language-like cues in the visual channel can subvert downstream decision pipelines. 🛑🚗 theregister.com
The team boosted attack success by tweaking typography, phrasing and even language choices on the signs. In demonstrated scenarios, systems could be nudged toward dangerous outcomes, such as ignoring pedestrians or misidentifying vehicles. Because the instruction is embedded in the environment, traditional network perimeter defenses are irrelevant. This turns the public visual space into an attack surface that is cheap to stage and hard to control. ⚠️🛰️ theregister.com
The researchers plan to continue probing these vectors and to build mitigations, signaling an urgent security agenda for autonomy. Their findings argue for stronger safeguards wherever camera-captured prompts can influence control logic. For product teams, the message is clear: perception stacks must be hardened against instruction-like artifacts in the scene. Expect follow-on work focusing on detection and resilience techniques informed by these trials. 🔐 theregister.com