On March 5, Web3 security firm GoPlus reported a security incident involving the AI development tool OpenClaw. According to BlockBeats, the issue arose while the system was executing automated tasks: it incorrectly constructed a Bash command when creating a GitHub Issue, resulting in a command injection that exposed a large number of sensitive environment variables.
The incident involved an AI-generated string containing backticks around 'set', which Bash interpreted as command substitution and executed automatically. Because 'set' was run with no arguments, it printed every variable in the current shell, and more than 100 lines of sensitive information, including Telegram keys and authentication tokens, were published directly in the GitHub Issue.
GoPlus recommends using API calls rather than directly concatenating shell commands in AI automation development and testing scenarios. It also advises isolating environment variables under the principle of least privilege, disabling high-risk execution modes, and adding manual review to critical operations.
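The least-privilege advice can be sketched concretely. In this hypothetical example (the variable names are illustrative), a subprocess is launched with an explicit allowlist of environment variables instead of inheriting the parent's full environment, so even a complete dump inside the child cannot expose unrelated secrets:

```python
import os
import subprocess

# Pretend the parent process holds a sensitive token, as in the incident.
os.environ['TELEGRAM_KEY'] = 'sensitive-value'

# Least privilege: pass only the variables the child actually needs,
# rather than letting it inherit everything.
allowed = {'PATH': os.environ.get('PATH', '')}
result = subprocess.run(['env'], env=allowed,
                        capture_output=True, text=True)

print('TELEGRAM_KEY' in result.stdout)  # False: the secret never crossed
```

Combined with avoiding shell string concatenation, this bounds the blast radius: a stray 'set' or similar dump in the child can reveal at most the allowlisted variables.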