Operant AI launches runtime security for AI agents

Operant AI has announced the launch of CodeInjectionGuard, a new capability within its Agent Protector product designed to detect and block malicious code before it can be executed by AI agents operating on endpoints. The launch addresses a rapidly expanding attack surface driven by the rise of agentic AI systems capable of autonomously downloading packages, executing shell commands, and interacting with live infrastructure at machine speed.

The announcement follows two significant security developments that highlight a critical gap in current AI security frameworks, where the pace of vulnerability discovery has accelerated, but the ability to prevent runtime attacks has not kept up.

In March, a developer’s machine was compromised by a poisoned version of LiteLLM, an open-source LLM routing library uploaded to PyPI just six minutes before it was automatically downloaded by an AI-powered IDE as a transitive dependency. The malicious package extracted SSH keys, cloud credentials, Kubernetes tokens, and other sensitive data, attempted lateral movement into Kubernetes clusters, and established persistence mechanisms within seconds of download. The developer had not knowingly installed the package, as the action was performed by an AI agent.

This incident underscores the emerging security challenge of the agentic era, where AI agents operate at speeds beyond human monitoring, dynamically pulling dependencies from public registries and executing unfamiliar code in real time.

While advancements in AI-powered vulnerability discovery, including Anthropic’s Claude Mythos model, have significantly improved the identification of flaws in code before deployment, these approaches remain limited to pre-deployment stages. Runtime attacks, however, emerge at the point of execution, often involving code that did not exist during earlier scans and therefore cannot be detected by traditional CI/CD pipelines or static analysis tools.

CodeInjectionGuard addresses this gap by operating at runtime, intercepting and inspecting packages retrieved through AI agent dependency chains before execution. It evaluates shell commands invoked by agents in real time, enforces policy controls when accessing sensitive files such as SSH keys and cloud credentials, and detects and blocks dynamically generated or obfuscated code before it is executed.
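The kinds of runtime checks described above can be sketched in miniature: a policy layer that vets file accesses against a sensitive-path list and scans code for dynamic-execution markers before an agent runs it. Everything here is a hypothetical illustration of the general technique; the path list, patterns, and function names are assumptions, not Operant AI's actual rules or API.

```python
import re

# Illustrative sensitive locations an agent should not read by default.
SENSITIVE_PREFIXES = (
    "~/.ssh/",
    "~/.aws/credentials",
    "/var/run/secrets/kubernetes.io",
)

# Illustrative markers of dynamically generated or obfuscated code.
SUSPICIOUS_PATTERNS = [
    re.compile(r"\beval\s*\("),
    re.compile(r"\bexec\s*\("),
    re.compile(r"base64\.b64decode\s*\("),
]

def file_access_allowed(path: str) -> bool:
    """Return True if an agent-requested file access passes policy."""
    return not any(path.startswith(p) for p in SENSITIVE_PREFIXES)

def code_looks_safe(source: str) -> bool:
    """Return True if code contains no dynamic-eval or decode markers."""
    return not any(p.search(source) for p in SUSPICIOUS_PATTERNS)

print(file_access_allowed("~/.ssh/id_rsa"))                     # False
print(code_looks_safe("exec(base64.b64decode(payload))"))       # False
print(code_looks_safe("print('hello')"))                        # True
```

A real runtime guard would hook the interpreter or OS rather than pattern-match strings, but the sketch shows the policy shape: decisions made at the point of execution, not at scan time.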

According to Operant AI, the capability would have prevented the LiteLLM supply chain attack by intercepting and analysing the compromised package before execution, thereby stopping credential theft, persistence installation, and attempted lateral movement.

Priyanka Tembey, CTO and Co-Founder, Operant AI, said, “Finding vulnerabilities and stopping attacks are fundamentally different problems, and the industry is solving them at very different speeds. AI agents can install packages, execute code, and access sensitive infrastructure in seconds, faster than any human reviewer and faster than any static analysis tool can respond. CodeInjectionGuard was built for this reality: defense at runtime, at the point of execution, where the fight actually happens.”

CodeInjectionGuard is now available as part of Operant AI’s Agent Protector for teams deploying AI agents across development and production environments.
