Anyscale Tackles Major Vulnerability in Ray Framework, Yet Thousands Remain Unprotected

The open-source Ray framework is widely used, with thousands of organizations relying on it for complex, resource-intensive workloads. Notably, OpenAI used Ray to train GPT-3, underscoring the framework's significance in the world of large language models (LLMs).

Recently, the discovery of the “ShadowRay” vulnerability raised serious concerns. For seven months, this vulnerability enabled attackers to access sensitive AI production workloads at thousands of companies, compromising computing power, credentials, passwords, keys, tokens, and other critical data.

While Anyscale, the framework's maintainer, initially disputed the vulnerability's severity, it has since released new tools to help users verify whether their ports are exposed. “In light of reports of malicious activity, we have moved quickly to provide tools for verifying the proper configuration of clusters to prevent accidental exposure,” stated an Anyscale spokesperson.

The vulnerability, tracked as CVE-2023-48022, exposes the Ray Jobs API to remote code execution: anyone with network access to the dashboard can submit arbitrary jobs without authentication, as Oligo Security revealed in a recent report.
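To make that attack surface concrete, here is a minimal sketch using Ray's standard job-submission client. The head-node address is hypothetical and the entrypoint is deliberately harmless; on a vulnerable deployment, any shell command would run the same way.

```python
# Minimal sketch: submitting a job to an exposed Ray dashboard.
# The address below is hypothetical; Ray's dashboard listens on
# port 8265 by default, and by default no credentials are required.
from ray.job_submission import JobSubmissionClient

# Anyone who can reach this port can construct a client.
client = JobSubmissionClient("http://head-node.example.internal:8265")

# The entrypoint is an arbitrary shell command executed on the cluster,
# which is why an exposed dashboard is equivalent to remote code execution.
job_id = client.submit_job(entrypoint="echo 'hello from an unauthenticated caller'")
print(f"Submitted job: {job_id}")
```

Because this is the API working as designed, Anyscale's position has been that the dashboard was never meant to be reachable from untrusted networks in the first place.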

Although Anyscale first characterized the issue as expected behavior, it has now introduced the Open Ports Checker, a tool that simplifies identifying unexpectedly open ports. By default, the client-side script contacts a pre-configured Anyscale server and returns either an “OK” message or a “WARNING” report about open ports.

A "WARNING" means the server detects something on the port, but does not necessarily indicate open access to unauthenticated traffic, as the script does not determine what is running on that port. An “OK” response indicates that no connections could be established to any ports. However, Anyscale cautions that this response does not guarantee that no ports are open due to potential configurations, such as firewalls or NAT rules.

Anyscale plans to host the tests so the community can explicitly check these network paths. The repository is available under the Apache 2.0 license and can be deployed on any Ray head or worker node; it works across all Ray versions and enumerates existing ports via Ray APIs. The tool can also be configured to send test network calls to a lightweight web server, or users may direct the calls to their own servers if preferred.
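The report says the tool enumerates ports via Ray APIs; since the exact calls are not described here, the snippet below sketches the same idea generically with psutil instead. Treat it as an assumption-level stand-in, not the tool's implementation.

```python
# Generic sketch of enumerating listening TCP ports on a node;
# Anyscale's tool reportedly does this via Ray APIs rather than psutil.
import psutil

def listening_ports() -> set[int]:
    """Collect TCP ports currently in LISTEN state on this machine."""
    ports = set()
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr:
            ports.add(conn.laddr.port)
    return ports

if __name__ == "__main__":
    for port in sorted(listening_ports()):
        print(f"listening on port {port}")
```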

The “ShadowRay” vulnerability went largely unrecognized because it was disputed and never patched, so standard vulnerability scans typically overlooked it; hence the label “shadow vulnerability.” According to Oligo Security, the exposure affected:

- AI production workloads

- Access to cloud environments (AWS, GCP, Azure, Lambda Labs) and sensitive cloud services

- Kubernetes API access

- Production database credentials

- Tokens for OpenAI, Hugging Face, Stripe, and Slack

As of March 28, Censys had identified 315 affected hosts globally, with over 77% exposing a login page and several exposing file directories.

Experts warn that “ShadowRay” poses significant risks because it targets underlying infrastructure rather than specific applications. Nick Hyatt, director of threat intelligence at Blackpoint Cyber, points out that threat actors can gain far more by compromising infrastructure than through theoretical AI-driven attacks.

Many assume that this infrastructure is secure, which breeds complacency about the data that LLMs consume and creates opportunities for attackers to access large volumes of sensitive information. Neil Carpenter of Orca Security notes that open-source AI projects are often released without robust security measures, leaving users to rely on inadequate documentation to secure critical components.

The “ShadowRay” episode underscores the need for broader discussion of secure development practices, especially in a landscape where speed often overshadows security. Companies adopting LLMs must prioritize data hygiene. “You can’t indiscriminately input an entire server into an LLM and expect smooth results, particularly when handling sensitive data,” warns Hyatt.

Organizations must validate datasets and understand regulatory requirements, particularly when developing on-premises LLMs. Questions about data provenance and validation become critical to ensuring that models deliver accurate insights as part of regular business operations.

Ultimately, the challenges posed by “ShadowRay” are not solely technological; they involve people, processes, and technology. As generative AI continues to evolve, Hyatt predicts a rise in attacks on infrastructure rather than direct exploitation through generative AI: with data readily available and exploits commonplace, attackers may find it easier to compromise the underlying systems than to wield AI directly for intrusion.
