Unauthorized Data Access using Azure SAS URLs served as Citation in LLM Application

Large Language Models (LLMs) are revolutionizing the way applications process and retrieve information. The particular implementation is of an LLM-based application that integrated with Azure services to allow users to query a knowledge source and retrieve summarized answers or document-specific insights. A critical vulnerability was identified during a review of this implementation which was later mitigated to avoid the risk exposure.

Implementation
The application leveraged the power of Retrieval-Augmented Generation (RAG) and LLM pipelines to extract relevant information from uploaded documents and generate accurate responses.

Document Management: Organization could upload documents to Azure Blob Storage from where the users could query information. The end users did not have the ability to upload documents.

Query Processing: The backend fetched content from Blob Storage, processed it using RAG pipelines, and generated responses through the LLM.

Transparency: Responses included citations with direct URLs to the source documents, allowing users to trace the origins of the information.

 

The design ensured seamless functionality, but the citation mechanism introduced a significant security flaw.

Identified Vulnerability
During testing, it was found that the application provided users with Shared Access Signature (SAS) URLs in the citations.

While intended to allow document downloads, this approach inadvertently created two major risks:

Unauthorized Data Access: Users were able to use the SAS URLs shared in citations to connect directly to the Azure Blob Storage using Azure Storage Explorer. This granted them access to the entire blob container, allowing them to view files beyond their permission scope and exposing sensitive data. Here is the step by step guide: - 

Select the appropriate Azure Resource

Select the Connection Method (we already have the SAS URL from our response)

 Enter the SAS URL from the response

Once we click on Connect, the Connection details are summarized: -
 
Complete the "Connect" process and observe that all container is accessible (with a lot more data than intended).


Malicious Uploads: Write permissions were inadvertently enabled on the blob container. Using Azure Storage Explorer, users can upload files to the blob storage which was not allowed. These files posed a risk of indirect prompt injection during subsequent LLM queries, potentially leading to compromised application behavior (more details on Indirect Prompt Injection can be read at - https://blog.blueinfy.com/2024/06/data-leak-in-document-based-gpt.html )

The combination of these two risks demonstrated how overly permissive configurations and direct exposure of SAS URLs could significantly compromise the application’s security and lead to unintended access of all documents provided to the LLM for processing.

Fixing the Vulnerability
To address these issues, the following actions were implemented:
Intermediary API: A secure API replaced direct SAS URLs for citation-related document access, enforcing strict access controls to ensure users only accessed authorized files.

Revised Blob Permissions: Blob-level permissions were reconfigured to allow read-only access for specific documents, disable write access for users, and restrict SAS tokens with shorter lifespans and limited scopes.

With these fixes in place, the application no longer exposed SAS URLs directly to users. Instead, all file requests were routed through the secure API, ensuring controlled access. Unauthorized data access and malicious uploads were entirely mitigated, reinforcing the application’s security and maintaining user trust.

This exercise highlights the importance of continuously evaluating security practices, particularly in AI/ML implementations that handle sensitive data.

Article by Hemil Shah & Rishita Sarabhai