Understanding Cross-Site Scripting and SOQL Vulnerabilities in Salesforce

Salesforce remains a popular choice among CRM platforms. From customizing workflows to building applications, businesses can seamlessly achieve their goals with this cloud-based platform. What’s more, extended functionality like Visualforce pages, Lightning Web Components (LWC), Aura Components, and Apex significantly aids enterprise development.
 
In the past couple of blogs, we looked at a few vulnerabilities and the security controls provided by the Salesforce platform. In this blog, we discuss two more common and severe vulnerabilities: Cross-Site Scripting (XSS) and SOQL Injection.

Cross-Site Scripting (XSS) in Salesforce

XSS occurs when an application takes untrusted user input and returns it to the browser in a response without encoding it. Attackers take this opportunity to run malicious JavaScript in the victim’s browser and extract sensitive information.
 
In Salesforce, XSS can appear in:
  • Visualforce Pages: Using raw {!} expressions without escaping.
  • Aura Components / LWC: Unsafe DOM manipulation or improper attribute use.
  • JS Controllers: Passing unvalidated data straight into the user interface.
 
How to Detect XSS in Code
  • Unescaped Output in Visualforce → Flag <apex:outputText value="{!userInput}" escape="false"/> or direct use of {!userInput} without proper encoding
  • Improper aura:unescapedHtml in Aura → Any use of aura:unescapedHtml or dynamic attributes directly rendering user data
  • Direct innerHTML assignment in LWC/JS → Using element.innerHTML = userInput instead of textContent, as well as insecure JavaScript functions such as html() or eval()
  • Input Fields Without Validation → Inputs accepted from users (comments, descriptions, messages) should be validated or sanitized before use
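As a rough illustration of the last point, input validation can be done with an allow-list check before the value is used anywhere in the UI. A minimal JavaScript sketch (the field name, length limit, and character pattern are assumptions for illustration):

```javascript
// Minimal allow-list validation sketch (hypothetical field and pattern).
// Accepts only letters, digits, spaces, and a few safe punctuation marks.
function isValidComment(input) {
  return typeof input === 'string'
    && input.length <= 500
    && /^[A-Za-z0-9 .,!?'-]*$/.test(input);
}

console.log(isValidComment('Great product!'));            // true
console.log(isValidComment('<script>alert(1)</script>')); // false
```

The allow-list approach rejects markup characters outright, so a payload never reaches the rendering layer in the first place.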
Vulnerable Example in Salesforce
 
Visualforce Page (Vulnerable):
<apex:page controller="XSSExampleController">
       <apex:form>
           <apex:inputText value="{!userInput}" label="Enter your name:"/>
           <apex:commandButton value="Submit" action="{!processInput}"/>
           <br/>
           <!-- Directly rendering user input -->
           <apex:outputText value="{!userInput}" escape="false"/>
       </apex:form>
</apex:page>

Controller:
public class XSSExampleController {
       public String userInput {get; set;}
       public void processInput() {
           // No validation or sanitization here
       }
}

If an attacker enters the payload <script>alert('XSS Attack!');</script>, it will execute in the victim’s browser.

Safe Example
 
Visualforce Page (Safe):
<apex:outputText value="{!userInput}" escape="true"/>

By default, the escape attribute is set to true.
Or manually encode:
<apex:outputText value="{!HTMLENCODE(userInput)}"/>

In LWC, avoid assigning user input directly to innerHTML/html():

// Vulnerable
element.innerHTML = userInput;

// Safe
element.textContent = userInput;
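When user input must appear inside markup, encode it first. Here is a minimal JavaScript sketch of HTML entity encoding, roughly what Visualforce’s HTMLENCODE does on the server side (this helper is illustrative, not a Salesforce API):

```javascript
// Illustrative HTML entity encoder (not a Salesforce API).
// Converts markup-significant characters to entities so the
// browser renders the payload as inert text.
function htmlEncode(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(htmlEncode("<script>alert('XSS')</script>"));
// &lt;script&gt;alert(&#39;XSS&#39;)&lt;/script&gt;
```

Note that ampersands are encoded first; otherwise the entities produced by the later replacements would be double-encoded.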


Impact
By exploiting an XSS vulnerability, an attacker can steal the user’s session. With access to the session, the attacker can perform tasks on behalf of the user, which can result in data loss. In other cases, an attacker can redirect the user to a phishing site or deface the application.


SOQL Injection in Salesforce 

 
What is SOQL Injection?
SOQL Injection is similar to SQL Injection in traditional applications. It occurs when untrusted user input is combined directly into a dynamic SOQL query string. Once injected, attackers may be able to access unauthorized data or bypass restrictions by manipulating the query.

How to detect in Code 
When reviewing Salesforce Apex code, focus on how queries are built:
  • Dynamic SOQL with Concatenation → Flag any use of Database.query() or string concatenation (+) with user-supplied input
String q = 'SELECT Id FROM Account WHERE Name = \'' + userInput + '\'';
Database.query(q);   
  • Red flag when user input (e.g., from ApexPages.currentPage().getParameters(), form fields, or API requests) is directly concatenated
String searchKey = ApexPages.currentPage().getParameters().get('search');
String query = 'SELECT Id FROM Contact WHERE Email LIKE \'%' + searchKey + '%\'';
List<Contact> contacts = Database.query(query);

Apex Controller (Vulnerable):
public with sharing class SOQLInjectionExample {

    @AuraEnabled(cacheable=true)
    public static List<Account> searchAccounts(String inputName) {
        // VULNERABLE: direct string concatenation
        String query = 'SELECT Id, Name FROM Account WHERE Name LIKE \'%' + inputName + '%\'';
        return Database.query(query);
    }
}

Here, an attacker could supply input such as:
test%' OR Name LIKE '
This would alter the query to return all accounts, bypassing the intended restrictions.
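To see why, the string concatenation can be reproduced outside Apex. This small JavaScript sketch mirrors the query shape of the vulnerable controller above and prints the query text before and after injection:

```javascript
// Rebuilding the vulnerable query string to show the effect of injection.
function buildQuery(inputName) {
  return "SELECT Id, Name FROM Account WHERE Name LIKE '%" + inputName + "%'";
}

console.log(buildQuery('Acme'));
// SELECT Id, Name FROM Account WHERE Name LIKE '%Acme%'

console.log(buildQuery("test%' OR Name LIKE '"));
// SELECT Id, Name FROM Account WHERE Name LIKE '%test%' OR Name LIKE '%'
```

The injected quote closes the intended literal, and the trailing fragment re-opens one, so the surrounding %...% delimiters still line up and the query stays syntactically valid while matching every record.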

Secure Code in Apex:
  • Safe queries use bind variables (:variable) instead of concatenation.
String searchKey = ApexPages.currentPage().getParameters().get('search');
String likePattern = '%' + searchKey + '%';
List<Account> accounts = [SELECT Id, Name FROM Account WHERE Name LIKE :likePattern];

  • Verify Input Validation and Escaping - even with dynamic queries, Salesforce provides String.escapeSingleQuotes() to neutralize malicious input.
String userInput = ApexPages.currentPage().getParameters().get('name');
userInput = String.escapeSingleQuotes(userInput);
String query = 'SELECT Id FROM Account WHERE Name = \'' + userInput + '\'';
List<Account> accList = Database.query(query);
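As a rough JavaScript sketch of what String.escapeSingleQuotes() does, each single quote gets a backslash prefix so it can no longer terminate the string literal in the query (illustrative only; the real implementation lives in the Apex platform):

```javascript
// Illustrative equivalent of Apex String.escapeSingleQuotes():
// prefixes every single quote with a backslash so it cannot
// close the quoted literal in a dynamic query.
function escapeSingleQuotes(input) {
  return String(input).replace(/'/g, "\\'");
}

console.log(escapeSingleQuotes("O'Brien"));          // O\'Brien
console.log(escapeSingleQuotes("x' OR Name != '"));  // x\' OR Name != \'
```

Note that escaping single quotes does not neutralize LIKE wildcards such as % and _, so bind variables remain the safer option for LIKE queries.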



Impact
As with SQL Injection, successful exploitation of SOQL Injection can result in data loss, which in turn can lead to compliance violations and damage customer trust and the company’s reputation.

 

Conclusion

Both XSS and SOQL Injection are serious vulnerabilities in Salesforce applications, resulting from improper handling of user input. XSS works on the client side, in how data is rendered, and attackers use it to execute scripts within the user's browser. SOQL Injection works on the server side, allowing attackers to access Salesforce data through an insecure query. We have looked at different vulnerabilities from the code perspective in this series of blogs. In the next blog, we will see how a configuration review can help secure Salesforce applications, with a real-world example from one of our engagements.

Unauthorized Data Access using Azure SAS URLs served as Citation in LLM Application

Large Language Models (LLMs) are revolutionizing the way applications process and retrieve information. This particular implementation was an LLM-based application integrated with Azure services, allowing users to query a knowledge source and retrieve summarized answers or document-specific insights. A critical vulnerability was identified during a review of this implementation and was later mitigated to remove the risk exposure.

Implementation
The application leveraged the power of Retrieval-Augmented Generation (RAG) and LLM pipelines to extract relevant information from uploaded documents and generate accurate responses.

Document Management: The organization could upload documents to Azure Blob Storage, from where users could query information. End users did not have the ability to upload documents.

Query Processing: The backend fetched content from Blob Storage, processed it using RAG pipelines, and generated responses through the LLM.

Transparency: Responses included citations with direct URLs to the source documents, allowing users to trace the origins of the information.

 

The design ensured seamless functionality, but the citation mechanism introduced a significant security flaw.

Identified Vulnerability
During testing, it was found that the application provided users with Shared Access Signature (SAS) URLs in the citations.

While intended to allow document downloads, this approach inadvertently created two major risks:

Unauthorized Data Access: Users were able to use the SAS URLs shared in citations to connect directly to the Azure Blob Storage using Azure Storage Explorer. This granted them access to the entire blob container, allowing them to view files beyond their permission scope and exposing sensitive data. Here is the step-by-step guide:

  • Select the appropriate Azure Resource.
  • Select the Connection Method (we already have the SAS URL from the response).
  • Enter the SAS URL from the response.
  • Click Connect; the connection details are summarized.
  • Complete the "Connect" process and observe that the entire container is accessible (with far more data than intended).


Malicious Uploads: Write permissions were inadvertently enabled on the blob container. Using Azure Storage Explorer, users could upload files to the blob storage, which should not have been allowed. These files posed a risk of indirect prompt injection during subsequent LLM queries, potentially leading to compromised application behavior (more details on Indirect Prompt Injection can be read at https://blog.blueinfy.com/2024/06/data-leak-in-document-based-gpt.html).

The combination of these two risks demonstrated how overly permissive configurations and direct exposure of SAS URLs could significantly compromise the application’s security and lead to unintended access of all documents provided to the LLM for processing.

Fixing the Vulnerability
To address these issues, the following actions were implemented:
Intermediary API: A secure API replaced direct SAS URLs for citation-related document access, enforcing strict access controls to ensure users only accessed authorized files.

Revised Blob Permissions: Blob-level permissions were reconfigured to allow read-only access for specific documents, disable write access for users, and restrict SAS tokens with shorter lifespans and limited scopes.

With these fixes in place, the application no longer exposed SAS URLs directly to users. Instead, all file requests were routed through the secure API, ensuring controlled access. Unauthorized data access and malicious uploads were entirely mitigated, reinforcing the application’s security and maintaining user trust.
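The intermediary-API approach can be sketched as follows. This is a minimal illustration, not the actual implementation: the permission store and the authorizeDownload helper are hypothetical, and a real version would stream the file from Blob Storage via the Azure Storage SDK only after the authorization check passes, so no SAS URL ever reaches the client.

```javascript
// Hypothetical intermediary download check (sketch, not the real code).
// Users request documents by id; the server checks authorization before
// ever touching Blob Storage, and no SAS URL is exposed to the client.

// Hypothetical per-user permission store (would be a database in practice).
const permissions = {
  'user-1': new Set(['doc-a', 'doc-b']),
  'user-2': new Set(['doc-a']),
};

function authorizeDownload(userId, docId) {
  const allowed = permissions[userId];
  if (!allowed || !allowed.has(docId)) {
    // Deny anything outside the user's permission scope.
    return { status: 403, body: 'Forbidden' };
  }
  // In a real implementation, fetch the blob server-side here
  // (e.g., via the Azure Storage SDK) and stream it to the user.
  return { status: 200, body: 'contents of ' + docId };
}

console.log(authorizeDownload('user-2', 'doc-b')); // denied: status 403
console.log(authorizeDownload('user-1', 'doc-b')); // allowed: status 200
```

Because the citation now points at this endpoint instead of a SAS URL, the storage account itself is never directly reachable by end users, which also closes the write-access path.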

This exercise highlights the importance of continuously evaluating security practices, particularly in AI/ML implementations that handle sensitive data.

Article by Hemil Shah & Rishita Sarabhai